Complete Guide: Automate Oracle Data Pump Exports to OCI Object Storage
Introduction
Managing Oracle database backups efficiently requires automation, especially as databases grow and storage costs increase. This comprehensive guide demonstrates how to create a production-ready solution that automatically exports Oracle databases using Data Pump and seamlessly uploads them to Oracle Cloud Infrastructure (OCI) Object Storage.
What This Solution Provides:
- Automated Data Pump export execution with comprehensive validation
- Conditional OCI upload (only after successful export)
- Organized cloud storage with date-based folder structure
- Local retention management with automatic cleanup
- Enterprise-grade error handling and logging
- Performance optimization for large databases
- Comprehensive monitoring and troubleshooting tools
Workflow Overview
┌─────────────────────────────────────────────────────────────────┐
│ BACKUP AUTOMATION WORKFLOW │
├─────────────────────────────────────────────────────────────────┤
│ │
│ 1. Environment Validation │
│ ├── Check Oracle environment variables │
│ ├── Verify disk space and directory access │
│ ├── Test OCI CLI connectivity │
│ └── Confirm bucket access permissions │
│ │
│ 2. Data Pump Export Execution │
│ ├── Run expdp with parameter file │
│ ├── Monitor export status and exit codes │
│ └── Verify dump file creation and integrity │
│ │
│ 3. Conditional OCI Upload │
│ ├── Upload dump file with metadata tagging │
│ ├── Upload log file for audit trail │
│ ├── Verify upload completion │
│ └── Organize files in date-based structure │
│ │
│ 4. Local File Management │
│ ├── Remove files older than retention period │
│ ├── Generate cleanup reports │
│ └── Maintain optimal disk usage │
│ │
│ 5. Comprehensive Reporting │
│ └── Generate detailed backup status summary │
│ │
└─────────────────────────────────────────────────────────────────┘
OCI Bucket Organization
YOUR_BUCKET_NAME/
└── EXPORT_DUMP/
├── 20241215/
│ ├── mydb_20241215_020001.dmp
│ └── mydb_20241215_020001.log
├── 20241216/
│ ├── mydb_20241216_020001.dmp
│ └── mydb_20241216_020001.log
└── 20241217/
├── mydb_20241217_020001.dmp
└── mydb_20241217_020001.log
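The date-based prefix above is derived directly from the export timestamp. A small helper (hypothetical, but mirroring the naming convention the script uses) shows the mapping:

```shell
#!/bin/bash
# Hypothetical helper mirroring the bucket layout above: derive the object
# name from a database identifier and an export timestamp.
build_object_name() {
    local db_name="$1" timestamp="$2"   # timestamp format: YYYYMMDD_HHMMSS
    local date_stamp="${timestamp%%_*}" # keep only the YYYYMMDD portion
    echo "EXPORT_DUMP/${date_stamp}/${db_name}_${timestamp}.dmp"
}

build_object_name mydb 20241215_020001
# -> EXPORT_DUMP/20241215/mydb_20241215_020001.dmp
```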
Prerequisites
System Requirements
- Oracle Database 11g or higher
- Oracle Linux 7/8, RHEL 7+, or CentOS 7+
- Minimum 10GB free disk space in dump directory
- OCI CLI installed and configured
- Oracle user with Data Pump privileges (DATAPUMP_EXP_FULL_DATABASE role)
OCI Configuration Verification
# Verify OCI CLI installation and version
oci --version
# Test OCI connectivity (should return your namespace)
oci os ns get
# Expected output format:
{
"data": "your-namespace-string"
}
# Verify bucket access (replace with your actual bucket name)
oci os bucket get --bucket-name YOUR_BUCKET_NAME
🔧 Complete Backup Solution
Main Script: /home/oracle/scripts/backup_database_oci.sh
#!/bin/bash
################################################################################
# Oracle Database Backup Script with OCI Object Storage Integration
# Location: /home/oracle/scripts/backup_database_oci.sh
# Purpose: Automated Data Pump Export → OCI Upload → Local Cleanup
# Author: Database Administration Team
################################################################################
# Source Oracle Environment
. ~/.bash_profile
# ═══════════════════════════════════════════════════════════════════════════
# CONFIGURATION SECTION (UPDATE THESE VALUES FOR YOUR ENVIRONMENT)
# ═══════════════════════════════════════════════════════════════════════════
# Directory Configuration
SCRIPT_DIR="/home/oracle/scripts" # Script and parameter file location
DUMP_DIR="/u01/backup/dump" # Local dump file storage
LOG_DIR="/home/oracle/logs/backup" # Script logging directory
# Database Configuration
DB_NAME="MYDB" # Your database identifier for filenames
# OCI Configuration
OCI_CONFIG_FILE="/home/oracle/.oci/config" # OCI CLI configuration file
OCI_BUCKET="YOUR_BUCKET_NAME" # Your OCI Object Storage bucket
OCI_BASE_PREFIX="EXPORT_DUMP" # Base folder in bucket
# Parameter File
EXPORT_PARFILE="export_full_database.par" # Data Pump parameter file name
# Retention Policy
LOCAL_RETENTION_DAYS=3 # Keep local backups for 3 days
# Performance Settings
OCI_PART_SIZE=128 # OCI upload part size (MiB)
OCI_PARALLEL_COUNT=10 # Parallel upload threads
# ═══════════════════════════════════════════════════════════════════════════
# SYSTEM VARIABLES (DO NOT MODIFY)
# ═══════════════════════════════════════════════════════════════════════════
TIMESTAMP=$(date +%Y%m%d_%H%M%S) # Example: 20241215_143025
DATE_STAMP=$(date +%Y%m%d) # Example: 20241215
# Dynamic Filenames
DUMP_FILE="${DB_NAME}_${TIMESTAMP}.dmp" # Example: MYDB_20241215_143025.dmp
LOG_FILE="${DB_NAME}_${TIMESTAMP}.log" # Example: MYDB_20241215_143025.log
SCRIPT_LOG="${LOG_DIR}/backup_${TIMESTAMP}.log" # Script execution log
# Create log directory
mkdir -p "$LOG_DIR"
# ═══════════════════════════════════════════════════════════════════════════
# LOGGING FUNCTIONS
# ═══════════════════════════════════════════════════════════════════════════
log_message() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "$SCRIPT_LOG"
}
log_error() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] ❌ ERROR: $1" | tee -a "$SCRIPT_LOG" >&2
}
log_success() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] ✅ $1" | tee -a "$SCRIPT_LOG"
}
log_warning() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] ⚠️ $1" | tee -a "$SCRIPT_LOG"
}
# ═══════════════════════════════════════════════════════════════════════════
# ENVIRONMENT VALIDATION
# ═══════════════════════════════════════════════════════════════════════════
validate_environment() {
log_message "=========================================="
log_message "Environment Validation"
log_message "=========================================="
# Check Oracle Environment Variables
if [[ -z "$ORACLE_SID" || -z "$ORACLE_HOME" ]]; then
log_error "Oracle environment not properly configured"
log_error "ORACLE_SID: ${ORACLE_SID:-NOT SET}"
log_error "ORACLE_HOME: ${ORACLE_HOME:-NOT SET}"
log_error "Ensure ~/.bash_profile is properly configured"
exit 1
fi
log_success "Oracle environment verified: SID=$ORACLE_SID"
# Verify Database is Running
if ! pgrep -f "ora_pmon_${ORACLE_SID}" >/dev/null; then
log_error "Database instance $ORACLE_SID is not running"
exit 1
fi
log_success "Database instance is running"
# Check Dump Directory Access
if [[ ! -d "$DUMP_DIR" ]]; then
log_message "Creating dump directory: $DUMP_DIR"
mkdir -p "$DUMP_DIR" || {
log_error "Cannot create dump directory: $DUMP_DIR"
exit 1
}
fi
if [[ ! -w "$DUMP_DIR" ]]; then
log_error "Dump directory not writable: $DUMP_DIR"
exit 1
fi
log_success "Dump directory accessible: $DUMP_DIR"
# Verify Parameter File Exists
if [[ ! -f "$SCRIPT_DIR/$EXPORT_PARFILE" ]]; then
log_error "Parameter file not found: $SCRIPT_DIR/$EXPORT_PARFILE"
log_error "Create the parameter file with your export settings"
exit 1
fi
log_success "Parameter file found: $EXPORT_PARFILE"
# Check Available Disk Space (require minimum 5GB)
local available_space
available_space=$(df -BG "$DUMP_DIR" | tail -1 | awk '{print $4}' | sed 's/G//')
log_message "Available disk space: ${available_space}GB"
if [[ "$available_space" -lt 5 ]]; then
log_error "Insufficient disk space (${available_space}GB available)"
log_error "Minimum requirement: 5GB free space"
exit 1
fi
log_success "Sufficient disk space available"
# Verify OCI CLI Installation
if ! command -v oci &> /dev/null; then
log_error "OCI CLI not found in PATH"
log_error "Install OCI CLI: https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/cliinstall.htm"
exit 1
fi
local oci_version
oci_version=$(oci --version 2>&1 | head -1)
log_success "OCI CLI found: $oci_version"
# Check OCI Configuration File
if [[ ! -f "$OCI_CONFIG_FILE" ]]; then
log_error "OCI configuration file not found: $OCI_CONFIG_FILE"
log_error "Run: oci setup config"
exit 1
fi
log_success "OCI configuration file exists"
# Test OCI Connectivity
if ! oci os ns get --config-file "$OCI_CONFIG_FILE" >/dev/null 2>&1; then
log_error "OCI CLI connectivity test failed"
log_error "Verify your OCI configuration and network connectivity"
exit 1
fi
local oci_namespace
oci_namespace=$(oci os ns get --config-file "$OCI_CONFIG_FILE" --query data --raw-output 2>/dev/null)
log_success "OCI connectivity verified (Namespace: $oci_namespace)"
# Verify Bucket Access
if ! oci os bucket get --bucket-name "$OCI_BUCKET" --config-file "$OCI_CONFIG_FILE" >/dev/null 2>&1; then
log_error "Cannot access OCI bucket: $OCI_BUCKET"
log_error "Verify bucket exists and you have proper permissions"
exit 1
fi
log_success "OCI bucket access verified: $OCI_BUCKET"
log_message "=========================================="
}
# ═══════════════════════════════════════════════════════════════════════════
# DATA PUMP EXPORT EXECUTION
# ═══════════════════════════════════════════════════════════════════════════
perform_export() {
log_message "=========================================="
log_message "Starting Data Pump Export"
log_message "=========================================="
log_message "Database: $ORACLE_SID"
log_message "Dump file: $DUMP_FILE"
log_message "Log file: $LOG_FILE"
log_message "Location: $DUMP_DIR"
log_message "Parameter file: $EXPORT_PARFILE"
log_message "Start time: $(date '+%Y-%m-%d %H:%M:%S')"
log_message "=========================================="
# Change to dump directory
cd "$DUMP_DIR" || {
log_error "Cannot change to dump directory: $DUMP_DIR"
exit 1
}
# Execute Data Pump Export
# The parameter file should contain export settings like:
# FULL=Y, DIRECTORY=DATA_PUMP_DIR, COMPRESSION=ALL, PARALLEL=4
# OS authentication ("/ as sysdba") avoids storing a password in the script;
# without credentials, expdp would stop and prompt when run from cron
expdp \"/ as sysdba\" parfile="$SCRIPT_DIR/$EXPORT_PARFILE" \
dumpfile="$DUMP_FILE" \
logfile="$LOG_FILE"
local export_status=$?
# Analyze Export Status
# Exit code 0 = Complete success
# Exit code 5 = Success with warnings (acceptable)
# Other exit codes = Failure
if [[ $export_status -eq 0 ]] || [[ $export_status -eq 5 ]]; then
if [[ $export_status -eq 5 ]]; then
log_warning "Export completed with warnings (exit code: $export_status)"
else
log_success "Export completed successfully (exit code: $export_status)"
fi
# Verify dump file was actually created
if [[ ! -f "$DUMP_DIR/$DUMP_FILE" ]]; then
log_error "Export reported success but dump file not found: $DUMP_DIR/$DUMP_FILE"
return 1
fi
# Display file information for verification
local dump_size log_size
dump_size=$(du -h "$DUMP_DIR/$DUMP_FILE" | cut -f1)
log_message "Dump file size: $dump_size"
if [[ -f "$DUMP_DIR/$LOG_FILE" ]]; then
log_size=$(du -h "$DUMP_DIR/$LOG_FILE" | cut -f1)
log_message "Log file size: $log_size"
fi
log_message "Export end time: $(date '+%Y-%m-%d %H:%M:%S')"
return 0
else
log_error "Export failed with exit code: $export_status"
# Display relevant portions of export log for troubleshooting
if [[ -f "$DUMP_DIR/$LOG_FILE" ]]; then
log_error "Last 20 lines of export log:"
tail -20 "$DUMP_DIR/$LOG_FILE" | while IFS= read -r line; do
log_error " $line"
done
fi
return 1
fi
}
# ═══════════════════════════════════════════════════════════════════════════
# OCI OBJECT STORAGE UPLOAD
# ═══════════════════════════════════════════════════════════════════════════
upload_to_oci() {
log_message "=========================================="
log_message "Starting OCI Object Storage Upload"
log_message "=========================================="
# Construct OCI path: EXPORT_DUMP/YYYYMMDD/
local oci_prefix="$OCI_BASE_PREFIX/$DATE_STAMP"
log_message "Bucket: $OCI_BUCKET"
log_message "Target path: $oci_prefix/"
log_message "Upload start: $(date '+%Y-%m-%d %H:%M:%S')"
local upload_success=true
# Upload Dump File with Performance Optimization
log_message "Uploading dump file: $DUMP_FILE"
if ! oci os object put \
--config-file "$OCI_CONFIG_FILE" \
--bucket-name "$OCI_BUCKET" \
--file "$DUMP_DIR/$DUMP_FILE" \
--name "$oci_prefix/$DUMP_FILE" \
--part-size "$OCI_PART_SIZE" \
--parallel-upload-count "$OCI_PARALLEL_COUNT" \
--metadata "{\"database\":\"$ORACLE_SID\",\"backup-date\":\"$DATE_STAMP\",\"backup-type\":\"datapump-export\",\"timestamp\":\"$TIMESTAMP\",\"db-name\":\"$DB_NAME\"}" \
>> "$SCRIPT_LOG" 2>&1; then
log_error "Failed to upload dump file to OCI"
upload_success=false
else
log_success "Dump file uploaded successfully"
# Display upload information
local dump_size
dump_size=$(du -h "$DUMP_DIR/$DUMP_FILE" | cut -f1)
log_message "Uploaded size: $dump_size"
fi
# Upload Log File (if exists)
if [[ -f "$DUMP_DIR/$LOG_FILE" ]]; then
log_message "Uploading log file: $LOG_FILE"
if ! oci os object put \
--config-file "$OCI_CONFIG_FILE" \
--bucket-name "$OCI_BUCKET" \
--file "$DUMP_DIR/$LOG_FILE" \
--name "$oci_prefix/$LOG_FILE" \
--metadata "{\"database\":\"$ORACLE_SID\",\"backup-date\":\"$DATE_STAMP\",\"type\":\"export-log\",\"db-name\":\"$DB_NAME\"}" \
>> "$SCRIPT_LOG" 2>&1; then
log_warning "Failed to upload log file (non-critical)"
else
log_success "Log file uploaded successfully"
fi
fi
# Verify Upload Completion
if [[ "$upload_success" == true ]]; then
log_message "Verifying OCI upload completion..."
# Allow time for OCI to index the files
sleep 5
# Count uploaded files in the target path
# (grep -c already prints 0 when nothing matches; appending "|| echo 0"
# would emit a second "0" line and break the numeric comparison below)
local uploaded_count
uploaded_count=$(oci os object list \
--config-file "$OCI_CONFIG_FILE" \
--bucket-name "$OCI_BUCKET" \
--prefix "$oci_prefix/" \
--all 2>/dev/null | grep -c '"name"')
log_message "Files found in OCI: $uploaded_count"
if [[ $uploaded_count -ge 1 ]]; then
log_success "Upload verification successful"
log_message "OCI location: $OCI_BUCKET/$oci_prefix/"
log_message "Upload end: $(date '+%Y-%m-%d %H:%M:%S')"
return 0
else
log_error "Upload verification failed - files not found in OCI"
return 1
fi
else
log_error "Upload process failed"
return 1
fi
}
# ═══════════════════════════════════════════════════════════════════════════
# LOCAL FILE CLEANUP
# ═══════════════════════════════════════════════════════════════════════════
cleanup_local_files() {
log_message "=========================================="
log_message "Local File Cleanup"
log_message "Retention policy: $LOCAL_RETENTION_DAYS days"
log_message "=========================================="
# Calculate correct age for find command
# For 3-day retention: keep today (0), yesterday (1), day-before (2), delete older
local delete_age=$((LOCAL_RETENTION_DAYS - 1))
# Remove Old Dump Files
log_message "Searching for dump files older than $LOCAL_RETENTION_DAYS days..."
local deleted_dumps
deleted_dumps=$(find "$DUMP_DIR" -name "${DB_NAME}_*.dmp" -type f -mtime +"$delete_age" -print -delete 2>/dev/null | wc -l)
if [[ $deleted_dumps -gt 0 ]]; then
log_success "Removed $deleted_dumps old dump file(s)"
else
log_message "No old dump files to remove"
fi
# Remove Old Log Files
log_message "Searching for log files older than $LOCAL_RETENTION_DAYS days..."
local deleted_logs
deleted_logs=$(find "$DUMP_DIR" -name "${DB_NAME}_*.log" -type f -mtime +"$delete_age" -print -delete 2>/dev/null | wc -l)
if [[ $deleted_logs -gt 0 ]]; then
log_success "Removed $deleted_logs old log file(s)"
else
log_message "No old log files to remove"
fi
# Clean Old Script Logs
find "$LOG_DIR" -name "backup_*.log" -type f -mtime +"$delete_age" -delete 2>/dev/null
# Display Remaining Local Backups
log_message "Remaining local backups:"
if compgen -G "$DUMP_DIR/${DB_NAME}_*.dmp" >/dev/null; then
ls -lht "$DUMP_DIR"/${DB_NAME}_*.dmp | head -5 | while IFS= read -r line; do
log_message " $line"
done
else
log_message " No local backup files found"
fi
# Display Current Disk Usage
local disk_info
disk_info=$(df -h "$DUMP_DIR" | tail -1 | awk '{print "Used: "$3" ("$5") | Available: "$4}')
log_message "Disk space: $disk_info"
}
# ═══════════════════════════════════════════════════════════════════════════
# SUMMARY REPORT GENERATION
# ═══════════════════════════════════════════════════════════════════════════
generate_summary() {
log_message "=========================================="
log_message "BACKUP SUMMARY REPORT"
log_message "=========================================="
log_message ""
log_message "Database Information:"
log_message " Database SID: $ORACLE_SID"
log_message " Database Name: $DB_NAME"
log_message " Backup Date: $DATE_STAMP"
log_message " Timestamp: $TIMESTAMP"
log_message ""
log_message "Local Files:"
log_message " Location: $DUMP_DIR"
log_message " Dump file: $DUMP_FILE"
log_message " Log file: $LOG_FILE"
if [[ -f "$DUMP_DIR/$DUMP_FILE" ]]; then
local dump_size
dump_size=$(du -h "$DUMP_DIR/$DUMP_FILE" | cut -f1)
log_message " File size: $dump_size"
fi
log_message ""
log_message "OCI Object Storage:"
log_message " Bucket: $OCI_BUCKET"
log_message " Path: $OCI_BASE_PREFIX/$DATE_STAMP/"
log_message " Access URL: https://objectstorage.[region].oraclecloud.com/n/[namespace]/b/$OCI_BUCKET/o/$OCI_BASE_PREFIX/$DATE_STAMP/$DUMP_FILE"
log_message ""
log_message "Retention Policy:"
log_message " Local retention: $LOCAL_RETENTION_DAYS days"
log_message " OCI retention: Managed by lifecycle policy (recommended: 30+ days)"
log_message ""
log_message "Performance Settings:"
log_message " OCI part size: ${OCI_PART_SIZE}MB"
log_message " Parallel uploads: $OCI_PARALLEL_COUNT"
log_message ""
log_message "Status: ✅ SUCCESS"
log_message "Completed: $(date '+%Y-%m-%d %H:%M:%S')"
log_message "Script log: $SCRIPT_LOG"
log_message "=========================================="
}
# ═══════════════════════════════════════════════════════════════════════════
# MAIN EXECUTION FLOW
# ═══════════════════════════════════════════════════════════════════════════
main() {
# Display startup banner
log_message "╔══════════════════════════════════════════════════════════════════════════╗"
log_message "║ ORACLE DATABASE BACKUP WITH OCI INTEGRATION ║"
log_message "║ Production Ready Solution ║"
log_message "╚══════════════════════════════════════════════════════════════════════════╝"
log_message ""
# Step 1: Validate Environment
validate_environment
# Step 2: Perform Data Pump Export
if ! perform_export; then
log_error "Export failed. Backup process aborted."
log_error "No files will be uploaded to OCI."
log_message "Check the export log for detailed error information."
exit 1
fi
# Step 3: Upload to OCI (Only if export succeeded)
if ! upload_to_oci; then
log_error "OCI upload failed."
log_error "Local backup exists but was not uploaded to cloud storage."
log_message ""
log_message "Manual upload commands:"
log_message "oci os object put --bucket-name $OCI_BUCKET \\"
log_message " --file $DUMP_DIR/$DUMP_FILE \\"
log_message " --name $OCI_BASE_PREFIX/$DATE_STAMP/$DUMP_FILE"
exit 1
fi
# Step 4: Cleanup Local Files (Only after successful upload)
cleanup_local_files
# Step 5: Generate Summary Report
generate_summary
# Display completion banner
log_message ""
log_message "╔══════════════════════════════════════════════════════════════════════════╗"
log_message "║ BACKUP COMPLETED SUCCESSFULLY ║"
log_message "╚══════════════════════════════════════════════════════════════════════════╝"
exit 0
}
# Execute main only when the script is run directly
# (allows sourcing the functions for testing without starting a backup)
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
main "$@"
fi
📝 Data Pump Parameter File
Create /home/oracle/scripts/export_full_database.par:
################################################################################
# Data Pump Export Parameter File
# Location: /home/oracle/scripts/export_full_database.par
# Purpose: Full database export configuration
################################################################################
# Export Scope
FULL=Y
# Directory Object (must exist in database)
DIRECTORY=DATA_PUMP_DIR
# Compression Settings
# Note: COMPRESSION=ALL requires the Advanced Compression Option license
# (COMPRESSION=METADATA_ONLY needs no license), and COMPRESSION_ALGORITHM
# requires Oracle Database 12.2 or later - omit it on older releases
COMPRESSION=ALL
COMPRESSION_ALGORITHM=MEDIUM
# Performance Tuning
PARALLEL=4
# File Management
# NOTE: FILESIZE splits the export across multiple files, which requires the
# %U substitution variable in the dump file name (e.g. DUMPFILE=mydb_%U.dmp);
# otherwise expdp raises ORA-39095 once the first file fills. The wrapper
# script passes a single fixed dump file name, so FILESIZE is left disabled.
# FILESIZE=2G
# Size Estimation and Timestamped Logging
# (expdp has no STATISTICS parameter; ESTIMATE controls how sizes are estimated)
ESTIMATE=BLOCKS
LOGTIME=ALL
# Optional: Exclude certain schemas
# EXCLUDE=SCHEMA:"IN ('APEX_030200','APEX_PUBLIC_USER','FLOWS_FILES')"
# Optional: Include specific schemas only
# SCHEMAS=HR,SALES,FINANCE
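Before scheduling full exports, it helps to gauge how large the dump will be. A minimal estimate-only parameter file (a sketch; the file name is arbitrary) writes no dump file and just reports the expected size:

```
# estimate_size.par - dry run, reports the estimated dump size only
FULL=Y
DIRECTORY=DATA_PUMP_DIR
ESTIMATE_ONLY=YES
ESTIMATE=BLOCKS
```

Run it with `expdp \"/ as sysdba\" parfile=estimate_size.par` and read the "estimation" total from the output.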
🛠️ Complete Setup Guide
Step 1: Environment Preparation
# Switch to oracle user
su - oracle
# Create directory structure
mkdir -p /home/oracle/scripts
mkdir -p /home/oracle/logs/backup
mkdir -p /u01/backup/dump
# Set proper permissions
chmod 755 /home/oracle/scripts
chmod 755 /home/oracle/logs/backup
chmod 755 /u01/backup/dump
Step 2: Database Directory Object Setup
-- Connect as SYSDBA
sqlplus / as sysdba
-- Create directory object pointing to your dump location
CREATE OR REPLACE DIRECTORY DATA_PUMP_DIR AS '/u01/backup/dump';
-- Grant necessary permissions
GRANT READ, WRITE ON DIRECTORY DATA_PUMP_DIR TO SYSTEM;
-- Verify directory creation
SELECT directory_name, directory_path
FROM dba_directories
WHERE directory_name = 'DATA_PUMP_DIR';
Step 3: OCI CLI Configuration
# As oracle user, configure OCI CLI
oci setup config
# Configuration prompts:
# - User OCID (from OCI Console → Profile → User Settings)
# - Tenancy OCID (from OCI Console → Profile → Tenancy)
# - Region (e.g., us-ashburn-1, eu-frankfurt-1, ap-sydney-1)
# - Generate new API key pair (recommended: Y)
# Test configuration
oci os ns get
# Expected successful output:
{
"data": "your-tenancy-namespace"
}
Step 4: OCI Bucket Setup
# List existing buckets
oci os bucket list --compartment-id <your-compartment-ocid>
# Create new bucket (if needed)
oci os bucket create \
--compartment-id <your-compartment-ocid> \
--name YOUR_BUCKET_NAME \
--storage-tier Standard
# Verify bucket access
oci os bucket get --bucket-name YOUR_BUCKET_NAME
Step 5: Script Installation and Configuration
# Install the main script
cd /home/oracle/scripts
vi backup_database_oci.sh
# Copy and paste the complete script
# Save and exit (:wq)
# Make executable
chmod +x backup_database_oci.sh
# Create parameter file
vi export_full_database.par
# Copy and paste the parameter file content
# Save and exit
# Customize configuration variables in the script:
vi backup_database_oci.sh
# Update:
# - DB_NAME="MYDB" (change to your database identifier)
# - OCI_BUCKET="YOUR_BUCKET_NAME" (your actual bucket name)
# - DUMP_DIR="/u01/backup/dump" (your preferred dump location)
🧪 Testing and Validation
Test 1: Environment Validation
# Test just the validation function
cd /home/oracle/scripts
# Create temporary test script
cat > test_validation.sh << 'EOF'
#!/bin/bash
. ~/.bash_profile
# Load only the function definitions - stripping the trailing "main" call
# prevents sourcing from kicking off a full backup
source <(sed '/^main /d' backup_database_oci.sh)
validate_environment
echo "Validation test completed"
EOF
chmod +x test_validation.sh
./test_validation.sh
Test 2: Small Export Test
Create a test parameter file for a small schema:
# Create test parameter file
cat > export_test_schema.par << 'EOF'
SCHEMAS=HR
DIRECTORY=DATA_PUMP_DIR
COMPRESSION=ALL
PARALLEL=2
EOF
# Temporarily modify script to use test parameter
# Change EXPORT_PARFILE="export_test_schema.par"
# Run test
./backup_database_oci.sh
Test 3: Full Production Test
# Execute full backup
./backup_database_oci.sh
# Monitor progress in another terminal
tail -f /home/oracle/logs/backup/backup_*.log
Test 4: OCI Verification
# List uploaded files for today
TODAY=$(date +%Y%m%d)
oci os object list \
--bucket-name YOUR_BUCKET_NAME \
--prefix "EXPORT_DUMP/${TODAY}/" \
--output table
# Verify file integrity by downloading and comparing
# (object names are literal - wildcards are not supported - so resolve the
# exact name from the listing first)
OBJECT_NAME=$(oci os object list \
--bucket-name YOUR_BUCKET_NAME \
--prefix "EXPORT_DUMP/${TODAY}/" \
--query 'data[?ends_with(name, `.dmp`)].name | [0]' --raw-output)
oci os object get \
--bucket-name YOUR_BUCKET_NAME \
--name "$OBJECT_NAME" \
--file /tmp/test_download.dmp
# Compare checksums
md5sum /u01/backup/dump/MYDB_*.dmp
md5sum /tmp/test_download.dmp
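The two md5sum outputs should be identical. A small helper (hypothetical) makes the comparison scriptable:

```shell
#!/bin/bash
# Hypothetical helper: compare the checksum fields of two `md5sum` output
# lines (the checksum is the first whitespace-delimited field).
verify_checksum() {
    local local_line="$1" remote_line="$2"
    if [[ "${local_line%% *}" == "${remote_line%% *}" ]]; then
        echo "MATCH: backup integrity verified"
    else
        echo "MISMATCH: re-upload the dump file"
        return 1
    fi
}

verify_checksum "d41d8cd98f00b204e9800998ecf8427e  /u01/backup/dump/MYDB.dmp" \
                "d41d8cd98f00b204e9800998ecf8427e  /tmp/test_download.dmp"
# -> MATCH: backup integrity verified
```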
Cron Configuration
# Edit crontab as oracle user
crontab -e
# Daily backup at 2:00 AM
0 2 * * * /home/oracle/scripts/backup_database_oci.sh >> /home/oracle/logs/backup/cron_$(date +\%Y\%m\%d).log 2>&1
# Alternative schedules:
# Every 12 hours (2 AM and 2 PM)
0 2,14 * * * /home/oracle/scripts/backup_database_oci.sh
# Weekly on Sunday at 3:00 AM
0 3 * * 0 /home/oracle/scripts/backup_database_oci.sh
# Business hours backup (Monday-Friday at 6 PM)
0 18 * * 1-5 /home/oracle/scripts/backup_database_oci.sh
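A full export of a large database can outlast the interval between cron runs. Wrapping the entry in `flock` (the lock-file path below is an arbitrary choice) prevents two backups from running concurrently:

```shell
#!/bin/bash
# Cron entry sketch - flock -n exits immediately if a previous run still
# holds the lock, so an overlapping backup is skipped rather than stacked:
# 0 2 * * * /usr/bin/flock -n /tmp/oci_backup.lock /home/oracle/scripts/backup_database_oci.sh

# Demonstration with a trivial command in place of the backup script:
flock -n /tmp/oci_backup.lock echo "lock acquired - backup would run here"
```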
Verify Cron Setup
# List current cron jobs
crontab -l
# Check cron service status
systemctl status crond
# Monitor cron execution
tail -f /var/log/cron | grep backup_database_oci
Monitoring and Verification Tools
Backup Status Monitoring Script
Create /home/oracle/scripts/monitor_backups.sh:
#!/bin/bash
################################################################################
# Backup Monitoring Script
# Location: /home/oracle/scripts/monitor_backups.sh
################################################################################
. ~/.bash_profile
DB_NAME="MYDB" # Update to match your database name
BUCKET_NAME="YOUR_BUCKET_NAME" # Update to match your bucket
echo "╔══════════════════════════════════════════════════════════════╗"
echo "║ BACKUP STATUS REPORT ║"
echo "╚══════════════════════════════════════════════════════════════╝"
echo ""
# Recent Backup Status (Last 7 Days)
echo "Recent Backup Status (Last 7 Days):"
echo "─────────────────────────────────────────────────────────────"
for i in {0..6}; do
date_check=$(date -d "$i days ago" +%Y%m%d 2>/dev/null || date -v-"${i}d" +%Y%m%d)
# Check local files
local_count=$(find /u01/backup/dump -name "${DB_NAME}_${date_check}*.dmp" 2>/dev/null | wc -l)
# Check OCI files (grep -c prints 0 on no match, so no "|| echo" fallback -
# a fallback would emit a second "0" line and break the numeric tests below)
oci_count=$(oci os object list \
--bucket-name "$BUCKET_NAME" \
--prefix "EXPORT_DUMP/$date_check/" \
--all 2>/dev/null | grep -c '"name"')
# Status determination
if [[ $oci_count -gt 0 ]]; then
status="✅ BACKED UP"
elif [[ $local_count -gt 0 ]]; then
status="⚠️ LOCAL ONLY"
else
status="❌ MISSING"
fi
printf "%-10s | Local: %-2s | OCI: %-2s | %s\n" "$date_check" "$local_count" "$oci_count" "$status"
done
echo ""
# Disk Usage Information
echo "Storage Information:"
echo "─────────────────────────────────────────────────────────────"
df -h /u01/backup/dump | tail -1 | awk '{printf "Local Storage: %s used (%s) | %s available\n", $3, $5, $4}'
# Count total files
local_total=$(find /u01/backup/dump -name "${DB_NAME}_*.dmp" 2>/dev/null | wc -l)
echo "Total local backup files: $local_total"
echo ""
# Recent Activity
echo "Recent Activity:"
echo "─────────────────────────────────────────────────────────────"
echo "Latest backup logs:"
ls -lt /home/oracle/logs/backup/backup_*.log 2>/dev/null | head -3 | while IFS= read -r line; do
echo " $line"
done
echo ""
# Last Backup Status
echo "Last Backup Status:"
echo "─────────────────────────────────────────────────────────────"
LAST_LOG=$(ls -t /home/oracle/logs/backup/backup_*.log 2>/dev/null | head -1)
if [[ -n "$LAST_LOG" ]]; then
echo "Log file: $(basename "$LAST_LOG")"
echo "Status lines:"
grep -E "SUCCESS|ERROR|✅|❌" "$LAST_LOG" | tail -5 | while IFS= read -r line; do
echo " $line"
done
else
echo "No backup logs found"
fi
Make it executable:
chmod +x /home/oracle/scripts/monitor_backups.sh
Usage:
# Run monitoring report
./monitor_backups.sh
# Schedule monitoring report (daily at 9 AM)
crontab -e
# Add: 0 9 * * * /home/oracle/scripts/monitor_backups.sh > /home/oracle/logs/backup/daily_report_$(date +\%Y\%m\%d).txt
Troubleshooting Guide
Common Issue 1: Export Fails with ORA-31626
Symptoms:
❌ ERROR: Export failed with exit code: 39
ORA-31626: job does not exist
Solutions:
# Check for hanging Data Pump jobs
sqlplus / as sysdba << EOF
SELECT job_name, state, operation
FROM dba_datapump_jobs
WHERE state = 'EXECUTING';
EXIT;
EOF
# Kill hanging jobs if found
# (DBMS_DATAPUMP.STOP_JOB takes a job handle, not a name - attach first)
sqlplus / as sysdba << EOF
-- Replace JOB_NAME and JOB_OWNER with values from dba_datapump_jobs
DECLARE
h NUMBER;
BEGIN
h := DBMS_DATAPUMP.ATTACH('JOB_NAME', 'JOB_OWNER');
DBMS_DATAPUMP.STOP_JOB(h, 1, 0);
END;
/
EXIT;
EOF
# Retry the backup
./backup_database_oci.sh
Common Issue 2: OCI Upload Timeout
Symptoms:
❌ ERROR: Failed to upload dump file to OCI
Solutions:
# Test basic OCI connectivity
oci os ns get
# Check network connectivity to OCI (the endpoint may not answer ICMP ping,
# so test HTTPS reachability instead)
curl -sI https://objectstorage.us-ashburn-1.oraclecloud.com >/dev/null && echo "reachable"
# Test with smaller file first
echo "test" > /tmp/test.txt
oci os object put --bucket-name YOUR_BUCKET_NAME --file /tmp/test.txt --name test.txt
# If large files fail, reduce parallel upload count
# In script, change: OCI_PARALLEL_COUNT=5
# And reduce: OCI_PART_SIZE=64
# Check available bandwidth during upload time
# Consider scheduling during off-peak hours
Common Issue 3: Insufficient Disk Space
Symptoms:
❌ ERROR: Insufficient disk space (3GB available). Minimum requirement: 5GB
Solutions:
# Check current disk usage
df -h /u01/backup/dump
# Find large files consuming space
find /u01/backup/dump -type f -size +1G -exec ls -lh {} \;
# Manual cleanup of old backups
find /u01/backup/dump -name "MYDB_*.dmp" -mtime +3 -delete
find /u01/backup/dump -name "MYDB_*.log" -mtime +3 -delete
# Check for temporary or core files
find /u01/backup/dump -name "*.tmp" -delete
find /u01/backup/dump -name "core.*" -delete
# Verify space recovered
df -h /u01/backup/dump
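To catch the space problem before the export even starts, a pre-check can compare free space against the size of the most recent dump, a rough proxy that assumes the next export will be about the same size (the helper and its names are illustrative):

```shell
#!/bin/bash
# Hypothetical pre-check: is there at least as much free space in the dump
# directory as the previous dump consumed?
check_space() {
    local dir="$1" last_dump="$2"
    local free_kb need_kb
    free_kb=$(df -Pk "$dir" | awk 'NR==2 {print $4}')   # available KB on the filesystem
    need_kb=$(du -k "$last_dump" | cut -f1)             # size of the reference dump in KB
    if (( free_kb > need_kb )); then
        echo "OK: ${free_kb}KB free, previous dump used ${need_kb}KB"
    else
        echo "WARNING: only ${free_kb}KB free, previous dump used ${need_kb}KB"
        return 1
    fi
}

# Example: check_space /u01/backup/dump /u01/backup/dump/MYDB_20241215_020001.dmp
```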
Common Issue 4: Permission Denied Errors
Symptoms:
❌ ERROR: Dump directory not writable: /u01/backup/dump
Solutions:
# Check directory ownership and permissions
ls -ld /u01/backup/dump
# Fix ownership (run as root)
sudo chown -R oracle:oinstall /u01/backup/dump
sudo chmod 755 /u01/backup/dump
# Verify oracle user can write
su - oracle
touch /u01/backup/dump/test.txt
rm /u01/backup/dump/test.txt
# Check Oracle directory object
sqlplus / as sysdba << EOF
SELECT directory_name, directory_path
FROM dba_directories
WHERE directory_name = 'DATA_PUMP_DIR';
EXIT;
EOF
Best Practices and Optimization
Security Best Practices
# Secure script permissions
chmod 750 /home/oracle/scripts/backup_database_oci.sh
# Protect OCI configuration
chmod 600 /home/oracle/.oci/config
chmod 600 /home/oracle/.oci/oci_api_key.pem
# Restrict log directory access
chmod 750 /home/oracle/logs/backup
# Regular key rotation (every 90 days)
# Generate new API keys in OCI Console
# Update ~/.oci/config with new key path
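Key rotation itself can be scripted. A sketch of generating a replacement API signing key pair (file names are arbitrary; upload the public key in the OCI Console, then point ~/.oci/config at the new private key):

```shell
#!/bin/bash
# Generate a new 2048-bit RSA key pair in the PEM format OCI API keys require
openssl genrsa -out /tmp/oci_api_key_new.pem 2048 2>/dev/null
openssl rsa -in /tmp/oci_api_key_new.pem -pubout -out /tmp/oci_api_key_new_public.pem 2>/dev/null
chmod 600 /tmp/oci_api_key_new.pem

# The PUBLIC key is what gets pasted into the OCI Console (Profile -> API Keys)
cat /tmp/oci_api_key_new_public.pem
```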
Performance Optimization
For Large Databases (>500GB):
# In export_full_database.par:
PARALLEL=8
FILESIZE=4G # splitting requires %U in the dump file name (e.g. DUMPFILE=mydb_%U.dmp)
# In backup script:
OCI_PART_SIZE=256
OCI_PARALLEL_COUNT=15
For Many Small Files:
# In export_full_database.par:
PARALLEL=2
FILESIZE=1G # splitting requires %U in the dump file name (e.g. DUMPFILE=mydb_%U.dmp)
# In backup script:
OCI_PART_SIZE=64
OCI_PARALLEL_COUNT=20
OCI Lifecycle Management
Configure automatic deletion of old backups:
# Create lifecycle policy (30-day retention)
oci os object-lifecycle-policy put \
--bucket-name YOUR_BUCKET_NAME \
--items '[{
"name": "delete-old-exports",
"action": "DELETE",
"timeAmount": 30,
"timeUnit": "DAYS",
"objectNameFilter": {
"inclusionPrefixes": ["EXPORT_DUMP/"]
},
"isEnabled": true
}]'
Backup Validation Strategy
# Monthly restore test script
cat > /home/oracle/scripts/test_restore.sh << 'EOF'
#!/bin/bash
# Download the latest backup and test a metadata-only import
DUMP_PATH=/u01/backup/dump # must match the DATA_PUMP_DIR directory object path
LATEST_BACKUP=$(oci os object list --bucket-name YOUR_BUCKET_NAME --prefix "EXPORT_DUMP/" --all \
--query 'data[?ends_with(name, `.dmp`)].name | [-1]' --raw-output)
oci os object get --bucket-name YOUR_BUCKET_NAME --name "$LATEST_BACKUP" --file "$DUMP_PATH/test_restore.dmp"
# Test import (metadata only - DUMPFILE is a file name relative to the
# directory object, not an absolute path)
impdp \"/ as sysdba\" \
directory=DATA_PUMP_DIR \
dumpfile=test_restore.dmp \
sqlfile=test_restore.sql \
logfile=test_restore.log
rm "$DUMP_PATH/test_restore.dmp"
EOF
This solution transforms manual backup processes into a reliable, automated system that ensures your Oracle databases are safely backed up to the cloud while maintaining local efficiency and providing comprehensive monitoring capabilities.