📚 Complete Step-by-Step Guide: Upload Files and Folders to OCI Bucket Using OCI CLI
Welcome to the most comprehensive guide for mastering Oracle Cloud Infrastructure Object Storage uploads! Whether you're a beginner just starting with OCI CLI or an experienced administrator looking to automate your backup workflows, this guide has everything you need.
In this tutorial, you'll learn how to install and configure OCI CLI, upload single files and entire directories, optimize performance for large files, automate RMAN backups and Data Pump exports, and implement production-ready automation scripts with error handling and logging.
📋 Table of Contents
- Phase 1: Prerequisites and Initial Setup
- Phase 2: Bucket Preparation
- Phase 3: Single File Upload Operations
- Phase 4: Directory and Bulk Upload Operations
- Phase 5: Advanced Upload Features
- Phase 6: Verification and Management
- Phase 7: Practical Oracle Database Examples
- Phase 8: Automation and Scheduling
- Phase 9: Troubleshooting and Performance
- Quick Reference Commands
Step 1: Install OCI CLI
The first step in your OCI journey is installing the Oracle Cloud Infrastructure Command Line Interface. This powerful tool allows you to interact with OCI services directly from your terminal.
# Download and execute the official installation script
bash -c "$(curl -L https://raw.githubusercontent.com/oracle/oci-cli/master/scripts/install/install.sh)"
During Installation Prompts:
- Installation directory: Press Enter for default (/root/lib/oracle-cli)
- Executable location: Press Enter for default (/root/bin)
- Modify PATH: Type Y and press Enter
- RC file: Press Enter for default (/root/.bashrc)
After installation completes, reload your shell environment and verify the installation:
# Reload your shell environment
exec -l $SHELL
# Verify installation success
oci --version
The command prints the installed version, e.g. 3.xx.x. If you see this, congratulations - OCI CLI is successfully installed!
Step 2: Generate API Key Pair
API keys provide secure authentication between OCI CLI and your Oracle Cloud account. You'll need to generate a private/public key pair for authentication.
# Create OCI configuration directory
mkdir -p ~/.oci
cd ~/.oci
# Generate private key (2048-bit RSA)
openssl genrsa -out oci_api_key.pem 2048
# Set secure permissions on private key
chmod 600 oci_api_key.pem
# Generate corresponding public key
openssl rsa -pubout -in oci_api_key.pem -out oci_api_key_public.pem
# Display public key for OCI Console upload
cat oci_api_key_public.pem
Important: Never share your private key (oci_api_key.pem) - keep it secure on your server only!
Step 3: Add Public Key to OCI Console
Now you need to register your public key with Oracle Cloud Infrastructure so OCI CLI can authenticate your requests.
Follow These Steps in OCI Console:
- Log into OCI Console at cloud.oracle.com
- Click your Profile icon (top right) → Click your username
- In the left menu, click API Keys
- Click the "Add API Key" button
- Select "Paste Public Key"
- Paste the entire content of oci_api_key_public.pem
- Click Add
Step 4: Configure OCI CLI Authentication
With your API keys generated and registered, it's time to configure OCI CLI to use them for authentication.
# Run the interactive configuration
oci setup config
Configuration Prompts and Responses:
- Config location: Press Enter for default (~/.oci/config)
- User OCID: Paste from OCI Console (starts with ocid1.user.oc1...)
- Tenancy OCID: Paste from OCI Console (starts with ocid1.tenancy.oc1...)
- Region: Enter your region (e.g., us-ashburn-1 or eu-frankfurt-1)
- Generate new key pair: Type n (we already created keys)
- Private key location: Enter ~/.oci/oci_api_key.pem
Step 5: Test Configuration
Let's verify that everything is configured correctly by testing basic connectivity to Oracle Cloud Infrastructure.
# Test basic connectivity
oci os ns get
# Save namespace for future use
NAMESPACE=$(oci os ns get --query data --raw-output)
echo "Your namespace: $NAMESPACE"
Step 6: List Available Buckets
Before creating a new bucket, let's see what buckets already exist in your compartment. You'll need your compartment OCID for this step.
# Replace with your actual compartment OCID
oci os bucket list --compartment-id ocid1.compartment.oc1..aaaaaa...
# For better readability, use table output format
oci os bucket list \
--compartment-id ocid1.compartment.oc1..aaaaaa... \
--output table
Step 7: Create New Bucket (If Needed)
If you need a new bucket for your uploads, creating one is straightforward. Choose a meaningful name that reflects the bucket's purpose.
# Create bucket for database backups
oci os bucket create \
--compartment-id ocid1.compartment.oc1..aaaaaa... \
--name dbbackups
# Verify bucket creation
oci os bucket list \
--compartment-id ocid1.compartment.oc1..aaaaaa... \
--output table
Step 8: Basic Single File Upload
Let's start with the fundamentals - uploading a single file to your OCI bucket. This is the foundation for all other upload operations.
# Create a test file
echo "This is a test upload" > /tmp/test_file.txt
# Basic upload command (Note: NO --progress-bar flag)
oci os object put \
--bucket-name dbbackups \
--file /tmp/test_file.txt \
--name test_file.txt
Note: The --name parameter specifies the object name in the bucket. It can differ from your local filename and can include path separators to create folder-like structures.
Step 9: Upload Large Files with Optimization
For larger files, OCI CLI automatically handles multipart uploads, but you can optimize performance by specifying part size and parallel upload count.
# Create a larger test file (100MB)
dd if=/dev/zero of=/tmp/large_test.dat bs=1M count=100
# Upload with multipart optimization
oci os object put \
--bucket-name dbbackups \
--file /tmp/large_test.dat \
--name large_files/large_test.dat \
--part-size 128 \
--parallel-upload-count 10
Step 10: Upload to Organized Paths
Object Storage doesn't have true folders, but you can use forward slashes in object names to create a hierarchical organization that looks like folders in the console.
# Upload to "backups/2024/march/" path structure
oci os object put \
--bucket-name dbbackups \
--file /tmp/test_file.txt \
--name backups/2024/march/daily_backup.txt
# Upload database files to organized structure
oci os object put \
--bucket-name dbbackups \
--file /u01/exports/schema_export.dmp \
--name exports/database/schema_export.dmp
Step 11: Verify Single File Upload
Always verify your uploads completed successfully. OCI CLI provides several commands for checking upload status and file details.
# List all objects in bucket
oci os object list --bucket-name dbbackups
# List objects with specific prefix (folder-like structure)
oci os object list \
--bucket-name dbbackups \
--prefix backups/2024/march/
# Get detailed information about specific object
oci os object head \
--bucket-name dbbackups \
--name test_file.txt
Step 12: Create Test Directory Structure
Before exploring bulk upload capabilities, let's create a realistic directory structure with various file types to demonstrate different upload scenarios.
# Create organized directory structure
mkdir -p /tmp/backup_test/{daily,weekly,monthly,logs}
# Create sample text files
echo "Daily backup 1" > /tmp/backup_test/daily/backup1.txt
echo "Daily backup 2" > /tmp/backup_test/daily/backup2.txt
echo "Weekly backup" > /tmp/backup_test/weekly/backup_week1.txt
echo "Monthly backup" > /tmp/backup_test/monthly/backup_jan.txt
# Create binary test files
dd if=/dev/zero of=/tmp/backup_test/daily/data1.dat bs=1M count=10
dd if=/dev/zero of=/tmp/backup_test/logs/application.log bs=1M count=5
# Verify structure
find /tmp/backup_test -type f -exec ls -lh {} \;
Step 13: Basic Directory Upload
The bulk-upload command is your powerhouse for uploading entire directories. It recursively uploads all files and subdirectories while maintaining the directory structure.
# Upload all files in directory recursively
oci os object bulk-upload \
--bucket-name dbbackups \
--src-dir /tmp/backup_test
Step 14: Directory Upload with Prefix Organization
Add a prefix to organize your uploads with timestamps or other metadata. This is essential for maintaining organized backups over time.
# Create date stamp for organization
DATE_STAMP=$(date +%Y%m%d_%H%M%S)
# Upload with date-stamped prefix
oci os object bulk-upload \
--bucket-name dbbackups \
--src-dir /tmp/backup_test \
--prefix "server_backups/${DATE_STAMP}/"
This creates a structure like: server_backups/20240312_143000/daily/backup1.txt
Step 15: Selective File Type Upload
Use include and exclude patterns to upload only specific file types. This is perfect for filtering out temporary files or uploading only certain formats.
# Upload only text files
oci os object bulk-upload \
--bucket-name dbbackups \
--src-dir /tmp/backup_test \
--prefix "text_files/" \
--include "*.txt"
# Upload multiple file types
oci os object bulk-upload \
--bucket-name dbbackups \
--src-dir /tmp/backup_test \
--prefix "data_files/" \
--include "*.txt" \
--include "*.dat" \
--include "*.log"
# Upload with exclusions (exclude temp files and temp directories)
oci os object bulk-upload \
--bucket-name dbbackups \
--src-dir /tmp/backup_test \
--prefix "filtered_backup/" \
--exclude "*.tmp" \
--exclude "*/temp/*"
Step 16: Performance-Optimized Bulk Upload
Maximize upload speed by configuring parallel operations. This is crucial for large-scale backup operations.
# Upload with parallel processing (good for many files)
oci os object bulk-upload \
--bucket-name dbbackups \
--src-dir /tmp/backup_test \
--prefix "optimized_backup/" \
--parallel-operations-count 10 \
--overwrite
# For directories with very large files
oci os object bulk-upload \
--bucket-name dbbackups \
--src-dir /tmp/backup_test \
--prefix "large_files/" \
--parallel-operations-count 15 \
--part-size 128
Step 17: Large File Multipart Upload
For files larger than 100MB, use multipart upload with optimized settings for better performance and reliability. The file is split into chunks and uploaded in parallel.
# Create a large test file (1GB)
dd if=/dev/zero of=/tmp/large_file.dat bs=1M count=1024
# Upload with multipart optimization
oci os object put \
--bucket-name dbbackups \
--file /tmp/large_file.dat \
--name large_files/1gb_test.dat \
--part-size 128 \
--parallel-upload-count 10
Recommended multipart settings by file size:
| File Size Range | Part Size (MB) | Parallel Count | Best For |
|---|---|---|---|
| < 100MB | Default | Default | Small exports, logs |
| 100MB - 1GB | 100 | 5 | Medium RMAN pieces |
| 1GB - 10GB | 128 | 10 | Large RMAN backups |
| > 10GB | 128 | 15 | Full database exports |
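The table above can be folded into a small helper so scripts pick consistent settings automatically. This is only a sketch: upload_tuning is a hypothetical function (not an OCI CLI feature), and the thresholds simply mirror the table, so adjust them to your network.

```shell
# Hypothetical helper: maps a file size in MB to the part size /
# parallel-upload-count recommendations from the table above.
upload_tuning() {
  size_mb=$1
  if [ "$size_mb" -lt 100 ]; then
    echo "default default"            # let the CLI pick its defaults
  elif [ "$size_mb" -le 1024 ]; then
    echo "100 5"
  elif [ "$size_mb" -le 10240 ]; then
    echo "128 10"
  else
    echo "128 15"
  fi
}

# Example: settings for a ~2GB file
upload_tuning 2048    # prints: 128 10
```

The two values can then be spliced into the put command with `set -- $(upload_tuning 2048)` and `--part-size $1 --parallel-upload-count $2`.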
Step 18: Upload with Metadata and Storage Tiers
Add custom metadata to your uploads for better organization and tracking. You can also specify different storage tiers to optimize costs based on access frequency.
# Upload with custom metadata
oci os object put \
--bucket-name dbbackups \
--file /tmp/backup_test/daily/backup1.txt \
--name tagged_backups/backup1.txt \
--metadata '{"backup-type":"daily","server":"prod-db-01","date":"2024-03-12"}'
# Upload to Infrequent Access tier (cost-effective for archives)
oci os object put \
--bucket-name dbbackups \
--file /tmp/backup_test/monthly/backup_jan.txt \
--name archive_backups/backup_jan.txt \
--storage-tier InfrequentAccess
# Upload to Archive tier (lowest cost, retrieval time required)
oci os object put \
--bucket-name dbbackups \
--file /tmp/backup_test/monthly/backup_jan.txt \
--name deep_archive/backup_jan.txt \
--storage-tier Archive
Step 19: Directory Synchronization
The sync command is intelligent - it only uploads new or modified files, making it perfect for incremental backups.
# Initial sync - uploads all files
oci os object sync \
--bucket-name dbbackups \
--src-dir /tmp/backup_test \
--prefix "synced_backups/"
# Modify a file
echo "Updated content $(date)" >> /tmp/backup_test/daily/backup1.txt
# Subsequent sync - only uploads changed files
oci os object sync \
--bucket-name dbbackups \
--src-dir /tmp/backup_test \
--prefix "synced_backups/"
Step 20: Comprehensive Upload Verification
Always verify your uploads to ensure data integrity and completeness. OCI CLI provides powerful querying capabilities for validation.
# List all objects with details in table format
oci os object list \
--bucket-name dbbackups \
--fields name,size,timeCreated \
--output table
# Count total objects uploaded
oci os object list \
--bucket-name dbbackups \
--all | jq '.data | length'
# Calculate total storage used
oci os object list \
--bucket-name dbbackups \
--all \
--query 'sum(data[].size)' \
--output json
Step 21: Download and Compare (Integrity Check)
Verify upload integrity by downloading files and comparing them with the originals using checksum verification.
# Download a file for comparison
oci os object get \
--bucket-name dbbackups \
--name daily/backup1.txt \
--file /tmp/downloaded_backup1.txt
# Compare with original (should show no differences)
diff /tmp/backup_test/daily/backup1.txt /tmp/downloaded_backup1.txt
# Bulk download for verification
mkdir -p /tmp/verify_downloads
oci os object bulk-download \
--bucket-name dbbackups \
--prefix "daily/" \
--download-dir /tmp/verify_downloads
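diff works well for small text files, but for multi-gigabyte backups comparing checksums is quicker to reason about and gives a portable fingerprint. A minimal sketch: verify_checksum is a hypothetical helper, and the demo files stand in for your original file and the copy retrieved with oci os object get.

```shell
# Hypothetical helper: succeeds only if both files have the same MD5
verify_checksum() {
  [ "$(md5sum "$1" | awk '{print $1}')" = "$(md5sum "$2" | awk '{print $1}')" ]
}

# Demo with two identical files standing in for original vs. downloaded copy
echo "Daily backup 1" > /tmp/orig_backup1.txt
cp /tmp/orig_backup1.txt /tmp/copy_backup1.txt
if verify_checksum /tmp/orig_backup1.txt /tmp/copy_backup1.txt; then
  echo "Checksums match - upload verified"
fi
```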
Step 22: RMAN Backup Upload Script
This production-ready script automatically finds and uploads RMAN backup files with proper organization and error handling.
cat > /root/upload_rman_backups.sh << 'EOF'
#!/bin/bash
# Production RMAN Backup Upload Script
set -e # Exit on error
# Configuration
export ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1
export ORACLE_SID=ORCL
export PATH=$ORACLE_HOME/bin:$PATH
BUCKET_NAME="dbbackups"
BACKUP_DIR="/u01/app/oracle/fast_recovery_area/${ORACLE_SID}/backupset"
DATE_YEAR=$(date +%Y)
DATE_MONTH=$(date +%m)
DATE_DAY=$(date +%d)
DATE_STAMP=$(date +%Y%m%d_%H%M%S)
LOG_FILE="/var/log/oci_rman_upload_${DATE_STAMP}.log"
# Logging function
log_message() {
echo "$(date '+%Y-%m-%d %H:%M:%S'): $1" | tee -a $LOG_FILE
}
# Error handling
handle_error() {
log_message "ERROR: $1"
exit 1
}
log_message "========================================="
log_message "🚀 RMAN Backup Upload Starting"
log_message "Database: ${ORACLE_SID}"
log_message "Target Path: rman_backups/${ORACLE_SID}/${DATE_YEAR}/${DATE_MONTH}/${DATE_DAY}/"
log_message "========================================="
# Find RMAN backups from last 24 hours
BACKUP_FILES=$(find $BACKUP_DIR -type f -name "*.bkp" -mtime -1 2>/dev/null || echo "")
# grep -c prints 0 and exits non-zero when nothing matches; || true stops
# set -e from aborting without appending a second "0" to the count
BACKUP_COUNT=$(echo "$BACKUP_FILES" | grep -c "\.bkp" || true)
log_message "📊 Found $BACKUP_COUNT RMAN backup files to upload"
if [ "$BACKUP_COUNT" -eq 0 ]; then
log_message "ℹ️ No new backups found. Exiting gracefully."
exit 0
fi
# Upload each backup file with metadata
echo "$BACKUP_FILES" | while read backup_file; do
if [ -f "$backup_file" ]; then
file_name=$(basename "$backup_file")
file_size=$(du -h "$backup_file" | cut -f1)
file_date=$(date -r "$backup_file" +%Y-%m-%d)
# Construct organized object name
object_name="rman_backups/${ORACLE_SID}/${DATE_YEAR}/${DATE_MONTH}/${DATE_DAY}/${file_name}"
log_message "📤 Uploading: $file_name (Size: $file_size)"
# Upload with optimized settings (NO --progress-bar!)
if oci os object put \
--bucket-name $BUCKET_NAME \
--file "$backup_file" \
--name "$object_name" \
--part-size 128 \
--parallel-upload-count 10 \
--metadata "{
\"database\":\"${ORACLE_SID}\",
\"backup-date\":\"${file_date}\",
\"backup-type\":\"rman\",
\"file-size\":\"${file_size}\",
\"upload-timestamp\":\"${DATE_STAMP}\"
}" >> $LOG_FILE 2>&1; then
log_message "✅ Successfully uploaded: $file_name"
else
handle_error "Failed to upload: $file_name"
fi
fi
done
# Upload control file backups
log_message "📤 Uploading control file backups..."
find $BACKUP_DIR -type f -name "*.ctl" -mtime -1 2>/dev/null | while read ctl_file; do
if [ -f "$ctl_file" ]; then
file_name=$(basename "$ctl_file")
object_name="rman_backups/${ORACLE_SID}/${DATE_YEAR}/${DATE_MONTH}/${DATE_DAY}/controlfiles/${file_name}"
oci os object put \
--bucket-name $BUCKET_NAME \
--file "$ctl_file" \
--name "$object_name" >> $LOG_FILE 2>&1
log_message "✅ Uploaded control file: $file_name"
fi
done
log_message "🎉 RMAN backup upload completed successfully!"
EOF
# Make script executable
chmod +x /root/upload_rman_backups.sh
Step 23: Data Pump Export Upload
Automate Oracle Data Pump export uploads with this streamlined script that handles both dump files and log files.
cat > /root/upload_datapump_exports.sh << 'EOF'
#!/bin/bash
# Data Pump Export Upload Script
set -e
export ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1
export ORACLE_SID=ORCL
export PATH=$ORACLE_HOME/bin:$PATH
BUCKET_NAME="dbbackups"
EXPORT_DIR="/u01/exports"
DATE_YEAR=$(date +%Y)
DATE_MONTH=$(date +%m)
DATE_DAY=$(date +%d)
DATE_STAMP=$(date +%Y%m%d_%H%M%S)
LOG_FILE="/var/log/datapump_upload_${DATE_STAMP}.log"
log_message() {
echo "$(date '+%Y-%m-%d %H:%M:%S'): $1" | tee -a $LOG_FILE
}
log_message "🗂️ Starting Data Pump export upload for ${ORACLE_SID}"
# Check for recent exports (last 24 hours)
EXPORT_FILES=$(find $EXPORT_DIR -type f \( -name "*.dmp" -o -name "*.log" \) -mtime -1 2>/dev/null || echo "")
if [ -z "$EXPORT_FILES" ]; then
log_message "ℹ️ No recent export files found. Exiting."
exit 0
fi
# Upload dump files with metadata
find $EXPORT_DIR -type f -name "*.dmp" -mtime -1 2>/dev/null | while read dmp_file; do
if [ -f "$dmp_file" ]; then
file_name=$(basename "$dmp_file")
file_size=$(du -h "$dmp_file" | cut -f1)
object_name="datapump_exports/${ORACLE_SID}/${DATE_YEAR}/${DATE_MONTH}/${DATE_DAY}/${file_name}"
log_message "📤 Uploading export: $file_name (Size: $file_size)"
# With set -e, a failed command aborts the script before a later $? check
# could run, so test the upload directly in the if condition
if oci os object put \
--bucket-name $BUCKET_NAME \
--file "$dmp_file" \
--name "$object_name" \
--part-size 128 \
--parallel-upload-count 10 \
--storage-tier Standard \
--metadata "{
\"database\":\"${ORACLE_SID}\",
\"export-date\":\"$(date +%Y-%m-%d)\",
\"export-type\":\"datapump\",
\"file-size\":\"${file_size}\"
}" >> $LOG_FILE 2>&1; then
log_message "✅ Successfully uploaded: $file_name"
else
log_message "❌ Failed to upload: $file_name"
exit 1
fi
fi
done
# Upload log files
find $EXPORT_DIR -type f -name "*.log" -mtime -1 2>/dev/null | while read log_file; do
if [ -f "$log_file" ]; then
file_name=$(basename "$log_file")
object_name="datapump_exports/${ORACLE_SID}/${DATE_YEAR}/${DATE_MONTH}/${DATE_DAY}/logs/${file_name}"
oci os object put \
--bucket-name $BUCKET_NAME \
--file "$log_file" \
--name "$object_name" >> $LOG_FILE 2>&1
log_message "✅ Uploaded log: $file_name"
fi
done
log_message "🎉 Data Pump export upload completed successfully!"
EOF
chmod +x /root/upload_datapump_exports.sh
Step 24: Comprehensive Backup and Upload Script
This production-grade script combines RMAN backup execution with automatic upload to OCI, including comprehensive error handling and verification.
cat > /root/comprehensive_backup_upload.sh << 'EOF'
#!/bin/bash
# Complete Backup and Upload Automation Script
set -e # Exit on any error
# Configuration
export ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1
export ORACLE_SID=ORCL
export PATH=$ORACLE_HOME/bin:$PATH
BUCKET_NAME="dbbackups"
DATE_STAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_DIR="/u01/backups/${DATE_STAMP}"
LOG_FILE="/var/log/comprehensive_backup_${DATE_STAMP}.log"
# Logging function
log_message() {
echo "$(date '+%Y-%m-%d %H:%M:%S'): $1" | tee -a $LOG_FILE
}
# Error handling
handle_error() {
log_message "ERROR: $1"
exit 1
}
log_message "Starting comprehensive backup and upload process"
# Create backup directory
mkdir -p $BACKUP_DIR || handle_error "Failed to create backup directory"
# Perform RMAN backup
log_message "Starting RMAN backup"
rman target / << RMANEOF >> $LOG_FILE 2>&1 || handle_error "RMAN backup failed"
RUN {
ALLOCATE CHANNEL ch1 DEVICE TYPE DISK FORMAT '${BACKUP_DIR}/backup_%U.bkp';
BACKUP DATABASE PLUS ARCHIVELOG DELETE INPUT;
BACKUP CURRENT CONTROLFILE FORMAT '${BACKUP_DIR}/control_%U.ctl';
RELEASE CHANNEL ch1;
}
EXIT;
RMANEOF
log_message "RMAN backup completed successfully"
# Upload to OCI Object Storage
log_message "Starting upload to OCI Object Storage"
oci os object bulk-upload \
--bucket-name $BUCKET_NAME \
--src-dir $BACKUP_DIR \
--prefix "backups/${ORACLE_SID}/${DATE_STAMP}/" \
--parallel-operations-count 10 \
--part-size 128 \
--overwrite \
>> $LOG_FILE 2>&1 || handle_error "Upload to OCI failed"
log_message "Upload completed successfully"
# Verify upload
UPLOADED_COUNT=$(oci os object list \
--bucket-name $BUCKET_NAME \
--prefix "backups/${ORACLE_SID}/${DATE_STAMP}/" \
--all | jq '.data | length')
LOCAL_COUNT=$(find $BACKUP_DIR -type f | wc -l)
if [ "$UPLOADED_COUNT" -eq "$LOCAL_COUNT" ]; then
log_message "Verification successful: $UPLOADED_COUNT files uploaded"
else
handle_error "File count mismatch. Local: $LOCAL_COUNT, Uploaded: $UPLOADED_COUNT"
fi
# Cleanup old local backup directories (keep last 7 days; -mindepth/-maxdepth
# protect /u01/backups itself, and no hardcoded year pattern to go stale)
find /u01/backups -mindepth 1 -maxdepth 1 -type d -mtime +7 -exec rm -rf {} + 2>/dev/null || true
log_message "Process completed successfully"
EOF
chmod +x /root/comprehensive_backup_upload.sh
Step 25: Schedule Automated Uploads
Configure cron jobs to run your automation scripts at optimal times, ensuring backups are created and uploaded without manual intervention.
# Edit crontab
crontab -e
# Add these entries for automated backups:
# Daily full backup and upload at 2 AM
0 2 * * * /root/comprehensive_backup_upload.sh >> /var/log/daily_backup_cron.log 2>&1
# RMAN backup upload every 6 hours
0 */6 * * * /root/upload_rman_backups.sh >> /var/log/rman_upload_cron.log 2>&1
# Weekly Data Pump export and upload (Sunday 3 AM)
0 3 * * 0 /root/upload_datapump_exports.sh >> /var/log/weekly_export_cron.log 2>&1
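Cron itself will happily start a new run while the previous one is still uploading. On Linux, one common guard (assuming util-linux flock is installed; the lock file path is an assumption, any writable path works) is to wrap each entry so overlapping runs are skipped:

```shell
# flock takes an exclusive lock on the lock file; -n makes a second
# invocation exit immediately instead of waiting, so runs never overlap.
# Crontab entry would look like:
# 0 */6 * * * flock -n /var/lock/rman_upload.lock /root/upload_rman_backups.sh >> /var/log/rman_upload_cron.log 2>&1

# Quick demonstration of the behavior:
flock -n /tmp/demo.lock sh -c '! flock -n /tmp/demo.lock true' \
  && echo "second lock attempt was refused while the first was held"
```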
Step 26: Common Issues and Solutions
Authentication Problems
# Check config file
cat ~/.oci/config
# Verify key file permissions (should be 600)
ls -la ~/.oci/oci_api_key.pem
# Fix permissions if needed
chmod 600 ~/.oci/oci_api_key.pem
# Test authentication
oci os ns get
Performance Issues
# For many small files - increase parallel operations
oci os object bulk-upload \
--bucket-name dbbackups \
--src-dir /path/to/files \
--parallel-operations-count 15
# For large files - optimize multipart settings
oci os object put \
--bucket-name dbbackups \
--file /path/to/large/file \
--part-size 128 \
--parallel-upload-count 15
Upload Failures
# Enable debug mode for detailed error messages
oci os object put \
--bucket-name dbbackups \
--file /path/to/file \
--name filename \
--debug
# For problematic uploads, use smaller part sizes
oci os object put \
--bucket-name dbbackups \
--file /path/to/large/file \
--name filename \
--part-size 50 \
--parallel-upload-count 3
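Transient network errors often succeed on a second attempt. A small retry wrapper keeps scripts from failing on one-off blips; retry is a hypothetical helper (not an OCI CLI feature) and the linear backoff policy is just an assumption to adjust.

```shell
# Hypothetical helper: retries a command up to $1 times, sleeping a bit
# longer between each attempt (attempt number = seconds of backoff)
retry() {
  max=$1; shift
  n=1
  until "$@"; do
    if [ "$n" -ge "$max" ]; then
      echo "Giving up after $n attempts: $*" >&2
      return 1
    fi
    sleep "$n"
    n=$(( n + 1 ))
  done
}

# Usage sketch:
# retry 3 oci os object put --bucket-name dbbackups --file /path/to/file --name filename
retry 3 true && echo "succeeded"
```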
Step 27: Performance Testing and Monitoring
Benchmark your upload performance and monitor ongoing operations to ensure optimal throughput.
# Create test file and measure upload speed
dd if=/dev/zero of=/tmp/speedtest.dat bs=1M count=500
# Time the upload
time oci os object put \
--bucket-name dbbackups \
--file /tmp/speedtest.dat \
--name speedtest.dat \
--part-size 128 \
--parallel-upload-count 10
# Monitor ongoing uploads
watch -n 2 'oci os object list --bucket-name dbbackups --prefix "backups/" | jq ".data | length"'
Essential Command Summary
# Upload single file
oci os object put --bucket-name BUCKET --file FILE --name NAME
# Upload directory
oci os object bulk-upload --bucket-name BUCKET --src-dir DIR
# Upload directory with prefix
oci os object bulk-upload --bucket-name BUCKET --src-dir DIR --prefix PREFIX/
# Upload specific file types
oci os object bulk-upload --bucket-name BUCKET --src-dir DIR --include "*.dmp" --include "*.log"
# Upload with performance optimization
oci os object bulk-upload --bucket-name BUCKET --src-dir DIR --parallel-operations-count 10
# Sync directory (incremental)
oci os object sync --bucket-name BUCKET --src-dir DIR --prefix PREFIX/
# List objects
oci os object list --bucket-name BUCKET
# Download file
oci os object get --bucket-name BUCKET --name OBJECT_NAME --file LOCAL_FILE
# Bulk download
oci os object bulk-download --bucket-name BUCKET --prefix PREFIX/ --download-dir DIR
Performance Optimization Settings
| File Size Range | Part Size (MB) | Parallel Count | Expected Throughput |
|---|---|---|---|
| < 100MB | Default | Default | 50-100 Mbps |
| 100MB - 1GB | 100 | 5 | 200-400 Mbps |
| 1GB - 10GB | 128 | 10 | 400-800 Mbps |
| > 10GB | 128 | 15 | 800-1200 Mbps |
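To sanity-check those throughput figures against a real upload, convert file size and expected throughput into a duration: size in MB times 8 bits per byte, divided by megabits per second. A rough integer-math sketch (estimate_seconds is a hypothetical helper; compare its output with the time command from Step 27):

```shell
# Back-of-envelope upload time: size_MB * 8 / Mbps = seconds
# (integer arithmetic, so results round down)
estimate_seconds() {
  size_mb=$1; mbps=$2
  echo $(( size_mb * 8 / mbps ))
}

estimate_seconds 1024 400   # 1GB at 400 Mbps -> prints 20
```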