Docker Container Backup and Restore

Implementing reliable backup and restore procedures for Docker containers and their data is critical for business continuity. This comprehensive guide covers container commit, export/import operations, volume backup strategies, tar-based archival, automated backup scripts, scheduling, and disaster recovery procedures. Proper backup practices protect against data loss and enable quick recovery from failures.

Table of Contents

  • Understanding Backup Strategies
  • Container Image Backup
  • Volume Backup and Restoration
  • Container Export and Import
  • Automated Backup Scripts
  • Scheduling and Retention
  • Disaster Recovery Procedures
  • Backup Verification
  • Cloud Storage Integration
  • Conclusion

Understanding Backup Strategies

Different backup strategies address different recovery objectives.

Backup types:

  • Full backup: Complete container state and volumes
  • Incremental backup: Only changed data since last backup
  • Differential backup: Changed data since full backup
  • Snapshot backup: Point-in-time state capture
  • Continuous replication: Real-time data sync

Recovery objectives (RTO/RPO):

  • RTO: Recovery Time Objective (acceptable downtime)
  • RPO: Recovery Point Objective (acceptable data loss)
  • Full backups: fast, single-step restores (lower RTO) but slow to create and storage-heavy
  • Incremental/differential: storage-efficient but multi-step restores (higher RTO)

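
These objectives translate directly into schedule constraints: the backup interval bounds the worst-case data loss. A minimal sketch of that check (the interval and RPO values are illustrative):

# Worst-case data loss equals the time since the last good backup, so a
# schedule satisfies the RPO when the backup interval does not exceed it
BACKUP_INTERVAL_MIN=60   # hourly volume snapshots
RPO_MIN=60               # acceptable data loss: 1 hour

if [ "$BACKUP_INTERVAL_MIN" -le "$RPO_MIN" ]; then
    echo "schedule meets RPO"
else
    echo "schedule violates RPO: shorten the backup interval"
fi

The same comparison belongs in any backup script that reads its schedule from configuration, so a misconfigured interval fails loudly.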
# Plan backup strategy based on requirements
cat > backup-strategy.txt <<'EOF'
Container: production-db
Type: Stateful (PostgreSQL)
RTO: 2 hours
RPO: 1 hour
Strategy: Daily full backup + hourly volume snapshots

Container: web-api
Type: Stateless (API)
RTO: 15 minutes
RPO: None (rebuilt from code)
Strategy: Image export on release, no volume backup

Container: cache
Type: Ephemeral (Redis)
RTO: N/A
RPO: N/A
Strategy: No backup needed
EOF
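
A strategy file in this shape can also drive automation. A hedged sketch (the awk program assumes exactly the `Container:` / `Strategy:` labels used above) that lists the containers whose strategy calls for volume backups:

# Compact copy of the strategy format above, so this sketch is self-contained
cat > /tmp/backup-strategy.txt <<'EOF'
Container: production-db
Strategy: Daily full backup + hourly volume snapshots
Container: web-api
Strategy: Image export on release, no volume backup
Container: cache
Strategy: No backup needed
EOF

# Remember the current container; print it when its Strategy line
# mentions volume backups but is not a "no volume backup" entry
awk '/^Container:/ {c=$2}
     /^Strategy:/ && /volume/ && !/no volume/ {print c}' /tmp/backup-strategy.txt
# → production-db

The output can feed a selective backup loop so the strategy file stays the single source of truth.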

Container Image Backup

Back up container configurations as Docker images.

Commit container to image:

# Create container from base image
docker run -d --name myapp-v1 myapp:latest

# Make modifications to container
docker exec myapp-v1 apt-get install -y extra-package

# Commit container as new image
docker commit myapp-v1 myapp:backup-v1

# List images to verify
docker images | grep myapp

# Tag image for storage
docker tag myapp:backup-v1 myregistry.com/myapp:backup-v1

# Push to registry for safekeeping
docker push myregistry.com/myapp:backup-v1

# Verify image backed up
docker pull myregistry.com/myapp:backup-v1

Save image to tar file:

# Save image as tar archive
docker save myapp:latest -o myapp-image.tar

# Verify tar created
ls -lh myapp-image.tar

# Compress for storage efficiency
gzip myapp-image.tar
ls -lh myapp-image.tar.gz

# Multiple images in single tar
docker save \
  myapp:v1 \
  myapp:v2 \
  dependencies:latest \
  -o myapp-multi.tar

# Verify multi-image tar contents
tar -tf myapp-multi.tar | head -20

Load image from tar:

# Load image from tar file
docker load -i myapp-image.tar

# Or load the compressed tar directly (docker load accepts gzip archives)
docker load -i myapp-image.tar.gz

# Verify image loaded
docker images | grep myapp

# Tag loaded image appropriately
docker tag myapp:latest myregistry.com/myapp:restored

Volume Backup and Restoration

Back up persistent data stored in volumes.

List and identify volumes:

# List all volumes
docker volume ls

# Inspect volume details
docker volume inspect myapp-data

# Find volume mount point
docker volume inspect myapp-data --format='{{.Mountpoint}}'

# Find volumes used by container
docker inspect myapp | grep -A 10 Mounts

Backup volume data:

# Create backup container for volume access
docker run --rm \
  --volumes-from <container-name> \
  -v $(pwd)/backups:/backup \
  ubuntu tar czf /backup/volume-backup.tar.gz /data

# Backup specific volume
docker run --rm \
  -v myapp-data:/data \
  -v $(pwd)/backups:/backup \
  ubuntu tar czf /backup/myapp-data.tar.gz /data

# Verify backup created
ls -lh backups/myapp-data.tar.gz

# Check backup contents
tar -tzf backups/myapp-data.tar.gz | head

Incremental volume backups:

# Full backup
docker run --rm \
  -v myapp-data:/data \
  -v $(pwd)/backups:/backup \
  ubuntu tar czf /backup/myapp-data-full.tar.gz /data

# Incremental using find -newer: archive only files changed since the
# last run (initialize once with: touch manifest/last-backup). Piping the
# file list into a single tar invocation avoids rewriting the archive
# once per file.
docker run --rm \
  -v myapp-data:/data \
  -v $(pwd)/backups:/backup \
  -v $(pwd)/manifest:/manifest \
  ubuntu:latest bash -c '
    find /data -type f -newer /manifest/last-backup -print0 |
      tar czf /backup/myapp-data-incremental.tar.gz --null -T -
    touch /manifest/last-backup
  '

Restore volume from backup:

# Create new volume for restore
docker volume create myapp-data-restored

# Extract backup into volume; the archive was created from /data, so
# strip the leading "data/" path component
docker run --rm \
  -v myapp-data-restored:/data \
  -v $(pwd)/backups:/backup \
  ubuntu tar -xzf /backup/myapp-data.tar.gz -C /data --strip-components=1

# Verify data restored
docker run --rm \
  -v myapp-data-restored:/data \
  ubuntu ls -la /data

# Use restored volume with container
docker run -d \
  --name myapp-restored \
  -v myapp-data-restored:/data \
  myapp:latest
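
After a restore, prove the data actually round-tripped. The sketch below uses plain directories in place of volume mountpoints so it runs without Docker; the same sha256sum comparison works from inside helper containers:

# Simulate a volume backup/restore round-trip on plain directories
mkdir -p /tmp/vol-src /tmp/vol-restored /tmp/vol-backups
echo "hello" > /tmp/vol-src/app.conf
echo "world" > /tmp/vol-src/data.db

# Backup and restore with the same tar style used for volumes
tar czf /tmp/vol-backups/data.tar.gz -C /tmp/vol-src .
tar xzf /tmp/vol-backups/data.tar.gz -C /tmp/vol-restored

# Compare sorted per-file checksums; diff is silent when the trees match
(cd /tmp/vol-src && find . -type f -exec sha256sum {} + | sort) > /tmp/src.sums
(cd /tmp/vol-restored && find . -type f -exec sha256sum {} + | sort) > /tmp/restored.sums
diff /tmp/src.sums /tmp/restored.sums && echo "restore verified"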

Container Export and Import

Export and import complete container filesystems.

Export container:

# Export running container filesystem
docker export myapp -o myapp-container.tar

# Export stopped container
docker export stopped-container > myapp-backup.tar

# Verify export
ls -lh myapp-container.tar

# Inspect export contents (plain tar, so no -z flag)
tar -tf myapp-container.tar | head -20

# Compress export for efficiency
gzip myapp-container.tar

Import exported container:

# Import as new image
docker import myapp-container.tar.gz myapp:imported

# Verify imported image
docker images | grep imported

# Run container from imported image; docker import discards CMD and
# ENTRYPOINT metadata, so the start command must be given explicitly
docker run -d \
  --name myapp-restored \
  -p 8080:5000 \
  myapp:imported \
  <start-command>

# Compare original and imported
docker inspect myapp:latest
docker inspect myapp:imported

Export with volume data:

# Export container with mounted volumes
# First, identify volumes
docker inspect myapp | grep -A 5 Mounts

# Export filesystem (includes volume mount points, not data)
docker export myapp -o myapp-with-volumes.tar

# Separately back up volume data; with --volumes-from, volumes appear at
# their in-container mount paths (e.g. /data), not /var/lib/docker/volumes
docker run --rm \
  --volumes-from myapp \
  -v $(pwd):/backup \
  ubuntu tar czf /backup/myapp-volumes.tar.gz /data

# Restore both container and volumes
docker import myapp-with-volumes.tar myapp:restore-test

# Recreate volumes and restore data separately
docker volume create myapp-data
docker run --rm \
  -v myapp-data:/data \
  -v $(pwd):/backup \
  ubuntu tar -xzf /backup/myapp-volumes.tar.gz -C /

Automated Backup Scripts

Create scripts for regular, reliable backups.

Daily backup script:

# Create backup directory
mkdir -p /backups/docker

cat > /usr/local/bin/backup-docker.sh <<'EOF'
#!/bin/bash

BACKUP_DIR="/backups/docker"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
LOG_FILE="$BACKUP_DIR/backup.log"

# Create timestamp directory
BACKUP_PATH="$BACKUP_DIR/$TIMESTAMP"
mkdir -p "$BACKUP_PATH"

echo "[$(date)] Starting Docker backup" >> "$LOG_FILE"

# Backup container images (skip dangling <none> images)
IMAGES=$(docker images --format "{{.Repository}}:{{.Tag}}" | grep -v '<none>')
for image in $IMAGES; do
    echo "[$(date)] Backing up image: $image" >> "$LOG_FILE"
    docker save "$image" -o "$BACKUP_PATH/${image//\//_}.tar" 2>> "$LOG_FILE"
done

# Backup volumes
VOLUMES=$(docker volume ls --format "{{.Name}}")
for volume in $VOLUMES; do
    echo "[$(date)] Backing up volume: $volume" >> "$LOG_FILE"
    docker run --rm \
        -v "$volume":/volume \
        -v "$BACKUP_PATH":/backup \
        ubuntu tar czf "/backup/${volume}.tar.gz" -C /volume . 2>> "$LOG_FILE"
done

# Backup container configs
docker ps -a --format "{{.Names}}" | while read container; do
    echo "[$(date)] Exporting container: $container" >> "$LOG_FILE"
    docker export "$container" -o "$BACKUP_PATH/${container}.tar" 2>> "$LOG_FILE"
done

# Compress backup directory
echo "[$(date)] Compressing backup" >> "$LOG_FILE"
cd "$BACKUP_DIR" || exit 1
tar czf "${TIMESTAMP}.tar.gz" "$TIMESTAMP"
rm -rf "$TIMESTAMP"

# Calculate size
SIZE=$(du -sh "${TIMESTAMP}.tar.gz" | cut -f1)
echo "[$(date)] Backup completed. Size: $SIZE" >> "$LOG_FILE"

# Remove backups older than 7 days
find "$BACKUP_DIR" -maxdepth 1 -name "*.tar.gz" -type f -mtime +7 -delete

EOF

chmod +x /usr/local/bin/backup-docker.sh

# Run backup
/usr/local/bin/backup-docker.sh

# Verify backup
ls -lh /backups/docker/

Selective backup script:

cat > /usr/local/bin/backup-selective.sh <<'EOF'
#!/bin/bash

BACKUP_DIR="/backups/docker"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_PATH="$BACKUP_DIR/$TIMESTAMP"

mkdir -p "$BACKUP_PATH"

# List of containers to backup
CONTAINERS_TO_BACKUP=(
    "production-db"
    "production-api"
    "cache-redis"
)

# Backup specific containers only
for container in "${CONTAINERS_TO_BACKUP[@]}"; do
    if docker ps -a --format "{{.Names}}" | grep -q "^${container}$"; then
        echo "Backing up: $container"
        
        # Export container
        docker export "$container" -o "$BACKUP_PATH/${container}-fs.tar"
        
        # Backup volumes used by container
        VOLUMES=$(docker inspect "$container" --format='{{range .Mounts}}{{.Name}}{{"\n"}}{{end}}')
        for volume in $VOLUMES; do
            if [ ! -z "$volume" ]; then
                docker run --rm \
                    -v "$volume":/volume \
                    -v "$BACKUP_PATH":/backup \
                    ubuntu tar czf "/backup/${container}-${volume}.tar.gz" -C /volume .
            fi
        done
    fi
done

# Create manifest
cat > "$BACKUP_PATH/manifest.txt" <<MANIFEST
Backup Date: $(date)
Containers: ${CONTAINERS_TO_BACKUP[*]}
MANIFEST

# Compress and cleanup
cd "$BACKUP_DIR"
tar czf "${TIMESTAMP}.tar.gz" "$TIMESTAMP"
rm -rf "$TIMESTAMP"

echo "Backup completed: ${TIMESTAMP}.tar.gz"
EOF

chmod +x /usr/local/bin/backup-selective.sh

Scheduling and Retention

Schedule regular backups with retention policies.

Cron scheduling:

# Edit crontab
crontab -e

# Add backup job (daily at 2 AM)
0 2 * * * /usr/local/bin/backup-docker.sh

# Hourly backups for critical containers
0 * * * * /usr/local/bin/backup-selective.sh

# Weekly full backup (Sunday at 3 AM; day-of-week is the fifth field)
0 3 * * 0 /usr/local/bin/backup-docker.sh

# View scheduled jobs
crontab -l
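
Cron itself will happily start a new backup while the previous one is still running. A common guard is a lock file held with flock; a sketch (the lock path is an assumption):

# Serialize scheduled runs: if a previous run still holds the lock on
# file descriptor 9, skip this invocation instead of running concurrently
LOCK_FILE=/tmp/backup-docker.lock
(
    flock -n 9 || { echo "another backup is running; skipping"; exit 1; }
    echo "lock acquired; running backup"
    # /usr/local/bin/backup-docker.sh would run here
) 9> "$LOCK_FILE"

Wrapping the cron command this way keeps an overrunning backup and the next scheduled invocation from corrupting each other's output.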

Retention policy:

cat > /usr/local/bin/cleanup-backups.sh <<'EOF'
#!/bin/bash

BACKUP_DIR="/backups/docker"
RETENTION_DAYS=30

echo "Cleaning up backups older than $RETENTION_DAYS days..."

# Remove old backups
find "$BACKUP_DIR" -maxdepth 1 -name "*.tar.gz" -type f -mtime +$RETENTION_DAYS -delete

# Remove incomplete backups
find "$BACKUP_DIR" -maxdepth 1 -type d -name "[0-9]*" -mtime +1 -exec rm -rf {} \;

# Report cleanup
echo "Cleanup completed. Current backups:"
ls -lh "$BACKUP_DIR" | grep tar.gz

EOF

chmod +x /usr/local/bin/cleanup-backups.sh

# Schedule cleanup daily
# Add to crontab:
# 0 4 * * * /usr/local/bin/cleanup-backups.sh

Backup rotation:

# Keep multiple versions: daily, weekly, monthly

cat > /usr/local/bin/backup-with-rotation.sh <<'EOF'
#!/bin/bash

BACKUP_DIR="/backups/docker"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)

# Determine backup type based on date
DAY=$(date +%d)
DOW=$(date +%w)

if [ "$DAY" -eq 1 ]; then
    BACKUP_TYPE="monthly"
elif [ "$DOW" -eq 0 ]; then
    BACKUP_TYPE="weekly"
else
    BACKUP_TYPE="daily"
fi

BACKUP_PATH="$BACKUP_DIR/${BACKUP_TYPE}-${TIMESTAMP}"
mkdir -p "$BACKUP_PATH"

# Run backup
docker save $(docker images --format "{{.Repository}}:{{.Tag}}" | grep -v none) \
    -o "$BACKUP_PATH/images.tar"

# Keep only recent backups
# Daily: 7 most recent
# Weekly: 4 most recent
# Monthly: 12 most recent

case $BACKUP_TYPE in
    daily)
        find "$BACKUP_DIR" -maxdepth 1 -name "daily-*" -type d | sort -r | tail -n +8 | xargs -r rm -rf
        ;;
    weekly)
        find "$BACKUP_DIR" -maxdepth 1 -name "weekly-*" -type d | sort -r | tail -n +5 | xargs -r rm -rf
        ;;
    monthly)
        find "$BACKUP_DIR" -maxdepth 1 -name "monthly-*" -type d | sort -r | tail -n +13 | xargs -r rm -rf
        ;;
esac

EOF

chmod +x /usr/local/bin/backup-with-rotation.sh
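
The `sort -r | tail -n +N` pruning in the rotation script is easy to get off by one, and it can be exercised safely on empty directories before trusting it with real backups:

# Create ten fake daily backup directories, then keep only the 7 newest
PRUNE_DIR=$(mktemp -d)
for i in 01 02 03 04 05 06 07 08 09 10; do
    mkdir "$PRUNE_DIR/daily-202401${i}_020000"
done

# Timestamped names sort chronologically, so sort -r lists newest first;
# tail -n +8 selects everything past the 7th entry, and xargs -r skips
# running rm when the list is empty
find "$PRUNE_DIR" -maxdepth 1 -name "daily-*" -type d | sort -r | tail -n +8 | xargs -r rm -rf

ls "$PRUNE_DIR" | wc -l
# → 7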

Disaster Recovery Procedures

Implement systematic recovery procedures for various failure scenarios.

Total host failure recovery:

# Scenario: Host completely lost, need to rebuild everything

# Step 1: Create new host with Docker installed

# Step 2: Transfer backups to new host
scp -r /backups/docker user@newhost:/backups/

# Step 3: Restore all images
BACKUP_FILE="/backups/docker/latest.tar.gz"
tar -xzf "$BACKUP_FILE"
# The archive unpacks into its timestamp-named directory
cd "$(tar -tzf "$BACKUP_FILE" | head -1)" || exit 1
for image_tar in *.tar; do
    docker load -i "$image_tar"
done

# Step 4: Recreate volumes
for volume_backup in *-volume.tar.gz; do
    VOLUME_NAME=${volume_backup%-volume.tar.gz}
    docker volume create "$VOLUME_NAME"
    docker run --rm \
        -v "$VOLUME_NAME":/data \
        -v "$(pwd)":/backup \
        ubuntu tar -xzf "/backup/$volume_backup" -C /data
done

# Step 5: Start containers
docker run -d --name myapp -v myapp-data:/data myapp:latest

# Step 6: Verify restoration
docker ps
docker volume ls
docker logs myapp

Database container recovery:

# Scenario: Database container corrupted, need restore

# Stop container
docker stop production-db

# Backup current volume for forensics
docker run --rm \
    -v production-db-data:/data \
    -v $(pwd):/backup \
    ubuntu tar czf /backup/corrupted-data.tar.gz /data

# Remove corrupted volume
docker volume rm production-db-data

# Restore from backup
docker volume create production-db-data
docker run --rm \
    -v production-db-data:/data \
    -v /backups/docker:/backup \
    ubuntu tar -xzf /backup/production-db-data.tar.gz -C /data --strip-components=1

# Restart container
docker start production-db

# Verify recovery
docker logs production-db

Backup Verification

Verify backups are valid and recoverable.

Test restore procedures:

# Periodically test restores in non-production environment

# Create test directory
mkdir -p /test-restore

# Extract backup (the archive unpacks into its timestamp-named directory)
tar -xzf /backups/docker/latest.tar.gz -C /test-restore/

# Test image restoration from inside the extracted directory
cd /test-restore/*/ || exit 1
docker load -i "$(ls *.tar | head -1)"

# Test volume restoration; globs are not expanded inside docker run
# without a shell, so invoke bash -c and restore one archive
docker volume create test-restore-vol
docker run --rm \
    -v test-restore-vol:/data \
    -v "$(pwd)":/backup \
    ubuntu bash -c 'tar -xzf "$(ls /backup/*.tar.gz | head -1)" -C /data'

# Cleanup test
docker volume rm test-restore-vol
rm -rf /test-restore

Backup integrity checks:

# Verify backup file integrity

# Check tar validity
tar -tzf /backups/docker/latest.tar.gz > /dev/null && echo "Backup valid" || echo "Backup corrupted"

# Verify checksums
sha256sum /backups/docker/*.tar.gz > /backups/docker/checksums.txt
sha256sum -c /backups/docker/checksums.txt

# Document backup metadata (unquoted EOF so the $(...) substitutions expand)
cd /backups/docker
cat > backup-manifest.txt <<EOF
Backup Date: $(date)
Total Size: $(du -sh /backups/docker | cut -f1)
Containers Backed Up: $(ls *.tar 2>/dev/null | wc -l)
Volumes Backed Up: $(ls *-volume.tar.gz 2>/dev/null | wc -l)
Verified: Yes
EOF
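
These checks can be combined into a single pass that reports the first problem it finds. A sketch, exercised here on a throwaway archive rather than real backups:

# Build a throwaway backup directory with one archive and its checksum
VERIFY_DIR=$(mktemp -d)
echo "payload" > "$VERIFY_DIR/file.txt"
tar czf "$VERIFY_DIR/backup.tar.gz" -C "$VERIFY_DIR" file.txt
(cd "$VERIFY_DIR" && sha256sum ./*.tar.gz > checksums.txt)

# Verify every archive is a readable tar and still matches its checksum
verify_backups() {
    dir=$1; status=0
    for archive in "$dir"/*.tar.gz; do
        tar -tzf "$archive" > /dev/null 2>&1 || { echo "CORRUPT: $archive"; status=1; }
    done
    (cd "$dir" && sha256sum -c checksums.txt > /dev/null 2>&1) || { echo "CHECKSUM MISMATCH in $dir"; status=1; }
    if [ "$status" -eq 0 ]; then echo "all backups verified"; fi
    return $status
}

verify_backups "$VERIFY_DIR"
# → all backups verified

Pointed at the real backup directory, the function's nonzero exit status makes it easy to alert from cron on the first corrupted or tampered archive.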

Cloud Storage Integration

Store backups in cloud storage for offsite protection.

Upload to AWS S3:

# Install AWS CLI
sudo apt-get install -y awscli

# Configure credentials
aws configure

# Upload backups to S3
aws s3 cp /backups/docker/latest.tar.gz \
    s3://my-backup-bucket/docker-backups/

# Backup with timestamp
TIMESTAMP=$(date +%Y%m%d)
aws s3 cp /backups/docker/latest.tar.gz \
    s3://my-backup-bucket/docker-backups/${TIMESTAMP}/

# List backups
aws s3 ls s3://my-backup-bucket/docker-backups/

# Download for restore
aws s3 cp s3://my-backup-bucket/docker-backups/latest.tar.gz \
    /restore/

Automated cloud backup:

cat > /usr/local/bin/backup-to-s3.sh <<'EOF'
#!/bin/bash

BACKUP_DIR="/backups/docker"
S3_BUCKET="s3://my-backup-bucket"

# Run backup (it writes a timestamped archive into $BACKUP_DIR)
/usr/local/bin/backup-docker.sh

# Upload the newest archive; the script names it with its own timestamp,
# so pick the most recent file rather than recomputing a timestamp here
LATEST=$(ls -t "$BACKUP_DIR"/*.tar.gz | head -1)
aws s3 cp "$LATEST" "$S3_BUCKET/docker/$(basename "$LATEST")"

# Keep only recent backups locally (7 days)
find "$BACKUP_DIR" -maxdepth 1 -name "*.tar.gz" -mtime +7 -delete

# Keep all backups in S3

EOF

chmod +x /usr/local/bin/backup-to-s3.sh

# Schedule
# 0 2 * * * /usr/local/bin/backup-to-s3.sh

Conclusion

Comprehensive Docker backup and restore procedures protect your containerized applications and data from loss. By implementing multiple backup strategies (images, volumes, container filesystems), automating schedules, and verifying recovery procedures, you create a resilient infrastructure. Start with simple daily full backups, progress to selective backups for critical containers, and eventually implement cloud storage integration for offsite protection. Regular testing of restore procedures ensures you can actually recover when needed. As your containerized infrastructure grows, backup management becomes increasingly critical to business continuity. Make backup verification and disaster recovery drills part of your regular operational procedures.