Backup Automation with Scripts and Cron: Complete Guide

Introduction

Manual backups are unreliable—they depend on human memory, discipline, and availability. In production environments, backup automation is not optional; it's essential. A single forgotten backup during a critical period can result in catastrophic data loss when disaster strikes. Automation ensures consistent, timely backups regardless of holidays, weekends, staff availability, or workload pressures.

This comprehensive guide explores backup automation using shell scripts and cron scheduling on Linux systems. We'll cover script development best practices, cron fundamentals, advanced scheduling techniques, error handling, notification systems, monitoring, and real-world automation scenarios that implement the 3-2-1 backup rule effectively.

Whether you're automating backups for a single server or orchestrating complex multi-tier backup strategies across distributed infrastructure, mastering shell scripting and cron provides the foundation for reliable, hands-off data protection.

Understanding Backup Automation Components

Why Automate Backups

Consistency: Automated backups run on schedule without human intervention, ensuring no gaps in backup coverage.

Reliability: Removes human error from the backup process—scripts execute the same way every time.

Scalability: Once configured, automation scales to handle multiple servers, databases, and applications.

Compliance: Regular automated backups help meet regulatory requirements for data protection and retention.

Peace of mind: Set up once, monitor regularly, sleep soundly knowing backups are running.

Key Automation Components

Backup scripts: Shell scripts that encapsulate backup logic, error handling, and notifications.

Schedulers: Cron (or systemd timers; see the sketch after this list) that trigger scripts at defined intervals.

Logging: Comprehensive logs for troubleshooting, auditing, and compliance.

Notifications: Email, Slack, or other alerts for success/failure conditions.

Monitoring: Health checks that verify backups completed successfully and are current.

Testing: Automated or semi-automated restoration testing to ensure backup viability.
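
Cron is the scheduler used throughout this guide, but a systemd timer is a drop-in alternative on systemd-based distributions. A minimal sketch (unit file names and the script path are illustrative):

# /etc/systemd/system/backup.service
[Unit]
Description=Production backup

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup-production.sh

# /etc/systemd/system/backup.timer
[Unit]
Description=Run production backup daily at 2 AM

[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true

[Install]
WantedBy=timers.target

Enable with: sudo systemctl enable --now backup.timer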

Bash Scripting Fundamentals for Backups

Basic Script Structure

Every robust backup script should follow this structure:

#!/bin/bash
#
# Script: backup-production.sh
# Description: Production server backup automation
# Author: System Administrator
# Date: 2026-01-11
#

# Exit on error, undefined variables, pipe failures
set -euo pipefail

# Configuration section
BACKUP_SOURCE="/var/www"
BACKUP_DESTINATION="/backup/www"
LOG_FILE="/var/log/backup.log"
ADMIN_EMAIL="[email protected]"

# Functions
log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*" | tee -a "$LOG_FILE"
}

error_exit() {
    log "ERROR: $1"
    send_alert "BACKUP FAILED" "$1"
    exit 1
}

send_alert() {
    local subject="$1"
    local message="$2"
    echo "$message" | mail -s "$subject - $(hostname)" "$ADMIN_EMAIL"
}

# Main backup logic
main() {
    log "Starting backup process"

    # Pre-backup checks
    # Backup execution
    # Post-backup verification
    # Cleanup

    log "Backup completed successfully"
}

# Execute main function
main "$@"

Best Practices for Backup Scripts

1. Use set -euo pipefail:

set -e  # Exit on error
set -u  # Treat unset variables as error
set -o pipefail  # A pipeline fails if any command in it fails

2. Implement comprehensive logging:

log() {
    local level="${1:-INFO}"
    shift
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$level] $*" | tee -a "$LOG_FILE"
}

log INFO "Backup started"
log ERROR "Backup failed"

3. Error handling and exit codes:

#!/bin/bash
set -e

cleanup() {
    # Cleanup on exit
    rm -f /tmp/backup-lock
}
trap cleanup EXIT

# Main logic
if ! backup_command; then
    log ERROR "Backup command failed"
    exit 1
fi

exit 0  # Success

4. Lock files to prevent concurrent execution:

LOCK_FILE="/var/run/backup.lock"

if [ -f "$LOCK_FILE" ]; then
    log ERROR "Backup already running (lock file exists)"
    exit 1
fi

# Create lock file
echo $$ > "$LOCK_FILE"

# Remove lock on exit
trap "rm -f $LOCK_FILE" EXIT

5. Pre-flight checks:

# Check source exists
if [ ! -d "$BACKUP_SOURCE" ]; then
    error_exit "Source directory does not exist: $BACKUP_SOURCE"
fi

# Check available space
AVAILABLE=$(df -Pk "$BACKUP_DESTINATION" | awk 'NR==2 {print $4}')
REQUIRED=10485760  # 10GB in KB
if [ "$AVAILABLE" -lt "$REQUIRED" ]; then
    error_exit "Insufficient disk space (available: ${AVAILABLE}KB, required: ${REQUIRED}KB)"
fi

# Test backup destination is writable
if [ ! -w "$BACKUP_DESTINATION" ]; then
    error_exit "Backup destination is not writable: $BACKUP_DESTINATION"
fi

Complete Production Backup Script Example

#!/bin/bash
#
# Production Backup Script with Comprehensive Error Handling
# Implements 3-2-1 backup strategy: local, remote, and cloud
#

set -euo pipefail

# Configuration
HOSTNAME=$(hostname -s)
BACKUP_NAME="backup-${HOSTNAME}-$(date +%Y%m%d-%H%M%S)"
BACKUP_ROOT="/backup"
LOCAL_BACKUP="$BACKUP_ROOT/local/$BACKUP_NAME"
REMOTE_SERVER="backup-server.example.com"
REMOTE_USER="backup"
REMOTE_PATH="/backups/${HOSTNAME}"
S3_BUCKET="s3://company-backups/${HOSTNAME}"
LOG_DIR="/var/log/backups"
LOG_FILE="$LOG_DIR/backup-$(date +%Y%m%d).log"
ADMIN_EMAIL="[email protected]"
LOCK_FILE="/var/run/backup.lock"

# Sources to backup
BACKUP_SOURCES=(
    "/etc"
    "/home"
    "/var/www"
    "/opt/application"
)

# Database backup directory
DB_BACKUP_DIR="/var/backups/databases"

# Retention settings
KEEP_LOCAL_DAYS=7
KEEP_REMOTE_DAYS=30
KEEP_S3_DAYS=365

# Logging function
log() {
    local level="${1:-INFO}"
    shift
    local message="$*"
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$level] $message" | tee -a "$LOG_FILE"
}

# Error handler (the EXIT trap performs cleanup)
error_exit() {
    log ERROR "$1"
    send_notification "BACKUP FAILED" "$1"
    exit 1
}

# Send notification
send_notification() {
    local subject="$1"
    local message="$2"

    # Email notification
    {
        echo "Hostname: $(hostname)"
        echo "Date: $(date)"
        echo "Message: $message"
        echo ""
        echo "Recent log entries:"
        tail -20 "$LOG_FILE"
    } | mail -s "$subject - $(hostname)" "$ADMIN_EMAIL"

    # Optional: Slack notification
    # curl -X POST -H 'Content-type: application/json' \
    #     --data "{\"text\":\"$subject: $message\"}" \
    #     "$SLACK_WEBHOOK_URL"
}

# Cleanup function
cleanup() {
    log INFO "Performing cleanup"
    rm -f "$LOCK_FILE"
    # Remove temporary files if any
}

# The cleanup trap is registered in check_lock, after the lock is
# acquired, so a failed acquisition never deletes a live lock file

# Check for existing backup process and acquire the lock
check_lock() {
    if [ -f "$LOCK_FILE" ]; then
        local pid
        pid=$(cat "$LOCK_FILE")
        if ps -p "$pid" > /dev/null 2>&1; then
            log ERROR "Backup already running (PID: $pid)"
            send_notification "BACKUP FAILED" "Backup already running (PID: $pid)"
            exit 1
        else
            log WARN "Removing stale lock file"
            rm -f "$LOCK_FILE"
        fi
    fi
    echo $$ > "$LOCK_FILE"
    trap cleanup EXIT
}

# Pre-flight checks
preflight_checks() {
    log INFO "Running preflight checks"

    # Check required commands
    for cmd in rsync mysqldump pg_dumpall tar gzip aws; do
        if ! command -v "$cmd" &> /dev/null; then
            log WARN "Command not found: $cmd (some features may not work)"
        fi
    done

    # Check available disk space (df -Pk: one line per filesystem, KB units)
    local available
    available=$(df -Pk "$BACKUP_ROOT" | awk 'NR==2 {print $4}')
    local required=10485760  # 10GB in KB
    if [ "$available" -lt "$required" ]; then
        error_exit "Insufficient disk space (available: ${available}KB, required: ${required}KB)"
    fi

    # Check source directories exist
    for source in "${BACKUP_SOURCES[@]}"; do
        if [ ! -d "$source" ]; then
            log WARN "Source directory does not exist: $source"
        fi
    done

    log INFO "Preflight checks passed"
}

# Database backup
backup_databases() {
    log INFO "Backing up databases"

    mkdir -p "$DB_BACKUP_DIR"

    # MySQL/MariaDB
    if command -v mysqldump &> /dev/null; then
        log INFO "Dumping MySQL databases"
        # Run the pipeline inside the if condition so pipefail reports
        # a dump failure without errexit aborting the whole script
        if mysqldump --all-databases --single-transaction --quick \
            --routines --triggers --events \
            | gzip > "$DB_BACKUP_DIR/mysql-all.sql.gz"; then
            log INFO "MySQL backup completed"
        else
            log ERROR "MySQL backup failed"
        fi
    fi

    # PostgreSQL
    if command -v pg_dumpall &> /dev/null; then
        log INFO "Dumping PostgreSQL databases"
        if sudo -u postgres pg_dumpall \
            | gzip > "$DB_BACKUP_DIR/postgresql-all.sql.gz"; then
            log INFO "PostgreSQL backup completed"
        else
            log ERROR "PostgreSQL backup failed"
        fi
    fi
}

# Create local backup
create_local_backup() {
    log INFO "Creating local backup: $LOCAL_BACKUP"

    mkdir -p "$LOCAL_BACKUP"

    # Backup filesystem data
    for source in "${BACKUP_SOURCES[@]}"; do
        if [ -d "$source" ]; then
            log INFO "Backing up: $source"
            local dest="$LOCAL_BACKUP$source"
            mkdir -p "$dest"  # rsync does not create missing parent directories
            rsync -av --delete \
                --exclude='*.tmp' \
                --exclude='.cache' \
                --exclude='lost+found' \
                "$source/" "$dest/" \
                >> "$LOG_FILE" 2>&1
        fi
    done

    # Include database backups
    if [ -d "$DB_BACKUP_DIR" ]; then
        log INFO "Including database backups"
        rsync -av "$DB_BACKUP_DIR/" "$LOCAL_BACKUP/databases/" \
            >> "$LOG_FILE" 2>&1
    fi

    # Create backup manifest
    log INFO "Creating backup manifest"
    {
        echo "Backup: $BACKUP_NAME"
        echo "Date: $(date)"
        echo "Hostname: $(hostname)"
        echo "Sources:"
        for source in "${BACKUP_SOURCES[@]}"; do
            echo "  - $source"
        done
        echo ""
        echo "File counts:"
        find "$LOCAL_BACKUP" -type f | wc -l
        echo ""
        echo "Total size:"
        du -sh "$LOCAL_BACKUP"
    } > "$LOCAL_BACKUP/MANIFEST.txt"

    log INFO "Local backup created successfully"
}

# Sync to remote server
sync_to_remote() {
    log INFO "Syncing to remote server: $REMOTE_SERVER"

    if rsync -avz --delete \
        -e "ssh -i /root/.ssh/backup_key" \
        "$LOCAL_BACKUP/" \
        "${REMOTE_USER}@${REMOTE_SERVER}:${REMOTE_PATH}/current/" \
        >> "$LOG_FILE" 2>&1; then
        log INFO "Remote sync completed successfully"
    else
        log ERROR "Remote sync failed"
        return 1
    fi
}

# Upload to S3
upload_to_s3() {
    log INFO "Uploading to S3: $S3_BUCKET"

    # Create compressed archive
    local archive_name="${BACKUP_NAME}.tar.gz"
    local archive_path="/tmp/${archive_name}"

    tar -czf "$archive_path" -C "$BACKUP_ROOT/local" "$(basename "$LOCAL_BACKUP")" \
        >> "$LOG_FILE" 2>&1

    # Upload to S3
    if aws s3 cp "$archive_path" "${S3_BUCKET}/${archive_name}" \
        --storage-class STANDARD_IA \
        >> "$LOG_FILE" 2>&1; then
        log INFO "S3 upload completed successfully"
        rm -f "$archive_path"
    else
        log ERROR "S3 upload failed"
        rm -f "$archive_path"
        return 1
    fi
}

# Cleanup old backups
cleanup_old_backups() {
    log INFO "Cleaning up old backups"

    # Local cleanup (-mindepth 1 keeps the backup root itself out of the match)
    find "$BACKUP_ROOT/local" -mindepth 1 -maxdepth 1 -type d -mtime "+$KEEP_LOCAL_DAYS" \
        -exec rm -rf {} + 2>> "$LOG_FILE"

    # Remote cleanup (via SSH)
    ssh -i /root/.ssh/backup_key "${REMOTE_USER}@${REMOTE_SERVER}" \
        "find ${REMOTE_PATH} -mindepth 1 -maxdepth 1 -type d -mtime +${KEEP_REMOTE_DAYS} -exec rm -rf {} +" \
        2>> "$LOG_FILE"

    # S3 cleanup (using lifecycle policies or manual deletion)
    # Note: S3 lifecycle policies are preferred for this

    log INFO "Cleanup completed"
}

# Verify backup integrity
verify_backup() {
    log INFO "Verifying backup integrity"

    # Check backup size is reasonable
    local backup_size=$(du -s "$LOCAL_BACKUP" | awk '{print $1}')
    local min_size=102400  # 100MB in KB

    if [ "$backup_size" -lt "$min_size" ]; then
        error_exit "Backup size suspiciously small: ${backup_size}KB"
    fi

    # Verify critical files exist
    local critical_files=(
        "etc/passwd"
        "etc/hostname"
    )

    for file in "${critical_files[@]}"; do
        if [ ! -f "$LOCAL_BACKUP/$file" ]; then
            log WARN "Critical file missing from backup: $file"
        fi
    done

    # Create verification checksums (exclude the checksum file itself)
    find "$LOCAL_BACKUP" -type f ! -name checksums.md5 -exec md5sum {} + > "$LOCAL_BACKUP/checksums.md5"

    log INFO "Backup verification completed"
}

# Main execution
main() {
    log INFO "========================================="
    log INFO "Starting backup: $BACKUP_NAME"
    log INFO "========================================="

    # Initialize
    mkdir -p "$LOG_DIR"
    mkdir -p "$BACKUP_ROOT/local"
    check_lock

    # Execute backup stages
    preflight_checks
    backup_databases
    create_local_backup
    verify_backup

    # Offsite replication (3-2-1 rule)
    sync_to_remote || log ERROR "Remote sync failed, continuing..."
    upload_to_s3 || log ERROR "S3 upload failed, continuing..."

    # Cleanup
    cleanup_old_backups

    # Success notification
    log INFO "========================================="
    log INFO "Backup completed successfully"
    log INFO "========================================="

    send_notification "BACKUP SUCCESS" "Backup completed successfully: $BACKUP_NAME"

    exit 0
}

# Execute main
main "$@"

Make executable:

sudo chmod +x /usr/local/bin/backup-production.sh
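
The script leaves S3 retention to lifecycle policies, which is the preferred approach; a sketch using the AWS CLI (bucket name matches the example configuration, rule values are illustrative):

aws s3api put-bucket-lifecycle-configuration \
    --bucket company-backups \
    --lifecycle-configuration '{
        "Rules": [{
            "ID": "expire-old-backups",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "Expiration": {"Days": 365}
        }]
    }'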

Cron Scheduling Fundamentals

Understanding Cron Syntax

Cron uses a five-field time specification:

* * * * * command
│ │ │ │ │
│ │ │ │ └─── Day of week (0-7, Sunday = 0 or 7)
│ │ │ └───── Month (1-12)
│ │ └─────── Day of month (1-31)
│ └───────── Hour (0-23)
└─────────── Minute (0-59)

Special characters:

  • *: Any value
  • ,: Value list (1,3,5)
  • -: Range (1-5)
  • /: Step values (*/5 = every 5 units)

Common Cron Schedule Examples

# Every minute
* * * * * /path/to/script.sh

# Every 5 minutes
*/5 * * * * /path/to/script.sh

# Every hour at minute 30
30 * * * * /path/to/script.sh

# Every day at 2:00 AM
0 2 * * * /path/to/script.sh

# Every Sunday at 3:00 AM
0 3 * * 0 /path/to/script.sh

# First day of every month at 1:30 AM
30 1 1 * * /path/to/script.sh

# Weekdays at 11:00 PM
0 23 * * 1-5 /path/to/script.sh

# Every 6 hours
0 */6 * * * /path/to/script.sh

# Multiple times per day
0 2,14 * * * /path/to/script.sh  # 2 AM and 2 PM
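
Most cron implementations also accept shorthand strings in place of the five fields:

# Shorthand equivalents (support varies slightly by implementation)
@hourly  /path/to/script.sh   # 0 * * * *
@daily   /path/to/script.sh   # 0 0 * * *
@weekly  /path/to/script.sh   # 0 0 * * 0
@monthly /path/to/script.sh   # 0 0 1 * *
@reboot  /path/to/script.sh   # once, at boot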

Editing Crontab

# Edit user crontab
crontab -e

# Edit root crontab
sudo crontab -e

# List current crontab
crontab -l

# Remove crontab
crontab -r

System-Wide Cron Configuration

System cron directories:

/etc/cron.hourly/    # Scripts run hourly
/etc/cron.daily/     # Scripts run daily
/etc/cron.weekly/    # Scripts run weekly
/etc/cron.monthly/   # Scripts run monthly

Place executable scripts:

# Create daily backup script (drop the ".sh" suffix: run-parts
# skips file names containing dots on Debian-based systems)
sudo cp backup-script.sh /etc/cron.daily/backup-script
sudo chmod +x /etc/cron.daily/backup-script

# Runs automatically at the time configured in /etc/crontab (or anacron)
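
On Debian-style systems the run time comes from the run-parts entries in /etc/crontab; on anacron-based systems, check /etc/anacrontab:

grep run-parts /etc/crontab
cat /etc/anacrontab 2>/dev/null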

Custom system cron (/etc/cron.d/backup):

# /etc/cron.d/backup
SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

# Daily full backup at 2 AM
0 2 * * * root /usr/local/bin/backup-full.sh >> /var/log/backup-full.log 2>&1

# Hourly incremental backup
0 * * * * root /usr/local/bin/backup-incremental.sh >> /var/log/backup-incr.log 2>&1

# Weekly verification on Sunday at 4 AM
0 4 * * 0 root /usr/local/bin/backup-verify.sh >> /var/log/backup-verify.log 2>&1

Advanced Backup Automation Patterns

Multi-Tier Backup Automation

Implement hourly, daily, weekly, and monthly backups:

#!/bin/bash
# /usr/local/bin/backup-tiered.sh

BACKUP_TYPE="$1"  # hourly, daily, weekly, or monthly
BACKUP_ROOT="/backup"
TIMESTAMP=$(date +%Y%m%d-%H%M%S)

case "$BACKUP_TYPE" in
    hourly)
        # Rotate snapshots first (keep 24): drop hourly.23, then shift
        # every snapshot up one slot, including hourly.0 -> hourly.1
        rm -rf "$BACKUP_ROOT/hourly.23"
        for i in {22..0}; do
            if [ -d "$BACKUP_ROOT/hourly.$i" ]; then
                mv "$BACKUP_ROOT/hourly.$i" "$BACKUP_ROOT/hourly.$((i+1))"
            fi
        done

        # Lightweight incremental: unchanged files are hard-linked
        # against the previous snapshot
        rsync -av --link-dest="$BACKUP_ROOT/hourly.1" \
            /var/www/ "$BACKUP_ROOT/hourly.0/"
        ;;

    daily)
        # Rotate daily snapshots first (keep 7)
        rm -rf "$BACKUP_ROOT/daily.6"
        for i in {5..0}; do
            if [ -d "$BACKUP_ROOT/daily.$i" ]; then
                mv "$BACKUP_ROOT/daily.$i" "$BACKUP_ROOT/daily.$((i+1))"
            fi
        done

        # Full backup with database dumps; -R preserves the full
        # source paths under daily.0 instead of merging contents
        /usr/local/bin/dump-databases.sh
        rsync -avR /var/www /etc /home "$BACKUP_ROOT/daily.0/"
        ;;

    weekly)
        # Copy from daily.0 to weekly
        cp -al "$BACKUP_ROOT/daily.0" "$BACKUP_ROOT/weekly.$(date +%Y%W)"

        # Cleanup old weekly backups (keep 4 weeks)
        find "$BACKUP_ROOT" -maxdepth 1 -name "weekly.*" -mtime +28 -exec rm -rf {} \;
        ;;

    monthly)
        # Copy from daily.0 to monthly
        cp -al "$BACKUP_ROOT/daily.0" "$BACKUP_ROOT/monthly.$(date +%Y%m)"

        # Cleanup old monthly (keep 12 months)
        find "$BACKUP_ROOT" -maxdepth 1 -name "monthly.*" -mtime +365 -exec rm -rf {} \;
        ;;

    *)
        echo "Usage: $0 {hourly|daily|weekly|monthly}"
        exit 1
        ;;
esac

Cron schedule for tiered backups:

# /etc/cron.d/backup-tiered

# Hourly backups at minute 0
0 * * * * root /usr/local/bin/backup-tiered.sh hourly

# Daily backup at 2 AM
0 2 * * * root /usr/local/bin/backup-tiered.sh daily

# Weekly backup on Sunday at 3 AM
0 3 * * 0 root /usr/local/bin/backup-tiered.sh weekly

# Monthly backup on 1st at 4 AM
0 4 1 * * root /usr/local/bin/backup-tiered.sh monthly

Parallel Backup Execution

Backup multiple sources concurrently:

#!/bin/bash
# /usr/local/bin/backup-parallel.sh

# Backup functions
backup_web() {
    rsync -av /var/www/ /backup/www/ > /var/log/backup-www.log 2>&1
}

backup_databases() {
    # Log mysqldump errors and report its exit status, not gzip's
    mysqldump --all-databases 2> /var/log/backup-db.log | gzip > /backup/databases.sql.gz
    return "${PIPESTATUS[0]}"
}

backup_home() {
    rsync -av /home/ /backup/home/ > /var/log/backup-home.log 2>&1
}

backup_config() {
    tar -czf /backup/config.tar.gz /etc > /var/log/backup-config.log 2>&1
}

# Run backups in parallel
backup_web &
PID_WEB=$!

backup_databases &
PID_DB=$!

backup_home &
PID_HOME=$!

backup_config &
PID_CONFIG=$!

# Wait for all to complete
wait $PID_WEB
EXIT_WEB=$?

wait $PID_DB
EXIT_DB=$?

wait $PID_HOME
EXIT_HOME=$?

wait $PID_CONFIG
EXIT_CONFIG=$?

# Check results
FAILED=0
[ $EXIT_WEB -ne 0 ] && echo "Web backup failed" && FAILED=1
[ $EXIT_DB -ne 0 ] && echo "Database backup failed" && FAILED=1
[ $EXIT_HOME -ne 0 ] && echo "Home backup failed" && FAILED=1
[ $EXIT_CONFIG -ne 0 ] && echo "Config backup failed" && FAILED=1

if [ $FAILED -eq 0 ]; then
    echo "All backups completed successfully"
    exit 0
else
    echo "Some backups failed"
    exit 1
fi

Conditional Backup Scheduling

Run backups based on conditions:

#!/bin/bash
# /usr/local/bin/backup-conditional.sh

# Only backup during off-peak hours (10 PM to 6 AM)
CURRENT_HOUR=$(date +%H)
if [ "$CURRENT_HOUR" -ge 6 ] && [ "$CURRENT_HOUR" -lt 22 ]; then
    echo "Backup skipped: Peak hours"
    exit 0
fi

# Check server load
LOAD=$(uptime | awk -F'load average:' '{print $2}' | awk '{print $1}' | cut -d'.' -f1)
if [ "$LOAD" -gt 5 ]; then
    echo "Backup skipped: High server load ($LOAD)"
    exit 0
fi

# Check available space
AVAILABLE=$(df /backup | awk 'NR==2 {print $4}')
if [ "$AVAILABLE" -lt 10485760 ]; then  # Less than 10GB
    echo "Backup skipped: Insufficient space"
    exit 1
fi

# Conditions met, proceed with backup
echo "Conditions satisfied, starting backup..."
/usr/local/bin/backup-main.sh
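
Because the script gates itself, it can be scheduled aggressively and will simply exit when conditions are not met; for example:

# Attempt hourly; the script skips peak hours and high load itself
0 * * * * root /usr/local/bin/backup-conditional.sh >> /var/log/backup-conditional.log 2>&1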

Notification and Monitoring

Email Notifications

Simple email notifications:

# Success notification
echo "Backup completed successfully on $(date)" | \
    mail -s "Backup Success - $(hostname)" [email protected]

# Failure notification with log excerpt
{
    echo "Backup failed on $(hostname)"
    echo "Time: $(date)"
    echo ""
    echo "Recent log entries:"
    tail -50 /var/log/backup.log
} | mail -s "BACKUP FAILED - $(hostname)" [email protected]

HTML email notifications:

#!/bin/bash
# Send HTML email with backup report

RECIPIENT="[email protected]"
SUBJECT="Backup Report - $(hostname) - $(date +%Y-%m-%d)"

# Generate HTML report
cat > /tmp/backup-report.html << EOF
<html>
<head><style>
    body { font-family: Arial, sans-serif; }
    .success { color: green; font-weight: bold; }
    .error { color: red; font-weight: bold; }
    table { border-collapse: collapse; width: 100%; }
    th, td { border: 1px solid #ddd; padding: 8px; text-align: left; }
    th { background-color: #4CAF50; color: white; }
</style></head>
<body>
    <h2>Backup Report</h2>
    <p><strong>Server:</strong> $(hostname)</p>
    <p><strong>Date:</strong> $(date)</p>
    <p><strong>Status:</strong> <span class="success">SUCCESS</span></p>

    <h3>Backup Details</h3>
    <table>
        <tr><th>Component</th><th>Size</th><th>Status</th></tr>
        <tr><td>Web Files</td><td>5.2 GB</td><td class="success">OK</td></tr>
        <tr><td>Databases</td><td>1.8 GB</td><td class="success">OK</td></tr>
        <tr><td>Config Files</td><td>45 MB</td><td class="success">OK</td></tr>
    </table>
</body>
</html>
EOF

# Send email
(
    echo "From: backup@$(hostname)"
    echo "To: $RECIPIENT"
    echo "Subject: $SUBJECT"
    echo "Content-Type: text/html"
    echo ""
    cat /tmp/backup-report.html
) | sendmail -t

rm /tmp/backup-report.html

Slack Notifications

#!/bin/bash
# Send Slack notification

send_slack_notification() {
    local message="$1"
    local color="${2:-good}"  # good, warning, danger
    local webhook_url="https://hooks.slack.com/services/YOUR/WEBHOOK/URL"

    curl -X POST -H 'Content-type: application/json' \
        --data "{
            \"attachments\": [{
                \"color\": \"$color\",
                \"title\": \"Backup Notification\",
                \"text\": \"$message\",
                \"fields\": [
                    {\"title\": \"Server\", \"value\": \"$(hostname)\", \"short\": true},
                    {\"title\": \"Time\", \"value\": \"$(date)\", \"short\": true}
                ]
            }]
        }" \
        "$webhook_url"
}

# Usage
send_slack_notification "Backup completed successfully" "good"
send_slack_notification "Backup failed!" "danger"

Monitoring Script

#!/bin/bash
# /usr/local/bin/monitor-backups.sh
# Monitor backup completion and freshness

BACKUP_DIR="/backup"
MAX_AGE_HOURS=26
ALERT_EMAIL="[email protected]"

# Function to check backup age
check_backup_age() {
    local backup_marker="$1"
    local backup_name="$2"

    if [ ! -f "$backup_marker" ]; then
        echo "ERROR: $backup_name marker not found"
        return 1
    fi

    local backup_time=$(stat -c %Y "$backup_marker")
    local current_time=$(date +%s)
    local age_hours=$(( (current_time - backup_time) / 3600 ))

    if [ $age_hours -gt $MAX_AGE_HOURS ]; then
        echo "WARNING: $backup_name is $age_hours hours old (>$MAX_AGE_HOURS)"
        return 1
    else
        echo "OK: $backup_name is $age_hours hours old"
        return 0
    fi
}

# Check various backups
FAILURES=0

check_backup_age "$BACKUP_DIR/.last-full-backup" "Full Backup"
[ $? -ne 0 ] && ((FAILURES++))

check_backup_age "$BACKUP_DIR/.last-db-backup" "Database Backup"
[ $? -ne 0 ] && ((FAILURES++))

check_backup_age "$BACKUP_DIR/.last-remote-sync" "Remote Sync"
[ $? -ne 0 ] && ((FAILURES++))

# Report results
if [ $FAILURES -gt 0 ]; then
    echo "Backup monitoring detected $FAILURES issue(s)" | \
        mail -s "BACKUP MONITORING ALERT - $(hostname)" "$ALERT_EMAIL"
    exit 1
else
    echo "All backups current and healthy"
    exit 0
fi

Schedule monitoring (check every hour):

0 * * * * /usr/local/bin/monitor-backups.sh >> /var/log/backup-monitoring.log 2>&1
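
This monitor assumes each backup job touches a marker file on success; a sketch of the lines the corresponding jobs would need (paths match the checks above):

# At the end of a successful full backup
touch /backup/.last-full-backup

# After a successful database dump
touch /backup/.last-db-backup

# After a successful remote sync
touch /backup/.last-remote-sync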

Real-World Automation Scenarios

Scenario 1: Small Business Website

Requirements:

  • Daily full backup
  • Weekly offsite sync
  • 30-day retention
  • Email notifications

Implementation:

Daily backup script (/etc/cron.daily/backup-website):

#!/bin/bash
set -euo pipefail

BACKUP_DIR="/backup/website"
BACKUP_DATE=$(date +%Y%m%d)
ADMIN_EMAIL="[email protected]"

# Create backup
mkdir -p "$BACKUP_DIR/$BACKUP_DATE"

# Backup web files
rsync -av /var/www/ "$BACKUP_DIR/$BACKUP_DATE/www/"

# Backup database
mysqldump --all-databases --single-transaction | \
    gzip > "$BACKUP_DIR/$BACKUP_DATE/database.sql.gz"

# Cleanup old backups (keep 30 days)
find "$BACKUP_DIR" -maxdepth 1 -type d -mtime +30 -exec rm -rf {} \;

# Notification
echo "Website backup completed: $BACKUP_DATE" | \
    mail -s "Backup Success - $(hostname)" "$ADMIN_EMAIL"

Weekly offsite sync (/etc/cron.weekly/sync-offsite):

#!/bin/bash
rsync -avz --delete \
    -e "ssh -i /root/.ssh/backup_key" \
    /backup/website/ \
    backup@remote-server:/backups/website/

Scenario 2: Enterprise Multi-Server Environment

Requirements:

  • 20 application servers
  • Hourly incremental, daily full
  • Centralized backup server
  • Comprehensive monitoring

Central backup orchestration (on backup server):

#!/bin/bash
# /usr/local/bin/backup-all-servers.sh

SERVERS=(
    "web1.example.com"
    "web2.example.com"
    "app1.example.com"
    "app2.example.com"
    "db1.example.com"
)

LOG_DIR="/var/log/backups"
BACKUP_ROOT="/backups"
MODE="${1:-full}"  # incremental or full, passed through from cron

for server in "${SERVERS[@]}"; do
    echo "Backing up $server..."

    # Trigger backup on the remote server, forwarding the backup mode
    ssh "root@$server" "/usr/local/bin/local-backup.sh $MODE"

    # Pull backup to central server
    rsync -avz --delete \
        root@$server:/backup/latest/ \
        "$BACKUP_ROOT/$server/" \
        > "$LOG_DIR/$server-$(date +%Y%m%d).log" 2>&1 &
done

# Wait for all backups to complete
wait

echo "All server backups completed"

Schedule:

# /etc/cron.d/backup-enterprise
0 * * * * root /usr/local/bin/backup-all-servers.sh incremental
0 2 * * * root /usr/local/bin/backup-all-servers.sh full

Troubleshooting Cron Issues

Cron Jobs Not Running

Check cron service:

# systemd
systemctl status cron   # Debian/Ubuntu
systemctl status crond  # CentOS/RHEL

# Start if not running
systemctl start cron
systemctl enable cron

Check cron logs:

# Debian/Ubuntu
grep CRON /var/log/syslog

# CentOS/RHEL
grep CRON /var/log/cron

# View recent cron executions via journald
journalctl -u cron  # use -u crond on CentOS/RHEL

Check upcoming run times:

# cronnext (from the cronie package) prints when a job will next run
cronnext

# List active (non-comment) crontab entries
crontab -l | grep -v '^#' | grep -v '^$'

Environment Issues

Cron runs with minimal environment:

# Bad: Assumes PATH
0 2 * * * backup-script.sh

# Good: Full path
0 2 * * * /usr/local/bin/backup-script.sh

# Better: Set environment in crontab
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
0 2 * * * backup-script.sh

# Or in script itself
#!/bin/bash
export PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

Permission Issues

# Ensure script is executable
chmod +x /usr/local/bin/backup-script.sh

# Check ownership
ls -la /usr/local/bin/backup-script.sh

# Files in /etc/cron.d must be root-owned and not group/world-writable
chown root:root /etc/cron.d/backup
chmod 644 /etc/cron.d/backup

Conclusion

Backup automation through shell scripting and cron scheduling removes human error from the data protection process, ensuring consistent, reliable backups that form the foundation of disaster recovery preparedness.

Key takeaways:

  1. Write robust scripts: Implement error handling, logging, pre-flight checks, and notifications.

  2. Schedule appropriately: Use cron for consistent, timely backup execution aligned with RTO/RPO requirements.

  3. Monitor actively: Implement monitoring to detect failures promptly.

  4. Test regularly: Automation includes restoration testing—schedule periodic drills.

  5. Follow 3-2-1 rule: Automate local, remote, and offsite backups.

  6. Document thoroughly: Maintain documentation of scripts, schedules, and procedures.

  7. Iterate and improve: Review logs, refine scripts, and optimize based on experience.

Proper backup automation provides peace of mind, ensuring your data protection strategy operates reliably without ongoing manual intervention. Combined with comprehensive monitoring and regular testing, automated backups form the cornerstone of effective disaster recovery planning.