Automating Backups with Scripts and Cron: A Complete Guide
Introduction
Manual backups are unreliable: they depend on human memory, discipline, and availability. In production environments, backup automation is not optional; it is essential. A single backup missed during a critical period can mean catastrophic data loss when disaster strikes. Automation guarantees consistent, on-time backups regardless of holidays, weekends, staff availability, or workload pressures.
This comprehensive guide explores backup automation using shell scripts and cron scheduling on Linux systems. We will cover script development best practices, cron fundamentals, advanced scheduling techniques, error handling, notification systems, monitoring, and real-world automation scenarios that implement the 3-2-1 backup rule effectively.
Whether you are automating backups for a single server or orchestrating complex multi-tier backup strategies across distributed infrastructure, mastering shell scripts and cron provides the foundation for reliable, hands-off data protection.
Understanding Backup Automation Components
Why Automate Backups
Consistency: Automated backups run on schedule without human intervention, ensuring no gaps in backup coverage.
Reliability: Automation removes human error from the backup process: scripts execute the same way every time.
Scalability: Once configured, automation scales to handle multiple servers, databases, and applications.
Compliance: Regular automated backups help meet regulatory requirements for data protection and retention.
Peace of mind: Configure once, monitor regularly, and sleep well knowing backups are running.
Key Automation Components
Backup scripts: Shell scripts that encapsulate backup logic, error handling, and notifications.
Schedulers: Cron (or systemd timers) that trigger scripts at defined intervals.
Logging: Comprehensive logs for troubleshooting, auditing, and compliance.
Notifications: Email, Slack, or other alerts for success/failure conditions.
Monitoring: Health checks confirming that backups completed successfully and are current.
Testing: Automated or semi-automated restore tests to guarantee backup viability.
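That last component, restore testing, is the one most often skipped. As a minimal illustration (the helper name and paths are ours, not a standard tool), a script can extract an archive into a scratch directory and confirm that a known file survives the round trip:

```shell
# restore_test: extract a .tar.gz backup into a throwaway directory and
# verify that an expected file is present and non-empty after extraction.
# Illustrative helper; adapt the archive format to your own backups.
restore_test() {
    local archive="$1" expected="$2" scratch
    scratch=$(mktemp -d) || return 1
    if ! tar -xzf "$archive" -C "$scratch"; then
        rm -rf "$scratch"
        return 1
    fi
    if [ -s "$scratch/$expected" ]; then
        echo "RESTORE TEST OK: $expected"
        rm -rf "$scratch"
        return 0
    fi
    echo "RESTORE TEST FAILED: $expected missing or empty" >&2
    rm -rf "$scratch"
    return 1
}
```

Scheduled weekly via cron, a check like this catches corrupt or empty archives long before a real restore is needed.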
Bash Scripting Fundamentals for Backups
Basic Script Structure
Every robust backup script should follow this structure:
#!/bin/bash
#
# Script: backup-production.sh
# Description: Production server backup automation
# Author: System Administrator
# Date: 2026-01-11
#
# Exit on error, undefined variables, pipe failures
set -euo pipefail
# Configuration section
BACKUP_SOURCE="/var/www"
BACKUP_DESTINATION="/backup/www"
LOG_FILE="/var/log/backup.log"
ADMIN_EMAIL="[email protected]"
# Functions
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*" | tee -a "$LOG_FILE"
}
error_exit() {
log "ERROR: $1"
send_alert "BACKUP FAILED" "$1"
exit 1
}
send_alert() {
local subject="$1"
local message="$2"
echo "$message" | mail -s "$subject - $(hostname)" "$ADMIN_EMAIL"
}
# Main backup logic
main() {
log "Starting backup process"
# Pre-backup checks
# Backup execution
# Post-backup verification
# Cleanup
log "Backup completed successfully"
}
# Execute main function
main "$@"
Best Practices for Backup Scripts
1. Use set -euo pipefail:
set -e          # Exit immediately on error
set -u          # Treat unset variables as errors
set -o pipefail # A pipeline fails if any command in it fails
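A quick demonstration of why pipefail matters: without it, a pipeline reports only the status of its last command, so an earlier failure goes unnoticed:

```shell
set +e +o pipefail
false | true
echo "without pipefail: $?"   # prints 0 - the failure of 'false' is masked

set -o pipefail
false | true
echo "with pipefail: $?"      # prints 1 - the failure propagates
set +o pipefail
```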
2. Implement comprehensive logging:
log() {
local level="${1:-INFO}"
shift
echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$level] $*" | tee -a "$LOG_FILE"
}
log INFO "Backup started"
log ERROR "Backup failed"
3. Error handling and exit codes:
#!/bin/bash
set -e
cleanup() {
# Cleanup on exit
rm -f /tmp/backup-lock
}
trap cleanup EXIT
# Main logic
if ! backup_command; then
log ERROR "Backup command failed"
exit 1
fi
exit 0 # Success
4. Lock files to prevent concurrent execution:
LOCK_FILE="/var/run/backup.lock"
if [ -f "$LOCK_FILE" ]; then
log ERROR "Backup already running (lock file exists)"
exit 1
fi
# Create lock file
echo $$ > "$LOCK_FILE"
# Remove lock on exit
trap "rm -f $LOCK_FILE" EXIT
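The check-then-create pattern above leaves a small race window between the existence test and the write. Where util-linux is available, flock takes the lock atomically; a sketch (the lock path is illustrative, and /var/run would require root):

```shell
#!/bin/bash
LOCK_FILE="${LOCK_FILE:-/tmp/backup.lock}"   # use /var/run/backup.lock in production

# Open the lock file on file descriptor 200, then try to take an
# exclusive lock without blocking; test-and-acquire is atomic
exec 200>"$LOCK_FILE"
if ! flock -n 200; then
    echo "Backup already running" >&2
    exit 1
fi
echo "lock acquired"
# ... backup work here; the kernel releases the lock when the process exits
```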
5. Pre-flight checks:
# Check source exists
if [ ! -d "$BACKUP_SOURCE" ]; then
error_exit "Source directory does not exist: $BACKUP_SOURCE"
fi
# Check available space
AVAILABLE=$(df "$BACKUP_DESTINATION" | awk 'NR==2 {print $4}')
REQUIRED=10485760 # 10GB in KB
if [ "$AVAILABLE" -lt "$REQUIRED" ]; then
error_exit "Insufficient disk space (available: ${AVAILABLE}KB, required: ${REQUIRED}KB)"
fi
# Test backup destination is writable
if [ ! -w "$BACKUP_DESTINATION" ]; then
error_exit "Backup destination is not writable: $BACKUP_DESTINATION"
fi
Complete Production Backup Script Example
#!/bin/bash
#
# Production Backup Script with Comprehensive Error Handling
# Implements 3-2-1 backup strategy: local, remote, and cloud
#
set -euo pipefail
# Configuration
HOSTNAME=$(hostname -s)
BACKUP_NAME="backup-${HOSTNAME}-$(date +%Y%m%d-%H%M%S)"
BACKUP_ROOT="/backup"
LOCAL_BACKUP="$BACKUP_ROOT/local/$BACKUP_NAME"
REMOTE_SERVER="backup-server.example.com"
REMOTE_USER="backup"
REMOTE_PATH="/backups/${HOSTNAME}"
S3_BUCKET="s3://company-backups/${HOSTNAME}"
LOG_DIR="/var/log/backups"
LOG_FILE="$LOG_DIR/backup-$(date +%Y%m%d).log"
ADMIN_EMAIL="[email protected]"
LOCK_FILE="/var/run/backup.lock"
# Sources to backup
BACKUP_SOURCES=(
"/etc"
"/home"
"/var/www"
"/opt/application"
)
# Database backup directory
DB_BACKUP_DIR="/var/backups/databases"
# Retention settings
KEEP_LOCAL_DAYS=7
KEEP_REMOTE_DAYS=30
KEEP_S3_DAYS=365
# Logging function
log() {
local level="${1:-INFO}"
shift
local message="$*"
echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$level] $message" | tee -a "$LOG_FILE"
}
# Error handler
error_exit() {
log ERROR "$1"
send_notification "BACKUP FAILED" "$1"
cleanup
exit 1
}
# Send notification
send_notification() {
local subject="$1"
local message="$2"
# Email notification
{
echo "Hostname: $(hostname)"
echo "Date: $(date)"
echo "Message: $message"
echo ""
echo "Recent log entries:"
tail -20 "$LOG_FILE"
} | mail -s "$subject - $(hostname)" "$ADMIN_EMAIL"
# Optional: Slack notification
# curl -X POST -H 'Content-type: application/json' \
# --data "{\"text\":\"$subject: $message\"}" \
# "$SLACK_WEBHOOK_URL"
}
# Cleanup function
cleanup() {
    log INFO "Performing cleanup"
    # Only remove the lock file if this process created it, so a failed
    # start never deletes the lock of the backup that is actually running
    if [ "$(cat "$LOCK_FILE" 2>/dev/null)" = "$$" ]; then
        rm -f "$LOCK_FILE"
    fi
    # Remove temporary files if any
}
# Setup trap for cleanup
trap cleanup EXIT
# Check for existing backup process
check_lock() {
if [ -f "$LOCK_FILE" ]; then
local pid=$(cat "$LOCK_FILE")
if ps -p "$pid" > /dev/null 2>&1; then
error_exit "Backup already running (PID: $pid)"
else
log WARN "Removing stale lock file"
rm -f "$LOCK_FILE"
fi
fi
echo $$ > "$LOCK_FILE"
}
# Pre-flight checks
preflight_checks() {
log INFO "Running preflight checks"
# Check required commands
for cmd in rsync mysqldump pg_dump tar gzip aws; do
if ! command -v "$cmd" &> /dev/null; then
log WARN "Command not found: $cmd (some features may not work)"
fi
done
# Check available disk space
local available=$(df "$BACKUP_ROOT" | awk 'NR==2 {print $4}')
local required=10485760 # 10GB in KB
if [ "$available" -lt "$required" ]; then
error_exit "Insufficient disk space (available: ${available}KB, required: ${required}KB)"
fi
# Check source directories exist
for source in "${BACKUP_SOURCES[@]}"; do
if [ ! -d "$source" ]; then
log WARN "Source directory does not exist: $source"
fi
done
log INFO "Preflight checks passed"
}
# Database backup
backup_databases() {
log INFO "Backing up databases"
mkdir -p "$DB_BACKUP_DIR"
    # MySQL/MariaDB
    if command -v mysqldump &> /dev/null; then
        log INFO "Dumping MySQL databases"
        # Run each dump pipeline inside "if": set -e then cannot abort the
        # script mid-function, and pipefail surfaces a mysqldump failure
        if mysqldump --all-databases --single-transaction --quick \
            --routines --triggers --events 2>> "$LOG_FILE" \
            | gzip > "$DB_BACKUP_DIR/mysql-all.sql.gz"; then
            log INFO "MySQL backup completed"
        else
            log ERROR "MySQL backup failed"
        fi
    fi
    # PostgreSQL
    if command -v pg_dumpall &> /dev/null; then
        log INFO "Dumping PostgreSQL databases"
        if sudo -u postgres pg_dumpall 2>> "$LOG_FILE" \
            | gzip > "$DB_BACKUP_DIR/postgresql-all.sql.gz"; then
            log INFO "PostgreSQL backup completed"
        else
            log ERROR "PostgreSQL backup failed"
        fi
    fi
}
# Create local backup
create_local_backup() {
log INFO "Creating local backup: $LOCAL_BACKUP"
mkdir -p "$LOCAL_BACKUP"
    # Backup filesystem data
    for source in "${BACKUP_SOURCES[@]}"; do
        if [ -d "$source" ]; then
            log INFO "Backing up: $source"
            # rsync does not create intermediate directories, so build the
            # parent path first (e.g. .../var before .../var/www)
            mkdir -p "$LOCAL_BACKUP$(dirname "$source")"
            rsync -av --delete \
                --exclude='*.tmp' \
                --exclude='.cache' \
                --exclude='lost+found' \
                "$source/" "$LOCAL_BACKUP$(dirname "$source")/$(basename "$source")/" \
                >> "$LOG_FILE" 2>&1 \
                || log WARN "rsync exited non-zero for $source (see log)"
        fi
    done
# Include database backups
if [ -d "$DB_BACKUP_DIR" ]; then
log INFO "Including database backups"
rsync -av "$DB_BACKUP_DIR/" "$LOCAL_BACKUP/databases/" \
>> "$LOG_FILE" 2>&1
fi
# Create backup manifest
log INFO "Creating backup manifest"
{
echo "Backup: $BACKUP_NAME"
echo "Date: $(date)"
echo "Hostname: $(hostname)"
echo "Sources:"
for source in "${BACKUP_SOURCES[@]}"; do
echo " - $source"
done
echo ""
echo "File counts:"
find "$LOCAL_BACKUP" -type f | wc -l
echo ""
echo "Total size:"
du -sh "$LOCAL_BACKUP"
} > "$LOCAL_BACKUP/MANIFEST.txt"
log INFO "Local backup created successfully"
}
# Sync to remote server
sync_to_remote() {
    log INFO "Syncing to remote server: $REMOTE_SERVER"
    if rsync -avz --delete \
        -e "ssh -i /root/.ssh/backup_key" \
        "$LOCAL_BACKUP/" \
        "${REMOTE_USER}@${REMOTE_SERVER}:${REMOTE_PATH}/current/" \
        >> "$LOG_FILE" 2>&1; then
        log INFO "Remote sync completed successfully"
    else
        log ERROR "Remote sync failed"
        return 1
    fi
}
# Upload to S3
upload_to_s3() {
log INFO "Uploading to S3: $S3_BUCKET"
# Create compressed archive
local archive_name="${BACKUP_NAME}.tar.gz"
local archive_path="/tmp/${archive_name}"
tar -czf "$archive_path" -C "$BACKUP_ROOT/local" "$(basename $LOCAL_BACKUP)" \
>> "$LOG_FILE" 2>&1
# Upload to S3
aws s3 cp "$archive_path" "${S3_BUCKET}/${archive_name}" \
--storage-class STANDARD_IA \
>> "$LOG_FILE" 2>&1
if [ $? -eq 0 ]; then
log INFO "S3 upload completed successfully"
rm -f "$archive_path"
else
log ERROR "S3 upload failed"
rm -f "$archive_path"
return 1
fi
}
# Cleanup old backups
cleanup_old_backups() {
log INFO "Cleaning up old backups"
    # Local cleanup (-mindepth 1 so the backup root itself can never match)
    find "$BACKUP_ROOT/local" -mindepth 1 -maxdepth 1 -type d -mtime "+$KEEP_LOCAL_DAYS" \
        -exec rm -rf {} \; 2>> "$LOG_FILE"
    # Remote cleanup (via SSH)
    ssh -i /root/.ssh/backup_key "${REMOTE_USER}@${REMOTE_SERVER}" \
        "find ${REMOTE_PATH} -mindepth 1 -maxdepth 1 -type d -mtime +${KEEP_REMOTE_DAYS} -exec rm -rf {} \;" \
        2>> "$LOG_FILE"
# S3 cleanup (using lifecycle policies or manual deletion)
# Note: S3 lifecycle policies are preferred for this
log INFO "Cleanup completed"
}
# Verify backup integrity
verify_backup() {
log INFO "Verifying backup integrity"
# Check backup size is reasonable
local backup_size=$(du -s "$LOCAL_BACKUP" | awk '{print $1}')
local min_size=102400 # 100MB in KB
if [ "$backup_size" -lt "$min_size" ]; then
error_exit "Backup size suspiciously small: ${backup_size}KB"
fi
# Verify critical files exist
local critical_files=(
"etc/passwd"
"etc/hostname"
)
for file in "${critical_files[@]}"; do
if [ ! -f "$LOCAL_BACKUP/$file" ]; then
log WARN "Critical file missing from backup: $file"
fi
done
    # Create verification checksums (exclude the checksum file itself)
    find "$LOCAL_BACKUP" -type f ! -name checksums.md5 -exec md5sum {} + \
        > "$LOCAL_BACKUP/checksums.md5"
log INFO "Backup verification completed"
}
# Main execution
main() {
log INFO "========================================="
log INFO "Starting backup: $BACKUP_NAME"
log INFO "========================================="
# Initialize
mkdir -p "$LOG_DIR"
mkdir -p "$BACKUP_ROOT/local"
check_lock
# Execute backup stages
preflight_checks
backup_databases
create_local_backup
verify_backup
# Offsite replication (3-2-1 rule)
sync_to_remote || log ERROR "Remote sync failed, continuing..."
upload_to_s3 || log ERROR "S3 upload failed, continuing..."
# Cleanup
cleanup_old_backups
# Success notification
log INFO "========================================="
log INFO "Backup completed successfully"
log INFO "========================================="
send_notification "BACKUP SUCCESS" "Backup completed successfully: $BACKUP_NAME"
exit 0
}
# Execute main
main "$@"
Make it executable:
sudo chmod +x /usr/local/bin/backup-production.sh
Cron Scheduling Fundamentals
Understanding Cron Syntax
Cron uses a five-field time specification:
* * * * * command
│ │ │ │ │
│ │ │ │ └─── Day of week (0-7, Sunday = 0 or 7)
│ │ │ └───── Month (1-12)
│ │ └─────── Day of month (1-31)
│ └───────── Hour (0-23)
└─────────── Minute (0-59)
Special characters:
*: Any value
,: List of values (1,3,5)
-: Range (1-5)
/: Step values (*/5 = every 5 units)
Common Cron Scheduling Examples
# Every minute
* * * * * /path/to/script.sh
# Every 5 minutes
*/5 * * * * /path/to/script.sh
# Every hour at minute 30
30 * * * * /path/to/script.sh
# Every day at 2:00 AM
0 2 * * * /path/to/script.sh
# Every Sunday at 3:00 AM
0 3 * * 0 /path/to/script.sh
# First day of every month at 1:30 AM
30 1 1 * * /path/to/script.sh
# Weekdays at 11:00 PM
0 23 * * 1-5 /path/to/script.sh
# Every 6 hours
0 */6 * * * /path/to/script.sh
# Multiple times per day
0 2,14 * * * /path/to/script.sh # 2 AM and 2 PM
Editing the Crontab
# Edit user crontab
crontab -e
# Edit root crontab
sudo crontab -e
# List current crontab
crontab -l
# Remove crontab
crontab -r
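Because crontab -r wipes the whole table with no confirmation, it pays to export a copy before editing. A small helper (the function name is ours; it degrades gracefully when no crontab exists):

```shell
# Save the current user's crontab to a file before making changes
backup_crontab() {
    local out="$1"
    if crontab -l > "$out" 2>/dev/null; then
        echo "crontab saved to $out"
    else
        echo "no crontab to save"
    fi
}

# Usage:
#   backup_crontab ~/crontab-$(date +%Y%m%d).bak
# Restore later with: crontab <saved-file>
```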
System-Level Cron Configuration
System cron directories:
/etc/cron.hourly/ # Scripts run hourly
/etc/cron.daily/ # Scripts run daily
/etc/cron.weekly/ # Scripts run weekly
/etc/cron.monthly/ # Scripts run monthly
Place executable scripts:
# Create daily backup script. Note: run-parts (which executes these
# directories) skips filenames containing a dot on Debian-based systems,
# so drop the .sh extension
sudo cp backup-script.sh /etc/cron.daily/backup-script
sudo chmod +x /etc/cron.daily/backup-script
# Verify which scripts will be run
run-parts --test /etc/cron.daily
# Runs automatically at the time configured in /etc/crontab (or via anacron)
Custom system cron file (/etc/cron.d/backup):
# /etc/cron.d/backup
SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
# Daily full backup at 2 AM
0 2 * * * root /usr/local/bin/backup-full.sh >> /var/log/backup-full.log 2>&1
# Hourly incremental backup
0 * * * * root /usr/local/bin/backup-incremental.sh >> /var/log/backup-incr.log 2>&1
# Weekly verification on Sunday at 4 AM
0 4 * * 0 root /usr/local/bin/backup-verify.sh >> /var/log/backup-verify.log 2>&1
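As noted earlier, systemd timers are an alternative to cron on systemd-based distributions. A sketch of the daily 2 AM job as a service/timer pair (file names are illustrative; the script path matches the cron entry above):

```ini
# /etc/systemd/system/backup-full.service
[Unit]
Description=Daily full backup

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup-full.sh

# /etc/systemd/system/backup-full.timer
[Unit]
Description=Run backup-full daily at 2 AM

[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true

[Install]
WantedBy=timers.target
```

Enable with `systemctl enable --now backup-full.timer`. Persistent=true runs a missed job at the next boot, something plain cron only achieves with anacron.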
Advanced Backup Automation Patterns
Multi-Tier Backup Automation
Implement hourly, daily, weekly, and monthly backups:
#!/bin/bash
# /usr/local/bin/backup-tiered.sh
BACKUP_TYPE="$1" # hourly, daily, weekly, or monthly
BACKUP_ROOT="/backup"
TIMESTAMP=$(date +%Y%m%d-%H%M%S)
case "$BACKUP_TYPE" in
    hourly)
        # Rotate first: drop the oldest, shift the rest up by one,
        # then write the new snapshot into hourly.0 (keep 24)
        rm -rf "$BACKUP_ROOT/hourly.23"
        for i in {22..0}; do
            if [ -d "$BACKUP_ROOT/hourly.$i" ]; then
                mv "$BACKUP_ROOT/hourly.$i" "$BACKUP_ROOT/hourly.$((i+1))"
            fi
        done
        # Lightweight incremental: unchanged files are hard-linked
        # against the previous snapshot
        rsync -av --link-dest="$BACKUP_ROOT/hourly.1" \
            /var/www/ "$BACKUP_ROOT/hourly.0/"
        ;;
    daily)
        # Full backup with database dumps
        /usr/local/bin/dump-databases.sh
        # Rotate (keep 7 days), then write the new snapshot
        rm -rf "$BACKUP_ROOT/daily.6"
        for i in {5..0}; do
            if [ -d "$BACKUP_ROOT/daily.$i" ]; then
                mv "$BACKUP_ROOT/daily.$i" "$BACKUP_ROOT/daily.$((i+1))"
            fi
        done
        # -R preserves full source paths so /var/www, /etc, and /home
        # land in separate subdirectories instead of being merged
        rsync -avR /var/www /etc /home "$BACKUP_ROOT/daily.0/"
        ;;
    weekly)
        # Hard-link copy from daily.0 to a weekly snapshot
        cp -al "$BACKUP_ROOT/daily.0" "$BACKUP_ROOT/weekly.$(date +%Y%W)"
        # Cleanup old weekly backups (keep 4 weeks)
        find "$BACKUP_ROOT" -maxdepth 1 -name "weekly.*" -mtime +28 -exec rm -rf {} \;
        ;;
    monthly)
        # Hard-link copy from daily.0 to a monthly snapshot
        cp -al "$BACKUP_ROOT/daily.0" "$BACKUP_ROOT/monthly.$(date +%Y%m)"
        # Cleanup old monthly backups (keep 12 months)
        find "$BACKUP_ROOT" -maxdepth 1 -name "monthly.*" -mtime +365 -exec rm -rf {} \;
        ;;
    *)
        echo "Usage: $0 {hourly|daily|weekly|monthly}"
        exit 1
        ;;
esac
Cron scheduling for the tiered backups:
# /etc/cron.d/backup-tiered
# Hourly backups at minute 0
0 * * * * root /usr/local/bin/backup-tiered.sh hourly
# Daily backup at 2 AM
0 2 * * * root /usr/local/bin/backup-tiered.sh daily
# Weekly backup on Sunday at 3 AM
0 3 * * 0 root /usr/local/bin/backup-tiered.sh weekly
# Monthly backup on 1st at 4 AM
0 4 1 * * root /usr/local/bin/backup-tiered.sh monthly
Parallel Backup Execution
Back up multiple sources concurrently:
#!/bin/bash
# /usr/local/bin/backup-parallel.sh
# pipefail so the mysqldump|gzip pipeline fails when mysqldump fails,
# not just when gzip does
set -o pipefail

# Backup functions
backup_web() {
    rsync -av /var/www/ /backup/www/ > /var/log/backup-www.log 2>&1
}
backup_databases() {
    mysqldump --all-databases 2> /var/log/backup-db.log | gzip > /backup/databases.sql.gz
}
backup_home() {
rsync -av /home/ /backup/home/ > /var/log/backup-home.log 2>&1
}
backup_config() {
tar -czf /backup/config.tar.gz /etc > /var/log/backup-config.log 2>&1
}
# Run backups in parallel
backup_web &
PID_WEB=$!
backup_databases &
PID_DB=$!
backup_home &
PID_HOME=$!
backup_config &
PID_CONFIG=$!
# Wait for all to complete
wait $PID_WEB
EXIT_WEB=$?
wait $PID_DB
EXIT_DB=$?
wait $PID_HOME
EXIT_HOME=$?
wait $PID_CONFIG
EXIT_CONFIG=$?
# Check results
FAILED=0
[ $EXIT_WEB -ne 0 ] && echo "Web backup failed" && FAILED=1
[ $EXIT_DB -ne 0 ] && echo "Database backup failed" && FAILED=1
[ $EXIT_HOME -ne 0 ] && echo "Home backup failed" && FAILED=1
[ $EXIT_CONFIG -ne 0 ] && echo "Config backup failed" && FAILED=1
if [ $FAILED -eq 0 ]; then
echo "All backups completed successfully"
exit 0
else
echo "Some backups failed"
exit 1
fi
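The explicit PID variables above get unwieldy as tasks are added; an associative array expresses the same fork-and-wait pattern for any number of tasks (a sketch with stand-in tasks, not a drop-in replacement):

```shell
#!/bin/bash
# Run backup tasks in parallel and collect a per-task exit status
declare -A pids

# Stand-in tasks; substitute the real backup functions
task_ok()   { sleep 0.1; }
task_fail() { sleep 0.1; return 1; }

for task in task_ok task_fail; do
    "$task" &
    pids[$task]=$!
done

failed=0
for task in "${!pids[@]}"; do
    if wait "${pids[$task]}"; then
        echo "$task: OK"
    else
        echo "$task: FAILED"
        failed=1
    fi
done

if [ "$failed" -eq 0 ]; then
    echo "all backups completed successfully"
else
    echo "some backups failed"
fi
```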
Conditional Backup Scheduling
Run backups only when conditions allow:
#!/bin/bash
# /usr/local/bin/backup-conditional.sh
# Only backup during off-peak hours (10 PM to 6 AM)
CURRENT_HOUR=$(date +%H)
if [ "$CURRENT_HOUR" -ge 6 ] && [ "$CURRENT_HOUR" -lt 22 ]; then
echo "Backup skipped: Peak hours"
exit 0
fi
# Check server load (first field of /proc/loadavg is the 1-minute average)
LOAD=$(awk '{print int($1)}' /proc/loadavg)
if [ "$LOAD" -gt 5 ]; then
echo "Backup skipped: High server load ($LOAD)"
exit 0
fi
# Check available space
AVAILABLE=$(df /backup | awk 'NR==2 {print $4}')
if [ "$AVAILABLE" -lt 10485760 ]; then # Less than 10GB
echo "Backup skipped: Insufficient space"
exit 1
fi
# Conditions met, proceed with backup
echo "Conditions satisfied, starting backup..."
/usr/local/bin/backup-main.sh
Notification and Monitoring
Email Notifications
Simple email notifications:
# Success notification
echo "Backup completed successfully on $(date)" | \
mail -s "Backup Success - $(hostname)" [email protected]
# Failure notification with log excerpt
{
echo "Backup failed on $(hostname)"
echo "Time: $(date)"
echo ""
echo "Recent log entries:"
tail -50 /var/log/backup.log
} | mail -s "BACKUP FAILED - $(hostname)" [email protected]
HTML email notifications:
#!/bin/bash
# Send HTML email with backup report
RECIPIENT="[email protected]"
SUBJECT="Backup Report - $(hostname) - $(date +%Y-%m-%d)"
# Generate HTML report
cat > /tmp/backup-report.html << EOF
<html>
<head><style>
body { font-family: Arial, sans-serif; }
.success { color: green; font-weight: bold; }
.error { color: red; font-weight: bold; }
table { border-collapse: collapse; width: 100%; }
th, td { border: 1px solid #ddd; padding: 8px; text-align: left; }
th { background-color: #4CAF50; color: white; }
</style></head>
<body>
<h2>Backup Report</h2>
<p><strong>Server:</strong> $(hostname)</p>
<p><strong>Date:</strong> $(date)</p>
<p><strong>Status:</strong> <span class="success">SUCCESS</span></p>
<h3>Backup Details</h3>
<table>
<tr><th>Component</th><th>Size</th><th>Status</th></tr>
<tr><td>Web Files</td><td>5.2 GB</td><td class="success">OK</td></tr>
<tr><td>Databases</td><td>1.8 GB</td><td class="success">OK</td></tr>
<tr><td>Config Files</td><td>45 MB</td><td class="success">OK</td></tr>
</table>
</body>
</html>
EOF
# Send email
(
echo "From: backup@$(hostname)"
echo "To: $RECIPIENT"
echo "Subject: $SUBJECT"
echo "Content-Type: text/html"
echo ""
cat /tmp/backup-report.html
) | sendmail -t
rm /tmp/backup-report.html
Slack Notifications
#!/bin/bash
# Send Slack notification
send_slack_notification() {
local message="$1"
local color="${2:-good}" # good, warning, danger
local webhook_url="https://hooks.slack.com/services/YOUR/WEBHOOK/URL"
curl -X POST -H 'Content-type: application/json' \
--data "{
\"attachments\": [{
\"color\": \"$color\",
\"title\": \"Backup Notification\",
\"text\": \"$message\",
\"fields\": [
{\"title\": \"Server\", \"value\": \"$(hostname)\", \"short\": true},
{\"title\": \"Time\", \"value\": \"$(date)\", \"short\": true}
]
}]
}" \
"$webhook_url"
}
# Usage
send_slack_notification "Backup completed successfully" "good"
send_slack_notification "Backup failed!" "danger"
Monitoring Script
#!/bin/bash
# /usr/local/bin/monitor-backups.sh
# Monitor backup completion and freshness
BACKUP_DIR="/backup"
MAX_AGE_HOURS=26
ALERT_EMAIL="[email protected]"
# Function to check backup age
check_backup_age() {
local backup_marker="$1"
local backup_name="$2"
if [ ! -f "$backup_marker" ]; then
echo "ERROR: $backup_name marker not found"
return 1
fi
local backup_time=$(stat -c %Y "$backup_marker")
local current_time=$(date +%s)
local age_hours=$(( (current_time - backup_time) / 3600 ))
if [ $age_hours -gt $MAX_AGE_HOURS ]; then
echo "WARNING: $backup_name is $age_hours hours old (>$MAX_AGE_HOURS)"
return 1
else
echo "OK: $backup_name is $age_hours hours old"
return 0
fi
}
# Check various backups
FAILURES=0
check_backup_age "$BACKUP_DIR/.last-full-backup" "Full Backup"
[ $? -ne 0 ] && ((FAILURES++))
check_backup_age "$BACKUP_DIR/.last-db-backup" "Database Backup"
[ $? -ne 0 ] && ((FAILURES++))
check_backup_age "$BACKUP_DIR/.last-remote-sync" "Remote Sync"
[ $? -ne 0 ] && ((FAILURES++))
# Report results
if [ $FAILURES -gt 0 ]; then
echo "Backup monitoring detected $FAILURES issue(s)" | \
mail -s "BACKUP MONITORING ALERT - $(hostname)" "$ALERT_EMAIL"
exit 1
else
echo "All backups current and healthy"
exit 0
fi
Schedule the monitoring (check hourly):
0 * * * * /usr/local/bin/monitor-backups.sh >> /var/log/backup-monitoring.log 2>&1
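Note that the monitor relies on marker files that the earlier backup scripts never actually create. The missing piece is a one-liner at the end of each successful job; a sketch (the helper name is ours, the marker paths come from monitor-backups.sh):

```shell
# Refresh a freshness marker; call only after a backup stage succeeds.
# The monitor compares the file's mtime, so touching it would suffice,
# but writing the timestamp as content helps manual inspection too.
mark_backup_done() {
    local marker="$1"
    date '+%Y-%m-%d %H:%M:%S' > "$marker"
}

# At the end of each job:
#   mark_backup_done /backup/.last-full-backup
#   mark_backup_done /backup/.last-db-backup
#   mark_backup_done /backup/.last-remote-sync
```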
Real-World Automation Scenarios
Scenario 1: Small Business Website
Requirements:
- Daily full backup
- Weekly remote sync
- 30-day retention
- Email notifications
Implementation:
Daily backup script (/etc/cron.daily/backup-website):
#!/bin/bash
set -e
BACKUP_DIR="/backup/website"
BACKUP_DATE=$(date +%Y%m%d)
ADMIN_EMAIL="[email protected]"
# Create backup
mkdir -p "$BACKUP_DIR/$BACKUP_DATE"
# Backup web files
rsync -av /var/www/ "$BACKUP_DIR/$BACKUP_DATE/www/"
# Backup database
mysqldump --all-databases --single-transaction | \
gzip > "$BACKUP_DIR/$BACKUP_DATE/database.sql.gz"
# Cleanup old backups (keep 30 days); -mindepth 1 protects $BACKUP_DIR itself
find "$BACKUP_DIR" -mindepth 1 -maxdepth 1 -type d -mtime +30 -exec rm -rf {} \;
# Notification
echo "Website backup completed: $BACKUP_DATE" | \
mail -s "Backup Success - $(hostname)" "$ADMIN_EMAIL"
Weekly remote sync (/etc/cron.weekly/sync-offsite):
#!/bin/bash
rsync -avz --delete \
-e "ssh -i /root/.ssh/backup_key" \
/backup/website/ \
backup@remote-server:/backups/website/
Scenario 2: Multi-Server Enterprise Environment
Requirements:
- 20 application servers
- Hourly incremental, daily full
- Centralized backup server
- Comprehensive monitoring
Central backup orchestration (on the backup server):
#!/bin/bash
# /usr/local/bin/backup-all-servers.sh
SERVERS=(
"web1.example.com"
"web2.example.com"
"app1.example.com"
"app2.example.com"
"db1.example.com"
)
LOG_DIR="/var/log/backups"
BACKUP_ROOT="/backups"
MODE="${1:-full}"   # "incremental" or "full", as passed from cron
for server in "${SERVERS[@]}"; do
    echo "Backing up $server ($MODE)..."
    # Trigger backup on remote server
    ssh "root@$server" "/usr/local/bin/local-backup.sh $MODE"
# Pull backup to central server
rsync -avz --delete \
root@$server:/backup/latest/ \
"$BACKUP_ROOT/$server/" \
> "$LOG_DIR/$server-$(date +%Y%m%d).log" 2>&1 &
done
# Wait for all backups to complete
wait
echo "All server backups completed"
Scheduling:
# /etc/cron.d/backup-enterprise
0 * * * * root /usr/local/bin/backup-all-servers.sh incremental
0 2 * * * root /usr/local/bin/backup-all-servers.sh full
Troubleshooting Cron
Cron Jobs Not Running
Check the cron service:
# SystemD
systemctl status cron # Debian/Ubuntu
systemctl status crond # CentOS/RHEL
# Start if not running
systemctl start cron
systemctl enable cron
Check the cron logs:
# Debian/Ubuntu
grep CRON /var/log/syslog
# CentOS/RHEL
grep CRON /var/log/cron
# View recent cron executions (the unit is "crond" on CentOS/RHEL)
journalctl -u cron
Test cron syntax:
# Use an online crontab checker, or cronnext (part of the cronie
# package) to print the next times an entry will fire
cronnext
# List active (non-comment, non-blank) crontab lines
crontab -l | grep -v '^#' | grep -v '^$'
Environment Issues
Cron runs jobs with a minimal environment:
# Bad: Assumes PATH
0 2 * * * backup-script.sh
# Good: Full path
0 2 * * * /usr/local/bin/backup-script.sh
# Better: Set environment in crontab
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
0 2 * * * backup-script.sh
# Or in script itself
#!/bin/bash
export PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
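When a script works interactively but fails under cron, dumping cron's actual environment and diffing it against your login shell usually reveals the culprit, almost always PATH or a missing variable (the crontab line is temporary; remove it after debugging):

```shell
# 1. Temporary crontab entry - records cron's environment once a minute:
#      * * * * * env > /tmp/cron-env.txt 2>&1
# 2. After it has fired, compare with the interactive environment:
diff <(sort /tmp/cron-env.txt 2>/dev/null) <(env | sort) | head -20 || true
```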
Permission Issues
# Ensure script is executable
chmod +x /usr/local/bin/backup-script.sh
# Check ownership
ls -la /usr/local/bin/backup-script.sh
# For system cron, ensure proper permissions
chmod 644 /etc/cron.d/backup
Conclusion
Backup automation with shell scripts and cron scheduling removes human error from the data protection process, ensuring consistent, reliable backups that form the foundation of disaster recovery readiness.
Key takeaways:
- Write robust scripts: Implement error handling, logging, pre-flight checks, and notifications.
- Schedule appropriately: Use cron for consistent, timely backup execution aligned with your RTO/RPO requirements.
- Monitor actively: Implement monitoring to catch failures quickly.
- Test regularly: Automation includes restore testing; schedule periodic drills.
- Follow the 3-2-1 rule: Automate local, remote, and offsite copies.
- Document thoroughly: Keep scripts, schedules, and procedures documented.
- Iterate and improve: Review logs, refine scripts, and optimize based on experience.
Proper backup automation provides peace of mind, ensuring your data protection strategy operates reliably without constant manual intervention. Combined with comprehensive monitoring and regular testing, automated backups form the cornerstone of effective disaster recovery planning.


