MySQL/PostgreSQL Database Backup: Complete Guide for Production Environments
Introduction
Databases are the heart of modern applications, storing critical business data, user information, transactions, and operational state. Unlike static files, databases require special backup considerations due to their dynamic nature, consistency requirements, and complex relationships between data. A corrupted or incomplete database backup can be worse than no backup at all—restoration might fail or result in data corruption that cascades through your application.
This comprehensive guide covers professional database backup strategies for MySQL/MariaDB and PostgreSQL, the two most popular open-source relational database systems. We'll explore logical and physical backups, point-in-time recovery, replication-based backups, automation strategies, restoration procedures, and real-world scenarios implementing the 3-2-1 backup rule for database protection.
Whether you're managing a small application database or enterprise-scale database infrastructure, understanding proper backup techniques ensures data integrity, business continuity, and regulatory compliance.
Understanding Database Backup Types
Logical vs Physical Backups
Logical backups (using mysqldump, pg_dump):
- Export data as SQL statements or custom formats
- Human-readable, portable across platforms
- Can backup specific databases, tables, or rows
- Slower for large databases
- Easier for selective restoration
- Generally larger file sizes
Physical backups (filesystem copies, snapshots):
- Copy actual database files
- Much faster for large databases
- Full server or tablespace backup
- Platform-specific (cannot move between architectures easily)
- Requires consistent snapshot
- Smaller backup sizes
Backup Consistency Requirements
Why consistency matters: Databases maintain relationships between tables. An inconsistent backup captures data mid-transaction, resulting in referential integrity violations, corrupted indexes, or application failures upon restoration.
Ensuring consistency:
MySQL/MariaDB:
- --single-transaction: Uses transaction isolation for InnoDB
- Table locking for MyISAM tables
- Stop writes during backup (for critical consistency)
PostgreSQL:
- pg_dump uses snapshot isolation automatically
- Running transactions don't affect backup consistency
- MVCC (Multi-Version Concurrency Control) provides consistency
MySQL/MariaDB Backup Strategies
mysqldump - Logical Backups
The standard tool for MySQL logical backups:
Basic mysqldump syntax:
mysqldump [options] database_name > backup.sql
Complete database server backup:
# All databases
mysqldump --all-databases > all-databases.sql
# All databases with routines, triggers, events
mysqldump --all-databases \
--routines \
--triggers \
--events \
> all-databases-complete.sql
Single database backup:
# Specific database
mysqldump database_name > database_name.sql
# Multiple databases
mysqldump --databases db1 db2 db3 > multiple-databases.sql
InnoDB consistent backup (recommended):
mysqldump --all-databases \
--single-transaction \
--quick \
--lock-tables=false \
--routines \
--triggers \
--events \
> backup.sql
Explanation of critical options:
- --single-transaction: Uses consistent read for InnoDB (no table locking)
- --quick: Retrieves rows one at a time (reduces memory usage)
- --lock-tables=false: Don't lock tables (safe with --single-transaction)
- --routines: Include stored procedures/functions
- --triggers: Include triggers
- --events: Include scheduled events
Compressed backup:
mysqldump --all-databases \
--single-transaction \
--routines \
--triggers \
--events \
| gzip > backup-$(date +%Y%m%d).sql.gz
Split large databases:
# Backup each database separately
for db in $(mysql -e "SHOW DATABASES;" | grep -Ev "Database|information_schema|performance_schema|mysql|sys"); do
echo "Backing up database: $db"
mysqldump --single-transaction \
--routines --triggers --events \
"$db" | gzip > "$db-$(date +%Y%m%d).sql.gz"
done
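However a dump is produced, verify the compressed file before trusting it. A minimal sketch using gzip's built-in integrity test (the function name and directory layout are illustrative, not part of MySQL):

```shell
# verify_dumps DIR — run gzip -t on every *.sql.gz in DIR.
# gzip -t decompresses in memory and fails on truncated or corrupt files,
# which catches the most common silent failure: a dump cut short mid-write.
verify_dumps() {
    dir="$1"
    rc=0
    for f in "$dir"/*.sql.gz; do
        [ -e "$f" ] || continue          # glob matched nothing
        if gzip -t "$f" 2>/dev/null; then
            echo "OK:      $f"
        else
            echo "CORRUPT: $f"
            rc=1
        fi
    done
    return $rc
}
```

Note that gzip -t only proves the archive is intact; it does not prove the SQL inside restores cleanly, so periodic test restores remain essential.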
Production MySQL Backup Script
#!/bin/bash
# /usr/local/bin/mysql-backup.sh
# Production MySQL backup with error handling and verification
set -euo pipefail
# Configuration
BACKUP_DIR="/backup/mysql"
BACKUP_DATE=$(date +%Y%m%d-%H%M%S)
BACKUP_PATH="$BACKUP_DIR/$BACKUP_DATE"
LOG_FILE="/var/log/mysql-backup.log"
RETENTION_DAYS=30
ADMIN_EMAIL="[email protected]"
# MySQL credentials (prefer .my.cnf for security)
MYSQL_USER="backup"
MYSQL_PASSWORD="secure-password"
# Or use: --defaults-extra-file=/root/.my.cnf
# Logging
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*" | tee -a "$LOG_FILE"
}
error_exit() {
log "ERROR: $1"
echo "MySQL backup failed: $1" | mail -s "MySQL Backup FAILED" "$ADMIN_EMAIL"
exit 1
}
# Create backup directory
mkdir -p "$BACKUP_PATH"
log "Starting MySQL backup to $BACKUP_PATH"
# Get list of databases (exclude system databases)
DATABASES=$(mysql -u "$MYSQL_USER" -p"$MYSQL_PASSWORD" -e "SHOW DATABASES;" \
| grep -Ev "Database|information_schema|performance_schema|mysql|sys")
# Backup each database
for db in $DATABASES; do
log "Backing up database: $db"
mysqldump -u "$MYSQL_USER" -p"$MYSQL_PASSWORD" \
--single-transaction \
--quick \
--lock-tables=false \
--routines \
--triggers \
--events \
"$db" | gzip > "$BACKUP_PATH/$db.sql.gz" \
|| error_exit "Failed to backup database: $db"
# (with set -o pipefail, a failed mysqldump fails the whole pipeline, so the
# || handler fires; checking PIPESTATUS afterwards never runs under set -e)
# Verify backup file created
if [ ! -f "$BACKUP_PATH/$db.sql.gz" ]; then
error_exit "Backup file not found: $db.sql.gz"
fi
# Check file size is reasonable (>1KB)
SIZE=$(stat -c%s "$BACKUP_PATH/$db.sql.gz" 2>/dev/null || stat -f%z "$BACKUP_PATH/$db.sql.gz")
if [ "$SIZE" -lt 1024 ]; then
log "WARNING: Backup file suspiciously small: $db.sql.gz ($SIZE bytes)"
fi
done
# Backup all databases together (for convenience)
log "Creating complete backup archive"
mysqldump -u "$MYSQL_USER" -p"$MYSQL_PASSWORD" \
--all-databases \
--single-transaction \
--routines \
--triggers \
--events \
| gzip > "$BACKUP_PATH/all-databases.sql.gz"
# Backup MySQL configuration
log "Backing up MySQL configuration"
cp -a /etc/mysql "$BACKUP_PATH/mysql-config"
# Create backup manifest
cat > "$BACKUP_PATH/MANIFEST.txt" << EOF
MySQL Backup Manifest
Date: $(date)
Server: $(hostname)
MySQL Version: $(mysql -V)
Databases backed up:
$(echo "$DATABASES" | tr '\n' ' ')
Backup files:
$(ls -lh "$BACKUP_PATH"/*.sql.gz)
Total backup size:
$(du -sh "$BACKUP_PATH")
EOF
# Cleanup old backups
log "Cleaning up backups older than $RETENTION_DAYS days"
find "$BACKUP_DIR" -mindepth 1 -maxdepth 1 -type d -mtime +$RETENTION_DAYS -exec rm -rf {} \;
# Success notification
log "MySQL backup completed successfully"
{
echo "MySQL backup completed successfully"
echo ""
cat "$BACKUP_PATH/MANIFEST.txt"
} | mail -s "MySQL Backup Success - $(hostname)" "$ADMIN_EMAIL"
exit 0
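The manifest above records file sizes; recording cryptographic checksums as well lets you prove a backup has not been altered or truncated between backup and restore. A hedged sketch using coreutils' sha256sum (the function names are illustrative):

```shell
# record_checksums DIR — write a SHA256SUMS file covering every dump in DIR.
# verify_checksums DIR — re-hash and compare; nonzero exit on any mismatch.
record_checksums() {
    ( cd "$1" && sha256sum *.sql.gz > SHA256SUMS )
}
verify_checksums() {
    ( cd "$1" && sha256sum -c --quiet SHA256SUMS )
}
```

Run record_checksums at the end of each backup job and verify_checksums before any restore, and again after copying backups offsite.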
Binary Log Backups for Point-in-Time Recovery
Enable point-in-time recovery using binary logs:
Enable binary logging (/etc/mysql/my.cnf):
[mysqld]
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
binlog_format = ROW
expire_logs_days = 7   # deprecated in MySQL 8.0; use binlog_expire_logs_seconds = 604800 instead
max_binlog_size = 100M
Backup binary logs:
#!/bin/bash
# /usr/local/bin/mysql-binlog-backup.sh
BINLOG_BACKUP_DIR="/backup/mysql-binlogs"
DATE=$(date +%Y%m%d)
mkdir -p "$BINLOG_BACKUP_DIR/$DATE"
# Flush logs to start new binlog
mysql -e "FLUSH BINARY LOGS;"
# Copy binary logs
cp /var/log/mysql/mysql-bin.* "$BINLOG_BACKUP_DIR/$DATE/"
# Keep 30 days of binary logs
find "$BINLOG_BACKUP_DIR" -mindepth 1 -maxdepth 1 -type d -mtime +30 -exec rm -rf {} \;
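Pure mtime-based cleanup like the find above has a failure mode: if backups stop running for longer than the retention window, it eventually deletes every copy you have. A sketch that always protects the newest N directories regardless of age (GNU find and head assumed; the function name is an illustration):

```shell
# prune_backups DIR DAYS KEEP — delete date-stamped backup subdirectories
# older than DAYS, but never touch the newest KEEP, even if they are stale.
# Relies on YYYYMMDD-style names sorting chronologically.
prune_backups() {
    dir="$1"; days="$2"; keep="$3"
    find "$dir" -mindepth 1 -maxdepth 1 -type d | sort | head -n -"$keep" \
    | while read -r d; do
        # only remove a candidate if it is actually past the retention window
        find "$d" -maxdepth 0 -mtime +"$days" -exec rm -rf {} \;
    done
}
```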
Point-in-time recovery procedure:
# 1. Restore full backup
gunzip < all-databases.sql.gz | mysql
# 2. Apply binary logs up to specific point in time
mysqlbinlog --stop-datetime="2026-01-11 14:30:00" \
/backup/mysql-binlogs/*/mysql-bin.* | mysql
# Or stop at specific position
mysqlbinlog --stop-position=12345 mysql-bin.000001 | mysql
Physical Backups with Percona XtraBackup
For large databases, physical backups are much faster:
Install Percona XtraBackup:
# Ubuntu/Debian
wget https://repo.percona.com/apt/percona-release_latest.generic_all.deb
sudo dpkg -i percona-release_latest.generic_all.deb
sudo apt update
sudo apt install percona-xtrabackup-80
# CentOS/RHEL
sudo yum install https://repo.percona.com/yum/percona-release-latest.noarch.rpm
sudo yum install percona-xtrabackup-80
Full backup:
# Create full backup
xtrabackup --backup \
--target-dir=/backup/mysql-physical/full-$(date +%Y%m%d)
# Prepare backup for restoration
xtrabackup --prepare \
--target-dir=/backup/mysql-physical/full-20260111
Incremental backup:
# Full backup (base)
xtrabackup --backup \
--target-dir=/backup/mysql-physical/base
# Incremental backup 1
xtrabackup --backup \
--target-dir=/backup/mysql-physical/inc1 \
--incremental-basedir=/backup/mysql-physical/base
# Prepare for restoration
xtrabackup --prepare --apply-log-only \
--target-dir=/backup/mysql-physical/base
xtrabackup --prepare --apply-log-only \
--target-dir=/backup/mysql-physical/base \
--incremental-dir=/backup/mysql-physical/inc1
PostgreSQL Backup Strategies
pg_dump - Logical Backups
PostgreSQL's standard logical backup tool:
All databases:
pg_dumpall > all-databases.sql
# Compressed
pg_dumpall | gzip > all-databases.sql.gz
# As postgres user
sudo -u postgres pg_dumpall | gzip > all-databases.sql.gz
Single database:
# Plain SQL format
pg_dump database_name > database_name.sql
# Custom format (compressed, supports parallel restore)
pg_dump -Fc database_name > database_name.dump
# Directory format (parallel dump and restore)
pg_dump -Fd database_name -f database_name_dump/
# Compressed SQL
pg_dump database_name | gzip > database_name.sql.gz
Critical pg_dump options:
- -Fc: Custom format (compressed, flexible restore)
- -Fd: Directory format (parallel operations)
- -Fp: Plain SQL format (human-readable)
- -j N: Parallel dump with N jobs (directory format only)
- --clean: Include DROP statements
- --if-exists: Use IF EXISTS with DROP
- --no-owner: Don't restore ownership
- --no-privileges: Don't restore privileges
Parallel backup for large databases:
# 4 parallel workers
pg_dump -Fd database_name -j 4 -f database_name_dump/
Production PostgreSQL Backup Script
#!/bin/bash
# /usr/local/bin/postgresql-backup.sh
# Production PostgreSQL backup script
set -euo pipefail
# Configuration
BACKUP_DIR="/backup/postgresql"
BACKUP_DATE=$(date +%Y%m%d-%H%M%S)
BACKUP_PATH="$BACKUP_DIR/$BACKUP_DATE"
LOG_FILE="/var/log/postgresql-backup.log"
RETENTION_DAYS=30
PG_USER="postgres"
ADMIN_EMAIL="[email protected]"
# Logging
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*" | tee -a "$LOG_FILE"
}
error_exit() {
log "ERROR: $1"
echo "PostgreSQL backup failed: $1" | mail -s "PostgreSQL Backup FAILED" "$ADMIN_EMAIL"
exit 1
}
# Create backup directory
mkdir -p "$BACKUP_PATH"
log "Starting PostgreSQL backup to $BACKUP_PATH"
# Backup all databases (globals + all databases)
log "Backing up all databases with pg_dumpall"
sudo -u "$PG_USER" pg_dumpall | gzip > "$BACKUP_PATH/all-databases.sql.gz" \
|| error_exit "pg_dumpall failed"
# (with set -o pipefail, a failed pg_dumpall fails the whole pipeline)
# Get list of databases
DATABASES=$(sudo -u "$PG_USER" psql -t -c "SELECT datname FROM pg_database WHERE datistemplate = false AND datname != 'postgres'")
# Backup each database in custom format
for db in $DATABASES; do
db=$(echo $db | xargs) # Trim whitespace
log "Backing up database: $db"
# (an if ! guard both checks the exit status and keeps set -e from aborting;
# a bare $? test after the command would never run under set -e)
if ! sudo -u "$PG_USER" pg_dump -Fc "$db" > "$BACKUP_PATH/$db.dump"; then
log "WARNING: Failed to backup database: $db"
fi
# Verify backup file
if [ ! -f "$BACKUP_PATH/$db.dump" ]; then
log "WARNING: Backup file not found: $db.dump"
fi
done
# Backup PostgreSQL configuration
log "Backing up PostgreSQL configuration"
sudo cp -a /etc/postgresql "$BACKUP_PATH/postgresql-config"
# (run as root: $BACKUP_PATH is root-owned, so the postgres user cannot write there)
cp -a /var/lib/postgresql/*/main/postgresql.conf "$BACKUP_PATH/" 2>/dev/null || true
cp -a /var/lib/postgresql/*/main/pg_hba.conf "$BACKUP_PATH/" 2>/dev/null || true
# Create manifest
cat > "$BACKUP_PATH/MANIFEST.txt" << EOF
PostgreSQL Backup Manifest
Date: $(date)
Server: $(hostname)
PostgreSQL Version: $(sudo -u postgres psql --version)
Databases backed up:
$(echo "$DATABASES")
Backup files:
$(ls -lh "$BACKUP_PATH"/)
Total backup size:
$(du -sh "$BACKUP_PATH")
EOF
# Cleanup old backups
log "Cleaning up backups older than $RETENTION_DAYS days"
find "$BACKUP_DIR" -mindepth 1 -maxdepth 1 -type d -mtime +$RETENTION_DAYS -exec rm -rf {} \;
log "PostgreSQL backup completed successfully"
# Success notification
{
echo "PostgreSQL backup completed successfully"
echo ""
cat "$BACKUP_PATH/MANIFEST.txt"
} | mail -s "PostgreSQL Backup Success - $(hostname)" "$ADMIN_EMAIL"
exit 0
WAL Archiving for Point-in-Time Recovery
Configure Write-Ahead Log (WAL) archiving:
Configure WAL archiving (/etc/postgresql/14/main/postgresql.conf):
wal_level = replica
archive_mode = on
archive_command = 'test ! -f /backup/postgresql-wal/%f && cp %p /backup/postgresql-wal/%f'
archive_timeout = 300 # Force WAL switch every 5 minutes
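The inline archive_command above works, but production setups often point archive_command at a small script so failures can be handled explicitly. A minimal sketch of such a helper (the function name, and wrapping it as a script called with %p %f, are assumptions, not PostgreSQL requirements):

```shell
# archive_wal SRC NAME DEST — copy one WAL segment into the archive.
# Refuses to overwrite an existing segment (the PostgreSQL docs advise that
# archive_command fail in that case) and copies via a temporary name, so a
# crash mid-copy cannot leave a partial segment for PITR to trip over later.
archive_wal() {
    src="$1"; name="$2"; dest="$3"
    [ -f "$dest/$name" ] && return 1      # refuse to overwrite
    cp "$src" "$dest/$name.tmp" && mv "$dest/$name.tmp" "$dest/$name"
}
```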
Create WAL archive directory:
sudo mkdir -p /backup/postgresql-wal
sudo chown postgres:postgres /backup/postgresql-wal
sudo chmod 700 /backup/postgresql-wal
Restart PostgreSQL:
sudo systemctl restart postgresql
Base backup for PITR:
#!/bin/bash
# Create base backup for point-in-time recovery
BACKUP_DIR="/backup/postgresql-pitr"
DATE=$(date +%Y%m%d-%H%M%S)
sudo -u postgres pg_basebackup -D "$BACKUP_DIR/base-$DATE" -Ft -z -P
echo "Base backup created: $BACKUP_DIR/base-$DATE"
Point-in-time recovery procedure:
# 1. Stop PostgreSQL
sudo systemctl stop postgresql
# 2. Backup current data directory
sudo mv /var/lib/postgresql/14/main /var/lib/postgresql/14/main.old
# 3. Recreate the data directory and extract the base backup
sudo -u postgres mkdir -p /var/lib/postgresql/14/main
sudo chmod 700 /var/lib/postgresql/14/main
sudo -u postgres tar -xzf /backup/postgresql-pitr/base-20260111/base.tar.gz \
-C /var/lib/postgresql/14/main
sudo -u postgres tar -xzf /backup/postgresql-pitr/base-20260111/pg_wal.tar.gz \
-C /var/lib/postgresql/14/main/pg_wal
# 4. Configure recovery (PostgreSQL 12+: recovery settings go in
# postgresql.conf; an empty recovery.signal file triggers recovery mode)
sudo tee -a /etc/postgresql/14/main/postgresql.conf << EOF
restore_command = 'cp /backup/postgresql-wal/%f %p'
recovery_target_time = '2026-01-11 14:30:00'
EOF
sudo -u postgres touch /var/lib/postgresql/14/main/recovery.signal
# 5. Start PostgreSQL (recovery begins automatically)
sudo systemctl start postgresql
# 6. Verify recovery
sudo -u postgres psql -c "SELECT pg_is_in_recovery();"
Physical Backups with pg_basebackup
Built-in physical backup tool:
Basic usage:
# Create physical backup
sudo -u postgres pg_basebackup -D /backup/pg-physical -Ft -z -P
# Options:
# -D: Target directory
# -Ft: Tar format
# -z: gzip compression
# -P: Progress reporting
Parallel compression:
# -X fetch is needed when streaming the tar to stdout (-D -): the default
# WAL streaming method cannot write a separate pg_wal tarball in this mode
sudo -u postgres pg_basebackup -D - -Ft -X fetch | \
pigz -p 4 > /backup/pg-physical-$(date +%Y%m%d).tar.gz
Automation and Scheduling
Systemd Timer for Database Backups
MySQL backup service (/etc/systemd/system/mysql-backup.service):
[Unit]
Description=MySQL Backup Service
After=mysql.service
[Service]
Type=oneshot
ExecStart=/usr/local/bin/mysql-backup.sh
User=root
StandardOutput=journal
StandardError=journal
[Install]
WantedBy=multi-user.target
MySQL backup timer (/etc/systemd/system/mysql-backup.timer):
[Unit]
Description=Daily MySQL Backup
Requires=mysql-backup.service
[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true
[Install]
WantedBy=timers.target
Enable:
sudo systemctl daemon-reload
sudo systemctl enable --now mysql-backup.timer
Cron-Based Scheduling
# /etc/cron.d/database-backups
# MySQL daily backup at 2 AM
0 2 * * * root /usr/local/bin/mysql-backup.sh >> /var/log/mysql-backup-cron.log 2>&1
# PostgreSQL daily backup at 2:30 AM
30 2 * * * root /usr/local/bin/postgresql-backup.sh >> /var/log/postgresql-backup-cron.log 2>&1
# MySQL binary log backup every 6 hours
0 */6 * * * root /usr/local/bin/mysql-binlog-backup.sh >> /var/log/mysql-binlog.log 2>&1
# PostgreSQL WAL cleanup weekly (retain WAL at least as long as the oldest
# base backup you still intend to recover from, or PITR will fail)
0 3 * * 0 root find /backup/postgresql-wal -type f -mtime +7 -delete
Restoration Procedures
MySQL Restoration
Restore full backup:
# From uncompressed SQL
mysql < all-databases.sql
# From gzipped SQL
gunzip < all-databases.sql.gz | mysql
# Specific database
mysql database_name < database_name.sql
Restore with user/password:
mysql -u root -p < all-databases.sql
# With password in command (less secure)
mysql -u root -pYourPassword < all-databases.sql
Restore to different database name:
# Create new database
mysql -e "CREATE DATABASE new_database_name;"
# Restore a single-database dump (one taken without --databases, so it
# contains no CREATE DATABASE/USE statements) into the new name
mysql new_database_name < backup.sql
# Avoid blind sed rewrites of the dump file: they also alter any matching
# strings inside your data, not just the database name
PostgreSQL Restoration
Restore from pg_dumpall:
# Drop existing databases first (if recreating)
sudo -u postgres psql -c "DROP DATABASE database_name;"
# Restore
gunzip < all-databases.sql.gz | sudo -u postgres psql
Restore single database:
# Custom format
sudo -u postgres pg_restore -d database_name database_name.dump
# With options
sudo -u postgres pg_restore \
--clean \
--if-exists \
-d database_name \
database_name.dump
# Parallel restore (4 jobs)
sudo -u postgres pg_restore -j 4 -d database_name database_name.dump
Restore to different database:
# Create target database
sudo -u postgres createdb new_database_name
# Restore
sudo -u postgres pg_restore -d new_database_name old_database_name.dump
Real-World Scenarios
Scenario 1: E-commerce Database Protection
Requirements:
- High-transaction MySQL database
- 15-minute RPO
- Point-in-time recovery capability
Implementation:
# Daily full backup at 2 AM
0 2 * * * /usr/local/bin/mysql-full-backup.sh
# Binary log backup every 15 minutes
*/15 * * * * /usr/local/bin/mysql-binlog-backup.sh
# Offsite sync hourly
0 * * * * rsync -az /backup/mysql/ backup-server:/backups/mysql-production/
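With a 15-minute RPO it is worth monitoring that the binlog backups actually keep up. A sketch that fails when the newest file in the backup tree is older than the RPO allows (GNU find assumed; the function name is illustrative):

```shell
# check_rpo DIR MAX_AGE_SECONDS — succeed only if the newest file under DIR
# is no older than MAX_AGE_SECONDS; a failure here means the RPO is at risk.
check_rpo() {
    dir="$1"; max_age="$2"
    newest=$(find "$dir" -type f -printf '%T@\n' 2>/dev/null | sort -n | tail -1)
    [ -n "$newest" ] || return 1          # no backup files at all
    age=$(( $(date +%s) - ${newest%.*} ))
    [ "$age" -le "$max_age" ]
}
```

Run it from cron every few minutes and send mail on a nonzero exit, the same pattern the backup scripts above use for failures.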
Scenario 2: Multi-Database Environment
Requirements:
- MySQL and PostgreSQL
- Different retention for each
- Automated verification
Comprehensive backup script:
#!/bin/bash
# /usr/local/bin/backup-all-databases.sh
# MySQL backup
/usr/local/bin/mysql-backup.sh
# PostgreSQL backup
/usr/local/bin/postgresql-backup.sh
# Verify both completed (a plain [ -f ] glob test breaks once more than one
# dated backup directory matches; compgen -G tests whether any match exists)
if compgen -G "/backup/mysql/*/MANIFEST.txt" > /dev/null \
&& compgen -G "/backup/postgresql/*/MANIFEST.txt" > /dev/null; then
echo "All database backups completed" | mail -s "DB Backup Success" [email protected]
else
echo "Database backup verification failed" | mail -s "DB Backup FAILED" [email protected]
fi
Conclusion
Database backups require specialized approaches to ensure consistency, integrity, and recoverability. Whether using logical backups with mysqldump/pg_dump for flexibility or physical backups for performance, implementing robust database backup strategies protects your organization's most critical asset—its data.
Key takeaways:
- Ensure consistency: Use appropriate options (--single-transaction, etc.)
- Implement PITR: Binary logs/WAL archiving for point-in-time recovery
- Automate reliably: Schedule regular backups with monitoring
- Test restoration: Regular restoration drills are essential
- Secure backups: Encrypt sensitive database dumps
- Follow 3-2-1: Local, remote, offsite database backup copies
- Document procedures: Maintain detailed restoration documentation
Proper database backup strategies, combined with automation, monitoring, and regular testing, ensure business continuity and data protection in the face of hardware failures, human errors, or disasters.


