Database Backup with mysqldump and pg_dump: Complete Guide

Introduction

Database backups are the cornerstone of any disaster recovery strategy. Whether facing hardware failures, human errors, security breaches, or natural disasters, having reliable, tested backups means the difference between minor inconvenience and catastrophic data loss that could end a business.

mysqldump (for MySQL/MariaDB) and pg_dump (for PostgreSQL) are the standard logical backup tools. Both produce portable, human-readable SQL files, and pg_dump can additionally write compressed archive formats. They are essential for database administrators, providing flexibility in backup strategies, easy restoration, and the ability to migrate data between servers or versions.

This comprehensive guide covers everything from basic backup commands to advanced techniques, automation strategies, restoration procedures, and best practices for both MySQL/MariaDB and PostgreSQL databases.

Why Logical Backups Matter

Logical backups offer several advantages:

  • Portability: Works across different platforms and versions
  • Selectivity: Backup specific databases, tables, or even rows (see the sketch after this list)
  • Human-readable: SQL format can be inspected and edited
  • Compression: Significantly reduce backup size
  • Consistency: Point-in-time consistent snapshots
  • Easy restoration: Simple to restore to any compatible server
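
The row-level case deserves a quick illustration. A minimal sketch using mysqldump's --where option (the table and column names here are illustrative, not from a real schema):

# Back up only the rows matching a condition
mysqldump -u root -p \
    --single-transaction \
    --where="created_at >= '2026-01-01'" \
    myapp_db orders > orders_recent_$(date +%Y%m%d).sql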

Prerequisites

Before proceeding:

  • MySQL/MariaDB or PostgreSQL installed and running
  • Sufficient disk space for backups (estimate roughly 1:1 with the data size for plain SQL dumps; compression shrinks this considerably)
  • Administrative access to databases
  • Backup destination directory with proper permissions
  • Understanding of database size and backup time requirements

Checking Database Sizes

MySQL/MariaDB:

# Connect to MySQL
mysql -u root -p

# Check all database sizes
SELECT
    table_schema AS 'Database',
    ROUND(SUM(data_length + index_length) / 1024 / 1024, 2) AS 'Size (MB)'
FROM information_schema.tables
GROUP BY table_schema
ORDER BY SUM(data_length + index_length) DESC;

# Check specific database size
SELECT
    table_name AS 'Table',
    ROUND(((data_length + index_length) / 1024 / 1024), 2) AS 'Size (MB)'
FROM information_schema.tables
WHERE table_schema = 'your_database'
ORDER BY (data_length + index_length) DESC;

PostgreSQL:

# Connect to PostgreSQL
sudo -u postgres psql

# Check all database sizes
SELECT
    datname AS database,
    pg_size_pretty(pg_database_size(datname)) AS size
FROM pg_database
ORDER BY pg_database_size(datname) DESC;

# Check table sizes in database
\c your_database
SELECT
    schemaname,
    tablename,
    pg_size_pretty(pg_total_relation_size(quote_ident(schemaname)||'.'||quote_ident(tablename))) AS size
FROM pg_tables
ORDER BY pg_total_relation_size(quote_ident(schemaname)||'.'||quote_ident(tablename)) DESC
LIMIT 20;

MySQL/MariaDB Backups with mysqldump

Basic mysqldump Commands

# Backup single database
mysqldump -u root -p database_name > backup_$(date +%Y%m%d).sql

# Backup with timestamp
mysqldump -u root -p myapp_db > myapp_backup_$(date +%Y%m%d_%H%M%S).sql

# Backup all databases
mysqldump -u root -p --all-databases > all_databases_$(date +%Y%m%d).sql

# Backup multiple specific databases
mysqldump -u root -p --databases db1 db2 db3 > multiple_dbs_$(date +%Y%m%d).sql

# Backup specific tables
mysqldump -u root -p database_name table1 table2 > tables_backup_$(date +%Y%m%d).sql

Essential mysqldump Options

# Complete backup with all database objects
mysqldump -u root -p \
    --single-transaction \
    --routines \
    --triggers \
    --events \
    --quick \
    --add-drop-database \
    --databases myapp_db > complete_backup_$(date +%Y%m%d).sql

# Options explained:
# --single-transaction: Consistent snapshot for InnoDB tables
# --routines: Include stored procedures and functions
# --triggers: Include triggers
# --events: Include scheduled events
# --quick: Retrieve rows one at a time (less memory)
# --add-drop-database: Add DROP DATABASE before CREATE DATABASE

Compressed Backups

# Compress with gzip
mysqldump -u root -p --all-databases | gzip > all_dbs_$(date +%Y%m%d).sql.gz

# Compress with higher compression
mysqldump -u root -p --all-databases | gzip -9 > all_dbs_$(date +%Y%m%d).sql.gz

# Compress with pigz (parallel gzip, faster)
mysqldump -u root -p --all-databases | pigz > all_dbs_$(date +%Y%m%d).sql.gz

# Compress with bzip2 (better compression, slower)
mysqldump -u root -p --all-databases | bzip2 > all_dbs_$(date +%Y%m%d).sql.bz2

# Compress with xz (best compression, slowest)
mysqldump -u root -p --all-databases | xz > all_dbs_$(date +%Y%m%d).sql.xz

Backing Up Large Databases

# Use extended insert for faster restore
mysqldump -u root -p \
    --single-transaction \
    --quick \
    --extended-insert \
    --max_allowed_packet=1G \
    large_database > large_db_$(date +%Y%m%d).sql

# Skip explicit table locks (--single-transaction already implies this;
# note the snapshot is only consistent for InnoDB tables, not MyISAM)
mysqldump -u root -p \
    --single-transaction \
    --skip-lock-tables \
    database_name > backup_$(date +%Y%m%d).sql

# Split a large database into per-table dumps (assumes credentials in
# ~/.my.cnf, covered below, so the loop does not prompt for a password per table)
for table in $(mysql -e "SHOW TABLES FROM database_name" | grep -v Tables_in); do
    mysqldump --single-transaction database_name "$table" | gzip > "database_name_${table}_$(date +%Y%m%d).sql.gz"
done

Backup with Replication Information

# Backup with master data (for replication setup)
mysqldump -u root -p \
    --master-data=2 \
    --single-transaction \
    --flush-logs \
    --all-databases > master_backup_$(date +%Y%m%d).sql

# Options:
# --master-data=1: Include CHANGE MASTER statement (executable)
# --master-data=2: Include CHANGE MASTER as comment (for reference)
# --flush-logs: Rotate binary logs after backup
# Note: MySQL 8.0.26+ renames the option to --source-data;
# --master-data still works there but is deprecated
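
With --master-data=2, the binary log coordinates sit near the top of the dump as a comment. A quick grep recovers them when pointing a replica at the right position (newer servers may emit "CHANGE REPLICATION SOURCE TO" instead):

# Pull the recorded binlog file and position out of the dump header
head -n 50 master_backup_$(date +%Y%m%d).sql | grep -E "CHANGE (MASTER|REPLICATION SOURCE)"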

Schema-Only and Data-Only Backups

# Backup only schema (structure)
mysqldump -u root -p --no-data database_name > schema_$(date +%Y%m%d).sql

# Backup only data (no CREATE statements)
mysqldump -u root -p --no-create-info database_name > data_$(date +%Y%m%d).sql

# Backup specific tables with schema only
mysqldump -u root -p --no-data database_name table1 table2 > tables_schema_$(date +%Y%m%d).sql

Excluding Specific Tables

# Exclude specific tables
mysqldump -u root -p \
    --single-transaction \
    --ignore-table=database.table1 \
    --ignore-table=database.table2 \
    database_name > backup_without_tables_$(date +%Y%m%d).sql

# Backup all except log tables
mysqldump -u root -p \
    --single-transaction \
    --ignore-table=myapp.access_logs \
    --ignore-table=myapp.error_logs \
    --ignore-table=myapp.audit_logs \
    myapp_db > myapp_no_logs_$(date +%Y%m%d).sql

Using Configuration File for Credentials

# Create credentials file
cat > ~/.my.cnf << EOF
[client]
user=backup_user
password=backup_password
host=localhost
EOF

# Secure the file
chmod 600 ~/.my.cnf

# Now run mysqldump without -u and -p
mysqldump --single-transaction --all-databases > backup_$(date +%Y%m%d).sql
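
With credentials in ~/.my.cnf, the backup can also run unattended from cron. A minimal sketch (the path, schedule, and user are assumptions to adapt; note that % must be escaped in crontab entries):

# /etc/cron.d/mysql-backup (illustrative): dump all databases at 02:00 daily
0 2 * * * backup_user mysqldump --single-transaction --all-databases | gzip > /backups/daily/all_dbs_$(date +\%Y\%m\%d).sql.gz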

PostgreSQL Backups with pg_dump

Basic pg_dump Commands

# Backup single database
pg_dump -U postgres database_name > backup_$(date +%Y%m%d).sql

# Backup with username and host
pg_dump -U postgres -h localhost database_name > backup_$(date +%Y%m%d).sql

# Backup all databases (use pg_dumpall)
pg_dumpall -U postgres > all_databases_$(date +%Y%m%d).sql

# Backup specific schema
pg_dump -U postgres -n schema_name database_name > schema_backup_$(date +%Y%m%d).sql

# Backup specific tables
pg_dump -U postgres -t table1 -t table2 database_name > tables_$(date +%Y%m%d).sql

pg_dump Output Formats

# Plain SQL format (default, human-readable)
pg_dump -U postgres -F p database_name > backup_$(date +%Y%m%d).sql

# Custom format (compressed, allows parallel restore)
pg_dump -U postgres -F c -f backup_$(date +%Y%m%d).dump database_name

# Directory format (parallel backup and restore)
pg_dump -U postgres -F d -j 4 -f backup_$(date +%Y%m%d) database_name

# Tar format
pg_dump -U postgres -F t -f backup_$(date +%Y%m%d).tar database_name

# Format options:
# -F p: plain SQL (default)
# -F c: custom (compressed, recommended)
# -F d: directory (parallel operations)
# -F t: tar archive

Compressed Backups

# Compress plain SQL format
pg_dump -U postgres database_name | gzip > backup_$(date +%Y%m%d).sql.gz

# Custom format is already compressed
pg_dump -U postgres -F c -f backup_$(date +%Y%m%d).dump database_name

# Adjust compression level (0-9)
pg_dump -U postgres -F c -Z 9 -f backup_$(date +%Y%m%d).dump database_name

Parallel Backup

# Parallel backup with 4 jobs (directory format)
pg_dump -U postgres -F d -j 4 -f backup_$(date +%Y%m%d) database_name

# The -j option specifies number of parallel jobs
# Useful for large databases
# Requires directory format (-F d)

# Example with 8 parallel jobs
pg_dump -U postgres -F d -j 8 -f large_db_backup_$(date +%Y%m%d) large_database
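
A reasonable starting point is one job per CPU core, which nproc makes dynamic. This is a sketch, not a rule: disk I/O often becomes the bottleneck before CPU does.

# Scale parallel jobs to the number of CPU cores
pg_dump -U postgres -F d -j "$(nproc)" -f backup_$(date +%Y%m%d) database_name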

Backing Up Large Databases

# Custom format with verbose output
pg_dump -U postgres -F c -v -f backup_$(date +%Y%m%d).dump database_name

# Directory format with parallel jobs and verbose
pg_dump -U postgres -F d -j 4 -v -f backup_$(date +%Y%m%d) database_name

# Exclude large tables (note -F c: without it the output is plain SQL
# despite the .dump extension)
pg_dump -U postgres -F c \
    --exclude-table=large_table1 \
    --exclude-table=large_table2 \
    -f backup_$(date +%Y%m%d).dump database_name

Schema-Only and Data-Only Backups

# Schema only (structure)
pg_dump -U postgres --schema-only database_name > schema_$(date +%Y%m%d).sql

# Data only
pg_dump -U postgres --data-only database_name > data_$(date +%Y%m%d).sql

# Specific table schema
pg_dump -U postgres --schema-only -t table_name database_name > table_schema_$(date +%Y%m%d).sql

Excluding Tables and Schemas

# Exclude specific tables (custom format to match the .dump extension)
pg_dump -U postgres -F c \
    --exclude-table=logs \
    --exclude-table=temp_data \
    -f backup_$(date +%Y%m%d).dump database_name

# Exclude table pattern
pg_dump -U postgres -F c \
    --exclude-table='log_*' \
    -f backup_$(date +%Y%m%d).dump database_name

# Exclude schema
pg_dump -U postgres -F c \
    --exclude-schema=test_schema \
    -f backup_$(date +%Y%m%d).dump database_name

Backup Global Objects with pg_dumpall

# Backup all databases and global objects
pg_dumpall -U postgres > all_databases_$(date +%Y%m%d).sql

# Backup only global objects (roles, tablespaces)
pg_dumpall -U postgres --globals-only > globals_$(date +%Y%m%d).sql

# Backup only roles
pg_dumpall -U postgres --roles-only > roles_$(date +%Y%m%d).sql

# Backup only tablespaces
pg_dumpall -U postgres --tablespaces-only > tablespaces_$(date +%Y%m%d).sql
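
Because pg_dumpall only emits plain SQL, a common pattern is to pair a globals-only dump with per-database custom-format dumps. A sketch, assuming a /backups directory exists and is writable:

# Dump globals once, then each non-template database in custom format
pg_dumpall -U postgres --globals-only > /backups/globals_$(date +%Y%m%d).sql
for db in $(psql -U postgres -At -c "SELECT datname FROM pg_database WHERE NOT datistemplate;"); do
    pg_dump -U postgres -F c -f "/backups/${db}_$(date +%Y%m%d).dump" "$db"
done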

Using .pgpass for Password

# Create .pgpass file
cat > ~/.pgpass << EOF
localhost:5432:*:postgres:your_password
localhost:5432:database_name:backup_user:backup_password
EOF

# Secure the file (required)
chmod 600 ~/.pgpass

# Now run pg_dump without password prompt
pg_dump -U postgres database_name > backup_$(date +%Y%m%d).sql
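
With ~/.pgpass in place, pg_dump can likewise run unattended from cron. A minimal sketch (path, schedule, and database name are assumptions; % must be escaped in crontab entries):

# /etc/cron.d/pg-backup (illustrative): nightly custom-format dump at 03:00
0 3 * * * postgres pg_dump -F c -f /backups/daily/myapp_$(date +\%Y\%m\%d).dump myapp_db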

Restoring Backups

Restoring MySQL/MariaDB Backups

# Restore single database from SQL file
mysql -u root -p database_name < backup_20260111.sql

# Restore all databases
mysql -u root -p < all_databases_20260111.sql

# Restore compressed backup
gunzip < backup_20260111.sql.gz | mysql -u root -p database_name

# Restore with progress indicator
pv backup_20260111.sql | mysql -u root -p database_name

# Create database before restoring
mysql -u root -p -e "CREATE DATABASE database_name CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;"
mysql -u root -p database_name < backup_20260111.sql

# Restore specific table
mysql -u root -p database_name < table_backup_20260111.sql

# Restore and show progress with pipe viewer
pv backup_20260111.sql.gz | gunzip | mysql -u root -p database_name
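
To pull a single database out of an --all-databases dump, the mysql client's --one-database flag replays only the statements that run while the named database is the default. The target database must already exist:

# Replay only myapp_db's statements from a full-server dump
mysql -u root -p --one-database myapp_db < all_databases_20260111.sql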

Restoring PostgreSQL Backups

# Restore plain SQL format
psql -U postgres -d database_name -f backup_20260111.sql

# Create database first
createdb -U postgres database_name
psql -U postgres -d database_name -f backup_20260111.sql

# Restore compressed SQL
gunzip < backup_20260111.sql.gz | psql -U postgres -d database_name

# Restore custom format
pg_restore -U postgres -d database_name backup_20260111.dump

# Restore with parallel jobs
pg_restore -U postgres -d database_name -j 4 backup_20260111.dump

# Restore directory format with parallel jobs
pg_restore -U postgres -d database_name -j 4 backup_20260111/

# Restore with clean (drop existing objects first)
pg_restore -U postgres -d database_name --clean backup_20260111.dump

# Restore with create database
pg_restore -U postgres --create -d postgres backup_20260111.dump

# Restore specific table
pg_restore -U postgres -d database_name -t table_name backup_20260111.dump

# Restore all databases
psql -U postgres -f all_databases_20260111.sql
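
Custom and directory formats also support selective restores driven by an edited table of contents. A sketch of the workflow:

# Write the archive's table of contents to a file
pg_restore --list backup_20260111.dump > restore_list.txt

# Edit restore_list.txt, prefixing lines to skip with ';'

# Restore only the items that remain uncommented
pg_restore -U postgres -d database_name -L restore_list.txt backup_20260111.dump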

Restore Options and Considerations

# MySQL: Stop application before restore
sudo systemctl stop apache2

# Restore with verbose output
mysql -u root -p -v database_name < backup_20260111.sql

# PostgreSQL: Create database with specific encoding
createdb -U postgres -E UTF8 -T template0 database_name
pg_restore -U postgres -d database_name backup_20260111.dump

# Handle errors during restore
# MySQL: continue past errors (use with care)
mysql -u root -p --force database_name < backup_20260111.sql

# PostgreSQL: stop at the first error instead of the default behavior
# of continuing and reporting an error count at the end
pg_restore -U postgres -d database_name --exit-on-error backup_20260111.dump

Backup Verification

Verifying MySQL/MariaDB Backups

# Check backup file integrity
gunzip -t backup_20260111.sql.gz

# Test restore to a temporary database and check for errors
# (this actually executes the dump, so never point it at a production database)
mysql -u root -p -e "CREATE DATABASE temp_restore_test;"
mysql -u root -p --force temp_restore_test < backup_20260111.sql 2>&1 | grep -i error
mysql -u root -p -e "DROP DATABASE temp_restore_test;"

# Count tables in backup
grep "CREATE TABLE" backup_20260111.sql | wc -l

# Check backup file size
ls -lh backup_20260111.sql
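
A step beyond eyeballing the file: after the test restore above (before dropping temp_restore_test), compare table counts against the source. The counts should match:

# Table counts should match between source and restored copy
mysql -u root -p -N -e "SELECT COUNT(*) FROM information_schema.tables WHERE table_schema='your_database';"
mysql -u root -p -N -e "SELECT COUNT(*) FROM information_schema.tables WHERE table_schema='temp_restore_test';"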

Verifying PostgreSQL Backups

# Check custom format backup
pg_restore --list backup_20260111.dump | head -20

# Verify backup structure
pg_restore --list backup_20260111.dump

# Test restore to temporary database
createdb -U postgres temp_restore_test
pg_restore -U postgres -d temp_restore_test backup_20260111.dump
dropdb -U postgres temp_restore_test

# Check backup file integrity
file backup_20260111.dump

# For directory format
ls -lh backup_20260111/

Best Practices

Backup Strategy

# Implement 3-2-1 backup strategy:
# 3 copies of data
# 2 different media types
# 1 offsite backup

# Daily backups with retention
# Keep 7 days of daily backups
# Keep 4 weeks of weekly backups
# Keep 12 months of monthly backups

# Example directory structure
/backups/
├── daily/
├── weekly/
├── monthly/
└── offsite/
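
A retention sweep with find keeps each tier to its window. A sketch, assuming the directory layout above and gzip-compressed dumps:

# Prune each tier on its own schedule
find /backups/daily/ -name '*.gz' -mtime +7 -delete     # keep 7 days
find /backups/weekly/ -name '*.gz' -mtime +28 -delete   # keep 4 weeks
find /backups/monthly/ -name '*.gz' -mtime +365 -delete # keep 12 months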

Naming Conventions

# Include date, time, and type in filename
# Format: database_type_YYYYMMDD_HHMMSS.extension

# Examples:
myapp_full_20260111_020000.sql.gz
postgres_schema_20260111_030000.dump
mysql_data_20260111_040000.sql.gz

# Function for consistent naming
backup_name() {
    local db=$1
    local type=$2
    echo "${db}_${type}_$(date +%Y%m%d_%H%M%S)"
}

# Usage
mysqldump -u root -p myapp_db > $(backup_name "myapp" "full").sql

Backup Security

# Encrypt backups
# MySQL backup with encryption (-pbkdf2 needs OpenSSL 1.1.1+ and strengthens
# key derivation; encrypt and decrypt flags must match)
mysqldump -u root -p --all-databases | gzip | openssl enc -aes-256-cbc -salt -pbkdf2 -out backup_$(date +%Y%m%d).sql.gz.enc

# Decrypt backup
openssl enc -d -aes-256-cbc -pbkdf2 -in backup_20260111.sql.gz.enc | gunzip | mysql -u root -p

# PostgreSQL backup with encryption
pg_dump -U postgres database_name | gzip | gpg --encrypt --recipient [email protected] > backup_$(date +%Y%m%d).sql.gz.gpg

# Decrypt
gpg --decrypt backup_20260111.sql.gz.gpg | gunzip | psql -U postgres -d database_name

# Set secure permissions
chmod 600 backup_*.sql
chown backup_user:backup_group backup_*.sql

Offsite Backup Transfer

# Transfer to remote server with rsync
rsync -avz --progress /backup/mysql/ user@backup-server:/remote/backup/mysql/

# Transfer with SSH compression
rsync -avz -e "ssh -C" /backup/postgresql/ user@backup-server:/remote/backup/postgresql/

# Upload to S3
aws s3 cp backup_20260111.sql.gz s3://my-backup-bucket/mysql/

# Upload to remote server with scp
scp backup_20260111.sql.gz user@backup-server:/remote/backup/
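
Verifying the transfer catches silent corruption in flight. A sketch comparing checksums on both ends (hostnames and paths are illustrative):

# Checksums must match before the local copy is trusted for deletion
sha256sum backup_20260111.sql.gz
ssh user@backup-server "sha256sum /remote/backup/backup_20260111.sql.gz"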

Monitoring Backup Size and Time

# Log backup metrics
echo "$(date): Backup started" >> /var/log/backup.log
START_TIME=$(date +%s)

# Perform backup
mysqldump -u root -p --all-databases | gzip > backup_$(date +%Y%m%d).sql.gz

# Calculate duration and size
END_TIME=$(date +%s)
DURATION=$((END_TIME - START_TIME))
SIZE=$(du -h backup_$(date +%Y%m%d).sql.gz | cut -f1)

echo "$(date): Backup completed in ${DURATION}s, size: ${SIZE}" >> /var/log/backup.log

Troubleshooting

Common mysqldump Issues

# Error: Access denied
# Solution: Check user privileges
GRANT SELECT, LOCK TABLES, SHOW VIEW, EVENT, TRIGGER ON *.* TO 'backup_user'@'localhost';

# Error: mysqldump: Got error: 1044: Access denied for user
# Solution: Add --no-tablespaces option
mysqldump -u root -p --no-tablespaces --all-databases > backup.sql

# Error: Out of memory
# Solution: Use --quick option
mysqldump -u root -p --quick --all-databases > backup.sql

# Error: Lock wait timeout exceeded
# Solution: raise the relevant timeout (lock_wait_timeout governs metadata
# locks; InnoDB row locks use innodb_lock_wait_timeout)
SET GLOBAL lock_wait_timeout = 600;

Common pg_dump Issues

# Error: permission denied
# Solution: Grant necessary privileges
GRANT SELECT ON ALL TABLES IN SCHEMA public TO backup_user;

# Error: could not connect
# Solution: Check pg_hba.conf and connection parameters
# Add to pg_hba.conf (newer PostgreSQL defaults to scram-sha-256; match your setting):
host    all    backup_user    127.0.0.1/32    md5

# Large object errors
# Solution: exclude large objects if they are not needed
# (they are included by default when dumping a whole database)
pg_dump -U postgres --no-blobs database_name > backup.sql

# Parallel dump fails
# Solution: Ensure directory has write permissions
mkdir -p /backup/pgdump
chown postgres:postgres /backup/pgdump

Conclusion

Reliable database backups are essential for business continuity and data protection. Both mysqldump and pg_dump provide powerful, flexible tools for creating consistent, portable backups.

Key Takeaways

  1. Regular Backups: Automate daily backups with retention policies
  2. Test Restores: Regularly verify backup integrity through test restores
  3. Multiple Copies: Implement 3-2-1 backup strategy
  4. Compression: Use compression to save storage space
  5. Security: Encrypt sensitive backups
  6. Monitoring: Track backup size, duration, and success
  7. Documentation: Maintain clear backup and restore procedures

Backup Checklist

  • ✓ Automated daily backups configured
  • ✓ Backup retention policy implemented
  • ✓ Offsite backups configured
  • ✓ Backup encryption enabled for sensitive data
  • ✓ Regular restore tests scheduled
  • ✓ Backup monitoring and alerting active
  • ✓ Documentation updated and accessible
  • ✓ Backup permissions secured
  • ✓ Storage capacity monitored
  • ✓ Recovery time objective (RTO) defined and tested

Remember: A backup is only as good as your last successful restore. Test your backups regularly!