Database Migration Between Servers: Complete Zero-Downtime Guide
Database migration is a critical operation that requires precision, planning, and expertise. Whether you're upgrading infrastructure, consolidating databases, or moving to a new hosting provider, migrating databases between servers without data loss or extended downtime is essential for maintaining business continuity. This comprehensive guide covers migration strategies for MySQL/MariaDB, PostgreSQL, and MongoDB with emphasis on zero-downtime techniques.
Understanding Database Migration Challenges
Database migration presents unique challenges compared to file-based migrations:
- Data Consistency: Ensuring no data is lost during transfer
- Downtime Requirements: Minimizing service interruption
- Referential Integrity: Maintaining relationships between tables
- Performance Impact: Managing resource usage during migration
- Application Dependencies: Coordinating with application connection strings
- Large Dataset Handling: Efficiently transferring terabytes of data
- Character Encoding: Preserving data encoding across systems
A successful database migration requires understanding these challenges and implementing appropriate strategies to mitigate risks.
Pre-Migration Planning and Assessment
Database Environment Assessment
Before starting, conduct a thorough assessment:
# MySQL/MariaDB assessment
mysql -u root -p << 'EOF'
-- Check database sizes
SELECT
table_schema AS 'Database',
ROUND(SUM(data_length + index_length) / 1024 / 1024, 2) AS 'Size (MB)'
FROM information_schema.tables
GROUP BY table_schema
ORDER BY SUM(data_length + index_length) DESC;
-- Check table engines
SELECT
table_schema,
table_name,
engine,
table_rows
FROM information_schema.tables
WHERE table_schema NOT IN ('information_schema', 'mysql', 'performance_schema', 'sys');
-- Check stored procedures and functions
SELECT routine_schema, routine_name, routine_type
FROM information_schema.routines
WHERE routine_schema NOT IN ('information_schema', 'mysql', 'performance_schema', 'sys');
-- Check triggers
SELECT trigger_schema, trigger_name, event_object_table
FROM information_schema.triggers;
-- Check views
SELECT table_schema, table_name
FROM information_schema.views
WHERE table_schema NOT IN ('information_schema', 'mysql', 'performance_schema', 'sys');
EOF
For PostgreSQL:
# PostgreSQL assessment
psql -U postgres << 'EOF'
-- Check database sizes
SELECT
datname,
pg_size_pretty(pg_database_size(datname)) AS size
FROM pg_database
ORDER BY pg_database_size(datname) DESC;
-- List all schemas
\dn
-- List all tables with sizes
SELECT
schemaname,
tablename,
pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) AS size
FROM pg_tables
WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
ORDER BY pg_total_relation_size(schemaname||'.'||tablename) DESC;
-- Check extensions
SELECT extname, extversion FROM pg_extension;
EOF
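The size figures from the assessment feed directly into scheduling: dividing total data size by sustained network throughput gives a floor on transfer time. A minimal sketch of that arithmetic (the 50 GB and 1 Gbit/s figures are illustrative, not measurements from your environment):

```shell
#!/bin/sh
# Rough transfer-time floor: database size in GB and usable bandwidth in
# Mbit/s are illustrative inputs - substitute your own measurements.
estimate_transfer_seconds() {
    size_gb="$1"      # data size in GB
    mbit_per_s="$2"   # sustained throughput in Mbit/s
    # GB -> megabits (x 8000), then divide by throughput
    echo $(( size_gb * 8000 / mbit_per_s ))
}

# Example: a 50 GB dump over a 1 Gbit/s link
estimate_transfer_seconds 50 1000   # prints 400 (seconds)
```

Real transfers rarely sustain line rate, so treat the result as a lower bound and budget extra time for compression, restore, and verification.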
Choosing the Right Migration Strategy
Select a strategy based on your requirements:
- Dump and Restore: Simple, requires downtime
- Replication-Based: Zero-downtime, complex setup
- Logical Replication: PostgreSQL 10+, selective replication
- Physical Backup: Fast for large databases
- Cloud Native Tools: AWS DMS, Azure Database Migration Service
Pre-Migration Checklist
Complete this checklist before proceeding:
- Document current database version and configuration
- Calculate total database size and growth rate
- Identify peak and off-peak usage times
- List all databases, users, and permissions
- Document application connection strings
- Create complete backup of source database
- Test backup restoration procedure
- Verify destination server meets resource requirements
- Ensure network connectivity between servers
- Install same or compatible database version on destination
- Configure firewall rules for database communication
- Prepare rollback plan
- Set up monitoring on both servers
- Schedule migration during maintenance window
- Notify stakeholders and application teams
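One checklist item, installing the same or a compatible database version, is easy to check mechanically by comparing the major.minor component of each server's version string. A small sketch, assuming the version strings have already been parsed out of `mysql --version` or `psql --version` (the sample values are illustrative):

```shell
#!/bin/sh
# Sketch: confirm source and destination run the same major.minor version
# before migrating. Feed in real values parsed from `mysql --version`
# or `psql --version`; the strings below are illustrative.
major_minor() {
    echo "$1" | cut -d. -f1-2
}

src_version="8.0.34"   # e.g. parsed from the source server
dst_version="8.0.36"   # e.g. parsed from the destination server

if [ "$(major_minor "$src_version")" = "$(major_minor "$dst_version")" ]; then
    echo "versions compatible: $src_version -> $dst_version"
else
    echo "version mismatch: $src_version -> $dst_version" >&2
fi
```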
MySQL/MariaDB Migration Methods
Method 1: Traditional Dump and Restore
For smaller databases or when downtime is acceptable:
# On source server: Create full backup
mysqldump -u root -p \
--single-transaction \
--quick \
--lock-tables=false \
--routines \
--triggers \
--events \
--all-databases \
> /tmp/full_backup.sql
# Compress for faster transfer
gzip /tmp/full_backup.sql
# Transfer to new server
scp /tmp/full_backup.sql.gz user@new-server:/tmp/
# On new server: Extract and restore
gunzip /tmp/full_backup.sql.gz
mysql -u root -p < /tmp/full_backup.sql
# The dump includes the mysql system schema, so reload the grant tables
mysql -u root -p -e "FLUSH PRIVILEGES;"
# Verify restoration
mysql -u root -p << 'EOF'
SHOW DATABASES;
SELECT COUNT(*) FROM database_name.table_name;
EOF
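Verification becomes repeatable if you capture per-table row counts on each server into a file and diff the two. A sketch using illustrative sample data; in practice generate each file with the `mysql -N` query shown in the comments:

```shell
#!/bin/sh
# Sketch: diff per-table row counts captured on each server. In practice
# generate each file with something like:
#   mysql -N -e "SELECT CONCAT(table_name, ' ', table_rows)
#                FROM information_schema.tables
#                WHERE table_schema='your_database'" > counts.txt
# The sample data below is illustrative.
cat > /tmp/source_counts.txt << 'DATA'
customers 1500
orders 8200
DATA
cat > /tmp/dest_counts.txt << 'DATA'
customers 1500
orders 8199
DATA

if diff -u /tmp/source_counts.txt /tmp/dest_counts.txt; then
    echo "row counts match"
else
    echo "row count mismatch - investigate before cutover" >&2
fi
```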
Method 2: Zero-Downtime Replication Migration
This is the recommended approach for production databases:
# Step 1: Configure source server for replication
# Edit MySQL configuration
sudo nano /etc/mysql/mysql.conf.d/mysqld.cnf
# Add or modify these settings:
# server-id = 1
# log_bin = /var/log/mysql/mysql-bin.log
# binlog_format = ROW
# binlog_do_db = your_database_name
# Restart MySQL
sudo systemctl restart mysql
# Create replication user
mysql -u root -p << 'EOF'
CREATE USER 'replicator'@'%' IDENTIFIED BY 'strong_replication_password';
GRANT REPLICATION SLAVE ON *.* TO 'replicator'@'%';
FLUSH PRIVILEGES;
EOF
# Step 2: Create initial data snapshot
# Note: FLUSH TABLES WITH READ LOCK is released as soon as the session that
# issued it closes, so run these statements interactively and keep the
# session open (piping them through a heredoc would drop the lock immediately):
mysql -u root -p
mysql> FLUSH TABLES WITH READ LOCK;
mysql> SHOW MASTER STATUS;
# Note the File and Position values - you'll need these!
# Example output:
# File: mysql-bin.000003
# Position: 73
# In another terminal, create the backup. With --single-transaction and
# InnoDB-only tables the dump is consistent on its own, and --master-data=2
# also records the binlog coordinates as a commented CHANGE MASTER
# statement inside the dump.
mysqldump -u root -p \
--single-transaction \
--master-data=2 \
--routines \
--triggers \
--events \
your_database_name > /tmp/initial_snapshot.sql
# Back in the first session, release the lock
mysql> UNLOCK TABLES;
# Transfer snapshot
scp /tmp/initial_snapshot.sql user@new-server:/tmp/
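Because --master-data=2 writes the binlog coordinates into the dump as a commented CHANGE MASTER statement, they can be recovered from the snapshot itself even if the SHOW MASTER STATUS output was lost. A sketch; the sample line mimics that dump header, so point the grep at your real snapshot file:

```shell
#!/bin/sh
# Sketch: recover binlog coordinates from a --master-data=2 dump header.
# The sample line below mimics the commented CHANGE MASTER statement that
# mysqldump writes near the top of the dump.
cat > /tmp/initial_snapshot.sql << 'DUMP'
-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000003', MASTER_LOG_POS=73;
DUMP

coords=$(grep -m1 '^-- CHANGE MASTER TO' /tmp/initial_snapshot.sql)
log_file=$(echo "$coords" | sed "s/.*MASTER_LOG_FILE='\([^']*\)'.*/\1/")
log_pos=$(echo "$coords" | sed 's/.*MASTER_LOG_POS=\([0-9]*\).*/\1/')
echo "binlog file: $log_file, position: $log_pos"
# prints: binlog file: mysql-bin.000003, position: 73
```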
# Step 3: Configure new server as replica
# On new server, edit configuration
sudo nano /etc/mysql/mysql.conf.d/mysqld.cnf
# Add these settings:
# server-id = 2
# relay_log = /var/log/mysql/mysql-relay-bin
# log_bin = /var/log/mysql/mysql-bin.log
# read_only = 1
sudo systemctl restart mysql
# Create database and import snapshot
mysql -u root -p << 'EOF'
CREATE DATABASE your_database_name;
EOF
mysql -u root -p your_database_name < /tmp/initial_snapshot.sql
# Configure replication
mysql -u root -p << 'EOF'
CHANGE MASTER TO
MASTER_HOST='source-server-ip',
MASTER_USER='replicator',
MASTER_PASSWORD='strong_replication_password',
MASTER_LOG_FILE='mysql-bin.000003', -- From SHOW MASTER STATUS
MASTER_LOG_POS=73; -- From SHOW MASTER STATUS
START SLAVE;
EOF
# Step 4: Monitor replication
mysql -u root -p << 'EOF'
SHOW SLAVE STATUS\G
EOF
# Key fields to check:
# Slave_IO_Running: Yes
# Slave_SQL_Running: Yes
# Seconds_Behind_Master: 0 (may start high, should converge to 0)
# Last_Error: (should be empty)
# Continuous monitoring script
cat > /root/check-replication.sh << 'EOF'
#!/bin/bash
while true; do
mysql -u root -p'your_password' -e "SHOW SLAVE STATUS\G" | \
grep -E "(Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master|Last_Error)"
sleep 10
done
EOF
chmod +x /root/check-replication.sh
Method 3: Percona XtraBackup (Hot Backup)
For large InnoDB databases requiring minimal downtime:
# On source server: Install Percona XtraBackup
wget https://repo.percona.com/apt/percona-release_latest.generic_all.deb
sudo dpkg -i percona-release_latest.generic_all.deb
sudo apt update
sudo apt install percona-xtrabackup-80 -y
# Create backup
sudo xtrabackup --backup \
--user=root \
--password=your_password \
--target-dir=/backup/mysql
# Prepare backup
sudo xtrabackup --prepare --target-dir=/backup/mysql
# Transfer to new server
rsync -avz --progress /backup/mysql/ user@new-server:/backup/mysql/
# On new server: Stop MySQL and restore
sudo systemctl stop mysql
sudo rm -rf /var/lib/mysql/*
sudo xtrabackup --copy-back --target-dir=/backup/mysql
sudo chown -R mysql:mysql /var/lib/mysql
sudo systemctl start mysql
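xtrabackup signals success with a final "completed OK!" line, and checking for it mechanically is a cheap gate before wiping the destination datadir. A sketch with an illustrative log line (capture the real one by redirecting xtrabackup's output to a file):

```shell
#!/bin/sh
# Sketch: gate the restore on xtrabackup's success marker. The sample log
# line is illustrative; in practice redirect xtrabackup output with
#   sudo xtrabackup --prepare --target-dir=/backup/mysql 2> /tmp/xtrabackup.log
echo "2024-01-01T00:00:00 completed OK!" > /tmp/xtrabackup.log

if tail -n1 /tmp/xtrabackup.log | grep -q 'completed OK!'; then
    echo "backup prepare succeeded"
else
    echo "backup prepare FAILED - do not restore" >&2
    exit 1
fi
```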
PostgreSQL Migration Methods
Method 1: pg_dump and pg_restore
Standard approach for PostgreSQL:
# On source server: Create backup
pg_dump -U postgres \
-F c \
-b \
-v \
-f /tmp/database_backup.dump \
database_name
# For all databases
pg_dumpall -U postgres \
--clean \
--if-exists \
-f /tmp/all_databases.sql
# Transfer to new server
scp /tmp/database_backup.dump user@new-server:/tmp/
# On new server: Restore
createdb -U postgres database_name
pg_restore -U postgres \
-d database_name \
-v \
/tmp/database_backup.dump
# Verify
psql -U postgres database_name << 'EOF'
\dt
SELECT COUNT(*) FROM your_table;
EOF
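For multi-gigabyte databases, pg_dump's directory format (-Fd) lets both pg_dump and pg_restore run parallel workers via -j, which can shorten the window considerably on multi-core hosts. A sketch that only builds and echoes the commands so it is safe to run anywhere; database name and paths are illustrative:

```shell
#!/bin/sh
# Sketch: parallel dump/restore using pg_dump's directory format.
# Commands are echoed rather than executed; drop the echo to run them.
JOBS=$(nproc 2>/dev/null || echo 4)   # one worker per core, fallback to 4

echo "pg_dump -U postgres -Fd -j $JOBS -f /tmp/dump_dir database_name"
echo "pg_restore -U postgres -d database_name -j $JOBS /tmp/dump_dir"
```

Parallel workers dump tables concurrently, so the speedup is largest when the data is spread over many similarly sized tables rather than one huge one.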
Method 2: PostgreSQL Streaming Replication
For zero-downtime PostgreSQL migration:
# On source server (primary): Configure replication
sudo nano /etc/postgresql/14/main/postgresql.conf
# Modify these settings:
# listen_addresses = '*'
# wal_level = replica
# max_wal_senders = 10
# max_replication_slots = 10
# hot_standby = on
# Configure authentication
sudo nano /etc/postgresql/14/main/pg_hba.conf
# Add this line:
# host replication replicator new-server-ip/32 md5
# Create replication user
sudo -u postgres psql << 'EOF'
CREATE USER replicator WITH REPLICATION ENCRYPTED PASSWORD 'replication_password';
EOF
# Restart PostgreSQL
sudo systemctl restart postgresql
# On new server (standby): Stop PostgreSQL
sudo systemctl stop postgresql
# Remove existing data directory
sudo rm -rf /var/lib/postgresql/14/main/*
# Create base backup using pg_basebackup
sudo -u postgres pg_basebackup \
-h source-server-ip \
-D /var/lib/postgresql/14/main \
-U replicator \
-P \
-v \
-R \
-X stream \
-C -S replica_slot
# The -R flag automatically creates standby.signal
# and configures recovery parameters
# Start PostgreSQL on standby
sudo systemctl start postgresql
# Verify replication status
# On primary:
sudo -u postgres psql -c "SELECT * FROM pg_stat_replication;"
# On standby:
sudo -u postgres psql -c "SELECT * FROM pg_stat_wal_receiver;"
Method 3: Logical Replication (PostgreSQL 10+)
For selective replication or cross-version migration:
# On source server: Enable logical replication
sudo nano /etc/postgresql/14/main/postgresql.conf
# Set:
# wal_level = logical
# max_replication_slots = 10
# max_wal_senders = 10
sudo systemctl restart postgresql
# Create publication
sudo -u postgres psql database_name << 'EOF'
CREATE PUBLICATION my_publication FOR ALL TABLES;
EOF
# On new server: Create subscription
# Note: logical replication does not copy DDL, so load the schema on the
# destination first, e.g.:
# pg_dump -h source-server-ip -U postgres --schema-only database_name | \
#   psql -U postgres database_name
sudo -u postgres psql database_name << 'EOF'
CREATE SUBSCRIPTION my_subscription
CONNECTION 'host=source-server-ip port=5432 dbname=database_name user=postgres password=password'
PUBLICATION my_publication;
EOF
# Monitor replication
sudo -u postgres psql database_name << 'EOF'
SELECT * FROM pg_stat_subscription;
EOF
MongoDB Migration Methods
Method 1: mongodump and mongorestore
# On source server: Create backup
mongodump \
--host localhost \
--port 27017 \
--username admin \
--password password \
--authenticationDatabase admin \
--out /tmp/mongodb_backup
# Transfer to new server
rsync -avz /tmp/mongodb_backup/ user@new-server:/tmp/mongodb_backup/
# On new server: Restore
mongorestore \
--host localhost \
--port 27017 \
--username admin \
--password password \
--authenticationDatabase admin \
/tmp/mongodb_backup
Method 2: MongoDB Replica Set Migration
# Note: the new server must already run mongod with the same replSetName
# and keyfile/auth settings before it can join the set.
# Add new server as replica set member (use the legacy mongo shell on
# MongoDB versions before mongosh was introduced)
mongosh --host primary-server << 'EOF'
rs.add("new-server:27017")
EOF
# Wait for synchronization
mongosh --host new-server << 'EOF'
rs.status()
EOF
# When synced, step down the primary; the replica set then holds an
# election, in which the new member can become primary
mongosh --host primary-server << 'EOF'
rs.stepDown()
EOF
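Rather than eyeballing rs.status(), you can extract the new member's state mechanically and proceed only once it reports SECONDARY. A sketch; the JSON below mimics the shape of rs.status() output (field layout can vary by MongoDB version), and in practice you would pipe in something like `mongosh --quiet --eval 'JSON.stringify(rs.status())'`:

```shell
#!/bin/sh
# Sketch: extract a member's stateStr from rs.status()-style JSON.
# The sample document is illustrative.
cat > /tmp/rs_status.json << 'JSON'
{"members":[
  {"name":"primary-server:27017","stateStr":"PRIMARY"},
  {"name":"new-server:27017","stateStr":"SECONDARY"}
]}
JSON

state=$(grep -o '"name":"new-server:27017","stateStr":"[A-Z]*"' /tmp/rs_status.json | \
    sed 's/.*"stateStr":"\([A-Z]*\)".*/\1/')
echo "new-server state: $state"
# prints: new-server state: SECONDARY
```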
Performing the Cutover
Application Configuration Update
# Update application database configuration
# Example for PHP application
sudo nano /var/www/html/config.php
# Change:
# define('DB_HOST', 'old-server-ip');
# To:
# define('DB_HOST', 'new-server-ip');
# For environment variable based apps
sudo nano /etc/environment
# Update:
# DB_HOST=new-server-ip
# Restart application services
sudo systemctl restart php8.2-fpm
sudo systemctl restart nginx
Final Synchronization
# For replication-based migration:
# MySQL: Stop writes on source. super_read_only (MySQL 5.7+) also blocks
# accounts with the SUPER privilege, which plain read_only does not; on
# MariaDB use read_only alone. (A FLUSH TABLES WITH READ LOCK issued
# through a heredoc is released the moment the session closes, so it adds
# nothing here.)
mysql -u root -p << 'EOF'
SET GLOBAL read_only = ON;
SET GLOBAL super_read_only = ON;
EOF
# Wait for replica to catch up
mysql -u root -p -h new-server -e "SHOW SLAVE STATUS\G" | grep Seconds_Behind_Master
# Should show: Seconds_Behind_Master: 0
# Stop replication on new server
mysql -u root -p -h new-server << 'EOF'
STOP SLAVE;
RESET SLAVE ALL;
SET GLOBAL read_only = OFF;
EOF
# For PostgreSQL: Promote standby to primary
sudo -u postgres /usr/lib/postgresql/14/bin/pg_ctl promote \
-D /var/lib/postgresql/14/main
# Verify promotion
sudo -u postgres psql -c "SELECT pg_is_in_recovery();"
# Should return: f (false)
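The "wait for the replica to catch up" step above can be scripted as a polling loop instead of checked by hand. A runnable sketch: current_lag() is stubbed with a countdown so the loop terminates on its own; in production it would wrap the SHOW SLAVE STATUS query shown earlier:

```shell
#!/bin/sh
# Sketch of the cutover wait: poll replication lag until it reaches 0.
# current_lag() is a stub; in production replace it with something like:
#   mysql -h new-server ... -e "SHOW SLAVE STATUS\G" | \
#       awk '/Seconds_Behind_Master/ {print $2}'
LAG=3
current_lag() {
    echo "$LAG"
}

while [ "$(current_lag)" -gt 0 ]; do
    echo "waiting: replica is $(current_lag)s behind"
    LAG=$((LAG - 1))   # stub: real lag shrinks as the replica catches up
done
echo "replica caught up - safe to cut over"
```

In a real cutover you would add a `sleep` between polls and a timeout so a stalled replica cannot hold the maintenance window open indefinitely.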
Post-Migration Verification
Data Integrity Verification
# MySQL verification
# Compare approximate row counts (TABLE_ROWS is an estimate for InnoDB;
# run SELECT COUNT(*) per table when you need exact figures)
mysql -u root -p << 'EOF'
SELECT
TABLE_SCHEMA,
TABLE_NAME,
TABLE_ROWS
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'your_database'
ORDER BY TABLE_NAME;
EOF
# Compare checksums
mysql -u root -p << 'EOF'
CHECKSUM TABLE your_database.your_table EXTENDED;
EOF
# PostgreSQL verification (pg_stat_user_tables shows activity counters,
# useful as a sanity check; compare SELECT COUNT(*) per table for exact rows)
sudo -u postgres psql database_name << 'EOF'
SELECT
schemaname,
tablename,
n_tup_ins,
n_tup_upd,
n_tup_del
FROM pg_stat_user_tables
ORDER BY tablename;
EOF
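Row counts alone can miss corruption inside rows; comparing content checksums of identically ordered exports from each server is a stronger check. A sketch with stand-in files; in practice dump each table deterministically (e.g. with an ORDER BY on the primary key) before hashing:

```shell
#!/bin/sh
# Sketch: compare content checksums of per-table exports from each server.
# The sample files stand in for deterministic dumps of the same table.
printf 'id,name\n1,alice\n2,bob\n' > /tmp/source_table.csv
printf 'id,name\n1,alice\n2,bob\n' > /tmp/dest_table.csv

src_sum=$(sha256sum /tmp/source_table.csv | awk '{print $1}')
dst_sum=$(sha256sum /tmp/dest_table.csv  | awk '{print $1}')

if [ "$src_sum" = "$dst_sum" ]; then
    echo "checksums match"
else
    echo "checksum mismatch" >&2
fi
```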
Performance Testing
# MySQL performance baseline
sysbench /usr/share/sysbench/oltp_read_write.lua \
--mysql-host=new-server-ip \
--mysql-user=root \
--mysql-password=password \
--mysql-db=testdb \
--tables=10 \
--table-size=100000 \
prepare
sysbench /usr/share/sysbench/oltp_read_write.lua \
--mysql-host=new-server-ip \
--mysql-user=root \
--mysql-password=password \
--mysql-db=testdb \
--tables=10 \
--table-size=100000 \
--threads=10 \
--time=60 \
run
# PostgreSQL performance test
pgbench -i -s 50 database_name
pgbench -c 10 -j 2 -t 10000 database_name
Rollback Procedures
If migration fails, execute rollback:
# For replication-based migration:
# 1. Re-enable writes on source
mysql -u root -p << 'EOF'
SET GLOBAL read_only = OFF;
EOF
# (turning read_only off also clears super_read_only if it was set)
# 2. Update application to use old server
sudo nano /var/www/html/config.php
# Revert DB_HOST to old-server-ip
sudo systemctl restart php8.2-fpm nginx
# 3. Stop services on new server
sudo systemctl stop mysql
# 4. Document issues for analysis
echo "Migration failed at $(date): [reason]" >> /var/log/migration.log
Risk Mitigation Strategies
Connection Pooling During Migration
# Configure ProxySQL for connection management
# Install ProxySQL
wget https://github.com/sysown/proxysql/releases/download/v2.5.0/proxysql_2.5.0-ubuntu20_amd64.deb
sudo dpkg -i proxysql_2.5.0-ubuntu20_amd64.deb
# Configure backend servers
mysql -u admin -padmin -h 127.0.0.1 -P6032 << 'EOF'
INSERT INTO mysql_servers(hostgroup_id, hostname, port) VALUES (0, 'old-server', 3306);
INSERT INTO mysql_servers(hostgroup_id, hostname, port) VALUES (1, 'new-server', 3306);
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
EOF
# Applications connect to ProxySQL, which routes to appropriate backend
Monitoring During Migration
# Create monitoring script
cat > /root/monitor-migration.sh << 'EOF'
#!/bin/bash
LOG="/var/log/migration-monitor.log"
while true; do
echo "=== $(date) ===" >> $LOG
# Check replication lag
LAG=$(mysql -u root -ppassword -e "SHOW SLAVE STATUS\G" | \
grep Seconds_Behind_Master | awk '{print $2}')
echo "Replication Lag: $LAG seconds" >> $LOG
# Check connections
CONN=$(mysql -u root -ppassword -e "SHOW STATUS LIKE 'Threads_connected';" | \
awk 'NR==2 {print $2}')
echo "Active Connections: $CONN" >> $LOG
# Check disk space
DISK=$(df -h /var/lib/mysql | awk 'NR==2 {print $5}')
echo "Disk Usage: $DISK" >> $LOG
# Alert if issues detected (guard against a non-numeric value - LAG is
# NULL when replication is stopped)
if [ "$LAG" -eq "$LAG" ] 2>/dev/null && [ "$LAG" -gt 10 ]; then
echo "ALERT: High replication lag!" | mail -s "Migration Alert" [email protected]
fi
sleep 60
done
EOF
chmod +x /root/monitor-migration.sh
nohup /root/monitor-migration.sh &
Performance Optimization After Migration
# MySQL optimization
sudo nano /etc/mysql/mysql.conf.d/mysqld.cnf
# Add optimized settings:
# innodb_buffer_pool_size = 4G # 70-80% of available RAM
# innodb_log_file_size = 1G
# innodb_flush_log_at_trx_commit = 2
# innodb_flush_method = O_DIRECT
# innodb_file_per_table = 1
# max_connections = 500
# query_cache_type = 0   # MySQL 5.7/MariaDB only - the query cache was removed in MySQL 8.0
# query_cache_size = 0   # MySQL 5.7/MariaDB only - the query cache was removed in MySQL 8.0
sudo systemctl restart mysql
# PostgreSQL optimization
sudo nano /etc/postgresql/14/main/postgresql.conf
# Optimize settings:
# shared_buffers = 4GB
# effective_cache_size = 12GB
# maintenance_work_mem = 1GB
# checkpoint_completion_target = 0.9
# wal_buffers = 16MB
# default_statistics_target = 100
# random_page_cost = 1.1
# effective_io_concurrency = 200
# work_mem = 20MB
# min_wal_size = 1GB
# max_wal_size = 4GB
sudo systemctl restart postgresql
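The innodb_buffer_pool_size guideline above (70-80% of available RAM) can be computed from the machine's actual memory rather than hard-coded. A Linux-only sketch that reads /proc/meminfo and prints a value at 75%:

```shell
#!/bin/sh
# Sketch: derive innodb_buffer_pool_size (~75% of RAM) from the machine's
# actual memory. Linux-only - reads /proc/meminfo.
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
pool_mb=$(( total_kb * 75 / 100 / 1024 ))
echo "innodb_buffer_pool_size = ${pool_mb}M"
```

On a dedicated database host this is a reasonable starting point; lower the percentage if the server also runs application processes.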
Conclusion
Successful database migration between servers requires meticulous planning, appropriate tooling, and careful execution. Key takeaways from this guide:
- Choose the right migration strategy based on database size, acceptable downtime, and technical requirements
- Use replication-based methods for zero-downtime migrations in production environments
- Test extensively before cutover, including application connectivity and performance
- Monitor continuously during migration to detect and address issues immediately
- Maintain comprehensive rollback procedures to ensure business continuity
- Verify data integrity post-migration using checksums and row counts
- Optimize performance after migration to leverage new infrastructure
Whether migrating MySQL, PostgreSQL, or MongoDB, the principles remain consistent: backup, test, replicate, verify, and optimize. With proper execution, database migration can be completed with minimal risk and zero downtime, ensuring continuous service availability for your applications and users.
Always remember that database migration is not just a technical challenge but a business-critical operation. Plan conservatively, test thoroughly, and execute methodically to achieve successful outcomes.