Synchronization During Migration: Strategies for Continuous Data Consistency
Maintaining data synchronization during server migration is critical for achieving zero-downtime transitions and ensuring data integrity. Unlike one-time data transfers, synchronization establishes continuous data flow between source and destination systems, allowing both environments to operate in parallel while changes propagate seamlessly. This comprehensive guide covers synchronization strategies, tools, and best practices for various migration scenarios.
Understanding Migration Synchronization
Synchronization during migration involves maintaining data consistency across source and destination systems while services remain operational. The challenge lies in capturing and replicating changes that occur during the migration window without impacting user experience or causing data loss.
Key Synchronization Concepts
- Bidirectional vs Unidirectional Sync: Whether data flows one way or both ways
- Real-time vs Batch Sync: Continuous replication versus periodic updates
- Eventual Consistency: Accepting temporary divergence with guaranteed convergence
- Conflict Resolution: Handling simultaneous updates to the same data
- Incremental Updates: Transferring only changed data
- Checkpointing: Maintaining recovery points for interrupted syncs
- Verification: Ensuring synchronized data matches source
Synchronization Challenges
- Performance Impact: Sync processes consuming system resources
- Network Reliability: Handling connection interruptions
- Data Consistency: Maintaining referential integrity
- Time Synchronization: Coordinating timestamps across systems
- Bandwidth Constraints: Managing large data transfers
- Application Impact: Minimizing disruption to running services
- Conflict Resolution: Managing concurrent modifications
- Monitoring Lag: Tracking synchronization delay
Pre-Synchronization Planning
Assess Synchronization Requirements
# Analyze data change rate
# Monitor write operations over time
vmstat 1 60 # Track I/O for 60 seconds
iostat -x 5 12 # Detailed I/O statistics
# Database write rate (these counters are cumulative since server start -- sample twice and diff to get a per-second rate)
mysql -e "SHOW GLOBAL STATUS LIKE 'Com_insert';"
mysql -e "SHOW GLOBAL STATUS LIKE 'Com_update';"
mysql -e "SHOW GLOBAL STATUS LIKE 'Com_delete';"
# File system change rate
inotifywatch -t 300 -r /var/www/html
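To turn the measured change rate into a bandwidth requirement, a rough back-of-envelope calculation is enough. The 900 MB figure below is a placeholder for whatever churn the monitoring above actually reports over its window:

```shell
CHANGED_MB=900      # churn observed in the sample window (placeholder)
WINDOW_SECS=300     # matches the 300-second inotifywatch run above
# 1 MB of churn = 8 Mbit that must cross the link within the window
REQUIRED_MBITS=$(( CHANGED_MB * 8 / WINDOW_SECS ))
echo "Minimum sustained bandwidth: ${REQUIRED_MBITS} Mbit/s"
```

Size the link with comfortable headroom above this floor: rsync adds protocol overhead, and real churn is bursty rather than evenly spread.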
Synchronization Planning Checklist
- Identify all data sources requiring synchronization
- Calculate data change rate (transactions per second)
- Estimate network bandwidth requirements
- Determine acceptable synchronization lag
- Choose synchronization tools and methods
- Plan synchronization architecture (push vs pull)
- Define conflict resolution policies
- Establish monitoring and alerting
- Create synchronization testing plan
- Document rollback procedures
- Schedule initial bulk sync
- Plan incremental sync frequency
- Define success criteria for sync completion
File System Synchronization Strategies
Method 1: Continuous rsync Synchronization
Near-real-time file synchronization by running rsync on a fixed interval:
# Create continuous sync script
cat > /root/continuous-sync.sh << 'EOF'
#!/bin/bash
SOURCE="/var/www/html/"
DEST="user@new-server:/var/www/html/"
LOG="/var/log/continuous-sync.log"
SYNC_INTERVAL=300 # 5 minutes
EXCLUDE_FILE="/root/sync-exclude.txt"
# Logging function
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a $LOG
}
log "Starting continuous synchronization"
while true; do
START=$(date +%s)
log "Sync cycle started"
rsync -avz --delete \
--exclude-from=$EXCLUDE_FILE \
--timeout=300 \
--partial \
--log-file=$LOG \
$SOURCE $DEST
EXIT_CODE=$?
if [ $EXIT_CODE -eq 0 ]; then
log "Sync completed successfully"
else
log "Sync failed with exit code $EXIT_CODE"
fi
END=$(date +%s)
DURATION=$((END - START))
log "Sync took $DURATION seconds"
# Calculate sleep time
SLEEP_TIME=$((SYNC_INTERVAL - DURATION))
if [ $SLEEP_TIME -gt 0 ]; then
log "Sleeping for $SLEEP_TIME seconds"
sleep $SLEEP_TIME
else
log "Sync took longer than interval, starting immediately"
fi
done
EOF
chmod +x /root/continuous-sync.sh
# Run in background with nohup
nohup /root/continuous-sync.sh &
# Or use systemd service
cat > /etc/systemd/system/continuous-sync.service << 'EOF'
[Unit]
Description=Continuous File Synchronization
After=network.target
[Service]
Type=simple
User=root
ExecStart=/root/continuous-sync.sh
Restart=always
RestartSec=30
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable continuous-sync
systemctl start continuous-sync
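The script above reads exclusion patterns from /root/sync-exclude.txt, but that file was never created. The entries below are common examples, not a canonical list -- tailor them to your application:

```shell
# Example exclusion patterns in rsync --exclude-from format
# (illustrative defaults only -- adjust per application)
cat > /root/sync-exclude.txt << 'EOF'
*.log
*.tmp
*.swp
cache/
tmp/
sessions/
.git/
EOF
```

Excluding volatile paths like caches and session stores both shrinks each sync pass and avoids churning files the destination will regenerate anyway.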
Method 2: inotify-based Real-Time Sync
Event-driven synchronization using inotify:
# Install inotify tools
sudo apt install inotify-tools -y
# Create real-time sync script
cat > /root/inotify-sync.sh << 'EOF'
#!/bin/bash
SOURCE="/var/www/html/"
DEST="user@new-server:/var/www/html/"
LOG="/var/log/inotify-sync.log"
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a $LOG
}
log "Starting inotify-based synchronization"
# Monitor file system events and sync
inotifywait -m -r -e modify,create,delete,move $SOURCE --format '%w%f' | while read FILE
do
log "Change detected: $FILE"
rsync -avz --delete \
--timeout=60 \
$SOURCE $DEST >> $LOG 2>&1
if [ $? -eq 0 ]; then
log "Sync completed for change: $FILE"
else
log "Sync failed for change: $FILE"
fi
done
EOF
chmod +x /root/inotify-sync.sh
nohup /root/inotify-sync.sh &
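One weakness of the loop above is that every single file event triggers a full rsync pass, so a burst of writes launches a burst of syncs. A debounced variant (a sketch, using the same assumed paths) waits for the tree to go quiet before syncing, coalescing each burst into one run:

```shell
cat > /root/debounced-sync.sh << 'EOF'
#!/bin/bash
SOURCE="/var/www/html/"
DEST="user@new-server:/var/www/html/"
QUIET_PERIOD=10   # seconds of silence required before syncing
# Block until the first event arrives...
while inotifywait -r -e modify,create,delete,move "$SOURCE" > /dev/null 2>&1; do
# ...then drain follow-up events until nothing happens for QUIET_PERIOD
# (inotifywait -t exits non-zero on timeout, ending the inner loop)
while inotifywait -r -t $QUIET_PERIOD -e modify,create,delete,move "$SOURCE" > /dev/null 2>&1; do
:
done
rsync -az --delete --timeout=60 "$SOURCE" "$DEST"
done
EOF
chmod +x /root/debounced-sync.sh
```

The trade-off is latency: changes can sit unsynced for up to QUIET_PERIOD seconds after the last event, which is usually acceptable during a migration.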
Method 3: Lsyncd for Production Environments
Lsyncd combines inotify with rsync for efficient real-time sync:
# Install lsyncd
sudo apt install lsyncd -y
# Configure lsyncd
sudo nano /etc/lsyncd/lsyncd.conf.lua
-- Add configuration
settings {
logfile = "/var/log/lsyncd/lsyncd.log",
statusFile = "/var/log/lsyncd/lsyncd.status",
statusInterval = 20,
maxProcesses = 4,
nodaemon = false,
}
sync {
default.rsync,
source = "/var/www/html",
target = "user@new-server:/var/www/html",
exclude = { '*.log', 'cache/', 'tmp/' },
rsync = {
archive = true,
compress = true,
verbose = true,
_extra = {"--delete-after", "--partial", "--bwlimit=50000"}
},
delay = 5,
}
# Save the file, then start lsyncd
sudo systemctl enable lsyncd
sudo systemctl start lsyncd
sudo systemctl status lsyncd
# Monitor lsyncd status (lsyncd writes it to the statusFile configured above)
cat /var/log/lsyncd/lsyncd.status
Database Synchronization Strategies
MySQL/MariaDB Replication Synchronization
Master-slave replication for continuous database sync:
# On source server (master): Configure for replication
sudo nano /etc/mysql/mysql.conf.d/mysqld.cnf
# Add these settings:
# server-id = 1
# log_bin = /var/log/mysql/mysql-bin.log
# binlog_format = ROW
# binlog_do_db = your_database
sudo systemctl restart mysql
# Create replication user
mysql -u root -p << 'EOF'
CREATE USER 'replicator'@'%' IDENTIFIED BY 'strong_password';
GRANT REPLICATION SLAVE ON *.* TO 'replicator'@'%';
FLUSH PRIVILEGES;
EOF
# Create initial dump
# --single-transaction takes a consistent snapshot (assumes InnoDB tables)
# and --master-data=2 embeds the binlog file and position as a commented
# CHANGE MASTER line in the dump, so no FLUSH TABLES WITH READ LOCK is
# needed -- a lock taken in a separate mysql session would be released
# the moment that session exited anyway
mysqldump -u root -p \
--single-transaction \
--master-data=2 \
your_database > /tmp/initial_dump.sql
# Record the File and Position values from the dump's CHANGE MASTER comment
# Transfer dump to new server
scp /tmp/initial_dump.sql user@new-server:/tmp/
# On new server (slave): Configure replication
sudo nano /etc/mysql/mysql.conf.d/mysqld.cnf
# server-id = 2
# relay_log = /var/log/mysql/mysql-relay-bin
# log_bin = /var/log/mysql/mysql-bin.log
sudo systemctl restart mysql
# Import dump
mysql -u root -p your_database < /tmp/initial_dump.sql
# Set up replication
mysql -u root -p << 'EOF'
CHANGE MASTER TO
MASTER_HOST='source-server-ip',
MASTER_USER='replicator',
MASTER_PASSWORD='strong_password',
MASTER_LOG_FILE='mysql-bin.000001',
MASTER_LOG_POS=12345;
START SLAVE;
SHOW SLAVE STATUS\G
EOF
# Monitor replication continuously
cat > /root/monitor-replication.sh << 'EOF'
#!/bin/bash
while true; do
echo "=== $(date) ==="
mysql -u root -pYOUR_PASSWORD -e "SHOW SLAVE STATUS\G" | \
grep -E "Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master|Last_Error"
sleep 10
done
EOF
chmod +x /root/monitor-replication.sh
nohup /root/monitor-replication.sh > /var/log/replication-monitor.log &
PostgreSQL Streaming Replication
Real-time PostgreSQL synchronization:
# On primary server: Configure streaming replication
sudo nano /etc/postgresql/14/main/postgresql.conf
# listen_addresses = '*'
# wal_level = replica
# max_wal_senders = 10
# max_replication_slots = 10
# hot_standby = on
# archive_mode = on
# archive_command = 'test ! -f /var/lib/postgresql/14/archive/%f && cp %p /var/lib/postgresql/14/archive/%f'
# Configure authentication
sudo nano /etc/postgresql/14/main/pg_hba.conf
# Add:
# host replication replicator new-server-ip/32 md5
sudo systemctl restart postgresql
# Create replication user
sudo -u postgres psql << 'EOF'
CREATE USER replicator REPLICATION LOGIN ENCRYPTED PASSWORD 'password';
EOF
# On standby server: Set up streaming replication
sudo systemctl stop postgresql
sudo rm -rf /var/lib/postgresql/14/main/*
# Create base backup
sudo -u postgres pg_basebackup \
-h primary-server-ip \
-D /var/lib/postgresql/14/main \
-U replicator \
-P -v -R -X stream -C -S standby_slot
# Start PostgreSQL
sudo systemctl start postgresql
# Monitor replication lag
cat > /root/monitor-pg-replication.sh << 'EOF'
#!/bin/bash
while true; do
echo "=== $(date) ==="
# On primary
ssh primary-server "sudo -u postgres psql -c 'SELECT * FROM pg_stat_replication;'"
# On standby
sudo -u postgres psql -c "SELECT * FROM pg_stat_wal_receiver;"
sleep 10
done
EOF
chmod +x /root/monitor-pg-replication.sh
Logical Replication for Selective Sync
PostgreSQL logical replication for specific tables:
# On source: Create publication
sudo -u postgres psql your_database << 'EOF'
CREATE PUBLICATION migration_pub FOR TABLE table1, table2, table3;
-- Or for all tables:
-- CREATE PUBLICATION migration_pub FOR ALL TABLES;
EOF
# On destination: Create subscription
sudo -u postgres psql your_database << 'EOF'
CREATE SUBSCRIPTION migration_sub
CONNECTION 'host=source-server port=5432 dbname=your_database user=replicator password=password'
PUBLICATION migration_pub;
EOF
# Monitor subscription status
sudo -u postgres psql your_database << 'EOF'
SELECT * FROM pg_stat_subscription;
SELECT * FROM pg_subscription_rel;
EOF
Application-Level Synchronization
Session Synchronization
Shared session storage for seamless user experience:
# Install Redis for shared sessions
sudo apt install redis-server -y
# Configure Redis for replication
sudo nano /etc/redis/redis.conf
# On master:
# bind 0.0.0.0
# requirepass your_strong_password
# On replica:
# replicaof master-server-ip 6379
# masterauth your_strong_password
sudo systemctl restart redis-server
# Configure PHP to use Redis for sessions
sudo nano /etc/php/8.2/fpm/php.ini
# session.save_handler = redis
# session.save_path = "tcp://redis-server:6379?auth=your_strong_password"
sudo systemctl restart php8.2-fpm
# Monitor Redis replication
redis-cli -a your_strong_password INFO replication
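Beyond replication health, it is worth confirming that PHP sessions are actually landing in Redis. A small check script (the PHPREDIS_SESSION: prefix is the phpredis default and an assumption here -- verify it against your session.save_path):

```shell
cat > /root/check-redis-sessions.sh << 'EOF'
#!/bin/bash
# Count live PHP session keys; phpredis stores them under the
# PHPREDIS_SESSION: prefix by default (an assumption -- verify against
# your session.save_path setting)
redis-cli -a your_strong_password --scan --pattern 'PHPREDIS_SESSION:*' | wc -l
EOF
chmod +x /root/check-redis-sessions.sh
```

Browse the site to create a session, run the script, and the count should rise; a stubborn zero usually means PHP is still writing sessions to local files.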
Queue Synchronization
Ensure job queues remain synchronized:
# RabbitMQ mirrored queues (classic mirroring; note that newer RabbitMQ
# releases deprecate this in favor of quorum queues)
# Configure queue mirroring for high availability
rabbitmqctl set_policy ha-migrate "^migrate\." \
'{"ha-mode":"all","ha-sync-mode":"automatic"}'
# Monitor queue synchronization
rabbitmqctl list_queues name messages consumers
# Redis queue replication
# Already handled by Redis replication configuration above
Cache Synchronization
Synchronized caching across environments:
# Memcached with mcrouter for distributed caching
# Install mcrouter
git clone https://github.com/facebook/mcrouter.git
cd mcrouter/mcrouter/scripts
sudo ./install_ubuntu_20.04.sh /usr/local
# Configure mcrouter
cat > /etc/mcrouter/config.json << 'EOF'
{
"pools": {
"A": {
"servers": [
"old-server:11211",
"new-server:11211"
]
}
},
"route": "PoolRoute|A"
}
EOF
# Start mcrouter
mcrouter --config-file=/etc/mcrouter/config.json \
--port=5000 \
--num-proxies=4
Monitoring Synchronization Status
Comprehensive Monitoring Script
cat > /root/monitor-sync-status.sh << 'EOF'
#!/bin/bash
LOG="/var/log/sync-monitoring.log"
ALERT_EMAIL="admin@example.com"
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a $LOG
}
alert() {
log "ALERT: $1"
echo "$1" | mail -s "Sync Alert" $ALERT_EMAIL
}
# Monitor file sync lag
check_file_sync() {
SOURCE_FILES=$(find /var/www/html -type f | wc -l)
DEST_FILES=$(ssh user@new-server "find /var/www/html -type f | wc -l")
DIFF=$((SOURCE_FILES - DEST_FILES))
log "File sync status: Source=$SOURCE_FILES, Dest=$DEST_FILES, Diff=$DIFF"
if [ $DIFF -gt 100 ]; then
alert "File sync lag detected: $DIFF files behind"
fi
}
# Monitor database replication lag
check_db_replication() {
LAG=$(mysql -u root -pPASSWORD -e "SHOW SLAVE STATUS\G" | \
grep Seconds_Behind_Master | awk '{print $2}')
log "Database replication lag: $LAG seconds"
if [ "$LAG" = "NULL" ] || [ -z "$LAG" ]; then
alert "Replication lag unknown (Seconds_Behind_Master is NULL -- slave threads may be stopped)"
elif [ "$LAG" -gt 10 ]; then
alert "Database replication lag: $LAG seconds"
fi
# Check replication status
IO_RUNNING=$(mysql -u root -pPASSWORD -e "SHOW SLAVE STATUS\G" | \
grep Slave_IO_Running | awk '{print $2}')
SQL_RUNNING=$(mysql -u root -pPASSWORD -e "SHOW SLAVE STATUS\G" | \
grep Slave_SQL_Running | awk '{print $2}')
if [ "$IO_RUNNING" != "Yes" ] || [ "$SQL_RUNNING" != "Yes" ]; then
alert "Database replication stopped! IO=$IO_RUNNING, SQL=$SQL_RUNNING"
fi
}
# Monitor network connectivity
check_network() {
if ! ping -c 3 new-server-ip > /dev/null 2>&1; then
alert "Network connectivity lost to new server"
fi
}
# Monitor disk space
check_disk_space() {
SOURCE_USAGE=$(df -h /var/www/html | awk 'NR==2 {print $5}' | sed 's/%//')
DEST_USAGE=$(ssh user@new-server "df -h /var/www/html | awk 'NR==2 {print \$5}' | sed 's/%//'")
log "Disk usage: Source=$SOURCE_USAGE%, Dest=$DEST_USAGE%"
if [ $DEST_USAGE -gt 85 ]; then
alert "Destination disk usage critical: $DEST_USAGE%"
fi
}
# Main monitoring loop
while true; do
log "=== Starting sync status check ==="
check_network
check_file_sync
check_db_replication
check_disk_space
log "=== Sync status check complete ==="
sleep 300 # Check every 5 minutes
done
EOF
chmod +x /root/monitor-sync-status.sh
nohup /root/monitor-sync-status.sh &
Real-Time Sync Dashboard
# Create dashboard script
cat > /root/sync-dashboard.sh << 'EOF'
#!/bin/bash
while true; do
clear
echo "============================================"
echo " MIGRATION SYNCHRONIZATION DASHBOARD"
echo "============================================"
echo ""
echo "Time: $(date)"
echo ""
# File sync status
echo "--- FILE SYNC STATUS ---"
SOURCE_SIZE=$(du -sh /var/www/html | awk '{print $1}')
DEST_SIZE=$(ssh user@new-server "du -sh /var/www/html | awk '{print \$1}'")
echo "Source size: $SOURCE_SIZE"
echo "Destination size: $DEST_SIZE"
# Check if lsyncd is running
if pgrep -x lsyncd > /dev/null; then
echo "Lsyncd: RUNNING"
else
echo "Lsyncd: STOPPED"
fi
echo ""
# Database replication status
echo "--- DATABASE REPLICATION ---"
mysql -u root -pPASSWORD -e "SHOW SLAVE STATUS\G" 2>/dev/null | \
grep -E "Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master"
echo ""
# Network status
echo "--- NETWORK STATUS ---"
PING=$(ping -c 1 new-server-ip | grep time= | awk -F'time=' '{print $2}' | awk '{print $1}')
echo "Latency to new server: $PING ms"
echo ""
# System resources
echo "--- SYSTEM RESOURCES ---"
echo "Source server:"
ssh old-server "uptime"
echo "Destination server:"
ssh new-server "uptime"
echo ""
echo "Press Ctrl+C to exit"
sleep 5
done
EOF
chmod +x /root/sync-dashboard.sh
Handling Synchronization Conflicts
Conflict Detection and Resolution
# Create conflict detection script
cat > /root/detect-conflicts.sh << 'EOF'
#!/bin/bash
SOURCE="/var/www/html"
DEST_USER="user@new-server"
DEST_PATH="/var/www/html"
CONFLICT_LOG="/var/log/sync-conflicts.log"
log_conflict() {
echo "[$(date)] CONFLICT: $1" >> $CONFLICT_LOG
}
# Find files modified on both servers
find $SOURCE -type f -mtime -1 | while read SOURCE_FILE; do
REL_PATH="${SOURCE_FILE#$SOURCE}"
DEST_FILE="$DEST_PATH$REL_PATH"
# Check if file exists on destination
if ssh $DEST_USER "[ -f $DEST_FILE ]"; then
SOURCE_MTIME=$(stat -c %Y "$SOURCE_FILE")
DEST_MTIME=$(ssh $DEST_USER "stat -c %Y $DEST_FILE")
# Both modified within last hour
NOW=$(date +%s)
if [ $((NOW - SOURCE_MTIME)) -lt 3600 ] && [ $((NOW - DEST_MTIME)) -lt 3600 ]; then
# Check if content differs
SOURCE_MD5=$(md5sum "$SOURCE_FILE" | awk '{print $1}')
DEST_MD5=$(ssh $DEST_USER "md5sum $DEST_FILE | awk '{print \$1}'")
if [ "$SOURCE_MD5" != "$DEST_MD5" ]; then
log_conflict "File modified on both servers: $REL_PATH"
echo "CONFLICT: $REL_PATH (source: $SOURCE_MD5, dest: $DEST_MD5)"
# Resolution strategy: source wins
rsync -avz "$SOURCE_FILE" "$DEST_USER:$DEST_FILE"
log_conflict "Resolved by copying from source: $REL_PATH"
fi
fi
fi
done
EOF
chmod +x /root/detect-conflicts.sh
Synchronization Performance Optimization
Optimize rsync Performance
# Parallel rsync for large directories (requires GNU parallel)
cat > /root/parallel-rsync.sh << 'EOF'
#!/bin/bash
SOURCE="/var/www/html"
DEST="user@new-server:/var/www/html"
PARALLEL_JOBS=4
export SOURCE DEST
# Sync top-level directories in parallel (-mindepth 1 keeps $SOURCE
# itself out of the list; files sitting directly under $SOURCE still
# need one ordinary rsync pass)
find $SOURCE -mindepth 1 -maxdepth 1 -type d | \
parallel -j $PARALLEL_JOBS \
rsync -avz --delete {} $DEST/
EOF
chmod +x /root/parallel-rsync.sh
# Optimize rsync with custom settings
rsync -avz \
--compress-level=6 \
--block-size=131072 \
--partial-dir=/tmp/rsync-partial \
--timeout=300 \
--bwlimit=0 \
-e "ssh -c aes128-gcm@openssh.com -o Compression=no" \
/source/ user@dest:/destination/
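When rsync runs every few minutes over SSH, connection setup itself becomes measurable overhead. SSH connection multiplexing amortizes one handshake across many runs; a minimal client-side config fragment (the new-server host alias is an assumption from the examples above):

```shell
# Enable SSH connection multiplexing for root's connections to new-server
mkdir -p /root/.ssh && chmod 700 /root/.ssh
cat >> /root/.ssh/config << 'EOF'
Host new-server
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h-%p
    ControlPersist 10m
EOF
chmod 600 /root/.ssh/config
```

Subsequent ssh and rsync invocations to new-server then reuse the master connection, which persists for 10 minutes after the last session closes.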
Database Sync Optimization
# Optimize MySQL replication
sudo nano /etc/mysql/mysql.conf.d/mysqld.cnf
# Add optimizations:
# binlog_format = ROW # More efficient for replication
# sync_binlog = 0 # Less disk I/O (less safe, acceptable for migration)
# innodb_flush_log_at_trx_commit = 2 # Better performance during sync
# On slave:
# slave_parallel_workers = 4 # Parallel replication
# slave_parallel_type = LOGICAL_CLOCK
sudo systemctl restart mysql
Final Synchronization and Cutover
Pre-Cutover Final Sync
cat > /root/final-sync.sh << 'EOF'
#!/bin/bash
set -e
LOG="/var/log/final-sync.log"
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a $LOG
}
log "=== Starting final synchronization ==="
# Stop continuous sync processes
log "Stopping continuous sync processes"
pkill -f continuous-sync.sh
pkill -f inotify-sync.sh
systemctl stop lsyncd
# Wait for processes to finish
sleep 5
# Final file sync
log "Performing final file synchronization"
rsync -avz --delete \
--checksum \
/var/www/html/ \
user@new-server:/var/www/html/ | tee -a $LOG
# Verify database replication is caught up (slave status lives on the
# new server, so query it over SSH)
log "Checking database replication status"
LAG=$(ssh user@new-server "mysql -u root -pPASSWORD -e 'SHOW SLAVE STATUS\G'" | \
grep Seconds_Behind_Master | awk '{print $2}')
log "Current replication lag: $LAG seconds"
while [ "$LAG" != "0" ]; do
if [ "$LAG" = "NULL" ] || [ -z "$LAG" ]; then
log "ERROR: replication is not running on the new server -- aborting"
exit 1
fi
log "Waiting for replication to catch up... ($LAG seconds behind)"
sleep 5
LAG=$(ssh user@new-server "mysql -u root -pPASSWORD -e 'SHOW SLAVE STATUS\G'" | \
grep Seconds_Behind_Master | awk '{print $2}')
done
log "Database fully synchronized"
# Stop writes to source database (FLUSH TABLES WITH READ LOCK is only
# held while the issuing mysql session stays open, so a lock taken here
# would be released immediately; a persistent global flag is what we need)
log "Setting source database to read-only"
mysql -u root -pPASSWORD << 'MYSQL'
SET GLOBAL read_only = ON;
MYSQL
# Wait for final replication
sleep 10
# Stop replication on new server
log "Stopping replication on new server"
ssh user@new-server "mysql -u root -pPASSWORD << 'MYSQL'
STOP SLAVE;
RESET SLAVE ALL;
SET GLOBAL read_only = OFF;
MYSQL"
log "=== Final synchronization complete ==="
log "Ready for cutover to new server"
EOF
chmod +x /root/final-sync.sh
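Before the real cutover window, it is worth rehearsing the final pass as a dry run: the same checksum rsync with -n transfers nothing but shows what would change and how long the full scan takes, which is exactly the number you need when sizing the maintenance window. A sketch with the same assumed paths:

```shell
cat > /root/rehearse-final-sync.sh << 'EOF'
#!/bin/bash
# Dry run of the final checksum sync: -n makes no changes, --stats
# reports how much data the real pass would actually move
rsync -anvc --delete --stats \
/var/www/html/ \
user@new-server:/var/www/html/
EOF
chmod +x /root/rehearse-final-sync.sh
```

Time this rehearsal; the real final sync will take at least as long as the checksum scan, plus transfer time for whatever the stats report.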
Verification After Synchronization
Data Integrity Verification
cat > /root/verify-sync.sh << 'EOF'
#!/bin/bash
REPORT="/var/log/sync-verification-$(date +%F-%H%M%S).txt"
echo "=== Synchronization Verification Report ===" > $REPORT
echo "Generated: $(date)" >> $REPORT
echo "" >> $REPORT
# File count comparison
echo "--- File Count Comparison ---" >> $REPORT
SOURCE_COUNT=$(find /var/www/html -type f | wc -l)
DEST_COUNT=$(ssh user@new-server "find /var/www/html -type f | wc -l")
echo "Source files: $SOURCE_COUNT" >> $REPORT
echo "Destination files: $DEST_COUNT" >> $REPORT
if [ $SOURCE_COUNT -eq $DEST_COUNT ]; then
echo "Status: PASS" >> $REPORT
else
echo "Status: FAIL - File count mismatch" >> $REPORT
fi
echo "" >> $REPORT
# Size comparison
echo "--- Size Comparison ---" >> $REPORT
SOURCE_SIZE=$(du -sb /var/www/html | awk '{print $1}')
DEST_SIZE=$(ssh user@new-server "du -sb /var/www/html | awk '{print \$1}'")
echo "Source size: $SOURCE_SIZE bytes" >> $REPORT
echo "Destination size: $DEST_SIZE bytes" >> $REPORT
if [ $SOURCE_SIZE -eq $DEST_SIZE ]; then
echo "Status: PASS" >> $REPORT
else
echo "Status: WARNING - Size mismatch" >> $REPORT
fi
echo "" >> $REPORT
# Database comparison
echo "--- Database Comparison ---" >> $REPORT
# Compare table counts
mysql -u root -pPASSWORD -e "
SELECT
TABLE_SCHEMA,
COUNT(*) as table_count
FROM information_schema.TABLES
WHERE TABLE_SCHEMA NOT IN ('information_schema', 'mysql', 'performance_schema')
GROUP BY TABLE_SCHEMA
" > /tmp/source-tables.txt
ssh user@new-server "mysql -u root -pPASSWORD -e \"
SELECT
TABLE_SCHEMA,
COUNT(*) as table_count
FROM information_schema.TABLES
WHERE TABLE_SCHEMA NOT IN ('information_schema', 'mysql', 'performance_schema')
GROUP BY TABLE_SCHEMA
\"" > /tmp/dest-tables.txt
echo "Table counts:" >> $REPORT
cat /tmp/source-tables.txt >> $REPORT
echo "" >> $REPORT
cat /tmp/dest-tables.txt >> $REPORT
# Compare row counts for critical tables (note: TABLE_ROWS is only an
# estimate for InnoDB; run SELECT COUNT(*) on critical tables for exact figures)
mysql -u root -pPASSWORD your_database -e "
SELECT
TABLE_NAME,
TABLE_ROWS
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'your_database'
ORDER BY TABLE_NAME
" >> $REPORT
echo "" >> $REPORT
echo "=== Verification Complete ===" >> $REPORT
cat $REPORT
EOF
chmod +x /root/verify-sync.sh
Best Practices for Synchronization
1. Start Early
- Begin synchronization well before cutover
- Perform multiple sync passes to reduce final sync time
- Allow time to identify and resolve issues
2. Monitor Continuously
- Track synchronization lag in real-time
- Set up alerts for sync failures
- Monitor resource usage on both servers
3. Test Thoroughly
- Verify synchronization with checksums
- Test application functionality during sync
- Validate data integrity regularly
4. Plan for Failures
- Implement automatic retry logic
- Maintain detailed logs of sync operations
- Have rollback procedures ready
5. Optimize Performance
- Use appropriate sync intervals
- Implement bandwidth limiting during peak hours
- Leverage compression for slow networks
6. Maintain Communication
- Keep stakeholders informed of sync status
- Document any issues and resolutions
- Provide regular progress updates
Conclusion
Effective synchronization during migration is the cornerstone of zero-downtime server transitions. Key takeaways:
- Choose the right tools: rsync for files, replication for databases
- Monitor continuously: Track lag and detect issues immediately
- Optimize for your scenario: Balance between performance and safety
- Verify thoroughly: Always confirm data integrity after sync
- Plan the final sync: Minimize the gap between last sync and cutover
- Document everything: Maintain detailed logs and reports
- Test extensively: Validate sync procedures before production use
Whether synchronizing terabytes of files or real-time database transactions, these strategies ensure data consistency and minimize risk during server migrations. Master these techniques, and you'll confidently execute even the most complex migration scenarios with minimal downtime and zero data loss.
Remember: Synchronization isn't just about copying data—it's about maintaining operational continuity while transforming your infrastructure.


