Live VM Migration: Complete Guide
Introduction
Live migration is one of the most powerful features of modern virtualization platforms, enabling virtual machines to move between physical hosts with near-zero downtime. This capability is essential for load balancing, hardware maintenance, disaster recovery, and optimizing resource utilization in production environments.
This comprehensive guide explores KVM/QEMU live migration using libvirt, covering everything from basic setup to advanced migration scenarios. Whether you're managing a small cluster or orchestrating migrations across data centers, understanding the technical details, requirements, and best practices is crucial for successful implementation.
Live migration works by transferring a running VM's memory, CPU state, and device state from one host to another while the VM continues to execute. The process is transparent to applications and users, with only a brief performance impact during the final switchover phase. Modern implementations can migrate VMs in seconds with imperceptible downtime.
By the end of this guide, you'll master live migration configuration, understand different migration types, optimize migration performance, and troubleshoot common issues that arise in production environments.
Understanding Live Migration
What is Live Migration?
Live migration (also called hot migration) is the process of moving a running virtual machine from one physical host to another without stopping the VM or interrupting running services. The entire VM state, including memory contents, CPU registers, and device states, is transferred to the destination host.
Migration Process Phases
┌────────────────────────────────────────┐
│ Phase 1: Pre-Migration Setup │
│ - Verify compatibility │
│ - Check resources │
│ - Establish connection │
└──────────────┬─────────────────────────┘
│
┌──────────────▼─────────────────────────┐
│ Phase 2: Iterative Pre-Copy │
│ - Copy memory pages │
│ - Track dirty pages │
│ - Re-copy modified pages │
└──────────────┬─────────────────────────┘
│
┌──────────────▼─────────────────────────┐
│ Phase 3: Stop-and-Copy │
│ - Pause VM briefly │
│ - Transfer remaining state │
│ - ~100-500ms downtime │
└──────────────┬─────────────────────────┘
│
┌──────────────▼─────────────────────────┐
│ Phase 4: Post-Migration │
│ - Resume VM on destination │
│ - Clean up source │
│ - Update network/storage │
└────────────────────────────────────────┘
Types of Migration
1. Live Migration (Hot Migration)
- VM remains running throughout
- Memory and state copied while VM executes
- Brief pause during final switchover
- Near-zero downtime (typically imperceptible)
2. Offline Migration (Cold Migration)
- VM stopped before migration
- Faster transfer (no dirty page tracking)
- Downtime during entire process
- Simpler, more reliable
3. Live Storage Migration
- Move VM disk images while running
- Can combine with live migration
- Useful for storage maintenance
4. Post-Copy Migration
- Start VM on destination immediately
- Fetch memory pages on-demand
- Lower total migration time
- Risk if network fails
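As a quick orientation, these types map onto virsh migrate roughly as follows (a sketch using the qemu+ssh connection style used throughout this guide; the full commands are covered in later sections):
# 1. Live migration
virsh migrate --live vm-name qemu+ssh://destination-host/system
# 2. Offline migration of the definition only (VM shut down, shared storage assumed)
virsh migrate --offline --persistent vm-name qemu+ssh://destination-host/system
# 3. Live storage migration (disk images copied as well)
virsh migrate --live --copy-storage-all vm-name qemu+ssh://destination-host/system
# 4. Post-copy migration
virsh migrate --live --postcopy vm-name qemu+ssh://destination-host/system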
Prerequisites and Requirements
Host Configuration Requirements
Both source and destination hosts must have:
# 1. Same CPU architecture
lscpu | grep "Model name"
lscpu | grep "Architecture"
# 2. Compatible CPU features
virsh capabilities | grep -A 20 "<cpu>"
# 3. Same libvirt version (or compatible)
virsh version
# 4. KVM/QEMU installed
which qemu-system-x86_64
lsmod | grep kvm
# 5. Shared storage or storage migration configured
# (for disk images)
# 6. Network connectivity
ping destination-host
Network Requirements
# Low latency network (preferably dedicated)
ping -c 100 destination-host | tail -n 1
# RTT should be < 1ms for best performance
# High bandwidth (1Gbps minimum, 10Gbps recommended)
iperf3 -s # On destination
iperf3 -c destination-host -t 30 # On source
# Open required ports
# libvirt defaults: 16514 (TLS) or 16509 (TCP)
# qemu+ssh: 22 (SSH)
Storage Requirements
Option 1: Shared Storage (Recommended)
# NFS shared storage
# Mount same NFS share on both hosts
mount -t nfs nfs-server:/exports/vms /var/lib/libvirt/images
# Verify both hosts see same storage
ls -la /var/lib/libvirt/images/
# Add to /etc/fstab for persistence
echo "nfs-server:/exports/vms /var/lib/libvirt/images nfs defaults 0 0" >> /etc/fstab
Option 2: Storage Migration
# Copy disk images during migration
# Requires sufficient bandwidth
# Takes longer than shared storage migration
CPU Compatibility
# Check CPU flags on both hosts
virsh capabilities | grep features
# Use host-passthrough or host-model carefully
# Better: Use named CPU model for compatibility
# View available CPU models
virsh domcapabilities | grep -A 50 cpu
qemu-system-x86_64 -cpu help
# Configure VM with compatible CPU
virsh edit vm-name
<cpu mode='custom' match='exact'>
<model>Broadwell</model>
</cpu>
# Or use host-model with feature checking
<cpu mode='host-model'>
<model fallback='forbid'/>
</cpu>
Setting Up Shared Storage
NFS Configuration
On NFS Server:
# Install NFS server
apt install nfs-kernel-server # Debian/Ubuntu
dnf install nfs-utils # RHEL/CentOS
# Create export directory
mkdir -p /exports/vms
# Ownership should match the QEMU user on the KVM hosts (qemu on RHEL/CentOS, libvirt-qemu on Debian/Ubuntu)
chown -R qemu:qemu /exports/vms
chmod 755 /exports/vms
# Configure exports
cat >> /etc/exports << 'EOF'
/exports/vms 192.168.1.0/24(rw,sync,no_root_squash,no_subtree_check)
EOF
# Apply changes
exportfs -arv
# Start NFS
systemctl enable nfs-server
systemctl start nfs-server
# Verify exports
showmount -e localhost
On KVM Hosts (both source and destination):
# Install NFS client
apt install nfs-common # Debian/Ubuntu
dnf install nfs-utils # RHEL/CentOS
# Create mount point
mkdir -p /var/lib/libvirt/images
# Mount NFS share
mount -t nfs nfs-server:/exports/vms /var/lib/libvirt/images
# Test write access
touch /var/lib/libvirt/images/test
rm /var/lib/libvirt/images/test
# Add to fstab
echo "nfs-server:/exports/vms /var/lib/libvirt/images nfs defaults 0 0" >> /etc/fstab
# Verify mount
df -h | grep vms
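# On SELinux-enforcing hosts (RHEL/CentOS), allow QEMU to use NFS-backed images
setsebool -P virt_use_nfs 1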
Configuring Storage Pool
# Create storage pool on shared storage
cat > nfs-pool.xml << 'EOF'
<pool type='dir'>
<name>nfs-pool</name>
<target>
<path>/var/lib/libvirt/images</path>
</target>
</pool>
EOF
# Define pool on both hosts
virsh pool-define nfs-pool.xml
virsh pool-start nfs-pool
virsh pool-autostart nfs-pool
# Verify
virsh pool-list
virsh pool-info nfs-pool
Configuring Hosts for Migration
Network Configuration
Using libvirtd TCP (Unencrypted - Test Only):
# Edit /etc/libvirt/libvirtd.conf on both hosts
sudo vim /etc/libvirt/libvirtd.conf
# Uncomment and modify:
listen_tls = 0
listen_tcp = 1
tcp_port = "16509"
auth_tcp = "none" # For testing only!
# Edit /etc/default/libvirtd (Debian/Ubuntu)
sudo vim /etc/default/libvirtd
libvirtd_opts="--listen"
# Or /etc/sysconfig/libvirtd (RHEL/CentOS)
LIBVIRTD_ARGS="--listen"
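# Note: newer libvirt releases (5.6+) use systemd socket activation, where
# --listen is not permitted; enable the TCP socket there instead:
# sudo systemctl enable --now libvirtd-tcp.socket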
# Restart libvirtd
sudo systemctl restart libvirtd
# Verify listening
ss -tulpn | grep 16509
Using libvirtd TLS (Secure - Production):
# Generate TLS certificates (on certificate authority)
mkdir -p /etc/pki/CA
cd /etc/pki/CA
# Create CA
certtool --generate-privkey > cakey.pem
cat > ca.info << EOF
cn = CA
ca
cert_signing_key
EOF
certtool --generate-self-signed --load-privkey cakey.pem \
--template ca.info --outfile cacert.pem
# Create server certificates for each host
certtool --generate-privkey > serverkey.pem
cat > server.info << EOF
organization = MyOrg
cn = host1.example.com
tls_www_server
encryption_key
signing_key
EOF
certtool --generate-certificate --load-privkey serverkey.pem \
--load-ca-certificate cacert.pem --load-ca-privkey cakey.pem \
--template server.info --outfile servercert.pem
# Install certificates on both hosts
sudo mkdir -p /etc/pki/libvirt/private
sudo cp cacert.pem /etc/pki/CA/
sudo cp servercert.pem /etc/pki/libvirt/
sudo cp serverkey.pem /etc/pki/libvirt/private/
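# Hosts that initiate migrations also need a client certificate (same procedure,
# but with tls_www_client in place of tls_www_server in the template), installed as
# /etc/pki/libvirt/clientcert.pem and /etc/pki/libvirt/private/clientkey.pem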
# Configure libvirtd for TLS
sudo vim /etc/libvirt/libvirtd.conf
listen_tls = 1
listen_tcp = 0
# Restart libvirtd
sudo systemctl restart libvirtd
Using SSH (Simplest - Recommended):
# No special libvirtd configuration needed
# Just setup SSH key authentication
# On source host
ssh-keygen -t rsa -b 4096
# Copy key to destination
ssh-copy-id root@destination-host
# Test connection
ssh root@destination-host 'virsh version'
# This is the recommended method!
Firewall Configuration
# Allow libvirt ports
# TCP: 16509 (non-TLS), 16514 (TLS)
# SSH: 22
# Debian/Ubuntu (UFW)
ufw allow 16509/tcp
ufw allow 16514/tcp
ufw allow 22/tcp
# RHEL/CentOS (firewalld)
firewall-cmd --permanent --add-port=16509/tcp
firewall-cmd --permanent --add-port=16514/tcp
firewall-cmd --permanent --add-service=ssh
firewall-cmd --reload
# iptables directly
iptables -A INPUT -p tcp --dport 16509 -j ACCEPT
iptables -A INPUT -p tcp --dport 16514 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
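# Non-tunnelled migrations also stream VM state over QEMU's migration data ports
# (TCP 49152-49215 by default; set by migration_port_min/max in /etc/libvirt/qemu.conf),
# so open that range between the hosts as well
iptables -A INPUT -p tcp --dport 49152:49215 -j ACCEPT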
Performing Live Migration
Basic Live Migration
Using virsh (SSH method - recommended):
# Syntax:
# virsh migrate [options] domain desturi [migrateuri] [dname]
# Simple migration using SSH
virsh migrate --live ubuntu-vm qemu+ssh://destination-host/system
# With verbose output
virsh migrate --live --verbose ubuntu-vm qemu+ssh://destination-host/system
# Persistent migration (keep configuration)
virsh migrate --live --persistent ubuntu-vm qemu+ssh://destination-host/system
# Undefine source after migration
virsh migrate --live --persistent --undefinesource ubuntu-vm \
qemu+ssh://destination-host/system
Using TCP connection:
# Direct TCP migration
virsh migrate --live ubuntu-vm qemu+tcp://destination-host/system
# With custom port
virsh migrate --live ubuntu-vm qemu+tcp://destination-host:16509/system
Monitoring migration progress:
# Monitor in separate terminal
watch -n 1 'virsh domjobinfo ubuntu-vm'
# Shows:
# - Time elapsed
# - Data processed
# - Data remaining
# - Memory transfer rate
# - Migration status
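# Cancel an in-progress migration if it is misbehaving
virsh domjobabort ubuntu-vm
# After completion, statistics for the finished job remain available (recent libvirt)
virsh domjobinfo ubuntu-vm --completed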
Peer-to-Peer Migration
# P2P migration: the source libvirtd controls the migration and talks directly to the destination
virsh migrate --live --p2p ubuntu-vm qemu+ssh://destination-host/system
# P2P with tunneled migration (encrypted)
virsh migrate --live --p2p --tunnelled ubuntu-vm \
qemu+ssh://destination-host/system
# Advantages:
# - Simpler network topology
# - Only need source->dest connectivity
# - Automatic destination URI selection
Migration with Custom Options
# Specify the migration URI for data transfer (e.g. a dedicated migration network);
# not applicable to tunnelled migrations, which stream through the libvirtd connection
virsh migrate --live --p2p \
--migrateuri tcp://192.168.100.1:49152 \
ubuntu-vm qemu+ssh://destination-host/system
# Set bandwidth limit (MB/s)
virsh migrate --live --verbose --bandwidth 100 \
ubuntu-vm qemu+ssh://destination-host/system
# Leave the VM paused on the destination after migration (resume manually)
virsh migrate --live --suspend ubuntu-vm qemu+ssh://destination-host/system
# Change VM name on destination
virsh migrate --live ubuntu-vm qemu+ssh://destination-host/system \
--dname ubuntu-vm-migrated
# Unsafe migration (skip safety checks - use carefully!)
virsh migrate --live --unsafe ubuntu-vm qemu+ssh://destination-host/system
Live Storage Migration
Migrate VM with non-shared storage:
# Copy disk during migration
virsh migrate --live --copy-storage-all ubuntu-vm \
qemu+ssh://destination-host/system
# Copy only incremental changes (if disk partially exists)
virsh migrate --live --copy-storage-inc ubuntu-vm \
qemu+ssh://destination-host/system
# Specify destination disk paths
virsh migrate --live --copy-storage-all \
--migrate-disks vda \
ubuntu-vm qemu+ssh://destination-host/system
# This is much slower due to disk copying!
# Monitor with: virsh domjobinfo ubuntu-vm
Offline (Cold) Migration
# Shutdown VM first
virsh shutdown ubuntu-vm
# Wait for shutdown
while [ "$(virsh domstate ubuntu-vm)" != "shut off" ]; do
sleep 1
done
# Migrate configuration and disk
virsh dumpxml ubuntu-vm > ubuntu-vm.xml
scp ubuntu-vm.xml root@destination-host:/tmp/
scp /var/lib/libvirt/images/ubuntu-vm.qcow2 \
root@destination-host:/var/lib/libvirt/images/
# On destination host
virsh define /tmp/ubuntu-vm.xml
virsh start ubuntu-vm
# Remove from source
virsh undefine ubuntu-vm
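With shared storage in place, the definition alone can also be moved in a single step (a sketch; --offline requires --persistent and copies no disk data):
virsh migrate --offline --persistent --undefinesource ubuntu-vm \
qemu+ssh://destination-host/system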
Advanced Migration Scenarios
Compressed Migration
# Use compression to reduce bandwidth (higher CPU usage)
virsh migrate --live --compressed ubuntu-vm \
qemu+ssh://destination-host/system
# Adjust compression level and threads
virsh migrate --live --compressed \
--comp-methods mt \
--comp-mt-level 9 \
--comp-mt-threads 4 \
ubuntu-vm qemu+ssh://destination-host/system
Auto-Converge Migration
# Automatically throttle VM if migration not converging
virsh migrate --live --auto-converge ubuntu-vm \
qemu+ssh://destination-host/system
# Set initial and increment throttle
virsh migrate --live --auto-converge \
--auto-converge-initial 20 \
--auto-converge-increment 10 \
ubuntu-vm qemu+ssh://destination-host/system
# Helps with memory-intensive workloads
Post-Copy Migration
# Start VM on destination before full memory transfer
virsh migrate --live --postcopy ubuntu-vm \
qemu+ssh://destination-host/system
# Switch an in-progress migration to post-copy mode
# (the migration must have been started with the --postcopy flag)
virsh migrate-setmaxdowntime ubuntu-vm 1000
virsh migrate-postcopy ubuntu-vm
# Advantages:
# - Guaranteed convergence
# - Lower total migration time
# - Shorter pre-copy phase
# Disadvantages:
# - A network failure mid-migration can leave the VM unrecoverable
# - Degraded guest performance until the remaining pages arrive
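If the hand-over to post-copy should happen automatically once a full pre-copy pass has been sent, recent libvirt versions provide a dedicated flag (a sketch):
virsh migrate --live --postcopy --postcopy-after-precopy ubuntu-vm \
qemu+ssh://destination-host/system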
Migration with Persistent Configuration
# Persist VM on destination, remove from source
virsh migrate --live --persistent --undefinesource ubuntu-vm \
qemu+ssh://destination-host/system
# Keep VM defined on both hosts (non-exclusive)
virsh migrate --live --persistent ubuntu-vm \
qemu+ssh://destination-host/system
# Verify on destination
ssh root@destination-host 'virsh list --all'
Multi-VM Migration
#!/bin/bash
# Migrate multiple VMs sequentially
VMS=("web1" "web2" "web3")
DEST="qemu+ssh://destination-host/system"
for vm in "${VMS[@]}"; do
echo "Migrating $vm..."
virsh migrate --live --verbose --persistent --undefinesource \
"$vm" "$DEST"
if [ $? -eq 0 ]; then
echo "$vm migrated successfully"
else
echo "ERROR: Failed to migrate $vm"
exit 1
fi
done
echo "All VMs migrated successfully"
Parallel Migration (Multiple VMs)
#!/bin/bash
# Migrate VMs in parallel (use carefully!)
VMS=("web1" "web2" "web3")
DEST="qemu+ssh://destination-host/system"
for vm in "${VMS[@]}"; do
virsh migrate --live --verbose "$vm" "$DEST" &
done
# Wait for all background migrations to complete
wait
echo "All migrations completed"
Performance Optimization
Bandwidth Management
# Set migration bandwidth limit (MB/s)
virsh migrate-setspeed ubuntu-vm 100
# Check current bandwidth limit
virsh migrate-getspeed ubuntu-vm
# Set during migration
virsh migrate --live --bandwidth 200 ubuntu-vm \
qemu+ssh://destination-host/system
# Dynamic adjustment during migration
virsh migrate-setspeed ubuntu-vm 150
Downtime Management
# Set maximum tolerable downtime (milliseconds)
virsh migrate-setmaxdowntime ubuntu-vm 500
# Default is usually 300ms
# Lower values may cause migration to fail
# Higher values reduce total migration time
# Check if downtime threshold can be met
virsh domjobinfo ubuntu-vm | grep downtime
Network Tuning
# Use dedicated migration network
virsh migrate --live --migrateuri tcp://10.0.1.1:49152 \
ubuntu-vm qemu+ssh://destination-host/system
# Configure multi-queue virtio-net for better performance
virsh edit ubuntu-vm
<interface type='network'>
<source network='default'/>
<model type='virtio'/>
<driver name='vhost' queues='4'/>
</interface>
# Enable large MTU (jumbo frames) on migration network
ip link set dev eth1 mtu 9000
Memory Optimization
# Enable memory balloon for better migration
virsh edit ubuntu-vm
<memballoon model='virtio'>
<stats period='10'/>
</memballoon>
# Reduce VM memory before migration
virsh setmem ubuntu-vm 2G
# Enable huge pages for faster migration
virsh edit ubuntu-vm
<memoryBacking>
<hugepages/>
</memoryBacking>
# Verify huge pages on host
cat /proc/meminfo | grep Huge
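# If no huge pages are reserved yet, allocate them on both hosts
# (example: 1024 x 2 MiB pages; persist the setting in /etc/sysctl.conf)
sysctl vm.nr_hugepages=1024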
CPU Configuration
# Use CPU pinning for consistent performance
virsh vcpupin ubuntu-vm 0 0
virsh vcpupin ubuntu-vm 1 1
# Ensure compatible CPU model
virsh edit ubuntu-vm
<cpu mode='custom' match='exact'>
<model>Broadwell</model>
<feature policy='require' name='pdpe1gb'/>
</cpu>
# Check CPU compatibility
virsh cpu-compare cpu.xml
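For the cpu-compare check, cpu.xml can be extracted from the other host's capabilities; one illustrative way (assuming SSH access to the destination host):
ssh root@destination-host "virsh capabilities" | awk '/<cpu>/,/<\/cpu>/' > cpu.xml
virsh cpu-compare cpu.xml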
Monitoring and Troubleshooting
Monitoring Migration Progress
# Real-time migration statistics
virsh domjobinfo ubuntu-vm
# Output includes:
# Job type: Unbounded
# Time elapsed: 42123 ms
# Data processed: 2.5 GiB
# Data remaining: 512 MiB
# Memory processed: 2.3 GiB
# Memory remaining: 256 MiB
# Memory bandwidth: 128 MiB/s
# Continuous monitoring
watch -n 1 'virsh domjobinfo ubuntu-vm'
# Script for detailed monitoring
#!/bin/bash
while true; do
clear
virsh domjobinfo ubuntu-vm
sleep 1
done
Migration Logs
# Check libvirt logs
tail -f /var/log/libvirt/libvirtd.log
# QEMU logs for specific VM
tail -f /var/log/libvirt/qemu/ubuntu-vm.log
# System logs
journalctl -u libvirtd -f
# Filter for migration events
journalctl -u libvirtd | grep -i migrate
Common Issues and Solutions
Issue: Migration stalls or never completes
# Check if VM has high memory churn
virsh domjobinfo ubuntu-vm | grep "Memory bandwidth"
# Solutions:
# 1. Enable auto-converge
virsh migrate --live --auto-converge ubuntu-vm qemu+ssh://dest/system
# 2. Increase bandwidth
virsh migrate-setspeed ubuntu-vm 500
# 3. Use compression
virsh migrate --live --compressed ubuntu-vm qemu+ssh://dest/system
# 4. Switch to post-copy
virsh migrate-postcopy ubuntu-vm
# 5. Reduce VM workload temporarily
Issue: CPU compatibility error
# Error: "migration of domain failed: Unsafe migration..."
# Check CPU compatibility
virsh capabilities | grep features
# Solution: Use compatible CPU mode
virsh edit ubuntu-vm
# Change from:
<cpu mode='host-passthrough'/>
# To:
<cpu mode='host-model'>
<model fallback='allow'/>
</cpu>
# Or use specific model:
<cpu mode='custom' match='exact'>
<model>Westmere</model>
</cpu>
# Force unsafe migration (testing only)
virsh migrate --live --unsafe ubuntu-vm qemu+ssh://dest/system
Issue: Network unreachable after migration
# Verify network configuration matches
virsh net-list --all # On both hosts
# Check bridge configuration
brctl show # On both hosts (brctl is deprecated; 'ip link show type bridge' also works)
# Verify VM network interface
virsh domiflist ubuntu-vm
# Solutions:
# 1. Ensure same network names on both hosts
# 2. Use bridge mode for consistency
# 3. Configure proper routing between hosts
Issue: Storage not accessible
# Verify shared storage mounted on both hosts
df -h | grep libvirt
# Check NFS connectivity
showmount -e nfs-server
# Verify file permissions
ls -la /var/lib/libvirt/images/
# Solutions:
# 1. Mount shared storage on destination
# 2. Use --copy-storage-all if no shared storage
# 3. Check NFS/storage connectivity
Issue: Permission denied
# Check libvirt permissions
ls -la /var/run/libvirt/
# Verify group membership
groups $USER
# Check SELinux/AppArmor
getenforce # SELinux
aa-status # AppArmor
# Solutions:
# 1. Add user to libvirt group
usermod -aG libvirt $USER
# 2. Configure SELinux
setsebool -P virt_use_nfs 1
# 3. Use root for testing
Issue: Slow migration performance
# Check network bandwidth
iperf3 -c destination-host
# Check CPU usage
top
htop
# Monitor disk I/O
iotop
# Solutions:
# 1. Increase migration bandwidth
virsh migrate-setspeed ubuntu-vm 1000
# 2. Use compression if CPU available
virsh migrate --live --compressed ubuntu-vm qemu+ssh://dest/system
# 3. Use dedicated migration network with jumbo frames
# 4. Reduce VM memory or pause workload
# 5. Enable auto-converge
Migration Scripts and Automation
Pre-Migration Check Script
#!/bin/bash
# pre-migration-check.sh
VM=$1
DEST_HOST=$2
if [ -z "$VM" ] || [ -z "$DEST_HOST" ]; then
echo "Usage: $0 <vm-name> <destination-host>"
exit 1
fi
echo "Pre-migration checks for $VM to $DEST_HOST"
echo "============================================"
# Check VM exists and is running
if ! virsh domstate "$VM" | grep -q "running"; then
echo "ERROR: VM $VM is not running"
exit 1
fi
echo "✓ VM is running"
# Check destination host reachable
if ! ping -c 1 "$DEST_HOST" &>/dev/null; then
echo "ERROR: Cannot reach destination host"
exit 1
fi
echo "✓ Destination host reachable"
# Check SSH connectivity
if ! ssh root@"$DEST_HOST" 'exit' &>/dev/null; then
echo "ERROR: Cannot SSH to destination"
exit 1
fi
echo "✓ SSH connectivity OK"
# Check destination has KVM
if ! ssh root@"$DEST_HOST" 'virsh version' &>/dev/null; then
echo "ERROR: KVM not available on destination"
exit 1
fi
echo "✓ KVM available on destination"
# Check CPU compatibility
echo "✓ CPU compatibility (manual verification recommended)"
# Check memory availability
VM_MEM=$(virsh dominfo "$VM" | grep "Max memory" | awk '{print $3}')
DEST_FREE=$(ssh root@"$DEST_HOST" "free | grep Mem | awk '{print \$4}'")
if [ "$DEST_FREE" -lt "$VM_MEM" ]; then
echo "WARNING: Low memory on destination"
fi
echo "✓ Memory check complete"
# Check shared storage
VM_DISK=$(virsh domblklist "$VM" | awk 'NR>2 {print $2}' | head -n 1)
if ! ssh root@"$DEST_HOST" "ls $VM_DISK" &>/dev/null; then
echo "ERROR: Disk not accessible on destination"
echo " Consider using --copy-storage-all"
exit 1
fi
echo "✓ Shared storage accessible"
echo ""
echo "All checks passed! Ready to migrate."
echo ""
echo "Suggested command:"
echo "virsh migrate --live --verbose --persistent --undefinesource \\"
echo " $VM qemu+ssh://$DEST_HOST/system"
Automated Migration Script
#!/bin/bash
# migrate-vm.sh
VM=$1
DEST=$2
BANDWIDTH=500 # MB/s
if [ -z "$VM" ] || [ -z "$DEST" ]; then
echo "Usage: $0 <vm-name> <destination-host>"
exit 1
fi
DEST_URI="qemu+ssh://${DEST}/system"
echo "Starting migration of $VM to $DEST"
echo "===================================="
# Pre-migration checks
echo "Running pre-migration checks..."
if ! bash pre-migration-check.sh "$VM" "$DEST"; then
echo "Pre-migration checks failed!"
exit 1
fi
# Create snapshot before migration (safety)
echo "Creating safety snapshot..."
SNAPSHOT="pre-migrate-$(date +%Y%m%d-%H%M%S)"
virsh snapshot-create-as "$VM" "$SNAPSHOT" "Pre-migration snapshot"
# Perform migration
echo "Starting migration..."
virsh migrate --live --verbose --persistent --undefinesource \
--bandwidth "$BANDWIDTH" \
--auto-converge \
"$VM" "$DEST_URI" 2>&1 | tee migration-${VM}.log
if [ ${PIPESTATUS[0]} -eq 0 ]; then
echo "Migration completed successfully!"
# Verify VM is running on destination
if ssh root@"$DEST" "virsh domstate $VM" | grep -q "running"; then
echo "VM verified running on destination"
# Delete safety snapshot
virsh snapshot-delete "$VM" "$SNAPSHOT" 2>/dev/null || true
else
echo "WARNING: VM may not be running on destination"
fi
else
echo "Migration failed!"
echo "Check migration-${VM}.log for details"
exit 1
fi
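A typical invocation of the script above (hypothetical VM and host names):
bash migrate-vm.sh web1 host2.example.com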
Load Balancing Migration Script
#!/bin/bash
# load-balance.sh
# Migrate VMs to balance load across hosts
HOSTS=("host1" "host2" "host3")
THRESHOLD=70 # CPU usage threshold
for host in "${HOSTS[@]}"; do
CPU_USAGE=$(ssh root@"$host" "top -bn1 | grep 'Cpu(s)' | awk '{print \$2}' | cut -d'%' -f1")
if (( $(echo "$CPU_USAGE > $THRESHOLD" | bc -l) )); then
echo "Host $host overloaded (${CPU_USAGE}%)"
# Find least loaded host
MIN_HOST=""
MIN_LOAD=100
for target in "${HOSTS[@]}"; do
if [ "$target" != "$host" ]; then
LOAD=$(ssh root@"$target" "top -bn1 | grep 'Cpu(s)' | awk '{print \$2}' | cut -d'%' -f1")
if (( $(echo "$LOAD < $MIN_LOAD" | bc -l) )); then
MIN_LOAD=$LOAD
MIN_HOST=$target
fi
fi
done
# Pick a VM to move (here simply the first running VM; adjust the selection policy as needed)
VM=$(ssh root@"$host" "virsh list --name | head -n 1")
echo "Migrating $VM from $host to $MIN_HOST"
ssh root@"$host" "virsh migrate --live $VM qemu+ssh://${MIN_HOST}/system"
fi
done
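The load-balancing script is meant to run periodically, for example from cron (illustrative path and schedule):
# /etc/cron.d/load-balance
*/15 * * * * root /usr/local/bin/load-balance.sh >> /var/log/load-balance.log 2>&1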
Best Practices
Planning and Preparation
- Always test migrations in dev/staging first
- Verify CPU compatibility before production migrations
- Use shared storage when possible
- Configure dedicated migration network for production
- Document migration procedures and runbooks
Performance Considerations
# 1. Use compression for WAN migrations
virsh migrate --live --compressed
# 2. Set appropriate bandwidth limits
virsh migrate --bandwidth 500
# 3. Enable auto-converge for memory-intensive VMs
virsh migrate --auto-converge
# 4. Use post-copy for guaranteed convergence
virsh migrate --postcopy
# 5. Schedule migrations during low-usage periods
Security Best Practices
# 1. Always use SSH or TLS for production
virsh migrate --live ubuntu-vm qemu+ssh://dest/system
# 2. Never use auth_tcp = "none" in production
# 3. Implement proper certificate management for TLS
# 4. Use dedicated, isolated migration network
# 5. Enable firewall rules for migration ports only between trusted hosts
Monitoring and Validation
# 1. Always monitor migrations
watch -n 1 'virsh domjobinfo vm-name'
# 2. Validate VM after migration
ssh dest-host 'virsh domstate vm-name'
ssh dest-host 'virsh dominfo vm-name'
# 3. Test application connectivity
curl http://vm-ip/health
# 4. Check logs for errors
journalctl -u libvirtd | grep -i error
# 5. Keep detailed migration logs
Conclusion
Live migration is a critical capability for modern virtualized infrastructure, enabling zero-downtime maintenance, dynamic resource optimization, and enhanced disaster recovery strategies. Mastering KVM/QEMU live migration with libvirt provides the foundation for building highly available, flexible virtualization platforms.
Key takeaways:
- Shared storage significantly simplifies live migration
- SSH-based migration is the simplest and most secure method
- CPU compatibility is crucial for successful migrations
- Auto-converge and compression help with challenging workloads
- Always monitor migration progress and validate results
- Post-copy migration guarantees convergence but has risks
As you gain experience with live migration, explore advanced scenarios like cross-datacenter migrations, storage migration strategies, and integration with orchestration platforms like OpenStack or oVirt. The flexibility and power of KVM live migration make it a cornerstone technology for cloud infrastructure and enterprise virtualization deployments.
Remember that successful migrations require careful planning, thorough testing, and proper monitoring. With the knowledge and techniques covered in this guide, you're equipped to implement reliable, efficient live migration workflows in production environments.


