Memory Overcommit in Virtualization: Complete Guide
Introduction
Memory overcommitment is a powerful virtualization technique that allows allocating more memory to virtual machines than physically available on the host. This capability enables higher VM density, improved resource utilization, and cost savings, but requires careful configuration and monitoring to avoid performance degradation.
This comprehensive guide explores memory overcommit strategies, technologies, and best practices for KVM/QEMU environments. You'll learn how to safely overcommit memory, implement memory ballooning, configure KSM (Kernel Same-page Merging), and monitor memory pressure to maintain optimal performance while maximizing infrastructure efficiency.
Without understanding memory overcommit mechanics, you risk either underutilizing expensive hardware by being too conservative, or causing severe performance problems through excessive overcommitment leading to swapping and OOM kills. The key is finding the right balance based on workload characteristics and monitoring capabilities.
By the end of this guide, you'll master memory overcommit techniques that enable running more VMs per host while maintaining performance guarantees, understand the trade-offs involved, and know how to troubleshoot memory-related issues in virtualized environments.
Understanding Memory Overcommit
What is Memory Overcommit?
Memory overcommit means the sum of memory allocated to all VMs exceeds the physical RAM available on the host.
Example:
Host Physical RAM: 64 GB
VM1: 16 GB
VM2: 16 GB
VM3: 16 GB
VM4: 16 GB
VM5: 16 GB
Total Allocated: 80 GB
Overcommit Ratio: 1.25:1 (80/64)
Why Overcommit Memory?
Benefits:
- Higher VM density (more VMs per host)
- Better resource utilization (VMs rarely use 100% RAM)
- Cost savings (fewer physical servers needed)
- Flexibility in VM sizing
Risks:
- Performance degradation if memory pressure occurs
- Swapping to disk (extremely slow)
- OOM (Out of Memory) kills
- Unpredictable performance
Memory Overcommit Technologies
1. Memory Ballooning
- Guest cooperates with host
- Returns unused memory to host
- Requires balloon driver in guest
2. KSM (Kernel Same-page Merging)
- Deduplicates identical memory pages
- Reduces actual memory usage
- CPU overhead for scanning
3. Transparent Huge Pages (THP)
- Uses 2MB pages instead of 4KB
- Reduces TLB misses
- Better performance
4. Memory Swapping
- Last resort mechanism
- Extremely slow
- Should be avoided
5. zswap/zram
- Compressed RAM cache
- Better than disk swap
- CPU overhead for compression
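Most of these features can be checked from sysfs before any tuning. The snippet below is a minimal status check, assuming the standard sysfs paths on a recent Linux kernel; paths that do not exist simply report "n/a".
# Quick status check for overcommit-related technologies (run on the host)
echo "KSM:   $(cat /sys/kernel/mm/ksm/run 2>/dev/null || echo n/a)"                       # 1 = enabled
echo "THP:   $(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null || echo n/a)"  # brackets mark the active mode
echo "zswap: $(cat /sys/module/zswap/parameters/enabled 2>/dev/null || echo n/a)"         # Y = enabled
swapon --show
# Does each running VM expose a virtio balloon device?
for vm in $(virsh list --name); do
virsh dumpxml "$vm" | grep -q "memballoon model='virtio'" && echo "$vm: balloon present" || echo "$vm: no balloon"
done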
Checking Host Memory Status
View Physical Memory
# Total system memory
free -h
# Output:
# total used free shared buff/cache available
# Mem: 62Gi 15Gi 30Gi 1.0Gi 16Gi 45Gi
# Swap: 8.0Gi 0B 8.0Gi
# Detailed memory info
cat /proc/meminfo | head -20
# Memory by NUMA node
numactl --hardware
# Per-node free memory
numastat -m
Check Current VM Memory Usage
# List all VMs with memory allocation
virsh list --all
for vm in $(virsh list --name); do
echo "VM: $vm"
virsh dominfo $vm | grep memory
done
# Total allocated memory
virsh list --name | while read vm; do
virsh dominfo $vm | grep "Max memory"
done | awk '{sum+=$3} END {print "Total allocated: " sum/1024/1024 " GB"}'
# Actual memory usage per VM
virsh list --name | while read vm; do
echo -n "$vm: "
virsh dommemstat $vm 2>/dev/null | grep actual
done
Calculate Current Overcommit Ratio
#!/bin/bash
# Calculate memory overcommit ratio
HOST_MEM=$(free -b | awk 'NR==2 {print $2}')
HOST_MEM_GB=$(echo "scale=2; $HOST_MEM/1024/1024/1024" | bc)
TOTAL_ALLOCATED=0
for vm in $(virsh list --all --name); do
VM_MEM=$(virsh dominfo $vm 2>/dev/null | grep "Max memory" | awk '{print $3}')
TOTAL_ALLOCATED=$((TOTAL_ALLOCATED + VM_MEM))
done
TOTAL_ALLOCATED_GB=$(echo "scale=2; $TOTAL_ALLOCATED/1024/1024" | bc)
OVERCOMMIT=$(echo "scale=2; $TOTAL_ALLOCATED_GB / $HOST_MEM_GB" | bc)
echo "Host Memory: ${HOST_MEM_GB} GB"
echo "Total Allocated: ${TOTAL_ALLOCATED_GB} GB"
echo "Overcommit Ratio: ${OVERCOMMIT}:1"
if (( $(echo "$OVERCOMMIT > 1.5" | bc -l) )); then
echo "WARNING: High overcommit ratio!"
fi
Memory Ballooning
Understanding Memory Ballooning
Memory ballooning allows the host to reclaim memory from VMs dynamically:
┌─────────────────────────────────────┐
│ Virtual Machine │
│ │
│ ┌────────────────────────────┐ │
│ │ Balloon Driver (virtio) │ │
│ │ Inflates/Deflates │ │
│ └────────────────────────────┘ │
│ │ │
└───────────┼─────────────────────────┘
│ Balloon Control
┌───────────▼─────────────────────────┐
│ Host System │
│ Memory reclaimed/allocated │
└─────────────────────────────────────┘
Configure Memory Ballooning
Enable balloon device in VM:
# Check if VM has balloon device
virsh dumpxml ubuntu-vm | grep balloon
# If not present, add it
virsh edit ubuntu-vm
# Add before </devices>:
<memballoon model='virtio'>
<stats period='10'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
</memballoon>
# Restart VM
virsh shutdown ubuntu-vm
virsh start ubuntu-vm
# Verify balloon device
virsh dumpxml ubuntu-vm | grep balloon
Inside guest VM (verify driver):
# Check if balloon driver loaded
lsmod | grep virtio_balloon
# If not loaded
modprobe virtio_balloon
# Make persistent
echo "virtio_balloon" >> /etc/modules
Using Memory Ballooning
# View current memory settings
virsh dominfo ubuntu-vm | grep memory
# Output:
# Max memory: 4194304 KiB (4 GB)
# Used memory: 4194304 KiB (4 GB)
# Set maximum memory (requires VM restart)
virsh setmaxmem ubuntu-vm 4G --config
# Set current memory (live, using balloon)
virsh setmem ubuntu-vm 2G --live
# VM now effectively has 2GB, host reclaimed 2GB
# View memory statistics
virsh dommemstat ubuntu-vm
# Output:
# actual 2097152 (current allocation)
# swap_in 0
# swap_out 0
# major_fault 2234
# minor_fault 89563
# unused 1572864 (unused by guest)
# available 2097152
# usable 1835008 (guest can use)
# rss 2359296 (host RSS)
Automatic Ballooning
Using numad for automatic NUMA and memory management:
# Install numad
apt install numad # Debian/Ubuntu
dnf install numad # RHEL/CentOS
# Start numad
systemctl start numad
systemctl enable numad
# Configure VM for automatic management
virsh edit ubuntu-vm
<vcpu placement='auto'>4</vcpu>
<numatune>
<memory mode='strict' placement='auto'/>
</numatune>
# numad will automatically:
# - Place the VM on the optimal NUMA node
# - Migrate VM memory between nodes as usage changes
# - Balance load across NUMA nodes
# Note: numad handles NUMA placement, not ballooning; for automatic
# balloon adjustment, see the sketch below.
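libvirt does not adjust balloon targets on its own, so "automatic" ballooning is usually a host-side loop. The sketch below shrinks each VM's balloon target toward what the guest actually needs, based on the dommemstat fields shown earlier. The 512 MiB headroom and 1 GiB floor are arbitrary assumptions, not recommendations.
#!/bin/bash
# auto-balloon.sh - naive host-side ballooning loop (sketch, run periodically)
HEADROOM_KIB=$(( 512 * 1024 ))   # keep 512 MiB spare inside each guest (assumption)
FLOOR_KIB=$(( 1024 * 1024 ))     # never balloon below 1 GiB (assumption)
for vm in $(virsh list --name); do
actual=$(virsh dommemstat "$vm" 2>/dev/null | awk '/^actual/ {print $2}')
unused=$(virsh dommemstat "$vm" 2>/dev/null | awk '/^unused/ {print $2}')
# "unused" is only reported when the balloon driver is active in the guest
if [ -z "$actual" ] || [ -z "$unused" ]; then continue; fi
target=$(( actual - unused + HEADROOM_KIB ))
[ "$target" -lt "$FLOOR_KIB" ] && target=$FLOOR_KIB
if [ "$target" -lt "$actual" ]; then
virsh setmem "$vm" "$target" --live   # plain numbers are interpreted as KiB
fi
done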
Kernel Same-page Merging (KSM)
Understanding KSM
KSM scans memory for identical pages and merges them, keeping only one copy. When a page is modified, it's copied (COW - Copy-on-Write).
How KSM works:
Before KSM:
VM1: Page A (content: "xxxxx") ─┐
VM2: Page A (content: "xxxxx") ─┤ Identical pages
VM3: Page A (content: "xxxxx") ─┘
Total: 3 pages
After KSM:
VM1: ─┐
VM2: ─┤──> Shared Page A (content: "xxxxx")
VM3: ─┘
Total: 1 page (2 pages freed)
Enable KSM
# Check KSM status
cat /sys/kernel/mm/ksm/run
# 0 = disabled, 1 = enabled
# Enable KSM
echo 1 | sudo tee /sys/kernel/mm/ksm/run
# Configure KSM parameters
# Pages to scan per run
echo 100 | sudo tee /sys/kernel/mm/ksm/pages_to_scan
# Sleep between scans (milliseconds)
echo 20 | sudo tee /sys/kernel/mm/ksm/sleep_millisecs
# Make persistent
cat > /etc/tmpfiles.d/ksm.conf << 'EOF'
w /sys/kernel/mm/ksm/run - - - - 1
w /sys/kernel/mm/ksm/pages_to_scan - - - - 100
w /sys/kernel/mm/ksm/sleep_millisecs - - - - 20
EOF
# Or create systemd service
cat > /etc/systemd/system/ksm.service << 'EOF'
[Unit]
Description=Enable Kernel Same-page Merging
[Service]
Type=oneshot
ExecStart=/bin/bash -c 'echo 1 > /sys/kernel/mm/ksm/run'
ExecStart=/bin/bash -c 'echo 100 > /sys/kernel/mm/ksm/pages_to_scan'
ExecStart=/bin/bash -c 'echo 20 > /sys/kernel/mm/ksm/sleep_millisecs'
[Install]
WantedBy=multi-user.target
EOF
systemctl enable ksm
systemctl start ksm
Monitor KSM Efficiency
# KSM statistics
cat /sys/kernel/mm/ksm/pages_sharing
cat /sys/kernel/mm/ksm/pages_shared
cat /sys/kernel/mm/ksm/pages_unshared
cat /sys/kernel/mm/ksm/pages_volatile
# Calculate memory saved
# Per the kernel KSM documentation, pages_sharing counts the duplicate
# pages that merging freed, so savings = pages_sharing * page size (4 KiB)
SHARING=$(cat /sys/kernel/mm/ksm/pages_sharing)
SAVED_MB=$((SHARING * 4 / 1024))
echo "Memory saved by KSM: ${SAVED_MB} MB"
# Detailed KSM info
grep -H '' /sys/kernel/mm/ksm/*
# Monitor KSM over time
watch -n 5 'echo "Pages sharing: $(cat /sys/kernel/mm/ksm/pages_sharing)"; \
echo "Pages shared: $(cat /sys/kernel/mm/ksm/pages_shared)"; \
echo "Saved: $(( $(cat /sys/kernel/mm/ksm/pages_sharing) * 4 / 1024 )) MB"'
Configure VMs for KSM
# Enable memory merging in VM
virsh edit ubuntu-vm
<memoryBacking>
<nosharepages/> <!-- Disable KSM for this VM (if needed) -->
</memoryBacking>
# Or explicitly enable (default behavior)
# Just remove <nosharepages/> or don't include it
# For maximum KSM benefit, VMs should:
# - Run similar OS (more identical pages)
# - Use similar applications
# - Have similar configurations
Tune KSM Performance
# Aggressive scanning (more CPU, better merging)
echo 500 | sudo tee /sys/kernel/mm/ksm/pages_to_scan
echo 10 | sudo tee /sys/kernel/mm/ksm/sleep_millisecs
# Conservative scanning (less CPU, less merging)
echo 50 | sudo tee /sys/kernel/mm/ksm/pages_to_scan
echo 100 | sudo tee /sys/kernel/mm/ksm/sleep_millisecs
# Balanced (recommended)
echo 100 | sudo tee /sys/kernel/mm/ksm/pages_to_scan
echo 20 | sudo tee /sys/kernel/mm/ksm/sleep_millisecs
# Monitor CPU impact
top -d 1
# Look for [ksmd] process
Transparent Huge Pages (THP)
Understanding THP
Normal pages: 4KB
Huge pages: 2MB (512x larger)
Benefits:
- Reduced TLB misses
- Lower page table overhead
- Better memory performance
Enable THP for VMs
# Check THP status
cat /sys/kernel/mm/transparent_hugepage/enabled
# [always] madvise never
# Enable THP
echo always | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
# Configure defrag (compaction)
echo defer | sudo tee /sys/kernel/mm/transparent_hugepage/defrag
# Options: always defer defer+madvise madvise never
# Make persistent
cat >> /etc/rc.local << 'EOF'
echo always > /sys/kernel/mm/transparent_hugepage/enabled
echo defer > /sys/kernel/mm/transparent_hugepage/defrag
EOF
chmod +x /etc/rc.local
Configure VMs for THP
# VMs automatically use THP if enabled on host
# Check inside guest
cat /proc/meminfo | grep Huge
# AnonHugePages: 2097152 kB (2GB in huge pages)
# Verify THP allocation
cat /sys/kernel/mm/transparent_hugepage/khugepaged/pages_collapsed
# Monitor THP effectiveness
watch -n 1 'cat /proc/meminfo | grep Huge'
THP vs Regular Huge Pages
# THP (Transparent Huge Pages)
# - Automatic
# - No pre-allocation
# - Dynamic
# - May cause latency spikes (compaction)
# Regular Huge Pages
# - Manual configuration
# - Pre-allocated
# - Guaranteed availability
# - No compaction overhead
# Configure regular huge pages for VM
virsh edit ubuntu-vm
<memoryBacking>
<hugepages>
<page size='2048' unit='KiB'/>
</hugepages>
<locked/>
</memoryBacking>
# Pre-allocate huge pages on host
echo 4096 > /proc/sys/vm/nr_hugepages
# 4096 pages * 2MB = 8GB
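After requesting pages, confirm the kernel was actually able to reserve them; memory fragmentation can leave the pool short. A minimal check:
# Verify the huge page pool was actually reserved
grep -E 'HugePages_(Total|Free|Rsvd)' /proc/meminfo
# If HugePages_Total is lower than requested, allocate earlier in boot
# via the kernel command line (e.g. hugepages=4096 in GRUB_CMDLINE_LINUX)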
Memory Swapping and zswap
Understanding VM Swapping
Swapping should be avoided:
- Extremely slow (disk vs RAM)
- 100-1000x slower than RAM
- Causes severe performance degradation
- Indicates memory pressure
Configure Swap for Host
# Check current swap
free -h
swapon --show
# Create swap file if needed
sudo fallocate -l 8G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# Make persistent
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
# Configure swappiness (how aggressively to swap)
# Default: 60, Range: 0-100
# Lower = less swapping
sudo sysctl vm.swappiness=10
echo "vm.swappiness=10" | sudo tee -a /etc/sysctl.conf
Enable zswap (Compressed RAM Cache)
# zswap compresses pages before writing to swap
# Much faster than disk swap
# Enable zswap
echo 1 | sudo tee /sys/module/zswap/parameters/enabled
# Configure zswap
echo 20 | sudo tee /sys/module/zswap/parameters/max_pool_percent
echo lz4 | sudo tee /sys/module/zswap/parameters/compressor
echo z3fold | sudo tee /sys/module/zswap/parameters/zpool
# Make persistent (add to kernel parameters)
sudo vim /etc/default/grub
# Append to the existing GRUB_CMDLINE_LINUX value rather than replacing it:
GRUB_CMDLINE_LINUX="... zswap.enabled=1 zswap.compressor=lz4 zswap.max_pool_percent=20"
sudo update-grub
sudo reboot
# Check zswap stats (requires root; debugfs must be mounted)
grep -H '' /sys/kernel/debug/zswap/*
Alternative: zram (Compressed Block Device)
# zram creates compressed block device in RAM
# Better than zswap for some workloads
# Install zram tools (package names vary by distribution)
apt install zram-tools # Debian/Ubuntu (Ubuntu also ships zram-config)
dnf install zram-generator # Fedora/RHEL
# Manual configuration
modprobe zram
echo lz4 > /sys/block/zram0/comp_algorithm
echo 4G > /sys/block/zram0/disksize
mkswap /dev/zram0
swapon /dev/zram0 -p 10 # priority 10
# Verify
zramctl
swapon --show
# Monitor compression ratio
cat /sys/block/zram0/mm_stat
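The mm_stat columns are documented in the kernel's zram documentation; the first two are orig_data_size and compr_data_size in bytes, which is enough to estimate the compression ratio. A small sketch (the column layout may differ on very old kernels):
# Estimate zram compression ratio from the first two mm_stat columns
awk '{ if ($2 > 0) printf "original: %d MiB  compressed: %d MiB  ratio: %.2f\n", $1/1048576, $2/1048576, $1/$2 }' /sys/block/zram0/mm_stat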
Monitoring Memory Usage
Real-time Memory Monitoring
# System-wide memory monitoring
free -h -s 1 # Update every second
# Per-VM memory usage
virsh list --name | while read vm; do
echo "=== $vm ==="
virsh dommemstat $vm 2>/dev/null
done
# Watch memory pressure
watch -n 1 'free -h; echo ""; virsh list --name | while read vm; do \
echo "$vm:"; virsh dommemstat $vm 2>/dev/null | grep -E "actual|rss|usable"; done'
# Monitor with virt-top
virt-top
# Press 2 for memory view
Advanced Memory Monitoring
#!/bin/bash
# memory-monitor.sh - Comprehensive memory monitoring
LOG="/var/log/vm-memory.log"
while true; do
timestamp=$(date '+%Y-%m-%d %H:%M:%S')
# Host memory
host_total=$(free -b | awk 'NR==2 {print $2}')
host_used=$(free -b | awk 'NR==2 {print $3}')
host_free=$(free -b | awk 'NR==2 {print $4}')
host_available=$(free -b | awk 'NR==2 {print $7}')
# KSM stats
ksm_sharing=$(cat /sys/kernel/mm/ksm/pages_sharing 2>/dev/null || echo 0)
ksm_saved=$(( ksm_sharing * 4096 ))  # pages_sharing counts the duplicate pages freed
echo "$timestamp | Host: Used=$host_used Free=$host_free Available=$host_available | KSM_Saved=$ksm_saved" >> $LOG
# Per-VM stats
for vm in $(virsh list --name); do
vm_mem=$(virsh dommemstat $vm 2>/dev/null | grep "actual" | awk '{print $2}')
vm_rss=$(virsh dommemstat $vm 2>/dev/null | grep "rss" | awk '{print $2}')
vm_usable=$(virsh dommemstat $vm 2>/dev/null | grep "usable" | awk '{print $2}')
echo "$timestamp | VM: $vm | Allocated=$vm_mem RSS=$vm_rss Usable=$vm_usable" >> $LOG
done
sleep 60
done
Set Up Alerts
#!/bin/bash
# memory-alert.sh - Alert on memory pressure
THRESHOLD=90 # Alert if memory usage > 90%
EMAIL="admin@example.com"
check_memory() {
used=$(free | awk 'NR==2 {print $3}')
total=$(free | awk 'NR==2 {print $2}')
percent=$(( used * 100 / total ))
if [ $percent -gt $THRESHOLD ]; then
message="WARNING: Memory usage at ${percent}%"
echo "$message" | mail -s "Memory Alert" $EMAIL
logger "$message"
fi
}
# Run every 5 minutes
while true; do
check_memory
sleep 300
done
Safe Overcommit Strategies
Conservative Overcommit (1.2:1)
# Safe for production
# 64GB host → 76GB allocated
# Low risk, some efficiency gain
# Example configuration:
# VM1: 16GB
# VM2: 16GB
# VM3: 16GB
# VM4: 16GB
# VM5: 12GB
# Total: 76GB on 64GB host
Moderate Overcommit (1.5:1)
# Acceptable for most workloads
# 64GB host → 96GB allocated
# Requires monitoring, KSM, ballooning
# Enable technologies:
# - KSM (expect 10-20% savings)
# - Memory ballooning
# - THP
# - Monitoring alerts
# Example:
# 6 VMs * 16GB = 96GB on 64GB host
Aggressive Overcommit (2:1 or higher)
# High risk, requires careful management
# 64GB host → 128GB+ allocated
# Only for specific scenarios:
# - Dev/test environments
# - VMs with low memory usage
# - Uniform workloads
# - Active monitoring
# Required:
# - KSM enabled and tuned
# - Memory ballooning active
# - 24/7 monitoring
# - Alerting configured
# - Documented procedures for memory pressure
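To plan against a chosen ratio, a small helper like the sketch below can report how much allocation budget remains. It is a planning aid under stated assumptions: it counts the Max memory of all defined VMs, running or not, and takes the target ratio as its first argument (default 1.5).
#!/bin/bash
# overcommit-headroom.sh [target_ratio] - remaining allocation budget (sketch)
TARGET=${1:-1.5}
HOST_KIB=$(free -k | awk 'NR==2 {print $2}')
ALLOC_KIB=0
for vm in $(virsh list --all --name); do
mem=$(virsh dominfo "$vm" 2>/dev/null | awk '/Max memory/ {print $3}')
[ -n "$mem" ] && ALLOC_KIB=$(( ALLOC_KIB + mem ))
done
BUDGET_KIB=$(echo "$HOST_KIB * $TARGET / 1" | bc)
echo "Target ratio:       ${TARGET}:1"
echo "Allocation budget:  $(( BUDGET_KIB / 1024 / 1024 )) GB"
echo "Already allocated:  $(( ALLOC_KIB / 1024 / 1024 )) GB"
echo "Remaining headroom: $(( (BUDGET_KIB - ALLOC_KIB) / 1024 / 1024 )) GB"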
Best Practices
1. Know Your Workloads:
# Profile VM memory usage over time
for vm in $(virsh list --name); do
echo "$vm memory usage:"
virsh dommemstat $vm | grep -E "actual|rss|usable"
done
# Many VMs use < 50% of allocated memory
# This is where overcommit helps
2. Start Conservative:
# Begin with 1.2:1 overcommit
# Monitor for 1-2 weeks
# Gradually increase if safe
# Never exceed 2:1 for production
3. Enable All Technologies:
# KSM
echo 1 > /sys/kernel/mm/ksm/run
# THP
echo always > /sys/kernel/mm/transparent_hugepage/enabled
# Ballooning (in VM configs)
virsh edit vm-name # Add balloon device
# zswap
echo 1 > /sys/module/zswap/parameters/enabled
4. Monitor Continuously:
# Set up automated monitoring
# Alert on:
# - Memory usage > 85%
# - Swap usage > 10%
# - OOM kills
# - High major faults
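The memory-alert.sh script shown earlier only tracks overall usage; the checks below sketch how the swap and OOM conditions from this list could be added to its loop. The 10% threshold and one-hour window are assumptions.
# Additional checks for the memory-alert.sh loop (sketch)
swap_total=$(free | awk '/^Swap/ {print $2}')
swap_used=$(free | awk '/^Swap/ {print $3}')
if [ "$swap_total" -gt 0 ] && [ $(( swap_used * 100 / swap_total )) -gt 10 ]; then
logger "WARNING: swap usage above 10%"
fi
# OOM killer activity in the last hour
if journalctl -k --since "1 hour ago" 2>/dev/null | grep -qi "out of memory"; then
logger "WARNING: OOM killer activity detected"
fi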
5. Plan for Growth:
# Leave headroom for:
# - Memory spikes
# - New VMs
# - Updates/maintenance
# - Emergency scenarios
# Rule of thumb: keep actual host memory usage below ~90%, even when allocations are overcommitted
Troubleshooting Memory Issues
Identifying Memory Pressure
# Check for memory pressure indicators
free -h # Low available memory
# Check swap usage
swapon --show
vmstat 1 10 # Watch si/so columns (swap in/out)
# OOM events
dmesg | grep -i oom
journalctl | grep -i "out of memory"
# Per-VM major faults (indicates swapping)
virsh dommemstat ubuntu-vm | grep major_fault
# System pressure
cat /proc/pressure/memory
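The pressure file reports PSI (Pressure Stall Information, kernel 4.20+). The "some" line gives the share of time at least one task was stalled waiting for memory; reading avg10 is usually the quickest signal. A minimal example:
# Extract the 10-second average memory stall percentage
awk '/^some/ { split($2, a, "="); print "memory pressure avg10: " a[2] "%" }' /proc/pressure/memory
# Near 0% = no pressure; sustained values above a few percent indicate
# the host is reclaiming or swapping and overcommit is too aggressive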
Reducing Memory Pressure
Immediate actions:
# 1. Inflate the balloons (reclaim memory from the guests)
for vm in $(virsh list --name); do
current=$(virsh dominfo $vm | grep "Used memory" | awk '{print $3}')
reduced=$(( current * 80 / 100 )) # Reduce to 80%
virsh setmem $vm $reduced --live # plain numbers are interpreted as KiB
done
# 2. Migrate VMs to other hosts
virsh migrate --live vm-name qemu+ssh://other-host/system
# 3. Shutdown non-critical VMs
virsh shutdown dev-vm
# 4. Drop caches (temporary relief)
sync; echo 3 > /proc/sys/vm/drop_caches
Long-term solutions:
# 1. Add physical RAM
# 2. Reduce overcommit ratio
# Migrate VMs to additional hosts
# 3. Optimize VM memory allocation
# Right-size VMs based on actual usage
# 4. Tune KSM more aggressively
echo 500 > /sys/kernel/mm/ksm/pages_to_scan
echo 10 > /sys/kernel/mm/ksm/sleep_millisecs
# 5. Enable huge pages
echo 4096 > /proc/sys/vm/nr_hugepages
Handling OOM Situations
# Check OOM killer logs
dmesg | grep -i "killed process"
journalctl -k | grep -i oom
# Identify OOM victims
grep -i "killed process" /var/log/kern.log
# Configure OOM priorities (per VM)
# Lower score = less likely to be killed
ps aux | grep qemu | grep vm-name
# Get PID
echo -500 > /proc/<PID>/oom_score_adj
# Range: -1000 (never kill) to 1000 (kill first)
# Make persistent across VM restarts with a libvirt qemu hook
# Create /etc/libvirt/hooks/qemu and make it executable (chmod +x):
#!/bin/bash
if [ "$1" = "ubuntu-vm" ] && [ "$2" = "started" ]; then
pid=$(pgrep -f "qemu.*ubuntu-vm")
echo -500 > /proc/$pid/oom_score_adj
fi
# Restart libvirtd once so the new hook script is picked up
Performance Degradation
# Symptoms of memory overcommit issues:
# - Slow VM performance
# - High I/O wait
# - Application timeouts
# - Inconsistent response times
# Diagnosis:
# 1. Check swap usage
free -h
vmstat 1 10
# 2. Check major faults
virsh dommemstat vm-name | grep major_fault
# High and increasing = swapping
# 3. Check KSM impact
top -b -n 1 | grep ksmd
# High CPU usage of ksmd may indicate overly aggressive KSM scanning
# 4. Check memory pressure stall
cat /proc/pressure/memory
# Solutions:
# - Reduce overcommit
# - Add RAM
# - Optimize applications
# - Migrate VMs
Monitoring Scripts
Comprehensive Monitoring Dashboard
#!/bin/bash
# vm-memory-dashboard.sh
while true; do
clear
echo "================================"
echo " VM Memory Dashboard"
echo " $(date)"
echo "================================"
echo ""
# Host memory
echo "HOST MEMORY:"
free -h | grep -E "Mem|Swap"
echo ""
# KSM stats
if [ -f /sys/kernel/mm/ksm/pages_sharing ]; then
sharing=$(cat /sys/kernel/mm/ksm/pages_sharing)
saved=$(( sharing * 4 / 1024 )) # pages_sharing counts the duplicate pages freed
echo "KSM: Saved ${saved} MB"
echo ""
fi
# Overcommit ratio
host_mem=$(free -k | awk 'NR==2 {print $2}') # KiB, matching the virsh dominfo units
total_alloc=0
echo "VMs:"
echo "Name Allocated RSS Usage%"
echo "--------------------------------------------------------"
for vm in $(virsh list --name); do
alloc=$(virsh dominfo $vm 2>/dev/null | grep "Max memory" | awk '{print $3}')
rss=$(virsh dommemstat $vm 2>/dev/null | grep "rss" | awk '{print $2}')
if [ -n "$alloc" ] && [ -n "$rss" ]; then
alloc_mb=$(( alloc / 1024 ))
rss_mb=$(( rss / 1024 ))
usage=$(( rss * 100 / alloc ))
printf "%-20s %7d MB %7d MB %3d%%\n" "$vm" "$alloc_mb" "$rss_mb" "$usage"
total_alloc=$(( total_alloc + alloc ))
fi
done
echo ""
overcommit=$(echo "scale=2; $total_alloc / $host_mem" | bc)
echo "Total Allocated: $(( total_alloc / 1024 / 1024 )) GB"
echo "Overcommit Ratio: ${overcommit}:1"
sleep 5
done
Conclusion
Memory overcommit is a powerful technique for maximizing virtualization infrastructure efficiency, but it requires careful planning, implementation, and monitoring. By leveraging technologies like memory ballooning, KSM, and transparent huge pages, you can safely run more VMs per host while maintaining acceptable performance.
Key takeaways:
- Start with conservative overcommit ratios (1.2:1)
- Enable all memory optimization technologies (KSM, THP, ballooning)
- Monitor continuously for memory pressure indicators
- Understand your workload characteristics
- Plan for growth and peak usage
- Never exceed 2:1 for production environments
- Have procedures ready for memory pressure situations
Successful memory overcommit strategies balance efficiency with reliability, using monitoring and automation to ensure that resource constraints never impact production workloads. With proper configuration and vigilance, memory overcommit can significantly reduce infrastructure costs while maintaining the performance and availability your applications require.
Remember that overcommit is a trade-off: higher VM density versus increased complexity and monitoring requirements. The optimal strategy depends on your specific workloads, risk tolerance, and operational capabilities. When done correctly, memory overcommit is a cornerstone of efficient, cost-effective virtualization infrastructure.