Docker Resource Limits: CPU and Memory

Proper resource allocation and limits are critical in containerized environments: they prevent resource exhaustion, ensure fair sharing, and maintain system stability. This guide covers memory and CPU constraints, cgroup mechanisms, Docker Compose configuration, monitoring resource usage, handling out-of-memory conditions, and optimization strategies. Well-chosen resource limits protect against runaway containers and keep application performance predictable.

Table of Contents

  • Understanding Resource Constraints
  • Memory Limits and Reservations
  • CPU Limits and Allocation
  • Docker Compose Resource Configuration
  • Cgroup Mechanisms
  • Monitoring Resource Usage
  • Out-of-Memory Handling
  • Performance Optimization
  • Troubleshooting

Understanding Resource Constraints

Resource limits prevent containers from consuming excessive resources and impacting other workloads.

Constraint types:

  • Limits: Maximum resources container can use
  • Reservations: Minimum guaranteed resources
  • Soft limits: Preferred allocation (adjusts if needed)
  • Hard limits: Maximum possible (container killed if exceeded)

Cgroup control:

  • Memory: limit total RAM usage
  • CPUs: limit CPU time share
  • I/O: limit disk I/O bandwidth
  • Network: limit network bandwidth (not enforced by Docker itself; use tc on the host)

# Check cgroup version (a cgroup2 mount means v2 is in use)
mount | grep cgroup

# View current limits
docker inspect <container-id> | grep -A 20 "Memory\|Cpu"

# Get numeric values
docker inspect <container-id> --format='{{.HostConfig.Memory}}'
docker inspect <container-id> --format='{{.HostConfig.NanoCpus}}'
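The cgroup version check above can be wrapped in a small helper. This is an illustrative sketch (the `cgroup_version` name is made up): cgroup v2 exposes a `cgroup.controllers` file at the mount root, while v1 does not, so checking for that file distinguishes the two. The root path is a parameter so the function can be tested against any directory.

```shell
# Hypothetical helper: report which cgroup version a mount root uses.
# cgroup v2 has a cgroup.controllers file at its root; v1 does not.
cgroup_version() {
  root="${1:-/sys/fs/cgroup}"
  if [ -f "$root/cgroup.controllers" ]; then
    echo "v2"
  else
    echo "v1"
  fi
}
```

Called with no argument it inspects the live host's `/sys/fs/cgroup`.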

Memory Limits and Reservations

Configure memory constraints for containers.

Basic memory limits:

# Set memory limit to 512MB
docker run -d \
  --name memory-limited \
  --memory 512m \
  myapp:latest

# Set memory limit to 1GB
docker run -d \
  --name app \
  --memory 1g \
  app:latest

# Verify limit applied
docker inspect app --format='{{.HostConfig.Memory}}'
# Output: 1073741824 (bytes)
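The inspect output is always in bytes because Docker parses size suffixes as binary multiples (1g = 1024^3 bytes). A minimal converter sketch, assuming lowercase b/k/m/g suffixes only (`to_bytes` is a hypothetical name, not a Docker command):

```shell
# Illustrative converter mirroring Docker's size parsing:
# k/m/g suffixes are binary multiples, so 1g = 1073741824 bytes
to_bytes() {
  num="${1%[bkmg]}"
  case "$1" in
    *k) echo $((num * 1024)) ;;
    *m) echo $((num * 1024 * 1024)) ;;
    *g) echo $((num * 1024 * 1024 * 1024)) ;;
    *)  echo "$num" ;;
  esac
}
# to_bytes 1g matches the inspect output above: 1073741824
```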

Memory with reservation:

# Guarantee 256MB, allow up to 1GB
docker run -d \
  --name app \
  --memory 1g \
  --memory-reservation 256m \
  app:latest

# Verify both settings
docker inspect app | grep -E "Memory\"|MemoryReservation"

Memory swap limits:

# Limit memory + swap to 2GB total
docker run -d \
  --name app \
  --memory 1g \
  --memory-swap 2g \
  app:latest

# Disable swap completely
docker run -d \
  --name app \
  --memory 1g \
  --memory-swap 1g \
  app:latest

# Check swap limit
docker inspect app --format='{{.HostConfig.MemorySwap}}'

Memory soft limits:

# --memory-reservation sets the soft limit (memory.low on cgroup v2)
docker run -d \
  --name app \
  --memory 2g \
  --memory-reservation 512m \
  app:latest

# Under host memory pressure, the kernel reclaims this container's
# memory less aggressively below the 512MB reservation

CPU Limits and Allocation

Configure CPU constraints for containers.

CPU share allocation:

# Default CPU shares: 1024 per container
# Lower shares = lower priority

# Give container 512 shares (50% of default)
docker run -d \
  --name low-priority \
  --cpu-shares 512 \
  app:latest

# Give container 2048 shares (200% of default)
docker run -d \
  --name high-priority \
  --cpu-shares 2048 \
  app:latest

# Under contention, resources allocated proportionally
# low-priority:high-priority = 512:2048 = 1:4
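The proportional split above can be computed directly: under full contention each container's slice is its shares divided by the sum of all runnable containers' shares. A sketch (the `share_percent` helper is illustrative, not a Docker command):

```shell
# Back-of-envelope CPU share: own shares / total shares, as a percentage
share_percent() {
  own="$1"; total="$2"
  awk -v o="$own" -v t="$total" 'BEGIN { printf "%.0f", o / t * 100 }'
}
# For the two containers above (total 512 + 2048 = 2560):
# share_percent 512 2560  -> low-priority gets 20%
# share_percent 2048 2560 -> high-priority gets 80%
```

Note that shares only matter under contention; an otherwise idle host lets a low-share container use all available CPU.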

CPU limit (--cpus):

# Limit to 1 CPU core
docker run -d \
  --name single-core \
  --cpus 1 \
  app:latest

# Limit to 0.5 cores (can use up to 50% of one core)
docker run -d \
  --name half-core \
  --cpus 0.5 \
  app:latest

# Limit to 2.5 cores
docker run -d \
  --name multi-core \
  --cpus 2.5 \
  app:latest

# Verify CPU limit (2.5 CPUs is stored as 2500000000 NanoCpus)
docker inspect multi-core --format='{{.HostConfig.NanoCpus}}'
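The NanoCpus value is simply the `--cpus` argument scaled by 10^9. A one-line sketch of the conversion (the `cpus_to_nano` name is made up for illustration):

```shell
# --cpus is stored in HostConfig.NanoCpus as the value times 10^9
cpus_to_nano() {
  awk -v c="$1" 'BEGIN { printf "%.0f", c * 1000000000 }'
}
# cpus_to_nano 0.5 -> 500000000
# cpus_to_nano 2.5 -> 2500000000
```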

CPU set (processor affinity):

# Pin container to specific CPU cores
# Container can only use CPUs 0 and 1

docker run -d \
  --name pinned \
  --cpuset-cpus 0,1 \
  app:latest

# Pin to CPUs 2-4
docker run -d \
  --name range-pinned \
  --cpuset-cpus 2-4 \
  app:latest

# Pin memory to specific NUMA nodes
docker run -d \
  --name numa-pinned \
  --cpuset-mems 0,1 \
  app:latest

# Verify cpuset
docker inspect pinned --format='{{.HostConfig.CpusetCpus}}'

Docker Compose Resource Configuration

Define resource constraints in compose files. The deploy.resources keys are applied by Docker Compose v2 (and by Swarm); the legacy docker-compose v1 tool only honors them when run with the --compatibility flag.

Basic resource limits in compose:

cat > docker-compose.yml <<'EOF'
version: '3.9'

services:
  web:
    image: nginx:alpine
    deploy:
      resources:
        limits:
          cpus: '1'
          memory: 512M
        reservations:
          cpus: '0.5'
          memory: 256M

  app:
    image: app:latest
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 1G
        reservations:
          cpus: '1'
          memory: 512M

  db:
    image: postgres:15
    deploy:
      resources:
        limits:
          cpus: '4'
          memory: 2G
        reservations:
          cpus: '2'
          memory: 1G

EOF

# Deploy with resource limits
docker-compose up -d

# Verify deployment
docker-compose ps
docker stats

Dynamic resource adjustment:

# Update container resource limits
docker update \
  --cpus 2 \
  --memory 1g \
  container-name

# Apply to multiple containers
docker update \
  --cpus 1 \
  --memory 512m \
  web app cache

# Verify updated limits
docker inspect web --format='{{json .HostConfig}}' | grep -oE '"(NanoCpus|Memory)":[0-9]+'

Cgroup Mechanisms

Understand how cgroups enforce resource limits.

Check cgroup configuration:

# Find container's custom cgroup parent (empty when using the default)
docker inspect <container-id> --format='{{.HostConfig.CgroupParent}}'

# Paths below assume cgroup v2 with the cgroupfs driver; with the
# systemd driver use /sys/fs/cgroup/system.slice/docker-<id>.scope/

# List cgroup settings
cat /sys/fs/cgroup/docker/<container-id>/memory.max
cat /sys/fs/cgroup/docker/<container-id>/cpu.max

# View memory usage
cat /sys/fs/cgroup/docker/<container-id>/memory.current

# View CPU usage
cat /sys/fs/cgroup/docker/<container-id>/cpu.stat

Memory cgroup settings (cgroup v1):

# Memory limit
cat /sys/fs/cgroup/memory/docker/*/memory.limit_in_bytes

# Memory used
cat /sys/fs/cgroup/memory/docker/*/memory.usage_in_bytes

# Memory soft limit
cat /sys/fs/cgroup/memory/docker/*/memory.soft_limit_in_bytes

# OOM kill count
cat /sys/fs/cgroup/memory/docker/*/memory.oom_control

CPU cgroup settings (cgroup v1):

# CPU shares
cat /sys/fs/cgroup/cpu/docker/*/cpu.shares

# CPU quota (max time in period)
cat /sys/fs/cgroup/cpu/docker/*/cpu.cfs_quota_us

# CPU period (in microseconds)
cat /sys/fs/cgroup/cpu/docker/*/cpu.cfs_period_us

# CPU stat
cat /sys/fs/cgroup/cpu/docker/*/cpu.stat
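The quota and period files are how `--cpus` is enforced: the effective CPU count is quota divided by period (on cgroup v2, both live in a single `cpu.max` file as "<quota> <period>", with "max" meaning unlimited). A parsing sketch under that assumption (`effective_cpus` is an illustrative name):

```shell
# Derive the effective CPU limit from a quota/period pair,
# e.g. the two fields of cgroup v2's cpu.max
effective_cpus() {
  quota="$1"; period="$2"
  if [ "$quota" = "max" ]; then
    echo "unlimited"
  else
    awk -v q="$quota" -v p="$period" 'BEGIN { printf "%g", q / p }'
  fi
}
# effective_cpus 50000 100000 -> 0.5 (the --cpus 0.5 case)
# effective_cpus max 100000   -> unlimited
```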

Monitoring Resource Usage

Monitor actual resource consumption.

Real-time resource monitoring:

# Monitor specific container
docker stats <container-id>

# Monitor all containers
docker stats

# Get one-time snapshot
docker stats --no-stream

# Show only memory
docker stats --format="table {{.Container}}\t{{.MemUsage}}"

# Show only CPU
docker stats --format="table {{.Container}}\t{{.CPUPerc}}"

# Monitor specific format
docker stats \
  --format "table {{.Container}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.MemPerc}}"
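Formatted stats output can feed simple alerting. A sketch that flags containers above a memory threshold, assuming input lines shaped like `docker stats --no-stream --format '{{.Container}} {{.MemPerc}}'` (the field layout and the `flag_high_mem` name are assumptions for illustration):

```shell
# Print the names of containers whose memory percentage (second
# field, e.g. "92.50%") exceeds the given threshold
flag_high_mem() {
  threshold="$1"
  awk -v t="$threshold" '{ p = $2; sub(/%/, "", p); if (p + 0 > t) print $1 }'
}
# docker stats --no-stream --format '{{.Container}} {{.MemPerc}}' | flag_high_mem 90
```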

Detailed resource usage:

# Get detailed stats for container
docker inspect <container-id> --format='{{.State.Pid}}'

# Check /proc stats for container process
ps aux | grep <container-pid>

# Monitor using `top` inside container
docker exec <container-id> top

# Monitor system metrics
vmstat 1 10
iostat 1 5

Historical monitoring with Prometheus:

# Enable metrics in the daemon config at /etc/docker/daemon.json
# (not ~/.docker/, which holds client config); merge with any
# existing settings rather than overwriting the file
cat > /etc/docker/daemon.json <<'EOF'
{
  "metrics-addr": "127.0.0.1:9323"
}
EOF

systemctl restart docker

# Access metrics
curl http://localhost:9323/metrics | grep container

# Scrape with Prometheus
cat > prometheus.yml <<'EOF'
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: docker
    static_configs:
      - targets: ['localhost:9323']
EOF

Out-of-Memory Handling

Manage out-of-memory conditions gracefully.

OOM behavior configuration:

# Default: the kernel OOM-kills processes in the container when it
# exceeds its memory limit

# Disable the OOM killer (cgroup v1 only; processes stall instead of
# being killed when the limit is hit -- use with caution)
docker run -d \
  --name app \
  --memory 512m \
  --oom-kill-disable \
  app:latest

# Set OOM score adjustment
docker run -d \
  --name app \
  --memory 512m \
  --oom-score-adj 500 \
  app:latest

# Lower score = higher priority (less likely to be killed)
# Range: -1000 to 1000

Handle OOM events:

# Monitor OOM events
docker events --filter type=container --filter event=oom

# Detect OOM in logs
docker logs <container-id> | grep -i oom

# Check container OOM count
docker inspect <container-id> | grep -i oom

# Automatic restart on OOM
docker run -d \
  --name app \
  --restart always \
  --memory 512m \
  app:latest

OOM prevention strategies:

# Start with a conservative limit based on expected usage,
# then raise it once monitoring shows what the app actually needs
docker run -d \
  --name app \
  --memory 512m \
  app:latest

# Monitor for OOM
sleep 60
docker stats app --no-stream

# If approaching the limit, raise it; docker update applies the new
# limit to the running container, so no restart is needed
docker update --memory 1g app
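The "approaching the limit" check can be automated against the cgroup files shown earlier. A sketch that warns when usage crosses a fraction of the limit, assuming cgroup v2 file names (`memory.current`, `memory.max`); the cgroup directory is a parameter so it works with either driver's path layout, and `mem_pressure` is an illustrative name:

```shell
# Warn when memory.current is at or above pct% of memory.max
mem_pressure() {
  dir="$1"; pct="$2"
  cur=$(cat "$dir/memory.current")
  max=$(cat "$dir/memory.max")
  # "max" in memory.max means the container has no memory limit
  [ "$max" = "max" ] && { echo "no limit"; return; }
  if [ $((cur * 100 / max)) -ge "$pct" ]; then
    echo "WARN"
  else
    echo "ok"
  fi
}
# mem_pressure /sys/fs/cgroup/docker/<container-id> 90
```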

Performance Optimization

Optimize resource allocation and container performance.

Right-sizing containers:

# Monitor actual usage
docker run -d \
  --name app \
  app:latest

sleep 300

# Check peak usage
docker stats app --no-stream

# Analyze usage over time
docker stats --no-stream > stats.log
# Repeat every minute for an hour to build a usage profile

# Set limits based on actual usage + 20% headroom
# If peak is 400MB, set limit to 480-512MB
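The headroom rule above is straightforward arithmetic; as a tiny helper (the `headroom_limit` name is made up for illustration):

```shell
# "Peak plus ~20% headroom": takes peak usage in MB, returns a
# suggested --memory value in MB
headroom_limit() {
  echo $(( $1 * 120 / 100 ))
}
# headroom_limit 400 -> 480, i.e. --memory 480m for a 400MB peak
```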

Optimizing memory usage:

# Use Alpine base images (smaller)
FROM alpine:latest

# Remove unused packages
RUN apk del apk-tools

# Disable debug symbols
RUN strip /app/binary

# Prefer slim runtime tags
# Node.js: node:18-alpine is several times smaller than node:18

Optimizing CPU usage:

# Profile CPU usage
docker run -d \
  --name app \
  app:latest

# Profile the running process with py-spy (assumes a Python app;
# py-spy may need the container started with --cap-add SYS_PTRACE)
docker exec app pip install py-spy
docker exec app py-spy record -o profile.svg --pid 1

# Or profile a fresh run with the stdlib profiler
docker exec app python -m cProfile -o profile.out app.py

# Optimize hot paths
# Consider compiled languages for CPU-intensive work

Troubleshooting

Diagnose and resolve resource constraint issues.

Container killed by OOM:

# Symptom: container suddenly stops, often with exit code 137
docker logs <container-id>
docker inspect <container-id> --format='{{.State.OOMKilled}}'
# true means the kernel OOM-killed the container

# Check if OOM occurred
grep -i "oom" /var/log/syslog
dmesg | grep -i oom

# Solution: Increase memory limit
docker update --memory 1g <container-id>
docker restart <container-id>

Container slower than expected:

# Check whether CPU usage is pinned at the limit (throttling)
docker stats <container-id>

# Check if container hitting CPU limit
docker inspect <container-id> --format='{{.HostConfig.NanoCpus}}'

# If NanoCpus=0, no CPU limit is applied
# If CPU% in docker stats sits at the limit, raise it; docker update
# applies the change live, so no restart is needed

docker update --cpus 2 <container-id>

Memory leak detection:

# Sample memory usage over time
watch -n 5 'docker stats --no-stream <container-id>'

# If memory constantly increasing, likely leak
# Restart container periodically or implement fix

docker restart <container-id>

# Or use restart policy
docker update --restart unless-stopped <container-id>
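The "constantly increasing" judgment can be made mechanical. A deliberately crude sketch (`is_growing` is an illustrative name): given one memory sample per line on stdin, e.g. values appended from docker stats, it reports "growing" only if every sample exceeds the previous one; real leak detection would tolerate noise, but this shows the idea.

```shell
# Report "growing" if samples on stdin increase strictly
# monotonically, "stable" otherwise
is_growing() {
  awk 'NR > 1 && $1 <= prev { steady = 1 } { prev = $1 }
       END { print (steady ? "stable" : "growing") }'
}
# printf '100\n120\n150\n' | is_growing -> growing
```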

Conclusion

Resource limits and reservations are fundamental to stable, predictable containerized environments. By properly constraining memory and CPU usage, you prevent runaway containers from impacting other services and maintain fair resource sharing across your infrastructure. Start with conservative limits based on application requirements, monitor actual usage, and adjust based on data. Combine resource limits with health checks, restart policies, and proper monitoring for a robust, production-grade deployment strategy. Regular performance profiling and optimization ensure your containers run efficiently within their allocated resources. As your infrastructure grows, automated resource management and orchestration systems become increasingly valuable for maintaining optimal resource utilization.