Docker Resource Limits: CPU and Memory

Proper resource allocation and limiting is critical in containerized environments: it prevents resource exhaustion, ensures fair sharing, and maintains system stability. This guide covers memory and CPU constraints, cgroup mechanisms, Docker Compose configuration, monitoring resource usage, handling out-of-memory conditions, and optimization strategies. Understanding and implementing resource limits protects against runaway containers and ensures predictable application performance.

Understanding Resource Constraints

Resource limits prevent containers from consuming excessive resources and impacting other workloads.

Constraint types:

  • Limits: maximum resources a container can use
  • Reservations: minimum guaranteed resources
  • Soft limits: preferred allocation (reclaimed past only under memory pressure)
  • Hard limits: absolute maximum (the container is killed if it exceeds its memory limit)

Cgroup control:

  • Memory: limit total RAM usage
  • CPU: limit CPU time share
  • I/O: limit disk I/O bandwidth
  • Network: limit network bandwidth (via tc, not a Docker flag)

# Check cgroup version
mount | grep cgroup

# View current limits
docker inspect <container-id> | grep -A 20 "Memory\|Cpu"

# Get numeric values
docker inspect <container-id> --format='{{.HostConfig.Memory}}'
docker inspect <container-id> --format='{{.HostConfig.NanoCpus}}'
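These inspect fields are raw integers: `.HostConfig.Memory` is in bytes and `.HostConfig.NanoCpus` is in billionths of a core. A small sketch converting hypothetical raw values to readable units:

```shell
# Hypothetical raw values as returned by `docker inspect`
memory_bytes=1073741824   # .HostConfig.Memory
nano_cpus=1500000000      # .HostConfig.NanoCpus

# Memory: bytes -> MiB
echo "$((memory_bytes / 1024 / 1024)) MiB"    # 1024 MiB

# CPUs: NanoCpus / 1e9 -> cores (awk handles the fractional value)
awk -v n="$nano_cpus" 'BEGIN { printf "%.1f cores\n", n / 1e9 }'   # 1.5 cores
```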

Memory Limits and Reservations

Configure memory constraints for containers.

Basic memory limits:

# Set memory limit to 512MB
docker run -d \
  --name memory-limited \
  --memory 512m \
  myapp:latest

# Set memory limit to 1GB
docker run -d \
  --name app \
  --memory 1g \
  app:latest

# Verify the limit was applied
docker inspect app --format='{{.HostConfig.Memory}}'
# Output: 1073741824 (bytes)
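The b, k, m, and g suffixes are binary (1024-based) multipliers, which is why 1g shows up as 1073741824 bytes. A quick check of the arithmetic:

```shell
# Docker's size suffixes are binary (powers of 1024)
echo $((512 * 1024 * 1024))          # 512m -> 536870912
echo $((1024 * 1024 * 1024))         # 1g   -> 1073741824
```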

Memory with reservation:

# Guarantee 256MB, allow up to 1GB
docker run -d \
  --name app \
  --memory 1g \
  --memory-reservation 256m \
  app:latest

# Verify both settings
docker inspect app | grep -E "Memory\"|MemoryReservation"

Memory swap limits:

# Limit memory + swap to 2GB total
docker run -d \
  --name app \
  --memory 1g \
  --memory-swap 2g \
  app:latest

# Disable swap entirely (swap limit equal to the memory limit)
docker run -d \
  --name app \
  --memory 1g \
  --memory-swap 1g \
  app:latest

# Check swap limit
docker inspect app --format='{{.HostConfig.MemorySwap}}'

Memory soft limits:

# Docker has no --memory-min flag; --memory-reservation is the soft
# limit, and maps to memory.low on cgroup v2 hosts
docker run -d \
  --name app \
  --memory 2g \
  --memory-reservation 512m \
  app:latest

# Under memory pressure, the kernel tries to keep the container
# at or above its 512MB reservation

CPU Limits and Allocation

Configure CPU constraints for containers.

CPU share allocation:

# Default CPU shares: 1024 per container
# Lower shares = lower priority under contention

# Give a container 512 shares (half the default weight)
docker run -d \
  --name low-priority \
  --cpu-shares 512 \
  app:latest

# Give a container 2048 shares (twice the default weight)
docker run -d \
  --name high-priority \
  --cpu-shares 2048 \
  app:latest

# Under contention, resources allocated proportionally
# low-priority:high-priority = 512:2048 = 1:4
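The 1:4 split can be computed directly: under full contention each container receives shares/total of the available CPU. A small sketch using the two values from the example:

```shell
# Proportional CPU split under full contention (shares-based)
low=512
high=2048
total=$((low + high))
echo "low-priority:  $((100 * low / total))%"    # 20%
echo "high-priority: $((100 * high / total))%"   # 80%
```

Remember that shares only matter when CPUs are contended; on an idle host either container may use all available CPU.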

CPU limit (--cpus):

# Limit to 1 CPU core
docker run -d \
  --name single-core \
  --cpus 1 \
  app:latest

# Limit to 0.5 cores (can use up to 50% of one core)
docker run -d \
  --name half-core \
  --cpus 0.5 \
  app:latest

# Limit to 2.5 cores
docker run -d \
  --name multi-core \
  --cpus 2.5 \
  app:latest

# Verify the CPU limit (NanoCpus = cores x 10^9)
docker inspect multi-core --format='{{.HostConfig.NanoCpus}}'
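Under the hood, `--cpus N` is stored as NanoCpus (N * 10^9) and enforced as a CFS quota of N * period, where the period defaults to 100000 microseconds. A sketch of the mapping for the 2.5-core example, using millicores to keep the arithmetic integer-only:

```shell
# How --cpus maps to NanoCpus and the CFS quota
cpus_milli=2500     # --cpus 2.5 expressed in millicores
period_us=100000    # default CFS period in microseconds

echo "NanoCpus:        $((cpus_milli * 1000000))"         # 2500000000
echo "cpu.cfs_quota_us: $((cpus_milli * period_us / 1000))" # 250000
```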

CPU set (processor affinity):

# Pin a container to specific CPU cores
# The container can only use CPUs 0 and 1

docker run -d \
  --name pinned \
  --cpuset-cpus 0,1 \
  app:latest

# Pin to CPUs 2-4
docker run -d \
  --name range-pinned \
  --cpuset-cpus 2-4 \
  app:latest

# Pin memory allocation to specific NUMA nodes
docker run -d \
  --name numa-pinned \
  --cpuset-mems 0,1 \
  app:latest

# Verify the cpuset
docker inspect pinned --format='{{.HostConfig.CpusetCpus}}'

Docker Compose Resource Configuration

Define resource constraints in compose files.

Basic resource limits in compose:

cat > docker-compose.yml <<'EOF'
version: '3.9'

services:
  web:
    image: nginx:alpine
    deploy:
      resources:
        limits:
          cpus: '1'
          memory: 512M
        reservations:
          cpus: '0.5'
          memory: 256M

  app:
    image: app:latest
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 1G
        reservations:
          cpus: '1'
          memory: 512M

  db:
    image: postgres:15
    deploy:
      resources:
        limits:
          cpus: '4'
          memory: 2G
        reservations:
          cpus: '2'
          memory: 1G
EOF

# Deploy with resource limits (docker compose v2 applies the deploy
# section directly; legacy docker-compose v1 needs --compatibility)
docker-compose up -d

# Verify the deployment
docker-compose ps
docker stats

Dynamic resource adjustment:

# Update container resource limits at runtime
docker update \
  --cpus 2 \
  --memory 1g \
  container-name

# Apply to multiple containers
docker update \
  --cpus 1 \
  --memory 512m \
  web app cache

# Verify the updated limits
docker inspect web --format='{{json .HostConfig}}' | grep -oE '"(NanoCpus|Memory)":[0-9]+'

Cgroup Mechanisms

Understand how cgroups enforce resource limits.

Check cgroup configuration:

# Find the container's cgroup parent
docker inspect <container-id> --format='{{.HostConfig.CgroupParent}}'

# List cgroup v2 settings (paths assume the cgroupfs driver; with the
# systemd driver, look under /sys/fs/cgroup/system.slice/docker-<id>.scope/)
cat /sys/fs/cgroup/docker/<container-id>/memory.max
cat /sys/fs/cgroup/docker/<container-id>/cpu.max

# View memory usage
cat /sys/fs/cgroup/docker/<container-id>/memory.current

# View CPU usage
cat /sys/fs/cgroup/docker/<container-id>/cpu.stat

Memory cgroup settings (cgroup v1 paths):

# Memory limit
cat /sys/fs/cgroup/memory/docker/*/memory.limit_in_bytes

# Memory used
cat /sys/fs/cgroup/memory/docker/*/memory.usage_in_bytes

# Memory soft limit
cat /sys/fs/cgroup/memory/docker/*/memory.soft_limit_in_bytes

# OOM kill count
cat /sys/fs/cgroup/memory/docker/*/memory.oom_control

CPU cgroup settings (cgroup v1 paths):

# CPU shares
cat /sys/fs/cgroup/cpu/docker/*/cpu.shares

# CPU quota (max time in period)
cat /sys/fs/cgroup/cpu/docker/*/cpu.cfs_quota_us

# CPU period (in microseconds)
cat /sys/fs/cgroup/cpu/docker/*/cpu.cfs_period_us

# CPU stat
cat /sys/fs/cgroup/cpu/docker/*/cpu.stat
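The quota and period together determine the effective CPU limit: cores = quota / period. A sketch with hypothetical values read from those files:

```shell
# Derive the effective CPU limit from CFS quota and period
quota_us=150000    # hypothetical cpu.cfs_quota_us
period_us=100000   # hypothetical cpu.cfs_period_us

awk -v q="$quota_us" -v p="$period_us" \
  'BEGIN { printf "%.1f CPUs\n", q / p }'   # 1.5 CPUs
```

A quota of -1 means no limit is applied.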

Monitoring Resource Usage

Monitor actual resource consumption.

Real-time resource monitoring:

# Monitor a specific container
docker stats <container-id>

# Monitor all containers
docker stats

# Get a one-time snapshot
docker stats --no-stream

# Show only memory
docker stats --format "table {{.Container}}\t{{.MemUsage}}"

# Show only CPU
docker stats --format "table {{.Container}}\t{{.CPUPerc}}"

# Combine columns in a custom format
docker stats \
  --format "table {{.Container}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.MemPerc}}"
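A line of that tab-separated output is easy to post-process with awk. The sample line below is hypothetical, standing in for real `docker stats --no-stream` output:

```shell
# Parse one line of tab-separated stats output (hypothetical sample;
# real input comes from `docker stats --no-stream --format ...`)
line=$(printf 'web\t12.50%%\t256MiB / 512MiB\t50.00%%')

echo "$line" | awk -F'\t' '{ print $1, "cpu=" $2, "mem=" $4 }'
# web cpu=12.50% mem=50.00%
```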

Detailed resource usage:

# Get the container's main process PID
docker inspect <container-id> --format='{{.State.Pid}}'

# Check /proc stats for the container's process
ps aux | grep <container-pid>

# Monitor using `top` inside the container
docker exec <container-id> top

# Monitor host-level system metrics
vmstat 1 10
iostat 1 5

Historical monitoring with Prometheus:

# Enable metrics in the daemon config (the system-wide daemon config,
# not the per-user CLI config)
cat > /etc/docker/daemon.json <<'EOF'
{
  "metrics-addr": "127.0.0.1:9323"
}
EOF

systemctl restart docker

# Access metrics
curl http://localhost:9323/metrics | grep container

# Scrape with Prometheus
cat > prometheus.yml <<'EOF'
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: docker
    static_configs:
      - targets: ['localhost:9323']
EOF

Out-of-Memory Handling

Manage out-of-memory conditions gracefully.

OOM behavior configuration:

# Default: the container is killed when it exceeds its memory limit

# Disable the OOM killer for this container (cgroup v1 only; ignored
# on cgroup v2 hosts). The container still cannot exceed its limit --
# its processes may hang waiting for memory instead of being killed
docker run -d \
  --name app \
  --memory 512m \
  --oom-kill-disable \
  app:latest

# Set OOM score adjustment
docker run -d \
  --name app \
  --memory 512m \
  --oom-score-adj 500 \
  app:latest

# Lower score = higher priority (less likely to be killed)
# Range: -1000 to 1000

Handle OOM events:

# Watch for OOM events
docker events --filter type=container --filter event=oom

# Detect OOM in logs
docker logs <container-id> 2>&1 | grep -i oom

# Check whether the container was OOM-killed
docker inspect <container-id> --format='{{.State.OOMKilled}}'

# Automatic restart on OOM
docker run -d \
  --name app \
  --restart always \
  --memory 512m \
  app:latest

OOM prevention strategies:

# A deliberately tight limit can surface memory waste early, but
# start conservative and raise the limit based on observed usage
docker run -d \
  --name app \
  --memory 512m \
  app:latest

# Watch usage relative to the limit
sleep 60
docker stats app --no-stream

# If usage approaches the limit, raise it
# (docker update applies immediately; no restart required)
docker update --memory 1g app

Performance Optimization

Optimize resource allocation and container performance.

Right-sizing containers:

# Monitor actual usage
docker run -d \
  --name app \
  app:latest

sleep 300

# Check peak usage
docker stats app --no-stream

# Analyze usage over time
docker stats --no-stream >> stats.log
# Repeat every minute for an hour (e.g., via cron or a loop)

# Set limits based on actual usage + 20% headroom
# If peak is 400MB, set limit to 480-512MB
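The headroom rule is simple arithmetic. The sketch below also rounds up to a 16 MiB boundary; that granularity is an arbitrary choice for tidy numbers, not a Docker requirement:

```shell
# Limit = observed peak + 20% headroom, rounded up to 16 MiB
peak_mb=400
with_headroom=$((peak_mb * 120 / 100))          # 480
rounded=$(( (with_headroom + 15) / 16 * 16 ))   # 480
echo "--memory ${rounded}m"
```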

Optimizing memory usage:

# Use Alpine base images (smaller footprint)
FROM alpine:latest

# Remove unused packages
RUN apk del apk-tools

# Strip debug symbols from binaries
RUN strip /app/binary

# Use slim runtime images
# Node.js: node:18 vs node:18-alpine (roughly 1GB vs under 200MB)

Optimizing CPU usage:

# Profile CPU usage
docker run -d \
  --name app \
  app:latest

# Use performance profiling tools
docker exec app python -m cProfile app.py

# Identify bottlenecks (attaching a profiler may require the
# SYS_PTRACE capability on the container)
docker exec app pip install py-spy
docker exec app py-spy record -o profile.svg --pid 1 --duration 30

# Optimize hot paths
# Consider compiled languages for CPU-intensive work

Troubleshooting

Diagnose and resolve resource constraint issues.

Container killed by OOM:

# Symptom: the container suddenly stops
docker logs <container-id>
# May show: Killed, exit code 137, etc.

# Check if an OOM kill occurred
grep -i "oom" /var/log/syslog
dmesg | grep -i oom

# Solution: increase the memory limit
docker update --memory 1g <container-id>
docker restart <container-id>

Container slower than expected:

# Check whether CPU usage sits at the limit
docker stats <container-id>

# Check the configured CPU limit
docker inspect <container-id> --format='{{.HostConfig.NanoCpus}}'

# NanoCpus=0 means no limit is applied
# If CPU% in stats hovers near the limit, raise it
# (applies immediately, no restart required)
docker update --cpus 2 <container-id>

Memory leak detection:

# Monitor memory over time
watch -n 5 'docker stats --no-stream <container-id>'

# If memory grows steadily without leveling off, suspect a leak
# Restart the container periodically or fix the application

docker restart <container-id>

# Or rely on a restart policy
docker update --restart unless-stopped <container-id>
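One low-tech way to act on the "steadily growing" signal is to compare successive samples. The sketch below flags a series that rises in every sample; the numbers are hypothetical, standing in for per-minute MiB readings collected from `docker stats --no-stream`:

```shell
# Hypothetical memory samples (MiB) taken a minute apart;
# monotonic growth across all samples hints at a leak
samples="120 135 150 171 198"

echo "$samples" | awk '{
  for (i = 2; i <= NF; i++)
    if ($i <= $(i-1)) { print "stable"; exit }
  print "possible leak"
}'
# possible leak
```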

Conclusion

Resource limits and reservations are fundamental to stable, predictable containerized environments. By properly constraining memory and CPU usage, you prevent runaway containers from impacting other services and maintain fair resource sharing across your infrastructure. Start with conservative limits based on application requirements, monitor actual usage, and adjust based on data. Combine resource limits with health checks, restart policies, and proper monitoring for a robust, production-grade deployment strategy. Regular performance profiling and optimization ensure your containers run efficiently within their allocated resources. As your infrastructure grows, automated resource management and orchestration systems become increasingly valuable for maintaining optimal resource utilization.