Linux Kernel Tuning for Web Servers

Modern high-performance web servers require careful kernel-level tuning to handle millions of concurrent connections and deliver consistent response times. Linux kernel parameters directly impact connection handling, memory management, and network performance. This guide covers essential kernel tuning parameters for web server optimization.

Table of Contents

  1. TCP Network Stack Optimization
  2. Connection Handling Parameters
  3. Buffer and Memory Tuning
  4. Network Device Optimization
  5. Load Balancer Specific Tuning
  6. Monitoring Kernel Metrics
  7. Persistent Configuration
  8. Conclusion

TCP Network Stack Optimization

TCP Buffer Sizing

# View current TCP buffer settings
sysctl net.ipv4.tcp_rmem
sysctl net.ipv4.tcp_wmem

# Output format: min default max (bytes)
# Typical defaults: rmem = 4096 131072 6291456, wmem = 4096 16384 4194304

# Moderate web server settings (raise the default; the 2MB cap bounds
# per-socket memory at high connection counts)
sudo sysctl -w net.ipv4.tcp_rmem='4096 65536 2097152'    # 2MB max read
sudo sysctl -w net.ipv4.tcp_wmem='4096 65536 2097152'    # 2MB max write

# For high-performance scenarios
sudo sysctl -w net.ipv4.tcp_rmem='4096 262144 8388608'   # 8MB max
sudo sysctl -w net.ipv4.tcp_wmem='4096 262144 8388608'   # 8MB max

# Socket buffer settings
sudo sysctl -w net.core.rmem_default=131072
sudo sysctl -w net.core.wmem_default=131072
sudo sysctl -w net.core.rmem_max=8388608
sudo sysctl -w net.core.wmem_max=8388608
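A quick sanity check for the maximum buffer sizes is the bandwidth-delay product (BDP): to keep a connection's pipe full, the buffer should hold at least one BDP of in-flight data. A sketch with assumed example figures (1 Gbit/s path, 20 ms round-trip time):

```shell
# Rule of thumb: max buffer >= bandwidth-delay product (BDP)
# Assumed example path: 1 Gbit/s bandwidth, 20 ms round-trip time
BANDWIDTH_BPS=1000000000   # bits per second
RTT_MS=20                  # round-trip time, milliseconds

# BDP (bytes) = bandwidth (bits/s) * RTT (s) / 8
BDP_BYTES=$(( BANDWIDTH_BPS * RTT_MS / 1000 / 8 ))
echo "BDP = ${BDP_BYTES} bytes"   # 2500000 bytes, ~2.4 MB
```

The 8MB maximum used above covers this example path with headroom; paths with higher bandwidth or latency need proportionally larger maximums.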

TCP Window Scaling

# Enable TCP window scaling (allows larger windows)
sudo sysctl -w net.ipv4.tcp_window_scaling=1

# Needed on paths with a high bandwidth-delay product (bandwidth x RTT)
# Allows receive windows larger than 64KB (on by default on modern kernels)

# View current window size
cat /proc/sys/net/ipv4/tcp_window_scaling

TCP Timestamps and SACK

# Enable TCP timestamps (accurate RTT measurement, PAWS protection)
sudo sysctl -w net.ipv4.tcp_timestamps=1

# Enable Selective ACK (improves recovery from loss)
sudo sysctl -w net.ipv4.tcp_sack=1

# Enable Duplicate SACK (helps identify retransmissions)
sudo sysctl -w net.ipv4.tcp_dsack=1

Connection Handling Parameters

Connection Queue Limits

# System-wide cap on the accept-queue backlog passed to listen()
sudo sysctl -w net.core.somaxconn=65535

# Limit on half-open (SYN_RECV) connections awaiting the handshake ACK
sudo sysctl -w net.ipv4.tcp_max_syn_backlog=65535

# The effective accept queue is min(application listen() backlog, somaxconn),
# so raise the backlog parameter in your nginx/apache listen directive as well
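To see the limit the kernel actually granted, inspect the listener itself: for sockets in LISTEN state, ss reports the current accept-queue depth in the Recv-Q column and the effective backlog in Send-Q.

```shell
# For LISTEN sockets, ss shows:
#   Recv-Q = connections currently waiting to be accept()ed
#   Send-Q = effective backlog = min(listen() argument, somaxconn)
ss -lnt
```

A Recv-Q that regularly approaches Send-Q means the application is accepting too slowly or the backlog is too small.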

TIME_WAIT Socket Management

# Reuse TIME_WAIT sockets for new outgoing connections
# (safe; requires tcp_timestamps, applies only to client-side connects)
sudo sysctl -w net.ipv4.tcp_tw_reuse=1

# Do NOT enable tcp_tw_recycle: it breaks clients behind NAT, and the
# parameter was removed from the kernel entirely in Linux 4.12
sudo sysctl -w net.ipv4.tcp_tw_recycle=0  # No-op/error on 4.12+ kernels

# The TIME_WAIT duration itself (60 seconds, TCP_TIMEWAIT_LEN) is a
# compile-time constant and cannot be changed via sysctl; applications
# can shorten the close with SO_LINGER (with data-loss caveats)

Connection Recycling

# Shorten the FIN-WAIT-2 timeout (default 60s) so half-closed connections
# release resources sooner (note: this does not shorten TIME_WAIT)
sudo sysctl -w net.ipv4.tcp_fin_timeout=30

# Data retransmissions on an established connection before giving up (default 15)
sudo sysctl -w net.ipv4.tcp_retries2=8

# SYN retransmissions before an outgoing connect() fails (default 6)
sudo sysctl -w net.ipv4.tcp_syn_retries=3
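The retry counts map to wall-clock time via exponential backoff: the retransmission timer starts at roughly 1 second and doubles on each retry, so tcp_syn_retries=3 abandons a connect after about 15 seconds (versus ~127 seconds at the default of 6). A sketch of the arithmetic:

```shell
# Timeout for an outgoing connect() under exponential backoff:
# initial RTO is ~1s and doubles after each retransmission
SYN_RETRIES=3
TIMEOUT=0
RTO=1
i=0
while [ "$i" -le "$SYN_RETRIES" ]; do   # initial SYN + each retry
  TIMEOUT=$(( TIMEOUT + RTO ))
  RTO=$(( RTO * 2 ))
  i=$(( i + 1 ))
done
echo "connect() gives up after ~${TIMEOUT}s"   # 15 seconds (1+2+4+8)
```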

Buffer and Memory Tuning

UDP Buffer Settings

# Minimum guaranteed UDP buffer sizes (for applications using UDP)
sudo sysctl -w net.ipv4.udp_rmem_min=4096
sudo sysctl -w net.ipv4.udp_wmem_min=4096

# UDP has no per-protocol default-size knobs; new UDP sockets take their
# defaults from net.core.rmem_default / net.core.wmem_default set earlier

Memory Management

# Maximum socket receive buffer size across system
sudo sysctl -w net.core.rmem_max=134217728   # 128MB

# Maximum socket write buffer size across system
sudo sysctl -w net.core.wmem_max=134217728   # 128MB

# Max packets queued per CPU on the input side when the NIC delivers
# packets faster than the kernel can process them
sudo sysctl -w net.core.netdev_max_backlog=65535

Network Device Optimization

Ring Buffers

# Check current ring buffer settings
ethtool -g eth0

# Increase ring buffers for high traffic
sudo ethtool -G eth0 rx 4096 tx 4096

# Persistent configuration with ifupdown (Debian/Ubuntu using
# /etc/network/interfaces): the post-up line belongs inside the
# interface's iface stanza, e.g.
#
#   iface eth0 inet dhcp
#       post-up ethtool -G eth0 rx 4096 tx 4096

# Netplan does not expose ring-buffer sizes; on systemd-networkd systems,
# set RxBufferSize=/TxBufferSize= in a systemd.link file, or run the
# ethtool command at boot from a oneshot systemd unit

Offloading Options

# Check current offload settings
ethtool -k eth0

# Enable offloading features (if supported)
sudo ethtool -K eth0 gso on       # Generic Segmentation Offload
sudo ethtool -K eth0 gro on       # Generic Receive Offload
sudo ethtool -K eth0 tso on       # TCP Segmentation Offload
sudo ethtool -K eth0 lro off      # Large Receive Offload (off: can break routing/bridging)

# Persistent offload configuration via a oneshot systemd unit
sudo tee /etc/systemd/system/ethtool-offload.service > /dev/null <<'EOF'
[Unit]
Description=Configure NIC offload settings
Before=network.target

[Service]
Type=oneshot
ExecStart=/sbin/ethtool -K eth0 gso on gro on tso on

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl enable ethtool-offload.service

Load Balancer Specific Tuning

Connection Port Range

# Extend ephemeral port range (for many outbound connections)
sudo sysctl -w net.ipv4.ip_local_port_range='1024 65535'

# Note: the default range on modern kernels is 32768-60999 (~28k ports);
# widening it as above greatly increases the concurrent outbound
# connections possible per (source IP, destination) pair

# View current range
cat /proc/sys/net/ipv4/ip_local_port_range

TCP Keepalive

# Idle time (seconds) before the first keepalive probe is sent
sudo sysctl -w net.ipv4.tcp_keepalive_time=600

# Interval (seconds) between unanswered probes
sudo sysctl -w net.ipv4.tcp_keepalive_intvl=60

# Number of probes before dropping connection
sudo sysctl -w net.ipv4.tcp_keepalive_probes=3
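Taken together, these three knobs bound how long a dead peer can linger: the first probe fires after tcp_keepalive_time of idleness, then up to tcp_keepalive_probes probes are sent tcp_keepalive_intvl apart. A quick calculation with the values above:

```shell
# Worst-case dead-peer detection time =
#   keepalive_time + keepalive_intvl * keepalive_probes
KEEPALIVE_TIME=600     # seconds idle before the first probe
KEEPALIVE_INTVL=60     # seconds between probes
KEEPALIVE_PROBES=3     # unanswered probes before the connection is dropped

DETECT_SECONDS=$(( KEEPALIVE_TIME + KEEPALIVE_INTVL * KEEPALIVE_PROBES ))
echo "Dead peers dropped after ~${DETECT_SECONDS}s"   # 780 seconds (13 minutes)
```

With the kernel defaults (7200/75/9) the same formula gives over two hours, which is why load balancers usually tighten these values.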

FIN Handling

# Reuse TIME_WAIT sockets for new outgoing connections (as above)
sudo sysctl -w net.ipv4.tcp_tw_reuse=1

# Reduce FIN timeout for quicker resource cleanup
sudo sysctl -w net.ipv4.tcp_fin_timeout=30

Monitoring Kernel Metrics

Network Statistics Monitoring

# Real-time network statistics
watch -n 1 'cat /proc/net/dev'

# Detailed IP statistics
cat /proc/net/netstat

# Summarize TCP connection states
watch -n 1 'ss -s'

# Count connections by state (state is the first column of ss output)
ss -tan | awk 'NR > 1 { print $1 }' | sort | uniq -c

# Monitor socket buffer usage
cat /proc/net/sockstat
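The "TCP:" line of sockstat packs several counters (sockets in use, orphaned, in TIME_WAIT, allocated, and memory pages) into one row; a short awk pass splits it into key=value pairs for easier scripting:

```shell
# Split the "TCP:" line of /proc/net/sockstat into key=value pairs
# (fields come in name/value pairs after the leading "TCP:" tag)
awk '/^TCP:/ { for (i = 2; i < NF; i += 2) printf "%s=%s\n", $i, $(i+1) }' \
    /proc/net/sockstat
```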

Connection Diagnostics

# Check for TIME_WAIT accumulation
ss -tan | grep TIME-WAIT | wc -l

# Identify port exhaustion risk: compare the available range to the
# number of connections actually open involving a given service port
cat /proc/sys/net/ipv4/ip_local_port_range
ss -tan | grep -c ':80'

# Check for SYN-backlog and accept-queue drops; steadily growing counters
# mean the backlog limits configured above are too small
# (nstat, from iproute2, prints TcpExt counters as name/value pairs)
nstat -az | grep -E 'ListenOverflows|ListenDrops|TCPBacklogDrop'

Persistent Configuration

sysctl Configuration File

# Create persistent configuration
sudo tee /etc/sysctl.d/99-webserver-tuning.conf > /dev/null <<'EOF'
# TCP Buffer Optimization
net.ipv4.tcp_rmem = 4096 262144 8388608
net.ipv4.tcp_wmem = 4096 262144 8388608

# Connection Handling
net.core.somaxconn = 65535
net.ipv4.tcp_max_syn_backlog = 65535

# TIME_WAIT Optimization
net.ipv4.tcp_tw_reuse = 1

# Performance Parameters
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_sack = 1
net.ipv4.tcp_dsack = 1

# Memory Settings
net.core.rmem_max = 134217728
net.core.wmem_max = 134217728
net.core.netdev_max_backlog = 65535

# TCP Timeouts
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 60
net.ipv4.tcp_keepalive_probes = 3

# Port Range
net.ipv4.ip_local_port_range = 1024 65535
EOF

# Apply configuration
sudo sysctl -p /etc/sysctl.d/99-webserver-tuning.conf

# Verify the applied values
sysctl net.ipv4.tcp_rmem net.core.somaxconn
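Beyond spot-checking individual keys, a small drift check can compare every entry in the tuning file against the running kernel. A sketch (the check_sysctl helper is hypothetical, not a standard tool; it reads /proc/sys directly so it works even without the sysctl binary):

```shell
# check_sysctl FILE - report drift between FILE and the running kernel
check_sysctl() {
  grep -E '^[a-z]' "$1" | while IFS='=' read -r key want; do
    key=$(printf '%s' "$key" | tr -d ' ')            # strip spaces around key
    want=$(printf '%s' "$want" | sed 's/^ *//;s/ *$//')
    # Translate dotted key to its /proc/sys path; multi-value files use tabs
    have=$(tr '\t' ' ' < "/proc/sys/$(printf '%s' "$key" | tr . /)" 2>/dev/null)
    if [ "$have" = "$want" ]; then
      printf '%-40s OK\n' "$key"
    else
      printf '%-40s MISMATCH (live: %s)\n' "$key" "$have"
    fi
  done
}

check_sysctl /etc/sysctl.d/99-webserver-tuning.conf 2>/dev/null || true
```

Running this after a reboot confirms the persistent configuration actually took effect and that no later file overrode it.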

Validation and Testing

# Before applying tuning
ab -n 10000 -c 1000 http://example.com/

# Apply tuning
sudo sysctl -p /etc/sysctl.d/99-webserver-tuning.conf

# After tuning - compare results
ab -n 10000 -c 1000 http://example.com/

# Test with realistic workload
wrk -t 4 -c 1000 -d 30s http://example.com/

Conclusion

Linux kernel tuning directly impacts web server performance, enabling systems to handle higher traffic, reduce latency, and use hardware efficiently. By carefully configuring TCP buffers, connection limits, and network device parameters, operators can tailor infrastructure to their specific workload characteristics. Regular monitoring of kernel metrics catches degradation early and guides further optimization. Combined with application-level tuning and capacity planning, kernel optimization helps web servers deliver responsive, reliable service under demanding production loads.