Sticky Sessions Configuration in Load Balancers
Sticky sessions (session persistence) ensure that requests from the same client route to the same backend server for the duration of a session. This prevents session loss when applications maintain in-memory state rather than using centralized session storage. This guide covers cookie-based persistence in Nginx and HAProxy, IP hash techniques, app-specific cookies, and session replication alternatives.
Table of Contents
- Sticky Sessions Overview
- Cookie-Based Persistence in Nginx
- Cookie-Based Persistence in HAProxy
- IP Hash Load Balancing
- Application-Specific Cookies
- Session Replication
- Session Affinity Timeouts
- Testing Sticky Sessions
- Troubleshooting
Sticky Sessions Overview
Sticky sessions can be implemented through several mechanisms:
- Cookie-Based: Proxy sets/modifies cookie to route requests
- IP-Based: Hash client IP to determine server
- App-Cookie-Based: Use existing application cookie for routing
- Source IP + Port: Hash connection source and port
Limitations of sticky sessions:
- Prevents load distribution changes
- Makes server maintenance harder
- Reduces effective capacity
- Can produce uneven load (large clients behind one NAT, or long-lived sessions, pin to one backend)
- Session loss on server failure
Better alternatives:
- Distributed session storage (Redis, Memcached)
- Stateless application design
- Session database (PostgreSQL, MySQL)
- Centralized cache
Use sticky sessions only when necessary for stateful applications.
Cookie-Based Persistence in Nginx
Open-source Nginx has no built-in cookie-based persistence: the sticky directive below requires NGINX Plus or a third-party module (such as nginx-sticky-module-ng). Without it, routing can be approximated with map:
Using Sticky Module (if compiled in)
upstream backend {
least_conn;
server 192.168.1.100:8000;
server 192.168.1.101:8000;
server 192.168.1.102:8000;
sticky cookie srv_route expires=1h domain=.example.com path=/ httponly secure;
}
server {
listen 443 ssl http2;
server_name app.example.com;
ssl_certificate /etc/nginx/ssl/example.com.crt;
ssl_certificate_key /etc/nginx/ssl/example.com.key;
location / {
proxy_pass http://backend;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
Using Map-Based Routing (without module)
# Define upstream servers with identifiers
upstream backend_1 { server 192.168.1.100:8000; }
upstream backend_2 { server 192.168.1.101:8000; }
upstream backend_3 { server 192.168.1.102:8000; }
# Map the first hex character of the session cookie to a backend;
# the ranges cover every possible value, so no request falls through
map $cookie_session_id $session_backend {
    default   backend_1;   # no cookie yet
    ~*^[0-5]  backend_1;
    ~*^[6-9a] backend_2;
    ~*^[b-f]  backend_3;
}
# Build a Set-Cookie value only when the client has no session cookie
# (add_header skips headers whose value is an empty string)
map $cookie_session_id $new_session_cookie {
    default "";
    ""      "session_id=$request_id; Path=/; HttpOnly; Max-Age=3600";
}
server {
    listen 80;
    server_name app.example.com;
    location / {
        # Route based on the mapped backend
        proxy_pass http://$session_backend;
        add_header Set-Cookie $new_session_cookie always;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
Using Consistent Hash
Open-source Nginx ships a built-in option: the hash directive with the consistent flag (ketama hashing) inside the upstream block, e.g. hash $cookie_session_id consistent;. Alternatively, compute the selection in Lua:
# Lua-based hashing (requires the ngx_lua module / OpenResty)
upstream backend {
server 192.168.1.100:8000;
server 192.168.1.101:8000;
server 192.168.1.102:8000;
}
# Nginx Lua script for consistent hashing
location / {
set_by_lua_file $backend_pool /etc/nginx/lua/consistent_hash.lua $cookie_session_id;
proxy_pass http://$backend_pool;
}
# Create /etc/nginx/lua/consistent_hash.lua with the following:
-- CRC32 hash over the session ID (LuaJIT bit library)
-- Note: the modulo selection below is simple hashing, not true
-- consistent hashing; adding or removing a server remaps most sessions
local bit = require "bit"
local function crc32(data)
local CRC32_POLY = 0xEDB88320
local crc = 0xFFFFFFFF
for i = 1, #data do
crc = bit.bxor(crc, string.byte(data, i))
for _ = 1, 8 do
if bit.band(crc, 1) == 1 then
crc = bit.bxor(bit.rshift(crc, 1), CRC32_POLY)
else
crc = bit.rshift(crc, 1)
end
end
end
return bit.bxor(crc, 0xFFFFFFFF)
end
local session_id = ngx.arg[1]
local servers = {"192.168.1.100", "192.168.1.101", "192.168.1.102"}
local hash = crc32(session_id)
local selected = servers[hash % #servers + 1]
return selected .. ":8000"
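For contrast, true consistent hashing keeps most assignments stable when the server set changes. A minimal Python sketch of a hash ring with virtual nodes, using the backend IPs from this guide (illustrative; not the exact algorithm behind nginx's ketama implementation):

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Hash ring with virtual nodes: removing a server only remaps
    the sessions that were assigned to it."""

    def __init__(self, servers, vnodes=100):
        self.vnodes = vnodes
        self.ring = []  # sorted list of (hash, server) points
        for server in servers:
            self.add(server)

    def _hash(self, key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add(self, server):
        for i in range(self.vnodes):
            bisect.insort(self.ring, (self._hash(f"{server}#{i}"), server))

    def remove(self, server):
        self.ring = [(h, s) for h, s in self.ring if s != server]

    def get(self, key):
        # First ring point clockwise of the key's hash (wrapping around)
        idx = bisect.bisect(self.ring, (self._hash(key), "")) % len(self.ring)
        return self.ring[idx][1]

servers = ["192.168.1.100", "192.168.1.101", "192.168.1.102"]
ring = ConsistentHashRing(servers)
sessions = [f"session-{i}" for i in range(1000)]
before = {s: ring.get(s) for s in sessions}

ring.remove("192.168.1.101")
after = {s: ring.get(s) for s in sessions}

# Only sessions that pointed at the removed server move (~1/3 here)
moved = sum(1 for s in sessions if before[s] != after[s])
print(f"{moved} of {len(sessions)} sessions remapped")
```

With modulo hashing, the same removal would remap roughly two thirds of the sessions, because almost every hash value lands in a different bucket once the divisor changes.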
Cookie-Based Persistence in HAProxy
HAProxy provides native cookie-based persistence:
Basic Cookie Persistence
global
log stdout local0
stats socket /run/haproxy/admin.sock
defaults
mode http
timeout connect 5000
timeout client 50000
timeout server 50000
frontend web_in
bind *:80
default_backend web_servers
backend web_servers
balance roundrobin
# Enable cookie-based persistence
cookie SERVERID insert indirect secure httponly
server srv1 192.168.1.100:8000 check cookie srv1
server srv2 192.168.1.101:8000 check cookie srv2
server srv3 192.168.1.102:8000 check cookie srv3
Parameters:
- SERVERID: cookie name
- insert: add the cookie to the response if missing
- indirect: strip the cookie before forwarding the request to the server
- secure: set the Secure flag (HTTPS only)
- httponly: set the HttpOnly flag
- cookie srv1: per-server identifier stored in the cookie
Advanced Cookie Configuration
backend web_servers
balance roundrobin
# Cookie with domain, cache-control, and idle/absolute lifetimes
cookie SERVERID insert indirect secure httponly nocache domain .example.com maxidle 30m maxlife 1h
# Stick on an existing application cookie
# (the old appsession directive was removed in HAProxy 1.6)
stick-table type string len 52 size 100k expire 1h
stick on cookie(JSESSIONID)
server srv1 192.168.1.100:8000 check cookie srv1
server srv2 192.168.1.101:8000 check cookie srv2
Cookie with Failure Handling
backend web_servers
balance roundrobin
cookie SERVERID insert indirect httponly
# Primary servers
server srv1 192.168.1.100:8000 check cookie srv1
server srv2 192.168.1.101:8000 check cookie srv2
# Backup servers (if session lost)
server srv3 192.168.1.102:8000 check cookie srv3 backup
server srv4 192.168.1.103:8000 check cookie srv4 backup
Sticky Sessions with Stick Tables
backend web_servers
balance roundrobin
cookie SERVERID insert indirect secure httponly
stick-table type string len 32 size 100k expire 30m
stick on cookie(JSESSIONID)
server srv1 192.168.1.100:8000 check cookie srv1
server srv2 192.168.1.101:8000 check cookie srv2
server srv3 192.168.1.102:8000 check cookie srv3
IP Hash Load Balancing
IP hash (source IP based routing) provides persistence without cookies:
Nginx IP Hash
upstream backend {
ip_hash;
server 192.168.1.100:8000 weight=3;
server 192.168.1.101:8000 weight=2;
server 192.168.1.102:8000 weight=1;
}
server {
listen 80;
server_name app.example.com;
location / {
proxy_pass http://backend;
# Note: ip_hash keys on the connection source IP; if Nginx sits behind
# another proxy, use the realip module so the real client IP is hashed
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
IP hash characteristics:
- Deterministic: Same client IP always routes to same server
- No cookies required
- Survives proxy/NAT transitions (if behind same NAT)
- Server down causes remapping for ~1/N clients
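The determinism above can be illustrated with a short Python sketch that hashes the client's /24 prefix to a backend, similar in spirit to Nginx's ip_hash (which keys on the first three octets of an IPv4 address; the actual nginx hash function differs, and the weight= parameters are ignored here for brevity):

```python
import hashlib

SERVERS = ["192.168.1.100:8000", "192.168.1.101:8000", "192.168.1.102:8000"]

def pick_server(client_ip: str) -> str:
    """Hash the client's /24 prefix (first three octets) to a backend."""
    prefix = ".".join(client_ip.split(".")[:3])
    digest = int(hashlib.sha1(prefix.encode()).hexdigest(), 16)
    return SERVERS[digest % len(SERVERS)]

# Deterministic: the same client IP always maps to the same backend
assert pick_server("203.0.113.7") == pick_server("203.0.113.7")
# Clients behind one NAT (same /24) share a backend
assert pick_server("203.0.113.7") == pick_server("203.0.113.42")
print(pick_server("203.0.113.7"))
```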
HAProxy Source Hash
backend web_servers
balance source
server srv1 192.168.1.100:8000 check
server srv2 192.168.1.101:8000 check
server srv3 192.168.1.102:8000 check
With client connection tracking:
backend web_servers
balance source
# Track connections by source IP
stick-table type ip size 100k expire 1h
stick on src
server srv1 192.168.1.100:8000 check
server srv2 192.168.1.101:8000 check
Application-Specific Cookies
Route based on application session cookies:
HAProxy appsession
backend web_servers
# Stick on the existing JSESSIONID (Java) application cookie
# (the old appsession directive was removed in HAProxy 1.6)
stick-table type string len 52 size 100k expire 1h
stick on cookie(JSESSIONID)
server srv1 192.168.1.100:8000 check
server srv2 192.168.1.101:8000 check
server srv3 192.168.1.102:8000 check
Nginx with Application Cookie
# PHP's session cookie is PHPSESSID by default
map $cookie_phpsessid $php_backend {
    # single pool here; split on cookie-value patterns as needed
    default http://backend;
}
upstream backend {
server 192.168.1.100:8000;
server 192.168.1.101:8000;
server 192.168.1.102:8000;
}
server {
listen 80;
location / {
proxy_pass http://backend;
proxy_set_header X-Session-ID $cookie_phpsessid;
}
}
Route Based on Cookie Value
frontend web_in
bind *:80
# Extract customer tier from the session cookie
http-request set-var(sess.customer_id) req.cook(sessionid)
use_backend gold_servers if { var(sess.customer_id) -m reg -i ^gold_ }
use_backend silver_servers if { var(sess.customer_id) -m reg -i ^silver_ }
default_backend bronze_servers
backend gold_servers
balance roundrobin
server srv1 192.168.1.110:8000 check
server srv2 192.168.1.111:8000 check
backend silver_servers
balance roundrobin
server srv3 192.168.1.120:8000 check
backend bronze_servers
balance roundrobin
server srv4 192.168.1.130:8000 check
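The use_backend rules above amount to a case-insensitive prefix match on the cookie value. The same decision logic, sketched in Python (tier names, prefixes, and addresses taken from the config above):

```python
import re

# Tier pools from the HAProxy config above
TIER_BACKENDS = {
    "gold":   ["192.168.1.110:8000", "192.168.1.111:8000"],
    "silver": ["192.168.1.120:8000"],
    "bronze": ["192.168.1.130:8000"],
}

def backend_pool(session_cookie: str):
    """Mirror the use_backend rules: case-insensitive prefix match,
    falling through to the default (bronze) backend."""
    for tier in ("gold", "silver"):
        if re.match(rf"^{tier}_", session_cookie, re.IGNORECASE):
            return TIER_BACKENDS[tier]
    return TIER_BACKENDS["bronze"]  # default_backend

print(backend_pool("gold_a1b2"))  # gold pool
print(backend_pool("xyz-123"))    # no prefix match -> bronze
```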
Session Replication
Move away from sticky sessions by replicating sessions:
Redis-Based Session Storage
Configure application to use Redis:
# Install Redis
sudo apt install redis-server
sudo systemctl start redis-server
# Verify Redis
redis-cli ping
Example with Spring Boot (Java):
# application.yml
spring:
session:
store-type: redis
redis:
host: localhost
port: 6379
timeout: 2000ms
Example with Node.js (Express):
const express = require('express');
const session = require('express-session');
const RedisStore = require('connect-redis').default;
const { createClient } = require('redis');
const app = express();
const redisClient = createClient();
redisClient.connect().catch(console.error);
app.use(session({
store: new RedisStore({ client: redisClient }),
secret: 'secret-key',
resave: false,
saveUninitialized: false,
cookie: {
secure: true,
maxAge: 1800000
}
}));
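Stripped to its essentials, any of these stores is a keyed map with a TTL. A Python sketch using an in-memory dict as a stand-in for Redis (illustrative only; a real deployment needs the shared store so every app server sees the same data):

```python
import time

class SessionStore:
    """Dict-backed session store with per-session TTL (Redis stand-in)."""

    def __init__(self, ttl_seconds=1800):
        self.ttl = ttl_seconds
        self._data = {}  # session_id -> (expires_at, payload)

    def set(self, session_id, payload):
        self._data[session_id] = (time.time() + self.ttl, payload)

    def get(self, session_id):
        entry = self._data.get(session_id)
        if entry is None:
            return None
        expires_at, payload = entry
        if time.time() >= expires_at:  # lazy expiry, like a Redis TTL
            del self._data[session_id]
            return None
        return payload

store = SessionStore(ttl_seconds=1800)
store.set("abc123", {"user_id": 42})
print(store.get("abc123"))  # {'user_id': 42} -- from any app server
```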
Memcached Session Storage
Use Memcached for distributed session cache:
# Install Memcached
sudo apt install memcached
sudo systemctl start memcached
Configure application (PHP example):
<?php
ini_set('session.save_handler', 'memcached');
ini_set('session.save_path', 'localhost:11211');
session_start();
$_SESSION['user_id'] = 123;
?>
Database Session Storage
Store sessions in shared database:
-- Create sessions table
CREATE TABLE sessions (
id VARCHAR(255) PRIMARY KEY,
user_id INT,
data TEXT,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
expires_at TIMESTAMP,
INDEX(expires_at)
);
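Application code then reads and writes that table on each request. A minimal Python sketch using the standard-library sqlite3 module as a stand-in for the shared MySQL/PostgreSQL instance (sqlite has no ON UPDATE clause, so timestamps are set explicitly):

```python
import json
import sqlite3
from datetime import datetime, timedelta

conn = sqlite3.connect(":memory:")  # stand-in for the shared database
conn.execute("""
    CREATE TABLE sessions (
        id TEXT PRIMARY KEY,
        user_id INTEGER,
        data TEXT,
        expires_at TEXT
    )
""")

def save_session(session_id, user_id, data, ttl=timedelta(hours=1)):
    expires = (datetime.now() + ttl).isoformat(sep=" ")
    conn.execute(
        "INSERT OR REPLACE INTO sessions (id, user_id, data, expires_at) "
        "VALUES (?, ?, ?, ?)",
        (session_id, user_id, json.dumps(data), expires),
    )

def load_session(session_id):
    # ISO timestamps compare correctly as strings
    row = conn.execute(
        "SELECT data FROM sessions WHERE id = ? AND expires_at > ?",
        (session_id, datetime.now().isoformat(sep=" ")),
    ).fetchone()
    return json.loads(row[0]) if row else None

save_session("abc123", 42, {"cart": ["item-1"]})
print(load_session("abc123"))  # {'cart': ['item-1']}
```

An expired row simply stops matching the load query; a periodic DELETE on expires_at (hence the index in the schema above) keeps the table small.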
The session database itself can be load-balanced through HAProxy in TCP mode:
backend session_db
mode tcp
balance roundrobin
server db1 192.168.1.200:5432 check
server db2 192.168.1.201:5432 check
Session Affinity Timeouts
Configure session persistence durations:
Nginx Timeout
upstream backend {
least_conn;
keepalive 32;
keepalive_timeout 60s;
server 192.168.1.100:8000;
server 192.168.1.101:8000;
}
# Note: open-source Nginx has no dedicated affinity-timeout setting;
# persistence lifetime is governed by the sticky cookie's expiry, and
# ip_hash affinity lasts as long as the upstream server set is stable
server {
location / {
proxy_pass http://backend;
# Proxy timeouts (these bound individual requests, not session affinity)
proxy_read_timeout 30s;
proxy_send_timeout 30s;
proxy_connect_timeout 5s;
}
}
HAProxy Timeout
backend web_servers
balance roundrobin
# Cookie idle timeout 30 minutes, absolute lifetime 1 hour
cookie SERVERID insert indirect maxidle 30m maxlife 1h
# Stick table expires in 30 minutes
stick-table type ip size 100k expire 1800s
stick on src
timeout server 30s
timeout connect 5s
server srv1 192.168.1.100:8000 check inter 2000
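Affinity expiry typically combines two clocks: an idle timeout that resets on every request (HAProxy's maxidle cookie option) and an absolute lifetime that never does (maxlife). A simplified Python sketch of that expiry logic:

```python
import time

class AffinityEntry:
    """Sticky entry with an idle timeout (resets on use) and an
    absolute lifetime (never resets)."""

    def __init__(self, maxidle=1800, maxlife=3600, now=None):
        now = now if now is not None else time.time()
        self.created = now
        self.last_seen = now
        self.maxidle = maxidle
        self.maxlife = maxlife

    def expired(self, now):
        return (now - self.last_seen > self.maxidle
                or now - self.created > self.maxlife)

    def touch(self, now):
        """Record a request; returns False if affinity already expired."""
        if self.expired(now):
            return False
        self.last_seen = now
        return True

entry = AffinityEntry(maxidle=1800, maxlife=3600, now=0)
assert entry.touch(now=1000)      # within the idle window
assert entry.touch(now=2500)      # idle clock was reset at t=1000
assert not entry.touch(now=5000)  # expired: past the absolute lifetime
```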
Testing Sticky Sessions
Test cookie-based persistence:
# Extract the SERVERID cookie value from the cookie jar
COOKIE=$(curl -s -o /dev/null -c - http://app.example.com/ | awk '/SERVERID/ {print $NF}')
# Make multiple requests with same cookie
for i in {1..5}; do
curl -s -b "SERVERID=$COOKIE" http://app.example.com/ | grep -i server
done
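The loop above can also be scripted with Python's standard library, where an http.cookiejar-backed opener replays the session cookie between requests automatically. The URL and the X-Backend-Server header are placeholders to adapt to your deployment:

```python
import urllib.request
from http.cookiejar import CookieJar

def is_sticky(backends):
    """True when every observed backend identifier is identical."""
    return len(set(backends)) <= 1

def observe_backends(url, attempts=5):
    """Make repeated requests, replaying cookies between them, and
    collect a backend identifier from each response."""
    opener = urllib.request.build_opener(
        urllib.request.HTTPCookieProcessor(CookieJar()))
    seen = []
    for _ in range(attempts):
        with opener.open(url) as resp:
            # X-Backend-Server is a placeholder; use whatever header
            # your backends actually emit
            seen.append(resp.headers.get("X-Backend-Server", "unknown"))
    return seen

# Usage against a live deployment (placeholder URL):
#   print(is_sticky(observe_backends("http://app.example.com/")))
```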
Test IP hash routing:
# Multiple requests from same IP should go to same server
for i in {1..5}; do
curl -s http://app.example.com/ | head -1
done
# Test from different source addresses (each must be bound on a local
# interface; curl --interface cannot spoof arbitrary IPs)
for ip in 1.2.3.4 5.6.7.8 9.10.11.12; do
curl -s --interface $ip http://app.example.com/ | head -1
done
Verify session replication:
# Check Redis sessions
redis-cli
> KEYS *
> GET <key-from-KEYS-output>   # GET takes an exact key, not a pattern
# Check session count
> DBSIZE
# Monitor session access
redis-cli MONITOR
Troubleshooting
Check sticky session configuration:
# Nginx check
nginx -T | grep -A 10 "upstream"
# HAProxy check
haproxy -f /etc/haproxy/haproxy.cfg -c
echo "show backend" | socat - /run/haproxy/admin.sock
Monitor session cookies:
# Capture cookie traffic
tcpdump -A -s 1024 'tcp port 80' | grep -i "set-cookie"
# Follow the cookie exchange with curl (-v writes to stderr)
curl -sv -c cookies.txt http://app.example.com/ 2>&1 | head -20
cat cookies.txt
# Verify cookie attributes
curl -i http://app.example.com/ | grep -i "set-cookie"
Verify server routing:
# Add a backend-identifying response header (e.g. X-Backend-Server) in the
# proxy, then make repeated requests and confirm it stays constant
for i in {1..10}; do
curl -s -b "SERVERID=srv1" http://app.example.com/ | grep -i "X-Backend-Server"
done
# Check HAProxy stats
curl http://localhost:8404/stats | grep -i server
Test session loss scenarios:
# Kill a backend server
ssh 192.168.1.100 "sudo systemctl stop application"
# Attempt request with existing session
curl -b "SERVERID=srv1" http://app.example.com/
# Verify failover to backup
curl -b "SERVERID=srv1" http://app.example.com/ | grep -i server
Conclusion
Sticky sessions enable stateful application deployments but limit scalability and operational flexibility. While cookie-based and IP hash persistence solve immediate session problems, distributed session storage with Redis or Memcached provides superior scalability and resilience. Evaluate stateless application architecture as the preferred solution, implementing sticky sessions only when necessary and with clear timeout policies and backup mechanisms.