Server-Side Caching Strategies Comparison

Choosing the right server-side caching strategy can reduce response times by orders of magnitude, but each approach — Redis, Varnish, Nginx, Memcached — has different trade-offs. This guide compares caching solutions for Linux web servers, covering setup, cache invalidation patterns, TTL strategies, and when to use each.

Prerequisites

  • Linux (Ubuntu 20.04+/Debian 11+ or CentOS 8+/Rocky Linux 8+)
  • Nginx installed
  • Root or sudo access
  • A web application to cache

Caching Layers Overview

| Layer              | Tool                      | What It Caches                    | Typical TTL      |
|--------------------|---------------------------|-----------------------------------|------------------|
| Web server         | Nginx FastCGI/Proxy cache | Full HTTP responses               | Minutes to hours |
| HTTP reverse proxy | Varnish                   | Full HTTP responses               | Seconds to days  |
| Application        | Redis                     | Objects, sessions, HTML fragments | Seconds to days  |
| Application        | Memcached                 | Simple objects, sessions          | Seconds to hours |
| Browser            | Cache-Control headers     | Static assets                     | Days to years    |

Nginx Caching

Nginx can cache responses from FastCGI (PHP) or upstream proxies:

FastCGI cache (for PHP applications):

# Create cache directory
sudo mkdir -p /var/cache/nginx
sudo chown www-data:www-data /var/cache/nginx
# In http {} block
fastcgi_cache_path /var/cache/nginx/fcgi
    levels=1:2
    keys_zone=FCGI_CACHE:100m
    max_size=2g
    inactive=60m
    use_temp_path=off;

server {
    listen 443 ssl;
    server_name www.yourdomain.com;

    # Cache key: protocol + method + host + URI
    fastcgi_cache_key "$scheme$request_method$host$request_uri";

    # Cache valid responses
    fastcgi_cache FCGI_CACHE;
    fastcgi_cache_valid 200 10m;
    fastcgi_cache_valid 301 302 1m;
    fastcgi_cache_valid 404 1m;

    # Cache even if backend is down (serve stale)
    fastcgi_cache_use_stale error timeout updating http_500 http_503;
    fastcgi_cache_background_update on;
    fastcgi_cache_lock on;

    # Skip cache for these conditions
    set $skip_cache 0;
    if ($request_method = POST) { set $skip_cache 1; }
    if ($query_string != "") { set $skip_cache 1; }
    if ($cookie_session != "") { set $skip_cache 1; }
    if ($request_uri ~* "^/admin|^/wp-admin|^/account") { set $skip_cache 1; }

    fastcgi_cache_bypass $skip_cache;
    fastcgi_no_cache $skip_cache;

    add_header X-Cache-Status $upstream_cache_status;

    location ~ \.php$ {
        fastcgi_pass unix:/run/php/php8.2-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
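On disk, Nginx names each cache file after the MD5 of the cache key, and `levels=1:2` builds two directory levels from the tail of that hash (the last hex character, then the two before it). A small Python sketch of that mapping, using a hypothetical key in the `$scheme$request_method$host$request_uri` format above:

```python
import hashlib

def nginx_cache_path(root, key):
    """Map an Nginx cache key to its on-disk path for levels=1:2.

    The file name is the hex MD5 of the key; the first directory level
    is the last hex character, the second is the two characters before it.
    """
    digest = hashlib.md5(key.encode()).hexdigest()
    return f"{root}/{digest[-1]}/{digest[-3:-1]}/{digest}"

# Hypothetical key matching "$scheme$request_method$host$request_uri"
key = "httpsGETwww.yourdomain.com/article/123"
print(nginx_cache_path("/var/cache/nginx/fcgi", key))
```

The key text itself is stored in the cache file's header, which is why locating a cached URL usually means grepping the cache directory for `KEY:` rather than reading file names.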

Proxy cache (for upstream applications):

proxy_cache_path /var/cache/nginx/proxy
    levels=1:2
    keys_zone=PROXY_CACHE:50m
    max_size=1g
    inactive=60m;

location / {
    proxy_cache PROXY_CACHE;
    proxy_cache_key "$scheme$host$request_uri";
    proxy_cache_valid 200 5m;
    proxy_cache_use_stale error timeout updating;

    # Purging specific URLs requires the third-party ngx_cache_purge module,
    # typically in its own location block with a regex capture ($1):
    # proxy_cache_purge PROXY_CACHE "$scheme$host$1";

    proxy_pass http://127.0.0.1:3000;
}

Redis for Application Caching

Redis is the go-to for application-level caching of computed data:

sudo apt install -y redis-server

# Configure Redis for caching use
sudo tee -a /etc/redis/redis.conf << 'EOF'
# Evict least-recently-used keys when memory is full
maxmemory 512mb
maxmemory-policy allkeys-lru
# Disable persistence for pure caching
save ""
appendonly no
EOF

sudo systemctl restart redis-server

Redis caching patterns in application code:

# Python example with redis-py
import redis
import json

r = redis.Redis(host='localhost', port=6379, decode_responses=True)

def get_cached_data(key, fetch_func, ttl=300):
    """Cache-aside pattern"""
    # Try cache first
    cached = r.get(key)
    if cached:
        return json.loads(cached)

    # Cache miss - fetch from source
    data = fetch_func()
    r.setex(key, ttl, json.dumps(data))
    return data

# Usage
def get_user(user_id):
    return get_cached_data(
        f"user:{user_id}",
        lambda: db.query("SELECT * FROM users WHERE id = %s", (user_id,)),  # parameterized, not an f-string
        ttl=600  # 10 minutes
    )

# Cache invalidation on update
def update_user(user_id, data):
    db.execute("UPDATE users SET ...", data)
    r.delete(f"user:{user_id}")  # Invalidate cache

# Batch invalidation with pattern
def invalidate_all_users():
    # SCAN iterates incrementally; KEYS would block Redis on large keyspaces
    for key in r.scan_iter("user:*"):
        r.delete(key)

Redis for rate limiting:

def is_rate_limited(ip, limit=100, window=60):
    key = f"rate:{ip}"
    count = r.incr(key)
    if count == 1:
        # Set the expiry only when the window opens; setting it on every
        # request would keep pushing the reset time into the future
        r.expire(key, window)
    return count > limit
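The same fixed-window logic can be exercised without a Redis server. A minimal in-memory sketch with an injectable clock (class and names are illustrative): the window opens at the first request for a key and the counter resets once the window elapses.

```python
import time

class FixedWindowLimiter:
    """In-memory sketch of a fixed-window rate limiter (no Redis needed)."""

    def __init__(self, limit, window, clock=time.monotonic):
        self.limit = limit
        self.window = window
        self.clock = clock
        self.counters = {}  # key -> (window_start, count)

    def is_rate_limited(self, key):
        now = self.clock()
        start, count = self.counters.get(key, (now, 0))
        if now - start >= self.window:
            start, count = now, 0  # window elapsed: reset the counter
        count += 1
        self.counters[key] = (start, count)
        return count > self.limit

# 3 requests allowed per 60 s window
limiter = FixedWindowLimiter(limit=3, window=60)
results = [limiter.is_rate_limited("1.2.3.4") for _ in range(5)]
print(results)  # [False, False, False, True, True]
```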

Varnish HTTP Cache

Varnish is a dedicated HTTP reverse proxy cache that excels at caching full pages:

sudo apt install -y varnish

# Configure Varnish to listen on port 80 via a systemd override
sudo systemctl edit varnish
# In the editor, add (the empty ExecStart= clears the packaged one):
[Service]
ExecStart=
ExecStart=/usr/sbin/varnishd -j unix,user=vcache -F -a :80 -a :8443,PROXY -p feature=+http2 -f /etc/varnish/default.vcl -S /etc/varnish/secret -s malloc,256m

sudo systemctl restart varnish
# /etc/varnish/default.vcl
vcl 4.1;

import std;

backend default {
    .host = "127.0.0.1";
    .port = "8080";          # Nginx backend
    .connect_timeout = 5s;
    .first_byte_timeout = 60s;
    .between_bytes_timeout = 10s;
}

sub vcl_recv {
    # Normalize Accept-Encoding
    if (req.http.Accept-Encoding) {
        if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg)$") {
            unset req.http.Accept-Encoding;
        } elsif (req.http.Accept-Encoding ~ "br") {
            set req.http.Accept-Encoding = "br";
        } elsif (req.http.Accept-Encoding ~ "gzip") {
            set req.http.Accept-Encoding = "gzip";
        }
    }

    # Don't cache POST, PUT, DELETE
    if (req.method != "GET" && req.method != "HEAD") {
        return(pass);
    }

    # Don't cache authenticated requests
    if (req.http.Authorization || req.http.Cookie ~ "session|auth|logged_in") {
        return(pass);
    }

    # Strip marketing query parameters that don't affect content
    set req.url = regsuball(req.url, "\?(utm_[^&]+&?)+", "?");
    set req.url = regsub(req.url, "\?$", "");

    return(hash);
}

sub vcl_backend_response {
    # Default TTL
    set beresp.ttl = 5m;
    set beresp.grace = 1h;   # Serve stale for 1h if backend is down

    # Cache HTML pages for 5 minutes
    if (beresp.http.Content-Type ~ "text/html") {
        set beresp.ttl = 5m;
    }

    # Cache static assets longer
    if (bereq.url ~ "\.(css|js|png|jpg|woff2)$") {
        set beresp.ttl = 24h;
    }

    # Don't cache errors
    if (beresp.status >= 500) {
        set beresp.ttl = 0s;
        set beresp.uncacheable = true;
    }
}

sub vcl_deliver {
    # Add cache status header
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT";
    } else {
        set resp.http.X-Cache = "MISS";
    }
    set resp.http.X-Cache-Hits = obj.hits;
}

Cache purging via Varnish:

# Purge a specific URL
varnishadm ban "req.url ~ ^/article/123"

# Purge all cached content
varnishadm ban "req.url ~ ."

# Monitor Varnish stats
varnishstat -1 | grep -E 'MAIN.cache_hit|MAIN.cache_miss'

Memcached for Session and Object Caching

Memcached is simpler than Redis but extremely fast for basic key-value caching:

sudo apt install -y memcached

# Configure in /etc/memcached.conf
# -m 256 (256 MB memory)
# -l 127.0.0.1 (listen on localhost only)
sudo systemctl enable --now memcached

Use Memcached for PHP session storage:

; /etc/php/8.2/fpm/php.ini (requires the php-memcached extension)
session.save_handler = memcached
session.save_path = "127.0.0.1:11211"

Compare Redis vs Memcached:

| Feature           | Redis                                     | Memcached                  |
|-------------------|-------------------------------------------|----------------------------|
| Data structures   | Strings, lists, sets, hashes, sorted sets | Strings only               |
| Persistence       | Optional                                  | No                         |
| Replication       | Yes                                       | No                         |
| Pub/Sub           | Yes                                       | No                         |
| Clustering        | Yes                                       | Basic                      |
| Memory efficiency | Lower                                     | Higher for simple values   |
| Best for          | Complex caching, queues, pub/sub          | Simple key-value, sessions |

Cache Invalidation Patterns

Cache-aside (lazy loading):

# Application checks cache first, loads from DB on miss
# Pros: only caches requested data
# Cons: first request is slow (cold start)

Write-through:

# Write to cache and DB simultaneously
# Pros: cache always current
# Cons: writes are slower, cache may store unused data

Write-behind (write-back):

# Write to cache first, DB update is async
# Pros: fastest writes
# Cons: risk of data loss if cache fails before DB write
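The trade-offs above can be made concrete with a toy in-memory store (class and names are illustrative): cache-aside fills the cache on read misses, while write-through fills it at write time, so a read that follows a write never touches the backing store.

```python
class Database:
    """Stand-in backing store; counts reads to show what each pattern saves."""
    def __init__(self):
        self.rows = {}
        self.reads = 0
    def get(self, key):
        self.reads += 1
        return self.rows.get(key)
    def put(self, key, value):
        self.rows[key] = value

db = Database()
cache = {}

def read_cache_aside(key):
    # Check the cache first; on a miss, load from the DB and populate
    if key in cache:
        return cache[key]
    value = db.get(key)
    cache[key] = value
    return value

def write_through(key, value):
    # Write to the DB and the cache in the same operation
    db.put(key, value)
    cache[key] = value

write_through("user:1", {"name": "Ada"})
read_cache_aside("user:1")
read_cache_aside("user:1")
print(db.reads)  # 0 -- write-through kept the cache current
```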

Tag-based invalidation with Redis:

# Store tags with cached items
redis-cli hset "tags:article:123" "page:/article/123" "1"
redis-cli hset "tags:article:123" "listing:/articles" "1"

# Invalidate all pages related to an article
for key in $(redis-cli hkeys "tags:article:123"); do
    redis-cli del "$key"
done
redis-cli del "tags:article:123"
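The redis-cli loop above sketches the idea; the same bookkeeping works in application code, here with plain dicts standing in for the Redis hashes (class and names are illustrative):

```python
from collections import defaultdict

class TaggedCache:
    """In-memory sketch of tag-based invalidation."""
    def __init__(self):
        self.store = {}               # cache key -> value
        self.tags = defaultdict(set)  # tag -> set of cache keys

    def set(self, key, value, tags=()):
        self.store[key] = value
        for tag in tags:
            self.tags[tag].add(key)

    def invalidate_tag(self, tag):
        # Delete every cached entry carrying the tag, then the tag itself
        for key in self.tags.pop(tag, set()):
            self.store.pop(key, None)

cache = TaggedCache()
cache.set("page:/article/123", "<html>...</html>", tags=["article:123"])
cache.set("listing:/articles", "<html>...</html>", tags=["article:123"])
cache.invalidate_tag("article:123")
print(len(cache.store))  # 0
```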

TTL Strategy Guide

| Content Type                | Recommended TTL          | Invalidation Strategy |
|-----------------------------|--------------------------|-----------------------|
| Static assets (versioned)   | 1 year                   | Filename versioning   |
| Static assets (unversioned) | 1 day                    | Manual purge          |
| HTML pages (dynamic)        | 5-15 minutes             | Event-based purge     |
| API responses (read-heavy)  | 1-5 minutes              | TTL expiry            |
| User-specific data          | No cache (or per-user)   | Key-based             |
| Database query results      | 1-60 minutes             | Event invalidation    |
| Rate limit counters         | Seconds (sliding window) | TTL expiry            |
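One refinement worth applying to any of the TTLs above: keys written in the same burst expire in the same instant, and the backend then absorbs a synchronized regeneration spike. Adding random jitter to each TTL spreads the expiries out; a minimal sketch (function name and the 10% spread are illustrative):

```python
import random

def jittered_ttl(base_ttl, spread=0.1, rng=random.random):
    """Return base_ttl +/- up to spread*base_ttl, so keys written together
    do not all expire at once (avoids a regeneration stampede)."""
    return int(base_ttl * (1 + spread * (2 * rng() - 1)))

# e.g. a 600 s TTL lands somewhere in [540, 660] s
print(jittered_ttl(600))
```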

Benchmarking Caches

# Benchmark Nginx cache hit rate ($upstream_cache_status must be in your log_format)
grep -c "MISS" /var/log/nginx/access.log
grep -c "HIT" /var/log/nginx/access.log

# Redis benchmark
redis-benchmark -q -n 100000 -c 50

# Varnish hit rate
varnishstat -1 -f MAIN.cache_hit -f MAIN.cache_miss | awk '
{total += $2}
/cache_hit/ {hits = $2}
END {printf "Hit rate: %.2f%%\n", (hits/total)*100}'

# AB testing with/without cache
ab -n 1000 -c 50 https://www.yourdomain.com/ | grep "Requests per second"
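The awk one-liner above reduces to simple arithmetic; the same calculation as a Python helper is handy when pulling counters from several caches into one report (the numbers below are illustrative):

```python
def hit_rate(hits, misses):
    """Cache hit rate as a percentage, or None when there is no traffic."""
    total = hits + misses
    if total == 0:
        return None
    return 100.0 * hits / total

print(f"Hit rate: {hit_rate(8500, 1500):.2f}%")  # Hit rate: 85.00%
```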

Choosing the Right Solution

  • Small site, simple PHP: Nginx FastCGI cache - zero dependencies, easy setup
  • High-traffic pages, complex invalidation: Varnish - purpose-built for HTTP caching
  • Microservices, sessions, queues: Redis - versatile, supports complex data structures
  • Pure session/object cache: Memcached - simpler and slightly faster than Redis for basic use cases
  • Multi-tier: Combine Nginx (static) + Varnish (pages) + Redis (objects/sessions)

Troubleshooting

Nginx cache not being used:

# Check cache status header
curl -I https://yourdomain.com | grep X-Cache-Status
# Should show HIT on second request

# Check cache directory
ls -la /var/cache/nginx/fcgi/

# Check skip_cache conditions
grep skip_cache /etc/nginx/sites-available/your-site

Varnish not caching:

# Check why requests aren't being cached (TxHeader is Varnish 3 syntax;
# modern Varnish uses RespHeader and the eq string operator)
varnishlog -q 'RespHeader:X-Cache eq "MISS"' | head -50

# Common reasons: Set-Cookie in response, no-cache headers
varnishlog -g request -q 'BerespHeader:Set-Cookie'

Redis evicting too aggressively:

redis-cli info stats | grep evicted_keys
# Increase maxmemory or change eviction policy
redis-cli config set maxmemory 1gb

Conclusion

Effective server-side caching requires matching the tool to the problem: Nginx FastCGI caching for PHP applications, Varnish for complex HTTP caching rules and purging, and Redis for application-level object and session caching. The biggest performance gains come from caching at the highest level possible — a full-page Varnish cache hit is orders of magnitude faster than an application cache hit. Start with Nginx or Varnish for page caching, add Redis for sessions and computed data, and measure hit rates continuously to validate your TTL choices.