SSL Termination Strategies Comparison

SSL/TLS termination is the process of decrypting HTTPS traffic at an edge proxy, then communicating with backend services using unencrypted or separately encrypted connections. Different termination strategies offer varying security, performance, and operational trade-offs. This guide compares SSL offloading, passthrough, re-encryption, and hybrid approaches with performance benchmarks and security considerations.

Table of Contents

  1. SSL Termination Strategies
  2. SSL Offloading
  3. SSL Passthrough
  4. SSL Re-Encryption
  5. Hybrid Approaches
  6. Performance Benchmarks
  7. Security Considerations
  8. Configuration Examples
  9. Certificate Management
  10. Monitoring SSL
  11. Best Practices

SSL Termination Strategies

Three primary strategies exist for handling SSL/TLS in proxy architectures:

  1. SSL Offloading: Proxy handles encryption/decryption, backends use plaintext
  2. SSL Passthrough: Proxy forwards encrypted traffic unchanged
  3. SSL Re-Encryption: Proxy terminates client SSL, creates new SSL to backends

Each strategy involves different performance characteristics, security implications, and operational considerations.

SSL Offloading

SSL offloading terminates HTTPS at the proxy and communicates with backends over HTTP:

Client HTTPS → [Proxy: SSL/TLS termination] → Backend HTTP

Benefits:

  • Centralized certificate management
  • Offloads CPU-intensive encryption from backends
  • Simplifies backend configuration
  • Enables SSL inspection and modification
  • Reduces backend complexity

Drawbacks:

  • Plaintext traffic between proxy and backend
  • Requires secure internal network
  • Potential security issue for sensitive data
  • Backends lose direct visibility into client certificates

Nginx SSL Offloading Configuration

upstream backend {
    server 192.168.1.100:8000;
    server 192.168.1.101:8000;
    server 192.168.1.102:8000;
    keepalive 32;
}

server {
    listen 443 ssl http2;
    server_name api.example.com;
    
    ssl_certificate /etc/nginx/ssl/api.example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/api.example.com.key;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
    ssl_session_tickets off;
    
    add_header Strict-Transport-Security "max-age=31536000" always;
    
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

server {
    listen 80;
    server_name api.example.com;
    return 301 https://$server_name$request_uri;
}
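
The configuration above restricts ciphers with HIGH:!aNULL:!MD5. A quick way to see what that string actually permits (assuming a local openssl binary) is to expand it and confirm no weak algorithms survive:

# Expand the cipher string used in the config above and
# verify nothing MD5- or NULL-based slipped through.
ciphers=$(openssl ciphers 'HIGH:!aNULL:!MD5' | tr ':' '\n')

echo "$ciphers" | head -5          # inspect the strongest entries
if echo "$ciphers" | grep -qE 'MD5|NULL'; then
    echo "weak cipher present" >&2
else
    echo "no MD5/NULL ciphers"
fi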

HAProxy SSL Offloading

global
    tune.ssl.default-dh-param 2048
    ssl-default-bind-ciphers HIGH:!aNULL:!MD5

frontend https_in
    bind *:443 ssl crt /etc/haproxy/certs/api.example.com.pem
    bind *:80
    
    mode http
    option http-server-close
    option forwardfor
    
    http-request set-header X-Forwarded-Proto https if { ssl_fc }
    http-request redirect scheme https code 301 if !{ ssl_fc }
    
    default_backend backend_servers

backend backend_servers
    balance roundrobin
    server srv1 192.168.1.100:8000 check
    server srv2 192.168.1.101:8000 check

SSL Passthrough

SSL passthrough forwards encrypted traffic unchanged, allowing backends to handle encryption:

Client HTTPS → [Proxy: Passthrough] → Backend HTTPS

Benefits:

  • No CPU overhead for encryption at proxy
  • End-to-end encryption, backends decrypt
  • Backends retain full SSL control
  • Suitable for sensitive applications

Drawbacks:

  • Cannot inspect or modify encrypted content
  • Cannot route based on SSL properties
  • Layer 4 (TCP) based; routing limited to connection-level data such as SNI
  • Requires certificate management on all backends

HAProxy SSL Passthrough Configuration

frontend https_in
    bind *:443
    mode tcp
    default_backend https_backend

backend https_backend
    balance leastconn
    mode tcp
    
    server srv1 192.168.1.100:443 check
    server srv2 192.168.1.101:443 check
    server srv3 192.168.1.102:443 check

Passthrough operates at layer 4, but HAProxy can still route on the SNI field, which is sent unencrypted in the TLS ClientHello. Certificate-based decisions (such as ssl_c_verify) require SSL termination and are not available in passthrough:

frontend https_in
    bind *:443
    mode tcp
    
    # Buffer the ClientHello so the SNI can be inspected
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    
    use_backend backend_trusted if { req.ssl_sni -i secure.example.com }
    default_backend backend_standard

backend backend_trusted
    balance roundrobin
    mode tcp
    server srv1 192.168.1.100:443 check
    server srv2 192.168.1.101:443 check

backend backend_standard
    balance roundrobin
    mode tcp
    server srv3 192.168.1.110:443 check
    server srv4 192.168.1.111:443 check

SSL Re-Encryption

SSL re-encryption terminates client SSL, then creates new SSL connections to backends:

Client HTTPS → [Proxy: SSL→SSL] → Backend HTTPS

Benefits:

  • Decouples client-facing TLS versions and ciphers from backend TLS
  • Centralized certificate management for client-facing connections
  • Can inspect and modify encrypted traffic
  • Enables advanced routing decisions
  • Backends maintain encryption with separate certs

Drawbacks:

  • Higher CPU overhead than offloading
  • Requires certificates on proxy and backends
  • More complex management
  • Additional latency from double encryption

Nginx SSL Re-Encryption

upstream backend_ssl {
    server 192.168.1.100:8443;
    server 192.168.1.101:8443;
    keepalive 32;
}

server {
    listen 443 ssl http2;
    server_name api.example.com;
    
    # Client-facing SSL
    ssl_certificate /etc/nginx/ssl/api.example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/api.example.com.key;
    ssl_protocols TLSv1.3;
    # TLS 1.3 cipher suites are not selected via ssl_ciphers; with OpenSSL
    # they are configured through ssl_conf_command:
    ssl_conf_command Ciphersuites TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
    
    location / {
        proxy_pass https://backend_ssl;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        
        # Upstream SSL
        proxy_ssl_certificate /etc/nginx/ssl/client-cert.pem;
        proxy_ssl_certificate_key /etc/nginx/ssl/client-key.pem;
        proxy_ssl_verify off;  # illustration only; enable verification in production
        proxy_ssl_session_reuse on;
    }
}

HAProxy SSL Re-Encryption

global
    tune.ssl.default-dh-param 2048

frontend https_in
    bind *:443 ssl crt /etc/haproxy/certs/api.example.com.pem
    mode http
    option http-server-close
    option forwardfor
    default_backend backend_ssl

backend backend_ssl
    balance roundrobin
    mode http
    
    # "verify none" is for illustration; use "verify required ca-file ..." in production
    server srv1 192.168.1.100:8443 check ssl verify none
    server srv2 192.168.1.101:8443 check ssl verify none
    server srv3 192.168.1.102:8443 check ssl verify none

Hybrid Approaches

Combine strategies for different traffic types:

upstream backend_plaintext {
    server 192.168.1.100:8000;
    server 192.168.1.101:8000;
}

upstream backend_ssl {
    server 192.168.1.110:8443;
    server 192.168.1.111:8443;
}

upstream backend_passthrough {
    server 192.168.1.200:443;
    server 192.168.1.201:443;
}

server {
    listen 443 ssl http2;
    server_name api.example.com;
    
    ssl_certificate /etc/nginx/ssl/api.example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/api.example.com.key;
    
    location /api {
        # SSL offloading for public API
        proxy_pass http://backend_plaintext;
    }
    
    location /secure {
        # SSL re-encryption for sensitive data
        proxy_pass https://backend_ssl;
        proxy_ssl_verify off;  # enable verification in production
    }
    
    location /legacy {
        # Passthrough for legacy systems
        # Note: passthrough requires a separate layer 4 (stream) listener;
        # it cannot be mixed into an SSL-terminating HTTP server block
    }
}

Performance Benchmarks

SSL termination performance depends on hardware, algorithms, and session resumption.

Offloading Performance

# Test SSL offloading (proxy handles encryption)
ab -n 1000 -c 100 https://proxy.example.com/

# Results (4-core server):
# Requests per second: 15,000 - 25,000 req/sec
# Time per request: 4-7ms

Passthrough Performance

# Test passthrough (no proxy encryption)
# Layer 4 routing, minimal overhead
# Results:
# Requests per second: 30,000 - 50,000 req/sec
# Time per request: 2-3ms

Re-Encryption Performance

# Test re-encryption (double SSL)
ab -n 1000 -c 100 https://proxy.example.com/

# Results:
# Requests per second: 8,000 - 12,000 req/sec
# Time per request: 8-12ms

Performance characteristics:

  • Offloading: Baseline encryption overhead (~10-20% vs plaintext)
  • Passthrough: Minimal overhead (~2-3% vs plaintext)
  • Re-Encryption: Significant overhead (~30-40% vs plaintext)
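
The overhead figures above can be turned into rough capacity estimates. The sketch below uses illustrative percentages drawn from the ranges quoted above and a hypothetical plaintext baseline:

# Rough capacity estimate from a plaintext baseline RPS and the
# approximate overhead of each strategy (illustrative percentages).
baseline=30000

for entry in "offloading:15" "passthrough:3" "re-encryption:35"; do
    name=${entry%%:*}
    overhead=${entry##*:}
    rps=$(( baseline * (100 - overhead) / 100 ))
    echo "$name: ~${rps} req/sec"
done
# → offloading: ~25500, passthrough: ~29100, re-encryption: ~19500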

Security Considerations

SSL Offloading Security

Ensure internal networks are secure:

# Example: an IPsec VTI tunnel endpoint for proxy-to-backend traffic
# (illustrative; a complete setup also requires IPsec policies and keys)
sudo ip link add vti0 type vti local 10.0.0.1 remote 192.168.1.100 key 100

Restrict backend port access with firewall rules:

sudo ufw allow proto tcp from 10.0.0.5 to 192.168.1.100 port 8000

Enable X-Forwarded headers carefully:

# Only trust proxy-added headers
set_real_ip_from 10.0.0.0/24;
real_ip_header X-Forwarded-For;

SSL Passthrough Security

Ensure backend certificates are valid:

openssl s_client -connect 192.168.1.100:443 -servername example.com

In passthrough mode the proxy never sees the decrypted session, so it cannot pin certificates on the data path. It can, however, verify backend certificates during health checks:

backend passthrough_backend
    balance roundrobin
    mode tcp
    server srv1 192.168.1.100:443 check check-ssl verify required ca-file /etc/ssl/certs/ca.pem

SSL Re-Encryption Security

Use strong client certificate authentication:

proxy_ssl_certificate /etc/nginx/ssl/client-cert.pem;
proxy_ssl_certificate_key /etc/nginx/ssl/client-key.pem;
proxy_ssl_verify on;
proxy_ssl_verify_depth 2;
proxy_ssl_trusted_certificate /etc/nginx/ssl/ca.pem;
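
To experiment with the mTLS settings above, a throwaway client certificate can be generated locally. This is illustrative only: in production these certificates would be issued by your internal CA, and the filenames here are hypothetical:

# Generate a self-signed client certificate/key pair for testing
# proxy -> backend mTLS (do NOT use self-signed certs in production).
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout client-key.pem -out client-cert.pem \
    -days 365 -subj "/CN=proxy.internal"

# Inspect the result
openssl x509 -in client-cert.pem -noout -subject -dates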

Implement certificate rotation:

# Rotate proxy certificates every 3 months
# Rotate client certificates every 1 year
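
The rotation cadence above can be automated. One common approach is an illustrative crontab entry like the following; certbot only replaces certificates that are close to expiry, so running it frequently is safe:

# crontab entry: attempt renewal twice a day, reloading Nginx
# only when a certificate was actually replaced
0 3,15 * * * certbot renew --quiet --deploy-hook "nginx -s reload"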

Configuration Examples

Complete Offloading Setup

upstream backend {
    server 192.168.1.100:8000;
    server 192.168.1.101:8000;
    server 192.168.1.102:8000;
    keepalive 32;
}

server {
    listen 443 ssl http2;
    server_name api.example.com;
    
    ssl_certificate /etc/nginx/ssl/api.example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/api.example.com.key;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5:!3DES:!DES:!RC4:!IDEA:!SEED:!aDSS:!SRP:!PSK;
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
    ssl_session_tickets off;
    ssl_stapling on;
    ssl_stapling_verify on;
    # OCSP stapling requires a resolver to reach the CA's OCSP responder
    resolver 1.1.1.1 8.8.8.8 valid=300s;
    resolver_timeout 5s;
    
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
    add_header X-Frame-Options "DENY" always;
    add_header X-Content-Type-Options "nosniff" always;
    
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        
        proxy_connect_timeout 5s;
        proxy_send_timeout 30s;
        proxy_read_timeout 30s;
    }
}

server {
    listen 80;
    server_name api.example.com;
    return 301 https://$server_name$request_uri;
}

Certificate Management

Automate certificate updates:

#!/bin/bash
# Certificate renewal script

for domain in api.example.com blog.example.com cdn.example.com; do
    certbot renew --cert-name "$domain" --quiet
done

# Reload Nginx to pick up renewed certificates
nginx -s reload

Monitoring SSL

Check SSL certificate expiration:

for domain in api.example.com blog.example.com; do
    echo "=== $domain ==="
    openssl s_client -connect $domain:443 -servername $domain < /dev/null 2>/dev/null | \
    openssl x509 -noout -dates
done
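
For alerting, it helps to convert the notAfter date printed above into days remaining. A small helper, assuming GNU date (Linux):

# Days until an expiry date as printed by:
#   openssl x509 -noout -enddate | cut -d= -f2
days_until() {
    local expiry_epoch now_epoch
    expiry_epoch=$(date -d "$1" +%s)
    now_epoch=$(date +%s)
    echo $(( (expiry_epoch - now_epoch) / 86400 ))
}

# Example alert threshold ($expiry holding the notAfter value):
# [ "$(days_until "$expiry")" -lt 30 ] && echo "ALERT: renew soon"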

Monitor SSL session reuse through the access log. The $ssl_session_reused variable is only populated on SSL-terminating listeners, so record it there rather than on a plaintext status port:

# In the http context:
log_format ssl_stats '$remote_addr $ssl_protocol $ssl_cipher reused=$ssl_session_reused';

server {
    listen 443 ssl;
    server_name api.example.com;
    access_log /var/log/nginx/ssl_stats.log ssl_stats;
    # ... ssl_certificate and proxy_pass configuration as above ...
}

Best Practices

  1. Use SSL Offloading for most cases: Simple, performant, manageable
  2. Implement HSTS: Instruct browsers to always connect over HTTPS
  3. Enable OCSP Stapling: Reduce client-side certificate validation delays
  4. Use TLS 1.2+: Disable older protocols
  5. Enable Session Resumption: Reduce reconnection overhead
  6. Rotate Certificates: Every 90 days or less
  7. Monitor Certificate Expiry: Alert before expiration
  8. Use Strong Ciphers: Disable weak ciphers and algorithms
  9. Implement Perfect Forward Secrecy: Use ephemeral key exchange
  10. Secure Internal Networks: Protect plaintext backend traffic

Conclusion

Choose SSL termination strategies based on security requirements, performance needs, and operational constraints. SSL offloading provides the best balance for most deployments, SSL passthrough for scenarios where backends must control encryption, and SSL re-encryption for cases requiring both client and backend encryption. Proper configuration, certificate management, and security hardening ensure reliable, secure encrypted communication.