Server-Side Web Vitals Optimization
Core Web Vitals (LCP, INP, and CLS; INP replaced FID in March 2024) measure real-user experience and factor into Google search rankings. Server-side optimizations (reducing TTFB, configuring caching, enabling compression, and prioritizing critical resources) often deliver the biggest improvements. This guide covers server-level techniques on Linux to boost Core Web Vitals scores.
Prerequisites
- Nginx or Apache web server
- Linux (Ubuntu/Debian or CentOS/Rocky)
- Root or sudo access
- Basic understanding of HTTP caching headers
Measuring Current Performance
Before making changes, baseline your current metrics:
# Install Google's Lighthouse CLI
npm install -g lighthouse
# Run a Lighthouse audit (headless)
lighthouse https://www.yourdomain.com \
--chrome-flags="--headless" \
--output html \
--output-path ./lighthouse-report.html
# Measure TTFB with curl
curl -o /dev/null -s -w "
DNS: %{time_namelookup}s
Connect: %{time_connect}s
TLS handshake: %{time_appconnect}s
Pre-xfer: %{time_pretransfer}s
StartTransfer (TTFB): %{time_starttransfer}s
Total: %{time_total}s
Size: %{size_download} bytes
" https://www.yourdomain.com
# Multiple runs to get a consistent baseline
for i in {1..5}; do
curl -o /dev/null -s -w "%{time_starttransfer}\n" https://www.yourdomain.com
done
Target TTFB: Google's "good" threshold is under 800 ms, but a well-tuned origin should land under 200 ms (under 100 ms is excellent).
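Single curl runs are noisy, so it helps to reduce the loop above to one baseline number. A small awk sketch (the sample file here is fabricated; in practice, redirect the loop's output into it):

```shell
# Average TTFB samples (one per line, in seconds, as printed by the curl loop).
printf '0.120\n0.135\n0.128\n0.141\n0.126\n' > /tmp/ttfb.txt   # sample data
avg=$(awk '{ sum += $1 } END { printf "%.3f", sum / NR }' /tmp/ttfb.txt)
echo "Average TTFB: ${avg}s"
```

Record this number before and after each change so you can attribute improvements to specific tweaks.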
Reducing TTFB
TTFB (Time to First Byte) is the most impactful server-side metric:
Enable FastCGI caching (PHP/Nginx):
# In http {} block
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=MYAPP:100m \
max_size=1g inactive=60m use_temp_path=off;
fastcgi_cache_key "$scheme$request_method$host$request_uri";  # required; has no default
server {
fastcgi_cache MYAPP;
fastcgi_cache_valid 200 60m;
fastcgi_cache_valid 404 1m;
fastcgi_cache_use_stale error timeout updating http_500 http_503;
fastcgi_cache_lock on;
# Skip cache for logged-in users
fastcgi_cache_bypass $cookie_session $cookie_logged_in;
fastcgi_no_cache $cookie_session $cookie_logged_in;
# Show cache status in response headers (remove in production)
add_header X-Cache-Status $upstream_cache_status;
location ~ \.php$ {
fastcgi_pass unix:/run/php/php8.2-fpm.sock;
fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
include fastcgi_params;
}
}
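One operational gap with this setup: after a deploy, stale pages can sit in the cache until the TTL or inactive timer expires. A blunt but effective fix is to drop the cache entries on release. The sketch below runs against a temp directory for safety (an assumption for demo purposes); in production, point CACHE_DIR at the real path from fastcgi_cache_path:

```shell
# Sketch: purge the FastCGI cache after a deploy so users never see stale
# pages beyond the TTL. Demo uses a temp dir; in production set
# CACHE_DIR=/var/cache/nginx (the path from fastcgi_cache_path).
CACHE_DIR=$(mktemp -d)                       # stand-in for /var/cache/nginx
mkdir -p "$CACHE_DIR/a/bc"                   # mimic the levels=1:2 layout
touch "$CACHE_DIR/a/bc/0123456789abcdef"     # a cached entry
find "$CACHE_DIR" -type f -delete            # remove entries, keep directories
remaining=$(find "$CACHE_DIR" -type f | wc -l)
echo "cache entries left: $remaining"
```

Deleting files rather than the directory itself means nginx's cache manager never loses its working directory.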
Tune PHP-FPM for faster response:
; /etc/php/8.2/fpm/pool.d/www.conf
pm = dynamic
pm.max_children = 50
pm.start_servers = 10
pm.min_spare_servers = 5
pm.max_spare_servers = 20
pm.max_requests = 500
; Enable OPcache
; /etc/php/8.2/fpm/conf.d/10-opcache.ini
opcache.enable = 1
opcache.memory_consumption = 256
opcache.max_accelerated_files = 20000
opcache.revalidate_freq = 0     ; Ignored while validate_timestamps = 0
opcache.validate_timestamps = 0 ; Never stat scripts in production; reload PHP-FPM on deploy
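A reasonable starting point for pm.max_children is the RAM you can dedicate to PHP-FPM divided by the average worker footprint. A back-of-the-envelope sketch (both input numbers are assumptions; measure them on your own host):

```shell
# Size pm.max_children from memory. Measure the real average worker RSS with:
#   ps -o rss= -C php-fpm8.2 | awk '{ sum += $1; n++ } END { print sum / n / 1024 }'
avail_mb=4096    # MB reserved for PHP-FPM (assumption)
worker_mb=64     # average worker RSS in MB (assumption)
max_children=$((avail_mb / worker_mb))
echo "pm.max_children = $max_children"
```

Setting this too high invites swapping under load, which hurts TTFB far more than a few queued requests would.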
Compression Configuration
Compression reduces transfer size, improving LCP for text resources:
# Nginx gzip configuration
gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_min_length 256;
gzip_types
application/atom+xml
application/javascript
application/json
application/ld+json
application/manifest+json
application/rss+xml
application/vnd.geo+json
application/vnd.ms-fontobject
application/wasm
application/x-font-ttf
application/x-web-app-manifest+json
application/xhtml+xml
application/xml
font/opentype
image/bmp
image/svg+xml
image/x-icon
text/cache-manifest
text/css
text/javascript
text/plain
text/vcard
text/vnd.rim.location.xloc
text/vtt
text/x-component
text/x-cross-domain-policy;
# Serve pre-compressed files (better performance)
gzip_static on;
Enable Brotli for better compression (requires nginx-full or brotli module):
sudo apt install -y libnginx-mod-http-brotli-filter libnginx-mod-http-brotli-static
brotli on;
brotli_static on;
brotli_comp_level 6;
brotli_types text/plain text/css application/javascript application/json
image/svg+xml application/xml font/opentype;
Pre-compress static assets:
# Pre-compress CSS/JS files
find /var/www/html \( -name "*.css" -o -name "*.js" \) -print0 |
while IFS= read -r -d '' f; do
gzip -k -9 "$f" # .gz file
brotli -k -q 11 "$f" # .br file (requires brotli package)
done
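To sanity-check the savings before rolling this out, compress a sample asset and compare sizes; repetitive text like CSS should shrink dramatically. A sketch using generated sample data (the brotli step is skipped if the package is absent):

```shell
# Compare original vs compressed size for a repetitive CSS-like sample.
f=/tmp/sample-$$.css
for _ in $(seq 200); do printf 'body { margin: 0; padding: 0; } '; done > "$f"
gzip -k -9 "$f"
orig=$(wc -c < "$f")
gz=$(wc -c < "$f.gz")
echo "original: ${orig} bytes, gzip: ${gz} bytes"
if command -v brotli >/dev/null; then
  brotli -k -q 11 "$f"
  echo "brotli: $(wc -c < "$f.br") bytes"
fi
```

Real-world minified CSS/JS typically compresses 70-85% with gzip -9, and brotli -q 11 usually shaves a further 10-20% off the gzip size.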
Caching Strategies
Proper cache headers improve LCP by serving assets from browser cache:
# Static assets - long cache (use content hashing in filenames)
location ~* \.(css|js|woff2?|ttf|eot|ico|svg|webp|avif|png|jpg|jpeg|gif)$ {
expires 1y;
add_header Cache-Control "public, immutable";
access_log off;
}
# HTML - short or no cache (content changes frequently)
location ~* \.html$ {
expires 1h;
add_header Cache-Control "public, must-revalidate";
}
# API responses - no client cache (dynamic)
location /api/ {
add_header Cache-Control "no-store, no-cache, must-revalidate";
add_header Pragma "no-cache";
}
# Service worker - always revalidate
location = /sw.js {
add_header Cache-Control "no-cache";
expires 0;
}
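A quick way to confirm each class of URL gets the intended policy is to inspect the Cache-Control header on a live response. A small parsing sketch (the commented curl lines assume your own domain):

```shell
# Extract the Cache-Control value from a raw response-header blob.
cache_control() {
  printf '%s\n' "$1" | awk -F': ' 'tolower($1) == "cache-control" { sub(/\r$/, "", $2); print $2 }'
}
# Live checks (set your domain):
# cache_control "$(curl -sI https://www.yourdomain.com/css/app.css)"  # expect: public, immutable
# cache_control "$(curl -sI https://www.yourdomain.com/)"             # expect: public, must-revalidate
```

Run it against one URL from each location block above; a mismatch usually means another add_header in a more specific location is overriding the policy.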
Varnish for dynamic content caching:
sudo apt install -y varnish
# Configure Varnish to cache pages for 5 minutes
cat > /etc/varnish/default.vcl << 'EOF'
vcl 4.1;
backend default {
.host = "127.0.0.1";
.port = "8080"; # Nginx/Apache backend
}
sub vcl_recv {
# Don't cache for logged-in users
if (req.http.Cookie ~ "session|auth") {
return(pass);
}
# Only cache GET/HEAD
if (req.method != "GET" && req.method != "HEAD") {
return(pass);
}
}
sub vcl_backend_response {
set beresp.ttl = 5m;
set beresp.grace = 1h;
}
EOF
# By default Varnish listens on 6081. To serve port 80, change the -a flag in
# the systemd unit (sudo systemctl edit --full varnish) and move Nginx to 8080.
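Once Varnish is taking traffic, the number that matters is the hit rate. A sketch computing it from varnishstat counters (the two values below are placeholder samples; the commented lines pull the real ones):

```shell
# Hit rate = hits / (hits + misses). Pull live counters with:
#   hits=$(varnishstat -1 -f MAIN.cache_hit   | awk '{print $2}')
#   misses=$(varnishstat -1 -f MAIN.cache_miss | awk '{print $2}')
hits=9500; misses=500    # sample counters (assumption)
rate=$(awk -v h="$hits" -v m="$misses" 'BEGIN { printf "%.1f", 100 * h / (h + m) }')
echo "cache hit rate: ${rate}%"
```

Anything below roughly 80% on a content site suggests the vcl_recv cookie check is passing more traffic than it should.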
Resource Prioritization
Server-side resource hints reduce LCP by preloading critical assets:
# Add Link preload headers for critical resources
location = / {
add_header Link "</css/critical.css>; rel=preload; as=style";
add_header Link "</fonts/main.woff2>; rel=preload; as=font; crossorigin";
add_header Link "</img/hero.webp>; rel=preload; as=image";
}
HTTP/2 Server Push (nginx versions before 1.25.1 only):
location = / {
http2_push /css/critical.css;
http2_push /js/app.js;
}
Note: HTTP/2 Server Push is effectively retired: Chrome removed support in 2022, and nginx dropped the http2_push directive in 1.25.1. Prefer 103 Early Hints (see the Early Hints guide).
Connection Optimization
Reduce connection and TLS handshake overhead; this mainly lowers TTFB and speeds up LCP:
# Keep connections alive longer
keepalive_timeout 65;
keepalive_requests 1000; # default since nginx 1.19.10; older builds default to 100
# TCP optimizations
tcp_nopush on;
tcp_nodelay on;
# Reduce SSL handshake overhead
ssl_session_cache shared:SSL:50m;
ssl_session_timeout 1d;
ssl_session_tickets off; # Disabled for forward secrecy
# OCSP stapling reduces TLS overhead
ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.8.8 1.1.1.1 valid=300s;
resolver_timeout 5s;
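With the session cache above in place, repeat connections should resume TLS sessions rather than performing full handshakes. A verification sketch using openssl's -reconnect mode, which reconnects five times and prints "Reused, ..." for each resumed session versus "New, ..." for a full handshake:

```shell
# Count resumed TLS sessions in s_client -reconnect output.
count_reused() { grep -c '^Reused'; }
# Live check (set your domain); with session caching working, most of the
# five reconnects should report Reused:
# echo | openssl s_client -connect www.yourdomain.com:443 -reconnect 2>/dev/null | count_reused
```

Note that -reconnect behaves inconsistently with some TLS 1.3 stacks, so a low count there warrants a second check with a browser's network panel before assuming the cache is broken.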
Kernel-level TCP optimization:
# /etc/sysctl.conf - TCP performance tuning
sudo tee -a /etc/sysctl.conf << 'EOF'
# Increase buffer sizes
net.core.rmem_max = 134217728
net.core.wmem_max = 134217728
net.ipv4.tcp_rmem = 4096 87380 67108864
net.ipv4.tcp_wmem = 4096 65536 67108864
# BBR congestion control (better throughput)
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr
# Faster connection reuse
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 15
EOF
sudo sysctl -p
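After reloading, confirm the settings actually took effect; BBR in particular silently falls back if the tcp_bbr module is unavailable. A quick verification sketch:

```shell
# Read back the live values (sysctl -n prints just the value).
cc=$(sysctl -n net.ipv4.tcp_congestion_control 2>/dev/null || echo unknown)
qdisc=$(sysctl -n net.core.default_qdisc 2>/dev/null || echo unknown)
echo "congestion control: $cc, default qdisc: $qdisc"
# If $cc is not bbr, check which algorithms the kernel actually offers:
# cat /proc/sys/net/ipv4/tcp_available_congestion_control
```

On stock Ubuntu/Debian kernels tcp_bbr is built as a module and loads automatically when the sysctl is set; custom kernels may need it compiled in.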
Database Query Optimization
Slow database queries are a common TTFB bottleneck:
# Enable MySQL slow query log
mysql -u root -p -e "SET GLOBAL slow_query_log = 1;
SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';
SET GLOBAL long_query_time = 0.5;"
# Analyze slow queries
mysqldumpslow -s t -t 20 /var/log/mysql/slow.log
# PostgreSQL slow queries
# In postgresql.conf:
# log_min_duration_statement = 500 # Log queries > 500ms
# Then check: tail -f /var/log/postgresql/postgresql-*.log
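For a faster triage than reading the full log, you can bucket slow-log entries by duration with awk. The sketch below runs against fabricated sample lines that mimic MySQL's "# Query_time:" headers; in production, point the awk at /var/log/mysql/slow.log:

```shell
# Count slow-log entries above a 1-second threshold.
printf '# Query_time: 0.6 Lock_time: 0.0\n# Query_time: 1.2 Lock_time: 0.0\n# Query_time: 2.5 Lock_time: 0.1\n' > /tmp/slow.sample
over1s=$(awk '/^# Query_time:/ && $3 > 1 { n++ } END { print n + 0 }' /tmp/slow.sample)
echo "queries slower than 1s: $over1s"
```

If most slow entries cluster around one or two statements, mysqldumpslow's grouped output (shown above) will identify them; fix those first, since a single unindexed query on the hot path dominates TTFB.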
Add connection pooling to reduce overhead:
# Install PgBouncer for PostgreSQL connection pooling
sudo apt install -y pgbouncer
# Configure PgBouncer to pool connections
cat > /etc/pgbouncer/pgbouncer.ini << 'EOF'
[databases]
myapp = host=127.0.0.1 dbname=myapp
[pgbouncer]
listen_addr = 127.0.0.1
; Postgres itself is on 5432; the pooler takes its conventional port 6432
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 200
default_pool_size = 25
EOF
sudo systemctl enable --now pgbouncer
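The application then connects to the pooler instead of Postgres directly. A sketch assuming the app reads a DATABASE_URL and that PgBouncer listens on its conventional port 6432 (user, password, and database name below are placeholders):

```shell
# Swap the port in the connection string: 5432 (Postgres) -> 6432 (PgBouncer).
export DATABASE_URL="postgres://appuser:secret@127.0.0.1:6432/myapp"
echo "$DATABASE_URL"
# Verify the pooler answers, then inspect pool state via the admin database:
# psql "$DATABASE_URL" -c 'SELECT 1;'
# psql -h 127.0.0.1 -p 6432 -U appuser pgbouncer -c 'SHOW POOLS;'
```

With pool_mode = transaction, avoid session-level features (prepared statements by name, advisory locks, SET commands) unless your client library is pooler-aware.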
Monitoring Web Vitals
# Set up Real User Monitoring (RUM) data collection
# Add to your HTML pages:
# <script>
# new PerformanceObserver((list) => {
# list.getEntries().forEach(e => fetch('/vitals', {method:'POST', body: JSON.stringify(e)}));
# }).observe({type: 'largest-contentful-paint', buffered: true});
# </script>
# Monitor TTFB from multiple locations using synthetic monitoring
# Use Prometheus + Node Exporter + blackbox_exporter
sudo apt install -y prometheus-blackbox-exporter
# Configure to probe your site every minute
cat > /etc/prometheus/blackbox.yml << 'EOF'
modules:
http_2xx:
prober: http
timeout: 5s
http:
valid_http_versions: ["HTTP/1.1", "HTTP/2.0"]  # blackbox_exporter does not probe HTTP/3
fail_if_not_ssl: true
EOF
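The blackbox module alone probes nothing; Prometheus has to drive it. A sketch of the corresponding scrape job, to be merged by hand into the scrape_configs section of /etc/prometheus/prometheus.yml (9115 is blackbox_exporter's default port; the job name and target are assumptions):

```shell
# Print the scrape job to merge into prometheus.yml's scrape_configs list.
cat << 'EOF'
  - job_name: 'webvitals_probe'
    metrics_path: /probe
    params:
      module: [http_2xx]
    static_configs:
      - targets: ['https://www.yourdomain.com']
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: 127.0.0.1:9115   # blackbox exporter address
EOF
```

The relabeling moves the site URL into the ?target= parameter and points the actual scrape at the exporter, which is the standard blackbox_exporter pattern.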
Troubleshooting
High TTFB despite caching:
# Check if cache is actually being used
curl -I https://yourdomain.com | grep -i 'x-cache\|cf-cache'
# Check backend response time (add $upstream_response_time to your log_format first)
tail -f /var/log/nginx/access.log | grep -oP '"[0-9.]+"$'
# Profile a PHP-FPM worker with strace (the master process handles no requests)
strace -p $(pgrep -f 'php-fpm: pool' | head -1) -e trace=network 2>&1 | head -50
LCP image loading slowly:
# Check image size
identify /var/www/html/hero.jpg | awk '{print $3}' # Dimensions
ls -lh /var/www/html/hero.jpg # File size
# Verify preload header is being sent
curl -I https://yourdomain.com | grep -i link
Compression not working:
curl -H "Accept-Encoding: gzip, br" -I https://yourdomain.com | \
grep -i 'content-encoding'
# Should show: content-encoding: gzip (or br)
Conclusion
Server-side Web Vitals optimization centers on three pillars: fast TTFB through caching and PHP-FPM tuning, reduced transfer sizes via Brotli/gzip compression and image optimization, and proper cache headers to eliminate repeat downloads. Start with TTFB measurement, implement FastCGI caching, enable Brotli compression, and set immutable cache headers on versioned assets — these four changes alone typically move Lighthouse performance scores from 50-70 to 80-95.


