Slow Website: Performance Analysis

Introduction

Website performance directly impacts user experience, conversion rates, and search engine rankings. A slow website frustrates users, increases bounce rates, and costs businesses revenue. When website performance degrades, identifying the bottleneck quickly - whether it's server resources, database queries, network issues, or application code - is critical for maintaining user satisfaction.

This comprehensive guide provides system administrators and developers with systematic methodologies for diagnosing website performance issues. You'll learn to use command-line tools, analyze web server logs, profile database performance, and identify the root causes of slow page loads, enabling you to optimize your web infrastructure effectively.

Website slowness can originate from numerous sources: overloaded servers, inefficient database queries, large unoptimized assets, misconfigured caching, or network latency. This guide teaches you to methodically eliminate possibilities and pinpoint exact performance bottlenecks using proven diagnostic techniques.

Understanding Website Performance

Performance Metrics

Key performance indicators:

  • Time to First Byte (TTFB): Server response time
  • Page Load Time: Complete page render time
  • Time to Interactive: When page becomes usable
  • Server Response Time: Backend processing time
  • Database Query Time: SQL execution time
  • Network Latency: Connection delays

Performance Baselines

Acceptable thresholds (a quick check sketch follows this list):

  • TTFB: < 200ms (excellent), < 500ms (good)
  • Page Load: < 2s (excellent), < 3s (acceptable)
  • Database Queries: < 100ms (most), < 1s (complex)
  • Server Response: < 500ms for dynamic content
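
As a quick sanity check against these baselines, TTFB and total load time can be measured with curl and compared to the thresholds. A minimal bash sketch (the URL and the 0.5s/3s cut-offs are placeholders to adjust):

# Compare measured TTFB and total time against the baselines above
URL="https://example.com"
read -r TTFB TOTAL < <(curl -o /dev/null -s -w "%{time_starttransfer} %{time_total}" "$URL")
awk -v ttfb="$TTFB" -v total="$TOTAL" 'BEGIN {
    printf "TTFB: %.3fs (%s)   Total: %.3fs (%s)\n",
        ttfb,  (ttfb  < 0.5 ? "OK" : "slow"),
        total, (total < 3.0 ? "OK" : "slow")
}'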

Initial Performance Assessment

Quick Performance Check

# Test website response time
time curl -I https://example.com

# With detailed timing
curl -w "@-" -o /dev/null -s https://example.com << 'EOF'
    time_namelookup:  %{time_namelookup}s\n
       time_connect:  %{time_connect}s\n
    time_appconnect:  %{time_appconnect}s\n
   time_pretransfer:  %{time_pretransfer}s\n
      time_redirect:  %{time_redirect}s\n
 time_starttransfer:  %{time_starttransfer}s (TTFB)\n
                    ----------\n
         time_total:  %{time_total}s\n
EOF

# Check server load
uptime
top -bn1 | head -15

# Web server connections
ss -tan | grep :80 | wc -l
ss -tan | grep :443 | wc -l

# Check for slow queries
tail -100 /var/log/mysql/slow-query.log

# Disk I/O
iostat -x 1 3

Step 1: Web Server Analysis

Apache Performance

# Apache status
systemctl status apache2

# Current connections
apachectl status
# or via URL
curl http://localhost/server-status
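
If server-status returns a 404 or is forbidden, mod_status is probably not enabled or no matching <Location> block grants access. A minimal sketch for a Debian/Ubuntu layout (paths and the local-only restriction are assumptions to adapt):

# Enable mod_status and allow /server-status from localhost only
a2enmod status
cat > /etc/apache2/conf-available/status-local.conf << 'EOF'
ExtendedStatus On
<Location "/server-status">
    SetHandler server-status
    Require local
</Location>
EOF
a2enconf status-local
systemctl reload apache2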

# Connection count
ss -tan | grep :80 | wc -l

# Apache processes
ps aux | grep apache2 | wc -l

# Apache error log
tail -100 /var/log/apache2/error.log

# Find slow requests (assumes the request duration is the last logged field; see the LogFormat sketch below)
tail -10000 /var/log/apache2/access.log | \
    awk '{print $NF, $7}' | \
    sort -rn | \
    head -20
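
The slow-request pipeline above only works if the request duration is actually logged; the default combined format does not include it. A sketch of a LogFormat that appends %D (microseconds) as the last field, assuming you can edit the vhost or apache2.conf:

# In the vhost (or apache2.conf): combined format plus request duration in microseconds
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %D" combined_timing
CustomLog ${APACHE_LOG_DIR}/access.log combined_timing

# Test and reload after editing
apachectl -t && systemctl reload apache2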

# Request rate
tail -10000 /var/log/apache2/access.log | \
    awk '{print $4}' | \
    cut -d: -f1-3 | \
    uniq -c | \
    tail -20

# Top requesting IPs
awk '{print $1}' /var/log/apache2/access.log | \
    sort | uniq -c | \
    sort -rn | \
    head -20

# Configuration check
apache2ctl -t
apache2ctl -S

Nginx Performance

# Nginx status
systemctl status nginx

# Active connections
curl http://localhost/nginx_status
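
The nginx_status endpoint only exists if the stub_status module (built into most distribution packages) is exposed through a location block. A minimal sketch, placed inside a server block and restricted to localhost:

# Inside a server block
location = /nginx_status {
    stub_status;
    allow 127.0.0.1;
    deny all;
}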

# Connection count
ss -tan | grep :80 | wc -l

# Worker processes
ps aux | grep "nginx: worker" | wc -l

# Error log
tail -100 /var/log/nginx/error.log

# Slow requests (assumes $request_time is the last logged field; see the log_format sketch below)
tail -10000 /var/log/nginx/access.log | \
    awk '{print $NF, $7}' | \
    sort -rn | \
    head -20

# Request timing analysis
awk '{print $NF}' /var/log/nginx/access.log | \
    awk '{sum+=$1; count++} END {print "Avg:", sum/count "s"}'

# Upstream response times
grep "upstream_response_time" /var/log/nginx/access.log | \
    tail -100
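
Both the slow-request pipeline and the upstream grep above assume the timing variables are in the access log, which the default combined format omits. A sketch of a log_format that appends them, with $request_time last so the $NF pipelines keep working (goes in the http block; reload after editing):

# In the http block: combined format plus upstream and request timing
log_format timing '$remote_addr - $remote_user [$time_local] "$request" '
                  '$status $body_bytes_sent "$http_referer" "$http_user_agent" '
                  '$upstream_response_time $request_time';
access_log /var/log/nginx/access.log timing;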

# Configuration test
nginx -t
nginx -T  # Full config dump

Step 2: Application Performance

PHP-FPM Analysis

# PHP-FPM status
systemctl status php7.4-fpm

# Pool status
curl http://localhost/fpm-status
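
The fpm-status page only responds if pm.status_path is set in the pool configuration and the web server routes that path to PHP-FPM. A minimal sketch for an Nginx front end (the socket path is an assumption to match your setup):

# In /etc/php/7.4/fpm/pool.d/www.conf:
#   pm.status_path = /fpm-status

# In the Nginx server block, restricted to localhost:
location = /fpm-status {
    allow 127.0.0.1;
    deny all;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/run/php/php7.4-fpm.sock;
}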

# Active processes
ps aux | grep php-fpm | wc -l

# Slow PHP scripts
tail -100 /var/log/php7.4-fpm-slow.log

# PHP errors
tail -100 /var/log/php7.4-fpm.log

# Analyze slow requests
cat /var/log/php7.4-fpm-slow.log | \
    grep "script_filename" | \
    sort | uniq -c | \
    sort -rn

# PHP-FPM configuration
cat /etc/php/7.4/fpm/pool.d/www.conf | \
    grep -E "pm.max_children|pm.start_servers|pm.min_spare|pm.max_spare"

# Test PHP execution time
time php -r "sleep(1); echo 'test';"

# Check PHP modules
php -m

Application Profiling

# Enable the slow log in the FPM pool config (e.g. /etc/php/7.4/fpm/pool.d/www.conf), not php.ini
slowlog = /var/log/php7.4-fpm-slow.log
request_slowlog_timeout = 5s

# Install Xdebug profiler
apt install php-xdebug

# Enable in php.ini (Xdebug 2 settings; Xdebug 3 uses xdebug.mode=profile and xdebug.output_dir)
xdebug.profiler_enable=1
xdebug.profiler_output_dir="/tmp/xdebug"

# Analyze profiles with webgrind or KCacheGrind

# New Relic APM integration
# Install New Relic PHP agent
echo 'deb http://apt.newrelic.com/debian/ newrelic non-free' | \
    tee /etc/apt/sources.list.d/newrelic.list
apt update && apt install newrelic-php5

Step 3: Database Performance

MySQL Query Analysis

# Check MySQL status
systemctl status mysql

# Active connections
mysql -e "SHOW PROCESSLIST;"

# Slow queries
mysql -e "SHOW VARIABLES LIKE 'slow_query%';"

# Enable slow query log
mysql -e "SET GLOBAL slow_query_log = 'ON';"
mysql -e "SET GLOBAL long_query_time = 1;"

# Analyze slow query log
tail -100 /var/log/mysql/slow-query.log

# Use mysqldumpslow
mysqldumpslow -s t -t 10 /var/log/mysql/slow-query.log

# Check table locks
mysql -e "SHOW OPEN TABLES WHERE In_use > 0;"

# InnoDB status
mysql -e "SHOW ENGINE INNODB STATUS\G"

# Query cache stats (MySQL 5.7 and earlier; the query cache was removed in MySQL 8.0)
mysql -e "SHOW STATUS LIKE 'Qcache%';"

# Connection stats
mysql -e "SHOW STATUS LIKE 'Threads%';"
mysql -e "SHOW STATUS LIKE 'Connections';"

Database Optimization

# Analyze specific query
mysql -e "EXPLAIN SELECT * FROM table WHERE condition;"

# Check indexes
mysql -e "SHOW INDEX FROM table_name;"

# Table optimization
mysql -e "OPTIMIZE TABLE table_name;"

# Analyze table
mysql -e "ANALYZE TABLE table_name;"

# Check fragmentation
mysql -e "SELECT TABLE_NAME,
    DATA_FREE/1024/1024 AS FragmentedMB
    FROM information_schema.TABLES
    WHERE TABLE_SCHEMA = 'database_name'
    AND DATA_FREE > 0
    ORDER BY FragmentedMB DESC;"

# Memory usage
mysql -e "SHOW VARIABLES LIKE '%buffer%';"

Step 4: Server Resource Analysis

CPU Analysis

# CPU usage
top -bn1 | grep "Cpu(s)"
mpstat -P ALL 1 3

# Top CPU processes
ps aux --sort=-%cpu | head -10

# CPU wait time
iostat -c 1 3

# Per-process CPU
pidstat -u 1 5

# Apache/Nginx CPU usage
ps aux | grep -E "apache2|nginx" | \
    awk '{sum+=$3} END {print "CPU:", sum "%"}'

Memory Analysis

# Memory usage
free -h

# Top memory consumers
ps aux --sort=-%mem | head -10

# Swap usage
swapon --show
vmstat 1 5

# Cache status
cat /proc/meminfo | grep -E "Cached|Buffers"

# PHP-FPM memory
ps aux | grep php-fpm | \
    awk '{sum+=$6} END {print "Memory:", sum/1024 "MB"}'
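
The same per-worker figures give a rough ceiling for pm.max_children (tuned in the PHP-FPM optimization section later): available memory divided by the average worker RSS. A sketch, assuming the workers are named php-fpm7.4 as on Debian/Ubuntu:

# Rough pm.max_children ceiling = MemAvailable / average php-fpm worker RSS (master process included)
ps --no-headers -o rss -C php-fpm7.4 | \
    awk -v avail="$(awk '/MemAvailable/ {print $2}' /proc/meminfo)" \
        '{sum+=$1; n++} END {if (n) printf "avg worker: %.0f KB, max_children ceiling: %d\n", sum/n, avail/(sum/n)}'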

Disk I/O Analysis

# I/O statistics
iostat -x 1 5

# Per-process I/O
iotop -o

# Disk utilization
df -h

# Find I/O intensive processes
pidstat -d 1 5

# Check for high I/O wait (prints the 'wa' value; field position can vary between top versions)
top -bn1 | grep "Cpu(s)" | awk '{print $10}'

Step 5: Network Analysis

Connection Analysis

# Active connections
ss -s
netstat -an | awk '{print $6}' | sort | uniq -c

# Connections by IP
netstat -ntu | awk '{print $5}' | \
    cut -d: -f1 | sort | uniq -c | \
    sort -rn | head -20

# Connection states
ss -tan state established | wc -l
ss -tan state time-wait | wc -l

# Network throughput
iftop -i eth0
nethogs

# Bandwidth usage
vnstat -i eth0

Latency Testing

# Test latency to website
ping -c 10 example.com | tail -1

# Measure DNS lookup
time nslookup example.com

# Full connection test
curl -w "@-" -o /dev/null -s https://example.com << 'EOF'
time_namelookup:    %{time_namelookup}s\n
time_connect:       %{time_connect}s\n
time_appconnect:    %{time_appconnect}s\n
time_pretransfer:   %{time_pretransfer}s\n
time_starttransfer: %{time_starttransfer}s\n
time_total:         %{time_total}s\n
EOF

# MTR to check network path
mtr -c 100 example.com

Step 6: Caching Analysis

Web Server Cache

# Check Nginx cache
du -sh /var/cache/nginx/*

# Cache hit ratio (requires the cache status in the log format; see the sketch below)
grep -E "HIT|MISS" /var/log/nginx/access.log | \
    awk '{print $NF}' | sort | uniq -c
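
The HIT/MISS grep only finds matches if the cache status is logged. A sketch that records $upstream_cache_status (and exposes it as a response header for the curl -I check below), assuming proxy_cache or fastcgi_cache is configured:

# In the http block: log the cache status as the last field
log_format cache '$remote_addr [$time_local] "$request" $status $upstream_cache_status';
access_log /var/log/nginx/cache.log cache;

# In the server or location block: surface it to clients
add_header X-Cache-Status $upstream_cache_status;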

# Varnish stats (if installed)
varnishstat
varnishlog

# Check cache headers
curl -I https://example.com | grep -i cache

Application Cache

# Redis status
redis-cli info stats
redis-cli --stat

# Memcached stats
echo stats | nc localhost 11211

# OPcache status (PHP) - note the CLI has its own OPcache; query a web-served script to see FPM's cache
php -r "print_r(opcache_get_status());"

# File cache size
du -sh /var/cache/application/*

Step 7: Load Testing

Apache Bench

# Install ab
apt install apache2-utils

# Basic load test
ab -n 1000 -c 10 https://example.com/

# With keep-alive
ab -k -n 1000 -c 10 https://example.com/

# POST request test
ab -n 100 -c 10 -p post.txt -T "application/x-www-form-urlencoded" https://example.com/api

# Save results
ab -n 1000 -c 50 https://example.com/ > ab-results.txt

Using wrk

# Install wrk
apt install wrk

# Basic test
wrk -t12 -c400 -d30s https://example.com/

# With custom script
wrk -t4 -c100 -d30s -s script.lua https://example.com/

# Long duration test
wrk -t12 -c400 -d5m https://example.com/

Solutions and Optimization

Web Server Optimization

Apache tuning:

# Edit /etc/apache2/mods-available/mpm_prefork.conf
<IfModule mpm_prefork_module>
    StartServers             5
    MinSpareServers          5
    MaxSpareServers         10
    MaxRequestWorkers      150
    MaxConnectionsPerChild 3000
</IfModule>

# Enable mod_expires
a2enmod expires
cat >> /etc/apache2/apache2.conf << 'EOF'
<IfModule mod_expires.c>
    ExpiresActive On
    ExpiresByType image/jpg "access plus 1 year"
    ExpiresByType image/jpeg "access plus 1 year"
    ExpiresByType image/gif "access plus 1 year"
    ExpiresByType image/png "access plus 1 year"
    ExpiresByType text/css "access plus 1 month"
    ExpiresByType application/javascript "access plus 1 month"
</IfModule>
EOF

# Enable compression
a2enmod deflate

Nginx tuning:

# Edit /etc/nginx/nginx.conf (note the context each directive belongs in)
worker_processes auto;        # main context
worker_connections 2048;      # inside the events block
keepalive_timeout 30;         # inside the http block

# Gzip compression (http block)
gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_types text/plain text/css application/json application/javascript;

# Browser caching (inside a server block)
location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
    expires 1y;
    add_header Cache-Control "public, immutable";
}

# Reload
nginx -t && systemctl reload nginx

Database Optimization

# MySQL tuning (/etc/mysql/my.cnf)
[mysqld]
innodb_buffer_pool_size = 4G    # size to your data set; often 50-70% of RAM on a dedicated DB server
innodb_log_file_size = 256M
max_connections = 200
query_cache_size = 128M         # MySQL 5.7 and earlier only (the query cache was removed in 8.0)
query_cache_limit = 2M
tmp_table_size = 128M
max_heap_table_size = 128M

# Restart MySQL
systemctl restart mysql

# Add indexes for slow queries
ALTER TABLE users ADD INDEX idx_email (email);
ALTER TABLE posts ADD INDEX idx_created (created_at);
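
On MySQL 5.7 and later, the sys schema can suggest where indexes are missing by listing tables and statements that rely on full scans; a hedged example:

# Tables and statements doing full table scans (index candidates)
mysql -e "SELECT * FROM sys.schema_tables_with_full_table_scans LIMIT 10;"
mysql -e "SELECT query, exec_count, no_index_used_count FROM sys.statements_with_full_table_scans LIMIT 10\G"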

PHP-FPM Optimization

# Edit /etc/php/7.4/fpm/pool.d/www.conf
pm = dynamic
pm.max_children = 50
pm.start_servers = 5
pm.min_spare_servers = 5
pm.max_spare_servers = 10
pm.max_requests = 500

# OPcache settings (/etc/php/7.4/fpm/php.ini)
opcache.enable=1
opcache.memory_consumption=256
opcache.interned_strings_buffer=16
opcache.max_accelerated_files=10000
opcache.revalidate_freq=2

systemctl restart php7.4-fpm

Monitoring and Prevention

Performance Monitoring Script

cat > /usr/local/bin/performance-monitor.sh << 'EOF'
#!/bin/bash

LOG_FILE="/var/log/performance-monitor.log"
URL="https://example.com"

# Measure response time
RESPONSE_TIME=$(curl -w "%{time_total}" -o /dev/null -s "$URL")

echo "$(date): Response time: ${RESPONSE_TIME}s" >> "$LOG_FILE"

# Alert if slow
if (( $(echo "$RESPONSE_TIME > 2.0" | bc -l) )); then
    echo "Slow response: ${RESPONSE_TIME}s on $(hostname)" | \
        mail -s "Performance Alert" [email protected]
fi

# Check server resources
CPU=$(top -bn1 | grep "Cpu(s)" | awk '{print $2}' | cut -d'%' -f1)
LOAD=$(uptime | awk -F'load average:' '{print $2}' | awk '{print $1}' | cut -d',' -f1)

echo "CPU: $CPU%, Load: $LOAD" >> "$LOG_FILE"
EOF

chmod +x /usr/local/bin/performance-monitor.sh
echo "*/5 * * * * /usr/local/bin/performance-monitor.sh" | crontab -

Conclusion

Website performance optimization requires systematic analysis of multiple layers - web server, application, database, and infrastructure. Key takeaways:

  1. Measure first: Establish baselines before optimizing
  2. Identify bottlenecks: Use profiling tools to find slow components
  3. Optimize databases: Slow queries kill performance
  4. Enable caching: Reduce redundant processing
  5. Monitor continuously: Catch degradation early
  6. Load test: Verify improvements under realistic conditions
  7. Optimize iteratively: Make one change at a time, measure results

Regular performance monitoring, proper caching strategies, and understanding these diagnostic tools ensure fast, responsive websites that provide excellent user experiences.