PostgreSQL Optimization (postgresql.conf): Complete Performance Tuning Guide

Introduction

PostgreSQL is renowned for its advanced features and standards compliance, but its default configuration is intentionally conservative to ensure it runs on minimal hardware. For production environments, these defaults can severely limit performance, preventing PostgreSQL from utilizing available system resources effectively.

A well-tuned PostgreSQL server can deliver dramatically better performance, handling higher transaction volumes, reducing query latency, and maximizing hardware utilization. The difference between default and optimized configurations can be the difference between supporting hundreds versus thousands of concurrent users, or query times measured in seconds versus milliseconds.

This comprehensive guide focuses on optimizing the postgresql.conf configuration file, covering memory settings, query planning, write-ahead logging, connection management, autovacuum tuning, and workload-specific optimizations. Whether managing a small application database or an enterprise data warehouse, this guide provides expert tuning strategies.

PostgreSQL Architecture Overview

Effective tuning starts with understanding the components being configured:

  • Shared Buffers: PostgreSQL's main memory cache
  • Work Memory: Per-operation memory for sorts and hashes
  • Maintenance Work Memory: Memory for maintenance operations (VACUUM, CREATE INDEX)
  • WAL (Write-Ahead Log): Transaction log for durability and recovery
  • Autovacuum: Background process for cleaning up old row versions
  • Query Planner: Determines optimal query execution strategies
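
A quick way to see how these components are currently configured is to query the pg_settings view:

-- Current values for the core tuning parameters
SELECT name, setting, unit, short_desc
FROM pg_settings
WHERE name IN ('shared_buffers', 'work_mem', 'maintenance_work_mem',
               'wal_level', 'autovacuum')
ORDER BY name;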

Prerequisites

Before optimizing PostgreSQL:

  • PostgreSQL installed and running
  • Root or sudo access to server
  • Understanding of workload characteristics
  • Baseline performance metrics documented
  • Configuration backup created
  • Monitoring tools available

Creating Configuration Backup

# Backup postgresql.conf
# Ubuntu/Debian
sudo cp /etc/postgresql/16/main/postgresql.conf /etc/postgresql/16/main/postgresql.conf.backup

# CentOS/Rocky
sudo cp /var/lib/pgsql/16/data/postgresql.conf /var/lib/pgsql/16/data/postgresql.conf.backup

# Backup entire data directory configuration
sudo tar -czf /backup/postgresql_config_$(date +%Y%m%d).tar.gz /etc/postgresql/

# Document current settings
sudo -u postgres psql -c "SELECT name, setting, unit FROM pg_settings ORDER BY name;" > /backup/pg_settings_before_$(date +%Y%m%d).txt

Gathering System Information

# CPU cores
nproc

# Total memory
free -h

# Disk information
df -h

# Check disk type (ROTA 0 = SSD, 1 = spinning disk)
lsblk -d -o name,rota

# PostgreSQL version
sudo -u postgres psql -c "SELECT version();"

# Current configuration file location
sudo -u postgres psql -c "SHOW config_file;"

Configuration File Location

PostgreSQL configuration file locations:

Ubuntu/Debian:

  • Config: /etc/postgresql/16/main/postgresql.conf
  • Data: /var/lib/postgresql/16/main/
  • HBA: /etc/postgresql/16/main/pg_hba.conf

CentOS/Rocky Linux:

  • Config: /var/lib/pgsql/16/data/postgresql.conf
  • Data: /var/lib/pgsql/16/data/
  • HBA: /var/lib/pgsql/16/data/pg_hba.conf

Find active configuration:

# Show config file location
sudo -u postgres psql -c "SHOW config_file;"

# Show data directory
sudo -u postgres psql -c "SHOW data_directory;"

# List all configuration files
sudo -u postgres psql -c "SELECT * FROM pg_file_settings;"

Memory Configuration

Shared Buffers

Most critical memory parameter for PostgreSQL:

# postgresql.conf

# Shared buffers (main cache)
# Recommendation: 25% of RAM for dedicated server
# Recommendation: 15-20% of RAM for shared server
# Practical ceiling: 8-16GB; beyond that, returns diminish because
# PostgreSQL double-buffers with the OS page cache

# For 16GB RAM dedicated server:
shared_buffers = 4GB

# For 32GB RAM dedicated server:
shared_buffers = 8GB

# For 64GB RAM dedicated server:
shared_buffers = 16GB

# For 128GB RAM dedicated server:
shared_buffers = 16GB  # Cap at 16GB, use effective_cache_size instead

Shared Buffers Guidelines:

Total RAM    Dedicated Server    Shared Server
4GB          1GB                 768MB
8GB          2GB                 1.5GB
16GB         4GB                 3GB
32GB         8GB                 6GB
64GB         16GB                12GB
128GB        16GB                16GB
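
To verify the running value, note that shared_buffers is reported in 8kB pages (assuming the default block size):

-- Convert the current shared_buffers setting to a readable size
SELECT pg_size_pretty(setting::bigint * 8192) AS shared_buffers
FROM pg_settings
WHERE name = 'shared_buffers';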

Effective Cache Size

Tells planner how much memory is available for caching:

# Effective cache size (NOT allocated memory, just hint to planner)
# Recommendation: 50-75% of RAM for dedicated server
# This should include shared_buffers + OS cache

# For 16GB RAM dedicated server:
effective_cache_size = 12GB

# For 32GB RAM dedicated server:
effective_cache_size = 24GB

# For 64GB RAM dedicated server:
effective_cache_size = 48GB

Work Memory

Memory for sorting and hashing operations:

# Work memory (per operation, per connection)
# Used for sorts, hashes, joins
# Calculation: (RAM - shared_buffers) / (max_connections * 2-3)

# For typical OLTP workload:
work_mem = 16MB

# For reporting/analytics workload:
work_mem = 64MB

# For data warehouse:
work_mem = 256MB

# Be careful: total memory usage = work_mem * operations * connections

Work Memory Calculation:

For 32GB RAM, 8GB shared_buffers, 100 connections:

Available: 32GB - 8GB = 24GB
Per connection budget: 24GB / 100 = 240MB
Safe work_mem: 240MB / 3 operations = 80MB
Recommended: 64MB (leave safety margin)
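
If work_mem is too small, sorts and hashes spill to temporary files on disk; pg_stat_database shows how often this happens:

-- Temp-file activity per database; frequent spills suggest raising work_mem
SELECT datname, temp_files, pg_size_pretty(temp_bytes) AS temp_bytes
FROM pg_stat_database
WHERE temp_files > 0
ORDER BY temp_bytes DESC;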

Maintenance Work Memory

Memory for maintenance operations:

# Maintenance work memory (VACUUM, CREATE INDEX, ALTER TABLE)
# Can be set higher than work_mem
# Recommendation: 5-10% of RAM, max 2GB per operation

# For 16GB RAM server:
maintenance_work_mem = 1GB

# For 32GB RAM server:
maintenance_work_mem = 2GB

# For 64GB RAM server:
maintenance_work_mem = 2GB  # VACUUM's dead-tuple storage is capped at 1GB
                            # before PostgreSQL 17; index builds can use more

# Autovacuum work memory (if not set, uses maintenance_work_mem)
autovacuum_work_mem = -1  # Use maintenance_work_mem value
# Or set specifically for autovacuum workers:
# autovacuum_work_mem = 512MB

Huge Pages

Use huge pages for better memory management:

# Huge pages configuration
# Improves performance by reducing TLB misses
huge_pages = try  # try, on, off

# Options:
# try = use if available (recommended)
# on = require huge pages
# off = don't use huge pages

Configure huge pages on Linux:

# Calculate required huge pages
# Formula: (shared_buffers + other shared memory) / 2MB

# For 8GB shared_buffers:
# Required pages: 8GB / 2MB = 4096 pages + 10% overhead = 4500 pages

# Set huge pages
sudo sysctl -w vm.nr_hugepages=4500

# Make permanent
echo "vm.nr_hugepages=4500" | sudo tee -a /etc/sysctl.conf

# Apply changes
sudo sysctl -p

# Verify
cat /proc/meminfo | grep -i huge
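
On PostgreSQL 15 and later, the server can report exactly how many huge pages it needs; run this while the server is stopped (binary path shown for the Ubuntu/Debian layout used above):

# Exact huge page requirement (PostgreSQL 15+, server must be stopped)
sudo -u postgres /usr/lib/postgresql/16/bin/postgres \
    -D /var/lib/postgresql/16/main -C shared_memory_size_in_huge_pages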

Write-Ahead Log (WAL) Configuration

WAL Buffers

# WAL buffers (transaction log buffer)
# Default: -1 (auto-tuned to 1/32 of shared_buffers, capped at 16MB,
# the size of one WAL segment)
# Usually auto-tuning is optimal

wal_buffers = -1

# Or set explicitly for high-write workloads:
# wal_buffers = 16MB

WAL Writer Settings

# WAL writer delay (milliseconds)
# How often WAL writer flushes to disk
wal_writer_delay = 200ms

# WAL writer flush after (bytes)
wal_writer_flush_after = 1MB

Checkpoints

Checkpoints write dirty buffers to disk:

# Maximum time between checkpoints
checkpoint_timeout = 15min  # Default: 5min, increase for better performance

# Checkpoint completion target
# Fraction of checkpoint_timeout to complete checkpoint
# 0.9 = spread checkpoint over 90% of interval
checkpoint_completion_target = 0.9

# Minimum WAL size to keep
min_wal_size = 1GB  # Default: 80MB

# Maximum WAL size before forcing checkpoint
max_wal_size = 4GB  # Default: 1GB

# Warning threshold for checkpoint frequency
checkpoint_warning = 300s

Checkpoint Tuning:

For write-heavy workloads:

checkpoint_timeout = 30min
max_wal_size = 8GB
min_wal_size = 2GB
checkpoint_completion_target = 0.9
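
To judge whether these values fit the workload, compare timed checkpoints with requested ones; if checkpoints_req dominates, max_wal_size is too small (PostgreSQL 17 moves these counters to pg_stat_checkpointer):

-- Requested checkpoints dominating timed ones => increase max_wal_size
SELECT checkpoints_timed, checkpoints_req
FROM pg_stat_bgwriter;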

WAL Archiving

# WAL archiving for backup and replication
wal_level = replica  # minimal, replica, logical

# Archive mode
archive_mode = on

# Archive command
archive_command = 'test ! -f /mnt/wal_archive/%f && cp %p /mnt/wal_archive/%f'

# Archive timeout (force WAL segment switch)
archive_timeout = 300  # 5 minutes
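
Confirm that archiving keeps up and does not fail silently:

-- Archiving health: failed_count should stay at 0
SELECT archived_count, last_archived_wal, failed_count, last_failed_wal
FROM pg_stat_archiver;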

Query Planner Configuration

Cost Parameters for HDD

# Cost parameters for traditional hard drives
random_page_cost = 4.0      # Random vs sequential I/O cost
seq_page_cost = 1.0         # Sequential page cost
cpu_tuple_cost = 0.01       # Cost of processing each row
cpu_index_tuple_cost = 0.005 # Cost of processing each index entry
cpu_operator_cost = 0.0025  # Cost of processing each operator

Cost Parameters for SSD

# Cost parameters for SSD storage
random_page_cost = 1.1      # SSD has minimal random access penalty
seq_page_cost = 1.0
cpu_tuple_cost = 0.01
cpu_index_tuple_cost = 0.005
cpu_operator_cost = 0.0025

# Effective I/O concurrency (number of concurrent I/O operations)
effective_io_concurrency = 200  # For SSD (HDD: 2)
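
These planner costs can also be overridden per tablespace, which helps when SSD and HDD storage are mixed (the tablespace name "fast_ssd" is illustrative):

-- Per-tablespace I/O cost settings for an SSD-backed tablespace
ALTER TABLESPACE fast_ssd
    SET (random_page_cost = 1.1, effective_io_concurrency = 200);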

Parallel Query Settings

# Maximum parallel workers per query
max_parallel_workers_per_gather = 4  # For 4-8 cores
# max_parallel_workers_per_gather = 8  # For 16+ cores

# Maximum background workers
max_worker_processes = 8

# Maximum parallel workers (all types)
max_parallel_workers = 8

# Minimum table size for parallel query
min_parallel_table_scan_size = 8MB
min_parallel_index_scan_size = 512kB

# Parallel workers cost parameters
parallel_tuple_cost = 0.1
parallel_setup_cost = 1000.0
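
To confirm the planner actually produces a parallel plan, inspect EXPLAIN output for a large table ("orders" is a hypothetical example):

-- A parallel plan shows a Gather node and "Workers Launched"
EXPLAIN (ANALYZE, COSTS OFF)
SELECT count(*) FROM orders;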

Statistics Target

# Default statistics target (100-1000)
# Higher = better estimates, slower ANALYZE
default_statistics_target = 100  # Default

# For complex queries or large tables:
# default_statistics_target = 500

# Or set per-column:
# ALTER TABLE table_name ALTER COLUMN column_name SET STATISTICS 500;

Connection Settings

Connection Limits

# Maximum connections
# Balance between capacity and resource usage
max_connections = 100  # Default, increase for high-traffic

# For web applications:
# max_connections = 200

# For connection pooling (PgBouncer):
# max_connections = 50  # Lower with pooler

# Superuser reserved connections
superuser_reserved_connections = 3

Connection Calculation:

Total memory usage estimate:

shared_buffers + (max_connections * (work_mem * 2 + overhead))

Example for 32GB RAM:
8GB shared + (200 * (16MB * 2 + 10MB))
= 8GB + (200 * 42MB)
= 8GB + 8.4GB = 16.4GB

Connection Pooling Recommendation

Use PgBouncer or pgpool-II:

# With connection pooler, reduce max_connections
max_connections = 50

# Pooler handles 1000s of client connections
# PostgreSQL handles 50 backend connections
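
A minimal pgbouncer.ini sketch showing the client-side/server-side split (database name, paths, and pool sizes are illustrative):

; pgbouncer.ini - minimal sketch
[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
pool_mode = transaction      ; good default for most web workloads
max_client_conn = 2000       ; thousands of client connections
default_pool_size = 50       ; matches the max_connections budget above
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt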

Autovacuum Configuration

Critical for PostgreSQL health and performance:

Basic Autovacuum Settings

# Enable autovacuum (should always be ON)
autovacuum = on

# Number of autovacuum worker processes
autovacuum_max_workers = 3  # For typical workload
# autovacuum_max_workers = 6  # For write-heavy workload

# Delay between autovacuum runs
autovacuum_naptime = 1min  # How often to check for work

# Inserted tuples before an insert-mostly table is vacuumed (PostgreSQL 13+)
autovacuum_vacuum_insert_threshold = 1000

Autovacuum Thresholds

# Vacuum threshold = threshold + scale_factor * tuples

# Vacuum threshold (minimum dead tuples)
autovacuum_vacuum_threshold = 50

# Vacuum scale factor (percentage of table size)
autovacuum_vacuum_scale_factor = 0.2  # 20% of table

# Example: 1M row table triggers at: 50 + (0.2 * 1,000,000) = 200,050 dead tuples

# For large tables, reduce scale factor:
# autovacuum_vacuum_scale_factor = 0.05  # 5%

# Analyze threshold
autovacuum_analyze_threshold = 50
autovacuum_analyze_scale_factor = 0.1  # 10% of table
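
These thresholds can be overridden per table, the usual fix for very large tables ("big_events" is a hypothetical example):

-- Per-table autovacuum settings for a large, frequently updated table
ALTER TABLE big_events SET (
    autovacuum_vacuum_scale_factor = 0.05,
    autovacuum_vacuum_threshold = 1000
);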

Autovacuum Cost Settings

# Autovacuum cost delay (throttling)
# Lower = faster vacuum, higher I/O load
autovacuum_vacuum_cost_delay = 2ms  # Default since PostgreSQL 12 (was 20ms)

# Vacuum cost limit (before sleeping)
autovacuum_vacuum_cost_limit = 200  # Default: -1 (use vacuum_cost_limit)

# For aggressive autovacuum (modern hardware):
# autovacuum_vacuum_cost_delay = 0  # No delay
# autovacuum_vacuum_cost_limit = -1  # No limit

Freeze Settings

# Autovacuum freeze thresholds
# Prevents transaction ID wraparound

# Vacuum freeze minimum age
vacuum_freeze_min_age = 50000000

# Vacuum freeze table age
vacuum_freeze_table_age = 150000000

# Autovacuum freeze max age (emergency vacuum)
autovacuum_freeze_max_age = 200000000

# Multixact freeze settings
vacuum_multixact_freeze_min_age = 5000000
vacuum_multixact_freeze_table_age = 150000000
autovacuum_multixact_freeze_max_age = 400000000
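
To see how close tables are to the emergency threshold, compare relfrozenxid age against autovacuum_freeze_max_age:

-- Tables closest to transaction ID wraparound
SELECT relname, age(relfrozenxid) AS xid_age
FROM pg_class
WHERE relkind = 'r'
ORDER BY age(relfrozenxid) DESC
LIMIT 10;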

Background Writer Configuration

# Background writer settings
bgwriter_delay = 200ms          # Sleep between rounds
bgwriter_lru_maxpages = 100     # Max pages to write per round
bgwriter_lru_multiplier = 2.0   # Multiplier for next round

# Flush after writing this much
bgwriter_flush_after = 512kB

# For fast storage, allow more work per round:
# bgwriter_lru_maxpages = 1000
# Note: bgwriter_lru_maxpages = 0 disables background writing entirely

Logging Configuration

Log Verbosity

# Logging collector
logging_collector = on

# Log directory and filename
log_directory = 'log'
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'

# Log file rotation
log_rotation_age = 1d
log_rotation_size = 100MB
log_truncate_on_rotation = on

# What to log
log_min_duration_statement = 1000  # Log queries > 1 second
log_checkpoints = on
log_connections = on
log_disconnections = on
log_duration = off
log_line_prefix = '%t [%p]: [%l-1] user=%u,db=%d,app=%a,client=%h '
log_lock_waits = on
log_statement = 'ddl'  # none, ddl, mod, all
log_temp_files = 0  # Log all temp files

# Error detail
log_min_messages = warning
log_min_error_statement = error

Slow Query Logging

# Log slow queries
log_min_duration_statement = 1000  # milliseconds

# Log queries taking > 1 second
# Or 0 to log all queries (development only)
# Or -1 to disable
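
To capture the execution plans of slow queries rather than just their text, the auto_explain module can be loaded (requires a restart; thresholds are illustrative):

# auto_explain: log execution plans of slow queries
shared_preload_libraries = 'auto_explain'
auto_explain.log_min_duration = 1000  # ms; -1 disables
auto_explain.log_analyze = off        # 'on' adds per-node timing overhead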

Workload-Specific Configurations

High-Transaction OLTP

# Optimized for many short transactions
shared_buffers = 8GB
effective_cache_size = 24GB
work_mem = 16MB
maintenance_work_mem = 1GB

max_connections = 200
max_parallel_workers_per_gather = 2

# Fast checkpoints
checkpoint_timeout = 15min
max_wal_size = 4GB
checkpoint_completion_target = 0.9

# SSD settings
random_page_cost = 1.1
effective_io_concurrency = 200

# Aggressive autovacuum
autovacuum_max_workers = 4
autovacuum_vacuum_cost_delay = 2ms

Data Warehouse / Analytics

# Optimized for complex queries on large datasets
shared_buffers = 16GB
effective_cache_size = 48GB
work_mem = 256MB
maintenance_work_mem = 2GB

max_connections = 50
max_parallel_workers_per_gather = 8
max_parallel_workers = 16

# Less frequent checkpoints
checkpoint_timeout = 30min
max_wal_size = 8GB

# Parallel query optimization
min_parallel_table_scan_size = 8MB
parallel_tuple_cost = 0.01
parallel_setup_cost = 100.0

# Statistics
default_statistics_target = 500

Mixed Workload

# Balanced configuration
shared_buffers = 8GB
effective_cache_size = 24GB
work_mem = 32MB
maintenance_work_mem = 1GB

max_connections = 150
max_parallel_workers_per_gather = 4

checkpoint_timeout = 20min
max_wal_size = 4GB
checkpoint_completion_target = 0.9

random_page_cost = 1.1
effective_io_concurrency = 200

autovacuum_max_workers = 3

Small Server (4GB RAM)

# Resource-constrained environment
shared_buffers = 1GB
effective_cache_size = 3GB
work_mem = 4MB
maintenance_work_mem = 256MB

max_connections = 50
max_parallel_workers_per_gather = 2

checkpoint_timeout = 10min
max_wal_size = 1GB

autovacuum_max_workers = 2

Monitoring and Tuning

Key Metrics to Monitor

-- Buffer cache hit ratio (should be > 99%)
SELECT
    sum(heap_blks_read) AS heap_read,
    sum(heap_blks_hit) AS heap_hit,
    sum(heap_blks_hit) * 100.0
        / NULLIF(sum(heap_blks_hit) + sum(heap_blks_read), 0) AS cache_hit_ratio
FROM pg_statio_user_tables;

-- Connection usage
SELECT
    max_conn,
    used,
    res_for_super,
    max_conn - used - res_for_super AS available
FROM
    (SELECT count(*) used FROM pg_stat_activity) t1,
    (SELECT setting::int max_conn FROM pg_settings WHERE name = 'max_connections') t2,
    (SELECT setting::int res_for_super FROM pg_settings WHERE name = 'superuser_reserved_connections') t3;

-- Checkpoint statistics
SELECT * FROM pg_stat_bgwriter;

-- Table bloat
SELECT
    schemaname,
    tablename,
    pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) AS size,
    n_live_tup,
    n_dead_tup,
    round(n_dead_tup * 100.0 / NULLIF(n_live_tup + n_dead_tup, 0), 2) AS dead_ratio
FROM pg_stat_user_tables
WHERE n_dead_tup > 1000
ORDER BY n_dead_tup DESC
LIMIT 10;

-- Long-running queries
SELECT
    pid,
    now() - query_start AS duration,
    state,
    query
FROM pg_stat_activity
WHERE state != 'idle'
AND now() - query_start > interval '5 minutes'
ORDER BY duration DESC;

Tuning Tools

# PGTune - Configuration generator
# https://pgtune.leopard.in.ua/

# pg_stat_statements - Query statistics
# Requires shared_preload_libraries = 'pg_stat_statements' and a restart
sudo -u postgres psql -d database -c "CREATE EXTENSION pg_stat_statements;"

# Top queries by total execution time
sudo -u postgres psql -d database -c "SELECT query, calls, total_exec_time FROM pg_stat_statements ORDER BY total_exec_time DESC LIMIT 10;"

# pg_buffercache - Buffer cache inspection
sudo -u postgres psql -c "CREATE EXTENSION pg_buffercache;"

# pgBadger - Log analyzer
pgbadger /var/log/postgresql/postgresql-*.log -o report.html

Applying Configuration Changes

Testing Configuration

# Check a single setting without starting a new server
# (postgres -C needs the data directory; binary path shown for Ubuntu/Debian)
sudo -u postgres /usr/lib/postgresql/16/bin/postgres \
    -D /var/lib/postgresql/16/main -C shared_buffers

# Test a specific configuration file
sudo -u postgres /usr/lib/postgresql/16/bin/postgres \
    --config-file=/etc/postgresql/16/main/postgresql.conf -C shared_buffers

# View current settings
sudo -u postgres psql -c "SHOW ALL;"

Applying Changes

# Reload configuration (doesn't restart server)
sudo systemctl reload postgresql

# Or from within psql
sudo -u postgres psql -c "SELECT pg_reload_conf();"

# Restart required for some settings
sudo systemctl restart postgresql

# Verify changes
sudo -u postgres psql -c "SHOW shared_buffers;"
sudo -u postgres psql -c "SHOW effective_cache_size;"

# Check logs
sudo tail -f /var/log/postgresql/postgresql-16-main.log

Dynamic Configuration

Some parameters can be changed without restart:

-- ALTER SYSTEM writes to postgresql.auto.conf and persists across restarts
ALTER SYSTEM SET work_mem = '32MB';

-- Reload configuration
SELECT pg_reload_conf();

-- Verify
SHOW work_mem;

-- Settings in pg_settings view
SELECT name, setting, unit, context
FROM pg_settings
WHERE name = 'work_mem';
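
Parameters with context 'postmaster' require a restart; pg_settings also flags any setting that has been changed but is still waiting for one:

-- Changed settings awaiting a server restart
SELECT name, setting
FROM pg_settings
WHERE pending_restart;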

Complete Optimized Configuration Example

Production configuration for 32GB RAM dedicated database server:

# postgresql.conf - Optimized for 32GB RAM, SSD, OLTP/Mixed Workload

# Memory Settings
shared_buffers = 8GB
effective_cache_size = 24GB
work_mem = 32MB
maintenance_work_mem = 2GB
huge_pages = try

# WAL Settings
wal_buffers = -1
min_wal_size = 1GB
max_wal_size = 4GB
checkpoint_timeout = 15min
checkpoint_completion_target = 0.9
wal_compression = on

# Query Planner
random_page_cost = 1.1
effective_io_concurrency = 200
default_statistics_target = 100

# Parallel Query
max_worker_processes = 8
max_parallel_workers = 8
max_parallel_workers_per_gather = 4
max_parallel_maintenance_workers = 4

# Connection Settings
max_connections = 200
superuser_reserved_connections = 3

# Autovacuum
autovacuum = on
autovacuum_max_workers = 3
autovacuum_naptime = 1min
autovacuum_vacuum_scale_factor = 0.1
autovacuum_analyze_scale_factor = 0.05
autovacuum_vacuum_cost_delay = 2ms

# Background Writer
bgwriter_delay = 200ms
bgwriter_lru_maxpages = 100
bgwriter_lru_multiplier = 2.0

# Logging
logging_collector = on
log_directory = 'log'
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
log_rotation_age = 1d
log_rotation_size = 100MB
log_min_duration_statement = 1000
log_checkpoints = on
log_connections = on
log_disconnections = on
log_line_prefix = '%t [%p]: [%l-1] user=%u,db=%d,app=%a,client=%h '
log_lock_waits = on

# Statistics
track_activities = on
track_counts = on
track_io_timing = on
track_functions = all

Conclusion

PostgreSQL performance tuning is a continuous process requiring monitoring, analysis, and iterative adjustments. This guide has provided comprehensive configuration strategies for optimizing PostgreSQL through the postgresql.conf file.

Key Takeaways

  1. Memory is Critical: Properly configure shared_buffers and effective_cache_size
  2. Match Hardware: Optimize for SSD/HDD and available resources
  3. Monitor Continuously: Track metrics before and after changes
  4. Autovacuum Matters: Essential for long-term performance
  5. Workload-Specific: Tune for your specific use case

Optimization Checklist

  • ✓ Configure shared_buffers (25% of RAM, max 16GB)
  • ✓ Set effective_cache_size (75% of RAM)
  • ✓ Optimize work_mem for workload
  • ✓ Configure maintenance_work_mem
  • ✓ Enable huge pages
  • ✓ Optimize WAL settings
  • ✓ Tune checkpoint parameters
  • ✓ Configure for storage type (SSD/HDD)
  • ✓ Enable parallel queries
  • ✓ Optimize autovacuum
  • ✓ Configure logging
  • ✓ Monitor key metrics


Remember: performance optimization is an empirical process. Measure, change one parameter at a time, measure again, and iterate.