PostgreSQL’s default configuration is designed to run on minimal hardware without crashing. It’s not designed to perform well. Out of the box, Postgres uses a fraction of available memory and I/O capacity.

Proper tuning can improve performance by 10x or more.

Memory Configuration

shared_buffers

The single most important setting: the size of Postgres's own buffer cache.

# Default: 128MB (way too low)
# Recommended: 25% of system RAM
shared_buffers = 4GB  # On a 16GB server

Don’t set above 40% — the OS needs memory for its own file cache.

effective_cache_size

Tells the query planner how much memory is available for caching (Postgres + OS combined).

# Recommended: 50-75% of total RAM
effective_cache_size = 12GB  # On a 16GB server

This doesn’t allocate memory — it just helps the planner make better decisions.

work_mem

Memory available to each individual sort or hash operation:

# Default: 4MB
# Be careful: this is per-operation, not per-query
work_mem = 64MB

A complex query might use work_mem multiple times. Set conservatively and increase for specific queries:

SET work_mem = '256MB';
SELECT ... complex query ...
RESET work_mem;
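Inside an explicit transaction, SET LOCAL is a safer variant (a common pattern, not from the original text): the override reverts automatically at commit or rollback, so a forgotten RESET can't leak into later queries on a pooled connection.

```sql
BEGIN;
-- Applies only until COMMIT/ROLLBACK
SET LOCAL work_mem = '256MB';
SELECT ... complex query ...;
COMMIT;
```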

maintenance_work_mem

Memory for maintenance operations (VACUUM, CREATE INDEX):

# Default: 64MB
maintenance_work_mem = 1GB

These operations are less frequent, so you can be more generous.

Write-Ahead Log (WAL)

wal_buffers

Buffer for WAL data before writing to disk:

# Default: -1 (sized as shared_buffers/32, capped at 16MB)
# Recommended: 64MB for most workloads
wal_buffers = 64MB

checkpoint_completion_target

How much of the checkpoint interval to spread I/O over:

# Default: 0.9 (good)
checkpoint_completion_target = 0.9

max_wal_size

A soft limit on WAL size; reaching it forces a checkpoint:

# Default: 1GB
# Increase for write-heavy workloads
max_wal_size = 4GB
min_wal_size = 1GB

Connection Management

max_connections

# Default: 100
# Set based on actual needs, not "just in case"
max_connections = 200

Every connection consumes memory and a backend process. Use connection pooling (PgBouncer) rather than setting this high.
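A minimal PgBouncer sketch of transaction pooling (the database name, addresses, and pool sizes below are illustrative placeholders):

```ini
; pgbouncer.ini -- illustrative values only
[databases]
app = host=127.0.0.1 port=5432 dbname=app

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
default_pool_size = 20
max_client_conn = 1000
```

With transaction pooling, a thousand client connections can share twenty server connections, keeping Postgres's per-backend memory cost bounded.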

Query Planner

random_page_cost

Cost of non-sequential disk access. Lower for SSDs:

# Default: 4.0 (for spinning disks)
# For SSDs:
random_page_cost = 1.1

effective_io_concurrency

How many concurrent disk I/O operations the storage can handle (used to prefetch pages, e.g. during bitmap heap scans):

# Default: 1
# For SSDs:
effective_io_concurrency = 200

Analyzing Slow Queries

EXPLAIN ANALYZE

The most important debugging tool:

EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
SELECT * FROM orders 
WHERE customer_id = 123 
ORDER BY created_at DESC 
LIMIT 10;

Look for:

  • Seq Scan on large tables (missing index?)
  • Large gaps between estimated and actual row counts (stale statistics?)
  • Nested loops with many iterations (missing index on the join key?)
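When estimates and actuals diverge badly, refreshing or enlarging the planner's statistics is the usual first fix (the column name below is illustrative):

```sql
-- Refresh statistics for the table
ANALYZE orders;

-- For skewed columns, sample more values than the default target of 100
ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 500;
ANALYZE orders;
```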

pg_stat_statements

Track query performance over time:

-- Enable in postgresql.conf (requires a server restart)
shared_preload_libraries = 'pg_stat_statements'

-- Then create the extension in each database you want to track
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Query the stats
SELECT 
    query,
    calls,
    total_exec_time / 1000 as total_seconds,
    mean_exec_time as avg_ms,
    rows
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 20;

Indexing

B-tree (Default)

Good for equality and range queries:

CREATE INDEX idx_users_email ON users(email);
CREATE INDEX idx_orders_created ON orders(created_at);

Partial Indexes

Index only the rows you actually query:

-- Only index active users
CREATE INDEX idx_active_users ON users(email) 
WHERE status = 'active';

Covering Indexes

Include columns to enable index-only scans:

-- Can answer query without touching table
CREATE INDEX idx_orders_customer ON orders(customer_id) 
INCLUDE (status, total);
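As a sketch of the payoff, a query that touches only the key and INCLUDE columns can be answered by an index-only scan (assuming the table is vacuumed so the visibility map is reasonably current):

```sql
-- Every referenced column lives in idx_orders_customer,
-- so the plan can avoid heap fetches entirely
EXPLAIN (ANALYZE, BUFFERS)
SELECT status, total
FROM orders
WHERE customer_id = 123;
```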

GIN for Arrays/JSONB

CREATE INDEX idx_tags ON posts USING GIN(tags);
CREATE INDEX idx_metadata ON events USING GIN(metadata);

Composite Indexes

Column order matters. Queries can use only a leading prefix of the index, so put columns compared with equality first and range conditions last:

-- For: WHERE tenant_id = X AND status = Y AND created_at > Z
CREATE INDEX idx_tenant_status_created 
ON orders(tenant_id, status, created_at);

VACUUM and Maintenance

Autovacuum Tuning

-- Make autovacuum more aggressive for busy tables
ALTER TABLE orders SET (
    autovacuum_vacuum_scale_factor = 0.05,
    autovacuum_analyze_scale_factor = 0.02
);
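To spot-check whether autovacuum is keeping up, compare dead-row counts against the last vacuum timestamps (a standard query against the statistics views, not from the original text):

```sql
-- Tables with the most dead rows and when they were last vacuumed/analyzed
SELECT relname, n_dead_tup, last_autovacuum, last_autoanalyze
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;
```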

Manual Maintenance

-- Update statistics after bulk loads
ANALYZE orders;

-- Reclaim space after large deletes
VACUUM (VERBOSE) orders;

-- Full vacuum (locks table, use rarely)
VACUUM FULL orders;

-- Rebuild indexes
REINDEX INDEX CONCURRENTLY idx_orders_created;

Monitoring

Key Metrics to Watch

-- Cache hit ratio (should be > 99%)
SELECT 
    sum(heap_blks_hit) / nullif(sum(heap_blks_hit) + sum(heap_blks_read), 0) as ratio
FROM pg_statio_user_tables;

-- Index usage
SELECT 
    relname,
    idx_scan,
    seq_scan,
    idx_scan::float / nullif(idx_scan + seq_scan, 0) as idx_ratio
FROM pg_stat_user_tables
ORDER BY seq_scan DESC;

-- Unused indexes
SELECT 
    schemaname, relname, indexrelname, idx_scan
FROM pg_stat_user_indexes
WHERE idx_scan = 0 AND indexrelname NOT LIKE 'pg_%';

-- Table bloat estimate
SELECT 
    relname,
    n_dead_tup,
    n_live_tup,
    round(n_dead_tup::numeric / nullif(n_live_tup, 0) * 100, 2) as dead_pct
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC;

Active Queries

-- Long-running queries
SELECT 
    pid,
    now() - pg_stat_activity.query_start AS duration,
    query,
    state
FROM pg_stat_activity
WHERE (now() - pg_stat_activity.query_start) > interval '5 minutes'
AND state != 'idle';

-- Blocked queries
SELECT 
    blocked.pid AS blocked_pid,
    blocked.query AS blocked_query,
    blocking.pid AS blocking_pid,
    blocking.query AS blocking_query
FROM pg_stat_activity blocked
JOIN pg_stat_activity blocking ON blocking.pid = ANY(pg_blocking_pids(blocked.pid));
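When a blocker has to be removed, Postgres offers two escalating functions (the pid here is a placeholder taken from the query above):

```sql
SELECT pg_cancel_backend(12345);     -- cancel the backend's current query
SELECT pg_terminate_backend(12345);  -- disconnect the backend entirely
```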

Sample Configuration

For a 16GB RAM server with SSDs:

# postgresql.conf

# Memory
shared_buffers = 4GB
effective_cache_size = 12GB
work_mem = 64MB
maintenance_work_mem = 1GB

# WAL
wal_buffers = 64MB
max_wal_size = 4GB
min_wal_size = 1GB
checkpoint_completion_target = 0.9

# Query Planner (SSD)
random_page_cost = 1.1
effective_io_concurrency = 200

# Connections
max_connections = 200

# Logging
log_min_duration_statement = 1000  # Log queries > 1s
log_checkpoints = on
log_lock_waits = on

# Statistics
shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.track = all

PostgreSQL tuning is about matching configuration to your hardware and workload. The defaults assume nothing about your system. Proper tuning tells Postgres what resources it has and how to use them.

Start with memory settings, then query analysis, then indexing. Monitor cache hit ratios and slow queries. The database that’s properly tuned doesn’t just run faster — it scales further before you need more hardware.