Database Connection Pooling: Stop Opening Connections for Every Query
Why connection pooling matters and how to configure it properly.
March 1, 2026 · 5 min · 1021 words · Rob Washington
Opening a database connection is expensive. TCP handshake, SSL negotiation, authentication, session setup—it all adds up. Do that for every query and your application crawls. Connection pooling fixes this by reusing connections. Here’s how to do it right.
Without pooling, every request opens a new connection:
```python
import psycopg2

# BAD: New connection per request
def get_user(user_id):
    conn = psycopg2.connect(DATABASE_URL)  # ~50-100ms
    cursor = conn.cursor()
    cursor.execute("SELECT * FROM users WHERE id = %s", (user_id,))
    user = cursor.fetchone()
    conn.close()
    return user
```
At 100 requests per second, that’s 100 connections opening and closing per second. Your database server has a connection limit (typically 100-500). You’ll exhaust it fast.
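A back-of-envelope calculation (using the ~100ms overhead figure above) shows how much capacity that wastes:

```python
# Back-of-envelope: time spent on connection setup alone, per second
requests_per_second = 100
connect_overhead_s = 0.1  # ~100 ms per new connection (TCP + TLS + auth)

# Seconds of connect work generated every wall-clock second
setup_work_per_second = requests_per_second * connect_overhead_s
print(setup_work_per_second)  # 10.0 seconds of connect work per second
```

By Little's law, that's roughly 10 connections permanently tied up in handshakes before a single query runs.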
A pool maintains a set of open connections and hands them out as needed:
```python
from psycopg2 import pool

# Create pool once at startup
connection_pool = pool.ThreadedConnectionPool(
    minconn=5,   # Minimum connections to keep open
    maxconn=20,  # Maximum connections allowed
    dsn=DATABASE_URL
)

def get_user(user_id):
    conn = connection_pool.getconn()  # ~0ms (reuses existing)
    try:
        cursor = conn.cursor()
        cursor.execute("SELECT * FROM users WHERE id = %s", (user_id,))
        return cursor.fetchone()
    finally:
        connection_pool.putconn(conn)  # Return to pool, don't close
```
Result: Connections are reused. Connection overhead drops from ~100ms to ~0ms per query.
With SQLAlchemy, the engine manages the pool for you; configure it once:

```python
from sqlalchemy import create_engine

engine = create_engine(
    DATABASE_URL,
    pool_size=10,        # Maintained connections
    max_overflow=20,     # Extra connections when busy
    pool_timeout=30,     # Seconds to wait for a connection
    pool_recycle=1800,   # Recycle connections after 30 min
    pool_pre_ping=True   # Verify connection before use
)
```
Key settings:
pool_size: Base number of connections to maintain
max_overflow: How many extra connections during spikes
pool_timeout: How long to wait before raising an error
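To make `pool_timeout` concrete, here's a toy stand-in (not SQLAlchemy's actual implementation) that parks connections behind a bounded queue and fails once a checkout waits longer than the timeout:

```python
import queue

class TinyPool:
    """Toy illustration of pool_size / pool_timeout semantics.

    Not SQLAlchemy's implementation -- just the core idea: a fixed set
    of connections behind a queue, with a capped wait on checkout.
    """

    def __init__(self, size, timeout_s):
        self._conns = queue.Queue()
        for i in range(size):
            self._conns.put(f"conn-{i}")  # stand-ins for real connections
        self._timeout_s = timeout_s

    def getconn(self):
        try:
            # Block up to timeout_s waiting for a free connection
            return self._conns.get(timeout=self._timeout_s)
        except queue.Empty:
            raise TimeoutError("no free connection within pool_timeout")

    def putconn(self, conn):
        self._conns.put(conn)  # hand the connection back
```

SQLAlchemy's `QueuePool` does the same thing in spirit: once every base and overflow connection is checked out, the next checkout waits `pool_timeout` seconds and then raises `sqlalchemy.exc.TimeoutError`.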
```python
# With pool_pre_ping enabled, SQLAlchemy will automatically:
# 1. Get a connection from the pool
# 2. Send a ping (SELECT 1)
# 3. If the ping fails, discard it and get a fresh connection
# 4. Execute your query
```
```python
from sqlalchemy import text

# BAD: Connection leak if exception occurs
def bad_query():
    conn = pool.getconn()
    cursor = conn.cursor()
    cursor.execute("SELECT * FROM users")  # Exception here = leaked conn
    pool.putconn(conn)

# GOOD: try/finally ensures the connection is returned
def good_query():
    conn = pool.getconn()
    try:
        cursor = conn.cursor()
        cursor.execute("SELECT * FROM users")
        return cursor.fetchall()
    finally:
        pool.putconn(conn)

# BETTER: Use SQLAlchemy's context manager
def better_query():
    with engine.connect() as conn:
        # Connection automatically returned to pool on exit
        return conn.execute(text("SELECT * FROM users")).fetchall()
```
Detect leaks: Monitor pool.checkedout() or equivalent. If it keeps growing, you have a leak.
Expose the pool's stats as Prometheus gauges:

```python
from prometheus_client import Gauge

pool_size = Gauge('db_pool_size', 'Total connections in pool')
pool_checked_out = Gauge('db_pool_checked_out', 'Connections in use')
pool_overflow = Gauge('db_pool_overflow', 'Overflow connections in use')

def update_pool_metrics():
    pool_size.set(engine.pool.size())
    pool_checked_out.set(engine.pool.checkedout())
    pool_overflow.set(engine.pool.overflow())
```
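One way to keep those gauges fresh (a sketch; the helper name and 15-second interval are my own choices) is a daemon timer that re-runs the update:

```python
import threading

def start_metrics_loop(update_fn, interval_s=15.0):
    """Run update_fn immediately, then again every interval_s seconds.

    update_fn would be something like update_pool_metrics above; a
    daemon timer dies with the process, so it never blocks shutdown.
    """
    update_fn()
    timer = threading.Timer(interval_s, start_metrics_loop,
                            args=(update_fn, interval_s))
    timer.daemon = True
    timer.start()
    return timer
```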
Alerts to set:
Pool exhaustion (checked_out == size + max_overflow)
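The exhaustion condition is simple arithmetic over the gauges above; a sketch, with the constants assumed to mirror the `create_engine` settings:

```python
POOL_SIZE = 10     # must match pool_size in create_engine
MAX_OVERFLOW = 20  # must match max_overflow

def pool_exhausted(checked_out):
    """True when every base and overflow connection is in use."""
    return checked_out >= POOL_SIZE + MAX_OVERFLOW
```

For example, `pool_exhausted(30)` fires the alert while `pool_exhausted(12)` does not.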