Database connections are expensive to create. Connection pooling reuses existing connections to reduce latency and resource consumption. Understanding pooling is essential for scaling database-backed applications.
## The Connection Problem

### What Happens Without Pooling
Every database connection requires a TCP handshake, authentication, and session initialization. Without pooling, each request pays this cost, adding significant latency and straining database resources.
```
Request 1 → Open connection → Query → Close connection (100ms overhead)
Request 2 → Open connection → Query → Close connection (100ms overhead)
Request 3 → Open connection → Query → Close connection (100ms overhead)
```
Each connection involves:
- TCP handshake
- TLS negotiation (if encrypted)
- Database authentication
- Session initialization
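The cumulative cost is easy to quantify. A back-of-the-envelope sketch, using the illustrative 100ms open-cost and 5ms borrow-cost figures from this section (not measurements):

```python
def total_latency_ms(requests: int, query_ms: float, overhead_ms: float) -> float:
    """Total time spent on connection handling plus actual query work."""
    return requests * (query_ms + overhead_ms)

# 1,000 requests doing 10ms of real query work each
without_pool = total_latency_ms(1000, 10, 100)  # open/close every time
with_pool = total_latency_ms(1000, 10, 5)       # borrow/return from a pool

print(without_pool)  # 110000.0 ms
print(with_pool)     # 15000.0 ms
```

With these numbers, connection setup accounts for over 90% of total latency in the unpooled case.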
### With Connection Pooling
Connection pooling maintains a set of pre-established connections. Requests borrow connections from the pool, use them, and return them. The expensive setup happens once, and subsequent requests get near-instant access.
```
Pool maintains 10 connections

Request 1 → Borrow connection → Query → Return to pool (5ms overhead)
Request 2 → Borrow connection → Query → Return to pool (5ms overhead)
Request 3 → Borrow connection → Query → Return to pool (5ms overhead)
```
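The borrow/return cycle can be sketched with a toy pool built on a thread-safe queue. This is an illustration of the mechanism only; real pools also validate and reset connections, as covered later:

```python
import queue

class SimplePool:
    """Toy connection pool: pre-creates N connections, hands them out on demand."""

    def __init__(self, factory, size: int):
        self._available = queue.Queue()
        for _ in range(size):
            # The expensive setup happens once, up front
            self._available.put(factory())

    def acquire(self, timeout: float = 5.0):
        # Blocks until a connection is free; raises queue.Empty on timeout
        return self._available.get(timeout=timeout)

    def release(self, conn) -> None:
        self._available.put(conn)

# Usage with a stand-in "connection" object
pool = SimplePool(factory=object, size=2)
a = pool.acquire()
b = pool.acquire()
pool.release(a)
c = pool.acquire()   # reuses the connection just returned
print(c is a)        # True
```

The key property: `acquire()` after `release()` hands back an existing connection instead of creating a new one.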
## Types of Connection Pooling

### Application-Level Pooling
Most web frameworks include built-in connection pooling. This is the simplest approach and works well for single-application deployments.
```php
// Laravel database.php
'mysql' => [
    'driver'   => 'mysql',
    'host'     => env('DB_HOST'),
    'database' => env('DB_DATABASE'),
    'username' => env('DB_USERNAME'),
    'password' => env('DB_PASSWORD'),

    // Pool settings (persistent pooling like this requires a long-running
    // runtime such as Swoole/Octane; classic PHP-FPM holds at most one
    // connection per worker process)
    'pool' => [
        'min' => 2,
        'max' => 10,
    ],
],
```
### External Connection Poolers
For larger deployments with multiple application instances, external poolers centralize connection management. This is essential when the total connections from all app instances would exceed database limits.
```
Application → PgBouncer → PostgreSQL
Application → ProxySQL  → MySQL
```
## PgBouncer

### Installation and Setup
PgBouncer is the standard connection pooler for PostgreSQL. It sits between your application and database, managing a pool of connections efficiently.
```bash
# Install PgBouncer
apt-get install pgbouncer
```

Configuration lives in `/etc/pgbouncer/pgbouncer.ini`:

```ini
[databases]
myapp = host=127.0.0.1 port=5432 dbname=myapp

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 20
min_pool_size = 5
reserve_pool_size = 5
```
The `max_client_conn` setting determines how many application connections PgBouncer accepts, while `default_pool_size` controls how many actual database connections it maintains per user/database pair.
### Pool Modes
PgBouncer offers three pooling modes with different tradeoffs. Transaction mode is most common for web applications because it balances efficiency with compatibility.
```ini
; Session mode: connection held for the entire client session
pool_mode = session

; Transaction mode: connection returned after each transaction
pool_mode = transaction

; Statement mode: connection returned after each statement
pool_mode = statement
```
In transaction mode, a connection is released as soon as a transaction commits or rolls back, so a waiting client can pick it up immediately:
```
Client A: BEGIN → query → query → COMMIT → connection returned
Client B: (waiting) ................... → gets connection → query → ...
```
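Transaction-level multiplexing can be modeled with a toy sketch: a pool holding a single real server connection serves transactions from multiple clients, because the connection is returned at `COMMIT` rather than at disconnect. This illustrates the behavior, not PgBouncer's implementation:

```python
import queue

server_connections = queue.Queue()
server_connections.put("conn-1")  # a pool of exactly one real connection

def run_transaction(client: str, statements: list) -> str:
    conn = server_connections.get()  # borrowed for the transaction only
    try:
        return f"{client} on {conn}: BEGIN; " + "; ".join(statements) + "; COMMIT"
    finally:
        server_connections.put(conn)  # returned at COMMIT, not at disconnect

print(run_transaction("client-A", ["SELECT 1"]))
print(run_transaction("client-B", ["SELECT 2"]))  # same physical connection
```

Both clients' transactions run over `conn-1`, which is why 1000 application connections can be served by a pool of 20.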
### User Authentication
PgBouncer requires a userlist file with usernames and password hashes. The hash format must match what PostgreSQL uses.
```ini
; /etc/pgbouncer/userlist.txt
"myapp_user" "md5hash_of_password"
```

The stored value is the literal prefix `md5` followed by `md5(password || username)`:

```bash
# Generate the hash (note the "md5" prefix)
echo "md5$(echo -n 'passwordusername' | md5sum | cut -d' ' -f1)"
# Or in PostgreSQL: SELECT concat('md5', md5('password' || 'username'));
```
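The same value can be computed anywhere you can take an MD5 digest. A small sketch with hypothetical credentials (the username and password below are placeholders):

```python
import hashlib

def pg_md5_auth(username: str, password: str) -> str:
    """PostgreSQL md5 auth format: 'md5' + md5(password concatenated with username)."""
    digest = hashlib.md5((password + username).encode()).hexdigest()
    return "md5" + digest

# Hypothetical credentials for illustration
print(pg_md5_auth("myapp_user", "s3cret"))
# Paste the result as the second field of the userlist.txt entry:
# "myapp_user" "md5<32 hex chars>"
```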
### Laravel with PgBouncer
When using Laravel with PgBouncer in transaction mode, you need to enable prepared statement emulation. PgBouncer does not support named prepared statements in transaction mode because statements are tied to specific connections.
```php
// config/database.php
'pgsql' => [
    'driver'   => 'pgsql',
    'host'     => env('DB_HOST', '127.0.0.1'),
    'port'     => env('DB_PORT', '6432'), // PgBouncer port, not PostgreSQL's 5432
    'database' => env('DB_DATABASE'),
    'username' => env('DB_USERNAME'),
    'password' => env('DB_PASSWORD'),
    'charset'  => 'utf8',
    'prefix'   => '',
    'schema'   => 'public',
    'sslmode'  => 'prefer',

    // Important for transaction pooling
    'options' => [
        PDO::ATTR_EMULATE_PREPARES => true,
    ],
],
```
The ATTR_EMULATE_PREPARES setting is crucial. Without it, PDO creates server-side prepared statements that break when PgBouncer assigns a different connection.
## ProxySQL for MySQL

### Configuration
ProxySQL provides connection pooling and query routing for MySQL. It includes an admin interface for runtime configuration changes.
```bash
# Install ProxySQL
apt-get install proxysql

# Connect to the admin interface
mysql -u admin -padmin -h 127.0.0.1 -P6032
```
The admin interface uses MySQL protocol, so you configure it using SQL statements.
```sql
-- Add MySQL servers
INSERT INTO mysql_servers (hostgroup_id, hostname, port)
VALUES (1, '127.0.0.1', 3306);

-- Add users
INSERT INTO mysql_users (username, password, default_hostgroup)
VALUES ('myapp', 'password', 1);

-- Configure connection pool
UPDATE global_variables SET variable_value = '100'
WHERE variable_name = 'mysql-max_connections';

UPDATE global_variables SET variable_value = '10'
WHERE variable_name = 'mysql-free_connections_pct';

-- Apply changes
LOAD MYSQL SERVERS TO RUNTIME;
LOAD MYSQL USERS TO RUNTIME;
LOAD MYSQL VARIABLES TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
SAVE MYSQL USERS TO DISK;
SAVE MYSQL VARIABLES TO DISK;
```
Changes only take effect after loading to runtime. The separate save step persists them across restarts.
## Pool Sizing

### Calculating Optimal Pool Size
More connections are not always better: beyond a point, extra connections add context-switching overhead and memory pressure on the database server. A good starting point is based on CPU core count.
```
connections = (core_count * 2) + effective_spindle_count

For SSDs, effective_spindle_count ≈ 1, so a 4-core server gets:
connections = (4 * 2) + 1 = 9
```
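The formula above translates directly into a helper; a minimal sketch, treating SSDs as one effective spindle:

```python
def pool_size(core_count: int, effective_spindles: int = 1) -> int:
    """Starting-point formula: (cores * 2) + effective spindle count."""
    return core_count * 2 + effective_spindles

print(pool_size(4))   # 9  (4-core server with an SSD)
print(pool_size(16))  # 33
```

Treat the result as a starting point to benchmark from, not a hard rule.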
### Common Mistakes
A frequent mistake is setting pool sizes too large per application instance. When you have multiple instances, the total connections multiply quickly.
```php
// Bad: pool too large per instance
'max_connections' => 100, // per app server
// With 10 app servers: 100 * 10 = 1000 connections to the database,
// but PostgreSQL's default max is 100!

// Better: size per instance so the whole fleet fits
'max_connections' => 10, // 10 * 10 = 100 total
```
### Monitoring Pool Usage
Regular monitoring helps you right-size your pools and detect issues before they cause outages.
```sql
-- PostgreSQL: current connection count
SELECT count(*) FROM pg_stat_activity;

-- MySQL: connection status
SHOW STATUS LIKE 'Threads_connected';
SHOW STATUS LIKE 'Max_used_connections';

-- PgBouncer stats (run against the admin console, e.g. psql -p 6432 pgbouncer)
SHOW POOLS;
SHOW STATS;
SHOW CLIENTS;
```
## Connection Lifecycle

### Healthy Connection Flow
A well-designed pool validates connections before use and properly cleans them up when returned. This prevents issues with stale or corrupted connection state.
```php
class DatabasePool
{
    public function getConnection(): Connection
    {
        // 1. Check for an available connection in the pool
        $connection = $this->pool->acquire();

        // 2. Validate that the connection is still alive
        if (!$connection->ping()) {
            $this->pool->remove($connection);
            $connection = $this->createNewConnection();
        }

        // 3. Reset state left over from the previous borrower
        $connection->reset();

        return $connection;
    }

    public function releaseConnection(Connection $connection): void
    {
        // 1. Roll back any uncommitted transaction
        if ($connection->inTransaction()) {
            $connection->rollback();
        }

        // 2. Return to pool
        $this->pool->release($connection);
    }
}
```
The connection reset is important because you cannot know what state the previous user left it in. Session variables, transaction state, or temporary tables might cause unexpected behavior.
### Connection Validation
Configure your pool to validate connections, especially for long-lived pools where connections might become stale.
```php
// Validate on borrow
'validation_query' => 'SELECT 1',
'test_on_borrow'   => true,

// Background validation of idle connections
'test_while_idle'            => true,
'time_between_eviction_runs' => 30000, // milliseconds (30 seconds)
```
## Common Issues

### Connection Leaks
Connection leaks occur when code acquires a connection but never returns it to the pool. Over time, the pool exhausts and the application hangs waiting for connections.
```php
// Bad: connection never returned on the early-return path
public function processData()
{
    $connection = DB::connection();
    $data = $connection->select('SELECT * FROM users'); // returns an array

    if (empty($data)) {
        return; // connection leaked!
    }

    // More processing...
}

// Good: use try/finally (or rely on the framework's automatic cleanup)
public function processData()
{
    try {
        $connection = DB::connection();
        $data = $connection->select('SELECT * FROM users');

        if (empty($data)) {
            return;
        }

        // More processing...
    } finally {
        // Release the connection here; Laravel does this
        // automatically at the end of each request.
    }
}
```
Laravel handles connection cleanup automatically at the end of requests, but be careful in long-running processes like queue workers.
### Stale Connections
Connections can become stale when network issues interrupt them or when databases restart. Configure timeouts to close idle connections and reconnect periodically.
```ini
; PgBouncer configuration for stale connections
server_idle_timeout = 600   ; close idle server connections after 10 min
client_idle_timeout = 0     ; don't time out idle clients
server_lifetime = 3600      ; recycle server connections every hour
```
### Connection Exhaustion
When your pool exhausts, new requests must wait or fail. Implement monitoring and alerting to catch this before it affects users.
```php
// Monitor and alert on pool exhaustion
class ConnectionPoolMonitor
{
    public function check(): void
    {
        $stats = $this->getPoolStats();
        $utilization = $stats['active'] / $stats['max'];

        if ($utilization >= 1.0) {
            Log::critical('Connection pool exhausted', $stats);
        } elseif ($utilization > 0.8) {
            Log::warning('Connection pool above 80% capacity', $stats);
        }
    }
}
```
## Kubernetes Deployment

### PgBouncer Sidecar Pattern
In Kubernetes, you can deploy PgBouncer as a sidecar container. The application connects to localhost, and PgBouncer handles the remote connection. This simplifies configuration and provides per-pod pooling.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
    - name: app
      image: myapp:latest
      env:
        - name: DB_HOST
          value: "localhost"   # Connect to the sidecar
        - name: DB_PORT
          value: "6432"
    - name: pgbouncer
      image: pgbouncer/pgbouncer:latest
      ports:
        - containerPort: 6432
      volumeMounts:
        - name: pgbouncer-config
          mountPath: /etc/pgbouncer
  volumes:
    - name: pgbouncer-config
      configMap:
        name: pgbouncer-config
```
### Centralized Pool
For larger deployments, a centralized pool service manages all connections. This provides better resource utilization but adds a network hop.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pgbouncer
spec:
  replicas: 2
  selector:
    matchLabels:
      app: pgbouncer
  template:
    metadata:
      labels:
        app: pgbouncer   # Must match the selector above
    spec:
      containers:
        - name: pgbouncer
          image: pgbouncer/pgbouncer:latest
          resources:
            requests:
              memory: "256Mi"
              cpu: "100m"
            limits:
              memory: "512Mi"
              cpu: "500m"
---
apiVersion: v1
kind: Service
metadata:
  name: pgbouncer
spec:
  selector:
    app: pgbouncer
  ports:
    - port: 6432
      targetPort: 6432
```
Running multiple PgBouncer replicas provides high availability. The service load balances across them.
## Best Practices
- **Size pools conservatively** - more connections are not automatically better
- **Use transaction pooling** - the best default for web applications
- **Monitor pool metrics** - track utilization and wait times
- **Set connection timeouts** - prevent requests from waiting indefinitely
- **Validate connections** - detect and evict stale connections
- **Handle exhaustion gracefully** - queue or reject requests instead of hanging
- **Use external poolers** - for high-scale, multi-instance deployments
## Conclusion
Connection pooling is essential for database performance at scale. Start with application-level pooling, then add external poolers like PgBouncer or ProxySQL as you scale. Size pools based on database capacity, not application demand. Monitor pool utilization and connection wait times to detect issues before they cause outages. The right pool configuration dramatically reduces latency and improves throughput.