The Cost of "Just Connect"
Your application needs data from PostgreSQL. It opens a connection. What actually happens:
- TCP handshake: 1.5 round trips (SYN → SYN-ACK → ACK). On a 1ms LAN, that's 1.5ms. Cross-region: 50ms+.
- TLS handshake: 1-2 additional round trips for certificate exchange and key agreement. Add 2-4ms on LAN, 100ms+ cross-region.
- Authentication: PostgreSQL exchanges authentication messages (SCRAM-SHA-256 requires 2-3 round trips). Add 3-6ms.
- Connection setup: set timezone, encoding, search_path, and other session parameters. 1-2ms.
Total: 10-30ms on a LAN, 100-300ms cross-region. If your query takes 2ms, the connection setup is 5-15x more expensive than the query itself. If you open a new connection for every request, your database spends more time shaking hands than answering queries.
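The round-trip arithmetic above can be sketched as a small cost model. All per-step round-trip counts here are illustrative assumptions drawn from the breakdown above, not measurements; real costs also include crypto CPU time on both ends.

```python
# Hypothetical round-trip accounting for one brand-new PostgreSQL connection.
def connect_cost_ms(rtt_ms):
    tcp = 1.5 * rtt_ms    # SYN -> SYN-ACK -> ACK
    tls = 2.0 * rtt_ms    # certificate exchange + key agreement
    auth = 2.5 * rtt_ms   # SCRAM-SHA-256 message exchange
    setup = 1.5 * rtt_ms  # session parameters (timezone, encoding, ...)
    return tcp + tls + auth + setup

lan = connect_cost_ms(1.0)            # ~7.5 ms of pure round trips on a 1 ms LAN
cross_region = connect_cost_ms(50.0)  # ~375 ms at a 50 ms cross-region RTT
query_ms = 2.0                        # versus a typical 2 ms query
```

Even this lower bound (ignoring CPU work entirely) dwarfs the 2ms query.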
PostgreSQL forks a new OS process for every connection. At 500 concurrent connections, PostgreSQL has 500 processes competing for CPU and memory and contending for locks. Performance degrades rapidly beyond a few hundred connections.
Connection Pooling: Reuse, Don't Recreate
A connection pool maintains a set of pre-established database connections. When your application needs a connection, it borrows one from the pool. When it's done, it returns it. No handshake, no authentication, no setup. The connection is already warm and ready.
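The borrow/return cycle can be sketched in a few lines. This is a minimal illustration, not a production pool: real pools (HikariCP, psycopg_pool, PgBouncer) add health checks, acquisition timeouts, lazy growth, and connection recycling on top of the same core idea.

```python
import queue

class ConnectionPool:
    """Minimal illustrative pool: a thread-safe queue of warm connections."""

    def __init__(self, create_conn, size):
        self._idle = queue.Queue()
        for _ in range(size):
            self._idle.put(create_conn())  # pay the handshake cost once, up front

    def acquire(self, timeout=None):
        # Block until a warm connection is free -- no handshake, no auth.
        return self._idle.get(timeout=timeout)

    def release(self, conn):
        # Return it warm; no teardown, ready for the next borrower.
        self._idle.put(conn)

# Usage with a stand-in factory (a real app would pass e.g. a DB connect call):
pool = ConnectionPool(create_conn=lambda: object(), size=5)
conn = pool.acquire()
# ... run queries ...
pool.release(conn)
```

Blocking in `acquire` is the point: when the pool is exhausted, callers queue up instead of piling more connections onto the database.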
Figure 1: Connection pooling. Application threads borrow pre-established connections from the pool. The database sees 5 connections instead of 1000, staying well within its capacity.
Pool Sizing: The Most Common Mistake
Developers instinctively set pool size to a large number — "we have 1000 concurrent users, so 1000 connections." This is wrong. PostgreSQL's performance degrades above a certain connection count because of process scheduling, lock contention, and shared buffer management.
The formula from the PostgreSQL wiki: pool_size = (core_count * 2) + effective_spindle_count. For a 4-core server with SSDs: (4 * 2) + 1 = 9 connections. A pool of 9 connections can serve thousands of concurrent application threads because most threads are waiting for I/O, not holding a connection.
HikariCP's documentation: "Contrary to intuition, a connection pool of 10 connections can easily serve 10,000 concurrent users. The key insight is that connections are held for milliseconds per query, not seconds per user session."
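The sizing formula and the throughput claim can both be checked with quick arithmetic. The 5ms hold time below is an assumption for illustration, not a measurement; tune the real number with load tests.

```python
# PostgreSQL wiki sizing heuristic (a starting point, not a law):
def pool_size(core_count, effective_spindle_count=1):
    return core_count * 2 + effective_spindle_count

size = pool_size(4)  # 4-core server with SSDs -> 9 connections

# Why so few connections serve so many users: if each query holds a
# connection for ~5 ms (assumed), one connection completes ~200
# queries/second, so 9 connections sustain ~1,800 queries/second.
hold_ms = 5
throughput_qps = size * (1000 / hold_ms)
```

This is Little's law in disguise: concurrent connections needed = arrival rate x hold time. Shrink the hold time and a tiny pool absorbs enormous concurrency.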
Connection Poolers in Production
- PgBouncer: external connection pooler for PostgreSQL. Sits between your app and Postgres. Supports session-level pooling, transaction-level pooling (connection returned after each transaction), and statement-level pooling (returned after each query). Used by Supabase, Heroku, and most PostgreSQL-as-a-service providers.
- HikariCP: the fastest JDBC connection pool for Java. Default pool in Spring Boot. Zero-overhead design with bytecode-level optimization.
- Prisma: includes a built-in connection pool. Critical for serverless environments where each Lambda invocation would otherwise open a new connection.
- RDS Proxy: AWS's managed connection pooler for RDS and Aurora. Handles connection pooling, failover, and IAM authentication. Essential for Lambda → RDS workloads.
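A minimal PgBouncer setup shows how little configuration transaction-level pooling requires. The hostname, database name, and sizes below are illustrative placeholders; the setting names are real PgBouncer options.

```ini
; Minimal pgbouncer.ini sketch -- values are illustrative, tune for your load.
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
pool_mode = transaction      ; return the server connection after each transaction
default_pool_size = 20       ; server connections per user/database pair
max_client_conn = 1000       ; client connections PgBouncer will accept
```

The asymmetry between max_client_conn and default_pool_size is the whole trick: a thousand clients share twenty real PostgreSQL connections.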
The Serverless Problem
Serverless functions (AWS Lambda, Vercel, Cloudflare Workers) create and destroy execution environments rapidly. Each environment opens its own database connections. With 1000 concurrent Lambda invocations, you get 1000 database connections — far exceeding PostgreSQL's limits (the default max_connections is 100). Connection poolers like PgBouncer and RDS Proxy sit between Lambda and the database, multiplexing thousands of short-lived Lambda connections into a small pool of persistent database connections.
HTTP Connection Pooling
The same principle applies to HTTP. Opening a new HTTPS connection costs TCP handshake + TLS handshake (2-3 round trips). HTTP/1.1's Connection: keep-alive reuses connections. HTTP/2 multiplexes many requests over one connection. But the application still needs a pool to manage these connections — Go's http.Transport, Java's HttpClient, and Python's requests.Session all maintain HTTP connection pools internally.
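Keep-alive reuse is easy to observe with Python's standard library. The sketch below spins up a throwaway local HTTP server (purely for illustration) and sends two requests over one http.client.HTTPConnection; the underlying socket object is the same for both, meaning no second TCP handshake.

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # HTTP/1.1 defaults to keep-alive

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))  # lets the connection stay open
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])

conn.request("GET", "/")
conn.getresponse().read()
sock_after_first = conn.sock

conn.request("GET", "/")
conn.getresponse().read()
sock_after_second = conn.sock

reused = sock_after_first is sock_after_second  # same socket, same TCP connection
conn.close()
server.shutdown()
```

Pooled HTTP clients like Go's http.Transport or Python's requests.Session do exactly this bookkeeping for you, across many hosts and many concurrent requests.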