TechLead
Lesson 16 of 22
5 min read
Performance Engineering

Connection Pooling

Manage database and HTTP connections efficiently with pooling, keep-alive, and connection lifecycle management

Why Connection Pooling Matters

Establishing a new database connection involves TCP handshake, TLS negotiation, and authentication — typically 20-50ms of overhead. Without connection pooling, every database query pays this cost. A connection pool maintains a set of pre-established connections that are reused across requests, eliminating connection overhead and preventing the database from being overwhelmed by too many concurrent connections.
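To make that cost concrete, here is a quick back-of-envelope sketch, assuming ~30ms per handshake (the midpoint of the range above) and a hypothetical 500 queries per second:

```typescript
// Back-of-envelope: cumulative handshake latency paid without pooling,
// assuming ~30 ms of connect overhead and 500 queries/second (illustrative numbers)
const connectOverheadMs = 30;
const queriesPerSecond = 500;

// Every second, this much pure connection overhead is spread across requests
const wastedMsPerSecond = connectOverheadMs * queriesPerSecond;

console.log(`${wastedMsPerSecond} ms of connection overhead per second`); // 15000 ms
// With a pool, that cost is paid once per connection, not once per query.
```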

Connection Pooling Benefits

  • Eliminated connection overhead: Reuse existing connections instead of creating new ones (saves 20-50ms per query)
  • Controlled concurrency: Limit max connections to prevent database overload
  • Better resource utilization: Connections are shared across requests efficiently
  • Resilience: Pool handles reconnection and health checking automatically
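
The mechanics behind these benefits fit in a few dozen lines. The sketch below is a toy fixed-size pool — `Conn` and `createConn` are illustrative stand-ins, not a real driver API — that reuses idle connections, grows up to a cap, and queues callers once the cap is hit:

```typescript
// A minimal connection pool sketch: reuse idle connections, cap total
// connections, and make callers wait when the pool is exhausted.
type Conn = { id: number };

class SimplePool {
  private idle: Conn[] = [];
  private waiting: ((c: Conn) => void)[] = [];
  private total = 0;

  constructor(private max: number, private createConn: () => Conn) {}

  acquire(): Promise<Conn> {
    const conn = this.idle.pop();
    if (conn) return Promise.resolve(conn); // reuse an idle connection
    if (this.total < this.max) {
      this.total++; // grow the pool up to the cap
      return Promise.resolve(this.createConn());
    }
    // pool exhausted: queue until someone releases
    return new Promise((resolve) => this.waiting.push(resolve));
  }

  release(conn: Conn): void {
    const next = this.waiting.shift();
    if (next) next(conn); // hand the connection straight to a waiter
    else this.idle.push(conn);
  }

  get stats() {
    return { total: this.total, idle: this.idle.length, waiting: this.waiting.length };
  }
}

// Demo: two connections fit, the third caller queues
let nextId = 0;
const demoPool = new SimplePool(2, () => ({ id: ++nextId }));
void demoPool.acquire();
void demoPool.acquire();
void demoPool.acquire(); // queued
console.log(demoPool.stats); // { total: 2, idle: 0, waiting: 1 }
```

Real pools (pg, undici) add health checks, timeouts, and reconnection on top of exactly this acquire/release core.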

Database Connection Pooling

// Prisma with connection pooling configuration
// .env
// DATABASE_URL="postgresql://user:pass@host:5432/db?connection_limit=20&pool_timeout=10"

// For serverless environments, use Prisma Data Proxy or PgBouncer
// DATABASE_URL="prisma://accelerate.prisma-data.net/?api_key=..."

// schema.prisma
// datasource db {
//   provider = "postgresql"
//   url      = env("DATABASE_URL")
//   directUrl = env("DIRECT_URL")  // For migrations
// }

// Prisma client singleton with proper pooling
// lib/prisma.ts
import { PrismaClient } from '@prisma/client';

const globalForPrisma = globalThis as unknown as { prisma: PrismaClient };

export const prisma =
  globalForPrisma.prisma ||
  new PrismaClient({
    log: process.env.NODE_ENV === 'development'
      ? ['query', 'error', 'warn']
      : ['error'],
  });

if (process.env.NODE_ENV !== 'production') {
  globalForPrisma.prisma = prisma;
}
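
The `globalThis` caching above is worth isolating: development hot reloads re-evaluate modules, and without the cache each reload would open a fresh connection pool. The helper below (`getSingleton` is illustrative, not a Prisma API) shows the trick on its own:

```typescript
// Cache one instance on globalThis so repeated module evaluation
// (e.g. a dev-server hot reload) reuses it instead of re-creating it.
function getSingleton<T>(key: string, create: () => T): T {
  const g = globalThis as unknown as Record<string, unknown>;
  if (g[key] === undefined) {
    g[key] = create();
  }
  return g[key] as T;
}

// Simulate a module being evaluated twice by a hot reload
let clientsCreated = 0;
const first = getSingleton('demoClient', () => ({ id: ++clientsCreated }));
const second = getSingleton('demoClient', () => ({ id: ++clientsCreated }));
console.log(clientsCreated, first === second); // 1 true
```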

// Direct pg pool for advanced use cases
import { Pool } from 'pg';

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 20,                // Maximum connections in pool
  min: 5,                 // Minimum idle connections
  idleTimeoutMillis: 30000, // Close idle connections after 30s
  connectionTimeoutMillis: 5000, // Timeout waiting for a connection
  maxUses: 7500,          // Close and recreate after this many queries
});

// Monitor pool health
pool.on('connect', () => console.log('Pool: new connection'));
pool.on('error', (err) => console.error('Pool error:', err));

// Pool statistics
function getPoolStats() {
  return {
    total: pool.totalCount,
    idle: pool.idleCount,
    waiting: pool.waitingCount,
  };
}
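
Raw counts are easier to alert on once turned into derived metrics. A hedged helper built on the stats shape above (the exact thresholds are assumptions — tune them to your pool's `max`):

```typescript
// Derive an active-connection count, a utilization ratio, and a
// saturation flag from the pool stats shape returned by getPoolStats()
interface PoolStats {
  total: number;
  idle: number;
  waiting: number;
}

function assessPool(stats: PoolStats, max: number) {
  const active = stats.total - stats.idle;
  return {
    active,
    utilization: max > 0 ? active / max : 0,
    // waiting callers are the clearest saturation signal
    saturated: stats.waiting > 0 || (active >= max && stats.idle === 0),
  };
}

console.log(assessPool({ total: 20, idle: 2, waiting: 3 }, 20));
// { active: 18, utilization: 0.9, saturated: true }
```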

HTTP Connection Management

// HTTP keep-alive and connection reuse
import { Agent } from 'http';
import { Agent as HttpsAgent } from 'https';

// Create reusable HTTP agents with keep-alive
const httpAgent = new Agent({
  keepAlive: true,
  maxSockets: 50,        // Max concurrent connections per host
  maxFreeSockets: 10,    // Keep 10 idle connections alive
  timeout: 30000,        // Socket timeout
  keepAliveMsecs: 60000, // TCP keep-alive probe interval
});

const httpsAgent = new HttpsAgent({
  keepAlive: true,
  maxSockets: 50,
  maxFreeSockets: 10,
  timeout: 30000,
  keepAliveMsecs: 60000,
});

// Note: Node's built-in fetch (18+) is powered by undici and ignores the
// classic `agent` option — pass an undici `dispatcher` instead. The agents
// above still apply to http.request, axios, node-fetch, and similar clients.
import { Agent as UndiciAgent } from 'undici';

const keepAliveDispatcher = new UndiciAgent({
  keepAliveTimeout: 30000,
  connections: 50,
});

async function fetchWithPool(url: string, options?: RequestInit) {
  return fetch(url, {
    ...options,
    // @ts-expect-error — `dispatcher` is a Node/undici extension to RequestInit
    dispatcher: keepAliveDispatcher,
  });
}

// Undici for even better performance (it powers Node's fetch; install the
// undici package to use its pool directly)
import { Pool as UndiciPool } from 'undici';

const apiPool = new UndiciPool('https://api.example.com', {
  connections: 20,         // Max concurrent connections
  pipelining: 10,          // HTTP pipelining depth
  keepAliveTimeout: 30000,
  keepAliveMaxTimeout: 60000,
});

async function apiRequest(path: string): Promise<any> {
  const { statusCode, body } = await apiPool.request({
    path,
    method: 'GET',
    headers: { accept: 'application/json' },
  });

  if (statusCode >= 400) {
    await body.dump(); // drain the body so the connection returns to the pool
    throw new Error(`API request failed with status ${statusCode}`);
  }

  return body.json();
}

Serverless Connection Management

// In serverless environments, connection pooling needs special handling
// because each function invocation may create a new connection

// Solution 1: External connection pooler (PgBouncer, Prisma Accelerate)
// PgBouncer sits between your app and database, managing a shared pool
// DATABASE_URL="postgresql://user:pass@pgbouncer-host:6432/db?pgbouncer=true"

// Solution 2: Prisma Accelerate (managed connection pooler)
// Handles pooling, caching, and global edge access
// DATABASE_URL="prisma://accelerate.prisma-data.net/?api_key=YOUR_KEY"

// Solution 3: Connection warming in serverless
// Establish the connection outside the handler so warm invocations reuse it
// (`createConnection` and `ended` stand in for your driver's actual connect
// call and liveness check — substitute your client library's API)
let dbConnection: any = null;

async function getConnection() {
  if (dbConnection && !dbConnection.ended) {
    return dbConnection; // warm invocation: reuse the existing connection
  }

  dbConnection = await createConnection({
    host: process.env.DB_HOST,
    database: process.env.DB_NAME,
  });

  return dbConnection;
}

// Monitor connection pool exhaustion (uses getPoolStats from the pg pool above)
setInterval(() => {
  const stats = getPoolStats();
  if (stats.waiting > 0) {
    console.warn(`Connection pool pressure: ${stats.waiting} requests waiting`);
  }
  if (stats.idle === 0 && stats.total >= 20) { // 20 = the pool's configured max
    console.error('Connection pool exhausted!');
  }
}, 10000);

Connection Pooling Best Practices

  • Always pool connections: Never create a new connection per request
  • Size pools appropriately: Match pool size to expected concurrency and database limits
  • Use external poolers for serverless: PgBouncer or Prisma Accelerate for serverless deployments
  • Enable keep-alive for HTTP: Reuse TCP connections for external API calls
  • Monitor pool metrics: Track active, idle, and waiting connections
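
For "size pools appropriately", one common starting point is the rule of thumb popularized by the PostgreSQL wiki and HikariCP guidance: connections ≈ (CPU cores × 2) + effective disk spindles. Treat it as a baseline to load-test against, not a law:

```typescript
// Rule-of-thumb pool sizing: (cores * 2) + effective spindles.
// On SSD/cloud storage, effectiveSpindles of 1 is a reasonable default.
function suggestedPoolSize(cpuCores: number, effectiveSpindles = 1): number {
  return cpuCores * 2 + effectiveSpindles;
}

console.log(suggestedPoolSize(8));    // 17
console.log(suggestedPoolSize(4, 2)); // 10
```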
