Orchestrating multiple PostgreSQL pools
An admin platform or back-office monorepo often needs a single app to connect to several domain databases directly: a blog DB, a market DB, an operations-log DB, and an external managed Postgres (Supabase). Here are the things to weigh when choosing direct pool access over HTTP.
1. Why split pools
Reasons not to put everything in one DB.
- Domain isolation — market backup should not block blog editing
- Permissions / accounts — DB-level roles map 1:1 to domain roles
- Capacity / backup cadence — crawler DB daily, blog DB weekly
- Coexistence with external SaaS — Supabase uses its own port band (e.g., 54332), so separation is natural
Splitting by schema (schema=marketplace; schema=blog) is also fine, but pool separation is simpler when you also want different container resource / backup / permission boundaries.
2. Singleton pools — node-postgres
```ts
import { Pool } from 'pg';

export const dmddkslPool = new Pool({
  host: process.env.DMDDKSL_DB_HOST!,
  port: Number(process.env.DMDDKSL_DB_PORT ?? 5432),
  database: process.env.DMDDKSL_DB_NAME!,
  user: process.env.DMDDKSL_DB_USER!,
  password: process.env.DMDDKSL_DB_PASSWORD!,
  ssl: sslConfig(process.env.DMDDKSL_DB_SSL_MODE),
  max: 10,
});
```
Domain-prefixed environment variables keep .env readable. Each pool is a module-level singleton, one per process.
3. Thin query helpers
Raw pool calls have weak type inference. A thin wrapper per pool makes IDE hints · reviews · grep much smoother.
```ts
export async function queryCodingstairs<T>(
  sql: string,
  params: unknown[] = [],
): Promise<T[]> {
  const { rows } = await codingstairsPool.query<T>(sql, params);
  return rows;
}
```
Callers write await queryCodingstairs<Post>(...).
4. Transactions — connect() + try/finally
```ts
const client = await codingstairsPool.connect();
try {
  await client.query('BEGIN');
  const { rows } = await client.query('INSERT ... RETURNING id', [...]);
  // ...
  await client.query('COMMIT');
} catch (e) {
  await client.query('ROLLBACK');
  throw e;
} finally {
  client.release();
}
```
A missing release() is the classic cause of pool starvation. A withPoolClient<T>() wrapper makes it hard to forget.
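A minimal sketch of such a wrapper, using structural types so the snippet stands alone (pg's Pool and PoolClient satisfy them); withPoolClient is a hypothetical name, not a pg API:

```ts
// Structural types standing in for pg's Pool / PoolClient.
interface QueryClient {
  query(sql: string, params?: unknown[]): Promise<unknown>;
  release(): void;
}
interface ClientPool {
  connect(): Promise<QueryClient>;
}

// Checks out a client, runs the callback inside BEGIN/COMMIT,
// rolls back on error, and always releases.
export async function withPoolClient<T>(
  pool: ClientPool,
  fn: (client: QueryClient) => Promise<T>,
): Promise<T> {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    const result = await fn(client);
    await client.query('COMMIT');
    return result;
  } catch (e) {
    await client.query('ROLLBACK');
    throw e;
  } finally {
    client.release(); // always runs, so the connection returns to the pool
  }
}
```

Callers then write `await withPoolClient(codingstairsPool, async (c) => { ... })` and can no longer forget the release.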
5. SSL configuration
- Local / docker-to-docker: no SSL
- Cloud (RDS etc.): rejectUnauthorized: true
- Supabase pooler: rejectUnauthorized: false (a known quirk)
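One possible shape for the sslConfig() helper referenced in section 2 — the mode names ('disable' | 'require' | 'no-verify') are an assumption, not a pg convention:

```ts
// pg's ssl option: false disables TLS; an object enables it.
type SslOption = false | { rejectUnauthorized: boolean };

export function sslConfig(mode?: string): SslOption {
  switch (mode) {
    case 'require':
      return { rejectUnauthorized: true };  // cloud (RDS etc.): verify the cert
    case 'no-verify':
      return { rejectUnauthorized: false }; // Supabase pooler quirk
    case 'disable':
    default:
      return false;                         // local / docker-to-docker
  }
}
```

Defaulting to no SSL keeps local development friction-free; production configs must set the mode explicitly.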
6. Graceful shutdown
```ts
process.on('SIGTERM', async () => {
  await Promise.all(pools.map((p) => p.end()));
  process.exit(0);
});
```
7. Routing rules
- /api/pryzeet/** → pryzeetPool
- /api/codingstairs/** → codingstairsPool
- Cross-cutting (audit log, sessions, FCM tokens) → a single cross-cutting pool
Resisting "one audit_logs per domain" in year one keeps things simple.
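The routing rules above can be sketched as a prefix → pool lookup. resolvePool is a hypothetical helper (generic over the pool type, so it's shown here with plain strings); anything unmapped falls back to the cross-cutting pool:

```ts
// Maps the first path segment under /api/ to a pool; unmatched paths
// get the fallback (the cross-cutting pool in this article's setup).
export function resolvePool<T>(
  path: string,
  byPrefix: Record<string, T>,
  fallback: T,
): T {
  const m = path.match(/^\/api\/([^/]+)\//);
  return (m && byPrefix[m[1]]) ?? fallback;
}
```

In the real app the record would hold Pool instances, e.g. `{ pryzeet: pryzeetPool, codingstairs: codingstairsPool }`, keeping the domain → pool decision in one place instead of scattered across route handlers.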
8. Gotchas
- Typoed env var — DMDDSKL_DB_HOST silently falls through; pg tries localhost and you get confusing errors. Use requireEnv().
- max too low / too high — the default of 10 is often fine for a small admin app; measure with pg_stat_activity before tweaking.
- Long-running admin transactions — migrations or batch inserts that hold a connection for seconds starve other requests. Give admin jobs their own max: 2 pool or a separate worker.
- Forgetting await pool.end() in scripts — a Node process will not exit while a pool holds open connections.
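A minimal requireEnv() along these lines — the name and error message are illustrative:

```ts
// Fail fast at startup instead of letting pg silently fall back to
// localhost when an environment variable name is typoed.
export function requireEnv(name: string): string {
  const value = process.env[name];
  if (value === undefined || value === '') {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}
```

Pool configs then read `host: requireEnv('DMDDKSL_DB_HOST')`, and a typo crashes the process with the variable name in the message rather than producing a confusing connection error later.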
Closing
Multi-pool is more operational overhead than a single DB. But when you have three or more domains each on different lifecycles (crawl backup · blog revalidate · Supabase externally managed), separation actually simplifies. Start with "pools = domains", add 1–2 cross-cutting pools when genuinely needed.
Next
- postgres-first
- postgres-deep
References: node-postgres · Supabase Connection Pooling · AWS RDS Best Practices.