Step 5
3-layer cache strategy
25 min
A fast layer reduces load on the next. Edge → Redis → PG is the common ladder.
1. Structure
[Browser] ← Cache-Control / ETag
↓
[CDN / Caddy] ← edge cache
↓
[App] ← unstable_cache · Redis
↓
[PG] ← pg-cache · materialized view
2. Edge (Caddy)
example.com {
    reverse_proxy localhost:3000
    header /images/* Cache-Control "public, max-age=31536000, immutable"
    header /api/* Cache-Control "no-store"
}
3. Browser
import { NextResponse } from "next/server";

export async function GET() {
  const data = await loadData(); // your query here
  return NextResponse.json(data, {
    headers: { "Cache-Control": "public, max-age=60, stale-while-revalidate=120" },
  });
}
stale-while-revalidate serves the stale response immediately while revalidating in the background (here, for up to 120 s after max-age expires).
4. App — Next.js unstable_cache
import { unstable_cache } from "next/cache";

export const getTopPosts = unstable_cache(
  async () => db.query("SELECT * FROM posts ORDER BY likes DESC LIMIT 20"),
  ["top-posts"],
  { tags: ["posts"], revalidate: 60 }
);
import { revalidateTag } from "next/cache";

// call after a mutation that changes posts
revalidateTag("posts");
5. App — Redis
async function cachedQuery<T>(key: string, ttl: number, fetcher: () => Promise<T>): Promise<T> {
  const cached = await redis.get(key);
  if (cached) return JSON.parse(cached) as T;
  const fresh = await fetcher();
  await redis.setex(key, ttl, JSON.stringify(fresh));
  return fresh;
}
Shared across app instances; unstable_cache is per-instance memory.
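The read-through flow can be exercised without a Redis server. A minimal sketch, assuming an in-memory stand-in for Redis (the `FakeRedis` class and `loadPost` fetcher are hypothetical, not library APIs):

```typescript
// Hypothetical in-memory stand-in for a Redis client (get/setex subset),
// used only to demonstrate the read-through flow of cachedQuery.
class FakeRedis {
  private store = new Map<string, string>();
  async get(key: string): Promise<string | null> {
    return this.store.get(key) ?? null;
  }
  async setex(key: string, ttl: number, value: string): Promise<void> {
    this.store.set(key, value); // TTL is ignored in this sketch
  }
}

const redis = new FakeRedis();
let dbCalls = 0; // counts how often we hit the "database"

async function cachedQuery<T>(key: string, ttl: number, fetcher: () => Promise<T>): Promise<T> {
  const cached = await redis.get(key);
  if (cached) return JSON.parse(cached) as T;
  const fresh = await fetcher();
  await redis.setex(key, ttl, JSON.stringify(fresh));
  return fresh;
}

async function demo(): Promise<number> {
  const loadPost = async () => { dbCalls++; return { id: 1, title: "hello" }; };
  await cachedQuery("post:detail:1", 60, loadPost); // miss -> hits the DB
  await cachedQuery("post:detail:1", 60, loadPost); // hit -> served from cache
  return dbCalls; // 1
}
```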
6. DB layer
PostgreSQL has no built-in query result cache (MySQL's query cache was removed in 8.0, too). Tune shared_buffers / effective_cache_size, and materialize expensive aggregations:
CREATE MATERIALIZED VIEW mv_daily_stats AS
SELECT date_trunc('day', created_at) AS day, count(*) FROM events GROUP BY 1;

-- with a unique index on (day), REFRESH ... CONCURRENTLY avoids blocking readers
REFRESH MATERIALIZED VIEW mv_daily_stats;
7. Invalidation
| Strategy | Pros | Cons |
|---|---|---|
| TTL | simple | delayed |
| DEL on mutation | instant | easy to miss |
| stale-while-revalidate | great UX | brief staleness |
| webhook (revalidateTag) | precise | more code |
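"DEL on mutation" in practice means every write path must clear the keys it affects. A minimal sketch, assuming a Map-backed cache stands in for Redis (`updatePost`, `getPost`, and the key scheme are hypothetical):

```typescript
// Hypothetical in-memory cache and "database" standing in for Redis and PG.
const cache = new Map<string, string>();
const db = new Map<number, { id: number; title: string }>();

async function updatePost(id: number, title: string): Promise<void> {
  db.set(id, { id, title });          // 1. write to the source of truth
  cache.delete(`post:detail:${id}`);  // 2. invalidate the per-post key
  cache.delete("post:list");          // 3. ...and any derived view that contains it
}

async function getPost(id: number) {
  const key = `post:detail:${id}`;
  const hit = cache.get(key);
  if (hit) return JSON.parse(hit);
  const row = db.get(id);
  cache.set(key, JSON.stringify(row));
  return row;
}
```

The easy-to-miss part is step 3: derived views (lists, counts, search results) embed the mutated row too, which is why this strategy's con is "easy to miss".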
8. Key naming
user:profile:123
post:detail:456:ko
search:results:react:page1
Namespaces ease debugging and pattern deletes.
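A tiny helper keeps the convention consistent across call sites (`cacheKey` is a hypothetical name, not a library API):

```typescript
// Build a namespaced cache key by joining parts with ":".
function cacheKey(...parts: Array<string | number>): string {
  return parts.map(String).join(":");
}

cacheKey("user", "profile", 123);                 // "user:profile:123"
cacheKey("search", "results", "react", "page1");  // "search:results:react:page1"
```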
9. Cache stampede
const pending = new Map<string, Promise<any>>();

async function getCached(key: string, fetch: () => Promise<any>) {
  // another request is already fetching this key -> share its promise
  if (pending.has(key)) return pending.get(key);
  const p = fetch().finally(() => pending.delete(key));
  pending.set(key, p);
  return p;
}
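To see the dedupe guarantee concretely: N concurrent misses on one key trigger exactly one upstream fetch. A self-contained sketch (the `slowFetch` fetcher is hypothetical):

```typescript
const pending = new Map<string, Promise<unknown>>();

function getCached<T>(key: string, fetcher: () => Promise<T>): Promise<T> {
  if (pending.has(key)) return pending.get(key) as Promise<T>;
  const p = fetcher().finally(() => pending.delete(key));
  pending.set(key, p);
  return p;
}

let fetches = 0;
const slowFetch = () =>
  new Promise<string>((resolve) => {
    fetches++; // count upstream hits
    setTimeout(() => resolve("data"), 50); // simulate a slow query
  });

async function demo(): Promise<number> {
  // 10 concurrent requests for the same key while nothing is cached yet
  await Promise.all(Array.from({ length: 10 }, () => getCached("top-posts", slowFetch)));
  return fetches; // 1
}
```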
10. Gotchas
- Same TTL everywhere → synchronized expiry (add jitter)
- Missing variants in keys (lang/user)
- User-specific data in shared unstable_cache
- Redis OOM → set maxmemory-policy (e.g. allkeys-lru)
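For the first gotcha, a small amount of TTL jitter spreads expiry out so keys written together don't all miss at once (`jitteredTtl` is a hypothetical helper):

```typescript
// Return the base TTL +/- up to `spread` (default 10%), so keys cached
// at the same moment expire at slightly different times.
function jitteredTtl(baseSeconds: number, spread = 0.1): number {
  const factor = 1 + (Math.random() * 2 - 1) * spread;
  return Math.max(1, Math.round(baseSeconds * factor));
}

// e.g. await redis.setex(key, jitteredTtl(60), payload);
```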
Closing
Three layers aren't always needed. Scale up the stack as traffic grows.
Next
- 06-kafka-when