Cloud Emulator Stack — Designing a 4th Environment
Most monorepos run on the familiar 3-tier local / dev / prod. What the three share is that the app and its infrastructure are one bundle: the Postgres and Redis we bring up locally and the ones running in prod are both part of "the infrastructure we operate." The moment we depend on an external cloud (Supabase, AWS), this 3-tier composition splits — someone else's infrastructure starts intruding on ours. This piece describes a design that introduces cloud as a fourth environment for that situation.
1. What's the problem
When code that calls external clouds directly grows, the following costs become routine:
- Internet dependency — work stalls on planes, on subways, on overseas trips.
- Quotas / cost — free tiers run out fast. CI hitting them every run accumulates cost.
- Polluted shared resources — test data one person creates appears on someone else's screen.
- Secret exposure — production keys hardcoded in a local `.env` risk staying in git history forever.
The fix splits two ways.
- Mocking — hide SDK calls behind jest mocks. Code behavior gets verified fast, but the actual SDK and HTTP flow are skipped, so problems only surface at integration time.
- Local emulators — run processes like Supabase, LocalStack, or MiniStack locally and just change the endpoint. The SDK, HTTP, and auth flows get verified as is. The cost is container resources.
The cloud environment is the home for the second path.
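For contrast, here is what the first path typically looks like: a minimal jest sketch where the module being mocked and the mock shape are illustrative assumptions, not from the original setup.

```ts
// Path 1, sketched: hide the SDK behind a jest mock.
// Fast, but the real request serialization, HTTP, and auth flow never run.
jest.mock("@aws-sdk/client-s3", () => ({
  S3Client: jest.fn().mockImplementation(() => ({
    send: jest.fn().mockResolvedValue({}), // every call "succeeds" without touching the network
  })),
  PutObjectCommand: jest.fn(),
}));
```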
2. Splitting it as a Separate Environment
We don't put it in the same folder as the existing LOCAL / DEV / PROD compose files. We introduce a fourth folder, infra/cloud/:
- External cloud emulators have a different on-off unit. We bring local infrastructure (DB, Redis) up every day, but bring AWS emulators up only when verifying SES mail.
- Port ranges differ too. LOCAL / DEV / PROD follow the standard / +10000 / +20000 convention, but for AWS emulators it's better for compatibility to keep tool defaults like 4566 as is.
- It's not a deployment target. PROD runs on the real cloud.
3. Independent compose per Emulator
```
infra/cloud/
├── supabase/
│   └── docker-compose.yml   # 14 containers (Auth, REST, Storage, Realtime, Studio, ...)
├── ministack/
│   └── docker-compose.yml   # MiniStack plus its companion Redis
├── localstack/
│   └── docker-compose.yml   # LocalStack alone
├── firebase/
│   └── docker-compose.yml   # Firebase Local Emulator (Auth, Firestore, RealtimeDB, Storage, Pub/Sub)
└── wiremock/
    └── docker-compose.yml   # arbitrary external HTTP API stub
```
The reason we don't merge the five into one compose file is selective startup. When verifying Supabase Auth we bring up only supabase; when doing an AWS S3 multipart PoC we bring up only ministack — saving memory and start time. Compose profiles could do this too, but separating directories is cleaner for READMEs, env vars, and troubleshooting paths.
```bash
cd infra/cloud/supabase && docker compose up -d    # only what we need
cd infra/cloud/ministack && docker compose up -d
```
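As an illustration, a hedged sketch of one such self-contained file, here the WireMock stack; the pinned tag and mount path follow WireMock's published Docker usage, so treat the details as assumptions:

```yaml
# infra/cloud/wiremock/docker-compose.yml — self-contained stub stack (sketch)
services:
  wiremock:
    image: wiremock/wiremock:3.9.1              # pinned tag (assumed)
    ports:
      - "8080:8080"                             # WireMock's tool default
    volumes:
      - ./mappings:/home/wiremock/mappings:ro   # JSON stubs (Kakao Login, FCM, Gemini, ...)
```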
4. Respect Tool Defaults for Ports
LocalStack and MiniStack both default to 4566. boto3, Terraform, and @aws-sdk examples and tutorials across the internet assume that port. Forcing one side onto LOCAL / DEV's +10000 / +20000 convention means converting every time we use external material.
Instead, only shift one side when bringing both up at the same time. One project setup moves only LocalStack's host port to 4666 (the container-internal 4566 stays — keeping SDK compatibility).
| Tool | Host port | Container internal |
|---|---|---|
| MiniStack | 4566 | 4566 |
| LocalStack | 4666 | 4566 |
Supabase keeps its ports bundled in the 54320 range (Kong, DB, Studio, Inbucket), identical to the supabase CLI defaults.
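A hedged compose sketch of the shift in the table above; the image tag is assumed, and only the host side of the mapping moves:

```yaml
# infra/cloud/localstack/docker-compose.yml (sketch)
services:
  localstack:
    image: localstack/localstack:3.8   # pinned tag (assumed)
    ports:
      - "4666:4566"   # host 4666 → container 4566; code inside the network still sees the default
```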
5. Orthogonality with LOCAL / DEV / PROD
No port in cloud may collide with LOCAL / DEV / PROD. That way users can bring up the DB and Redis from infra/local/docker-compose.yml and the MiniStack from infra/cloud/ministack/docker-compose.yml simultaneously.
There's a trap here — MiniStack requires a companion Redis and exposes the default 6379. Local Redis is also 6379. Collision. The fix is simple — don't expose MiniStack's companion Redis to the host. It's for MiniStack container-internal traffic, not for the app to attach to directly.
```yaml
ministack-redis:
  image: redis:7-alpine
  # no ports clause — only reachable inside the container network
```
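For completeness, a sketch of how the MiniStack service itself might reach that Redis over the compose network; the image name and the REDIS_URL variable are pure assumptions, since MiniStack's real config may differ:

```yaml
ministack:
  image: ministack/ministack:latest            # hypothetical image name
  environment:
    REDIS_URL: redis://ministack-redis:6379    # compose service-name DNS, never the host
  ports:
    - "4566:4566"                              # the only port this stack exposes
```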
6. Anti-pattern of Crossing the Host
Cloud emulators each live in their own docker-compose file with their own networks. We must not wire one stack into another across the host, whether through published ports or docker socket mounts. For example, making Supabase Storage use MiniStack S3 as its backend is possible, but it works only when both compose files are up at once. The moment a user shuts ministack down, supabase breaks.
Instead, each stack stays self-contained:
- Supabase Storage uses `STORAGE_BACKEND=file` — saves to its own disk.
- Code that needs MiniStack S3 calls `endpoint=http://localhost:4566` directly.
Cutting dependency between them is what makes "bring up only what's needed" a real promise.
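On the code side, pointing the real SDK at the emulator is a one-line endpoint swap. A minimal sketch with @aws-sdk/client-s3; the bucket name and demo credentials are illustrative:

```ts
import { PutObjectCommand, S3Client } from "@aws-sdk/client-s3";

const s3 = new S3Client({
  endpoint: "http://localhost:4566", // MiniStack default (LocalStack sits on 4666 when both run)
  region: "us-east-1",
  forcePathStyle: true,              // virtual-host-style bucket URLs break on localhost
  credentials: { accessKeyId: "test", secretAccessKey: "test" }, // emulator demo creds
});

// the same client code runs against real S3 once the endpoint override is dropped
await s3.send(new PutObjectCommand({ Bucket: "poc-bucket", Key: "hello.txt", Body: "hi" }));
```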
7. Handling Secrets
The secrets in cloud emulators (e.g., Supabase's JWT_SECRET, ANON_KEY) are local-only demo keys. Not production keys. So baking defaults into .env.example is safe — there's no rotation duty and no exposure risk.
Instead, do one thing — git-ignore .env itself so production secrets never land in cloud's .env. Prevents the accident where someone writes a production key into the same file and commits it.
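A sketch of the split; the JWT_SECRET default mirrors the Supabase self-hosting guide, and the rest is illustrative:

```bash
# infra/cloud/supabase/.env.example — demo-only values, safe to commit
JWT_SECRET=your-super-secret-jwt-token-with-at-least-32-characters-long
ANON_KEY=...   # demo anon JWT from the self-hosting guide, not a production key

# infra/cloud/.gitignore — the real .env never reaches git
.env
```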
8. Combining with Other Environments
| Scenario | What we bring up together |
|---|---|
| Plain local development | LOCAL (DB, Redis) plus host pnpm dev |
| Offline development (external Supabase blocked) | LOCAL plus cloud/supabase |
| AWS S3 multipart PoC | only cloud/ministack (LOCAL not needed) |
| Verifying SES mail routing | only cloud/localstack |
| Stubbing Kakao Login, FCM, Gemini | only cloud/wiremock |
| Verifying Firebase Auth, Firestore | only cloud/firebase |
| Integration tests in CI | LOCAL plus cloud/{supabase,localstack} (only what's needed) |
The principle is simple — the union of environments we need. Moving to the 4-tier composition costs nothing and gains the freedom of small on-off units.
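Spelled out for one row of the table, the offline-development scenario is just two compose files plus the host app; paths follow the layout in section 3:

```bash
docker compose -f infra/local/docker-compose.yml up -d            # LOCAL: DB, Redis
docker compose -f infra/cloud/supabase/docker-compose.yml up -d   # cloud: Supabase emulator
pnpm dev                                                          # app on the host
```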
9. Limits
- Memory footprint — running all five idles at 3.5 GB. The full Supabase setup is the heaviest (3 GB). Depending on laptop spec, we can't always keep them on.
- Version-pinning burden — emulators release fast. Old compose files left alone start showing unhealthy one day. Refresh tags regularly.
- One-machine, one-user assumption — cloud secrets and demo keys assume a single developer PC. To use as shared infra for a team, separate auth and isolation are needed.
- Emulator ≠ real cloud — even when integration tests pass on cloud, run another verification against the real cloud right before deploy. The cloud environment is a fast verification channel, not the final one.
- No FCM — Firebase Local Emulator does not officially support FCM. Push goes through a WireMock stub instead.
Closing thoughts
The 3-tier local / dev / prod was the standard for an era that handled only its own infrastructure. As external cloud dependency grows, a fourth cloud environment is the natural answer. The core is three rules: each emulator self-contained, tool-default ports respected, and small on-off units. With those three together, SDK flows can be tested as is, even on a plane.
Next
- (end of infra)
Refer to Supabase self-hosted, LocalStack, MiniStack, Firebase Local Emulator, and WireMock.