Step 7 — Single-server philosophy
25 min
Microservices, Kubernetes, distributed everything — they add complexity. Until one server truly stops being enough, single-server is usually the right answer.
One server isn't small
A t3.medium (2 vCPU, 4 GB) can run:
- Postgres
- Redis
- Backend API
- Frontend
- Caddy
…handling thousands of RPS. Plenty for most side projects and SaaS MVPs.
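The whole stack above fits in one Compose file. A minimal sketch, assuming hypothetical image names, build paths, and credentials — adjust to your project:

```yaml
# docker-compose.yml — illustrative; images, credentials, and paths are placeholders
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: change-me
    volumes:
      - pgdata:/var/lib/postgresql/data
  redis:
    image: redis:7
  api:
    build: ./backend
    depends_on: [postgres, redis]
  frontend:
    build: ./frontend
  caddy:
    image: caddy:2
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
volumes:
  pgdata:
```

One `docker compose up -d` brings the whole machine to life; only Caddy is exposed to the internet.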
Five advantages
- Zero complexity — docker compose up
- 5% the cost — vs Kubernetes
- Easier debugging — all logs on one box
- Smaller attack surface — 1 IP, 3 ports
- Backup is pg_dump + S3 sync
When to scale out
When all three of these are true:
- CPU sustained >80% on weekdays
- DB connection pool regularly saturated
- Restart downtime is hurting the business
If even one is false, stay single-server.
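The CPU criterion can be checked from the shell. A sketch assuming Linux (/proc/loadavg); is_cpu_hot is a hypothetical helper and the 80% threshold is illustrative, not a hard rule:

```shell
# Compare the 15-min load average to 80% of the core count.
# is_cpu_hot is a hypothetical helper, not part of any tool named above.
is_cpu_hot() {
  awk -v l="$1" -v c="$2" 'BEGIN { print ((l > 0.8 * c) ? "yes" : "no") }'
}

# On Linux: feed it the real 15-min load average and core count.
is_cpu_hot "$(cut -d' ' -f3 /proc/loadavg)" "$(nproc)"
```

Run it a few times across the week; one hot afternoon is not "sustained".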
Five patterns to survive on one server
1. healthcheck + restart
restart: unless-stopped
healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost:3000/api/health"]
  interval: 30s
2. Daily backups
docker exec postgres pg_dump -U user myapp | \
gzip | \
aws s3 cp - s3://my-backups/pg/$(date +%Y%m%d).sql.gz
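To make that dump actually run daily, a crontab entry works; note that % must be escaped in cron. Bucket, user, and database names follow the example above:

```
# crontab -e — illustrative schedule: 03:00 daily (% escaped for cron)
0 3 * * * docker exec postgres pg_dump -U user myapp | gzip | aws s3 cp - s3://my-backups/pg/$(date +\%Y\%m\%d).sql.gz

# Occasionally verify a restore works (run manually, against a scratch database):
# aws s3 cp s3://my-backups/pg/YYYYMMDD.sql.gz - | gunzip | docker exec -i postgres psql -U user myapp
```

A backup you have never restored is a hope, not a backup.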
3. External monitoring
UptimeRobot (free tier) pings /api/health every 5 minutes and posts to Slack on failure.
4. Vertical scale first
CPU short? Resize t3.medium → t3.large (reboot). Way simpler than horizontal.
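With the AWS CLI the resize is a short command sequence; the instance ID below is a placeholder, and the commands assume configured credentials:

```
# Sketch of a vertical resize; i-0123456789abcdef0 is a placeholder
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --instance-type Value=t3.large
aws ec2 start-instances --instance-ids i-0123456789abcdef0
```

Expect a few minutes of downtime — acceptable until criterion 3 from the previous section says otherwise.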
5. CDN for static assets
S3 + CloudFront, or Cloudflare. Server load drops 10×.
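Long cache headers on hashed assets let the CDN absorb most static traffic. A hypothetical Caddyfile fragment — domain, paths, and the backend address are placeholders:

```
example.com {
    handle_path /assets/* {
        root * /srv/static
        header Cache-Control "public, max-age=31536000, immutable"
        file_server
    }
    handle {
        reverse_proxy backend:3000
    }
}
```

With `immutable` hashed filenames, the CDN and browsers stop re-requesting files that never change.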
When to go distributed
All three of these:
- 100k+ RPS regularly
- 99.99%+ uptime as a business commitment
- Team has operational distributed-systems experience
Without all three, going distributed usually creates more problems than it solves.
Try it
Bring all your course projects onto one t3.small. Caddy + Docker Compose + Postgres + Redis + your apps — one machine handles it.
Course done
Single server, kept healthy, takes you remarkably far. Continue with security-foundations (coming soon) or backend-with-spring.