Leaving serverless led to performance improvement and a simplified architecture

Serverless vs. Cloudflare Workers

  • Many commenters argue the problem was not “serverless” generically but Cloudflare Workers’ edge/WASM model: stateless, short‑lived isolates, limited language/runtime, slow remote caches.
  • An analogy is drawn to people building SQL layers on top of NoSQL: they picked the wrong substrate and then fought it.
  • Several note Cloudflare’s newer container offering (and Durable Objects) as a better fit than Workers for stateful, high‑throughput APIs.

When Serverless Fits vs. Misfits

  • Good fits mentioned:
    • Spiky or low, intermittent workloads where scaling to zero saves real money.
    • “Glue” between managed services (e.g., S3 → Lambda → Dynamo) and background ETL.
    • Periodic or batch jobs, back-office pipelines.
  • Bad fits highlighted:
    • Latency‑critical APIs with tight SLOs (sub‑10ms) and heavy state/cache needs.
    • Always‑under‑load services where you effectively run 24/7 anyway.
    • Large uploads through API gateways with hard payload limits, forcing awkward workarounds (presigned URLs, S3 triggers, extra Lambdas); a sketch of this pattern follows the list.
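
To make the upload workaround concrete, here is a minimal sketch (not the article's code) of the presigned‑URL pattern plus the S3 → Lambda → Dynamo "glue" mentioned under good fits. It uses Python with boto3; the bucket and table names and the `filename` query parameter are hypothetical:

```python
import os
import json
import boto3

s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")

# Hypothetical names for illustration; not from the article.
BUCKET = os.environ.get("UPLOAD_BUCKET", "example-uploads")
TABLE = os.environ.get("METADATA_TABLE", "example-upload-metadata")


def get_upload_url(event, context):
    """API endpoint: hand the client a presigned URL so the large
    upload bypasses the API gateway's payload limit entirely."""
    key = event["queryStringParameters"]["filename"]
    url = s3.generate_presigned_url(
        "put_object",
        Params={"Bucket": BUCKET, "Key": key},
        ExpiresIn=3600,  # URL valid for one hour
    )
    return {"statusCode": 200, "body": json.dumps({"upload_url": url})}


def on_object_created(event, context):
    """S3 trigger: once the object lands in the bucket, a second
    Lambda records its metadata in DynamoDB (the S3 -> Lambda ->
    Dynamo glue pattern)."""
    table = dynamodb.Table(TABLE)
    for record in event["Records"]:
        obj = record["s3"]["object"]
        table.put_item(Item={"key": obj["key"], "size": obj.get("size", 0)})
```

Note the moving parts the workaround requires: one function to mint the URL, an S3 trigger, and a second function to record metadata, where a conventional server would simply accept the upload directly.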

Complexity, Operations, and Cost

  • Several argue serverless often increases architecture complexity: many functions, queues, triggers, separate deploy/monitoring paths vs. one monolith on a VM.
  • Others counter that it reduces operational burden relative to managing full infra, especially for small internal systems.
  • Testing and debugging serverless locally (Lambda, Workers) is widely described as painful; mocks and LocalStack often diverge from real cloud behavior (a mock-based test is sketched after this list).
  • Cost views:
    • Extremely cheap for low‑traffic projects (pennies/month).
    • Surprisingly expensive at scale or when misused as a full API layer (rough arithmetic follows the list).
    • Some wish the article had given before/after cost numbers.
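
On the local‑testing pain: a unit test against a mocking library such as moto runs entirely in memory, so it happily passes calls that real AWS might reject (IAM policies, service limits, consistency quirks). A minimal sketch, assuming moto ≥ 5 and its `mock_aws` decorator:

```python
import boto3
from moto import mock_aws  # moto >= 5; older releases exposed mock_s3 etc.


@mock_aws
def test_upload_roundtrip():
    # Runs against moto's in-memory fake of S3: no credentials,
    # no network, and crucially no IAM policy evaluation.
    s3 = boto3.client("s3", region_name="us-east-1")
    s3.create_bucket(Bucket="example-uploads")  # hypothetical bucket

    s3.put_object(Bucket="example-uploads", Key="a.txt", Body=b"hi")

    body = s3.get_object(Bucket="example-uploads", Key="a.txt")["Body"].read()
    assert body == b"hi"
```

The test passes regardless of whether the real bucket policy would allow the write, which is exactly the divergence commenters complain about.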
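On the cost split, rough arithmetic supports both camps. Using AWS Lambda's approximate published list prices at the time of writing (about $0.20 per million requests plus about $0.0000167 per GB‑second on x86, free tier ignored) with invented traffic figures:

```python
# Illustrative only: approximate Lambda x86 list prices; the traffic
# numbers below are made up for the example, and the free tier is ignored.
PER_MILLION_REQUESTS = 0.20       # USD
PER_GB_SECOND = 0.0000166667      # USD


def lambda_monthly_cost(requests, seconds_per_request, memory_gb):
    request_charge = requests / 1e6 * PER_MILLION_REQUESTS
    compute_charge = requests * seconds_per_request * memory_gb * PER_GB_SECOND
    return request_charge + compute_charge


# Low-traffic side project: about $0.10/month, effectively pennies.
print(lambda_monthly_cost(100_000, 0.1, 0.5))        # ~0.10

# Always-under-load API: about $3,533/month, where a handful of
# always-on VMs could often serve the same traffic for far less.
print(lambda_monthly_cost(1_000_000_000, 0.2, 1.0))  # ~3533.33
```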

Containers, VMs, and the “Middle Ground”

  • Many prefer containers on managed platforms (Cloud Run, Fargate, ECS, Knative) as a sweet spot: Docker image as the unit, no cold starts, simpler local dev.
  • Others say a plain VPS or bare‑metal box running a monolithic app with a local cache would handle most business workloads more cheaply and simply.
  • Debate over Docker itself: some see it as the last big productivity win; others see unnecessary overhead for simple Go/Java binaries.

Architecture & Organizational Lessons

  • Core technical lesson: moving compute “closer to the user” while keeping state far away can worsen end‑to‑end latency; colocate services with their data.
  • Network round‑trips to caches/DBs dominate latency; in‑process or in‑DC caches help dramatically (see the sketch at the end of this section).
  • Several see this as a case of underestimating distributed-systems fundamentals and over-trusting vendor marketing.
  • Others appreciate the team’s willingness to share a real misstep, emphasizing that such experience reports are how the community re‑learns the limits of trends like serverless, microservices, and edge.
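
To see why the round‑trips dominate, take illustrative numbers (not from the article): an edge worker 10 ms from the user but 50 ms from the database that makes three sequential cache/DB calls pays roughly 10 + 3×50 = 160 ms, while a server colocated with its data (sub‑millisecond internal round‑trips) pays roughly 50 + 3×1 = 53 ms despite sitting farther from the user. The "in‑process cache" fix only needs a long‑lived process; a minimal sketch:

```python
import time


class TTLCache:
    """Tiny in-process cache: a dict lookup (nanoseconds) instead of a
    network round-trip (milliseconds) for every hot read."""

    def __init__(self, ttl_seconds=30.0):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expires_at, value)

    def get_or_load(self, key, loader):
        now = time.monotonic()
        hit = self._store.get(key)
        if hit is not None and hit[0] > now:
            return hit[1]                    # served from memory
        value = loader(key)                  # the one real round-trip
        self._store[key] = (now + self.ttl, value)
        return value


# Usage sketch; fetch_user_from_db is a hypothetical remote call that
# previously ran on every request:
# cache = TTLCache(ttl_seconds=30.0)
# user = cache.get_or_load(user_id, fetch_user_from_db)
```

Crucially, this only pays off in a long‑lived process; short‑lived serverless isolates discard the dictionary between invocations, which is the mismatch the thread keeps circling back to.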