Escaping surprise bills and over-engineered messes: Why I left AWS
Cloud billing, surprise costs, and lack of hard caps
- Many commenters see “surprise bills” as a real, structural problem across AWS, GCP, and Azure, especially around egress, serverless, and misconfigurations.
- There is frustration that budget tools are alert-only; true hard caps either don’t exist or must be built yourself via billing APIs, and vendors explicitly state that these tools won’t fully protect you from overspend.
- Some argue this is “enshittification”/incentive-driven: providers profit when overages remain possible, so they have little reason to ship hard caps. Others counter that hard caps are genuinely tricky to design (e.g., what happens to storage when the limit is hit: delete data?).
- Several people describe real billing accidents (Azure rescue attempts spawning many disks, GCP free-tier networking surprises), and note that even a $100–$1,000 surprise can be devastating for individuals.
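The “DIY via APIs” approach commenters describe usually means polling a billing endpoint and acting on the result yourself. As a minimal sketch of just the policy side (the thresholds and names are illustrative, not any vendor’s API), the decision logic might look like:

```python
from dataclasses import dataclass

@dataclass
class BudgetStatus:
    spend_usd: float          # month-to-date spend, as reported by the billing API
    cap_usd: float            # the "hard cap" you want to enforce yourself
    alert_ratio: float = 0.8  # warn before pulling the plug

def cap_action(status: BudgetStatus) -> str:
    """Decide what a DIY billing kill-switch should do.

    Returns 'ok', 'alert', or 'shutdown'. In a real setup, 'shutdown'
    would call the provider's API to stop or delete resources; this
    function only encodes the policy the vendor tools don't offer.
    """
    if status.spend_usd >= status.cap_usd:
        return "shutdown"
    if status.spend_usd >= status.alert_ratio * status.cap_usd:
        return "alert"
    return "ok"

print(cap_action(BudgetStatus(spend_usd=42.0, cap_usd=100.0)))   # ok
print(cap_action(BudgetStatus(spend_usd=85.0, cap_usd=100.0)))   # alert
print(cap_action(BudgetStatus(spend_usd=120.0, cap_usd=100.0)))  # shutdown
```

Note that even this can only be a soft cap: cloud billing data typically lags actual usage by hours, so spend can overshoot the limit before the kill-switch fires.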
Alternatives for side projects and low budgets
- Strong consensus: AWS is rarely economical or worth the risk for hobby projects; better to use cheap VPS/bare metal (Hetzner, OVH, IONOS, DigitalOcean, etc.), or prepaid hosts like NearlyFreeSpeech with explicit spend ceilings.
- Lightsail is suggested as a simpler, semi-capped AWS path, but still pricier than bargain VPSes.
- Some run everything from home (NAS/Raspberry Pi/mini PC) behind Cloudflare tunnels or CDNs; a few report HN front-page spikes handled fine this way.
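The Cloudflare-tunnel setup mentioned above avoids exposing a home IP or opening router ports: a local `cloudflared` daemon dials out to Cloudflare, which proxies public traffic back through that connection. A minimal ingress config might look like this (tunnel name, hostname, and port are placeholders):

```yaml
# Illustrative cloudflared config.yml; tunnel and hostname are placeholders.
tunnel: my-homelab-tunnel
credentials-file: /etc/cloudflared/my-homelab-tunnel.json
ingress:
  - hostname: blog.example.com
    service: http://localhost:8080   # the app running on the NAS/Pi/mini PC
  - service: http_status:404         # catch-all for unmatched hostnames
```

Because Cloudflare terminates and caches at the edge, this is also how a modest home box can absorb an HN front-page spike.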
Simplicity vs over-engineering (“most apps fit on a Pi”)
- One camp: most modern web apps (CRUD sites, small shops, blogs) can run on a single modest machine, and people wildly overspec infra due to hype or vendor influence.
- Another camp: that ignores requirements like SLAs, redundancy, and operational reliability; for serious e‑commerce or contractual uptime, single-box setups aren’t enough.
- Debate centers on acceptable downtime: some say an hour or even a week is fine for many businesses; others argue that for revenue-producing or SLA-bound systems, that’s not realistic.
Self-hosting vs cloud: cost, complexity, and skills
- Several argue that traditional HA patterns (multiple machines behind HAProxy, on-prem virtualization, Proxmox, Kubernetes) can be as easy or easier than navigating AWS, especially once you factor in billing and IAM complexity.
- Others insist cloud wins on ease of scaling, blue/green deploys, and per-PR environments, particularly for teams lacking deep ops skills or nearby datacenters.
- A recurring subtext: many developers lack sysadmin/infra expertise, and many orgs separate dev and ops poorly, which amplifies both cloud and self-hosted disasters.
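The “multiple machines behind HAProxy” pattern cited as the traditional alternative is small enough to show in full. A minimal sketch (addresses, ports, and the health-check path are placeholders) with active health checks that take dead backends out of rotation:

```
# Minimal HAProxy sketch: two app servers behind one load balancer.
global
    daemon
    maxconn 2048

defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend web
    bind *:80
    default_backend app

backend app
    balance roundrobin
    option httpchk GET /healthz        # mark servers down if the check fails
    server app1 10.0.0.11:8080 check
    server app2 10.0.0.12:8080 check
```

The argument in the thread is that a config like this, plus two cheap machines, covers the redundancy most small services need without IAM policies or a billing console.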
Serverless and AWS ecosystem: love–hate
- Skeptics note that serverless is marketed as “simpler/cheaper” but often yields lock-in, opaque failures, hard-to-test architectures, and surprise costs once usage grows.
- Supporters report very cheap, low-traffic workloads (Lambda + DynamoDB at cents/month) and successful migrations from fragile, hand-tended “pet” servers to managed services, accepting higher bills in exchange for maintainability and scaling.
- Several stress that complexity never disappears; serverless just moves it from code to configuration and integration, which can be harder to reason about.
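The “cents per month” claim is easy to sanity-check with a back-of-envelope calculation. The rates below approximate published on-demand Lambda pricing at the time of writing (verify against current pricing), and the free tier is deliberately ignored:

```python
# Back-of-envelope for the "Lambda at cents/month" claim.
# Approximate on-demand rates; check current pricing. Free tier ignored.
PRICE_PER_REQUEST = 0.20 / 1_000_000   # USD per invocation
PRICE_PER_GB_SECOND = 0.0000166667     # USD per GB-second of compute

def lambda_monthly_cost(requests: int, avg_ms: float, memory_mb: int) -> float:
    """Estimate monthly Lambda cost for a given invocation profile."""
    gb_seconds = requests * (avg_ms / 1000.0) * (memory_mb / 1024.0)
    return requests * PRICE_PER_REQUEST + gb_seconds * PRICE_PER_GB_SECOND

# A low-traffic hobby API: 100k requests/month, 100 ms average at 128 MB.
cost = lambda_monthly_cost(100_000, 100, 128)
print(f"${cost:.2f}/month")  # roughly $0.04
```

At this scale the bill really does round to cents, which is the supporters’ point; the skeptics’ point is what the same formula produces when traffic grows a few orders of magnitude.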