A $1k AWS mistake
Runaway data transfer & NAT Gateway pricing
- Many commenters note that $1k is “rookie numbers” compared to other AWS bill shocks (e.g. $60k+ and recurring $1k/month mistakes).
- NAT Gateway and egress pricing are seen as extremely high-margin and "toll booth"-like; some call it a racket or dark pattern, especially when traffic never logically leaves AWS's network yet is still metered at NAT/data-transfer rates as if it were internet egress.
- There's debate over scale: one commenter claims "thousands in less than an hour," another counters that per-gateway throughput caps make that unlikely without multiple NAT Gateways across AZs or other services in the mix; but S3/RDS/EC2 cross-region or misrouted transfers can still burn money fast.
- A recurring complaint: same-region EC2→S3 transfer is nominally "free," yet if the traffic is routed through a NAT Gateway instead of a VPC gateway endpoint, every byte incurs per-GB data-processing charges.
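The scale debate above can be checked with back-of-envelope arithmetic. This sketch assumes the common us-east-1 NAT data-processing rate of $0.045/GB and AWS's advertised ~100 Gbps per-gateway scaling ceiling (both are my assumptions, not figures from the thread; regional prices vary):

```python
# Back-of-envelope NAT Gateway burn rate. Assumed: us-east-1 data-processing
# price of $0.045/GB and the advertised ~100 Gbps per-gateway ceiling.

NAT_PROCESSING_PER_GB = 0.045   # USD per GB processed by the NAT Gateway
MAX_GBPS = 100                  # advertised per-gateway scaling limit

def nat_processing_cost(gb: float) -> float:
    """NAT data-processing charge alone; hourly and egress fees come on top."""
    return gb * NAT_PROCESSING_PER_GB

# GB moved in one hour at the ceiling: 100 Gbit/s / 8 bits-per-byte * 3600 s
gb_per_hour = MAX_GBPS / 8 * 3600            # 45,000 GB
print(f"{gb_per_hour:,.0f} GB/h -> ${nat_processing_cost(gb_per_hour):,.0f}/h")
print(f"10 TB through NAT: ${nat_processing_cost(10_000):,.0f}")
```

So "thousands in under an hour" (~$2,000/h) is only reachable at sustained peak throughput on a single gateway — plausible with multiple gateways or big misrouted jobs, which is roughly where the thread lands.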
Service gateways, endpoints, and AWS network design
- Many argue S3 VPC Gateway Endpoints should be created by default since this specific mistake is so common and the endpoint is free.
- Others counter that auto-adding endpoints mutates routing, breaks zero-trust designs, bypasses firewalls/inspection, and conflicts with IAM/S3 policies; VPCs are intentionally minimal and secure-by-default.
- Some propose at least warnings or better UI explaining “this path will incur NAT/data transfer fees,” especially for beginners using click-ops.
- There is friction between those who want infra to exactly match Terraform/IaC definitions and those who’d prefer “smart” defaults that avoid footguns.
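For reference, the fix the "default endpoints" camp wants is a one-call setup. A minimal sketch using boto3's real `create_vpc_endpoint` API — the VPC and route-table IDs in the commented call are hypothetical placeholders:

```python
# Sketch: attach the (free) S3 Gateway endpoint to a VPC so same-region
# EC2 -> S3 traffic bypasses the NAT Gateway entirely.

def s3_gateway_endpoint_request(vpc_id: str, route_table_ids: list, region: str) -> dict:
    """Build the kwargs for ec2.create_vpc_endpoint()."""
    return {
        "VpcEndpointType": "Gateway",      # Gateway (not Interface) endpoints are free
        "VpcId": vpc_id,
        "ServiceName": f"com.amazonaws.{region}.s3",
        "RouteTableIds": route_table_ids,  # routes to S3 prefix lists are injected here
    }

# To actually create it (needs AWS credentials, so left commented):
# import boto3
# ec2 = boto3.client("ec2", region_name="us-east-1")
# ec2.create_vpc_endpoint(**s3_gateway_endpoint_request(
#     "vpc-0abc...", ["rtb-0def..."], "us-east-1"))
```

The route-table mutation in the last parameter is exactly what the zero-trust camp objects to having happen automatically.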
Refunds, hard caps, and billing controls
- Experiences with refunds vary: some got substantial credits after demonstrating alerts and mitigation steps; others say AWS refused outright or required paid support.
- Long, heated debate over hard spending caps:
  - One side: hobbyists and bootstrappers need a "never charge above X" option to avoid personal financial ruin; current delayed alerts are inadequate.
  - Other side: hard caps risk taking down production and causing irrecoverable business loss; overages can be refunded, data loss can't.
- Several suggest opt-in caps, separate caps per cost bucket (e.g. storage vs. usage), or "buffer windows" before shutdown; others note such mechanisms exist in limited form (Budgets + SNS + Lambda) but require DIY plumbing and aren't real-time.
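The DIY "Budgets + SNS + Lambda" mechanism mentioned above might look like the sketch below: a Lambda handler, wired as the SNS target of a budget alert, that stops any instance carrying an `AutoStop=true` tag (the tag name is my own convention, not from the thread). Because budget data lags by hours, this is damage control, not a hard cap:

```python
# DIY "soft cap" sketch: AWS Budgets -> SNS -> this Lambda, which stops
# every running EC2 instance tagged AutoStop=true. Not real-time.

def running_instance_ids(describe_response: dict) -> list:
    """Pure helper: extract running instance IDs from a describe_instances() response."""
    return [
        inst["InstanceId"]
        for reservation in describe_response.get("Reservations", [])
        for inst in reservation.get("Instances", [])
        if inst.get("State", {}).get("Name") == "running"
    ]

def handler(event, context):
    import boto3  # imported lazily so the helper above is testable without the SDK
    ec2 = boto3.client("ec2")
    resp = ec2.describe_instances(
        Filters=[{"Name": "tag:AutoStop", "Values": ["true"]}]
    )
    ids = running_instance_ids(resp)
    if ids:
        ec2.stop_instances(InstanceIds=ids)
    return {"stopped": ids}
```

Note this only covers EC2; NAT-heavy traffic from other services would need its own shutdown path, which is part of why commenters call the DIY route incomplete.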
Cloud vs self‑hosting and cost predictability
- Strong thread arguing hyperscale cloud is overpriced for VMs/storage/bandwidth, especially for small or steady workloads; Hetzner/OVH/VPS or bare metal cited as far cheaper and more predictable.
- Counterpoint: managed services (RDS, EKS, etc.) provide “zero maintenance” and automated recovery that’s hard to replicate; for most non-GPU workloads and regulated environments, AWS-like platforms are seen as worth it.
- Bootstrapped founders express anxiety about uncapped bills and prefer fixed-cost servers even at the price of more ops work.
Complexity, training, and responsibility
- Several say this class of mistake is covered in basic AWS training; the deeper issue is people skipping fundamentals and relying on click-ops or shallow knowledge.
- Others push back: AWS networking/billing is inherently complex, docs can be misleading (e.g., S3 pricing page not clearly calling out the NAT interaction), and expecting every small user to be an expert is unrealistic.
Mitigations and new developments
- Recommended practices: always set up budget alerts, separate NAT costs in Cost Explorer, sketch data paths before large jobs, and use S3/DynamoDB gateway endpoints or IPv6/egress-only gateways instead of NAT where possible.
- Some mention third-party cost tools and open-source NAT replacements (or DIY iptables) as cheaper options.
- Multiple comments highlight AWS’s new flat‑rate CloudFront plans with no overages as a promising step toward predictable pricing, hoping it expands to more services.
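One concrete audit for the endpoint mitigation above: list VPCs that lack an S3 gateway endpoint, i.e. the exact configuration that triggers the footgun. The matching logic is pure Python (testable offline); the boto3 calls that would feed it real data are commented out since they need credentials:

```python
# Audit sketch: which VPCs have no free S3 Gateway endpoint?

def vpcs_missing_s3_endpoint(vpc_ids: list, endpoints: list) -> list:
    """endpoints: dicts shaped like describe_vpc_endpoints()['VpcEndpoints']."""
    covered = {
        ep["VpcId"]
        for ep in endpoints
        if ep.get("VpcEndpointType") == "Gateway"
        and ep.get("ServiceName", "").endswith(".s3")
    }
    return [v for v in vpc_ids if v not in covered]

# Feeding it real data (needs AWS credentials):
# import boto3
# ec2 = boto3.client("ec2")
# vpc_ids = [v["VpcId"] for v in ec2.describe_vpcs()["Vpcs"]]
# endpoints = ec2.describe_vpc_endpoints()["VpcEndpoints"]
# print(vpcs_missing_s3_endpoint(vpc_ids, endpoints))
```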