I Stopped Using Kubernetes. Our DevOps Team Is Happier Than Ever
Overall reaction & Medium/paywall
- Many call the story unbelievable, embarrassing, or possibly content marketing for AWS or managed services.
- Others appreciate it as a cautionary tale about misusing Kubernetes and overcomplicating infrastructure.
- Significant annoyance about the Medium-style paywall; multiple archive / mirror links shared.
Root cause: misuse and organizational failure
- Common view: the problems came from bad architecture and management, not Kubernetes itself.
- Examples called out: 47 clusters, cluster-per-service, three clouds, five monitoring tools, three logging systems, hundreds of YAML files for “basic” deployments.
- Several people see this as classic resume-driven or “tool first” engineering with little planning, domain expertise, or ops discipline.
- Some suggest the article wrongly shifts blame to the tool instead of owning organizational mistakes.
Kubernetes complexity & appropriateness
- Many argue Kubernetes is powerful but complex and ill-suited for small/medium teams or simple workloads.
- Others say they run many clusters with small teams successfully; the key is expertise, planning, and avoiding unnecessary features.
- Some report k8s feeling like “super powers”; others see it as an unnecessary Rube Goldberg machine or even an “anti-pattern.”
47 clusters and multi-cluster debates
- 47 clusters is widely labeled “insane” and a strong signal of not knowing what they were doing.
- Discussion of legitimate multi-cluster reasons: prod/stage/dev separation, regions, regulatory isolation, stateful vs stateless, single-tenant customers.
- Counterpoint: most of this can be done with one or a few clusters using namespaces, node pools, taints/tolerations, and network policies, though people note compliance and trust issues.
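The consolidation counterpoint can be sketched as a single hypothetical manifest: one shared cluster where a sensitive service gets its own namespace, a dedicated tainted node pool, and a default-deny network policy instead of a whole cluster. All names, the `team=payments` taint, and the image are illustrative, not from the thread:

```yaml
# Hypothetical sketch: isolating a "payments" service inside a shared
# cluster rather than giving it a dedicated cluster.
apiVersion: v1
kind: Namespace
metadata:
  name: payments-prod
---
# Pods tolerate (and select) a dedicated node pool; the matching taint
# (e.g. team=payments:NoSchedule) is applied at node-pool creation.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-api
  namespace: payments-prod
spec:
  replicas: 3
  selector:
    matchLabels: {app: payments-api}
  template:
    metadata:
      labels: {app: payments-api}
    spec:
      tolerations:
        - key: team
          operator: Equal
          value: payments
          effect: NoSchedule
      nodeSelector:
        team: payments
      containers:
        - name: api
          image: example.com/payments-api:1.0
---
# Default-deny ingress so workloads in other namespaces cannot reach
# these pods; specific allow rules would be layered on top.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: payments-prod
spec:
  podSelector: {}
  policyTypes: [Ingress]
```

This covers workload and network isolation but, as commenters note, not hard compliance boundaries, which is where the case for a second (not 47th) cluster starts.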
Costs, DevOps time, and burnout
- Some highlight the absurdity of spending $25k/month on control planes alone while employing eight DevOps engineers to manage a relatively small cloud bill.
- Others note that saving ~$100k/year is only ~2% of an assumed engineering payroll and would not, on its own, justify a full replatform onto another stack.
- Burnout is seen as more related to chaotic management and constant firefighting than to k8s itself.
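The payroll argument above is a back-of-envelope calculation. A minimal sketch, assuming a ~$5M/year engineering payroll (a figure implied by the "~2%" claim, not stated in the article):

```python
# Back-of-envelope check of the cost figures discussed in the thread.
control_plane_monthly = 25_000   # $/month on control planes alone
control_plane_yearly = control_plane_monthly * 12  # $300k/year

claimed_savings = 100_000        # $/year saved by replatforming
assumed_payroll = 5_000_000      # $/year, assumption implied by "~2%"

share = claimed_savings / assumed_payroll
print(f"Control planes: ${control_plane_yearly:,}/year")
print(f"Savings as share of payroll: {share:.1%}")  # 2.0%
```

At that scale, the commenters' point is that the savings are noise next to the cost of eight engineers' time spent on a migration.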
Alternatives & lock-in
- The new stack (ECS/Fargate, EC2 + Docker, AWS Batch, Lambda) is seen as “outsourcing ops” to AWS and trading k8s complexity for vendor lock-in.
- Some endorse ECS/Fargate as “80% of k8s for 20% of the effort”; others warn this doesn’t fix underlying organizational problems.