The economics of software teams: Why most engineering orgs are flying blind

LLM Agents, Code Quality, and Maintainability

  • Many commenters reject the idea that messy, AI-generated code is fine if agents will maintain it.
  • Reports of AI-only projects stalling: agents get stuck, make wrong changes, and can’t recover without heavy human guidance.
  • Even when agents pass tests, code can be structurally unsound: “walls made of foam” analogies; defensive, contradictory logic that destroys invariants.
  • Consensus that good human practices (modularity, types, documentation, tests) matter even more with agents; rewrites may get faster, but not easier or safer.

Economics of Teams, ROI, and Cost Awareness

  • Strong agreement that most engineering orgs underthink ROI: huge effort goes into internal tools or features that save minimal time or reach few users.
  • Some find “back-of-the-envelope” cost math (e.g., three weeks on a 2% feature = tens of thousands of euros) clarifying and underused.
  • Others argue precise dollar attribution per feature/team is often impossible or misleading, especially in multi-feature, multi-team products.
  • Several note indirect and strategic value: reliability, compliance, support load, retention, and option value rarely show up in simple ROI formulas.
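The cost math several commenters found clarifying can be sketched in a few lines. The figures below (headcount, duration, loaded weekly cost) are illustrative assumptions, not numbers from the discussion:

```python
# Illustrative back-of-the-envelope feature cost: headcount x duration x loaded cost.
# All inputs are assumptions for the sketch; plug in your own org's numbers.

def feature_cost(engineers: int, weeks: float, loaded_cost_per_week: float) -> float:
    """Rough cost of a feature in currency units (loaded cost includes salary,
    benefits, overhead -- typically well above base pay)."""
    return engineers * weeks * loaded_cost_per_week

# e.g. 2 engineers spending 3 weeks, at an assumed ~4,000 EUR loaded cost
# per engineer-week -- already in the "tens of thousands" range
cost = feature_cost(engineers=2, weeks=3, loaded_cost_per_week=4_000)
print(f"{cost:,.0f} EUR")  # 24,000 EUR
```

The point of the exercise is not precision but scale: even crude numbers make it obvious when weeks of effort are going into a feature that moves a 2% metric.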

Platform Teams, Internal Tools, and Indirect Value

  • Skepticism about treating platform teams purely as “time savers”; they also provide reliability, security, standardization, and risk reduction.
  • Platform and internal tools are likened to shared infrastructure or admin overhead; the real choice is centralizing vs. duplicating effort, not just counting “hours saved”.

Slack Clone and LLM Productivity Claims

  • Widespread dismissal of the “95% Slack replica in 14 days” as conflating UI-level cloning with enterprise-grade product (scale, compliance, legal holds, SSO, APIs, mobile, etc.).
  • Historical analogies: many “lightweight clones” of complex products (e.g., word processors, Twitter) failed despite feature overlap.

Technical Debt, Metrics, and Organizational Blindness

  • Many agree organizations “fly blind” mainly by ignoring technical debt, reliability, and maintenance costs until productivity collapses.
  • Suggested metrics for debt/liability: time to ship, change failure rate, rework, MTTR, dependency age, complexity, churn.
  • Others caution against over-reliance on numbers: risk of Taylorism, Goodhart’s law, and defending pet projects with dubious calculations.
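Two of the suggested metrics (change failure rate and MTTR) are straightforward to compute from a deployment log. A minimal sketch, assuming hypothetical records of the form `(deployed_at, caused_incident, time_to_restore)`:

```python
from datetime import datetime, timedelta

# Hypothetical deployment log: (deployed_at, caused_incident, time_to_restore)
deployments = [
    (datetime(2024, 1, 3), False, None),
    (datetime(2024, 1, 7), True, timedelta(hours=4)),
    (datetime(2024, 1, 12), False, None),
    (datetime(2024, 1, 20), True, timedelta(hours=1)),
]

failures = [d for d in deployments if d[1]]

# Change failure rate: share of deployments that caused an incident
change_failure_rate = len(failures) / len(deployments)

# MTTR: mean time to restore service across failed deployments
mttr = sum((d[2] for d in failures), timedelta()) / len(failures)

print(f"CFR: {change_failure_rate:.0%}")  # CFR: 50%
print(f"MTTR: {mttr}")                    # MTTR: 2:30:00
```

As the cautionary bullets note, once numbers like these become targets they invite gaming (Goodhart’s law); they are most useful as trend indicators, not performance scores.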

What’s Actually Hard in Software

  • Strong theme: the hardest part is understanding what to build and iterating toward the right solution, not keystrokes.
  • Some push back, noting that for non-trivial domains, designing workable architectures and algorithms is itself very hard, even when requirements are clear.
  • LLMs may speed up iteration and prototyping, but they don’t remove the need for human judgment, product sense, and responsibility.