The new calculus of AI-based coding
Claims of 10x Productivity and Metric Skepticism
- Many commenters doubt the “10x throughput” claim, noting the lack of concrete data beyond a dense git-commit heatmap.
- People point out that commit counts and lines of code are weak productivity proxies that can incentivize bloated, low‑value code.
- Some argue the post is effectively marketing/hype that leadership will misinterpret as a generalizable promise, creating unrealistic expectations and pressure to “use AI or else.”
AI Code Quantity vs Quality and Testing
- Several see “we need stronger testing to handle increased velocity” as tacit admission that AI is generating far more broken code.
- Others note that the testing practices described (mocks, failure injections, tighter feedback loops) are not new, just being reframed as novel in an AI context.
- There’s concern that setting up robust test harnesses and environments may cost as much as solving the original problem, eroding claimed gains.
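The harness patterns the thread mentions — mocks and failure injection — can be sketched in a few lines. This is a minimal illustration, not anything from the original post; `fetch_with_retry` and the flaky client are hypothetical:

```python
import unittest
from unittest import mock

def fetch_with_retry(client, url, attempts=3):
    """Hypothetical helper: retry a flaky network call a fixed number of times."""
    last_err = None
    for _ in range(attempts):
        try:
            return client.get(url)
        except ConnectionError as err:
            last_err = err
    raise last_err

class FailureInjectionTest(unittest.TestCase):
    def test_recovers_after_two_injected_failures(self):
        client = mock.Mock()
        # Inject two failures, then a success, to exercise the retry path.
        client.get.side_effect = [ConnectionError("down"), ConnectionError("down"), "ok"]
        self.assertEqual(fetch_with_retry(client, "https://example.test"), "ok")
        self.assertEqual(client.get.call_count, 3)

unittest.main(argv=["inline"], exit=False)
```

None of this is AI-specific, which is the commenters’ point: the same harness work is required either way, and it is not free.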
TDD / Spec-Driven Futures and “Code is Worth $0”
- A long subthread debates the idea that with AI, code itself is worthless and the future is pure TDD: humans write tests/specs, AI writes all code.
- Critics argue:
  - Writing comprehensive tests/specs is often harder than writing the code.
  - Passing tests doesn’t imply correctness, security, performance, or maintainability.
  - Regenerating entire codebases from tests is risky and operationally fraught.
- A few suggest moving toward spec- or property-based development, or functional styles that constrain context and make AI-generated components easier to reason about.
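Property-based development, as suggested, checks spec-level invariants over many generated inputs rather than a handful of examples. A stdlib-only sketch (avoiding any dependency on a library like Hypothesis), with a hypothetical `normalize_whitespace` standing in for an AI-generated component:

```python
import random

def normalize_whitespace(s: str) -> str:
    """Hypothetical AI-generated component: collapse whitespace runs to single spaces."""
    return " ".join(s.split())

def check_properties(trials=200, seed=0):
    rng = random.Random(seed)
    alphabet = "ab \t\n"
    for _ in range(trials):
        s = "".join(rng.choice(alphabet) for _ in range(rng.randrange(30)))
        out = normalize_whitespace(s)
        # Invariants a spec could state without reference to the implementation:
        assert "  " not in out                   # no double spaces survive
        assert out == normalize_whitespace(out)  # idempotent
        # non-whitespace content is preserved
        assert out.replace(" ", "") == "".join(ch for ch in s if not ch.isspace())
    return trials

check_properties()
```

The appeal for AI-generated code is that the properties, not the examples, carry the intent, so a regenerated implementation is held to the same contract.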
Maintainability, Comprehension, and Security
- Multiple commenters fear “unknowable” AI codebases: developers won’t understand internals, making debugging, incident response, and security review harder.
- Security people anticipate more vulnerabilities in code nobody truly understands, and joke that “re‑gen the code” won’t fix systemic issues.
- Some share experiences where AI quietly duplicated inconsistent logic or hacked around tests instead of implementing coherent behavior.
Process, Culture, and Limits of AI
- Several say the real bottleneck is not typing code but understanding domains, requirements, and architecture; AI doesn’t fix that.
- There’s criticism of “agentic” workflows and “steering rules” as fragile and probabilistic, with agents drifting away from the rules over long sessions.
- A minority report strong personal success with AI (especially in CRUD and functional-style code), but even they frame it as powerful assistance, not autonomous software engineering.