Coding assistants are solving the wrong problem
Perceived strengths and sweet spots of coding assistants
- Work well on tightly scoped, well-specified tasks: API version upgrades, small features, refactors, unit-test generation, and scripts in unfamiliar languages.
- Enable experienced developers to tackle domains they’d otherwise avoid (e.g., drivers, Android widgets, embedded/HAL code, Rust instead of shell/Python).
- Good for internal tools, one-off prototypes, or artifacts where maintainability and “properness” matter less than “it works right now.”
- Some users treat AI more as a “product owner” or design assistant than as a coder, using it for specs, brainstorming, and test ideas.
Limitations, failure modes, and requirements gaps
- Poor at discovering business-process problems or better workflows; will implement mediocre specs instead of challenging them.
- Tends to guess through requirements gaps instead of escalating them; missing assumptions surface late in review, erasing time “saved.”
- Weak at multi-process reasoning and complex architecture changes without very explicit guidance.
- Multi-agent / swarm approaches are viewed skeptically: impressive code volume, doubtful long-term coherence or maintainability.
- Models forget constraints over long interactions; quality degrades with large contexts, leading some to restart sessions frequently.
Code quality, elegance, and technical debt
- Debate over whether “inelegant” code always harms business value: some stress tech-debt-as-strategy and shipping hacks for speed; others describe products collapsing under accumulated crud.
- Several note that AI can accelerate production of fragile, tightly coupled code, increasing long-term costs and “whack‑a‑hydra” bug patterns.
- Disagreement on what “technical debt” even means; some equate it with misaligned implementation, others with explicitly accepted shortcuts.
Productivity, studies, and review bottlenecks
- Cited studies: experienced developers were ~19% slower with assistants while believing they were faster, and roughly 48% of AI-generated code contained security issues. Some find these results match their experience; others dismiss them as outdated in a fast-moving field.
- Reading and validating generated code is often harder and slower than writing it, especially for nontrivial changes.
- Code review becomes a new bottleneck: more code, same or fewer reviewers; some dread a future where the job is mostly AI code review.
How usage style and developer skill affect outcomes
- Assistants amplify existing skill: strong engineers get more done; weak ones generate more sophisticated errors.
- Effective use often means: plan first, constrain outputs, use strong typing and tests, and treat AI as a fallible collaborator.
- Over-trust—of one’s own mental model or the model’s authority—is called out as a core source of hard-to-find bugs.
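The “plan first, constrain outputs” advice above can be made concrete: type annotations plus a small test suite give generated code a narrow target to hit, so a wrong implementation fails immediately instead of surfacing late in review. A minimal Python sketch; the function, its name, and the edge cases are illustrative assumptions, not examples from the discussion:

```python
from __future__ import annotations


def normalize_scores(scores: list[float]) -> list[float]:
    """Scale scores into [0, 1]; an empty input yields an empty list.

    The type hints and the assertions below act as a contract: a
    generated implementation either satisfies them or fails fast.
    """
    if not scores:
        return []
    lo, hi = min(scores), max(scores)
    if lo == hi:
        # Constant input: define the result explicitly rather than
        # leaving the assistant to guess at this edge case.
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]


# Tests that pin down the edge cases a model tends to gloss over.
assert normalize_scores([]) == []
assert normalize_scores([5.0, 5.0]) == [0.0, 0.0]
assert normalize_scores([0.0, 5.0, 10.0]) == [0.0, 0.5, 1.0]
```

The point is not this particular function but the workflow: write the signature and the failing tests yourself, then let the assistant fill in the body under those constraints.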
Skepticism about hype and broader concerns
- Many see LLMs as powerful next-token predictors doing a “parlor trick,” not true reasoning; good within bounds, dangerous outside them.
- Concern over simulated empathy and compliments increasing misplaced trust.
- Worries about over-indexing on LLMs, centralizing power in a few vendors, and restructuring workflows around tools whose actual benefits remain contested.
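The “next-token predictor” characterization above is often illustrated with a deliberately tiny model: autoregressive generation just repeatedly picks a likely continuation, with no explicit step that checks truth or intent. A toy bigram sketch in Python; the corpus and greedy decoding are illustrative assumptions, vastly simpler than a real LLM but structurally the same loop:

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# The entire "model" is a table of which word follows which.
follows: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1


def generate(start: str, steps: int) -> list[str]:
    """Greedy autoregressive decoding: each step only asks
    'what most often came next?', never 'is this correct?'."""
    out = [start]
    for _ in range(steps):
        options = follows.get(out[-1])
        if not options:
            break  # no observed continuation; stop
        out.append(options.most_common(1)[0][0])
    return out


print(" ".join(generate("the", 4)))
```

Real models replace the count table with billions of learned parameters and condition on long contexts rather than one word, which is why the output is so much better; the skeptics’ point is that the generation loop itself is no different.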