The insecure evangelism of LLM maximalists
Business Incentives vs Code Quality
- Several argue that, in practice, companies reward “slop” that ships features and survives QA more than careful craftsmanship.
- Others counter that this isn’t universal: even cash-rich firms produce bad code when internal incentives reward velocity over quality.
- There’s concern that LLMs supercharge this “feature pusher” culture, especially where management measures output by PR count rather than long‑term reliability.
What LLMs Do Well (According to Supporters)
- Rough drafts, boilerplate, CRUD UIs, simple scripts, glue code, and tests.
- Working in unfamiliar stacks or languages, where they can outperform a human novice.
- Search/“digital clerk” tasks: reading docs, comparing options, summarizing specs, rummaging through repos.
- Routine changes in a familiar stack when guided by a skilled engineer and good prompts, sometimes via agentic tools tightly integrated with the codebase.
Where LLMs Fall Short
- Tendency to add unnecessary logic, verbosity, and accidental complexity; mismatch with domain invariants; failure to reuse existing abstractions.
- Poor reliability in novel or intricate domains (custom build rules, concurrency benchmarks, binary formats, hardware interfaces).
- Need for heavy babysitting for small, precise changes; difficulty sticking to specific protocols or formats.
- Inconsistent outputs, even for similar prompts; context window and toolchain quality heavily affect results.
LLM Code as Technical Debt
- Many describe LLM output as “future digital asbestos”: fast to generate, expensive to live with.
- All code is debt, but LLMs can create much more, much faster; some report LLM‑authored sections as the worst debt they maintain.
- Others argue that with solid specs and tests, regenerability can offset debt, especially if LLMs help write and refactor tests too (see the sketch below).
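
To make the regenerability argument concrete, here is a minimal sketch of the “tests as the durable artifact” idea. The function and test names (`slugify`, pytest-style `test_*` functions) are illustrative assumptions, not anything from the thread: the point is that the human-owned tests pin the behavior the team cares about, so the implementation body can be discarded and regenerated as long as those tests keep passing.

```python
# Sketch only: the implementation is treated as disposable; the tests are
# the artifact the team reviews and keeps. Names here are hypothetical.

import re


def slugify(title: str) -> str:
    """Turn an arbitrary title into a lowercase, hyphen-separated slug."""
    # This body is the "regenerable" part: it can be rewritten by a human
    # or an LLM, as long as the tests below still pass.
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)


# Human-owned behavior checks: these encode the spec.
def test_basic_title():
    assert slugify("Hello, World!") == "hello-world"


def test_collapses_whitespace_and_punctuation():
    assert slugify("  LLMs --- and   Debt ") == "llms-and-debt"


def test_empty_input():
    assert slugify("") == ""


if __name__ == "__main__":
    # Runnable directly; pytest will also discover the test_* functions.
    test_basic_title()
    test_collapses_whitespace_and_punctuation()
    test_empty_input()
    print("all behavior checks passed")
```

Under this view, review effort shifts from the generated body to the tests: if the tests are trustworthy, regenerating the implementation is cheap; if they are not, the debt simply moves into the test suite.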
Skill, “Average” Output, and Who Benefits
- Common view: models reproduce something like the average of their training data; below‑average devs gain most, strong devs gain less.
- Some claim frontier models are already “above average” compared to typical industry code; others insist “LLM code is always bad” or at best student‑level.
- Disagreement over corpus quality (elite OSS vs lots of amateur/student code) and whether prompting can reliably pull from the “good” tail.
Evangelism, Insecurity, and Culture War
- Skeptics describe a pattern: maximalists insist LLM coding is the future, imply refusers are fearful, lazy “non‑hackers,” or will be “left behind,” and push mandatory adoption via management.
- Others note the article mirrors this by psychologizing evangelists as insecure or mediocre coders—seen as the same move from the opposite side.
- Multiple comments frame this as another instance of tech tribalism (like language/framework wars, crypto, self‑driving cars).
Learning, Careers, and the Future
- Worry that ubiquitous LLMs will produce generations of “vibe coders” who never learn fundamentals or understand the systems they “build.”
- Some see LLMs as just another tool (like Python vs C, or 3D printers), with real but bounded impact; others think the trajectory (recent model jumps, agentic systems) points to major economic displacement within ~5–10 years.
- Calls to move away from abstract psychoanalysis and instead share concrete workflows, domains where LLMs help or fail, and measurable outcomes.