Cheating Is All You Need
Productivity claims and the 80/20 argument
- Central debate: can “LLM writes 80%, human fixes 20%” really yield 5x productivity?
- Critics say work units aren’t equal; the last 20% often contains the hard parts and consumes most effort.
- Others note coding is only a fraction of an engineer’s time, so even large coding speedups amortize into modest overall productivity gains.
- Some report 2–4x speedups on side projects, in rare cases much more, but emphasize this is task‑dependent.
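The "coding is only a fraction of the job" argument is essentially Amdahl's law applied to an engineer's week. A minimal sketch of the arithmetic (the 30% coding share and 5x speedup below are illustrative assumptions, not figures from the thread):

```python
def overall_speedup(coding_fraction: float, coding_speedup: float) -> float:
    """Amdahl's-law style estimate: only the coding share of the job gets
    faster; everything else (design, review, meetings, ops) stays fixed."""
    return 1.0 / ((1.0 - coding_fraction) + coding_fraction / coding_speedup)

# If coding is 30% of the job and an LLM makes it 5x faster,
# the overall gain is far smaller than 5x.
print(round(overall_speedup(0.3, 5.0), 2))  # → 1.32
```

Even a hypothetical infinite coding speedup would cap overall productivity at 1/(1 - coding_fraction), which is the core of the skeptics' objection.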
Verification, debugging, and code quality
- Strong concern that verifying LLM code is harder than writing it, because you must reconstruct intent and logic.
- Reading/debugging unfamiliar code is inherently high mental effort; swapping “writer” for “reviewer of AI output” is not an obvious win.
- Some argue LLMs, when used within their complexity limits, produce code at least as good as weak developers or Stack Overflow cargo‑culting.
- Many fear a wave of low‑quality “AI slop” that overwhelms review capacity and worsens long‑term maintainability.
Cheating, education vs workplace norms
- Heated sub‑thread on whether using LLMs is “cheating,” particularly in university settings where assessment of individual understanding matters.
- In professional contexts, several argue tool choice is morally neutral; only code quality and policy compliance matter.
Where LLMs help most
- Commonly cited wins: autocomplete, boilerplate, small self‑contained functions, test scaffolding, repetitive refactors, summarizing specs/standards, and learning unfamiliar stacks.
- As primary code authors for complex, cross‑file changes, models are seen as flaky and easily overconfident; extensive tests and careful scoping are required.
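To make the "small self-contained function" sweet spot concrete, here is the kind of task commenters describe delegating successfully: a hypothetical helper plus the test scaffolding alongside it (the function and names are illustrative, not from the thread):

```python
import re

def slugify(title: str) -> str:
    """Lowercase a title and collapse runs of non-alphanumerics to hyphens."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

# Test scaffolding of the sort LLMs generate reliably: small, local,
# and easy for a human reviewer to verify at a glance.
def test_slugify():
    assert slugify("Cheating Is All You Need!") == "cheating-is-all-you-need"
    assert slugify("  spaces   and---dashes ") == "spaces-and-dashes"

test_slugify()
```

The point of the example is reviewability: the whole unit fits on one screen, so verifying the output costs less than writing it, which is exactly the condition under which the "reviewer of AI output" trade pays off.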
Maintenance, legacy code, and intent
- Worries that AI‑written code will be poorly understood, with no recorded prompts or intent, amplifying the usual “legacy archaeology” problem.
- Counterpoint: legacy human code is already hard to understand; what matters is good requirements, tests, and review, not who typed the lines.
Economic and labor implications
- Doubts that individual engineers will capture the productivity gains; the more likely outcome is fewer engineers held to the same or higher expectations.
- Some see large future demand for consultants to clean up AI‑generated messes.
Tooling, context, and marketing angle
- Discussion around context windows, RAG, and search over large codebases; consensus that surfacing the right context is crucial.
- Several note the article is from 2023 and functions as promotional material for a code search/AI assistant product.
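Why "surfacing the right context" matters can be illustrated with a toy retrieval step: score candidate snippets against the query and put only the top hits into the prompt. This is a deliberately naive term-overlap sketch, not how any particular product works (real tools use embeddings and code-aware search); all names and snippets are hypothetical:

```python
import re
from collections import Counter

def tokenize(text: str) -> Counter:
    """Crude identifier-level tokenizer over source text."""
    return Counter(re.findall(r"[a-zA-Z_]\w+", text.lower()))

def rank_snippets(query: str, snippets: dict[str, str], k: int = 2) -> list[str]:
    """Toy retrieval: rank snippets by term overlap with the query and
    return the top-k names, i.e. the context worth spending window on."""
    q = tokenize(query)
    scores = {name: sum((tokenize(body) & q).values())
              for name, body in snippets.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

snippets = {
    "auth.py": "def verify_token(token): ...",
    "billing.py": "def charge_card(card, amount): ...",
    "search.py": "def index_document(doc): ...",
}
print(rank_snippets("fix token verification bug", snippets, k=1))  # → ['auth.py']
```

The design point is the consensus one: with a fixed context window, retrieval quality, not model quality alone, determines whether the assistant sees the code that actually matters.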