Human coders are still better than LLMs
Current strengths of LLMs for coding
- Many commenters find LLMs very useful for:
- Boilerplate, rote syntax, shell scripts, small utilities, tests, CSS tweaks, and simple API usage.
- “Template/example generator” and “super-charged Stack Overflow” – faster than searching docs/forums.
- Rubber-ducking: forcing you to explain a problem clearly often surfaces the solution, even when the answer the model gives is wrong or mediocre.
- Getting unstuck in unfamiliar languages/frameworks, or for one-off chores (e.g., quick data analysis, plotting, small ETL tasks).
Key limitations and failure modes
- As projects grow and context deepens, models:
- Lose track of cross-file invariants and produce code that doesn’t compile or fit the architecture.
- Hallucinate APIs, libraries, config options, or entire abstractions that don’t exist.
- “Fix” tests to make them pass instead of fixing underlying code.
- Reasoning and debugging:
- Frequently fail on subtle bugs, complex refactors, or non-trivial design trade-offs.
- Tend to loop between a small set of wrong ideas, even when explicitly told those don’t work.
- They also mislead novices: outputs look polished, so beginners often accept nonsense uncritically.
Human+AI vs AI-alone
- Consensus: today’s best pairing is “strong developer + LLM,” not LLM alone.
- Common mental model: LLMs are like:
- An overeager junior dev or intern: great at grunt work, poor at judgment.
- A “brilliant idiot” or “assertive rubber duck” – useful but never a source of unquestioned truth.
- Several people note that reviewing/steering AI output adds overhead; you save typing but add more design and review work.
Impact on jobs, value, and dignity
- Split views:
- Optimists: tools automate drudgery; humans move up the value chain (architecture, requirements, communication). Productivity gains create more software, not fewer developers.
- Pessimists: many “commodity coders” doing straightforward CRUD/business logic are at real risk; parallels drawn with translation, manufacturing, and offshoring.
- Some resent loss of craft: they enjoy coding itself, not just outcomes, and fear a future where enjoyable work is automated while economic power stays concentrated.
- Others argue the bigger risk is not AI itself but how management uses it (staff cuts, quality collapse, hype-driven decisions).
Code quality, “vibecoding,” and education
- Multiple reports of:
- Engineers pasting in LLM output they don’t understand (“ChatGPT told me to”) leading to bloated, incoherent code and hidden bugs.
- Review burden shifting to senior devs who must police AI-generated PRs.
- Teaching concerns: if learners lean on LLMs from day one, they may never develop core debugging and problem-solving skills.
Are LLMs fundamentally limited or just early?
- One camp: models are “just autocomplete” or pattern matchers; they can’t truly understand or originate novel ideas, so they’ll plateau.
- Another camp:
- Points to rapid gains in coding, math, and reasoning; notes that LLM+tools can in principle be Turing-complete and generate genuinely new code under reward signals.
- Argues that most real-world programming is recombination of known patterns, so even “pattern machines” can be highly competitive.
- Uncertainty acknowledged: progress appears to be slowing in some benchmarks, but many expect further step changes via new architectures, better tooling (agents, tool use, multimodal input), and richer training setups.
Broader analogies and political/societal angles
- Chess, tractors, and looms recur as analogies:
- In chess, humans were better until they suddenly weren’t; something similar may happen in programming.
- Automation historically displaces some workers, creates new roles, and often worsens conditions for those pushed “up the ladder” without support.
- Several argue this is now less a technical question than a political one:
- Will gains fund mass unemployment or more leisure and security (e.g., via social policy, unions, UBI)?
- Without collective action, many expect the benefits to flow primarily to big AI vendors and large incumbents.