The AI coding trap
How people actually use coding LLMs
- Many experienced engineers say they start with “don’t write code yet”, using agents for planning, architecture discussion, and design docs before any edits.
- LLMs are used as scaffolding tools: generating stubs, boilerplate, build/test wiring, and framework setup, then refined by humans.
- Several workflows rely on “plan mode” or similar: get a multi-step plan, negotiate it, then let the agent execute with gated approvals or on a separate branch.
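The "plan mode" workflow above can be sketched as a loop that gates every proposed step behind reviewer approval. This is a minimal illustration, not any real agent's API: `Step`, `execute_plan`, and the toy plan are all invented names.

```python
# Sketch of a gated "plan mode" loop: the agent proposes steps, and each
# step runs only if an approval callback says yes. All names here are
# illustrative assumptions, not a real agent framework's API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    description: str
    action: Callable[[], str]  # the edit/command the agent wants to run

def execute_plan(steps: list[Step], approve: Callable[[Step], bool]) -> list[str]:
    """Run only the steps the reviewer approves; log skipped ones."""
    log = []
    for step in steps:
        if approve(step):
            log.append(f"ran: {step.description} -> {step.action()}")
        else:
            log.append(f"skipped: {step.description}")
    return log

plan = [
    Step("add failing test", lambda: "test_added"),
    Step("rewrite module in place", lambda: "rewritten"),  # risky step
]

# Example policy: approve everything except in-place rewrites, which a
# human might instead route to a separate branch for review.
log = execute_plan(plan, approve=lambda s: "rewrite" not in s.description)
```

In interactive tools the `approve` callback is a human clicking accept/reject per step; the same shape supports auto-approval policies for low-risk actions.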
Context, memory, and planning
- A recurring frustration is lack of true, persistent memory beyond the context window.
- People use workarounds: AGENTS.md/CONVENTIONS.md files, “memory banks” or JSON fact stores, RAG, and regular summarization, but all are seen as lossy.
- Some argue that remembering everything across sessions would require a different architecture or an “engineering breakthrough”, not just larger contexts.
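The "memory bank" workaround mentioned above can be as simple as a JSON fact store the agent re-reads at session start. The schema and helper names below are assumptions for illustration; real setups range from AGENTS.md files to full RAG pipelines, and all remain lossy.

```python
# Sketch of a JSON "memory bank": facts persist across sessions on disk,
# and the whole store is serialized back into the next prompt. The schema
# (flat key -> fact strings) is an assumption, not a standard format.
import json
import tempfile
from pathlib import Path

def remember(store: Path, key: str, fact: str) -> None:
    """Add or overwrite one fact in the store."""
    facts = json.loads(store.read_text()) if store.exists() else {}
    facts[key] = fact
    store.write_text(json.dumps(facts, indent=2))

def recall(store: Path) -> str:
    """Serialize all stored facts for injection into a session's context."""
    if not store.exists():
        return "(no stored facts)"
    facts = json.loads(store.read_text())
    return "\n".join(f"- {k}: {v}" for k, v in facts.items())

store = Path(tempfile.mkdtemp()) / "memory_bank.json"
remember(store, "test_runner", "use `pytest -q`, not unittest")
remember(store, "style", "no one-letter variable names")
context = recall(store)
```

The lossiness critique applies directly: anything not explicitly written to the store is forgotten, and the store itself still competes for context-window space.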
Productivity gains and limits
- One camp reports very large speedups (10x–100x for certain tasks), especially for prototypes and one-off tools they’d never have built otherwise.
- Others find net gains small or even negative once prompt crafting, review, and debugging time are factored in, especially on complex, long-lived systems.
- There's disagreement over whether LLMs meaningfully accelerate the "thinking" phase or mainly the typing/boilerplate part.
Code quality, debugging, and maintainability
- LLMs are compared to a “highly buggy compiler” that often reports success while being wrong, with opaque failure modes.
- “Vibe coding” (letting agents freely generate large swaths of code) is widely criticized for producing messy, duplicated, inconsistent code and long-term tech debt.
- Systematic use (small tasks, strong typing, strict tests, e2e suites, constraint-based specs) is seen as necessary to keep quality under control.
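The "constraint-based specs" approach above amounts to pinning the agent down with types and strict tests before it writes a body. A minimal sketch, with an invented task (order-preserving dedupe) standing in for a real small task:

```python
# Sketch of the "systematic" workflow: the human writes the typed signature
# and the assertions first; generated code is accepted only if it passes.
# The task here (dedupe preserving first occurrence) is an invented example.
def dedupe(items: list[str]) -> list[str]:
    """Spec the agent must satisfy: keep first occurrences, preserve order."""
    seen: set[str] = set()
    out: list[str] = []
    for item in items:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out

# The constraint-based spec: these gate any generated change to the body.
assert dedupe(["a", "b", "a", "c", "b"]) == ["a", "b", "c"]
assert dedupe([]) == []
assert dedupe(["x"]) == ["x"]
```

The point is the ordering: because the signature and assertions exist first, a wrong generation fails loudly instead of "reporting success while being wrong".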
Learning, juniors, and skill development
- Many reject the “LLM = junior dev” analogy: juniors learn, ask questions, gain domain context; models don’t.
- Concern: if juniors lean on LLMs, they may never develop deep debugging, architecture, and domain modeling skills.
- Others note that user skill with LLMs compounds over time, but concede this doesn’t replace foundational experience.
Enjoyment, craft, and workflow preferences
- Some find LLM-assisted coding less fun, likening it to managing an erratic intern instead of “doing the work” and building intuition.
- Others enjoy offloading the rote/boilerplate parts and spending more time on architecture and domain reasoning; the divide often maps to whether one values code-as-craft versus code-as-means-to-an-end.
Broader concerns beyond laziness
- Commenters highlight IP/copyright abuse of open source, job displacement, power concentration, misinformation, and community spam as serious anti‑AI arguments that don’t rest on “people will be lazy”.
- There’s also anxiety that management will mandate careless AI use for short-term speed, leaving engineers to clean up later.