AI Is Making Developers Dumb
Enjoyment of Coding vs Tool-Driven “Building”
- Several commenters distinguish between people who love writing code itself versus those who mainly care about getting products built.
- Some say LLMs amplify their creativity: they design architecture and use AI as a “typing assistant” or to fill in stubs/boilerplate.
- Others see AI codegen as removing the fun, likening coding to crafts like knitting or painting that they want to do by hand.
- There’s pushback on the idea that “if you don’t enjoy writing code, the field isn’t for you”; economic realities mean many tolerate coding as a means to an end.
Code Quality, Maintainability, and Testing
- Many report AI is decent at boilerplate and small, self-contained chunks but weak at nontrivial design, refactoring, and correctness in languages such as C++ and Python.
- Some like LLM-written unit tests; others find them slow and tautological, mirroring the flaws of poor human-written suites, and note that bugs in tests are high‑stakes too.
- Multiple anecdotes: LLM output is subtly wrong, overly complex, or effectively a worse copy of existing libraries; reviewing/fixing it can be more painful than writing code directly.
- Concern that “vibe coding” and LLM-heavy workflows will produce huge, unmaintainable codebases with few who really understand them.
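The "tautological test" complaint above can be made concrete. A minimal sketch (the function and test names are hypothetical, not from the thread): a test that recomputes the expected value with the same formula as the implementation can never catch a bug in that formula, whereas a test pinned to an independently known value can.

```python
def apply_discount(price: float, rate: float) -> float:
    """Apply a fractional discount to a price."""
    return price * (1 - rate)


def test_apply_discount_tautological():
    # Tautological: the expectation repeats the implementation's formula,
    # so this passes even if the formula itself is wrong.
    price, rate = 100.0, 0.2
    assert apply_discount(price, rate) == price * (1 - rate)


def test_apply_discount_concrete():
    # Meaningful: the expectation is a value known independently of the code.
    assert apply_discount(100.0, 0.2) == 80.0
```

If the implementation were accidentally changed to `price * rate`, only the second test would fail; the first would keep passing, which is the failure mode commenters attribute to some LLM-generated suites.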
Education and Learning Effects
- Instructors observe that students who use LLMs stop asking questions, fixate on syntax, and fail to grasp fundamentals; even strong students struggle to explain recently covered material.
- A small study with business students suggests they can complete a data-science task via ChatGPT yet retain almost no understanding.
- Comparisons to calculators: widely agreed they should be restricted early in learning; debate centers on whether and how to teach responsible LLM use rather than banning it.
Abstraction, Atrophy, and Historical Parallels
- Some see AI as just another leaky abstraction layer (like high-level languages, GC, frameworks); others argue LLMs are qualitatively different because they’re nondeterministic and “hallucinate.”
- Recurrent fear: over-reliance on LLMs atrophies problem‑solving and deep understanding, normalizes mediocrity, and may eventually justify replacing AI‑dependent devs with AI alone.
- Others counter that externalizing rote skills (syntax, memorization) is fine, as long as humans still handle architecture, systems thinking, and reviewing.
Productivity, Workflow, and UX
- Experiences vary: some say coding with AI is now dramatically faster; others report “Copilot lag,” frustration, and exhaustion from correcting the same AI mistakes repeatedly.
- LLMs are praised for explaining unfamiliar codebases, generating repetitive code (e.g., SQL migrations, Pillow image compositing) and tests, and acting as an always‑available tutor.
- Many advocate a “middle ground”: never commit AI code you can’t explain, use it for boilerplate and exploration, and keep sharpening low‑level and conceptual skills.
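The "good at repetitive code" point can be illustrated with a small sketch of the kind of mechanical boilerplate (here, emitting SQL migration statements) that commenters say LLMs handle well; the table and column definitions are hypothetical.

```python
# Hypothetical audit columns to add to a table, name -> column DDL.
COLUMNS = {
    "created_at": "TIMESTAMP NOT NULL DEFAULT now()",
    "updated_at": "TIMESTAMP NOT NULL DEFAULT now()",
    "deleted_at": "TIMESTAMP NULL",
}


def add_columns_migration(table: str, columns: dict[str, str]) -> str:
    """Emit one ALTER TABLE statement per new column."""
    statements = [
        f"ALTER TABLE {table} ADD COLUMN {name} {ddl};"
        for name, ddl in columns.items()
    ]
    return "\n".join(statements)


print(add_columns_migration("orders", COLUMNS))
```

Code like this is tedious to type but trivially checkable on review, which is exactly the profile of task where the "middle ground" advice (use AI for boilerplate, never commit what you can't explain) applies cleanly.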