What AI coding costs you
Perceived cognitive and skill costs
- Many commenters report feeling “mental fatigue,” dependence, or “addiction” to prompting, with a sense that architecture-level understanding and memory of their own systems are weakening.
- The idea of “cognitive debt” resonates: offloading too much thinking to AI may erode the ability to reason about code, especially debugging and conceptual understanding.
- Others push back: reviewing code and learning from it has always been a core skill; reading and reviewing AI-generated code can deepen understanding if done actively rather than rubber‑stamped.
Impact on learning, seniority, and developer identity
- Strong concern that juniors who start with AI will never build deep mental models, habits, or taste; risk of “seniority collapse” where few people truly understand systems.
- Some argue this is just another abstraction jump (opcodes→Fortran→C++…), and atrophied low‑level skills are fine when no longer needed.
- Others counter that previous abstractions were still precise formal languages, whereas here thinking itself is offloaded via fuzzy natural language, which may change cognition more fundamentally.
- Several note that even pre‑AI, senior engineers who stop writing code and only review already atrophy.
Productivity, business pressure, and “inevitability”
- Managers describe direct pressure to adopt AI to shorten delivery cycles dramatically, even while worrying about long‑term quality and junior development.
- Some see fully agentic coding (LLMs doing “any software task” with enough tokens) as inevitable for mainstream commercial software; human‑written code retreats to niches.
- Others argue we still can’t automate deciding what to build, and specs precise enough for agents are themselves a major, non‑automatable task.
Code quality, maintainability, and tooling concerns
- Frequent complaints about “vibe‑coded slop”: fallbacks everywhere, swallowed errors, inefficiency, inconsistent patterns, and developers unable to explain their own PRs.
- Questions raised about reproducibility when “the compiler” (the model) is non‑deterministic and centrally controlled, and about checking generated artifacts into source control.
- Worry that relentless speed creates fragile “houses of cards” and that AI will keep papering over problems faster than teams can understand them.
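The "swallowed errors" and "fallbacks everywhere" complaint can be made concrete with a small sketch. This is a hypothetical Python example (the function names and config scenario are invented, not from any commenter): the first function silently replaces any failure with a default, the second fails loudly with context.

```python
import json

def load_config_sloppy(text):
    # Anti-pattern often described as "slop": any parse error is
    # swallowed and replaced with a default, hiding the real problem.
    try:
        return json.loads(text)
    except Exception:
        return {}

def load_config_strict(text):
    # Preferred: let malformed input fail loudly with context, so the
    # bug surfaces during review instead of silently in production.
    try:
        return json.loads(text)
    except json.JSONDecodeError as exc:
        raise ValueError(f"invalid config: {exc}") from exc

print(load_config_sloppy("not json"))  # prints {} (the error is hidden)
```

The sloppy variant is what reviewers report being unable to catch at speed: the code "works" in the happy path and degrades invisibly everywhere else.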
Use patterns, thresholds, and healthy practices
- Many advocate using AI mainly for:
  - painful, low‑reward tasks (boilerplate, glue code, harnesses, bash snippets)
  - search/navigation, summarization, and documentation
  - generating code plus explanations and reports to aid learning.
- Common suggestion: keep “hands in the code” for complex, fun, or business‑critical logic to preserve skill and intrinsic joy.
- Several stress designing processes (tests, reviews, social collaboration) and even AI “fasts” to avoid turning developers into demotivated AI babysitters.
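One concrete form of the "design the process" suggestion is to keep the specification in human hands: the test is hand-written, and any generated implementation must pass it before review. A minimal, hypothetical Python sketch (the refund function and its numbers are invented for illustration):

```python
def prorated_refund(price, days_total, days_used):
    # Implementation (possibly AI-generated) that must satisfy the
    # hand-written test below before it reaches code review.
    unused = max(days_total - days_used, 0)
    return price * unused / days_total

def test_prorated_refund():
    # Hand-written spec for business-critical logic: the refund is
    # proportional to unused days and never negative.
    assert prorated_refund(price=30.0, days_total=30, days_used=10) == 20.0
    assert prorated_refund(price=30.0, days_total=30, days_used=30) == 0.0
    assert prorated_refund(price=30.0, days_total=30, days_used=40) == 0.0

test_prorated_refund()
```

The design point is that the human keeps "hands in the code" where it matters (the spec and the invariants), while delegating the mechanical fill-in, rather than reviewing generated code with no independent check.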
Uncertainty and evidence
- Commenters note that long‑term effects are still unclear; existing studies focus on skill formation with small samples and can be misinterpreted as general “skill atrophy.”
- Some see current discourse as partly “moral panic” driven by vibes and professional identity; others see real early warning signs and argue for caution until we know more.