The hidden cost of AI coding

Usefulness on real codebases

  • Some report AI assistants are “mostly useless” on large, complex, evolving codebases and see no near‑term path to agents replacing programmers.
  • Others say long‑context “reasoning” models plus tool integration (ripgrep, code search, symbol extractors, IDE agents) work surprisingly well, even on 50k–300k LOC trees—if you:
    • Feed in carefully chosen fragments.
    • Work in stages (locate code, summarize, trace flows, then modify).
    • Use signatures/docs instead of full bodies where possible (a minimal sketch follows this list).
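
A minimal sketch of that staged workflow, assuming a Python codebase, ripgrep (rg) on PATH, and a made-up symbol name; the model call itself is omitted, since the point is what context gets assembled:

```python
"""Sketch: locate relevant files with ripgrep, then hand the model
signature lines and docstring summaries instead of full function bodies."""
import ast
import subprocess
from pathlib import Path


def locate_files(symbol: str, root: str = ".") -> list[Path]:
    # Stage 1: cheaply narrow a large tree to candidate files.
    proc = subprocess.run(
        ["rg", "--files-with-matches", "-t", "py", symbol, root],
        capture_output=True, text=True, check=False,
    )
    return [Path(line) for line in proc.stdout.splitlines()]


def signature_summaries(path: Path) -> list[str]:
    # Stage 2: keep only def/class header lines plus the first docstring line.
    source = path.read_text()
    lines = source.splitlines()
    summaries = []
    for node in ast.walk(ast.parse(source, filename=str(path))):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            header = lines[node.lineno - 1].strip()
            doc = (ast.get_docstring(node) or "").splitlines()
            summaries.append(f"{path}:{node.lineno}: {header}  # {doc[0] if doc else ''}")
    return summaries


if __name__ == "__main__":
    context: list[str] = []
    for path in locate_files("process_payment"):  # hypothetical symbol to trace
        context.extend(signature_summaries(path))
    # Paste the joined context into the model prompt, then iterate:
    # summarize -> trace flows -> request one targeted modification.
    print("\n".join(context))
```

The design point is that ripgrep does the cheap narrowing and the ast pass strips function bodies, so even a 300k LOC tree yields a prompt of header lines rather than whole files.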

Where AI coding tools shine vs fail

  • Works well for:
    • Boilerplate, glue code, project scaffolding, config templates.
    • Remembering stdlib APIs and obscure library calls.
    • Test skeletons and mocks (when reviewed; a typical shape is sketched after this list).
    • Bootstrapping in unfamiliar stacks or DSLs.
    • Explaining legacy code and summarizing modules.
  • Performs poorly or erratically for:
    • Debugging nontrivial bugs; tools “run in circles” or miss key edge cases.
    • Complex build systems and version‑sensitive setups.
    • Interacting algorithms, low‑level domains, and subtle performance issues.
    • Large legacy systems with deep tribal knowledge.
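
As a concrete illustration of the “test skeletons and mocks” item above, here is the kind of scaffold an assistant drafts quickly and a human still has to review for real assertions and edge cases; PaymentService and its gateway are hypothetical names invented for this sketch:

```python
"""Hypothetical test skeleton: the toy code under test is defined inline so
the sketch runs standalone; a real suite would import the production module."""
import unittest
from unittest.mock import MagicMock


class PaymentService:
    """Stand-in for the module under test."""
    def __init__(self, gateway):
        self.gateway = gateway

    def process(self, amount: int) -> dict:
        return self.gateway.charge(amount=amount)


class PaymentServiceTest(unittest.TestCase):
    def setUp(self):
        # Mock the external gateway so tests never touch the network.
        self.gateway = MagicMock()
        self.gateway.charge.return_value = {"status": "ok"}

    def test_charge_called_once_with_amount(self):
        service = PaymentService(gateway=self.gateway)
        result = service.process(amount=1999)
        self.gateway.charge.assert_called_once_with(amount=1999)
        self.assertEqual(result["status"], "ok")


if __name__ == "__main__":
    unittest.main()
```

A skeleton like this is useful precisely “when reviewed”: the mock’s happy-path return value and the missing failure cases are where an unexamined merge goes wrong.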

Verification, learning, and skill erosion

  • Strong consensus that AI‑generated code must be read, tested, and refactored like junior‑dev output; otherwise, it’s dangerous.
  • Concerns about automation bias: people may over‑trust code they don’t fully understand.
  • Several fear heavy reliance reduces deep learning and the ability to spot “accidentally quadratic” or insecure patterns (a small example follows this list). Others use AI only in domains they already know, precisely to measure its gaps.
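
As a small illustration of the “accidentally quadratic” worry, a hypothetical dedupe helper of the kind that reads fine in a quick review:

```python
"""Hypothetical example: both functions are correct, but the first hides an
O(n^2) membership scan that a skim of generated code can easily miss."""


def dedupe_slow(items: list[str]) -> list[str]:
    seen: list[str] = []
    for item in items:
        if item not in seen:   # O(n) list scan per element -> O(n^2) overall
            seen.append(item)
    return seen


def dedupe_fast(items: list[str]) -> list[str]:
    seen: set[str] = set()
    out: list[str] = []
    for item in items:
        if item not in seen:   # O(1) average-case set lookup -> O(n) overall
            seen.add(item)
            out.append(item)
    return out
```

Spotting that difference is exactly the kind of judgment commenters fear erodes when every helper is accepted as-is from the assistant.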

Flow, joy, and the craft

  • Many feel AI restores flow: less slog through docs and repetitive typing, more time on architecture, algorithms, and problem‑framing. Managers and PMs say it lets them meaningfully code again in limited windows.
  • Others say prompting, waiting, and validating destroys flow and makes them feel like supervisors of a stochastic junior, not creators. They miss the satisfaction of fully understanding every line, and worry about turning a beloved craft into passive review work.

Management, incentives, and software quality

  • Several predict management will chase “productivity” by mandating AI use and accepting low‑quality “vibe coded” systems, since short‑term feature velocity matters more than maintainability.
  • Expectation of a coming wave of brittle, insecure, hard‑to‑maintain AI code, followed by renewed demand for experienced engineers to triage and repair it.

Education and the rise of “vibe coders”

  • Instructors report students, some already holding IT jobs, presenting AI‑written code they cannot explain.
  • Widespread worry that newcomers will skip foundational understanding, relying on AI to “play pinball” with code until something seems to work, undermining long‑term competence.

Attitudinal split

  • The thread repeatedly contrasts:
    • Those who love programming as a craft and fear losing its hard, rewarding parts.
    • Those who see code as a means to solve problems and happily offload drudgery.
  • Many place themselves in the middle: AI as a powerful but narrow tool, best used for tedious or well‑understood tasks, not as a replacement for thinking.