LLMs can be exhausting

Cognitive Load and Exhaustion

  • Many find LLM-assisted coding more mentally taxing than manual coding.
  • Main source of fatigue: continuously steering, specifying, and reviewing agent output rather than “coasting” on implementation work.
  • High parallelism (multiple agents/sessions) increases context switching and drains focus.
  • Some compare it to pair programming or juggling: more productive, but more intense and harder to reach a calm “flow” state.

Shift in Role: From Coder to Manager/Architect

  • Users feel less like coders and more like managers supervising semi-competent juniors or autopilot systems: deciding what to build, clarifying specs, and integrating code.
  • Integration and architectural decisions remain hard; LLMs just accelerate code generation, so complexity grows faster.
  • Some miss the satisfaction of personally solving problems and instead feel like QA testers of generated code.

Quality, Reliability, and Trust

  • Experiences split sharply: some report higher velocity and quality with careful use (good specs, tests, review); others see more bugs, regressions, and fragile code.
  • Cheaper/weaker models are often compared to “terrible juniors” who don’t learn and require constant correction.
  • Non-determinism and the lack of a stable mental model (unlike compilers or libraries, whose behavior is predictable) are recurring pain points.

Organizational Pressure and Mandates

  • Reports of companies mandating AI use and expecting massive LOC/feature output.
  • Senior engineers feel burned out reviewing large, LLM-generated PRs, often from colleagues who barely read the code.
  • Some fear system resilience and codebase comprehensibility are degrading while accountability for failures remains human.

Use Cases, Boundaries, and “AI Discipline”

  • Productive uses cited: prototyping, debugging, code review, explaining codebases, small utilities, test generation.
  • Several advocate “AI discipline”:
    • Use LLMs selectively, keep humans in the loop, limit concurrent agents.
    • Invest heavily in specs, design docs, and tests before delegating.
    • Accept idle agents rather than optimizing for constant utilization.

Skepticism, Dystopian Vibes, and Mental Health

  • Some view the whole situation as dystopian: workers blaming themselves for tool limits, chasing hype, and risking burnout.
  • Others see LLMs as energizing “master weapons” for experienced engineers.
  • Multiple commenters explicitly worry about attention fragmentation, addiction-like behavior, and long-term cognitive/mental health impacts.