AI fatigue is real and nobody talks about it
Nature of AI Fatigue
- Many engineers report being able to ship far more in a day but ending it mentally exhausted.
- Core cost is cognitive: constant judging/reviewing of AI output, not typing code.
- Agents are seen as “ten unreliable junior engineers” needing supervision; you must catch their non‑deterministic mistakes, which keeps you in vigilance mode.
- Waiting for agent runs breaks flow; unpredictable latencies encourage tab‑switching and doomscrolling, increasing context switching fatigue.
- Some compare it to management or micromanagement: lots of oversight, little time for deep, hands‑on building.
Productivity, Expectations, and Capitalism
- Completing tasks faster doesn’t reduce workload; it increases the number of tasks and features pushed.
- Managers and individuals ratchet expectations up (“baseline moves”), echoing old critiques of labor‑saving tech that never actually saves labor.
- Several argue that productivity gains mostly enrich owners/investors, not workers, and that lines of code or feature count are poor metrics.
- Feature creep and rapid merging, driven by “because we can,” undermine stability and team comprehension.
Review Burden, Quality, and Tech Debt
- Reviewing AI‑generated code is often harder than writing it: unfamiliar style, weak conventions, and hidden pitfalls (e.g., SQL queries that miss indexes).
- “70% good” outputs create a “perceived cost aversion”: spending hours improving something generated in a minute feels wasteful, so quality and maintainability suffer.
- People note rising review fatigue, fear of bugs escaping, and rapid accumulation of technical debt.
Divergent Personal Experiences
- Some feel significantly less stressed: AI removes drudgery, reduces “swirling mess” anxiety, and restores fun via rapid progress.
- Others feel no fatigue at all and see this as a boundaries/overwork issue, not an AI problem.
- A subset deliberately avoids agents or uses LLMs only as Q&A/editors, preserving traditional coding and “meditative” flow.
Critiques of the Article and AI “Slop”
- Many readers believe the essay itself is heavily LLM‑assisted, citing telltale phrasing and overlong, padded prose; this undermines trust in its authenticity.
- There’s broad irritation at AI‑generated writing and images in general, described as “slop” and “marketing sludge,” and a sense that HN surfaces too much of it.
Coping Strategies and Workflow Adjustments
- Suggested mitigations: time‑boxing AI sessions, taking longer breaks, focusing on fewer concurrent projects, and writing detailed specs first.
- Others advocate smaller, incremental prompts instead of long agent runs, or using AI only for boring refactors and boilerplate.
- Some build meta‑tools (background code review, monitoring agents) to offload supervision; others lean on meditation, distraction blockers, or simply opting out.