Does vibe coding create fatigue?
What “vibe coding” Means (Disagreement Over Definition)
- Original sense (per links cited): you prompt an AI, don’t look at the code, and just see if the app “works”; verification is via behavior, not code review.
- Some broaden it to any AI-assisted coding, even with careful review and tests; others strongly resist this and want a separate term (e.g., “YOLO coding”).
- Debate over edge cases: if you review only 5% of an AI PR, is that still vibe coding? Some say yes (spirit of the term), others say no (because you’re signaling distrust and doing real QC).
- Several comments tie this to general semantic drift (“bricked,” “electrocuted,” “hacker”), with tension between “language evolves” and “we’re losing useful distinctions.”
Fatigue, Pace, and Cognitive Load
- Many report strong mental fatigue: more features in less time, more context switching, and no “compile-time” downtime to reflect.
- Oversight feels like management: constantly steering an overpowered but clumsy agent, catching security issues, bad dependencies, or nonsense changes.
- ADHD / processing-difficulty users say AI shifts work from “generating” to “validating,” which is more draining, especially with large or messy outputs.
- Some compare it to foreign-language conversations or multitasking through meetings and agents — intense, fragmented attention all day.
Positive Experiences: Speed as Energizing
- Others find the speed exhilarating: knocking out bug lists, scaffolding UIs, or learning unfamiliar stacks without deep ramp-up.
- Especially useful for hobby projects, boilerplate, one-off tools, docs, or visualizations where long-term maintainability matters less.
- Some feel like they can finally ship old side projects because the boring parts are automated.
Quality, Trust, and Verification Gaps
- Recurrent theme: AI code often “works” but is over-engineered, poorly structured, and accumulates tech debt.
- Complaints about agents writing fake tests, tests that don’t assert anything meaningful, or code that only superficially matches requirements (a small illustration follows this list).
- Strong divide:
  - One camp says trust should come from tests and automated checks, not deep human understanding.
  - Another insists you can’t meaningfully review or own changes in unfamiliar domains without understanding them; relying on AI is “programming by coincidence.”
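To make the “fake tests” complaint concrete, here is a minimal, self-contained illustration (the `word_count` function and both tests are invented for this example, not taken from the thread): the first test merely exercises the code and would pass even if the result were wrong, while the second pins down actual behavior.

```python
# Illustrative only: a "fake" test that just runs the code versus a test
# that asserts meaningful behavior. `word_count` is a made-up example.

def word_count(text: str) -> int:
    """Count whitespace-separated words."""
    return len(text.split())

def test_word_count_runs():
    # Passes as long as nothing crashes; it would still pass if the
    # implementation returned the wrong number.
    word_count("two words")

def test_word_count_value():
    # Fails unless the behavior is actually correct.
    assert word_count("two words") == 2
    assert word_count("") == 0
```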
Workflows and Mitigations
- Suggested tactics: write tests first, commit tests separately, keep iterations small, run linters and static analysis, and build ad-hoc verifiers (see the sketch after this list).
- Some prompt agents to self-review, grade their own work, and iterate until “good enough,” though others note models still happily declare code “production-ready” when it isn’t.
- Several emphasize that generation has been automated, but verification hasn’t caught up; the mismatch may be the core source of fatigue.
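As a rough sketch of the “generate, then verify” loop described above: the agent proposes a change, automated checks (tests plus a linter) act as the ad-hoc verifier, and any failures are fed back for another attempt. This assumes a pytest/ruff setup, and `generate_patch` / `apply_patch` are hypothetical stand-ins for whatever agent API and patch-application step you actually use.

```python
import subprocess

MAX_ITERATIONS = 5

def generate_patch(prompt: str) -> str:
    """Hypothetical stand-in for a call to your coding agent or model API."""
    raise NotImplementedError("wire up your agent here")

def apply_patch(patch: str) -> None:
    """Hypothetical stand-in: apply the proposed change to the working tree."""
    raise NotImplementedError("e.g. write files or run `git apply`")

def run_checks() -> tuple[bool, str]:
    """Run the test suite and linter; return (all passed, combined output)."""
    passed, reports = True, []
    for cmd in (["pytest", "-q"], ["ruff", "check", "."]):
        result = subprocess.run(cmd, capture_output=True, text=True)
        passed = passed and result.returncode == 0
        reports.append(result.stdout + result.stderr)
    return passed, "\n".join(reports)

def verified_change(task: str) -> bool:
    """Ask the agent for a change, feeding check failures back until they pass."""
    feedback = ""
    for _ in range(MAX_ITERATIONS):
        apply_patch(generate_patch(task + feedback))
        ok, report = run_checks()
        if ok:
            return True
        feedback = f"\n\nThe previous attempt failed these checks:\n{report}"
    return False  # give up and hand the change back to a human reviewer
```

The loop is only as trustworthy as the checks themselves, which is exactly the verification gap the complaints above point at.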
Joy of Coding vs. Automation
- Some miss the dopamine loop of struggling with and then fixing their own code; vibe coding can feel like losing the “LEGO-building” fun.
- Others value the trade: boredom and drudgery decrease, but mental load shifts to high-intensity design, planning, and oversight.