After two years of vibecoding, I'm back to writing by hand

What “vibecoding” Means and Where It Came From

  • Strong disagreement on definitions:
    • “Strict” meaning (per Andrej Karpathy’s original coinage): never look at the code, only the running product; accept diffs/PRs on vibes.
    • “Loose” meaning: any heavy AI-assisted programming, including careful review and refactoring.
  • This definitional drift causes people to talk past each other: critics often attack the strict, irresponsible version; many practitioners are really doing “AI-assisted programming” or “agent-assisted coding.”
  • Timeline debates: Copilot (2021) and early Cursor/chat workflows weren’t truly agentic; many say full-project “vibecoding” only became viable with Claude Code and other modern agents in 2024–25.

Experiences with AI Coding Tools

  • Tools mentioned: GitHub Copilot, Claude Code, Cursor, Gemini, Grok, local models.
  • Some dismiss Copilot as glorified autocomplete; others note it now supports planning, editing, tool use, web search, and Claude integration.
  • Enterprise SSO and closed integrations make advanced workflows hard in big companies; smaller orgs/individuals are ahead on adoption.
  • Success stories: people claim to have built sizeable apps (CAD tools, interactive fiction platforms, web backends) with 80–99% AI-written code, while humans designed the architecture and reviewed PRs.

Code Quality, Architecture, and Tests

  • Common failure mode: agents produce plausible, locally good changes that duplicate logic, ignore existing patterns, and fracture architecture (“slop”).
  • Proponents say this is a management problem:
    • Use agents for small, self-contained tasks.
    • Maintain strong tests, linters, ADRs, and a clear CLAUDE.md/agent config (a minimal sketch follows this list).
    • Iteratively refactor with agents; sometimes multiple agents review each other’s output.
  • Skeptics report that by the time they’ve corrected and refactored AI output, writing by hand would have been faster, especially in complex, stateful, or performance-critical systems.
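  • For illustration, a minimal sketch of such an agent config. The CLAUDE.md filename and location are real (Claude Code reads it from the repository root), but the project and rules below are hypothetical:

      # CLAUDE.md (hypothetical project)

      ## Architecture
      - All database access goes through src/db/repository.py; never query from handlers.
      - New endpoints follow the existing router/service/repository layering.

      ## Workflow
      - One small, self-contained task per session.
      - Run `make lint test` before proposing a diff; never commit failing tests.
      - Reuse existing helpers; do not duplicate logic that already exists.

      ## Out of scope
      - Do not touch migration files or CI config without asking.

    Per the proponents, guardrails like these are what make slop a management problem rather than an inevitability: violations become concrete and reviewable.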

Education, Learning, and Skill Erosion

  • Many CS teachers worry that AI doing “simple parts” prevents students from building the mental models needed for harder work.
  • Analogies offered: forklifts vs weightlifting, mech suits, calculators in math class. The shared claim: in learning, the struggle is the point.
  • Others counter that industry needs higher‑level thinkers who can use tools, not “assembly-line coders,” and that curricula are already outdated.
  • Consensus that exams and coursework must adapt (paper exams, oral defenses, change-history audits, AI as tutor rather than code generator).

Middle-Ground Practices vs Extremes

  • Broad agreement that “all-agent” vs “all-handwritten” is a false dichotomy. Effective patterns include:
    • Human-driven design and decomposition; AI for boilerplate, wiring, and refactors.
    • One-function-at-a-time or small-scope prompting; frequent reviews of diffs.
    • Using AI as rubber duck, researcher, and test generator (with human pruning; see the sketch after this list).
  • Several commenters say vibecoding is fine for prototypes, one-off tools, and side projects, but they avoid it for business‑critical or long‑lived systems.
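  • As a concrete sketch of the “test generator with human pruning” pattern: a small hand-written function, plus the kind of pytest suite an agent might draft, annotated with what a reviewer kept and cut. All names and cases here are invented for illustration:

      # slugify.py: small human-written function handed to an agent to test.
      import re

      def slugify(title: str, max_len: int = 40) -> str:
          """Lowercase, collapse non-alphanumerics to '-', trim to max_len."""
          slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
          return slug[:max_len].rstrip("-")

      # test_slugify.py: the agent-drafted suite, after human pruning.
      # (In a real repo this file would start with: from slugify import slugify)
      import pytest

      def test_basic_title():
          assert slugify("Hello, World!") == "hello-world"

      def test_collapses_runs_of_punctuation():
          assert slugify("a -- b") == "a-b"

      def test_truncation_never_ends_with_dash():
          # The 40th character would be '-', so it gets stripped.
          assert slugify("x" * 39 + " y") == "x" * 39

      def test_none_raises():
          # Kept, but flipped: the agent first asserted slugify(None) == "",
          # which would have locked in coercion nobody wanted.
          with pytest.raises(AttributeError):
              slugify(None)

      # Pruned by the reviewer: three near-duplicate "basic" cases the agent
      # also generated, which added runtime but no coverage.

    The division of labor mirrors the list above: the human owns the function’s contract and prunes for signal; the agent supplies breadth.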

Productivity, Careers, and the Future

  • Some claim order‑of‑magnitude productivity gains and say they’ll “never go back” to hand-only coding.
  • Others describe burnout, loss of codebase mental model, skill atrophy, and a sense of “passive coding” akin to GPS eroding navigation skills.
  • Worries that junior devs plateau at “5x with AI” instead of becoming much stronger engineers; fear of an “eternal summer” of low-quality AI-generated software.
  • Counterpoint: top labs and many companies report most of their code is now AI-written (but still human‑reviewed), suggesting agentic coding skills will be increasingly required, even if pure vibecoding remains risky.