The cult of vibe coding is dogfooding run amok

What “vibe coding” means in practice

  • Used loosely to mean giving natural‑language goals to an LLM/agent and letting it write most or all of the code, sometimes without the human ever reading it.
  • Many commenters describe a spectrum, from “AI as autocomplete” to “AI writes whole subsystems from a spec I barely understand”.
  • Some see “vibe coding” as fine for prototypes, personal tools, or low‑stakes features; others already use it heavily for production with guardrails (tests, strong typing, QA).
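
  A minimal sketch of the “guardrails” style of production use mentioned above: a human‑owned type signature and tests gate whatever implementation an agent produces. The parse_money function and its behavior are hypothetical illustrations, not something from the discussion (Python used here purely as an example).

    # Hypothetical guardrail: a human-written contract (types + tests) that any
    # AI-generated implementation must keep satisfying across regenerations.
    from decimal import Decimal

    def parse_money(text: str) -> Decimal:
        """Body an agent might generate; the signature is the human-owned contract."""
        cleaned = text.strip().replace("$", "").replace(",", "")
        return Decimal(cleaned)

    def test_parse_money_handles_common_formats() -> None:
        # Human-written acceptance tests, run (e.g. via pytest) before any merge.
        assert parse_money("$1,234.50") == Decimal("1234.50")
        assert parse_money("  99 ") == Decimal("99")

    def test_parse_money_rejects_garbage() -> None:
        try:
            parse_money("not a number")
        except ArithmeticError:
            return
        raise AssertionError("expected non-numeric input to be rejected")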

AI as abstraction vs fundamentally different tool

  • One camp: AI is “just another abstraction layer”, like moving from assembly to high‑level languages.
  • Counterpoints:
    • LLMs are non‑deterministic and opaque; traditional abstractions are deterministic and well‑specified.
    • Models “guess” intent, invent behavior, and can’t reliably explain failures. That’s not a compiler.
    • Natural language specs are inherently ambiguous; this limits reliability.

Code quality vs product success

  • Commenters point to the leaked Claude Code source as evidence that messy, duplicated, “spaghetti” code can still underpin a very popular product.
  • Some argue this simply confirms a long‑standing reality: many successful commercial codebases are ugly; users care about features, not elegance.
  • Others stress long‑term costs: tech debt compounds, maintenance grinds to a halt, and LLMs struggle even more on convoluted code.

Maintainability, prompts, and non‑determinism

  • Worry: agents churn out huge, hard‑to‑reason‑about diffs; debugging becomes vastly harder than initial generation.
  • Proposed alternative: treat prompts/specs and tests as the primary artifacts, regenerate code as needed, and perhaps store prompts in version control (a rough sketch of this workflow follows the list).
  • Critics note that LLM non‑determinism and incomplete test coverage mean successive regenerations can silently introduce new bugs.
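
  One reading of the “prompts and tests as primary artifacts” proposal is the loop below. The file paths and the generate_module hook are placeholders, not anything described in the thread; the point is that the versioned spec and tests are what humans maintain, while generated code is treated as a disposable build product.

    # Hypothetical regeneration loop: spec and tests live in version control,
    # generated code is rebuilt from them rather than hand-edited.
    import subprocess
    from pathlib import Path

    PROMPT = Path("specs/invoice_parser.prompt.md")   # reviewed and versioned like source
    TARGET = Path("src/invoice_parser.py")            # regenerated, never hand-edited
    TESTS = "tests/test_invoice_parser.py"            # the durable acceptance gate

    def generate_module(prompt: str) -> str:
        """Placeholder for whatever agent or model call a team actually uses."""
        raise NotImplementedError

    def regenerate() -> bool:
        TARGET.write_text(generate_module(PROMPT.read_text()))
        # Because generation is non-deterministic, each regeneration must re-earn
        # trust by passing the versioned tests; any gap in the tests is a gap in
        # the spec, which is exactly the critics' objection above.
        return subprocess.run(["pytest", "-q", TESTS]).returncode == 0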

Safety, accountability, and critical systems

  • Many insist vibe coding is unacceptable for safety‑critical or financial systems; you must understand and review the code.
  • Debate over accountability:
    • One side: humans triggering the LLM are responsible by default.
    • Other side: in practice, organizations will use LLMs as “accountability sinks” and blame the tool.

Workflows, “AI levels”, and best practices

  • People reference informal “AI levels” from “human‑coded with light assist” up to “spec‑only, bots do all coding”.
  • Several engineers report comfort around mid‑levels: AI writes code they can fully understand and test, with humans steering architecture and reviewing diffs.
  • Consensus among cautious users: AI is powerful for refactors, lint‑like cleanup, boilerplate, and exploration; risky when used as an unchecked code factory.
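
  For the “powerful for refactors, risky as an unchecked code factory” consensus, one concrete check cautious workflows rely on is behavior preservation: run the pre‑ and post‑refactor implementations against a large sample of inputs before accepting the agent’s rewrite. The slugify functions below are made‑up stand‑ins for illustration.

    # Hypothetical behavior-preservation check for an AI-driven refactor.
    import random
    import string

    def slugify_old(title: str) -> str:
        # The original, human-written implementation.
        return "-".join(title.lower().split())

    def slugify_new(title: str) -> str:
        # Imagine this body came from an agent's cleanup pass.
        return "-".join(word.lower() for word in title.split())

    def test_refactor_preserves_behavior() -> None:
        rng = random.Random(0)  # fixed seed keeps the check itself deterministic
        for _ in range(1000):
            length = rng.randint(0, 20)
            title = "".join(rng.choice(string.ascii_letters + "  ") for _ in range(length))
            assert slugify_new(title) == slugify_old(title)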