Perverse incentives of vibe coding

Sci‑fi “Motie engineering” and AI code structure

  • Several comments riff on the “Motie engineering” idea (from Niven and Pournelle’s The Mote in God’s Eye: highly interdependent, non‑modular systems) as an analogy for LLM‑produced code.
  • Some speculate that unconstrained optimization tends to produce tightly interwoven, opaque designs that are hard to understand or repair, but closer to optimal for a given objective.
  • Others doubt such “Motie‑style” systems are practical for humans, worrying that if AIs converge on them, codebases will become effectively unmaintainable without AI.

Vibe coding vs. structured AI‑assisted coding

  • Multiple people object to using “vibe coding” as a synonym for any AI‑assisted workflow, arguing it should mean largely unguided, no‑look prompting where the human barely understands the result.
  • Others describe more disciplined practices: detailed plans, small tasks, tight scopes, diffs only, tests and linters, and treating the model like a hyperactive junior. They see this as qualitatively different from vibe coding.
  • There’s disagreement over agents: some say editor/CLI agents that edit, compile, and iterate are essential; others find them produce messy, hard‑to‑understand changes and prefer conversational use plus manual edits.

Verbosity, token economics, and SaaS incentives

  • Many observe LLMs generate verbose, ultra‑defensive, comment‑heavy, “enterprise‑grade” code, often with duplicated logic and unnecessary abstractions.
  • Some link this to token‑based pricing: more tokens → more revenue, akin to other SaaS products that profit from CPU, storage, or log volume rather than efficiency.
  • Others push back: current models are mostly loss‑leaders in a competitive market, so providers are more motivated by capability than padding tokens; verbosity is framed as a side‑effect of training data and safety/completeness, not deliberate exploitation.
  • Users report partial success prompting for “minimal code” or banning comments, but note this can sometimes reduce accuracy.
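The verbosity pattern commenters describe can be made concrete with a hypothetical example (both functions are illustrative, not taken from any commenter’s code): the first mirrors the ultra‑defensive, comment‑heavy style attributed to LLM output, while the second is the minimal equivalent a “minimal code” prompt aims for.

```python
def average_verbose(numbers):
    """Calculate the arithmetic mean of a list of numbers."""
    # Validate that the input is not None
    if numbers is None:
        raise ValueError("Input must not be None")
    # Validate that the input is a list
    if not isinstance(numbers, list):
        raise TypeError("Input must be a list")
    # Handle the empty-list edge case explicitly
    if len(numbers) == 0:
        raise ValueError("Input list must not be empty")
    # Accumulate the total in a dedicated variable
    total = 0
    for number in numbers:
        total = total + number
    # Compute and return the arithmetic mean
    return total / len(numbers)


def average_minimal(numbers):
    """Same result for valid input, in one line."""
    return sum(numbers) / len(numbers)
```

Both return the same value for valid input, but the first emits several times as many tokens, which is the kind of asymmetry the token‑economics argument above points at; the counterargument is that the extra checks are a training artifact rather than deliberate padding.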

Developer skills, quality, and gambling‑like dynamics

  • Several anecdotes from workplaces and classrooms suggest heavy reliance on LLMs correlates with weaker debugging, poor edge‑case handling, and “almost‑works” solutions that crumble in the last 10%.
  • Some fear long‑term atrophy of critical thinking and propose bans or strict limits on vibe coding, using it as a hiring filter (“no AI slop”). Others argue the tools mostly amplify strong engineers and expose weak ones.
  • The article’s gambling analogy resonates for many: repeated prompting feels like a variable‑reward slot machine, especially with image and frontend work.
  • Others argue this is an overreach: many paid, non‑deterministic services (stocks, lawyers, artists) aren’t gambling; local or flat‑fee usage breaks any “house profit” story.

Effectiveness and limits of AI coding tools

  • Experiences diverge sharply. Some say AI is transformative for CRUD‑like apps, glue scripts, refactoring patterns, config tweaks, and explaining unfamiliar code.
  • Others, especially in embedded, multi‑language, or idiosyncratic codebases, find tools mostly hallucinate APIs, struggle with context limits, and provide little net value.
  • Broad agreement that LLMs help most with boilerplate and prototyping, and that they still require humans to own architecture, interfaces, and the hardest 10–20% of problems.