AI coding is gambling

Is AI Coding “Gambling”?

  • One camp says yes: outputs are non-deterministic, users repeatedly “pull the lever” with new prompts hoping for a good result, and success can feel like luck.
  • Others argue it’s only “gambling” in the trivial sense that all uncertain work is; if success rates are high and properly verified, it’s just engineering under uncertainty.
  • Some distinguish “spray-and-pray” one-shot prompting (like slots) from spec-driven, test-driven workflows (more like poker or regular work).

Human Analogies: Interns, Coworkers, Managers

  • Commenters compare delegating to AI to assigning tasks to interns, temps, or mediocre coworkers: you can’t predict the quality of the result, so you must review and iterate.
  • Critics push back: humans can be trained, build lasting knowledge, are accountable, and have slower, less-addictive feedback loops.
  • There’s discomfort with using interns as a metaphor at all; some find it dehumanizing.

Addiction, Variable Rewards, and Psychology

  • Multiple commenters report slot-machine–like behavior: rapid re-prompts, running multiple agents in parallel, staying up late “chasing” a working run.
  • Variable rewards and near-misses are seen as key hooks, similar to gambling and social media.
  • Some explicitly describe disrupted work–life boundaries and dopamine-driven overuse.

Productivity vs Reliability and Maintainability

  • Many say agents dramatically speed up boilerplate, translation, and prototypes, sometimes enabling projects they’d never finish alone.
  • Others find that generated code often “looks right” but is brittle, hard to maintain, or subtly wrong, especially without strong tests.
  • Concerns include long-term maintainability, code bloat, and shallow understanding of complex systems.

Specs, Tests, and Workflows

  • A recurring theme: AI coding only works well when paired with clear specs, strong automated tests, and scripted quality checks.
  • Some teams reportedly forbid manual coding and enforce spec/TDD + agents, reporting large productivity gains but unhappy developers.
  • Others note that in real-world product work, specs are rarely clean or stable, so the “just conform to the spec” story feels unrealistic.
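The spec-plus-tests loop commenters describe can be sketched minimally: write the spec as executable test cases first, then accept agent output only when every case passes. This is a hypothetical illustration, not any team's actual tooling; the names (`accept`, `slugify`, `SLUGIFY_SPEC`) are invented.

```python
def accept(candidate, spec):
    """Run a candidate function against every (args, expected) case in the spec.

    Returns True only if all cases pass -- the gate that turns
    "pull the lever again" into a verifiable loop.
    """
    return all(candidate(*args) == expected for args, expected in spec)

# Spec for a hypothetical slugify(), written before any implementation exists.
SLUGIFY_SPEC = [
    (("Hello World",), "hello-world"),
    (("  Spaces  ",), "spaces"),
    (("Already-Slugged",), "already-slugged"),
]

# Two imagined "agent outputs": one subtly wrong, one correct.
def attempt_1(s):
    # Looks right, but keeps leading/trailing whitespace as dashes.
    return s.lower().replace(" ", "-")

def attempt_2(s):
    # split() discards surrounding whitespace, so all cases pass.
    return "-".join(s.lower().split())

print(accept(attempt_1, SLUGIFY_SPEC))  # False -> reject, re-prompt
print(accept(attempt_2, SLUGIFY_SPEC))  # True  -> accept
```

The point of the sketch is that the spec, not the reviewer's impression, decides acceptance; a "looks right" attempt like `attempt_1` is rejected mechanically.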

Shifts in Roles and the Meaning of Programming

  • There’s sharp disagreement over whether prompting + reviewing counts as “programming” versus “managing a coder.”
  • Some relish focusing on ideas and product design while offloading typing to AI; others value the craft of writing code itself and avoid AI to preserve skill and joy.
  • Commenters worry about the training pipeline for juniors, and about a future in which many devs can orchestrate agents but few deeply understand systems.

Control, Ownership, and Industry Dynamics

  • Some fear dependence on proprietary LLM providers, subscription lock-in, and future price hikes.
  • Others see current subsidies and promotions as analogous to casinos’ free chips: the “house” ultimately wins.
  • Broader ethical concerns include damage to online trust and “looting the commons” of public code and text to train models.