The Bitter Prediction

Emotional response and the “cheating” analogy

  • Many relate to the article’s feeling that AI codegen “breaks the game” of programming the way a cheat breaks a videogame: once you know the shortcut, normal play feels less meaningful.
  • Others have the opposite reaction: AI removes drudgery, rekindles their love of building things, and lets them finally finish ideas they previously abandoned.
  • Several distinguish loving “writing code” from loving “creating something from nothing” or solving hard problems; AI threatens only the former.
  • Some older developers note they already lived through a similar loss when low‑level, bare‑metal work disappeared; it felt bitter but not catastrophic.

Tool, productivity, and workflow

  • Experiences with coding AIs range from “10–100× force multiplier” to “no gain, sometimes worse.” A lot depends on task type, model, prompting, and expectations.
  • Common failure mode: AI gets you 70–80% of the way there, but cleanup, debugging, and verification eat the savings; “great for throwaway or hobby code, less so at professional quality.”
  • Others say it shines on boilerplate, unfamiliar APIs, translation between languages, test generation, and large repetitive transformations (e.g., normalizing thousands of datasets; see the sketch after this list).
  • Some feel guiding agents and reviewing their work is more exhausting than just coding directly.
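
For concreteness, here is a minimal sketch of the kind of “large repetitive transformation” mentioned above: normalizing a directory of CSV datasets to a common header format. The directory names, file layout, and cleanup rules are hypothetical, chosen only to illustrate the shape of the task commenters describe handing to an AI, not any specific tool or workflow from the thread.

    # Hypothetical example: normalize every CSV in raw_datasets/ to
    # lower-cased, underscore-separated headers with trimmed values.
    import csv
    from pathlib import Path

    def normalize_file(src: Path, dst: Path) -> None:
        """Write a cleaned copy of one CSV with normalized headers."""
        with src.open(newline="") as fin, dst.open("w", newline="") as fout:
            reader = csv.DictReader(fin)
            # Map original headers to normalized names.
            fieldnames = [h.strip().lower().replace(" ", "_")
                          for h in reader.fieldnames]
            writer = csv.DictWriter(fout, fieldnames=fieldnames)
            writer.writeheader()
            for row in reader:
                # Re-key each row to the normalized headers, trimming values.
                cleaned = {new: (row[old] or "").strip()
                           for old, new in zip(reader.fieldnames, fieldnames)}
                writer.writerow(cleaned)

    if __name__ == "__main__":
        out_dir = Path("normalized")
        out_dir.mkdir(exist_ok=True)
        # Apply the same transformation to every dataset in the input directory.
        for src in Path("raw_datasets").glob("*.csv"):
            normalize_file(src, out_dir / src.name)

The point several commenters make is that this kind of code is easy to specify, easy to verify by spot-checking outputs, and tedious to write by hand thousands of times over, which is exactly where they find AI assistance pays off.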

Learning, skill, and intuition

  • Strong concern that beginners who lean on AI will never develop deep understanding or “intuition,” analogous to people who can’t do mental arithmetic or navigate without GPS.
  • Worry that future engineers won’t learn architecture and complexity management, only prompting; some call this risky for long‑term system health.
  • Counterpoint: every new abstraction layer (from assembly upward) drew the same objection; AI is just the next one, and the “real skill” will shift to knowing when to trust it and how to direct it.

Code quality, maintenance, and legacy

  • Skepticism that current models routinely produce “high‑quality, efficient” code; many report naïve algorithms, subtle bugs, and noisy PRs that are hard to review.
  • Concern that AI‑generated legacy code will be harder to understand because it lacks the human “subtext” that often encodes design intent.
  • Others argue that with good docs and context, models already handle internal APIs well and can assist in refactoring and schema design, at least to an 80% first draft.

Jobs, economics, and inequality

  • Several expect fewer engineers per project and more pressure on junior roles; AI plus a few mid‑level or senior engineers might replace larger teams.
  • Debate over whether offshoring + AI will crush high‑wage “just coding” roles, pushing developers to “generate business value” rather than enjoy the craft.
  • The article’s worry that $5/day AI costs create a barrier for the global poor is contested: some say frontier models will get cheaper, others argue that energy/compute constraints or geopolitics could keep them costly.
  • Some note that most of the world has never been able to hire programmers at all; for them, even modest AI access could be a net widening of opportunity.

Future of programming and ecosystems

  • Some question whether pervasive “vibe coding” will freeze ecosystems if models lag behind new APIs; responses point to frequent retraining, large context windows, and good docs as mitigations.
  • Several predict stratification: a minority designing APIs, systems, and architectures (often still coding), and a majority of “tinker‑toy” builders assembling things via AI.
  • Underlying much of this, many argue the bottlenecks in real projects are still requirements, coordination, and organizational dysfunction; speeding up coding alone doesn’t fix that.