Ask HN: Has AI stolen the satisfaction from programming?
Loss of Satisfaction and Sense of “Ownership”
- Several commenters resonate with the feeling that AI makes both philosophy and programming feel less “theirs”: if an LLM can generate or endorse an idea, it feels less meaningful; if it can’t, the idea feels invalid.
- For coding, some say doing it by hand now feels slow and pointless, while doing it with AI feels like the work doesn't "count," as if the credit flows to the model.
- This feeds impostor syndrome and a sense that once-rigorous crafts (philosophy, politics, programming) are being cheapened.
AI as Accelerator, Not Thinker
- Many argue the premise “AI automates the thinking” is wrong in practice: models can’t truly reason, and using them without understanding causes technical debt and emergencies.
- Others see AI as a junior dev or a library: you still design the system, decompose problems, direct the architecture, and review everything.
Learning, Hobbies, and “Worthwhile” Problems
- A core lament: toy projects (toy databases, Redis clones, parsers) used to be joyful learning exercises; now they feel "one prompt away" and thus not worth doing.
- Counterpoints:
  - People could already have copied existing GitHub repos; that didn't kill the joy before.
  - Hobbies are intrinsically "inefficient" (like touring by bike instead of plane); it's okay to keep doing small projects for learning.
  - New "games" exist, like trying to outperform the LLM or tackling areas with little training data.
Quality, Reliability, and Copyright
- Some find LLM output banal, wrong, or suitable only for boilerplate and tests, and dangerous for critical or novel work unless deeply reviewed.
- Others report large productivity gains (rewriting major apps, adding many features solo).
- Debate over whether common code is “boilerplate” or protected expression; some worry AI hides de facto code copying.
Workplace Culture and Expectations
- Several say the real problem is organizational: pressure to ship AI-generated code without understanding it, and expectations of "10x output."
- Others report the opposite culture: devs are expected to fully understand and be responsible for AI‑assisted code.
Analogies, History, and Diverging Reactions
- Analogies range from Ikea assembly vs. woodworking and hand saw vs. table saw, to cameras vs. painting and record players vs. instruments.
- Historical parallels are drawn to prior tooling waves (digitalization in surveying, IDEs, libraries).
- Reactions are split: some feel joy and empowerment are higher than ever; others avoid AI entirely to preserve the "grim satisfaction" of solving problems themselves.