I started programming when I was 7. I'm 50 now and the thing I loved has changed
AI-Generated Writing & Reader Trust
- Many commenters believe the essay itself was heavily LLM‑assisted, citing telltale stylistic patterns (“It wasn’t just X — it was Y”, short punchy fragments, LinkedIn‑style tone).
- This triggers distrust: if the writing is partly automated, readers can't assume the writer did the hard thinking, so why invest the effort to parse it?
- Several argue LLM prose “poisons discourse”: accusations of AI authorship become default, raising false positives and undermining good-faith conversation.
- Some distinguish coding from writing: AI-written code can be tested; AI-written prose can’t be easily “run” to check its truth.
Joy of Coding vs Delegating to AI
- A large camp says the core pleasure is writing code and debugging; letting an LLM do that feels like letting an AI play your video games for you. The sense of achievement is gone.
- Others feel the opposite: AI removed drudgery (CRUD, boilerplate, config, tests), revived the “magic” they felt as kids, and lets them build long-dreamed projects.
- Several describe a hybrid: use AI as a “supercharged autocomplete” or junior dev, but still design architecture, name functions, and hand‑code nontrivial parts.
Is AI Just Another Abstraction Layer?
- Some frame LLMs as the next step after C, Python, or ORMs: higher-level tools that free humans to focus on design, not syntax.
- Detractors argue it’s qualitatively different: non‑determinism, opaque reasoning, and producing “code‑like text” rather than well‑specified transformations.
- There’s tension between “the abstraction tower was already huge” and “this is the first layer where you genuinely can’t fully understand what’s happening.”
Labor, Economics, and Luddite Fears
- Many express classic Luddite anxiety: AI devalues labor, shifts power to capital, and will compress wages once “prompting” replaces coding as the scarce skill.
- Historical analogies (spinning jenny, blacksmiths, outsourced labor, trickle‑down economics) are used to predict fewer, more elite programming jobs and a race to the bottom for the rest.
- Others insist expertise will still matter; each frontier model lowers the bar for some tasks but raises the ceiling of what a single expert can deliver.
Code Quality, “Slop,” and Training
- Reviewers report an influx of AI‑generated “slop”: superficially plausible code that’s brittle, incoherent, or full of emoji‑filled logs, pushed with minimal understanding.
- The real problem, many argue, is not that LLMs are uniquely bad, but that weak developers cannot recognize their flaws and management prioritizes speed over quality.
- Some argue companies must explicitly train engineers how to use AI responsibly (architecture first, tests, constraints), or risk overwhelming codebases with garbage.
Coping Strategies and Shifting Identity
- Many mid‑career and older devs describe a crisis: their identity was built on craftsmanship; AI recasts them as spec‑writers, project managers, or “agent wranglers.”
- Some lean into that: they enjoy architecting systems, orchestrating agents, and treating AI as a team of infinite juniors.
- Others deliberately wall off personal projects as AI‑free zones, or ban AI contributions in open source, to preserve the craft they love.
Nostalgia, Age, and Broader Tech Disillusionment
- A recurring thread is that disillusionment began before AI: with walled gardens, cloud/subscription models, surveillance capitalism, and endless frameworks.
- Some say this is “just aging”: every generation thinks the golden era was when they were young (ZX Spectrum, early web, BeOS, EVM experiments).
- Younger developers also report the same emptiness, suggesting it’s not purely nostalgia but also about enshittification and loss of autonomy.
Domains, Careers, and the Future of Software Work
- Suggestions for “crevices” where human coding will persist: embedded systems, performance‑critical code, industrial automation, safety‑critical domains. Others think AI will eventually reach those layers too.
- Independent consultants and high‑level problem solvers feel relatively optimistic: they’re paid to deliver outcomes, not lines of code, and see AI as leverage.
- Many worry about late‑career risk: in their 40s and 50s, with kids and mortgages, there’s little runway to fully retrain if AI compresses demand for traditional dev roles.