Now might be the best time to learn software development
Overall reaction to the essay
- Many found the piece funny, refreshing, and a useful antidote to AI doomerism about developers.
- Some disagreed with specific framings (e.g., scale of productivity improvement) but broadly agreed that AI shifts the role of developers rather than erasing it.
Historical analogies: Fortran, COBOL, SQL, “no‑code”
- Commenters drew parallels to past “anyone can program now” waves: Fortran, COBOL, SQL, QUEL, UML tools, FrontPage, Delphi, Flash, Dreamweaver, Excel, node-based tools.
- Pattern noted: tools do raise productivity and broaden who can “program,” but:
  - They don’t eliminate demand for serious developers.
  - They eventually hit inherent limits; hand-written code (or more general tools) outlasts many of them.
- SQL is highlighted as a partial success: many non‑programmers use it, yet plenty of developers still struggle with it and hide behind ORMs.
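To make that contrast concrete, here is a minimal, hypothetical sketch (Python’s stdlib sqlite3 and an invented orders table, not anything from the thread) of the kind of ad-hoc aggregation many non-programmers write comfortably in raw SQL, with an ORM-style equivalent shown only as a comment for comparison.

```python
import sqlite3

# Hypothetical data, purely for illustration (not from the discussion).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, total REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("alice", 40.0), ("bob", 15.5), ("alice", 9.5)],
)

# The kind of ad-hoc analytical query many non-programmers are comfortable
# writing directly in SQL: revenue per customer, largest first.
rows = conn.execute(
    """
    SELECT customer, SUM(total) AS revenue
    FROM orders
    GROUP BY customer
    ORDER BY revenue DESC
    """
).fetchall()

for customer, revenue in rows:
    print(customer, revenue)

# For contrast, a developer leaning on an ORM might write something like
# (SQLAlchemy-style, shown here only as a comment):
#   session.query(Order.customer, func.sum(Order.total).label("revenue"))
#          .group_by(Order.customer).order_by(desc("revenue")).all()
```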
Farming, photography, and Jevons-style economics
- The combine-harvester analogy split opinion:
  - Some see LLMs as similar: one dev can do the work of many; specialization around the core activity will expand.
  - Others say Fortran was a bigger productivity leap than current AI.
- Debate over whether Jevons-paradox dynamics (efficiency gains lowering effective prices enough to raise total demand) apply: food demand is partly inelastic, but waste and diet shifts suggest some elasticity.
- Important distinction: farmers historically owned the capital; most developers are employees, so owners may capture more gains.
- Photography analogy:
  - Digital cameras massively increased quantity and lowered entry barriers, creating more good photos and more competition.
  - Professional photographers’ income and job security declined; “good enough” photos flooded the market.
  - Some see this as a template for software: more apps, more “ok” work, tougher economics for pros.
LLMs in current software practice
- Many report genuine productivity boosts:
  - Navigating large unfamiliar codebases.
  - Generating boilerplate, tests, deployment scripts, and reports.
  - Turning “I know what to do but not where/how” into concrete steps.
- LLM-produced, “vibe-coded” apps are already creating cleanup work; commenters mention Upwork clients stuck with AI‑generated codebases they can’t maintain.
- Several compare LLMs to an overconfident junior dev: good at scaffolding, bad at edge cases and refactors, and in need of supervision (see the sketch after this list).
- Agentic, “fire-and-forget” coding is widely seen as unreliable; treating AI as an assistant, not an autonomous coder, works better.
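To ground the “good at scaffolding, bad at edge cases” point, here is a small hypothetical sketch (an invented parse_version helper plus pytest; none of it comes from the thread): the happy-path tests are the sort of scaffolding commenters say an LLM drafts quickly, while the edge-case tests below them are what a human reviewer usually still has to add.

```python
import pytest


def parse_version(text: str) -> tuple[int, int, int]:
    """Parse a 'major.minor.patch' string into a tuple of ints."""
    parts = text.strip().split(".")
    if len(parts) != 3 or not all(p.isdigit() for p in parts):
        raise ValueError(f"not a semantic version: {text!r}")
    major, minor, patch = (int(p) for p in parts)
    return (major, minor, patch)


# --- Happy-path scaffolding of the kind commenters say LLMs produce readily ---

def test_parses_simple_version():
    assert parse_version("1.2.3") == (1, 2, 3)


def test_parses_zero_version():
    assert parse_version("0.0.0") == (0, 0, 0)


# --- Edge cases a human reviewer typically has to insist on ---

def test_rejects_missing_component():
    with pytest.raises(ValueError):
        parse_version("1.2")


def test_rejects_non_numeric_component():
    with pytest.raises(ValueError):
        parse_version("1.2.beta")


def test_tolerates_surrounding_whitespace():
    assert parse_version(" 1.2.3\n") == (1, 2, 3)
```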
Learning, Stack Overflow, and fundamentals
- Many see this as a uniquely good time to learn programming:
  - LLMs fill the old “friendly Stack Overflow” role: tutoring, debugging help, alternate explanations.
  - They make early learning less lonely and more interactive; you can learn by debugging AI output.
- Strong warnings not to skip fundamentals: without mental models (performance, concurrency, security, data structures), you can’t judge or fix AI output.
- Several lament Stack Overflow’s decline and worry about the training data pipeline if fewer people share solutions publicly.
Psychological impact: support vs drain
- Some find LLMs provide valuable “psychological support”: a rubber-duck partner that breaks procrastination and keeps momentum.
- Others find them emotionally draining—“overconfident idiot” or “yes‑man” interactions that require constant correction, with no real pushback or insight.
- Some compare the drain to wading through bad Google results; poor search results and poor AI output are exhausting in different ways.
Jobs, wages, and prompt engineering
- There’s visible anxiety: recent layoffs, worse interviews, talk of “productivity and cost reduction” from management.
- Disagreement on actual unemployment levels, but consensus that perception matters more than theoretical productivity arguments.
- Debate over whether prompt engineering will:
  - Become a high‑paid specialization (if value and scarcity are high), or
  - Be low-paid because barriers to entry are low and many can do it.
- Some suggest reskilling outside software as a pragmatic hedge; others argue this is also a rare window for bootstrapping AI-era startups or building portfolio projects.
Labor power, efficiency, and distribution
- Several note that productivity gains don’t automatically flow to workers; they hinge on power, organization, and policy.
- Discussion around unions: tech is largely non-union, making it easier for firms to use AI as a pretext for wage suppression and headcount cuts.
- Others counter that individual resistance (refusing hype, careful tool choice) is possible but weaker than collective action.
Skepticism about long-term AI programming dominance
- Multiple commenters caution that most “this is the future of programming” predictions in history have been wrong.
- Even if LLMs remain useful, particular tools and workflows (like past RAD tools) may fade, so betting an entire career on one AI stack is seen as risky.