The uncertain future of coding careers and why I'm still hopeful

Future of software work and skill stratification

  • Many argue that only a minority of developers can do “hard” work (systems, compilers, engines); most do CRUD and integration work, which is exactly what LLMs are good at.
  • Some predict a profession that looks more like medicine or law: higher bar, slower path, “licensed” senior roles with explicit liability for AI-generated output; lower entry pay and longer apprenticeship.
  • Others counter that such licensing is unlikely for most software because most failures don’t directly kill people, and that juniors may ramp faster, not slower, with AI help.

AI handling “grunt work” vs creating new grunt work

  • Optimistic view: AI removes repetitive early-career tasks and lets humans focus on design, invention, and complex problem-solving.
  • Skeptical view: real “grunt work” is debugging messy legacy systems, vague bug reports, and ugly integrations—areas many say LLMs still struggle with.
  • Some claim agentic tools already help significantly with both bug-finding and glue code; others share experiences where AI-produced systems are sprawling, incoherent, “vibe-coded” messes that humans must then clean up.

Quality, hallucinations, and trust

  • Multiple examples of AI giving confident but wrong answers (search, medical side effects, setup docs, hardware instructions), sometimes contradicting itself depending on phrasing.
  • Concern: if you must fully verify every answer or PR, the productivity gain vanishes; AI may turn seniors into full-time reviewers of unreliable output.
  • There’s disagreement about whether error rates are already “good enough” (is an acceptable rate closer to 1% or 10%?) and how users could even measure that rate in practice.
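On the measurement question, one plausible approach (not proposed in the discussion; the `wilson_interval` helper and the sample counts below are illustrative) is to treat each manually reviewed AI answer as a binomial trial and put a confidence interval around the observed error rate. A sketch shows why small review samples cannot settle the 1%-vs-10% debate:

```python
import math

def wilson_interval(errors: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for an error rate estimated from n reviewed samples."""
    if n == 0:
        return (0.0, 1.0)  # no data: the rate could be anything
    p = errors / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (max(0.0, center - margin), min(1.0, center + margin))

# Hypothetical review: 3 wrong answers found in 100 checked.
lo, hi = wilson_interval(3, 100)
# The interval spans roughly 1% to 8.5%, so a hundred reviewed samples
# cannot distinguish a 1% error rate from one several times higher.
```

The width of the interval shrinks only with the square root of the sample size, which is the crux: verifying enough output to measure the rate precisely costs much of the time the tool was supposed to save.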

Economics, management behavior, and cycles

  • Several say current pain is mostly macro (interest rates, post-COVID whiplash); AI is being used as a narrative to justify layoffs, similar to past offshoring waves.
  • Others argue markets eventually punish irrational “AI cargo cults,” but note that companies, particularly monopolies and banks, can remain dysfunctional for a long time.
  • Offshoring and AI are seen as part of the same trend: arbitraging labor, hollowing out domestic middle-class work, with the main remaining moat being frontier R&D and security-critical domains.

Training, juniors, and profession shape

  • Widespread anxiety about how juniors will learn if “grunt work” is automated and seniors just supervise agents.
  • Some think early-career folks who master AI tools will quickly gain an edge; others fear LLMs will short-circuit real skill development and cause long-term quality decay.
  • Proposals include unionization and professional standards with liability; critics note this would render a large portion of the current workforce unemployable.

Ownership and politics of the “shared brain”

  • Mixed feelings about the idea that everyone’s public work feeds a “giant shared brain”: enthusiasm for collective knowledge, but strong resentment that it’s effectively owned and monetized by a few firms.
  • Open-weight models are mentioned as a partial counterbalance, but there’s debate over how competitive they really are and how licensing, “rent-seeking” platforms, and copyright will shape access.