Two narratives about AI

Developer productivity & evidence

  • Several comments debate a recent study on LLM-assisted coding (246 tasks on mature codebases) that found experienced devs believed they were faster but were ~19% slower with AI.
  • Critics argue the study used older models, gave participants little time to learn the tools, and didn’t “ground” the LLMs with project docs; supporters say it’s still the best empirical data so far and at least shows self-assessment is unreliable.
  • Others report longer-term rollouts (e.g., Copilot over 6–8 months) where productivity was flat or slightly worse at first, then improved sharply as devs learned how to use the tools.

Where AI helps vs fails in coding

  • Many see strong gains for:
    • Boilerplate, glue code, tests, small greenfield projects.
    • Common stacks with huge corpora (JS/TS, Python, Java).
    • “First pass” code review: catching nits, missed renames, doc/test inconsistencies.
  • Weak or negative results are reported for:
    • Large, complex, legacy codebases with lots of hidden constraints.
    • Systems programming, niche languages/DSLs, and critical/firmware work.
  • Several note huge variance: the same tool alternates between brilliant and useless, and results hinge on written instructions, test-first workflows, and how much context the model can see.

Error shifting & long‑term code quality

  • A recurring framing: the industry tries to push errors “left” (into types, review, tests); LLMs risk pushing them “right” (into production) if used as probabilistic code generators without deep human understanding.
  • Others counter that AI can also support shifting left (e.g., by making safer languages like Rust more accessible, or by automating low-level checks and refactors) if kept inside existing guardrails.
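  • The “shift left” idea above can be made concrete with a small illustrative sketch (not from the discussion; the `Percent` type and its invariant are invented for illustration). In a language like Rust, a constraint can be encoded in the type system so that invalid values are rejected at the boundary where they are constructed, rather than surfacing later in production:

```rust
// Hypothetical example: a newtype whose constructor enforces an invariant
// (0..=100), so downstream code never has to handle an out-of-range value.
#[derive(Debug)]
struct Percent(u8); // invariant: value is between 0 and 100 inclusive

impl Percent {
    // Validation happens once, "on the left"; failure is an explicit Option,
    // which the compiler forces the caller to handle.
    fn new(v: u8) -> Option<Percent> {
        if v <= 100 { Some(Percent(v)) } else { None }
    }
}

// Because Percent can only be built via new(), this function can assume
// a valid value without re-checking it.
fn apply_discount(price: u32, p: &Percent) -> u32 {
    price * (100 - p.0 as u32) / 100
}

fn main() {
    // The bad value is caught here, at construction time...
    match Percent::new(150) {
        Some(p) => println!("discounted: {}", apply_discount(200, &p)),
        None => println!("rejected invalid percentage"),
    }
    // ...while valid values flow through normally.
    let p = Percent::new(20).expect("20 is a valid percentage");
    println!("discounted: {}", apply_discount(200, &p));
}
```

    The point is not the specific type but the direction of travel: the check moves from runtime production behavior into a compile-time-enforced construction step, which is the kind of guardrail the counterargument says AI tooling should stay inside.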

Jobs, identity & economic fear

  • Emotional charge is higher than with crypto because AI touches core professional identity and skills, not just investments.
  • Anxiety focuses on:
    • Devaluation of developer labor and rising expectations (“same pay, more output”).
    • Fewer entry-level/junior roles if “grunt work” is automated.
    • Non-dev roles (customer support, some design/UX, copywriting) being easier to replace.
  • Some argue this is another automation wave people will adapt to; others warn that many will be pushed from “comfortable” to “barely livable” without policy responses.

Narratives, hype and what we actually know

  • One camp consists of CEOs, vendors, and executives loudly predicting near-term replacement of developers; the other of practitioners and researchers reporting far more mixed, context-dependent experiences.
  • Several commenters think calling it “no one knows anything” is itself misleading: we understand a lot about how LLMs work technically, but societal and labor-market impacts remain unclear.
  • A recurring recommendation: ignore extremes, focus on concrete use in your own domain, and treat AI as a powerful but narrow tool—not magic, not useless.