A bear case: My predictions regarding AI progress

Impact on software jobs, juniors, and outsourcing

  • Several comments argue current “AI layoffs” are mostly standard cost-cutting and offshoring, with AI serving as a fig leaf. Outsourced and junior devs allegedly paste in low-quality AI code that management accepts because it “works”.
  • Others counter that a real reduction in demand for humans writing code is inevitable as models improve, though there will still be demand for people who can direct tools effectively.
  • Concern is high about juniors: if one mid-level dev + LLM replaces many juniors, who trains the next generation of seniors? Some suggest putting seniors back on production work instead of training waves of juniors.
  • LLMs are seen as analogous to Squarespace/Wix: they eat low-end work that was always vulnerable, not complex systems design. Outsourcing itself may become “just an LLM proxy”.

Capabilities and limitations of LLMs for coding

  • Experiences are sharply split. Some say LLMs (especially with good tooling like agentic editors) are a major step-change and now core to their productivity, comparable to a mid-level dev on many web tasks.
  • Others find them shallow: good at boilerplate, tests, regexes, CRUD apps and log untangling; bad at deeper design, non-standard algorithms, complex edge cases, and maintaining coherence in large codebases or long-running “agents”.
  • A recurring pattern: AI is likened to an overeager but unreliable intern—fast at producing plausible answers, poor at admitting ignorance, and often demanding more review effort than writing the code yourself would.
  • Several note there has been no “second Copilot moment” since 2021: improvements since GPT‑3.5 are seen by some as incremental, not transformative.

Hype, progress, and AGI prospects

  • The original bear case—LLMs plateauing, GPT‑5/6 bringing only quality-of-life and benchmark gains—is both endorsed and disputed.
  • Optimists point to ongoing hardware scaling, many smart researchers, and clear economic value; they expect AGI or near-AGI within years. Skeptics cite fundamental limits (computability, verification, math/logic gaps) and warn that more compute may only linearly improve pattern-matching.
  • LessWrong’s usual AI-doomer stance is noted; some see this essay as an internal correction against overly aggressive near-term AGI timelines.

Economic, ethical, and practical themes

  • Multiple comments stress that AI makes cheating (academic and professional) easier, eroding trust in junior output; verification and responsibility remain human.
  • There is debate over which professions are most threatened: some highlight engineering and back-office roles; others emphasize that licensure, liability, and interpersonal interaction will slow full replacement.
  • Many agree on a “sidekick” future: LLMs embedded as assistants for coding, research, internal knowledge search, writing, and planning—powerful amplifiers for competent humans, but far from autonomous AGI.