Why didn't AI “join the workforce” in 2025?

AI layoffs, productivity, and hype

  • Commenters dispute the idea that 2023–25 tech layoffs were meaningfully “because of AI.”
  • If engineers were really 5–10x more productive, boards would be hiring aggressively to capture profit and market share, not laying off staff while holding growth flat.
  • Several argue current behavior (cautious spending, minimal new initiatives) is evidence that leaders don’t yet believe their own strongest productivity claims.

What “joining the workforce” even means

  • The original “agents will join the workforce in 2025” quote is criticized as vague; some interpret it as autonomous staff replacements, which clearly didn’t happen.
  • Others say AI did join the workforce if you include humans using tools (e.g., devs with Claude Code/Copilot, office workers with ChatGPT), even though AI is not an independent employee.

Where AI is actually used today

  • Strong evidence of adoption as a tool rather than a worker:
    • Programming assistance, refactors, test generation, and “agentic” CLI flows that run tools and edit code.
    • Insurance examples: extracting policy data into structured ontologies, drafting email replies from account context, automating certificate-of-insurance workflows with large time and cost savings.
    • B2B SaaS: faster content drafts, sales outreach, meeting summaries, synthetic demo content.
    • Heavy student use for homework and the collapse of traditional homework-help sites.
    • Content industries (logos, low-end design, copy, spam, video/blog/text flood) already materially altered.
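The insurance extraction use case above hinges on mapping free-form model output into a fixed schema and rejecting anything that doesn't fit, rather than trusting the model's prose. A minimal sketch of that validation step, assuming the model's reply arrives as JSON text (the `Policy` fields and `raw_reply` shape here are hypothetical, not from any specific vendor):

```python
import json
from dataclasses import dataclass

@dataclass
class Policy:
    """Hypothetical target schema for extracted policy data."""
    policy_number: str
    insured_name: str
    coverage_limit_usd: int

def parse_policy(raw_reply: str) -> Policy:
    """Coerce model output into the schema; raise on any mismatch so a
    human (or a retry loop) steps in instead of bad data flowing on."""
    data = json.loads(raw_reply)
    return Policy(
        policy_number=str(data["policy_number"]),
        insured_name=str(data["insured_name"]),
        coverage_limit_usd=int(data["coverage_limit_usd"]),
    )

# A well-formed reply parses; a malformed one raises KeyError/ValueError.
ok = parse_policy(
    '{"policy_number": "P-123", "insured_name": "Acme Co",'
    ' "coverage_limit_usd": 1000000}'
)
```

The point of the dataclass is that downstream code only ever sees typed, validated fields, which is what makes the "large time and cost savings" claims compatible with the human-validation caveats in the next section.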

Limits, reliability, and reasoning

  • Many emphasize that AI output still needs human validation; where quality, safety, or regulation matters, fully automated workflows are rare.
  • LLMs excel where there’s either:
    • Strong external validators (compilers/tests for code), or
    • Low need for factual precision (marketing fluff, fiction, generic business prose, images).
  • Browser/GUI agents are singled out as still weak; text-only, tool-using agents (shell, scripts, APIs) work much better.
  • Multiple commenters argue LLMs lack robust reasoning and are “truthy” but error-prone, making them unsuitable as genuine autonomous staff.
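The "strong external validator" pattern from the list above can be sketched as a generate-check-retry loop. In this toy version the candidate strings stand in for successive model outputs (in practice they would come from an LLM call, which is omitted here), and Python's built-in `compile()` plays the role of the compiler/test suite that accepts or rejects each attempt:

```python
# Generate-and-validate loop: accept a candidate only if an external
# checker passes. This is why code generation works better than tasks
# with no validator: errors are caught mechanically, not by trust.

def first_valid(candidates):
    """Return the first candidate that the validator accepts, else None."""
    for src in candidates:
        try:
            compile(src, "<candidate>", "exec")  # external validator
        except SyntaxError:
            continue  # reject and fall through to the next attempt
        return src
    return None

CANDIDATES = [
    "def add(a, b) return a + b",        # syntax error: rejected
    "def add(a, b):\n    return a + b",  # valid: accepted
]

accepted = first_valid(CANDIDATES)
```

For marketing fluff or generic prose there is no equivalent of `compile()`, which is the asymmetry the bullets above are describing: the loop degrades to human review or to shipping unchecked output.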

Economy, bubble, and attention

  • Some see early macro impact (GDP “overperformance”) but others attribute it mostly to datacenter and GPU capex, not productivity gains.
  • Debate over whether this is an “AI bubble” similar to dot-com: widespread overinvestment vs. long-term eventual transformation.
  • Broad agreement with the article’s call to stop fixating on near-term AGI predictions and instead evaluate present-day, concrete capabilities and harms.