Why is AI so slow to spread?

How fast is AI actually spreading?

  • Several commenters argue AI adoption is very fast: ChatGPT hit hundreds of millions of users and “AI features” are being shoved into most products.
  • Others say that being embedded everywhere ≠ being meaningfully used; many users ignore AI buttons and just want reliable search or basic app functions.
  • Comparisons are drawn to PCs and the internet; some argue LLMs have diffused into everyday business discourse far faster than either did, but retention and real impact remain unclear.

Usefulness and productivity: mixed experiences

  • Some report major gains: automating metadata for video streaming, content summarization, internal search across tools, parsing files, and generating boilerplate code, tests, and routine docs (the metadata case is sketched after this list).
  • Others find AI slower than doing the task themselves, especially for integration troubleshooting, complex architectures, or specialized domains.
  • There’s a split: for some, AI is a “force multiplier”; for others, it adds a review-and-debug layer that cancels any benefit.
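
To make the metadata example concrete: in practice it usually means structured extraction plus validation, so nothing downstream trusts the model’s raw output. A minimal Python sketch; `call_model` is a hypothetical stand‑in for whichever LLM API is actually in use:

```python
import json

def call_model(prompt: str) -> str:
    """Hypothetical stand-in; wire in a real LLM client here."""
    raise NotImplementedError

REQUIRED_FIELDS = {"title", "language", "duration_seconds", "topics"}

def extract_video_metadata(transcript: str) -> dict:
    """Ask the model for metadata as JSON, then validate before trusting it."""
    prompt = (
        "Return ONLY a JSON object with keys "
        f"{sorted(REQUIRED_FIELDS)} describing this transcript:\n\n{transcript}"
    )
    raw = call_model(prompt)
    data = json.loads(raw)  # raises ValueError on non-JSON output
    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        raise ValueError(f"model omitted fields: {missing}")
    return data
```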

Reliability, hallucinations, and trust

  • Many comments focus on AI’s tendency to “bullshit”: wrong browser details, car torque specs, legal facts, sports trivia, or UI actions, all delivered with high confidence.
  • This unreliability is seen as disqualifying for law, medicine, safety‑critical code, and serious customer support.
  • Users want systems that say “I don’t know” instead of fabricating; current behavior undermines trust and slows adoption (a minimal abstention pattern is sketched below).
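
The “say I don’t know” wish can be partly approximated at the application layer by asking the model for an explicit abstention token and treating it as a first‑class result. A sketch under that assumption (`call_model` is again a hypothetical stand‑in); note this reduces fabrication rather than eliminating it, since the confidence judgment is itself model‑generated:

```python
ABSTAIN = "UNKNOWN"

def call_model(prompt: str) -> str:
    """Hypothetical stand-in; wire in a real LLM client here."""
    raise NotImplementedError

def answer_or_abstain(question: str) -> str | None:
    """Return an answer, or None when the model declines to commit."""
    prompt = (
        "Answer the question below. If you cannot verify the answer, "
        f"reply with exactly {ABSTAIN} and nothing else.\n\nQ: {question}"
    )
    reply = call_model(prompt).strip()
    return None if reply == ABSTAIN else reply
```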

Business models, lock‑in, and inequality

  • Fears: big vendors underprice now, then hike prices once firms have laid off staff and become dependent; AI amplifies existing corporate abuses and bias.
  • Others counter that switching providers or running local/open‑source models is feasible, so moats are shallow (see the adapter sketch after this list).
  • Debate on inequality: some see AI as a huge divider (those with tools and skills vs. everyone else); others see potential leveling, such as cheap “AI lawyers/doctors” and Harvard‑like access to education, but that hope is challenged on grounds of error rates and asymmetry (rich firms will also have better tools and prompts).
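
The “shallow moats” argument rests on chat‑style APIs being roughly interchangeable: put a small adapter interface between application code and the vendor, and switching (or falling back to a local model) becomes a configuration change. A minimal sketch; both concrete classes are hypothetical stand‑ins, not real SDK calls:

```python
from typing import Protocol

class ChatModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class HostedModel:
    """Hypothetical wrapper around a commercial vendor's API."""
    def complete(self, prompt: str) -> str:
        raise NotImplementedError("call the vendor SDK here")

class LocalModel:
    """Hypothetical wrapper around a local open-weights runtime."""
    def complete(self, prompt: str) -> str:
        raise NotImplementedError("call llama.cpp, vLLM, etc. here")

def summarize(doc: str, model: ChatModel) -> str:
    # Call sites depend only on the interface, so the vendor behind it
    # can change without rewriting application code.
    return model.complete(f"Summarize in three bullets:\n\n{doc}")
```

Lock‑in then shrinks to prompt tuning and output quirks rather than SDK surface area, which is the lock‑in skeptics’ point.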

Data, context, and integration hurdles

  • A recurring theme: models lack organizational context. They don’t know legacy decisions, hallway conversations, or nuanced product strategy; encoding that is tedious.
  • Commenters call for a new “bridge layer” between corporate data lakes and AI, with proper access control, auditability, and a UX for supplying context (a rough sketch follows this list).
  • Until then, many see AI as better for generic tasks than for deeply embedded, domain‑specific workflows.
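
What such a bridge layer might look like, reduced to its two non‑negotiables (permission filtering and an audit trail). Every name here is hypothetical, and real retrieval would be embedding‑based rather than keyword matching:

```python
import logging
from dataclasses import dataclass

log = logging.getLogger("ai_bridge")

@dataclass
class Document:
    id: str
    text: str
    allowed_groups: frozenset[str]

def retrieve(query: str, user_groups: set[str], store: list[Document]) -> list[Document]:
    """Naive keyword retrieval, filtered by the caller's group membership."""
    hits = [
        d for d in store
        if query.lower() in d.text.lower() and user_groups & d.allowed_groups
    ]
    for d in hits:
        log.info("served doc %s for query %r", d.id, query)  # audit trail
    return hits

def build_context(query: str, user_groups: set[str], store: list[Document]) -> str:
    """Prompt context containing only documents this caller may see."""
    return "\n---\n".join(d.text for d in retrieve(query, user_groups, store))
```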

Worker incentives, anxiety, and resistance

  • Non‑technical workers often see AI as a direct job threat, not a helper, especially where executives openly frame it as a way to cut headcount.
  • Some describe burnout and unrealistic expectations (“do double the work with AI”) without evidence of achievable productivity gains.
  • This produces quiet refusal or “sabotage” of AI initiatives, especially when people don’t share in the upside.

Developer workflows and coding agents

  • Enthusiasts: with clean architectures, good documentation, and carefully written “tasks,” LLMs can implement features plus tests; devs shift to specification and review (see the test‑as‑spec sketch after this list).
  • Critics: that workflow is less fun, and on large, complex codebases AI often produces incoherent designs, subtle bugs, and wrong refactors; reviewing and fixing those is as hard as writing the code yourself.
  • Some see big gains for CRUD/front‑end/boilerplate; others say senior‑level engineering (design, invariants, performance) gets little benefit.
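
The “specification and review” workflow often reduces to writing executable tests as the spec and letting the agent produce the implementation; the human reviews the diff against the tests instead of writing the code. A sketch of the human‑authored half (`slugify` is a made‑up example function):

```python
def slugify(title: str) -> str:
    """Left for the coding agent to implement; the human reviews the result."""
    raise NotImplementedError

# The human-authored spec: executable, unambiguous, cheap to review against.
def test_slugify_lowercases_and_hyphenates():
    assert slugify("Hello, World!") == "hello-world"

def test_slugify_collapses_whitespace():
    assert slugify("  a   b ") == "a-b"

def test_slugify_is_idempotent():
    assert slugify("already-a-slug") == "already-a-slug"
```

The critics’ caveat from the list above still applies: on a large codebase the hard part is the design, which tests like these can’t fully pin down.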

Hype, media narratives, and skepticism

  • Several comments criticize media like The Economist for assuming “AI is hundred‑dollar bills on the street” and blaming slow diffusion on inefficient workers or bureaucracy.
  • Others liken the atmosphere to crypto/NFTs: massive hype, weak evidence of broad, durable business value, and likely future disillusionment, though most expect AI to remain useful after any bubble pops.