AI Adoption Rates Starting to Flatten Out

Interpreting the “flattening” claim

  • Many commenters argue the headline is overstated or wrong given the charts: small-firm adoption is clearly rising; large-firm adoption shows recent declines but not a clear long‑term trend.
  • Others counter that several consecutive months of decline, especially in larger firms, do start to look like a real trend, absent evidence of a transient shock (a back‑of‑envelope note after this list sketches why a sustained run reads as signal).
  • Some suggest a more accurate framing would be “stagnation” or “plateau,” especially for big companies.
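  • As a back‑of‑envelope illustration of the “consecutive declines” intuition (a coin‑flip null model assumed purely for the sake of argument, not anything the underlying data report): if month‑to‑month moves were pure noise with equal odds of rising or falling, a run of n straight declines would occur with probability

      P(\text{$n$ straight monthly declines}) = 2^{-n}, \qquad \text{e.g. } 2^{-4} = 1/16 \approx 6\%

  • So a single down month is unremarkable, but each additional consecutive decline makes the “just noise” reading less plausible.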

Definitions and data quality

  • Strong confusion and disagreement over “adoption” vs “adoption rate”:
    – Some read “rate” as a time derivative (the month‑over‑month change); others as the “percentage of firms using AI” (a level); a short sketch after this list illustrates the difference.
    – The title, axes, and accompanying text are seen as inconsistent with one another.
  • Criticisms of the charts: no y‑axis label, misleading use of “rate,” a 3‑month moving average disclosed only in a footnote, and missing grid lines.
  • The two datasets (Census vs Ramp) report adoption levels that differ by roughly 3x, raising questions about representativeness and methodology.
  • Census question wording (“use AI in producing goods or services”) may exclude a lot of incidental or exploratory use, making absolute levels hard to interpret.
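  • A minimal sketch of the two readings of “rate” and of the footnoted smoothing, using made‑up monthly figures (none of the Census or Ramp numbers are reproduced here):

      # Hypothetical monthly adoption levels (share of firms using AI); illustrative only.
      adoption_level = [0.050, 0.060, 0.080, 0.090, 0.090, 0.088]

      # Reading 1: "adoption rate" as a level, i.e. the percentage of firms using AI right now.
      # Reading 2: "adoption rate" as a time derivative, i.e. the month-over-month change in that level.
      month_over_month = [b - a for a, b in zip(adoption_level, adoption_level[1:])]

      # A 3-month trailing moving average of the level, the kind of smoothing the footnote mentions.
      moving_avg_3m = [sum(adoption_level[i - 2:i + 1]) / 3 for i in range(2, len(adoption_level))]

      print([round(x, 3) for x in month_over_month])  # [0.01, 0.02, 0.01, 0.0, -0.002]
      print([round(x, 3) for x in moving_avg_3m])     # [0.063, 0.077, 0.087, 0.089]

  • The only point of the sketch is that a level can keep rising while its month‑over‑month change shrinks to zero or turns negative, which is exactly the ambiguity readers are arguing about.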

Enterprise vs small business dynamics

  • Small firms (1–4 employees) show the cleanest, steadily rising line; some see this as the leading indicator, with larger firms to follow later.
  • Others note that large-firm adoption appears to have fallen, which is concerning given AI company valuations and capex plans.
  • Speculation that grassroots use (developers, tiny businesses) will keep growing while managers at mid‑size and large companies slow or resist adoption if they feel threatened by it.

Bubble, valuations, and macro outlook

  • Several commenters see classic bubble dynamics: huge capex and valuations assuming exponential revenue growth, but only modest or stalling adoption.
  • Some expect most AI startups to be wiped out or acquired cheaply; long‑term value may still be huge, but concentrated in a few players after a shakeout.
  • Others say flattening would be a normal point on a technology S‑curve, not necessarily an “AI winter”; the logistic sketch below makes the point concrete.
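  • As an illustration of that point (a textbook logistic form assumed purely for the sake of the sketch, not a claim about what the charts actually follow), with an assumed saturation level L, steepness k, and midpoint t_0:

      A(t) = \frac{L}{1 + e^{-k(t - t_0)}}, \qquad \frac{dA}{dt} = k\,A(t)\left(1 - \frac{A(t)}{L}\right)

  • On this curve the growth rate peaks when adoption reaches L/2 and then falls toward zero even though adoption itself keeps rising toward L, so a flattening derivative is what an ordinary diffusion curve eventually produces.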

Use cases, productivity, and developer experience

  • Mixed experiences on coding:
    – Some see LLMs as strong force multipliers for boilerplate, glue code, and minor tasks.
    – Others find quality unreliable, edge cases missed, and codebases turned into “slop” that’s hard to maintain.
  • Several developers report skill atrophy, loss of deep understanding, and reduced joy in programming when relying heavily on AI; a few intentionally cycle “no‑AI weeks” to keep skills sharp or have quit AI tools entirely.
  • Debate over whether non‑adopters will be unemployable in ~5 years vs. whether deep engineering skill (independent of AI) will remain the scarcest and most valuable asset.

Agents, UX, and mainstream adoption

  • Strong skepticism that autonomous “agents” are actually in real, unsupervised production use; most usage is interactive co‑pilot style.
  • Many note the average person doesn’t know what to do with LLMs; value will rise as AI is embedded into familiar applications and workflows, hiding complexity.
  • Growing frustration with hallucinations and inconsistent answers (e.g., between different models) erodes trust and may dampen further direct-chat adoption.