A new AI winter is coming?

Usefulness vs “Failure”

  • Many commenters reject the article’s “LLMs are a failure” framing. They point to massive real‑world use: hundreds of millions of users, heavy adoption in coding, and measurable productivity gains (e.g., full site rewrites or admin workflows done 5–10x faster).
  • Others agree LLMs haven’t lived up to AGI‑level hype and often feel “underwhelming” outside demos or narrow tasks, especially for harder reasoning or complex, long‑lived software.
  • A recurring view: LLMs are excellent assistants but poor autonomous agents; they amplify expert productivity yet let novices “shoot themselves in the foot faster.”

Economic Sustainability & Bubble Risk

  • Strong consensus that the financials are shaky: training and infrastructure are extremely costly, business models are unclear, and much corporate “AI initiative” spend appears wasted or misdirected.
  • Some expect a dot‑com‑style correction: AI stocks crash and funding dries up for marginal “AI everywhere” projects, while durable workflows (coding help, healthcare admin, translation, etc.) remain.
  • Debate over whether inference is already cheap enough for ad‑ or subscription‑supported consumer LLMs to be sustainably profitable, or whether everything is still VC‑subsidized; a back‑of‑envelope sketch follows below.
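
The inference‑cost side of this debate is essentially arithmetic. The sketch below is purely illustrative: every constant is an assumed figure, not data from the thread, and the conclusion flips if usage or per‑token costs differ by an order of magnitude.

```python
# Back-of-envelope unit economics for a consumer LLM subscription.
# Every number below is an illustrative assumption, not data from the thread.

COST_PER_1M_TOKENS = 0.50     # assumed blended inference cost, USD per 1M tokens
TOKENS_PER_QUERY = 1_500      # assumed prompt + completion tokens per query
QUERIES_PER_USER_MONTH = 300  # assumed usage of an active subscriber
SUBSCRIPTION_PRICE = 20.00    # assumed monthly price, USD

monthly_tokens = TOKENS_PER_QUERY * QUERIES_PER_USER_MONTH
inference_cost = monthly_tokens / 1_000_000 * COST_PER_1M_TOKENS
gross_margin = SUBSCRIPTION_PRICE - inference_cost

print(f"tokens/user/month: {monthly_tokens:,}")
print(f"inference cost:    ${inference_cost:.2f}")
print(f"gross margin:      ${gross_margin:.2f} per subscriber")

# Under these assumptions inference runs about $0.23/user/month, so the
# profitability debate hinges less on per-token cost than on training capex,
# free-tier users, and whether more capable (costlier) models drive up usage.
```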

Technical Capabilities and Limits

  • Critics emphasize hallucinations as intrinsic to next‑token prediction: the system must always say something plausible, with no built‑in notion of truth or “I don’t know.”
  • Others counter that hallucinations can be mitigated with tools, retrieval, and validation loops, and that many tasks (code with tests, constrained workflows) are inherently self‑checking; see the sketch after this list.
  • A large subthread disputes whether LLMs “understand” language or merely mimic it; some argue they build genuine latent representations, while others insist on purely mechanical pattern‑matching.
  • Several technically literate commenters say the article misuses complexity theory and misdescribes transformers and training history.
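
Both sides of the hallucination point are describing mechanisms. A softmax over the vocabulary always sums to 1, so plain next‑token sampling has no reserved probability mass for “I don’t know”; abstention has to come from outside the model. The validation‑loop idea in the second bullet makes that external check concrete. The sketch below is a minimal illustration under assumptions, not anyone’s production system: generate_patch is a hypothetical stand‑in for any LLM call, and pytest plays the role of the external notion of truth the model itself lacks.

```python
import subprocess

def generate_patch(task: str, feedback: str | None = None) -> str:
    """Hypothetical stand-in for an LLM call; any API could sit behind this."""
    raise NotImplementedError  # assumed placeholder, not a real library call

def validated_generate(task: str, max_attempts: int = 3) -> str | None:
    """Generate code, run the test suite, and retry on failure.

    The loop never trusts raw model output: the tests, not the model,
    decide whether an answer counts as correct. Assumes the repo's test
    suite exercises patch.py.
    """
    feedback = None
    for _ in range(max_attempts):
        patch = generate_patch(task, feedback)
        with open("patch.py", "w") as f:
            f.write(patch)
        result = subprocess.run(
            ["pytest", "-q"], capture_output=True, text=True
        )
        if result.returncode == 0:
            return patch            # tests pass: accept the output
        feedback = result.stdout    # tests fail: feed errors back, retry
    return None                     # give up: an explicit "I don't know"
```

The design point lives in the last line: with an external check, returning None becomes an expressible answer, which is exactly what raw next‑token prediction lacks.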

Employment, Society, and Information Ecosystem

  • Concerns about AI accelerating job loss (especially entry‑level), widening inequality, and creating “AI slop” that pollutes training data and the public web.
  • Some foresee regulatory and liability barriers in medicine, law, education, and policing, even if models become technically capable.
  • Others note that automating even a minority of tasks would represent huge economic value, while also expecting more surveillance, dark‑pattern uses, and low‑quality automated interactions.

Historical Analogies and AI Winter Debate

  • Analogies to steam engines, the GOFAI winters, the dot‑com bubble, and Uber: early overinvestment and a later shakeout, but lasting underlying technology.
  • Commenters split into two camps: one sees a genuine AI winter coming (funding pullback, slower progress); the other expects only a hype reset while usage and incremental improvements continue.