What if A.I. doesn't get better than this?
Exponential vs S‑Curve Progress
- One camp argues current LLM gains are just the steep part of an S‑curve; aviation and spaceflight are cited as examples where early rapid progress plateaued.
- Others counter that we can’t know where on the curve we are; 10–75‑year extrapolations are seen as speculative and historically unreliable.
- Several note that all real‑world “exponentials” saturate, but that doesn’t tell us what the ceiling is for LLMs.
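The "steep part of an S-curve" argument rests on a mathematical fact: an exponential and a logistic curve with the same early growth rate are nearly indistinguishable until well after the inflection point. A minimal numerical sketch (growth rate and ceiling are arbitrary illustrative values, not claims about LLM progress):

```python
import math

def exponential(t, r=1.0):
    """Unbounded exponential growth, normalized to 1 at t = 0."""
    return math.exp(r * t)

def logistic(t, r=1.0, ceiling=1e6):
    """Logistic (S-curve) growth with the same early rate r,
    also normalized to 1 at t = 0, saturating at `ceiling`."""
    return ceiling / (1.0 + (ceiling - 1.0) * math.exp(-r * t))

# Early on the two curves are nearly identical; they only diverge as
# the logistic approaches its ceiling -- which is why "where are we on
# the curve?" cannot be answered from recent data alone.
for t in (2.0, 10.0, 16.0):
    print(t, round(exponential(t), 1), round(logistic(t), 1))
```

This is the crux of the disagreement: both camps can fit the same observed data, and the curves only separate in the part of the future nobody has seen yet.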
Labor, Politics, and Social Risk
- Some fear mass automation without safety nets (like UBI) will push people into “scraps” work and possible unrest or revolution.
- Others think current LLMs are more augmentation tools than replacements; transformative change may be slow, like the Internet’s diffusion.
- There’s concern AI will amplify elite control and surveillance, intensifying long‑standing labor–management conflicts.
AI vs LLMs and Mislabeling
- Many dislike the article’s conflation of “AI” with LLMs, arguing AI also includes search, planning, classic ML, etc.
- Others say the linguistic battle is effectively lost: in popular usage “AI = LLM chatbot products,” and media just reflects that.
- Some see this conflation as part of a “grift” that exaggerates intelligence to justify huge investment.
Capabilities, Orchestration, and Integration
- Several argue models are already powerful; the real frontier is orchestration: multi‑step workflows, agents, tool use, and system integration.
- Even without model improvements, better protocols, sensors, and cross‑system interfaces could yield major practical impact (and also dystopian scenarios).
- Others are skeptical: if it were truly “powerful,” use cases would be more obvious and self‑justifying.
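The orchestration argument is about plumbing rather than models: value comes from chaining steps, not from any single call. A minimal sketch of the pattern, where both tools are hypothetical stand-ins (a real system would wrap model calls, search APIs, databases, and so on):

```python
# Minimal multi-step workflow: each tool transforms the running state
# and hands it to the next. The tools below are hypothetical stand-ins,
# not real integrations.

def fetch(query: str) -> str:
    """Stand-in for a retrieval/API step."""
    return f"raw data for {query!r}"

def summarize(text: str) -> str:
    """Stand-in for a model call that condenses its input."""
    return text.upper()

def run_pipeline(query: str, steps) -> str:
    """Run the steps in order, threading each output into the next step."""
    state = query
    for step in steps:
        state = step(state)
    return state

result = run_pipeline("quarterly sales", [fetch, summarize])
```

The design point of the optimists: even with frozen model quality, growing the set of `steps` (tools, sensors, system interfaces) grows what the pipeline can do; the skeptics' reply is that compelling pipelines should already be obvious.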
Economics, Business Models, and Competition
- The gap between massive AI capex and modest current revenues leads to predictions of a shakeout, or outright collapse, for firms betting solely on frontier models.
- Open and cheap competitors (e.g., DeepSeek) are viewed as limiting price power and moat formation.
- One view: big players are in a user‑acquisition phase, aiming to monetize later via ads embedded in AI outputs; whether ad budgets can support this at scale is contested.
- Debate over inference costs: some claim serving millions with low latency and high uptime is expensive; others argue that, at scale and with hardware amortized, tokens can be cheaper than human labor.
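The "tokens cheaper than labor" claim is ultimately arithmetic; a back-of-envelope sketch, where every figure is an illustrative assumption rather than a measured price:

```python
# All numbers are illustrative assumptions, not measured prices or wages.
price_per_million_tokens = 10.0   # assumed $ per 1M generated tokens at scale
tokens_per_task = 5_000           # assumed tokens to draft one routine document
human_hourly_cost = 30.0          # assumed fully loaded hourly labor cost
human_hours_per_task = 1.0        # assumed time for the same document

llm_cost_per_task = price_per_million_tokens * tokens_per_task / 1_000_000
human_cost_per_task = human_hourly_cost * human_hours_per_task

# Under these assumptions: $0.05 per task vs $30.00 per task. The contested
# part is whether latency/uptime engineering, retries, and human review
# overhead erase that gap once they are priced in.
```

The debate in the thread is precisely about which hidden costs belong on the left-hand side of this comparison.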
Have We Hit a Plateau?
- Some participants perceive slowed progress: differences between model generations feel incremental, gains from coding assistance show diminishing (roughly logarithmic) returns, and products have regressed on certain tasks.
- Others point to recent competition performance (IMO/IOI medals) and new “reasoning” models as evidence that frontier capability is still rising.
- There’s disagreement over whether 2023 expectations for GPT‑5–style breakthroughs (sometimes framed as near‑AGI) have been met.
Data, Training Paradigms, and Cognition
- One view: language is a weak, high‑level substrate for cognition, yielding broad but shallow, brittle models; future systems should learn from lower‑level or real‑world data at huge scale.
- Others stress that not all AI is linguistic; “performing” systems (vision, control, optimization) may keep advancing even if pure LLMs stall.
- Some discussion touches on human cognition: subconscious decisions appear to precede verbalization, suggesting an internal "conceptual" layer of processing distinct from language.
Reliability, Hallucinations, and Trust
- A reported test found GPT‑5 hallucinating most scientific citations (fabricated titles, authors, or mismatched journals), reinforcing claims that LLMs can’t be trusted as fact‑grounded systems.
- Some argue true trustworthiness requires integrated citation/claim‑checking pipelines outside the base model; products that solve this could be decisive.
- Others note that certain tools (e.g., web‑powered “deep research”) already partially address this, but are still imperfect.
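One way to read the "pipeline outside the base model" point: model-emitted citations are checked against an external metadata store rather than trusted directly. A hypothetical sketch (the trusted store here is a plain dict keyed by DOI; a real pipeline would query a metadata service such as Crossref):

```python
def verify_citation(claimed: dict, trusted_records: dict) -> str:
    """Check a model-emitted citation against external metadata.

    `trusted_records` is a hypothetical stand-in for a real DOI lookup;
    the model's own output is never taken on trust.
    """
    record = trusted_records.get(claimed.get("doi"))
    if record is None:
        return "unverifiable"   # DOI unknown: possibly fabricated
    for field in ("title", "journal"):
        if claimed.get(field, "").strip().lower() != record[field].lower():
            return "mismatch"   # real DOI, but wrong title or journal
    return "verified"

# Hypothetical trusted record for illustration.
store = {"10.0000/example": {"title": "An Example Paper",
                             "journal": "J. Examples"}}
```

This separates the failure modes reported in the thread: fully fabricated citations surface as "unverifiable", while real papers with mismatched journals surface as "mismatch".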
Local Models, Infrastructure, and Medium‑Term Outlook
- Several expect local or on‑device models to erode centralized providers’ margins once quality is “good enough,” especially for coding and niche tasks.
- Infrastructure, integrations, and UX are seen as lagging far behind model capability; building robust AI‑aware systems is viewed as at least half the challenge.
- Some foresee an AI hype “trough of disillusionment,” with many VC‑funded players burning out, while big incumbents survive thanks to diversified profits.