I don't think AGI is right around the corner

State of AGI Timelines

  • Many commenters are more skeptical than a year or two ago: LLM hype feels past its peak, AGI is seen as “not right around the corner,” with estimates often in the 2030s–2040s or “maybe never with current hardware.”
  • Others point out that the article’s author is actually bullish: ~50% odds of AGI by the early 2030s, with even misaligned ASI by 2028 considered “plausible.”
  • A minority think timelines are much shorter, driven by autonomous money-making agents and huge economic incentives once such systems exist.

What Counts as AGI?

  • No shared definition. Competing views:
    • “Median human” or “bottom X% of humans” across all cognitive tasks.
    • Ability to replace white‑collar workers while operating without supervision, or to run companies.
    • Systems that can learn and self‑improve (e.g., design, train, and deploy better models).
    • Stronger notions: simulate reality faster than real time, or match humans in “virtually every task.”
  • Some argue current LLMs are already “AGI by some definition” (better than many humans on many intellectual tasks); others insist that’s redefining the term down.

Limits of Current LLMs

  • Core critiques:
    • No genuine continual learning: a deployed model’s preferences and skills can’t reliably be updated through feedback; weights are essentially “frozen” after training.
    • Brittle reasoning and character‑level failures (e.g., counting letters, listing “states with W”) despite fluent chains of thought; often blamed on tokenization (see the sketch after this list) but also read as deeper evidence of shallow understanding.
    • Hallucinations and confident nonsense; poor at robust planning, physical reasoning, and open‑ended tasks like operating a vending machine in the real world.
    • No obvious track record of novel scientific discoveries purely from “having all human knowledge.”
  • Supporters counter that:
    • Emergent capabilities across domains are a form of intelligence.
    • Speed and scalable parallelism alone could be enough to reach “superhuman” performance once baseline reasoning is good enough.
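
A minimal sketch of the tokenization point above, assuming the Python `tiktoken` BPE library; the encoding name and the exact subword split are illustrative, not claims from the discussion:

```python
# Illustrative only: why character-level tasks are awkward for LLMs.
# Assumes the `tiktoken` BPE tokenizer; the encoding name and the exact
# subword split below are examples, not facts from the discussion.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
word = "strawberry"

token_ids = enc.encode(word)
pieces = [enc.decode([t]) for t in token_ids]

print(token_ids)  # a handful of integer IDs
print(pieces)     # subword pieces, e.g. something like ['str', 'aw', 'berry']

# The model consumes those few IDs, not ten individual letters, so
# "how many r's are in strawberry?" must be answered by reasoning over
# units that never expose the characters directly.
```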

Continual Learning, Architectures, and Paths to AGI

  • Many see the lack of adaptive, test‑time learning as the main blocker; human‑like skill acquisition (e.g., learning the saxophone, doing taxes year after year) doesn’t fit today’s train‑once paradigm.
  • Proposed directions:
    • Agent systems with tools, search, RL, simulations, and explicit world models.
    • Hybrid systems (logic, Prolog, heuristics, specialized sub‑agents) orchestrated by LLMs; see the sketch after this list.
    • Hardware advances (memristors, analog or neuromorphic chips; or even biological “vat brains”).
  • Others argue scaling diverse data and compute has historically beaten hand‑crafted continual‑learning schemes and may keep doing so.
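
A minimal sketch of the “LLM as orchestrator” direction above; `call_llm`, the tool names, and the reply format are hypothetical placeholders rather than any specific framework’s API:

```python
# Toy sketch: an LLM routing a task through specialized tools/sub-agents.
# `call_llm` is a hypothetical stand-in for any chat-completion API.
from typing import Callable

def call_llm(prompt: str) -> str:
    """Hypothetical model call: expected to return 'tool: argument' or 'FINAL: answer'."""
    raise NotImplementedError("plug in a real model API here")

TOOLS: dict[str, Callable[[str], str]] = {
    "search": lambda query: f"(pretend search results for {query!r})",
    "calc": lambda expr: str(eval(expr)),  # toy only; never eval untrusted input
}

def run_agent(task: str, max_steps: int = 5) -> str:
    transcript = f"Task: {task}"
    for _ in range(max_steps):
        reply = call_llm(transcript)
        if reply.startswith("FINAL:"):
            return reply[len("FINAL:"):].strip()
        name, _, argument = reply.partition(":")
        tool = TOOLS.get(name.strip(), lambda _arg: "unknown tool")
        transcript += f"\n{reply}\nResult: {tool(argument.strip())}"
    return "No final answer within the step budget."
```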

Intelligence, Consciousness, and “Souls”

  • Ongoing philosophical dispute:
    • Materialist view: brain is computation; in principle reproducible on silicon (or in wetware with a Python API).
    • Non‑material or “cardinality barrier” arguments: biological systems might use effectively uncountable analog state spaces or unknown physics, putting true sentience beyond current digital machines.
  • Distinction between intelligence vs consciousness vs wisdom:
    • LLMs may be “intelligent” at language but lack embodiment, caring, participatory knowing, and self‑correction in the human sense.
    • Some emphasize that current systems are “word fountains” without shared lived reality or genuine understanding.

Economic and Social Impact Without AGI

  • Broad agreement that sub‑AGI systems can still be highly disruptive:
    • Many repetitive, low‑skill, or text-heavy jobs could be automated or radically changed.
    • Risk of “good enough and cheaper” systems replacing humans even when quality is worse (IKEA vs master carpenter analogy).
    • Concern that layoffs are being justified by “AI” more than actually caused by new capability, and that power will concentrate if very capable systems remain expensive and centralized.
  • Some hope this leads to more leisure and UBI; others worry about mass unemployment, political instability, and declining well‑being even as GDP rises.

Hype, Incentives, and Trust

  • Strong suspicion that near‑term AGI claims are driven by:
    • Fundraising, valuations, and subsidies for AI labs and startups.
    • Media and influencer incentives to declare “AGI is here” with each new model.
  • Academics and some practitioners often report more conservative views but feel ignored.
  • Several see AGI as a “moving goalpost” or even a grift: definitions shift so it’s always a few years away, yet always just close enough to sell.

Safety, Doom, and “Intelligence Explosion”

  • Some think current LLMs are already enough to start an “intelligence explosion” once they reliably help improve their own successors; others think their lack of real discovery and self‑direction undermines that narrative.
  • A fraction worry about misaligned ASI in the 2020s and discuss prepping (off‑grid power, bunkers), though others call this speculative and unfalsifiable.
  • Many place more weight on nearer‑term harms: economic shocks, surveillance, manipulation, military uses, and ethically fraught paths like human‑cell “computers.”