AGI is far from inevitable
Is AGI Inevitable or Very Far Off?
- One camp sees AGI as inevitable because human brains already exist as a proof of concept; given enough time, scale, and engineering, something comparable should be buildable.
- Another camp argues we lack a working theory of cognition and consciousness, so claims of inevitability are unjustified; we don’t even know if current approaches are on the right track.
- Some push back on absolutist claims in both directions: saying “never” is as speculative as saying “soon.”
Limits of Current LLMs and Architectures
- Many see LLMs as sophisticated “word parrots”: excellent at language mimicry, coding help, and surface-level reasoning, but lacking continuous learning, robust world models, and reliable arithmetic or causal understanding.
- Training–inference separation and fixed weights are seen as incompatible with truly general, constantly learning intelligence.
- Others caution against strong “LLMs can never X” claims; empirical progress has repeatedly invalidated earlier limits, and architectural tweaks (e.g., better number representations, tool use, online learning) may change capabilities.
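One of the “architectural tweaks” mentioned above, tool use, can be sketched in a few lines: instead of having a model predict digits token by token, detected arithmetic is routed to a deterministic evaluator, and only non-arithmetic queries fall through to the model. This is an illustrative toy, not any real framework’s API; `fake_llm` is a hypothetical stand-in for a language model.

```python
import ast
import operator

# Map AST operator node types to their exact arithmetic implementations.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def calc(expr: str):
    """Safely evaluate a pure-arithmetic expression by walking its AST."""
    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](ev(node.operand))
        raise ValueError("not a pure arithmetic expression")
    return ev(ast.parse(expr, mode="eval"))

def fake_llm(query: str) -> str:
    # Hypothetical placeholder for a language-model call.
    return "[model-generated text for: " + query + "]"

def answer(query: str) -> str:
    """Toy dispatcher: hand arithmetic to the tool, everything else to the model."""
    try:
        return str(calc(query))  # exact result, unlike next-token prediction
    except (ValueError, SyntaxError):
        return fake_llm(query)

print(answer("12345 * 6789"))          # tool path: exact product
print(answer("why is the sky blue?"))  # model path
```

The point of the sketch is the division of labor: the model never has to be good at digit manipulation, because a trivially correct subsystem handles it, which is why critics argue “LLMs can never do X” claims are fragile.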
Critique of the Van Rooij Paper / Press Release
- Several commenters find the press-release claim (“AGI via ML is intractable / would exhaust resources”) overstated or poorly supported.
- The paper’s formal result is criticized as hinging on an unrealistically strong definition of an “AI trainer” that must learn any behavior pattern, making the intractability result less relevant to real systems.
- Some see it as similar to earlier theoretical “no free lunch” / complexity arguments that don’t map well to messy, heuristic engineering.
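The shape of the criticized definition can be paraphrased schematically. This is our rendering of the summary above, with illustrative notation, not the paper’s exact formalism:

```latex
% Schematic paraphrase of the "AI trainer" framing criticized above.
% Notation is illustrative, not taken from the paper.
\[
  \exists\, T \;\; \forall\, B \in \mathcal{B} :\quad
  T\bigl(\text{samples of } B\bigr) = M_B
  \quad\text{with}\quad M_B \approx B
\]
% The intractability argument attaches to the universal quantifier over
% the very large behavior class \mathcal{B}. Critics note that real ML
% systems only need to fit a structured subset
% \mathcal{B}' \subsetneq \mathcal{B}, so worst-case hardness over all of
% \mathcal{B} need not constrain them.
```

On this reading, the dispute is less about the proof itself than about whether the universally quantified trainer is a fair model of what engineers actually build.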
Brains, Compute, and Evolution
- A counter-argument to the “would exhaust resources” claim: human brains achieve human intelligence with modest energy in a finite volume, so physics doesn’t forbid similar efficiency in machines.
- Others emphasize we still can’t simulate even simple animal brains end-to-end, and biological learning is far more data- and energy-efficient than current ML.
- Evolution is cited both as proof that general intelligence is physically possible and as a reminder that timescales and pathways may be very long and indirect.
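The “modest energy” point in the first bullet can be made concrete with a rough calculation. The figures below are widely cited round estimates (brain ≈ 20 W; one modern datacenter GPU ≈ 700 W board power), not numbers taken from the thread:

```python
# Back-of-envelope energy comparison behind the "modest energy" bullet.
# Both power figures are round, commonly cited estimates, not measurements.

BRAIN_POWER_W = 20.0   # human brain: roughly 20 watts
GPU_POWER_W = 700.0    # one modern datacenter GPU: ~700 W board power

HOURS_PER_YEAR = 24 * 365  # ignoring leap years for a rough estimate

brain_kwh_per_year = BRAIN_POWER_W * HOURS_PER_YEAR / 1000
gpu_kwh_per_year = GPU_POWER_W * HOURS_PER_YEAR / 1000

print(f"brain:   ~{brain_kwh_per_year:.0f} kWh/year")
print(f"one GPU: ~{gpu_kwh_per_year:.0f} kWh/year")
print(f"ratio:   ~{gpu_kwh_per_year / brain_kwh_per_year:.0f}x per device")
```

A single GPU draws about 35 brains’ worth of power, and training clusters use thousands of them, which is the gap both sides of the argument point at: proof that far greater efficiency is physically possible, and evidence of how far current ML is from it.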
Definitions, Benchmarks, and Moving Goalposts
- Disagreement over what “AGI” should mean: human-like consciousness, theorem-level creative reasoning, broad task coverage, or simply economic displacement.
- Some expect goalposts to keep shifting as AI gains specific abilities; others defend revising definitions as legitimate scientific refinement, not bad faith.
Value of Narrow AI Without AGI
- Several commenters argue that even without AGI, current and near-term systems can massively impact the economy: code generation, specialized robots, self-driving, administrative tasks.
- Others note historical examples (e.g., self-driving cars, Go, chess) where impressive but narrow progress did not straightforwardly generalize to “intelligence” in the human sense.
Philosophical and Ethical Undercurrents
- The thread touches on materialism vs. dualism (souls, the “hard problem” of consciousness) and whether replicating intelligence would falsify certain religious views.
- Some worry more about hype, overpromising, and social impacts (labor, power concentration, misuse) than about true AGI arriving soon.