Why I don't think AGI is imminent
Debate over whether AGI is already here
- Some argue “AGI is here, just weaker than expected”: current LLMs plus basic tools can already do most white‑collar work; what’s missing is orchestration and productization.
- Others say this is “AGI-lite” or just powerful narrow tools; calling it AGI is moving the goalposts.
- A third camp thinks AGI is still 10–30 years away, if it arrives at all, with current systems more like impressive statistical parrots than minds.
Definitions and benchmarks for AGI
- Competing definitions:
  - “Can do most human knowledge work.”
  - “Can do all intellectual work any human can do” (a very high bar, closer to ASI).
  - “Self‑sustaining in its environment” (can keep itself alive and funded).
  - “Indistinguishable from humans in conversation” (Turing‑style), though many say that’s no longer a useful test.
- Alternative proposed markers: supranormal GDP growth, an AI company with no human employees, or agents that can reliably manage other agents.
Capabilities of current models
- Many report big productivity gains in coding, planning, business modeling, and math; some say frontier models outperform most humans on many reasoning tasks.
- Others report frequent logical failures, bad code structure, subtle bugs, inconsistent arithmetic, and contradictory answers to basic factual questions.
- Rough consensus that results are “mixed”: extremely useful under expert supervision, dangerous in the hands of users who can’t detect the model’s mistakes.
Limitations and architectural concerns
- Recurring worries: lack of persistent memory, fragile long‑horizon planning, poor physical reasoning, and no true learning from experience.
- Some say the transformer’s feed‑forward nature and next‑token prediction guarantee hard limits; others note that multi‑step reasoning loops already break the “purely feed‑forward” assumption.
- Debate over whether scaling current approaches is fundamentally blocked (curse of dimensionality) or still on a powerful trajectory.
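The “reasoning loop” point above can be made concrete with a toy sketch (the model stub below is purely hypothetical, not any real API): each individual call is a single feed‑forward pass, but feeding the output back in as input makes the overall computation iterative.

```python
def model_step(context: str) -> str:
    """Stand-in for one feed-forward pass of a language model.

    Toy rule: count the words seen so far and emit the next count.
    A real model would emit a predicted next token instead.
    """
    n = len(context.split())
    return str(n + 1)


def reasoning_loop(prompt: str, steps: int) -> str:
    """Autoregressive loop: each output token becomes future input,
    so the whole process is not 'purely feed-forward' even though
    each single model_step call is."""
    context = prompt
    for _ in range(steps):
        token = model_step(context)      # one feed-forward pass
        context = context + " " + token  # output fed back as input
    return context


print(reasoning_loop("think:", 3))  # → "think: 2 3 4"
```

The design point is only that iteration lives in the outer loop, not inside the (stateless) forward pass; whether that loop is enough to escape the claimed hard limits is exactly what the thread disputes.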
Embodiment and world understanding
- One side claims AGI must ground concepts in the physical world (e.g., running a robot butler, reliably cleaning toilets).
- Others counter that embodiment isn’t necessary; being paralyzed doesn’t erase human intelligence, and world models can be learned from video and simulated environments.
Economic and social impacts
- Some see current tools already displacing junior white‑collar roles and accelerating “white‑collar work as an API.”
- Concerns: loss of training pathways for juniors, growing tech debt, enshittification of information, and white‑collar work being mass‑automated before household labor is.
- Others say, despite hype, daily life looks much like 1–2 years ago; AI so far feels more like another dev tool than a civilizational rupture.
Safety and existential risk
- Fears range from adversarial persuasion (AI talking people into anything) to military control and accidental war, to AI adopting human‑like cruelty toward “lesser” species.
- Some argue AGI is not inherently a death sentence; risk depends on who wields it and how agentic it is.
Meta‑discussion
- Several commenters express fatigue: AI threads feel like endless “yes it will / no it won’t” arguments with little new evidence, while the original article briefly 404’ing became a running joke in the thread.