AGI is an engineering problem, not a model training problem
How AGI Is Framed: Science vs Engineering vs Training
- One camp argues AGI is primarily a science problem: we lack clear definitions of intelligence and consciousness and don’t understand how brains implement them, so there is “nothing to engineer” yet.
- Others say it’s an engineering and systems problem: current model classes (LLMs, neural nets) are powerful enough, and progress now depends on architecture, orchestration, and tooling around them rather than just bigger training runs.
- A third view: it's still largely a model/scale problem; the "bitter lesson" of AI history suggests general methods plus more compute and data keep winning, and it's premature to declare a plateau.
Definitions and Moving Goalposts
- AGI is criticized as ill-defined: is it “human-like intelligence,” “able to replace any worker,” or something like “better than humans on all tests we can devise”?
- Some predict "AGI" will become a marketing term; eventually someone will simply declare it achieved and the debate will follow.
- Several note that we already attribute “general intelligence” to humans (and maybe some animals) without rigorous tests, yet demand strict formal definitions for machines.
Consciousness, Sentience, and Agency
- Many insist intelligence does not require consciousness; others suspect we can’t reach human-like AGI without understanding subjective experience.
- There is extensive disagreement about whether consciousness is an “illusion,” emergent physical process, or even necessary to discuss at all.
- Some argue true AGI would imply moral status: anything that’s indistinguishable from a conscious agent but lacks rights looks like slavery. Others decouple AGI (capability) from sentience.
- A minority suggest “intelligence requires agency”: if systems have no real stake or goals of their own, they’re just tools, not minds.
Possibility in Principle (Physics, Computation, Gödel)
- One side: since human brains are physical systems, electronic AGI is possible in principle unless new physics or math shows otherwise.
- Counterpoints: practical impossibility (sheer scale), unknown mechanisms of emergence, or theoretical limits (uncomputability, NP-hardness), though others argue these objections conflate the hardness of problem classes with the capabilities of the agents that tackle them.
- Gödel's incompleteness theorems are raised and largely dismissed as irrelevant to building useful general problem-solvers.
Limits of Current LLMs and Scaling
- Many see LLMs as powerful but fundamentally pattern-matchers: great at language and exams, brittle in long-horizon reasoning, embodiment, and consistent world modeling.
- Claims that “LLMs have plateaued” are contested; some users report clear ongoing improvements, others see diminishing returns and more obvious brittleness in real codebases.
- There’s skepticism that ever-larger probabilistic text models alone can yield AGI, likened to trying to reach space by making bigger flapping-wing machines.
Architectures, Memory, and Self-Learning
- Strong interest in memory systems beyond context windows: persistent, searchable, and evolving memory that supports belief updates over time.
- Retrieval-augmented generation (RAG) and simple external stores are seen as insufficient; context overload degrades performance, and current models lack robust long-term state.
- Many argue for continual learning: models that update their own weights or user-specific submodels at runtime without catastrophic forgetting, possibly driven by prediction failure and multi-modal feedback.
- This collides with product concerns (reliability, rollback, "last-known-good" states), making self-updating systems hard to commercialize safely; a minimal sketch of that rollback pattern follows this list.
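Below is a minimal, illustrative sketch of the "last-known-good" concern: a toy self-updating model (a single bias parameter standing in for real weights) learns from prediction failures, but every update is gated by a held-out check suite and rolled back if quality regresses. All names (SelfUpdatingModel, run_with_rollback, heldout_quality) and the thresholds are hypothetical, not any commenter's actual system.

```python
import copy
import random


class SelfUpdatingModel:
    """Toy stand-in for a model that adjusts its own parameters at runtime."""

    def __init__(self) -> None:
        # A single bias parameter stands in for real network weights.
        self.params = {"bias": 0.0}

    def predict(self, x: float) -> float:
        return x + self.params["bias"]

    def update(self, x: float, target: float, lr: float = 0.1) -> None:
        # Nudge the bias toward reducing the observed prediction error.
        error = target - self.predict(x)
        self.params["bias"] += lr * error


def heldout_quality(model: SelfUpdatingModel, checks) -> float:
    # Mean absolute error on a fixed check suite (lower is better).
    return sum(abs(model.predict(x) - y) for x, y in checks) / len(checks)


def run_with_rollback(stream, checks):
    model = SelfUpdatingModel()
    last_known_good = copy.deepcopy(model.params)
    best_quality = heldout_quality(model, checks)

    for x, target in stream:
        # Prediction failure is the trigger for online learning.
        if abs(model.predict(x) - target) > 0.05:
            model.update(x, target)
            quality = heldout_quality(model, checks)
            if quality > best_quality:
                # The update made the held-out checks worse: roll back.
                model.params = copy.deepcopy(last_known_good)
            else:
                # The update held up: promote it as the new last-known-good state.
                last_known_good = copy.deepcopy(model.params)
                best_quality = quality
    return model


if __name__ == "__main__":
    random.seed(0)
    # Held-out checks encode the "true" relationship target = x + 1.
    checks = [(float(x), float(x) + 1.0) for x in range(5)]
    # A stream of mostly clean examples with occasional corrupted feedback.
    stream = []
    for _ in range(50):
        x = random.uniform(0.0, 5.0)
        corrupted = random.random() < 0.2
        stream.append((x, x + (random.uniform(3.0, 6.0) if corrupted else 1.0)))
    model = run_with_rollback(stream, checks)
    print("learned bias:", round(model.params["bias"], 3))
```

The gate is the product concern in miniature: runtime learning only becomes shippable if a known-good state can always be restored when an update degrades behavior.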
Beyond LLMs: Missing Pieces
- Proposed missing components include:
  - World models and action/observation loops.
  - Composable program synthesis and tool use.
  - Emotion-like reward systems (curiosity, fear, avoidance) to drive exploration and prioritization.
  - Recursive, loop-based architectures more like brains than one-pass transformers.
- Some think LLMs are "semantic memory" only; other cognitive modules (perception, control, motivation) still need to be built and integrated (see the sketch below).
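As a rough illustration of how these pieces might compose, here is a hedged sketch of an observe/act loop: a toy world model tracks beliefs and reports surprise, an LLM-like component is consulted only as semantic memory, and a curiosity-style signal decides between exploring and exploiting. WorldModel, semantic_memory, and choose_action are invented stand-ins, not a reference architecture from the discussion.

```python
import random
from dataclasses import dataclass, field


@dataclass
class WorldModel:
    """Tracks beliefs about the environment and reports surprise (prediction error)."""
    beliefs: dict = field(default_factory=dict)

    def surprise(self, obs: dict) -> float:
        # Total mismatch between predicted and observed values.
        return sum(abs(v - self.beliefs.get(k, 0.0)) for k, v in obs.items())

    def update(self, obs: dict) -> None:
        # Exponential smoothing stands in for real world-model learning.
        for k, v in obs.items():
            old = self.beliefs.get(k, 0.0)
            self.beliefs[k] = old + 0.5 * (v - old)


def semantic_memory(query: str) -> str:
    """Placeholder for an LLM used only as a knowledge lookup, not as the whole agent."""
    canned = {
        "hot": "high readings usually precede faults; slow down and inspect",
        "normal": "conditions nominal; continue the current plan",
    }
    return canned.get(query, "no stored knowledge")


def choose_action(surprise: float) -> str:
    # Curiosity-like drive: explore when observations contradict the world model,
    # otherwise exploit the current plan.
    return "explore" if surprise > 1.0 else "exploit"


def agent_loop(steps: int = 5) -> None:
    random.seed(0)
    world = WorldModel()
    for t in range(steps):
        obs = {"temperature": random.uniform(0.0, 4.0)}   # stubbed perception module
        s = world.surprise(obs)                           # compare against predictions
        world.update(obs)                                 # learn from the observation
        advice = semantic_memory("hot" if obs["temperature"] > 3.0 else "normal")
        action = choose_action(s)                         # motivation/control module
        print(f"t={t} surprise={s:.2f} action={action} memory='{advice}'")


if __name__ == "__main__":
    agent_loop()
```

The stubbed perception and canned memory lookup mark exactly the modules the comments say still need to be built and integrated around the language model.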
Biological Analogies and Skepticism
- Comparisons to animals (ants, bees, spiders, rats) highlight how far machines lag in robust 3D navigation, adaptation, and self-directed behavior.
- Evolution’s timescale and complexity are invoked to argue AGI is likely decades away, if possible at all, and that LLMs may be a costly dead-end similar to previous AI fads.