Problems the AI industry is not addressing adequately
AGI timelines, definitions, and plausibility
- Wide disagreement on timelines: claims range from “verbal-only AGI for most math in 2–7 years” to “maybe 2040 with new paradigms” to “we’re nowhere near, lacking even a theory of comprehension.”
- Definitions diverge:
  - “Median human across almost all fields” is seen by some as already “basically achieved”; others call that meaningless or wildly overstated.
  - Debate over whether solving formal, well-specified tasks counts as “general,” versus needing open-ended problem-solving without prior data.
- Some argue current LLMs show genuine “idea synthesis”; others say they only remix text and lack true understanding or originality.
Biology-inspired and alternative paradigms
- One line of argument: accurate large-scale neuron simulations (continuous time, dendritic learning, rich temporal dynamics) could yield animal-like intelligence; money, compute, and biological inspiration together make AGI by ~2040 plausible (see the neuron sketch after this list).
- Pushback: these ideas are already being tried in labs and academia; paper-churn incentives and transformer efficiency keep alternatives marginalized, but any new paradigm must beat transformers on cost–performance.
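To ground what “accurate large-scale neuron simulation” means at the smallest unit, here is a minimal sketch of a leaky integrate-and-fire neuron stepped through continuous time with Euler integration. It illustrates only the paradigm’s basic building block; the model choice, parameters, and noise are arbitrary assumptions, not anyone’s actual research code.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron, integrated in continuous
# time via Euler steps. Every parameter below is illustrative.
dt = 0.1          # ms, integration step
tau = 10.0        # ms, membrane time constant
v_rest = -70.0    # mV, resting potential
v_thresh = -55.0  # mV, spike threshold
v_reset = -75.0   # mV, post-spike reset

v = v_rest
spike_times = []
rng = np.random.default_rng(0)
for step in range(10_000):
    i_in = 20.0 + rng.normal(0.0, 5.0)   # noisy input drive (arbitrary units)
    # Euler step of dv/dt = (-(v - v_rest) + i_in) / tau
    v += dt * (-(v - v_rest) + i_in) / tau
    if v >= v_thresh:                     # threshold crossing -> spike
        spike_times.append(step * dt)
        v = v_reset                       # reset membrane potential

print(f"{len(spike_times)} spikes in {10_000 * dt:.0f} ms")
```

The dendritic-learning models the commenters have in mind are far richer than this single point neuron; the open cost–performance question is whether that added biological detail buys capability per dollar that transformers do not.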
Job-hopping and “revealed beliefs” about AGI
- The article’s inference (“people leaving leading labs → AGI not close”) is widely criticized as bad logic.
- Commenters list mundane reasons: 8–10× pay bumps, better equity, seniority, dislike of bosses, portfolio diversification, and “screw-you money,” regardless of AGI beliefs.
- Some see frequent moves and shifting data-center plans as evidence AGI rhetoric is mostly for fundraising and hype.
Company behavior vs rhetoric
- Skeptics: if AGI were imminent, firms wouldn’t focus on chatbots, sales, and engagement; they’d be in “crash program” mode. Canceled or reshaped infra plans are read as cooling expectations.
- Others: products both fund research and generate invaluable interaction logs (RLHF/agents), which may be the fastest path to better models. Chatbots are framed as “data funnels,” not just revenue (a sketch of such a record follows this list).
- Several note a pattern of extreme fear (“x-risk”) and extreme optimism being used to justify faster deployment either way.
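As a concrete reading of the “data funnel” claim, the record below sketches what a preference pair harvested from chat traffic might look like. The schema is entirely hypothetical, just the generic RLHF idea of chosen-versus-rejected responses plus a behavioral signal; it is not any lab’s actual logging format.

```python
from dataclasses import dataclass

# Hypothetical preference record mined from product interaction logs.
# Field names and signal values are illustrative assumptions, not a real schema.
@dataclass
class PreferencePair:
    prompt: str    # what the user asked
    chosen: str    # response the user accepted, upvoted, or copied
    rejected: str  # response the user regenerated away or thumbed down
    signal: str    # behavioral evidence: "thumbs", "regeneration", "copy", ...

pair = PreferencePair(
    prompt="Summarize this contract clause in plain English.",
    chosen="The clause caps liability at direct damages only...",
    rejected="This clause concerns damages, possibly.",
    signal="regeneration",
)
```

On this view, every chatbot session is a cheap source of exactly the comparison data that reward modeling needs, which is the strongest version of the “products fund and feed research” argument.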
Hallucinations, reliability, and “understanding”
- Many see persistent hallucinations as the core unsolved problem that makes full autonomy unsafe; “agents” are viewed by some as Rube Goldberg workarounds.
- Others claim hallucinations are more manageable now and that LLMs already deliver high-impact productivity gains when paired with verification, RAG, and iterative loops (sketched after this list).
- Large subthread on whether “a sufficiently good simulation of understanding is understanding,” vs the need for deeper mechanistic insight; no consensus.
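A minimal sketch of the “verification plus iterative loops” pattern referenced above: wrap the generator in a checker and retry with feedback until the output passes or a budget runs out. Both callables are hypothetical stand-ins (an LLM call and, say, a grounding check against RAG sources or a test suite), not a real API.

```python
from typing import Callable, Optional

def answer_with_verification(
    generate: Callable[[str], str],          # stand-in for an LLM call
    verify: Callable[[str], Optional[str]],  # None if OK, else an error message
    prompt: str,
    max_tries: int = 3,
) -> str:
    """Generate-verify-retry loop: feed the checker's complaint back into
    the prompt until an answer passes or the retry budget is exhausted."""
    current = prompt
    for _ in range(max_tries):
        candidate = generate(current)
        error = verify(candidate)  # e.g. check claims against retrieved sources
        if error is None:
            return candidate       # passed verification
        current = f"{prompt}\n\nYour previous answer failed: {error}. Fix it."
    raise RuntimeError("no answer passed verification within the retry budget")
```

Skeptics would call this exactly the Rube Goldberg workaround described above; proponents would answer that the loop is what turns an unreliable generator into a usable tool.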
Ethics, social impact, and desirability of AGI
- Concerns about optimizing AI for addictiveness (short-form video, flattering chatbots) rather than benefit; parallels to social-media harms.
- Climate impact of AI-driven data centers sparks debate: some see near-term fossil-heavy buildouts as irresponsible; others emphasize nuclear/renewables and argue added demand can coexist with decarbonization.
- Several argue that, if achieved, AGI would primarily serve capital as infinitely scalable labor without rights (a superhuman “CEO,” digital slaves), questioning whether AGI is even socially desirable.
Business models, moats, and sustainability
- Open questions about whether current pricing is VC-subsidized and what “true” costs would be absent cheap capital (see the back-of-envelope sketch after this list).
- Broad agreement that compute and data—not secret algorithms—are the main moats, favoring large incumbents if AGI ever arrives.
- Some expect an eventual AI bubble shakeout similar to the dot-com bust, followed by more modest, pragmatic uses; others predict “good enough and cheaper” systems will significantly disrupt labor well before any true AGI.
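The pricing question reduces to unit economics, as in the back-of-envelope sketch below. Every number is hypothetical, chosen only to show the shape of the calculation, not to estimate any provider’s actual costs.

```python
# Back-of-envelope inference unit economics; all inputs are assumptions.
gpu_cost_per_hour = 4.00          # $/GPU-hour, assumed all-in rental cost
tokens_per_sec_per_gpu = 1500     # assumed serving throughput
utilization = 0.40                # assumed fraction of capacity actually billed

billed_tokens_per_hour = tokens_per_sec_per_gpu * 3600 * utilization
cost_per_million_tokens = gpu_cost_per_hour / billed_tokens_per_hour * 1_000_000
print(f"~${cost_per_million_tokens:.2f} per million tokens")  # ~$1.85 here
# A list price below this figure implies subsidized inference; a price above
# it leaves margin that must still cover training, staff, and capital costs.
```

Under these made-up inputs the breakeven is about $1.85 per million tokens; the debate is over what the real inputs are once cheap capital stops filling the gap.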