Horses: AI progress is steady. Human equivalence is sudden
Steady Progress vs “Breakthrough” Moments
- Some argue AI capability improves smoothly across benchmarks, a trajectory that matches their day-to-day experience (especially in coding and question-answering).
- Others say these benchmarks are opaque, lab-designed marketing tools, and in real work they see only 5–10% productivity improvement, not “orders of magnitude.”
- Several developers report that LLMs still slow them down on anything non-trivial, owing to hallucinations and the need for deep review, though they concede that smaller code edits and CRUD-style apps can be sped up significantly.
Validity of the Horse / Engine Analogy
- Critics note that the horse-to-engine transition was not just “efficiency line go up”: internal combustion, fuel infrastructure, and mass-market cars all had to mature together, and the decline of horses took decades and varied by domain (city horses vs farm horses).
- People dispute whether the analogy is about “beings” (horses/humans) or “jobs.” Some say it really describes job categories disappearing once a threshold is crossed, not literal depopulation.
- Others find the framing dehumanizing or disturbing: plotting the collapse of the horse population and then hoping humans “get two decades” feels like economic determinism with little concern for human welfare.
Work, Inequality, and Who Benefits
- One camp is optimistic: boring admin/email/PowerPoint jobs will shrink, freeing humans for more meaningful work, as with bank tellers and ATMs.
- Another camp expects gains to flow mostly to capital: past automation hasn’t made housing, healthcare, or security more accessible, and kiosks and self-checkout often cut labor costs without lowering prices.
- Fears include: hollowing out white-collar work, weakened bargaining power, “bullshit jobs” being replaced without new good ones, and possible social unrest if many become economically irrelevant.
- Counter-arguments stress that economies still need consumers; fully discarding human labor would collapse demand, so political and regulatory “human protectionism” is likely.
Limits of Current LLMs and Path to AGI
- Many commenters emphasize structural limits: LLMs are powerful text predictors, not reasoners; hallucinations remain unsolved; they lack continuous learning, symbolic reasoning, and rich multimodal grounding.
- Others think modest architectural tweaks plus scale may be enough, and expect more “sudden” tipping points like code-assistant UIs that quietly transform workflows.
- There’s concern that companies oversell disruption (“we’ll automate office work”) to drive adoption and valuations, even as practitioners see fragile tools that must be tightly bounded and supervised.
Culture, Governance, and Ethics
- Discussion touches on oligarchic control, wealth concentration, the risk that AI undermines redistribution mechanisms, and the need for unions, regulation, or UBI.
- Some want non-proliferation–style controls if AI is truly existential; others think panic outpaces evidence and that technology should be shaped by democratic choices, not just investor incentives.