Altman on AI energy: it also takes 20 years of eating food to train a human
How to Interpret Altman’s “20 Years of Food” Comment
- Many see the analogy as dehumanizing: it reduces life to a “training cycle” and treats a human as comparable to a corporate product competing for planetary resources.
- Others argue he was only making a narrow efficiency point: “many important things use lots of energy,” not “humans are wasteful” or “GPUs should replace people.”
- Critics counter that, intent aside, the message normalizes thinking of humans and LLMs as interchangeable entities with similar claims on resources.
Energy, Training Costs, and Napkin Math
- Back-of-envelope numbers:
  - Human body from 0–20 years: ~15–21 MWh of food energy (~2,000 kcal/day sustained over two decades).
  - Modern frontier models: roughly 1–10 MW-years (≈8,760–87,600 MWh) to train.
  - Inference: ~0.1–1 kW per machine, i.e., between roughly equal to and ~10× a human's continuous ~100 W metabolic power.
- Some argue LLMs are vastly more energy-efficient per task because a single model can serve millions of users.
- Others say this ignores: data-center infrastructure, ongoing retraining, and that humans are using “pre-trained” brains shaped by evolution and human-oriented learning materials.
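The napkin math above can be sanity-checked directly. This is a sketch using the thread's rough assumptions (2,000 kcal/day for a human, a hypothetical 5 MW sustained for one year as a mid-range training run), not measured figures for any real model:

```python
# Unit conversion: 1 kcal = 4184 J, 1 kWh = 3.6e6 J
KCAL_TO_KWH = 4184 / 3.6e6

def human_food_energy_mwh(kcal_per_day: float, years: float) -> float:
    """Total dietary energy consumed over a lifespan, in MWh."""
    return kcal_per_day * KCAL_TO_KWH * 365 * years / 1000

def training_energy_mwh(avg_power_mw: float, years: float) -> float:
    """Energy drawn by a training run at sustained average power, in MWh."""
    # MW -> kW, years -> hours, kWh -> MWh
    return avg_power_mw * 1000 * 8760 * years / 1000

human = human_food_energy_mwh(2000, 20)   # falls in the ~15-21 MWh range
model = training_energy_mwh(5, 1)         # assumed 5 MW-year run
print(f"human, 20 yr of food: {human:.1f} MWh")
print(f"model, 5 MW-yr train: {model:,.0f} MWh")
print(f"ratio: {model / human:,.0f}x")
```

Even at the low end of the model-training range, one run costs hundreds of human "food lifetimes" of energy; the efficiency argument only works on a per-task, amortized-over-millions-of-users basis.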
Jobs, Post-Work, and Dystopia
- Concern that AI is destroying jobs faster than creating them, especially for older workers; "post-work" is seen as reserved for AI owners.
- Some argue eliminating jobs is the path to post-work; others say that without robust policy (e.g., UBI) it just means mass precarity.
- Discussion of regulatory capture: AI firms warning about disruption while promoting regulations that entrench their power.
- Dystopian analogies split between 1984 (surveillance, enforcement) and Brave New World (digital comforts and distraction), with AI enabling both.
Power, Elites, and Human Value
- Broader frustration with billionaires: claims that extreme wealth tends to corrupt, philanthropy often whitewashes exploitation, and very few actually divest below billionaire status.
- Some interpret Altman’s framing as symptomatic of an elite view of “useless eaters” where most humans are expendable once their labor is automated.
CEO Incentives and Communication
- Several note CEOs are selected to maximize output, not to think deeply about life or ethics, so shallow or “paperclip-like” framing is expected.
- With professional PR, commenters reject “offhand remark” defenses and argue that ambiguous, easily misread analogies from powerful figures are themselves a problem.
Transparency and Risk
- Frustration that AI leaders dismiss estimates of energy/water use as wrong while not publishing detailed numbers.
- AI existential risks are discussed; some dismiss sci-fi scenarios like Roko's Basilisk, while others assign a nontrivial probability to broader AI-driven catastrophe.