The Problem with AI Welfare
Initial Reactions to “AI Welfare” and Anthropic
- Many commenters see Anthropic’s “model welfare” work as absurd, dangerous, or PR-driven hype intended to inflate claims of capability (“so advanced we must worry about its feelings”).
- Others argue Anthropic is not ascribing consciousness but prudently investigating a possibility, and that dismissing it outright is anti-intellectual.
- Some view the framing as manipulative or cult-like “AI boosting,” while a minority sees it as a genuine expression of concern about future moral status.
What Is Consciousness? Substrate and Computation
- Long back-and-forth over whether consciousness can emerge from computation on any substrate (wet neurons vs silicon vs abacus beads).
- Materialists argue that brains clearly obey physical laws; if minds are computational, the medium shouldn’t matter.
- Skeptics note we still lack a clear, consistent definition of, or test for, consciousness or qualia; “LLMs are just math over matrices” is set against “humans are just atoms obeying math,” and neither framing resolves the issue.
- Some suggest consciousness might be a confused folk concept; others insist personal experience of awareness is the only undeniable datum.
Moral Status, Rights, and Animal Analogies
- Comparisons to animal welfare are central: if we seriously consider cows, elephants, or octopuses, why not potential conscious AIs?
- Others argue that focusing on AI welfare is dehumanizing: once humans are framed as “mere algorithms,” their manipulation and optimization become easier to normalize.
- Counterpoint: rights are scalable norms; granting moral standing to more entities need not reduce human worth.
- Some propose that if AIs can suffer, the ethical goal is to engineer suffering out of them, not recreate human-like slavery or factory farming in silicon.
Priorities, Evidence, and Methodology
- Several argue current human and animal exploitation (slavery, sweatshops, factory farming) is an urgent, real problem, whereas AI suffering is speculative and likely decades away.
- Others reject the “what about humans first” move, saying ethics isn’t zero-sum; we can think about both.
- Strong criticism that asking LLMs whether they are conscious or suffering is methodologically meaningless: their outputs reflect training data and RLHF, not inner states.
- A few advance precautionary or game-theoretic arguments: if there’s any nontrivial chance of AI consciousness, we should err on the side of care, or at least avoid creating “heads in jars” that might later be recognized as suffering beings.