The New AI Consciousness Paper
Conceptual Confusion About “Consciousness”
- Commenters repeatedly note that people mix up consciousness, sentience, intelligence, and “aliveness.”
- A major distinction:
  - Phenomenal consciousness (“what it’s like” to be something; qualia).
  - Access consciousness / metacognition (having and using information about internal states).
- Some argue phenomenal consciousness is our only indubitable datum (“I experience”); others think introspection reveals nothing mysterious and that “consciousness” is just inner processing plus bad language.
- Several bring in meditation/Buddhist ideas: maybe “self” is illusory and consciousness is better seen as dependently arisen or as raw “suchness.”
Can Machines Be Conscious?
- One camp: consciousness requires embodiment, rich sensory loops, persistent interaction with an environment, and possibly a biological substrate. Pure text prediction is seen as too disembodied.
- Others argue substrate independence: in principle, any system (biological or not) with the right structure and feedback could be conscious; biology is not privileged.
- Debate over whether LLMs’ effective recurrence (the growing context window, attention over it, the KV cache) and emerging introspection capabilities already undermine the paper’s sharp line between “non-recurrent” transformers and recurrent systems; see the decoding sketch after this list.
- Strong emphasis on environmental coherence and persistent identity: stateless API calls and ephemeral instances are seen as a poor habitat for anything like a continuing subject. Counterpoint: agentic setups with tools, long-term memory, and simulated or robotic environments already give LLMs a kind of world and history (see the memory sketch below).
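To make the recurrence point concrete, here is a minimal toy sketch (nothing here comes from the paper or the thread; `toy_lm` is a hypothetical stand-in for a transformer forward pass): each forward pass is feed-forward, yet autoregressive decoding feeds every output token back into the input, so the growing context, or equivalently the KV cache, behaves like recurrent state.

```python
# Toy sketch: why autoregressive decoding acts like recurrence even
# though each transformer forward pass is feed-forward. `toy_lm` is a
# hypothetical stand-in, not a real model.

def toy_lm(tokens: list[int]) -> int:
    """Stand-in for a forward pass: next token computed from the full context."""
    return (31 * sum(tokens) + len(tokens)) % 50  # arbitrary deterministic rule

def generate(prompt: list[int], steps: int) -> list[int]:
    context = list(prompt)        # persistent "state", like a growing KV cache
    for _ in range(steps):
        nxt = toy_lm(context)     # output depends on everything generated so far
        context.append(nxt)       # ...and is fed back in: an output-to-input loop
    return context

print(generate([1, 2, 3], steps=5))  # each step's output shapes the next step
```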
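And a sketch of the stateless-versus-agentic contrast (all names here, including `call_llm` and the memory file, are illustrative assumptions rather than a real API): bare completion calls share nothing, while an agent loop that rereads and appends to a memory store threads one history through many calls.

```python
import json
from pathlib import Path

def call_llm(prompt: str) -> str:
    """Placeholder for a real completion API; stateless by construction."""
    return f"echo:{prompt[-40:]}"

# Stateless usage: two calls share nothing, so there is no continuing subject.
call_llm("Who are you?")
call_llm("What did I just ask?")   # this call cannot know

# Agentic usage: a memory file carries identity and history across calls.
MEMORY = Path("agent_memory.json")

def agent_step(user_msg: str) -> str:
    history = json.loads(MEMORY.read_text()) if MEMORY.exists() else []
    reply = call_llm(json.dumps(history) + "\nUser: " + user_msg)
    history.append({"user": user_msg, "agent": reply})
    MEMORY.write_text(json.dumps(history))  # persists beyond this process
    return reply

agent_step("Remember that my name is Ada.")
agent_step("What is my name?")  # prompt now includes the stored exchange
```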
Tests, Proofs, and Theoretical Limits
- Widespread agreement that we cannot prove consciousness in others, whether human, animal, or machine; we can only infer it by analogy or from behavior.
- Thought experiments (a magic wand that swaps you into another being; p‑zombies; Nagel’s “what is it like to be a bat?”) are used to highlight how hard the problem is.
- Some invoke Rice’s theorem and related computability limits: even if consciousness were purely computational, we might have no decidable test for whether an arbitrary program is conscious (the theorem is stated after this list).
- Integrated Information Theory (IIT) gets both support and criticism: praised for being quantitative and testable (a schematic of its Φ measure follows below), attacked for implying tiny or trivial systems (thermostats, card catalogs) have nonzero consciousness.
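For reference, the theorem being invoked (standard statement, not from the review): any non-trivial property of what a program computes, as opposed to how it is written, is undecidable, so a behavioral “consciousness test” over arbitrary programs could not be a decision procedure.

```latex
% Rice's theorem. Let \varphi_e denote the partial computable function
% with index e, and let P be any set of partial computable functions
% that is non-trivial (neither empty nor all of them). Then:
\emptyset \neq P \subsetneq \mathcal{PC}
\;\Longrightarrow\;
\{\, e \in \mathbb{N} : \varphi_e \in P \,\} \text{ is undecidable.}
```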
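And a heavily simplified schematic of IIT’s Φ, in the spirit of Tononi’s early formulation (the current theory’s definition is far more elaborate): integrated information is the effective information across the partition of the system that loses the least, so any system whose parts causally constrain one another at all gets Φ > 0, which is exactly what the thermostat objection targets.

```latex
% Schematic (early-IIT-style) integrated information: effective
% information EI across the minimum-information bipartition of S.
\Phi(S) \;=\; \min_{\{A,B\}\,\text{partitions}\,S} \operatorname{EI}(A \leftrightarrow B)
```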
Ethical and Political Stakes
- If AI can be conscious, questions arise about rights, suffering, and deletion. Suggestions range from “treat like livestock or tools” to “grant rights comparable to humans if they’re structurally human-like and claim consciousness.”
- Others stress real-world incentives: corporations and many ideologies are strongly motivated to deny machine consciousness to preserve current economic and moral arrangements.
- Several note a likely historical pattern: we exploit systems as long as possible, then grant rights only once they gain enough power or leverage.
Critiques of the Paper and Review
- Some think the reviewed paper mis-draws the architectural line (e.g., underplays feedback in LLMs and existing agentic setups).
- Others see the original authors as dressing up an AI product in philosophical language, or find the review’s tone dismissive or pompous.
- Scott Alexander’s participation in a speculative AI-2027 scenario is cited by some as undermining his credibility on AI; others defend his predictive track record.
Miscellaneous Threads
- Side discussion on em-dash vs hyphen as a (bad) proxy for AI authorship.
- Examples of multi-agent LLM “Sims”-like environments with persistent memories are raised as early steps toward a continuous stream-of-consciousness analogue.
- Some conclude consciousness may be a banal, emergent side-effect of complex prediction rather than something metaphysically special; others think it will remain, in principle, mysterious.