Antiqua et Nova: Note on the Relationship Between Artificial Intelligence and Human Intelligence

Use of AI Summaries vs Deep Reading

  • Some commenters happily use AI tools to pre-digest a dense 13k-word document, surfacing its themes before deciding whether to read it in full.
  • Others object that outsourcing reading to AI is intellectually lazy and risks hallucinations and misrepresentation, especially for a major statement by a large religious body.
  • There’s agreement that this text is far denser than popular fiction; a “summary as abstract” is seen as acceptable by some, inadequate by others.

AI, Inequality, and Power

  • Many agree with the document’s warning that digital tech can worsen inequality, centralize influence, and entrench elites.
  • Others argue the opposite: the internet has broadly democratized speech and political influence compared to pre-digital eras.
  • There’s a secondary debate over whether inequality itself is the core problem, or whether absolute poverty and the concentration of power matter more.
  • A historical side-discussion covers industrialization, capitalism, democracy, and whether technology “inevitably” improves equality.

AI in Healthcare and Human Relationships

  • The critique of replacing doctors with AI resonates with those who value human care and fear increased loneliness.
  • But several people say they would prefer a “robot doctor” to rushed, biased, or arrogant clinicians, trusting AI to be more consistent and less prejudiced.
  • Many converge on a hybrid view: AI as support tool that augments human doctors, with worries about over-reliance and de-skilling.

Embodiment, Consciousness, and Intelligence

  • The document’s emphasis on embodiment and lived experience as key differences between humans and current AI sparks long debate.
  • Some think navigation, planning, and “coffee test”–style tasks are close to solved; roboticists strongly disagree, stressing the difficulty of handling novelty, state representation, and manipulation.
  • Deep exchanges dive into: sensory richness vs cameras, whether emotions and morality are distinct or just fast cognition, brain vs LLM learning mechanisms, and substrate independence.
  • No consensus on whether future AI could genuinely have “inner life,” emotions, or moral agency; many say this remains unclear.

Idolatry, AGI, and “Worshipping” AI

  • The sections on AGI as potential “idolatry” resonate with those who see quasi-religious faith in a coming AI savior (singularitarianism).
  • Others reply that, from a non-believing viewpoint, traditional theism and AGI-hope look structurally similar: projecting human concerns onto a powerful imagined “Other.”
  • Several discuss the real danger as human misuse and centralization of AI, not deifying silicon per se.

Moral Agency, Regulation, and Responsibility

  • The claim that only humans, not machines, are true moral agents is widely endorsed, including by non-religious commenters, as a practical governance stance.
  • Many like the insistence that “an AI told me so” should never excuse decisions, and that responsibility must remain with designers, operators, and users.
  • There’s support for calls to avoid anthropomorphizing AI, require transparency, and regulate its use in high-stakes contexts (politics, education, sexuality, healthcare).

Assessment of the Vatican’s Intervention

  • Many commenters, including skeptics of religion, praise the document as unusually careful, deeply researched, and philosophically literate compared to typical tech or policy takes.
  • Others dismiss it as rehashing unresolved philosophy-of-mind debates without new evidence, or note historical Church abuses as reasons to distrust its authority on “dignity” and progress.
  • Some see this as a strong early framework for AI ethics (even a kind of “AI Magna Carta”); others think it’s already at risk of being overtaken by rapid advances.