Should we use AI and LLMs for Christian apologetics? (2024)

Scope of LLM Capability and Reliability

  • Many commenters stress that LLMs are optimized for plausible, fluent text, not truth. They’re described as “probable text generators,” not factual systems.
  • Concrete failures cited: miscounting letters in words; confidently inventing non‑existent Bible verses, chapters, or manuscript readings, even with fine‑tuning and RAG.
  • Several argue that, given their architecture, LLMs are truthful only incidentally; reliably enforcing truthfulness remains an unsolved problem.

Ethical Concerns in a Religious Context

  • For Christians who see doctrine and scripture as matters of eternal consequence, deploying a “bullshit machine in the name of Christ” is called reckless.
  • Some frame it as violating religious obligations to avoid false witness and to guard the “deposit of faith.”
  • A Catholic perspective in the thread says an LLM has no soul and cannot receive sacraments, so it cannot properly “profess” or safeguard the gospel.
  • Others liken AI‑mediated belief to idolatry or “outsourced faith,” echoing fictional devices that “believe for you.”

Arguments for Limited, Tool‑Like Use

  • A number of Christians report using LLMs productively as research assistants:
    • surfacing relevant verses, translations, and commentaries,
    • summarizing complex apologetics or theological works,
    • offering starting points for study that are then checked against primary sources.
  • Some propose a pragmatic standard: if AI is “less false than average internet content,” it may still be a net improvement, provided users verify important claims.
  • Others argue apologetics already involves fallible human reasoning; LLMs could help summarize classic arguments or remove intellectual barriers, not replace personal faith.
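
The "verify important claims" step proposed above can be partially automated. As a minimal sketch (the chapter-count table and function name below are illustrative assumptions, not anything from the thread), one could sanity-check verse references an LLM produces against a hand-maintained table before quoting them:

```python
# Hypothetical sketch: flag LLM-cited verse references that cannot exist.
# CHAPTER_COUNTS is a small illustrative subset, not the full canon.
CHAPTER_COUNTS = {
    "Genesis": 50,
    "Psalms": 150,
    "John": 21,
    "Romans": 16,
    "Jude": 1,
}

def reference_is_plausible(book: str, chapter: int) -> bool:
    """Return True only if the book is known and the chapter exists in it."""
    max_chapter = CHAPTER_COUNTS.get(book)
    return max_chapter is not None and 1 <= chapter <= max_chapter

print(reference_is_plausible("John", 3))      # True
print(reference_is_plausible("Genesis", 51))  # False: Genesis has 50 chapters
print(reference_is_plausible("Hezekiah", 1))  # False: not a book of the Bible
```

A check like this only catches references that are structurally impossible; it cannot confirm that the quoted text matches the verse, which still requires consulting the primary source.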

Risks Beyond Theology

  • Concerns about data privacy: vulnerable users (e.g., closeted LGBTQ youth) might ask sensitive religious questions, which then persist in logs that can be sold, leaked, or monitored.
  • Worry that governments, churches, and influencers will remain in denial about these limitations, leading to large‑scale misinformation. One cited example: a “classical education” influencer allegedly sharing AI‑fabricated “quotes” as if they were real.

Broader Reflections

  • Debate over whether apologetics is even the right mode for religious truth, versus lived or “visceral” experience.
  • Some see resistance to AI here as excessive scrupulosity; others say the stakes of salvation justify extremely high standards of accuracy.