Can LLMs accurately recall the Bible?
Verbatim recall vs. hallucination
- Many commenters report that large models can quote Bible or Quran verses accurately, sometimes even in original languages and with references.
- However, they often hallucinate when asked for interpretations, historical claims, or “search-style” tasks (e.g., finding specific patristic quotes), inventing passages, sources, or details.
- Some argue that models do especially well on heavily quoted passages (the Bible, the Navy SEAL copypasta) because of massive repetition in the training data.
Use in religious study and practice
- Users describe helpful use cases: quick verse/location lookup, listing occurrences of themes, cross-referencing stories across scriptures, comparing Bible and Quran, or exploring theological viewpoints.
- Several people use LLMs as introspective tools: tarot interpretations, custom “prayers/mantras,” or a bespoke “virtual spiritual guide” combining multiple traditions.
- Others see LLMs as promising research assistants for Bible, Quran, and patristics, but only if users can verify citations in original texts.
RAG, databases, and tooling
- Strong consensus that LLMs are poor databases for canonical texts where exact wording matters.
- Suggested pattern: store scripture or regulations in a database and use LLMs only for search, summarization, and explanation, often via RAG and embeddings.
- Some projects already combine Quran + Hadith corpora with embeddings and show decent bilingual (Arabic/English) performance.
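The pattern the commenters converge on — canonical text lives in a database, the model only retrieves and explains — can be sketched with a toy retriever. This is a minimal bag-of-words cosine-similarity search over a tiny illustrative verse table; real RAG pipelines would use learned sentence embeddings and a vector store, and the verse entries here are just sample data, not a canonical corpus.

```python
import math
import re
from collections import Counter

# Toy "database" of canonical verses (illustrative entries, KJV-style wording).
VERSES = {
    "John 1:1": "In the beginning was the Word, and the Word was with God, and the Word was God.",
    "Genesis 1:1": "In the beginning God created the heaven and the earth.",
    "Psalm 23:1": "The Lord is my shepherd; I shall not want.",
}

def embed(text: str) -> Counter:
    """Crude bag-of-words 'embedding'; real systems use learned dense vectors."""
    return Counter(re.sub(r"[^\w\s]", "", text.lower()).split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list[tuple[str, str]]:
    """Return the top-k (reference, text) pairs; an LLM would then only
    summarize or explain these retrieved verses, never quote from memory."""
    q = embed(query)
    ranked = sorted(VERSES.items(), key=lambda kv: cosine(q, embed(kv[1])), reverse=True)
    return ranked[:k]

ref, text = retrieve("who created the heaven and the earth?")[0]
```

The point of the design is the division of labor: exact wording comes from the database row, so the model can hallucinate an explanation but not the verse itself.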
Memorization, parameters, and model behavior
- Discussion of whether strong verbatim recall implies overfitting; some see it as expected lossy compression, while others find it concerning.
- Debate on what “parameters” represent and how many documents or sequences can be “memorized.”
- Questions raised about how memorized sequences differ internally from novel generations, and whether models preferentially memorize high-value texts over SEO “garbage.”
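The recall-vs-hallucination distinction running through these threads can be checked mechanically: compare a model-produced quotation against the canonical text and flag near-misses. A minimal sketch using stdlib `difflib`; the thresholds and the sample strings are illustrative assumptions, not calibrated values.

```python
import difflib

# Canonical text (illustrative KJV wording) vs. a hypothetical model quotation.
CANONICAL = "The Lord is my shepherd; I shall not want."
MODEL_OUTPUT = "The Lord is my shepherd, I will not want."

def recall_score(canonical: str, generated: str) -> float:
    """Character-level similarity ratio in [0, 1]; 1.0 means verbatim recall."""
    return difflib.SequenceMatcher(None, canonical, generated).ratio()

def classify(score: float, verbatim: float = 0.995, near: float = 0.85) -> str:
    """Bucket a score; thresholds are arbitrary illustrations."""
    if score >= verbatim:
        return "verbatim"
    if score >= near:
        return "near-miss (check wording)"
    return "mismatch (possible hallucination)"

verdict = classify(recall_score(CANONICAL, MODEL_OUTPUT))
```

A harness like this only tests surface wording, which is exactly why commenters trust models more on heavily repeated passages than on interpretive or “search-style” claims, where no canonical string exists to diff against.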
Trust, expertise, and epistemic worries
- Several warn that non-experts can’t reliably distinguish facts from hallucinations in complex theological or historical domains.
- Advice: LLM output is useful as a “springboard” for experts but dangerous when laypeople treat it as authoritative.
Ethical and theological reactions
- Some religious commenters are uneasy or find it vaguely heretical to insert an LLM between believer and scripture.
- Others see parallels between human religious interpretation and LLM “hallucination,” while defenders emphasize rigorous historical–grammatical methods and established creeds.