Generative AI and Wikipedia editing: What we learned in 2025

Verification failures and pre-existing problems

  • Commenters highlight the key finding: in the AI-flagged student articles, most cited sentences were not supported by the sources they cited, so the claims failed verification.
  • Many argue this is not new: bogus, over-interpreted, or irrelevant citations have long been widespread on Wikipedia, especially in political and current-affairs topics.
  • Some report that when they seriously check references, they find enough errors and misrepresentations to distrust Wikipedia by default.

AI as accelerant vs root cause

  • Several see LLMs as a “force multiplier”: the same old problems (made‑up claims, lazy citation, bad-faith editing) but at volumes that can overwhelm human patrolling.
  • Others claim Wikipedia was already at or past its quality-control limits before LLMs; AI is “pissing into the ocean” of existing human-made errors.
  • Debate arises over whether criticism of AI’s role is being downplayed or deflected, with some suspecting cultural or commercial incentives to defend AI.

Sources, newspapers, and citations

  • Disagreement over news sources: some say newspapers are effectively propaganda and shouldn’t be used in an encyclopedia; others argue high‑quality journalism is often the best available secondary source but must still be read critically.
  • Multiple examples show citations that don’t support, or even contradict, the claims they supposedly back.
  • Editors note that good sourcing is hard work: much “obvious” or culturally embedded knowledge lacks clear, citable references.

Scope of the study and student incentives

  • Several stress that the article only covers Wiki Edu course assignments, not Wikipedia as a whole.
  • Forced student editing, especially when grades rather than curiosity are the driver, is seen as naturally prone to LLM shortcuts and weak citations.

AI-first competitors and “alternative facts”

  • Grokipedia draws sharp criticism: some find factual errors; others distrust an AI-driven, corporate-controlled encyclopedia in a culture-war context.
  • A few users nevertheless report preferring it, claiming Wikipedia is captured by agenda-driven editor factions.

Mitigations and future tools

  • Suggestions include: AI tools to automatically check whether sources actually support claims (see the sketch after this list), bots to enforce editing guidelines, stronger identity or trust systems for editors, and a strict norm against copy-pasting chatbot output.
  • Several still find LLMs useful for brainstorming, but not as a direct source of factual text.
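To make the first suggestion concrete, here is a minimal sketch of an automated “does the source support the claim?” check, built on an off-the-shelf natural-language-inference model. This is an illustration, not a description of any tool Wikipedia or Wiki Edu actually runs: the model choice (roberta-large-mnli), the confidence thresholds, and the example strings are all assumptions, and it presumes the relevant passage has already been extracted from the cited source.

```python
# Illustrative sketch: classify whether a source passage entails a cited claim.
# Assumptions: roberta-large-mnli as the NLI model (one choice among many),
# arbitrary 0.8 thresholds, and the cited passage already extracted by hand.
from transformers import pipeline

# roberta-large-mnli labels premise/hypothesis pairs as
# ENTAILMENT / NEUTRAL / CONTRADICTION.
nli = pipeline("text-classification", model="roberta-large-mnli")

def check_claim(source_passage: str, cited_claim: str) -> str:
    """Label a claim as supported, contradicted, or needing human review,
    using NLI with the source as premise and the claim as hypothesis."""
    out = nli({"text": source_passage, "text_pair": cited_claim})
    result = out[0] if isinstance(out, list) else out
    label, score = result["label"], result["score"]
    if label == "ENTAILMENT" and score > 0.8:       # threshold is arbitrary
        return "supported"
    if label == "CONTRADICTION" and score > 0.8:
        return "contradicted"
    return "needs human review"                     # NEUTRAL or low confidence

# Toy example strings, invented for illustration only.
source = ("The city council voted 7 to 2 in March to approve "
          "the expanded park budget.")
claim = "The park budget was approved by a wide margin."
print(check_claim(source, claim))  # expected: supported
```

Even under these assumptions, such a tool would be a triage aid rather than a verdict: in practice most pairs land in NEUTRAL (the source neither clearly entails nor contradicts the claim), and a real system would also have to retrieve the right passage from the cited source, which is where many of the mismatches described above actually hide.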