AI Responses May Include Mistakes

Google Search, Gemini & Declining Result Quality

  • Many comments report Gemini-in-search routinely fabricating facts: wrong car years/models, non‑existent computer models (e.g., PS/2 “Model 280”), bogus events, or made‑up sayings treated as real.
  • Users note Google often buries the one correct traditional result under AI “slop” and SEO junk, in areas (like car troubleshooting) where Google used to excel.
  • Some link this to long‑term “enshittification” of search and ad‑driven incentives: better to show something plausible (and more ads) than admit “no answer.”

Trust, User Behavior & Real‑World Harm

  • Several anecdotes show people treating AI overviews as gospel, then being confused or misled (car system resets, population figures, employment or legal info, game hints).
  • Concern that AI overviews make bad or underspecified queries “work” by giving smooth, confident nonsense where earlier results would have been messy enough to signal “you’re asking the wrong thing.”
  • Worry that this will create more downstream work: support staff and experts having to debug problems caused by AI misinformation.

LLMs vs Search & Alternative Uses

  • Some are baffled that anyone uses LLMs as a primary search tool; others say they’re great for:
    • Framing vague ideas into better search terms.
    • Summarizing multi‑page slop, “X vs Y” comparisons, or avoiding listicle spam.
    • Coding help and boilerplate, provided you already know enough to verify.
  • Alternative tools (Perplexity, DDG AI Assist, Brave, Kagi) are cited as better examples of “LLMs plus search,” mainly because they surface and link sources more transparently (a rough sketch of that pattern follows this list).
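
A rough sketch of the “LLMs plus search” pattern those tools are praised for: fetch ordinary search results first, have the model answer only from the retrieved text, and return the source links alongside the answer so the reader can verify. The web_search and llm_complete helpers below are hypothetical placeholders, not any specific product’s API.

    # Minimal "LLM plus search" sketch: ground the answer in retrieved pages
    # and keep the source URLs attached so they can be shown to the user.
    # web_search() and llm_complete() are hypothetical stand-ins for whatever
    # search backend and model a real tool wires together.
    def answer_with_sources(query, web_search, llm_complete, k=5):
        results = web_search(query, limit=k)  # e.g. [{"url": ..., "snippet": ...}, ...]
        context = "\n\n".join(
            f"[{i + 1}] {r['url']}\n{r['snippet']}" for i, r in enumerate(results)
        )
        prompt = (
            "Answer the question using ONLY the numbered sources below. "
            "Cite sources as [n]. If the sources do not contain the answer, say so.\n\n"
            f"Sources:\n{context}\n\nQuestion: {query}\nAnswer:"
        )
        answer = llm_complete(prompt)
        return answer, [r["url"] for r in results]  # surface the links, not just prose

The transparency these tools get credit for comes less from the model itself than from refusing to answer outside the retrieved context and always returning the links.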

Disclaimers, Liability & Ethics

  • Broad agreement that tiny footers like “may include mistakes” are inadequate; suggestions range from bold, top‑of‑page warnings to extra friction/pop‑ups.
  • Others argue pop‑ups won’t help: many users don’t read anything and just click through.
  • Tension noted: you can’t aggressively warn “this is structurally unreliable” while also selling it as a replacement for human knowledge work.

Technical Limits & “Hallucinations”

  • Repeated emphasis that LLMs are language models, not knowledge models: they’re optimized to produce plausible text, not truth (a toy illustration follows this list).
  • Some push back on mystifying terms like “hallucination,” preferring plain “wrong answer” or “confabulation.”
  • Debate over acceptable error rates: when is a model “accurate enough” for non‑critical domains, and when is it inherently unsafe for anything high‑stakes?
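
A toy illustration of the “plausible text, not truth” point: generation just samples the next token from a probability distribution learned from text, and nothing in that loop checks the output against the world. The probability table below is invented purely for illustration.

    import random

    # Toy next-token table: weights reflect only how often sequences appear
    # in training text, not whether a completion is factually true.
    NEXT_TOKEN_PROBS = {
        ("PS/2", "Model"): {"30": 0.4, "80": 0.35, "280": 0.25},  # invented weights
    }

    def sample_next(context):
        dist = NEXT_TOKEN_PROBS[context]
        tokens, weights = zip(*dist.items())
        return random.choices(tokens, weights=weights)[0]

    # A "Model 280" never shipped, but it still gets sampled a quarter of the
    # time here, because plausibility is all the objective rewards.
    print("PS/2 Model", sample_next(("PS/2", "Model")))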