Death by AI

Reliability and User Experience of LLMs & AI Overviews

  • Many see this as one more data point that LLM text is fundamentally unreliable; if a human were wrong this often, they’d be dismissed, not trusted.
  • Some users admit they still use AI because “instant answers” are tempting, but say it often becomes a time‑sink and erodes trust.
  • Others argue that failures are rare relative to “billions” of daily queries and that AI Overviews are likely “here to stay”; skeptics dispute that those queries are truly “successful” for users.

“Google Problem” vs “AI Problem”

  • One camp says the issue predates AI: Google has long surfaced wrong info (Maps, business summaries) with poor correction mechanisms and little incentive to fix individual errors.
  • Another camp says this is specifically an AI problem: traditional search results at least separate sources; AI Overviews blend multiple people/entities into a single authoritative‑sounding summary.
  • Broader frustration: Google’s search quality decline is blamed on ad/SEO incentives, with AI Overviews seen as a cost‑driven, low‑quality band‑aid.

Regulation, Liability, and Accountability

  • Several commenters call for regulation with real enforcement: e.g., strong liability when AI is used in safety‑critical or life‑sustaining contexts.
  • One proposal: “guilty until proven innocent” for decision‑makers using AI in such domains; critics say that’s unjust, would chill safer ML solutions, and should apply (if at all) equally to non‑AI systems and human decisions.
  • Others argue fines and measurable incident‑rate benchmarks are more workable; some think fines don’t deter large firms and want criminal accountability.
  • There’s debate over whether LLM operators should be liable for misinformation (e.g., defaming individuals or influencing elections), or whether responsibility should rest only with the person who acts on the information.

Wikipedia, Bias, and Curation

  • Wikipedia is cited as an example of hard‑won policies for sensitive topics (living people, deaths) and community correction mechanisms.
  • Concerns: generative AI piggybacks on that work while freely inventing facts, without equivalent safeguards.
  • A separate thread weighs Wikipedia’s ideological bias against outright fabrication; many see biased human curation as less dangerous than LLMs’ tendency to “make stuff up.”

Desire to Opt Out of AI Content

  • Multiple commenters want a global “no AI” switch in Google (for search, Maps, and business descriptions), and protection against AI‑generated lies about people or businesses.
  • Suggested alternatives include using other search engines (especially paywalled ones), classic‑style Google URLs that strip AI features (sketched below), and browser‑level filtering.
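
The “classic‑style” URLs commenters describe most likely mean Google’s plain “Web” results view, reachable via the unofficial udm=14 query parameter. Below is a minimal sketch of building such a URL; the parameter’s behavior is observed rather than documented and could change at any time.

```python
# Sketch: build a "web results only" Google URL that currently skips AI Overviews.
# udm=14 is an unofficial parameter selecting the plain "Web" tab; treat it as a
# best-effort workaround, not a supported API.
from urllib.parse import urlencode

def classic_google_url(query: str) -> str:
    params = {"q": query, "udm": 14}
    return "https://www.google.com/search?" + urlencode(params)

print(classic_google_url("dave barry"))
# https://www.google.com/search?q=dave+barry&udm=14
```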

Conceptual Views of LLMs

  • Some frame LLMs as “vibes machines”: they generate plausible text rather than retrieve facts, so they’re better at style and synthesis than truth.
  • Discussion touches on token‑by‑token probability generation, “hallucination lock‑in,” and whether models can represent multiple conflicting possibilities or only commit to one narrative at a time (see the toy sketch below).
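
As a toy illustration of that point, the sketch below samples one token at a time from a hand‑made probability table (all tokens and probabilities are invented, and a real model conditions on the full prefix, not just the last token). It shows why an early wrong sample gets “locked in”: later tokens are conditioned on the committed prefix, so the model elaborates on the mistake rather than revising it.

```python
# Toy token-by-token sampler over an invented vocabulary; for illustration only.
import random

# P(next_token | previous_token) for a tiny made-up "biography" model.
NEXT = {
    "<start>": [("Dave", 1.0)],
    "Dave":    [("Barry", 1.0)],
    "Barry":   [("is", 0.6), ("has", 0.4)],   # the fork where conflation can happen
    "is":      [("a", 1.0)],
    "a":       [("humorist.", 1.0)],
    "has":     [("died.", 1.0)],
}

def sample(dist):
    r, acc = random.random(), 0.0
    for token, p in dist:
        acc += p
        if r <= acc:
            return token
    return dist[-1][0]

def generate(seed=None, max_tokens=10):
    random.seed(seed)
    out, tok = [], "<start>"
    while tok in NEXT and len(out) < max_tokens:
        tok = sample(NEXT[tok])   # commit to one token...
        out.append(tok)           # ...and never reconsider it
    return " ".join(out)

for s in range(3):
    print(generate(seed=s))
# Each run commits to exactly one narrative ("... is a humorist." or "... has died."),
# never to both at once; a wrong early sample is then extended, not corrected.
```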

Names, Disambiguation, and Identity

  • Several note that mixing two identically named people into one narrative is exactly what happened here: the model conflates the humorist with a deceased local activist who shares his name.
  • Commenters argue Google should disambiguate the way a knowledge graph or Wikipedia does (“which person did you mean?”), not merge biographies into a single authoritative summary; a minimal sketch of that approach follows.
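
The sketch below illustrates that idea. The entity IDs, fields, and lookup function are hypothetical, loosely modeled on how a knowledge graph keys people by stable identifiers rather than by name, so an ambiguous name surfaces a choice instead of a merged biography.

```python
# Sketch: name-to-entity disambiguation instead of merging same-named people.
# All identifiers and records here are hypothetical.
from dataclasses import dataclass

@dataclass
class Entity:
    entity_id: str
    name: str
    description: str

NAME_INDEX = {
    "dave barry": [
        Entity("Q1", "Dave Barry", "American humorist and author"),
        Entity("Q2", "Dave Barry", "Local community activist (deceased)"),
    ],
}

def resolve(name: str) -> Entity:
    matches = NAME_INDEX.get(name.lower(), [])
    if len(matches) == 1:
        return matches[0]  # unambiguous: answer about this entity only
    # Ambiguous: surface the choice ("which person did you mean?") rather
    # than blending the candidates into one authoritative-sounding summary.
    raise LookupError(
        "Which person did you mean? "
        + "; ".join(f"{e.entity_id}: {e.description}" for e in matches)
    )

try:
    resolve("Dave Barry")
except LookupError as err:
    print(err)
```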