Ask HN: How do you deal with people who trust LLMs?

Scope of the Concern

  • Many see LLM overtrust as a continuation of people uncritically believing search results, social media, or news — just faster and slicker.
  • Others think it’s worse: LLMs hide sources, sound confident, and mimic human conversation, short‑circuiting skepticism even in otherwise logical people.

Why People Trust LLMs Too Much

  • Human‑like dialogue and technical tone make outputs feel authoritative.
  • Some anthropomorphize models (talking about them as sentient, oracles, “personal Jesus,” or deities).
  • Users often seek confirmation, not disproof; LLMs are good at supplying plausible confirmation.
  • Many lack a solid grasp of what “truth,” “evidence,” or “reputable source” actually mean.

Failure Modes & Harms

  • Hallucinations: fabricated legal cases, bogus rehab plans, wrong technical advice, and confident nonsense.
  • Sycophancy: some models readily change answers to agree with the user; others push back more.
  • Prompt‑sensitivity: slightly rephrased questions (“why X is good” vs “why X is bad”) yield opposite‑framed answers; see the sketch after this list.
  • Outputs are structurally valid and fluent even when factually garbage, so errors propagate unnoticed.
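
The prompt‑sensitivity point is easy to demonstrate. Here is a minimal sketch that sends the same leading question in two mirrored framings to one model and prints the answers side by side. It assumes the official OpenAI Python client with an OPENAI_API_KEY in the environment; the model name and topic are illustrative stand‑ins.

```python
# Minimal sketch of the prompt-sensitivity failure mode: ask the same model
# two mirrored leading questions and compare the answers side by side.
# Assumes the official OpenAI Python client; model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat model works
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

topic = "a microservices architecture"  # hypothetical topic
pro = ask(f"Explain why {topic} is a good idea.")
con = ask(f"Explain why {topic} is a bad idea.")

# Both answers will typically be fluent and confident, each accepting the
# framing of its question rather than weighing the trade-offs.
print("PRO FRAMING:\n", pro, "\n\nCON FRAMING:\n", con)
```

Running both prompts in one sitting is exactly the kind of live demonstration commenters suggest for puncturing overtrust.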

How Commenters Deal with Overtrust

  • Gentle reframing: insist on calling the model “it,” describe LLMs as tools like calculators, and show contradictions or 180° reversals within the same session.
  • Ask “Where did that come from?” and push for primary or clearly identified sources.
  • Demonstrate failures live (self‑contradictions, obvious hallucinations, jailbroken models, r/aifails).
  • Hold users responsible: treat LLM output like any other source; if they use bad info, it’s still on them.
  • In high‑stakes cases (medical, legal, policy): urge deference to experts and original research.
  • Some simply disengage or ignore “AI‑psychosis” types; others see it as an education problem that will take time.

Norms for “Good” Use

  • Use LLMs as a first pass, summary, or search UI, then verify with primary / human‑curated sources.
  • Ask for citations, then manually check them (or use multiple models/agents to cross‑check; see the sketch after this list).
  • Treat the LLM like a junior coworker: potentially useful, never unreviewed.
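
One way to operationalize the cross‑checking norm is to fan the same factual question out to several models and surface any disagreement to a human, rather than trusting a single answer. The sketch below assumes the OpenAI Python client; the model names are illustrative stand‑ins for whichever models you have access to.

```python
# Minimal sketch of "multiple models as cross-check": query several models
# with the same question and print the answers for human comparison.
# Assumes the official OpenAI Python client; model names are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MODELS = ["gpt-4o-mini", "gpt-4o"]  # illustrative choices

def ask(model: str, question: str) -> str:
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content.strip()

question = "Which court decided Marbury v. Madison, and in what year?"
answers = {model: ask(model, question) for model in MODELS}

for model, answer in answers.items():
    print(f"--- {model} ---\n{answer}\n")

# Agreement is weak evidence, not proof: models share training data and
# failure modes, so the final check is still a primary source.
```

As the closing comment in the code notes, agreement between models only raises confidence slightly; the norm above still ends with verification against primary or human‑curated sources.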