Bad Actors Are Grooming LLMs to Produce Falsehoods
Reliability, “Off-by-One” Errors, and Erosion of Truth
- A major concern is “off‑by‑one” style errors: answers that are almost correct but wrong in subtle ways, which most users won’t notice.
- As LLMs replace search or act as research agents, this risks quietly corrupting shared factual baselines (dates, laws, history, technical details).
- People already tend to treat computer output as authoritative; combining that with models that are “mostly right” but unverifiable by laypeople is seen as dangerous.
Propaganda, Source Credibility, and Epistemology
- Commenters argue it's hard for LLMs to distinguish propaganda from truth because that distinction is often ideological and shifts over time.
- Some think the minimum is for models to honor their own knowledge of source credibility (e.g., not treating known disinfo networks as reliable).
- Others note that “how to discern truth” is an old epistemic problem, not unique to AI; filtering out harmful sources is something humans already do.
- Commenters bring up "firehose of falsehood" strategies and argue that flooding LLM training and search corpora is a logical next step for state and commercial propagandists.
Trust, Bubbles, and Societal Impact
- One camp hopes that visible AI failures will accelerate public distrust and pop the "AI bubble"; another counters that tabloids and partisan media show many people never lose trust in congenial sources.
- Some foresee LLMs intensifying filter bubbles via personalized models that mirror users’ ideological preferences, reinforcing existing divisions.
- Others argue humans have always self‑selected bubbles; LLMs and social media just scale the effect.
Utility vs Harm: Deep Split Among Users
- There is a sharp divide:
  - Critics: LLMs are "bullshit generators" that worsen the web's signal-to-noise ratio, add confident errors, and encourage intellectual laziness.
  - Supporters: they report substantial productivity and convenience gains (coding help, search, summarization, how-to learning, casual conversation).
- Several note a widespread "good enough" attitude: many users don't verify outputs and don't prioritize precision unless the stakes are high.
Limits of Current Models and Article Framing
- Multiple commenters stress that LLMs don’t “know” or “reason”; they pattern‑match text. Expecting them to “put two and two together” about propaganda is seen as anthropomorphism.
- Some warn that evaluating models against fixed truth/falsehood lists risks training them into ideological sycophants rather than critical reasoners.
Proposed Responses and Future Risks
- Suggested mitigations include curated "high-quality" source sets, Web-of-Trust-style reputation systems, explicit source tracing for each fact, and separate filtering of search results before model consumption (see the sketch after this list).
- Others think no technical fix can replace cultural changes: widespread skepticism toward anything on a screen and better media literacy.
- Several predict “LLM grooming” will become the new SEO/advertising game: brands, propagandists, and scammers optimizing content specifically to steer model outputs.
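To make the "filter search results before model consumption" idea concrete, here is a minimal sketch of a reputation-based pre-filter. It assumes a hypothetical `SearchResult` shape and a hand-curated `DOMAIN_REPUTATION` table; the domain names, scores, and threshold are illustrative, not part of any commenter's proposal, and a real system might back the table with a Web-of-Trust-style graph or curated allow/deny lists instead.

```python
# Sketch (hypothetical): drop low-reputation sources from retrieved search
# results before they ever reach an LLM's context window.
from dataclasses import dataclass


@dataclass
class SearchResult:
    url: str
    snippet: str


# Illustrative reputation table; domains and scores are made up.
DOMAIN_REPUTATION = {
    "example-encyclopedia.org": 0.9,
    "example-newswire.com": 0.8,
    "known-disinfo-network.example": 0.05,
}


def domain_of(url: str) -> str:
    # Crude domain extraction for the sketch; a real implementation
    # would use a proper URL parser.
    return url.split("//", 1)[-1].split("/", 1)[0]


def filter_results(results: list[SearchResult], threshold: float = 0.5) -> list[SearchResult]:
    """Keep only results whose source reputation meets the threshold.

    Unknown domains default to a low score so they are excluded rather
    than silently trusted.
    """
    kept = []
    for r in results:
        score = DOMAIN_REPUTATION.get(domain_of(r.url), 0.1)
        if score >= threshold:
            kept.append(r)
    return kept


if __name__ == "__main__":
    raw = [
        SearchResult("https://example-encyclopedia.org/article", "A sourced claim..."),
        SearchResult("https://known-disinfo-network.example/post", "A planted falsehood..."),
    ]
    for r in filter_results(raw):
        print(r.url)  # only the higher-reputation source survives
```

Because each surviving result keeps its URL, the same structure could also support the "explicit source tracing for each fact" suggestion: the model's answer can cite the retained sources rather than an undifferentiated blob of search text.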