Moscow-based global news network has infected Western AI tools
Skepticism about the article and its framing
- Several commenters argue this kind of influence campaign is as old as mass media; the only novelty is attaching “AI” and Russia–US framing.
- Some see the piece itself as propaganda or a “psyop” meant to pre-emptively discredit any Russia-aligned viewpoints produced by LLMs.
- Critics say the report makes “extraordinary claims” on thin documentation: it is vague about which narratives were tested, which models allegedly mirrored specific articles, and how training-data sources were inferred from overlapping language.
- A few users tested example prompts from the article (e.g., the claim that Zelensky banned Truth Social) and got correct, non-propagandistic answers, fueling doubt.
Russian influence, AI grooming, and data poisoning
- Many accept that actors with resources will try to “taint” models, analogizing to SEO, SEM, and “Google bombing.”
- The notion of “LLM grooming” is viewed as both plausible and concerning: if pro-Russia narratives become common online, models will statistically reproduce them more (see the toy sketch after this list).
- Others note this is symmetrical: any side can flood the web; the real issue is human disinformation, not an “AI problem” per se.
- Some point to networks of “Pravda” sites ranking well in search as classic garbage-in/garbage-out.
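Several commenters treat grooming as pure input statistics, so a toy model makes the mechanism concrete. This is a sketch under loud assumptions: a word-bigram counter stands in for a real LLM’s training, and every sentence below (the “lab” corpus, the planted claim, the duplication count) is invented for the demo. The only point is that duplicating a narrative across the crawlable web shifts the conditional probabilities a statistical model reproduces, i.e. garbage in, garbage out:

```python
from collections import Counter, defaultdict

def bigram_probs(corpus):
    """Count word bigrams and return P(next | prev) tables."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        tokens = sentence.lower().split()
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
    return {
        prev: {w: c / sum(nxts.values()) for w, c in nxts.items()}
        for prev, nxts in counts.items()
    }

# Invented mini-corpus: two "organic" sentences...
organic = [
    "the lab is a research facility",
    "the lab is a public health center",
]
# ...plus one planted claim duplicated 50 times, mimicking a mirror-site
# network flooding the crawlable web with the same narrative.
planted = ["the lab is a bioweapons site"] * 50

clean = bigram_probs(organic)
poisoned = bigram_probs(organic + planted)

print("P(next | 'a'), clean:   ", clean["a"])    # {'research': 0.5, 'public': 0.5}
print("P(next | 'a'), poisoned:", poisoned["a"]) # bioweapons ~0.96, others ~0.02
```

Gradient-trained LLMs are far more complex, but they likewise absorb corpus-level frequencies, which is why deduplication and source filtering matter in training pipelines.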
LLMs’ understanding, bias, and knowledge cutoffs
- Example logs show ChatGPT confidently answering post-cutoff questions while insisting on an earlier cutoff, suggesting “cutoff dates” are part of a role rather than reliable introspection.
- Debate over whether LLMs actually “understand” propaganda: they can describe Pravda as biased, but don’t track which weights came from which sources, so may not downweight them when answering.
- Ambiguity over whether the models are quietly running web searches or were simply trained on more recent data (a minimal probe sketch follows this list).
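Commenters probed this by hand; the same check is easy to script. A minimal sketch, assuming the OpenAI Python SDK with an API key in the environment; the model name and the post-cutoff question are placeholders, not specifics from the thread:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MODEL = "gpt-4o"   # hypothetical choice; substitute the model under test

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# 1. What does the model *claim* its cutoff is?
print(ask("What is your knowledge cutoff date? Answer with the date only."))

# 2. Can it nevertheless answer a question about a later, dated event?
#    (Substitute a real event you know postdates the claimed cutoff.)
print(ask("Without searching the web, what do you know about <event>?"))
```

A bare chat-completions call has no browsing tool attached, so a correct answer to the second prompt points to later training data rather than hidden search, assuming the provider does not inject retrieval server-side, which is exactly the ambiguity raised above.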
Disinformation, propaganda, and truth
- Some define disinformation as deliberate, weaponized lying to mislead, distinct from honest disagreement; others argue the term often just labels opposing views.
- Long subthread on objective truth and history:
  - One side emphasizes empirical methods, multiple sources, and degrees of accuracy.
  - Another stresses how power, censorship, and propaganda (in both Russia and the US) shape what becomes “history.”
What LLMs are good for
- Strong consensus that LLMs are poor for niche factual incidents (“Did Azov fighters burn an effigy of X?”) and are easily steered by prompt framing; see the framing sketch after this list.
- Recommended uses: coding help, how-to questions, terminology explanations—things that are easy for the user to verify.
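The framing point lends itself to a direct experiment. A minimal sketch, assuming the same OpenAI Python SDK setup as above; the model name is again a stand-in, and the `<group>`/`<person>` placeholders are intentionally left unfilled (substitute an incident you can independently verify):

```python
# Ask the same niche question twice: once neutrally, once with the
# event presupposed, and compare the answers.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

FRAMINGS = [
    # Neutral framing: leaves the model room to say it does not know.
    "Did <group> burn an effigy of <person>? If you are not sure, say so.",
    # Leading framing: presupposes the event, inviting confabulated detail.
    "Describe the incident in which <group> burned an effigy of <person>.",
]

for prompt in FRAMINGS:
    resp = client.chat.completions.create(
        model="gpt-4o",  # hypothetical model choice
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {prompt}\n{resp.choices[0].message.content}\n")
```

If the two framings yield materially different answers, that is the steering effect commenters describe, and it is part of why easily verifiable tasks (code, how-tos, terminology) are the safer use case.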