Decoding the Language of Othering by Russia-Ukraine War Bloggers

Use of LLMs for Speech Policing and Moderation

  • Some see this work as part of a trend toward using LLMs to police speech “down to the minutest nuance,” and argue that speech policing itself is suspect, so making it cheaper and more scalable is harmful.
  • Others argue the choice of tool doesn’t matter; LLMs could even be preferable to “moody humans” because their bias is at least in principle controllable via training data.
  • Critics counter that model bias is not well understood or measurable, especially for subtle linguistic choices embedded in decades of culturally shifting text.
  • One view suggests bias may emerge from the structure of language itself, not just from specific data, implying that fully “controlling” bias could be impossible.
  • There is concern that the same methods could be used both for censorship and for highly efficient mis/disinformation campaigns.

Empirical Findings vs Personal Experience on Russian/Ukrainian Othering

  • Commenters note the paper’s headline result: Russian war bloggers more often use stronger forms of othering (villainization, dehumanization), and such language is more central in Russian networks than in Ukrainian ones.
  • Several users report opposite or more mixed impressions from social media:
    • Pro‑Ukrainian accounts are described as heavily dehumanizing Russians with animalistic or monstrous slurs.
    • Pro‑Russian accounts are said to focus on labeling Ukrainians as “Nazis,” framed as a political rather than ethnic category, though others call this historically dishonest and list long‑standing Russian slurs for Ukrainians.
  • Disagreement arises over what counts as “propaganda”: official channels vs swarms of apparently “organic” social media accounts amplifying state narratives.
  • Some argue online spaces (Reddit, Twitter, Telegram) are so distorted by bots and shills that drawing broad social conclusions from them is hazardous.

Methodological and Political Implications

  • Several commenters see the real contribution in methods: cheap, large‑scale, automated analysis of rhetorical patterns in open-source communications.
  • There is explicit recognition that “othering intensifies during crises” is not a novel finding, but a validation that these tools can recover known social dynamics.
  • Others warn the obvious next step is optimization: using similar pipelines to discover exactly which word sequences best manipulate specific audiences, leading to highly personalized propaganda.
  • Comparisons are made to spam and algorithmic feeds: early concerns about attention exploitation were initially dismissed but later proved prescient.
  • Some question the paper’s theoretical framing, suggesting that “othering” is nearly synonymous with politics itself, and object to moralized comparisons (e.g., to Nazi rhetoric) when similar practices occur in contemporary Western discourse.
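The “cheap, large‑scale, automated analysis” commenters highlight can be illustrated with a minimal sketch. The lexicon, severity levels, and function below are hypothetical illustrations, not the paper’s actual pipeline, which would more plausibly use an LLM or trained classifier rather than keyword matching:

```python
from collections import Counter

# Hypothetical severity lexicon (an assumption for illustration):
# term -> intensity level, where 1 = distancing, 2 = villainization,
# 3 = dehumanization. A real pipeline would use a model, not keywords.
LEXICON = {
    "outsiders": 1,
    "enemy": 2,
    "villains": 2,
    "vermin": 3,
    "monsters": 3,
}

def othering_profile(posts):
    """Count othering terms across a corpus, bucketed by intensity level."""
    counts = Counter()
    for post in posts:
        for token in post.lower().split():
            level = LEXICON.get(token.strip(".,!?\"'"))
            if level:
                counts[level] += 1
    return dict(counts)

posts = ["They are the enemy.", "Treat them as vermin!", "Just outsiders."]
print(othering_profile(posts))  # {2: 1, 3: 1, 1: 1}
```

Even this toy version shows why the approach scales: once the classification step is automated, comparing intensity distributions across thousands of accounts or whole networks is trivial, which is exactly the capability both the optimists and the pessimists in the thread are reacting to.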

Wider Reflections on Othering and War

  • A historical parallel is drawn to anti‑Japanese racism in WWII: brutal war crimes, fear of a capable enemy, and strong pre‑existing racism all fed extreme dehumanization that outlasted the conflict.
  • Commenters note that in active wars (e.g., Ukraine), intense othering is almost inevitable for people under attack, and purely academic critiques can feel detached from that reality.