"Hate brings views": Confessions of a London fake news TikToker

Online anonymity, speech, and regulation

  • Several commenters struggle to reconcile support for anonymity with persistent, weaponized lying and faked media.
  • Some want laws against paid, harmful disinformation, likening it to defamation: truth would remain protected.
  • Others argue this is a slippery slope: whoever controls “truth” could suppress dissent or be weaponized by future strongmen.
  • Pseudonymity and strong moderation are proposed as better tools than real‑ID schemes; ID requirements for payouts (KYC) are seen as a possible compromise.

Platform design, moderation, and responsibility

  • TikTok is criticized for enabling such creators while dismissing them as isolated cases.
  • Many see the problem as structural: engagement‑maximizing algorithms reward outrage, hate, and “ragebait,” especially in ad‑supported, “free” platforms.
  • Some suggest banning politics on certain platforms or restricting promotion of content without a real identity.
  • HN itself is cited as an example of good, taste‑based moderation; most platforms are seen as failing here.

Money, markets, and incentives for disinformation

  • There’s a long subthread on how financial incentives erode civic norms: when “number goes up” (views, revenue) is the only metric, lying becomes rational.
  • Commenters reference broader critiques of markets invading every sphere, crowding out intrinsic ethics, trust, and civic responsibility.
  • Others note that people resort to this kind of grift because traditional work often doesn’t pay enough, but argue that desperation doesn’t excuse malicious behavior.

TikTok payouts and “hate for pay”

  • Commenters debate whether 24k followers can yield £1,000; the consensus is that payouts are view‑based, not follower‑based, and that millions of views could plausibly reach that figure.
  • Hate content is described as “advertiser poison” that may earn less per view, but dedicated audiences can still make it lucrative.
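The view‑based math in the thread can be sketched as a back‑of‑envelope estimate. The RPM values below (£ per 1,000 views) are purely illustrative assumptions, not TikTok’s published rates:

```python
def estimated_payout(views: int, rpm_gbp: float) -> float:
    """Estimate creator revenue from a view count at a given RPM
    (£ per 1,000 views). Rates here are hypothetical."""
    return views / 1000 * rpm_gbp

# At an assumed RPM range of £0.20-£0.50, roughly 2-5 million views
# would be needed to reach £1,000 -- consistent with the thread's claim
# that follower count alone says little about earnings.
print(estimated_payout(5_000_000, 0.20))  # 1000.0
print(estimated_payout(2_000_000, 0.50))  # 1000.0
```

Under these assumptions, a “hate for pay” channel earning a lower effective RPM simply needs proportionally more views, which engagement‑driven recommendation can supply.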

Disinformation, cities, and migration

  • Residents of London, NYC, Chicago, SF, etc. describe living amid online narratives that their cities have “fallen,” often disconnected from their lived reality.
  • Some insist London really has deteriorated badly; others say that’s conflating real problems (fraud, housing, social issues) with xenophobic tropes.
  • A similar pattern is noted with anti‑India sentiment and anti‑immigrant content, with speculation about bot networks and state actors amplifying hate; the exact extent is unclear.

Psychology, imagery, and moral outsourcing

  • Commenters stress how cheap “deceptive imagery persuasion” is: simple mislabeled videos can strongly convince viewers, even without AI.
  • Many argue media literacy is essential, but hard to achieve at scale.
  • The TikTok creator’s apparent belief that “if TikTok allows it, it’s fine” alarms people; they see it as an outsourcing of conscience to platform rules.
  • Several lament that a sizable minority of people appear maximally selfish or indifferent to societal harm, though estimates of how common this is vary widely.

Polarization and “both‑sides” claims

  • One commenter argues that misinformation is not just a right‑wing phenomenon; others counter that the current disinformation ecosystem is disproportionately right‑aligned, at least in some countries.
  • There’s disagreement over how often grifters operate on the left versus pivot right once exposed; no hard data is provided, and the true balance remains unclear.