Weaponizing Ads: How Google and Facebook Ads Are Used to Wage Propaganda Wars

Government use of targeted ads as dystopian

  • Commenters describe being inundated with coordinated, highly targeted political content on platforms like Facebook, often around trivial or polarizing stories.
  • Many see direct government use of such microtargeting as dystopian and corrosive to public life, regardless of which party is in power.

Free speech, the Constitution, and political advertising

  • Some argue the First Amendment makes bans on political ads or targeting effectively impossible without a “revolutionary” constitutional change.
  • Others counter that amendments are meant to change power structures and that limiting microtargeted political ads could be reasonable.
  • A faction insists U.S. founding documents are the “best available,” favoring less government and warning against empowering the state to restrict speech.
  • Another faction stresses the slave-owning origins of those documents and rejects quasi-religious reverence for them.

Regulation vs. abuse of power

  • Multiple threads debate regulating ad platforms: preventing surveillance-based targeting, restricting foreign propaganda, or requiring that a legally liable local entity stand behind every ad buy.
  • Critics worry any centralized “truth arbiter” (state or platform) becomes an oppression machine for the next authoritarian.
  • Others argue that big tech’s current unregulated power is already an oppression machine, and that checks and balances plus bureaucracy are preferable to corporate abuse.
  • There is disagreement over whether opposing specific regulations implies being “for” propaganda or child abuse; some push back against this “with us or against us” framing.

Ad platforms as propaganda infrastructure

  • Several comments argue ads and propaganda are fundamentally the same tool for persuasion; ad platforms are auction-based systems for behavior change at scale (see the sketch after this list).
  • From this view, state propaganda campaigns (e.g., against UN agencies) are just another high-paying customer of the exchange: “propaganda-as-a-service.”
  • Others note platforms already take editorial stances (e.g., on war content or Covid misinformation), so their choices around state propaganda are inherently political and should be scrutinized.
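
To make the “auction” framing concrete, here is a minimal, hypothetical sketch of a generalized second-price ad auction in Python. The names and numbers (Bid, max_cpm, relevance, the example advertisers) are illustrative assumptions, not any platform’s actual API or pricing rule.

```python
from dataclasses import dataclass

@dataclass
class Bid:
    advertiser: str   # who is paying to place the message
    max_cpm: float    # most they will pay per 1,000 impressions
    relevance: float  # platform's predicted engagement score, 0..1

def run_auction(bids: list[Bid]):
    """Rank bids by max_cpm * relevance and charge the winner just enough
    to outrank the runner-up (a simplified generalized second-price rule)."""
    if not bids:
        return None
    ranked = sorted(bids, key=lambda b: b.max_cpm * b.relevance, reverse=True)
    winner = ranked[0]
    if len(ranked) == 1:
        return winner, winner.max_cpm
    runner_up = ranked[1]
    price = (runner_up.max_cpm * runner_up.relevance) / winner.relevance
    return winner, round(min(price, winner.max_cpm), 2)

# A state-backed campaign competes for the same impression as ordinary
# advertisers and wins simply by having the highest effective bid.
slot = run_auction([
    Bid("shoe_brand", max_cpm=4.00, relevance=0.9),
    Bid("state_campaign", max_cpm=12.00, relevance=0.6),
    Bid("local_news", max_cpm=3.00, relevance=0.8),
])
print(slot)  # state_campaign wins the slot at a second-price-derived cost
```

The point of the sketch is only that the exchange is indifferent to the buyer’s intent: the same ranking logic serves a shoe ad or a state campaign, which is what the “propaganda-as-a-service” framing above is getting at.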

Corporate incentives and perceived bias

  • Many see tech companies as amoral, chasing whichever side holds power or majority sentiment, not consistent principles.
  • Some claim specific communities (e.g., major subreddits) are heavily moderated in favor of certain geopolitical narratives, with bans and deletions enforcing a party line.

Attention economy, manipulation, and personal defenses

  • Commenters link pervasive ads and algorithmic feeds to rising cynicism, “slop” content, and addiction to outrage.
  • There is skepticism that media literacy alone protects against manipulation; people still respond to primal triggers even when they know they are being manipulated.
  • Some advocate strict ad blocking as basic self-defense, arguing ads are now a primary vector for scams, malware, and state propaganda.

Marketing evolution and psychological exploitation

  • One subthread traces marketing’s shift from demonstrating product value to manufacturing aspirational lifestyles and envy.
  • Others respond that manipulation and propaganda have always been central to advertising; only the tools and reach have improved.
  • Particular concern is raised about exploitative mobile games and microtransactions targeting children, described as frying their reward circuits.

Geopolitics and one-sided narratives

  • The original article’s focus on Israeli government ad campaigns draws strong reactions.
  • Some defend those campaigns as justified given allegations about UNRWA and the broader conflict context; others see them as “genocide propaganda” that platforms should refuse.
  • One long comment accuses the article itself of being propaganda for omitting context like the initial attacks and the other side’s online operations.