Want to sway an election? Here’s how much fake online accounts cost

Concerns about Hungary and Russian influence

  • Commenters flag the 2026 Hungarian election as a key test case, describing the ruling party as closely aligned with Russia and allegedly skirting Facebook’s political-ad restrictions.
  • Anecdotes and demographic data point to educated, pro‑EU Hungarians emigrating, feeding concern that the electorate is becoming less pro‑EU.

State of social media platforms

  • Several comments describe Twitter/X as saturated with racist and Nazi content, especially in default or “For You” feeds, while others report clean feeds and attribute the difference to user choices and algorithmic personalization.
  • There’s debate over whether Twitter is uniquely bad or just similar to TikTok, Instagram, Facebook, etc., all seen as “algorithmic content addiction generators.”
  • Some argue algorithms are designed to radicalize and reward outrage; others say they merely reflect majority tastes.

Effectiveness and mechanics of fake accounts

  • People question whether cheap fake accounts actually change votes. One side cites Cambridge Analytica, Team Jorge, and other bot networks as evidence that microtargeted manipulation works; the other, drawing on digital‑ad experience, doubts it meaningfully shifts votes.
  • Several stress that account price alone is misleading: quality, geography, spam detection, IP/proxy infrastructure, human labor, and platform bot‑detection all matter.
  • Cheap foreign accounts may still be useful for mass upvoting or astroturfing, but are less effective when not in the same cohort as the target audience.
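The point about sticker price being misleading can be made concrete with a toy cost model (all figures and parameter names below are hypothetical illustrations, not from the thread): the cost that matters is per *surviving* account, after detection losses and per-account infrastructure and labor are folded in.

```python
# Toy model: effective cost per account that survives platform bot detection.
# All numbers are hypothetical, for illustration only.

def effective_cost_per_surviving_account(
    sticker_price: float,  # advertised price per bought account
    survival_rate: float,  # fraction that evade bot detection (0-1)
    proxy_cost: float,     # residential IP/proxy cost attributed per account
    labor_cost: float,     # human warm-up/management cost per account
) -> float:
    """Infrastructure and labor are paid for every account bought,
    but only the surviving fraction is usable."""
    total_cost_per_account = sticker_price + proxy_cost + labor_cost
    return total_cost_per_account / survival_rate

# A $0.50 account with a 25% survival rate, $1 of proxy cost, and $2 of
# labor effectively costs $14 per usable account -- 28x the sticker price.
print(effective_cost_per_surviving_account(0.50, 0.25, 1.00, 2.00))
```

Under these assumptions, tightening detection (lowering `survival_rate`) raises the effective cost far faster than raising the sticker price does, which is the commenters' point that price alone is a poor proxy for manipulation capacity.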

Democracy, manipulation, and money

  • Broader worries: democracy’s structural vulnerabilities, social media as the “coup de grâce” after mass media, and a drift toward oligarchy or techno‑feudalism.
  • Debate over whether restricting manipulation tools (e.g., by making accounts costly) is good—likened by some to limiting access to bioweapons, by others to entrenching a “monopoly on manipulation.”
  • Discussion of campaign finance and Citizens United: some argue money and elite preferences already dominate policy; others note diminishing returns to ad spend.

Identity, regulation, and research

  • Strong suspicion that phone‑number requirements are more about tying online and real‑world identities than preventing abuse; this criticism extends even to privacy‑branded apps and AI services.
  • A long thread speculates that “think of the children” age‑gating and ID pushes are partly motivated by fear of AI‑driven bot swarms overwhelming democratic discourse, making human verification politically inevitable despite civil‑liberties concerns.
  • Commenters note that misinformation research was heavily funded after Cambridge Analytica but now faces U.S. political backlash, with grants cancelled and visas reportedly denied to fact‑checkers and moderators.
  • Clarification is offered that Cambridge Analytica was not a formal spin‑out of the University of Cambridge, despite name confusion.

Countermeasures and future outlook

  • Proposals include minimum per‑account fees, stronger identity verification, and heavy regulation or even blocking of major social platforms as a sovereignty issue.
  • Others argue these measures risk overreach, centralizing control, or simply won’t scale against future AI swarms.
  • A recurring pessimistic theme is that large, open platforms may be doomed to become “dark forests” dominated by bots, with only small, tightly moderated communities remaining relatively trustworthy.