Manufactured consensus on x.com

Perceived degradation of X

  • Many describe X as overrun by bots (crypto scams, porn, “engagement bait”) and graphic violence, with feeds full of racialized crime content and rage-bait from both left and right.
  • Some users report never engaging with this type of content yet seeing it constantly, suggesting algorithmic promotion rather than organic interest.
  • Others say their feeds look relatively normal, though dominated by recycled, low-effort content, implying strong personalization and uneven experiences.

Algorithmic manipulation & “manufactured consensus”

  • Core claim discussed: high‑follower accounts (especially the owner) can dramatically throttle or amplify others’ reach (e.g., accounts reporting massive overnight drops in reach after being muted by, or feuding with, the owner).
  • People connect this to an “author_is_elon” flag in the released code and reports of manual boosts, arguing that social proof now reflects proximity to power, not genuine consensus.
  • Some see X as a propaganda channel where posts are suppressed until brigaded by bots (hate or “love” bots), then surfaced as if controversial or popular.
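The mechanism commenters describe can be sketched in a few lines. The released Twitter source did expose an "author_is_elon" feature label, but the boost logic and multiplier values below are purely illustrative assumptions, not anything from the actual codebase:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    engagement_score: float  # likes/reposts/replies, however weighted

# Hypothetical per-author multipliers. The released source contained an
# "author_is_elon" feature flag; whether and how it scaled ranking is the
# disputed question, so this table is an illustrative assumption only.
AUTHOR_BOOSTS = {"owner_account": 1000.0}

def rank(posts: list[Post]) -> list[Post]:
    """Sort posts by engagement, scaled by any configured author boost."""
    def score(p: Post) -> float:
        return p.engagement_score * AUTHOR_BOOSTS.get(p.author, 1.0)
    return sorted(posts, key=score, reverse=True)
```

The point of the sketch is the commenters' point: once such a multiplier exists, a post with modest engagement from a boosted account outranks far more engaged posts from everyone else, so apparent popularity reflects configuration, not consensus — and from the outside, the result is indistinguishable from organic virality.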

Evidence, skepticism, and transparency

  • Several commenters criticize the article as light on concrete proof, relying heavily on a single graph and speculative framing.
  • Others point to outside investigations and a research paper suggesting algorithmic bias favoring the owner, but note the lack of ongoing open-source transparency.
  • There’s debate over whether reported boosts/deboosts are substantiated manipulation or explainable by engagement-optimized ranking.

Political bias, censorship, and ideological battles

  • Disagreement over whether X has become “more central” or shifted sharply right; anecdotes of brand-new accounts immediately being shown right‑wing content contradict claims that what you see is “mostly who you follow.”
  • Broader arguments about past moderation (e.g., Hunter Biden laptop, “Twitter Files”), what counts as censorship vs. enforcing rules on slurs/threats, and whether liberals previously dismissed such concerns.
  • Tangential but intense thread on Holocaust denial and whether it exists on the far left, with most saying it’s extremely rare compared to the far right.

Comparisons with HN, Reddit, and other platforms

  • HN is seen as algorithmically simple and more transparent, but users suspect coordinated voting and growing echo‑chamber effects.
  • Reddit is portrayed as “worst by far” for manufactured consensus: engagement sorting, heavy mod deletions, astroturfing in niche subs, bot flooding, and API changes weakening moderation tools.
  • Historical examples (Voat, /r/The_Donald) and the “Nazi bar” metaphor illustrate how karma/engagement systems can let extremists capture platforms.
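The “algorithmically simple” point about HN refers to its front-page formula. A commonly cited approximation (reconstructed from public discussions of the algorithm, not HN’s actual production code, which also applies penalties and flag weighting) is:

```python
def hn_score(points: int, age_hours: float, gravity: float = 1.8) -> float:
    """Commonly cited approximation of HN front-page ranking:
    upvotes decay against item age raised to a 'gravity' exponent.
    Not the real production code, which adds penalties and flag weighting."""
    return (points - 1) / (age_hours + 2) ** gravity
```

Because the whole ranking fits in one line, a newer story with fewer points can visibly overtake an older, higher-scored one — which is why commenters call HN more transparent than feeds driven by opaque, tunable engagement models.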

Influence as capital & user responses

  • Several note that influence compounds like wealth: a few “super accounts” can silence critics and promote allies, entrenching their power over discourse.
  • Some argue this is not new—just old gatekeeping at planetary scale—while others stress the new danger of single‑owner platforms with opaque, tunable algorithms.
  • Responses range from “just opt out” (delete accounts, buy nothing from associated companies) to calls for protocol-based or public-utility‑like alternatives that reduce central control.

Broader pessimism about social media

  • Many conclude that genuine social interaction is unprofitable on ad‑driven platforms, which inevitably drift toward rage, propaganda, and manufactured consensus.
  • There’s concern that even critical discussions like this, if they don’t lead to mass abandonment, may normalize and entrench the power of platform “editors” rather than restrain them.