Study: Social media probably can't be fixed
Human behavior vs algorithms
- Several commenters argue that “people choose outrage,” but many others counter that this underestimates hard-wired susceptibility to gossip, rage-bait, and propaganda; engagement is often compulsive and subconscious, closer to addiction than to deliberate choice.
- Some stress personal responsibility and curation (mute/block, “I don’t like this”), while others note these controls are obscure, ineffective, or constantly undermined by product decisions.
- A recurring view: the core dysfunction existed in Usenet, mailing lists, and forums; algorithms amplify, but don’t invent, flamewars and polarization.
Addiction, incentives, and regulation
- Many compare social media to an unregulated drug or to smoking: engineered “bliss points,” dopamine loops, and corporate incentives misaligned with public health.
- Counterpoint: unlike cigarettes, social media also has genuine utility (keeping in touch, coordinating events), so the analogy is incomplete.
- Strong thread on incentives: ad-based, profit-seeking platforms are structurally driven to maximize engagement via outrage, sex, rage-bait, and lax moderation. Some claim this means “it can’t be fixed”; others say the incentives themselves can be changed through regulation or user-paid, public, or nonprofit models.
Chronological feeds, algorithms, and “fixes”
- Popular proposed fix: remove recommendation algorithms and show only content from followed accounts, in chronological order (see the sketch after this list).
- Critics respond that:
  - Chronological feeds can still amplify extreme content at scale.
  - Most users want passive discovery, entertainment, and celebrity/news content; “pure” social networks tend to lose attention to more addictive competitors.
  - Even if you personally avoid algorithmic feeds, they still shape what your followers see and who is brought into your conversations.
- Decentralized/federated systems (Mastodon, Bluesky, forums) are praised for better culture, but also criticized as too small, too labor‑intensive to use well, or just “Twitter 2”.
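To make the proposed fix and the first critique concrete, here is a minimal sketch (not from the study; `Post`, `followed`, and the scoring weights are all hypothetical) contrasting a followed-only chronological feed with a crude engagement ranking:

```python
# Illustrative only: the "chronological fix" in its simplest form, next to
# the kind of engagement ranking it would replace.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: float  # seconds since epoch
    likes: int
    reshares: int

def chronological_feed(posts: list[Post], followed: set[str]) -> list[Post]:
    """Only accounts you follow, newest first: the commonly proposed fix."""
    return sorted(
        (p for p in posts if p.author in followed),
        key=lambda p: p.timestamp,
        reverse=True,
    )

def engagement_feed(posts: list[Post]) -> list[Post]:
    """A crude engagement ranking: whatever provokes reactions rises,
    regardless of who posted it (the dynamic critics blame for outrage)."""
    return sorted(posts, key=lambda p: p.likes + 2 * p.reshares, reverse=True)
```

Even the chronological version re-surfaces whatever followed accounts reshare, which is the critics’ point: removing the ranking model changes who selects content, not whether extreme content can still spread at scale.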
Moderation, community size, and “third places”
- Strong agreement that active, value‑driven moderation and clear site culture (as in old forums or HN) substantially reduce dysfunction—but may not scale to billions of users.
- Some call this fundamentally a “third place” / offline community problem that software can’t solve; others describe attempts at co‑working social clubs as partial answers.
Skepticism about the LLM-based study
- Multiple commenters doubt the use of LLM agents to simulate users: the models are trained on today’s toxic platforms, don’t learn or maintain stable identities over time, and can’t capture second-order, long-term social effects.
- Survey researchers in the thread warn against replacing real human samples with “synthetic personas” and consider this trend methodologically unsound.
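As a rough illustration of the methodology being criticized (this is not the study’s code; `generate()` is a hypothetical stand-in for whatever model call such a study would make), a persona-driven simulation loop tends to look like this:

```python
# Illustrative sketch of an LLM-agent social simulation of the kind the
# commenters are skeptical of.
import random

def generate(persona: str, prompt: str) -> str:
    """Stand-in for an LLM call; a real study would query a model here."""
    return f"[{persona} reacts to: {prompt[:40]}...]"

personas = ["news junkie", "casual lurker", "partisan firebrand"]
feed: list[str] = ["seed post about a divisive topic"]

for step in range(3):                    # simulated "days"
    for persona in personas:
        seen = random.choice(feed)       # each agent samples the feed
        feed.append(generate(persona, seen))  # ...and produces a reaction

# Critics' point: each call is stateless. The "users" carry no memory,
# relationships, or identity between steps, so second-order, long-term
# effects (radicalization, norm shifts) fall outside what this can measure.
```

The statelessness is the crux of the objection: every reaction is generated fresh from a static persona string, so nothing like relationship formation or gradual opinion drift can emerge from the simulation.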