Meta buried 'causal' evidence of social media harm, US court filings allege
Legal context and evidence
- Several commenters stress that allegations in court filings are not facts and can be selectively framed, but others counter that discovery-based internal Meta documents are hard to dismiss.
- There is skepticism about how studies are summarized: phrases like “people who stopped using Facebook reported…” are seen as weak causal evidence, and some note that overall research on social media’s causal impact on mental health is still mixed.
- Questions are raised about the design of Meta’s 2020 “Project Mercury” experiment (e.g., whether participants were randomly assigned to deactivate or self-selected).
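The random-assignment question matters because self-selection alone can manufacture an apparent effect. A minimal simulation sketch (hypothetical numbers, not Meta's actual data) shows why: if deactivation has zero true effect but unhappier users are more likely to choose to deactivate, the naive group comparison still shows a large gap, while a randomized design does not.

```python
import random
import statistics

random.seed(0)
N = 10_000
TRUE_EFFECT = 0.0  # assume deactivation has no real effect on wellbeing

# Each person has a baseline wellbeing score (arbitrary units).
people = [random.gauss(0, 1) for _ in range(N)]

def self_selects_deactivation(w):
    # Assumption: lower wellbeing -> more likely to quit the platform.
    p = min(max(0.5 - 0.3 * w, 0.0), 1.0)
    return random.random() < p

# Self-selected design: compare those who chose to deactivate vs those who stayed.
choices = [(w, self_selects_deactivation(w)) for w in people]
naive = (statistics.mean(w + TRUE_EFFECT for w, d in choices if d)
         - statistics.mean(w for w, d in choices if not d))

# Randomized design: a coin flip assigns deactivation.
assigned = [(w, random.random() < 0.5) for w in people]
rct = (statistics.mean(w + TRUE_EFFECT for w, d in assigned if d)
       - statistics.mean(w for w, d in assigned if not d))

print(f"self-selected estimate: {naive:+.2f}")  # large spurious gap
print(f"randomized estimate:    {rct:+.2f}")    # near zero, as it should be
```

The self-selected comparison reports a sizable negative "effect" purely from who opted in, which is why commenters press on whether Project Mercury's participants were randomized.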
Comparisons to tobacco, oil, gambling, and advertising
- Many liken Meta to tobacco and oil companies: internal knowledge of harm, burying research, and continuing harmful practices for profit.
- Some argue the broader pattern includes petrochemicals, PFAS, pharmaceuticals, finance, and pervasive advertising that deliberately fuels dissatisfaction.
- Social media is often portrayed as qualitatively worse than TV/MTV/video games because of personalized recommendation algorithms and social comparison dynamics.
Addiction, mental health, and user experience
- Multiple personal accounts compare quitting Facebook/Twitter to quitting smoking: withdrawal, then increased calm and mental clarity.
- Others report no major change, suggesting heterogeneous effects.
- Commenters argue the harm comes from systems engineered for maximum engagement, akin to slot machines; some distinguish between “naturally a bit addictive” (forums, HN) and “scientifically optimized addiction” (TikTok, Instagram).
Children, teens, sex abuse, and hate
- Internal prioritization of the metaverse over child safety is highlighted as especially damning.
- Allegations about high “strike” thresholds before banning suspected sex traffickers are seen as evidence of growth-over-safety culture.
- Commenters reference Meta’s role in amplifying hate that contributed to atrocities and draw parallels to genocidal radio in Rwanda.
Elder fraud and scams
- Several describe parents or grandparents losing savings to scams on Meta platforms, including WhatsApp; Marketplace and romance scams are called “a silent crisis.”
- There are calls for platforms to be held liable for scam ads and for stronger legal protections for elders and minors online.
Responsibility: corporations, government, workers
- Strong consensus that companies will not meaningfully self-police; views diverge on remedies:
  - Some advocate a “corporate death penalty,” nationalization, or personal liability (including prison) for executives.
  - Others worry expanding state power will backfire and prefer easier civil suits and piercing corporate shields.
- Debate over whether social media firms that abuse “neutral platform” protections under Section 230 should instead be treated as publishers.
- Meta employees are criticized as complicit; proposals include informal hiring blacklists, though others warn against punishing defectors or treating all roles equally.
What to do about social media
- Proposals include: regulating recommender systems like gambling, taxing harms like tobacco, mandating internal impact studies, and treating algorithmic feeds as editorial speech with full responsibility.
- Some advocate personal boycotts and exiting platforms; others argue alternatives already exist (forums, blogs, messaging, small group chats) but are less addictive, hence smaller.
- A few suggest building non-ad-driven, cooperative or nonprofit communication tools, and client-side defenses against dark patterns, while acknowledging these may develop their own incentives.