People want platforms, not governments, to be responsible for moderating content
Meaning of “responsible”
- Several argue the survey likely conflates “responsibility” with legal liability: people want platforms to face consequences for hosting or amplifying harmful content, not governments deciding directly what is true.
- Common framing: governments set the rules; courts enforce them against whoever is liable (users, platforms, or both). The government “holds parties responsible”; it is not itself the speaker.
Who decides truth and illegality? Courts vs “ministry of truth”
- Some fear any stronger role for government becomes a “ministry of truth.” Others counter that courts already arbitrate perjury, libel, slander, fraud, and defamation without such a ministry.
- There’s debate over gray areas: claims that contradict scientific consensus (e.g., on Covid), hate speech, Holocaust denial, terrorism advocacy, and incitement to violence.
- One side leans toward free‑speech absolutism (citing Article 19 and warning about European prosecutions over posts); the other emphasizes limits (incitement, reputational harm, Nazi symbols) and Popper’s “paradox of tolerance.”
Platforms, amplification, and liability
- Disagreement over analogies: some say platforms are like mail carriers and shouldn’t be blamed for users’ lies; critics respond that recommendation algorithms and amplification make platforms more like publishers.
- Once a platform actively promotes content or optimizes for outrage, many see it as responsible for the externalities of that design.
- Section 230 / DMCA–style safe harbors are seen by some as necessary for platforms to exist at all; others say they created a corrosive environment by removing incentives to address harm.
Moderation models: platform, government, user
- Many insist unmoderated platforms degenerate into spam, harassment, or extremism; they prefer active but transparent moderation (often by many small communities), possibly federated.
- Others fear “thought police,” noting that moderation often drifts from civility enforcement to ideological filtering.
- Some advocate client‑side filtering and protocol-based systems, where individuals choose what to see instead of relying on centralized gatekeepers (see the sketch after this list).
- Network effects and “public square” concerns also surface; the suggested countermeasure there is antitrust enforcement rather than speech regulation.
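
To make the client-side model concrete, here is a minimal Python sketch (all names and rules are hypothetical, not tied to any real protocol or library) of a feed filter that runs entirely on the reader’s device:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    labels: set[str]  # e.g., tags attached by optional third-party labelers

# Hypothetical, user-maintained preferences: each person keeps their own
# rules on their own device; no central moderator is involved.
BLOCKED_AUTHORS = {"spam_bot_42"}
MUTED_LABELS = {"spam", "gore"}
MUTED_KEYWORDS = {"crypto giveaway"}

def visible(post: Post) -> bool:
    """Decide locally whether to render a post, based only on the
    reader's own block list, muted labels, and muted keywords."""
    if post.author in BLOCKED_AUTHORS:
        return False
    if post.labels & MUTED_LABELS:
        return False
    return not any(kw in post.text.lower() for kw in MUTED_KEYWORDS)

# Example feed as it might arrive from an open protocol; filtering
# happens after delivery, on the client.
feed = [
    Post("alice", "New essay on federated moderation", set()),
    Post("spam_bot_42", "FREE crypto giveaway, click now!", {"spam"}),
]
print([p.text for p in feed if visible(p)])  # -> ['New essay on federated moderation']
```

The design point advocates make is that the same feed can be delivered to everyone while each client applies its own rules, so “moderation” becomes a per-user setting rather than a platform-wide or government-mandated decision.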
Survey design, policy complexity, and shared responsibility
- Several call the survey question too vague: “responsible” could mean censorship, civil liability, algorithmic downranking, or cooperation with courts.
- Proposed frameworks include:
  - Users primarily liable for their own speech;
  - Platforms liable when they amplify or ignore clearly illegal content;
  - Governments confined to clear, narrowly defined prohibitions (e.g., libel, direct incitement, CSAM), plus competition and consumer-protection enforcement.
- There is broad unease that neither governments nor platforms have a good track record, and no consensus on a clear, workable middle ground.