NY Times gets 230 wrong again
Debate over Section 230’s Purpose and History
- Several comments restate 230’s core function: protect platforms from being treated as publishers of user content while allowing moderation.
- Pre-230 cases (Cubby v. CompuServe; Stratton Oakmont v. Prodigy) are invoked: no moderation → no liability, some moderation → publisher liability, the perverse incentive 230 was meant to fix.
- Some argue blanket immunity is too strong and would prefer case-by-case judicial decisions; others say that would chill moderation and favor zero-moderation cesspools.
- There’s disagreement over whether 230 is primarily “about moderation” or whether recommendations/algorithms change the analysis.
Algorithms, Recommendations, and Free Speech
- One side: the order in which a platform recommends content is the platform’s own opinion; algorithms are an extension of editorial judgment, and thus speech protected by the First Amendment.
- Others argue that at some point sequencing content creates new meaning and the platform becomes a “speaker,” potentially liable for harms.
- Debate over whether holding recommenders to a heightened duty (e.g., liability for feeds that foreseeably cause harm) is workable or would make recommendation legally impossible.
First Amendment vs. Platform Moderation
- Clarification that the First Amendment restricts government, not private platforms; platforms can remove users or content for almost any reason.
- Some want large platforms treated like utilities/public squares, with major limits on bans, arguing that being excluded is akin to losing free speech in practice.
- Others insist forcing platforms to host speech conflicts with the First Amendment and editorial freedom.
Liability, “Actual Knowledge,” and Harmful Content
- One camp claims platforms hide behind 230 and “willful blindness,” and should bear more responsibility once notified of illegal or harmful content.
- Others respond that 230 immunity doesn’t hinge on knowledge; primary liability belongs to the original speakers, and forcing platforms to adjudicate claims like defamation would lead to over-removal.
Discrimination and Public Accommodations Online
- Long subthread on whether anti-discrimination law for “public accommodations” applies to websites and social platforms.
- Some argue sites (or subcommunities like subforums) that function as public spaces should not be allowed to exclude users based on protected classes like religion.
- Others counter that:
- Many discrimination laws cover employers and physical venues, not user-run communities.
- Bans by user-moderators are user actions, not company actions.
- Morally, several agree identity-based bans are wrong; legally, whether anti-discrimination law applies is contested and described as state- and context-dependent, with parts of the law “unclear.”
Transparency, User Control, and Algorithmic Power
- Some see 230 as essential for robust moderation (spam, hate, misinfo). Removing it, they argue, would produce unmoderated “wild west” platforms.
- Others push for more transparency and user control over recommendation systems, especially where they may amplify phobias, political content, or harmful material to children.
Real Identity, Anonymity, and Accountability
- A minority view favors strong identity verification so harmful anonymous actors can be held responsible.
- Critics warn this effectively means universal doxxing and threatens privacy, and that history shows “real name” policies don’t reliably improve behavior.
Critiques of Media and Legal Understanding
- Multiple comments criticize mainstream coverage (including the NYT piece the linked article critiques) for misdescribing 230, conflating it with the CDA’s struck-down censorship provisions, or muddling First Amendment doctrine.
- There’s also meta-critique that online 230 debates often feature non-lawyers overstating legal claims or reading the Constitution too literally, without regard to doctrine.