The NO FAKES Act has changed, and it's worse

Scope and comparison to existing laws

  • Several commenters note that parts of NO FAKES (rapid takedown, staydown filters, user unmasking) resemble existing EU rules (DSA, anti-terrorism, German “repeat upload” rulings) and US regimes (CSAM, DMCA).
  • Others argue the comparison is misleading: EU enforcement emphasizes “good faith” and proportionality, whereas US regimes tend to be rigid, punitive, and easily abused.

Free speech, censorship, and authoritarian drift

  • Many see NO FAKES as another step toward broad, easily weaponized censorship infrastructure, especially given its vague definition of “replicas” and the lack of any clear evidentiary standard for complaints.
  • Concerns include: chilled speech, overbroad filters that wipe out fair use and parody, forced de‑anonymization of speakers, and use against political dissidents rather than just deepfakes.
  • Some push back that harms from AI “nudifiers,” impersonations, and fake content are real and demand some response, but condemn this specific design as “do something, badly” legislation.

Impact on platforms and competition

  • Strong suspicion that only large platforms can afford the mandated filtering and compliance stack, turning the law into a regulatory moat and form of crony capitalism.
  • Debate over whether the bill really targets only “gatekeepers” or also burdens small firms, with no clear consensus.

Enforcement, power, and violence

  • A long subthread argues whether all laws are ultimately backed by state violence (“monopoly on violence”) versus more diffuse coercion (fines, cut‑off services).
  • Even skeptics concede that bad laws are far easier to fight before passage than after enforcement mechanisms exist.

EFF, Big Tech, and priorities

  • Some distrust the EFF as overly anti-Big-Tech and inattentive to newer state abuses; others point to recent EFF litigation against federal data consolidation as a rebuttal.
  • There’s disagreement over whether Big Tech has been a “benign steward” or a major contributor to the current information crisis.

Technical and practical issues

  • Commenters question the feasibility of accurate replica filters: AI makes cheap variation trivial, so filters face escalating compute costs and rising false-positive rates as matching thresholds loosen.
  • A few suggest alternative approaches: open‑source “httpd for content inspection,” watermarking, or “friction” mechanisms on social platforms to slow virality rather than hard bans.
  • Several readers remain unclear on the bill’s exact mechanics and seek a non‑alarmist, plain‑language explanation.
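
To illustrate the matching problem the commenters raise, here is a toy perceptual-hash sketch in Python. The scheme (a simple average hash) and all names are illustrative assumptions, not anything specified by the bill or used by a particular platform; real replica filters use far more robust features, but the trade-off is the same.

```python
def average_hash(pixels):
    """Hash a grayscale image (list of rows of 0-255 ints) to a list of bits."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    return [1 if p > avg else 0 for p in flat]

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

# A 4x4 "image" and a near-duplicate with a single pixel nudged,
# standing in for the cheap variation AI generation makes trivial.
original = [[10, 200, 30, 220],
            [15, 210, 25, 230],
            [12, 205, 28, 225],
            [11, 215, 27, 235]]
variant = [row[:] for row in original]
variant[0][0] = 130  # one cheap perturbation

distance = hamming(average_hash(original), average_hash(variant))
```

Even this one-pixel change shifts the hash, so an exact-match filter misses the variant. A real filter must therefore match fuzzily against a distance threshold, and every loosening of that threshold trades missed variants against false positives on unrelated content, which is the core of the feasibility objection.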