UK Expands Online Safety Act to Mandate Preemptive Scanning
Definition of “Unwanted Nudes” and Consent
- Many commenters mock the idea that software can distinguish “wanted” from “unwanted” images, especially within private relationships.
- Satirical threads imagine bureaucratic permits for receiving dick pics, highlighting how absurd algorithmic consent detection seems.
- Some note that many people do want explicit images; the issue is the lack of consent, not the content per se.
Technical and Product Design Questions
- People question whether scanning would be done locally on-device or via external services; most assume external scanning despite privacy rhetoric.
- Suggestions appear for user-controlled settings (e.g., “allow nudes from contacts only,” per-contact “Allow X” flags), but these opt-in controls are seen as fundamentally different from a government mandate (see the sketch after this list).
- Accuracy concerns: judging whether an image is “unwanted” would require understanding relationship context, intent, and sarcasm, which commenters regard as infeasible.
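To make the contrast concrete, below is a minimal sketch of the user-controlled filtering commenters describe: a per-contact media policy stored and evaluated locally on-device, with no external scanning service involved. The settings model, names, and policy values are hypothetical illustrations, not any platform’s actual API.

```python
# Hypothetical sketch (not a real platform API): a per-contact media policy
# stored and evaluated entirely on the user's device.
from dataclasses import dataclass, field
from enum import Enum


class MediaPolicy(Enum):
    ALLOW = "allow"  # show explicit media from this sender
    BLUR = "blur"    # deliver, but blur until the user taps through
    BLOCK = "block"  # drop explicit media from this sender


@dataclass
class FilterSettings:
    # Global defaults: contacts may send explicit media; strangers get blurred.
    default_for_contacts: MediaPolicy = MediaPolicy.ALLOW
    default_for_strangers: MediaPolicy = MediaPolicy.BLUR
    # Per-contact overrides, keyed by contact ID (the "Allow X" flags above).
    overrides: dict[str, MediaPolicy] = field(default_factory=dict)

    def policy_for(self, sender_id: str, is_contact: bool) -> MediaPolicy:
        """Resolve the policy for an incoming image, most specific rule first."""
        if sender_id in self.overrides:
            return self.overrides[sender_id]
        return self.default_for_contacts if is_contact else self.default_for_strangers


# Usage: the user blocks one sender; everyone else falls back to the defaults.
settings = FilterSettings()
settings.overrides["+447700900123"] = MediaPolicy.BLOCK

print(settings.policy_for("+447700900123", is_contact=True))   # MediaPolicy.BLOCK
print(settings.policy_for("+447700900456", is_contact=False))  # MediaPolicy.BLUR
```

The difference commenters stress is who decides: here the user opts in and tunes the policy locally, whereas the mandate would impose scanning regardless of user preference.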
Privacy, Surveillance, and Censorship Fears
- Core worry: the OSA expansion compels preemptive scanning and blocking of all user content, effectively ending private digital communication.
- Several see this as infrastructure for broader censorship (e.g., blocking “misinformation” or criticism of politicians), not merely a measure against unwanted nudes.
- Comparisons are made to China, PRISM, and a “Ministry of Truth”; some argue that open, legally sanctioned mass surveillance is worse than secret programs that at least embarrass governments when exposed.
Child Safety, Harassment, and “What’s the Alternative?”
- Supportive voices emphasize real harms: children receiving explicit images, AI-generated sexual imagery of schoolchildren, and platforms failing to police themselves.
- They argue that average users expect devices to be “safe by default” and cannot be expected to configure protections themselves.
- Opponents counter that such acts are already crimes; the proper response is enforcing existing laws, not blanket monitoring.
- Debate over unreported crimes: some say police “can’t” pursue what is never reported; others call that stance unacceptable where child abuse is concerned.
Impact on Platforms and the Open Internet
- Fear that only large platforms with AI budgets can comply; small forums risk ruinous fines if an attacker posts a single prohibited image.
- Legal vagueness (compliance that is only “probably enough”) is seen as chilling, pushing sites to over-censor.
- Some predict mandatory government-approved middleware for all messaging, eliminating users’ current ability to choose among strict moderation, anonymity, or privacy.
Musk, X, and AI-Generated CSAM
- The thread digresses into whether X and its Grok AI permit AI-generated child sexual content.
- One side says this proves platforms won’t self-regulate; the other insists CSAM is banned and that AI-generated images, while a legal gray area in some jurisdictions, remain morally unacceptable.
- There is consensus that Grok producing underage sexual imagery is wrong; the dispute is over whether this justifies laws like the OSA.
Legitimacy and Source Skepticism
- A minority defends the UK/EU regulatory impulse, blaming tech’s failure to “play ball.”
- Others distrust the linked site for promoting fringe “free speech” platforms, treating its framing as ideologically loaded even when its concerns about surveillance resonate.