AI surveillance should be banned while there is still time
Policy and regulation proposals
- Requests for concrete policies: suggestions include mandatory on-device blurring of faces and bodies before any cloud processing (a minimal sketch follows this list), and strict limits on training models on user data.
- Some propose strict liability frameworks: large multipliers on both damages and profits for harms caused, to realign incentives.
- Another thread argues for treating AI like a fiduciary: privileged “client–AI” relationships, bans on configuring AIs to work against the user’s interests, and disclosure/contestability whenever AI makes determinations about people.
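
To make the on-device blurring proposal concrete, here is a minimal sketch using OpenCV's bundled Haar cascade face detector. The detector choice, the image path, and the `upload_to_cloud` stub are illustrative assumptions, not anything specified in the thread; a real product would likely use a stronger detector and also cover bodies and license plates.

```python
import cv2

def redact_faces(frame):
    """Blur any detected faces before the frame leaves the device."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(
            frame[y:y + h, x:x + w], (51, 51), 0
        )
    return frame

def upload_to_cloud(frame):
    # Placeholder: in a real product this would be the network call.
    pass

# Only the redacted frame is ever handed to the network layer.
upload_to_cloud(redact_faces(cv2.imread("doorbell_snapshot.jpg")))
```

The key design point is ordering: redaction happens before, not after, the frame reaches any code path that can touch the network.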
Training data, copyright, and data ownership
- Several argue LLMs should train only on genuinely public-domain data, or inherit the licenses of their training data, and that individuals should own all data about themselves.
- Others stress that the cat is out of the bag: enforcing new rules now would advantage early violators who have already trained on scraped data.
- There is anger at low settlements for book datasets and claims that current practices are systemic copyright infringement.
Chatbots, persuasion, and privacy risks
- Strong concern that long-lived chat histories plus personalization enable “personalized influence as a service” (political, financial, emotional).
- People highlight how future systems could use all past chats (with bots and with humans) as context for targeted manipulation, or even as court evidence.
- Some see privacy-focused chat products as meaningful progress; others see them as marketing that still leaves users exposed (e.g., 30‑day retention, third-party processors).
Skepticism about bans and institutions
- Many doubt AI surveillance can be effectively banned: illegal surveillance isn’t stopped now, laws lag by years, and fines are tiny relative to profits.
- Some view belief in regulatory fixes as naïve given concentrated wealth, lobbying, and revolving doors.
- Others argue “do something anyway”: build civic tech, secure communications, and new organizing spaces.
Geopolitics, power, and arms-race framing
- One camp: surveillance AI is like nuclear weapons; unilateral restraint means strategic defeat by more authoritarian states.
- Counterpoint: nukes already constrain war; “winning” with artificial superintelligence (ASI) or AI surveillance may be meaningless, or catastrophically dangerous for everyone.
Corporate behavior and trust
- Persistent distrust of big AI firms: claims of therapist- or attorney-style privilege are seen as incompatible with those firms’ monitoring, reporting, and ad-driven incentives.
- DuckDuckGo is both praised for pushing privacy and criticized for “privacy-washing” and reliance on third-party trackers/ads.
Platform moderation and everyday harms
- Numerous anecdotes of AI or semi-automated moderation wrongly banning users on large platforms, with no meaningful appeals.
- Concern that AI-driven enforcement plus corporate dominance creates undemocratic, opaque control over speech, jobs, and services.
Advertising, manipulation, and surveillance capitalism
- Debate over targeted ads: some users value the relevance; others see ads as adversarial behavioral modification, not neutral product discovery.
- Worry that granular profiling lets firms push each person to their maximum willingness to pay, shifting surplus from users to corporations and AI providers (a toy calculation follows this list).
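
A toy calculation makes the surplus-shift worry concrete. The numbers below are invented purely for illustration: three buyers, one product, and a seller that either posts a single price or, with perfect profiling, charges each buyer exactly their maximum.

```python
# Invented numbers: three buyers with different maximum willingness
# to pay for a product that costs the seller $40 to supply.
willingness_to_pay = [60, 80, 120]
cost = 40

def profit_at(price):
    """Seller profit if everyone faces the same posted price."""
    return sum(price - cost for w in willingness_to_pay if w >= price)

# Uniform pricing: the seller's best single price point.
uniform_price = max(willingness_to_pay, key=profit_at)
uniform_profit = profit_at(uniform_price)
uniform_surplus = sum(w - uniform_price for w in willingness_to_pay
                      if w >= uniform_price)

# Perfect profiling: each buyer is charged exactly their maximum,
# so every dollar of surplus moves to the seller.
profiled_profit = sum(w - cost for w in willingness_to_pay)
profiled_surplus = 0

print(f"uniform  -> price={uniform_price}, profit={uniform_profit}, "
      f"consumer surplus={uniform_surplus}")   # profit=80, surplus=40
print(f"profiled -> profit={profiled_profit}, "
      f"consumer surplus={profiled_surplus}")  # profit=140, surplus=0
```

In this example, perfect profiling raises profit from 80 to 140 while consumer surplus falls from 40 to 0; the fear in the thread is that AI-grade profiling approximates this in practice.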
Cultural and technical responses
- Suggestions include: running models locally (see the sketch after this list), hardware-based business models, avoiding anthropomorphizing AIs, opting out of smartphones and social media, and building privacy-preserving or offline alternatives.
- Underlying all of this is a shared fear that pervasive AI surveillance will normalize self-censorship and put genuine privacy practically out of reach.
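
As one concrete version of the “run models locally” suggestion, here is a minimal sketch using the llama-cpp-python bindings. The package choice, model file, and path are assumptions for illustration, not anything endorsed in the thread; any local runtime with a quantized model would do.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Load a quantized model from local disk; no API key, no network calls.
llm = Llama(model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf", n_ctx=4096)

reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize this note in one sentence."}]
)
print(reply["choices"][0]["message"]["content"])

# The chat history lives only in this process: nothing accumulates on a
# vendor's servers to be mined for personalized influence later.
```

The trade-off is capability for control: a local 8B model is weaker than a hosted frontier model, but the conversation never leaves the machine.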