Microsoft only lets you opt out of AI photo scanning 3x a year
Reaction to the 3‑Times‑Per‑Year Opt‑Out Limit
- Many see “you can only turn this off 3 times a year” as absurd and hostile, an engineered erosion of consent rather than a real choice.
- Several argue this feature should be opt‑in by default; making it opt‑out, and then limiting opt‑outs, is characterized as a dark pattern and “illusion of choice.”
- A recurring worry: Windows/OneDrive updates have historically reset privacy settings, so users may “burn” their three opt‑outs just undoing Microsoft’s own changes.
- Some say they personally would just turn it off once and never touch it, but others emphasize that the existence of a hard limit is the issue, not the common use case.
Privacy, Surveillance, and Data Use
- Strong concern that cloud photo face‑scanning builds a massive facial database that could be monetized, misused by advertisers, or handed to governments or law enforcement.
- People connect this to longstanding CSAM‑scanning systems and debate Apple’s abandoned on‑device CSAM proposal, false positives in perceptual hashing (a toy example follows this list), and inevitable “mission creep.”
- Many distrust Microsoft’s statements that photos won’t be used to train AI models, noting widespread secret training on “illegally acquired” content across the industry.
- There are edge‑case fears: compromised accounts being seeded with illegal content, or scanning photos of people who never consented and don’t even use Microsoft services.
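To make the false‑positive concern concrete, here is a minimal sketch of a classic perceptual hash (average hash). This is illustrative only; real systems such as PhotoDNA or Apple’s NeuralHash use different, proprietary algorithms, and the file names and the match threshold below are assumptions. The core idea is the same: an image is reduced to a tiny fingerprint, and two fingerprints within a small Hamming distance are treated as the same image, which is exactly where collisions between visually distinct photos come from.

```python
# Toy average-hash (aHash) sketch -- illustrative only; production
# CSAM/face pipelines use different, proprietary algorithms.
from PIL import Image  # pip install Pillow

def average_hash(path: str, size: int = 8) -> int:
    """Shrink to size x size grayscale, then set one bit per pixel
    depending on whether it is brighter than the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits  # 64-bit fingerprint when size=8

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

if __name__ == "__main__":
    # Hypothetical file names; two unrelated photos whose 64-bit
    # fingerprints differ in only a few bits would be flagged as a
    # "match" -- the source of false positives.
    h1 = average_hash("photo_a.jpg")
    h2 = average_hash("photo_b.jpg")
    print(f"distance = {hamming(h1, h2)} (match if <= 5, an assumed threshold)")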
Technical and Cost‑Based Explanations (Contested)
- A minority suggests the limit is mainly about compute cost: disabling presumably deletes the facial‑recognition index, so each re‑enable forces an expensive full rescan of the photo library.
- Critics reply that, if cost were the real reason, the limit should apply to enabling, not disabling, and should be clearly explained in the UI and PR responses.
- Others note there are more privacy‑respecting technical designs (e.g., encrypting indexes with user‑held keys, rate limiting, delayed batch jobs) that wouldn’t require such a crude toggle rule; a sketch of the rate‑limiting idea follows this list.
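As a sketch of the critics’ point, here is what a cost‑motivated policy might look like if the throttle sat where the cost actually is: disabling (and index deletion) is always allowed, while only the expensive re‑enable/rescan path is rate‑limited. Everything here is hypothetical, including the function names and the 30‑day cooldown; it is a structural illustration, not Microsoft’s actual design.

```python
# Hypothetical sketch: throttle the expensive path (re-enable -> full
# rescan), never the act of revoking consent. Names and the 30-day
# cooldown are assumptions for illustration.
from dataclasses import dataclass
from datetime import datetime, timedelta

RESCAN_COOLDOWN = timedelta(days=30)  # assumed policy knob

def delete_facial_index() -> None:
    print("facial index deleted")  # stand-in for a backend call

def schedule_batch_rescan() -> None:
    print("rescan queued as an off-peak batch job")  # stand-in

@dataclass
class FaceScanSetting:
    enabled: bool = True
    last_enabled_at: datetime | None = None

    def disable(self) -> None:
        """Opting out is always allowed and deletes the index."""
        self.enabled = False
        delete_facial_index()

    def enable(self, now: datetime) -> None:
        """Re-enabling triggers a costly rescan, so only it is throttled."""
        if self.last_enabled_at and now - self.last_enabled_at < RESCAN_COOLDOWN:
            raise PermissionError("Re-enable rate-limited; try again later.")
        self.enabled = True
        self.last_enabled_at = now
        schedule_batch_rescan()
```

The point of the sketch is purely structural: users can revoke consent immediately and unconditionally, while the provider absorbs rescan cost through a cooldown and off‑peak batching rather than a yearly cap on opting out.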
Microsoft’s Patterns, Trust, and PR
- Commenters cite a pattern: forced Microsoft accounts, aggressive OneDrive promotion, auto‑syncing of documents, ads in Windows, and AI features pushed by default.
- Anecdotes include regulated health data silently uploaded to OneDrive during updates, and settings repeatedly re‑enabled against user wishes.
- Microsoft’s refusal to directly answer why the 3‑toggle rule exists is taken as highly suspicious; PR responses are seen as evasive and emblematic of modern “non‑accountable” corporate communication.
- Several believe this behavior is likely incompatible with GDPR and expect EU regulators and courts to eventually intervene.
Alternatives and User Migration
- Many say this incident reinforces their move to Linux desktops, self‑hosted storage (e.g., Samba, Nextcloud, Immich), or encrypted overlays on cloud drives (e.g., Cryptomator‑style tools; see the sketch after this list).
- There are calls to avoid Microsoft products broadly, including GitHub and OneDrive, though others note work and gaming still lock many into the Windows ecosystem.
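For the encrypted‑overlay approach, here is a minimal sketch of the underlying idea, assuming the `cryptography` package and hypothetical file paths: files are encrypted with a locally held key before they ever land in the synced folder, so the cloud provider stores only ciphertext it cannot face‑scan. Dedicated tools such as Cryptomator, gocryptfs, or rclone’s crypt remote do this far more robustly (encrypted file names, proper key management); this is just the core concept.

```python
# Minimal sketch of client-side encryption before cloud sync, using the
# `cryptography` package (pip install cryptography). Paths are
# hypothetical; real overlay tools also encrypt file names and handle
# key management properly.
from pathlib import Path
from cryptography.fernet import Fernet

KEY_FILE = Path.home() / ".photo_vault.key"  # key never leaves the machine

def load_or_create_key() -> bytes:
    if KEY_FILE.exists():
        return KEY_FILE.read_bytes()
    key = Fernet.generate_key()
    KEY_FILE.write_bytes(key)
    return key

def encrypt_into_synced_folder(src: Path, synced_dir: Path) -> None:
    """Write an encrypted copy into the folder the cloud client syncs."""
    f = Fernet(load_or_create_key())
    ciphertext = f.encrypt(src.read_bytes())
    (synced_dir / (src.name + ".enc")).write_bytes(ciphertext)

def decrypt_from_synced_folder(enc: Path, out_dir: Path) -> None:
    """Recover the plaintext locally; only the key holder can do this."""
    f = Fernet(load_or_create_key())
    plaintext = f.decrypt(enc.read_bytes())
    (out_dir / enc.stem).write_bytes(plaintext)  # strips the .enc suffix

if __name__ == "__main__":
    # Hypothetical paths: the "OneDrive/Vault" folder is whatever
    # directory the sync client watches.
    encrypt_into_synced_folder(Path("vacation.jpg"), Path("OneDrive/Vault"))
```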