Facebook is asking to use Meta AI on photos you haven’t yet shared
Privacy, Consent, and Default Surveillance
- Many see Meta’s request to scan camera rolls as “egregious,” especially given Facebook/Instagram are preinstalled and hard to remove on many Android phones.
- Concern that “consent” here is meaningless: the prompts rely on dark patterns, most people don’t understand the implications, and less tech‑savvy users are effectively exploited.
- Some frame this as part of a long‑running effort by Meta to break out of app sandboxes and access more device‑level data.
Lock‑In, Network Effects, and Inability to Quit
- Commenters stress that “just delete Facebook/WhatsApp” is not realistic for many: schools, sports clubs, businesses, and even banks often communicate only via Meta platforms.
- Parents describe being pressured by schools and activity providers to allow photos of their children on Facebook; refusing is socially costly and sometimes contractually hard.
- WhatsApp in particular is described as de facto mandatory infrastructure in many countries.
Mitigations and Technical Workarounds
- Android users discuss disabling or removing Meta bloatware via ADB or root, or escaping entirely via LineageOS, GrapheneOS, Librem, Fairphone, or Linux phones.
- Others rely on iOS’s granular photo permissions, separate user profiles, or using web versions only.
- Several note these paths are niche: most people lack the skills, time, or will.
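The ADB route mentioned above can be sketched roughly as follows. This is a hedged example, not a recipe from the thread: the exact Meta package names preinstalled on a given phone vary by manufacturer, so the names below (e.g. `com.facebook.katana`, `com.facebook.system`) are common examples you should verify with the list command first. `pm disable-user` and `pm uninstall --user 0` only remove the app for the current user; the APK stays in the system partition and may return after a factory reset or OTA update.

```shell
# With USB debugging enabled, list Facebook/Meta packages on the device
adb shell pm list packages | grep facebook

# Disable a package for the current user (reversible with `pm enable`)
adb shell pm disable-user --user 0 com.facebook.katana

# Or remove it for user 0 entirely (no root required; APK remains on /system)
adb shell pm uninstall --user 0 com.facebook.system
adb shell pm uninstall --user 0 com.facebook.appmanager
```

Reinstalling later is possible without a factory reset via `adb shell cmd package install-existing <package>` on recent Android versions.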
Children’s Images, Identity, and Future Risks
- Multiple parents refuse or try to refuse posting kids’ photos, partly to avoid lifelong training data trails and facial recognition databases.
- There are anecdotes of family conflict when grandparents ignore such wishes.
- Some speculate about future harms: undermined anonymity for would‑be undercover roles, pervasive facial recognition, and loss of control over one’s “likeness.”
Liability, Government Access, and Abuse Risks
- People highlight legal and ethical risks: scanning private rolls will inevitably include nudes, gore, and possibly CSAM; letting those into training sets is seen as a “liability nightmare.”
- Others fear government/intelligence access to such image corpora, or their use in policing and immigration contexts.
Is Meta Training on These Photos? (Contested)
- Several point out Meta says this test does not use camera‑roll photos to train AI models, only to generate user‑facing suggestions.
- Critics respond that the terms appear broad enough to allow training, that Meta refuses to rule out future use, and that trust is very low given the company’s history.
- One thread argues the article overstates what is proven; others say given past behavior, assuming the worst is rational.
Broader Critique of Meta and Social Media
- Many frame Meta as fundamentally extractive: users’ lives and relationships treated as raw material for engagement and ad targeting.
- Long subthreads reminisce about early Facebook as a simple social tool and contrast it with today’s algorithmic feeds, polarizing content, and mental‑health impacts.
- Some liken Facebook and Instagram to gambling or tobacco: addictive products requiring societal, not just individual, responses.
- There is also debate over whether AI is central to the harm, versus incentives, leadership, and corporate power.
Alternatives and Resignation
- Ideas range from new “no‑AI, chronological, private” social networks to using the fediverse, email, forums, or group chats.
- Skepticism is high that any alternative can overcome network effects and trust deficits.
- Several commenters describe deleting Facebook as life‑improving; others stay solely for Marketplace or niche hobby groups, seeing no true substitute.