Used Meta AI, now Instagram is using my face on ads targeted at me
What the feature actually is
- Meta’s “Imagine Me” / Meta AI feature generates images of users’ faces in various scenes.
- These AI images later appear in Instagram feeds with “only you can see this” labels and links back to Meta AI.
- Disagreement over framing:
  - One side: this is effectively an ad/promo for Meta AI using the user’s likeness.
  - Other side: it’s more like an integrated product feature or preview, similar to filters or stickers.
Consent, ToS, and control
- Many argue that meaningful consent is lacking: users think they’re generating a one-off image, not enrolling in an ongoing feed feature.
- Others counter that users have explicitly uploaded a face to an AI tool and accepted ToS granting broad reuse.
- EU users report opt‑out emails around “legitimate interests” for AI training and a form-based objection process.
- Meta support pages (linked in the thread) say the feature and setup photos can be turned off and deleted, though this nuance isn’t obvious in the UX.
Privacy, likeness, and “only you can see this”
- Some see no privacy problem if:
  - Images never leave Meta’s ecosystem.
  - Only the user sees their own tailored images.
- Others stress:
  - Using a person’s face in any persuasive context is a “personality rights” / autonomy issue, even if the audience is one person.
  - “Only you can see this” ignores employees/insiders and future misuse.
- Analogies raised: photo labs reusing client photos in posters, Snapchat selfie stickers, HBO/TV self‑promos.
Emotional and societal impact
- Many find it “creepy,” especially when unexpected images surface in public settings or touch on body-image and self-image concerns.
- Some share disturbing anecdotes of AI lookalikes of deceased loved ones appearing in ads, intensifying grief.
- Others find it “kinda cool” or harmless, viewing it as a more efficient way to personalize ads without extra data sharing.
Regulation, culture, and dystopian extrapolations
- Debate over US vs EU corporate ethics and the role of regulation (GDPR, AI rules); some praise EU caution, others call that naïve.
- Comparisons to Street View normalization, Minority Report‑style targeting, AR/VR hyper‑personalized billboards, and simulated friends/family in ads.
- Several foresee this moving into broader programmatic ad formats and deepfake/deceased‑relative scenarios, calling for stronger deepfake and likeness laws.