X Didn't Fix Grok's 'Undressing' Problem. It Just Makes People Pay for It
Scale and Nature of Harm
- Several commenters report Grok’s public reply feed was, at times, “almost entirely” non-consensual deepfakes: “undressed” images of women, sexualized and racist imagery, and apparent CSAM-style content.
- Harm is framed not just as “fantasy” but as reputational damage, targeted harassment, and making the “digital public square” unusable for women and other targets.
- A key point: intent is often humiliation and domination (e.g., posting explicit fakes directly under a woman’s professional post), described as part of “rape culture.”
Automation vs Traditional Tools (Photoshop, drawing, etc.)
- Many reject the “it’s just like Photoshop/pencils” analogy as disingenuous:
  - Automation drastically lowers skill/time barriers and enables harassment and CSAM at scale, on demand.
  - Photorealistic likenesses are seen as qualitatively more harmful than crude drawings.
- Counterpoint: some argue the real issue is user behavior, not the tool, and that the same laws (harassment, defamation, CP) should already apply regardless of medium.
Responsibility and Liability
- Strong argument that X/Grok is not neutral infrastructure:
  - Grok creates the images and posts them under its own account, often as replies to the victim’s posts; this is likened to a company running a CSAM/revenge-porn generator and distribution service.
  - Section 230 is seen as weak protection when the platform itself is the “speaker.”
- Others push back, analogizing to gun makers or curl/Photoshop: the user who prompts is culpable. Critics respond that here the “Mad Max mode” is designed and operated by the company itself.
Design and Moderation Choices on X/Grok
- Publishing generated images publicly (rather than via DM or under the prompter’s account) is called a “product design error” at best, deliberate at worst.
- Grok is described as intentionally less censored and as the dominant model for NSFW use; some note that mainstream models don’t publish explicit outputs from their corporate social accounts.
- Restricting the feature to paid/verified users is seen by some as KYC-style liability containment rather than a real safety measure.
Law, Enforcement, and Platform Rules
- Debate over terminology (CSAM vs CP) centers on whether synthetic child porn without a “real victim” is covered; some emphasize the law should (and often does) treat it as illegal regardless.
- Multiple analogies (guns, self-driving cars, printing press, photocopiers, nuclear weapons) are used to argue that scale and foreseeability matter in assigning liability.
- Some note X’s apparent violation of app store policies and question why Apple/Google haven’t removed it.
- Others highlight slow or captured regulators and partisan US institutions, expecting legal response to lag.
Cultural and Ethical Questions
- Some ask why people want to generate sexualized images of children and non-consensual porn at all, arguing for deeper cultural change alongside regulation.
- Others caution that harmful urges can’t be eliminated, only constrained through disincentives and enforcement.
- Punishing companies that deploy “turnkey harassment at scale” is proposed as one way to signal norms about consent.
Meta and Comparisons
- Multiple comments compare Grok unfavorably to ChatGPT/Gemini: those can be jailbroken into producing bikini-type images, but they don’t auto-publish replies on a social network, so they don’t create harassment by default.
- There is visible frustration about Hacker News flagging of X/Musk stories, with some alleging systemic bias in community/moderation behavior.