Unauthorized experiment on r/changemyview involving AI-generated comments
Overview of the Experiment
- Researchers deployed LLM-based bots in r/changemyview whose comments:
  - Invented detailed personal backstories (e.g., rape victim, trauma counselor, ethnic and political identities, malpractice victims).
  - Were tailored to individual users by scraping their Reddit histories to infer demographics and views.
- Comments were not labeled as AI; the subreddit explicitly bans AI-generated content.
Core Ethical Objections
- Deception and fabricated trauma are widely seen as “grotesquely unethical,” regardless of AI.
- Conducting human-subjects research without consent, compensation, or debriefing is framed as a classic ethics/IRB failure, even if formally approved.
- Profiling users from their histories is seen as a privacy breach.
- Harms cited: emotional impact, slander of groups, time wasted identifying/reporting bots, erosion of trust in forums.
- Many argue this undermines public trust in research in general and should not be publishable, even if the findings are interesting.
Defenses and “Necessary Evil” Arguments
- Some argue this manipulation already happens at scale (corporations, state actors, Cambridge Analytica–style operations), so having academics study it transparently is valuable.
- Others liken it to security research/responsible disclosure: demonstrating a concrete, weaponizable vulnerability forces platforms to invest in defenses.
- Counterpoint: similar insights could have been gained from analyzing existing AI content or in closed, consented experiments; creating new deceptive content added risk for little extra knowledge.
Scientific Validity Critiques
- Commenters question the study's rigor:
  - No control for whether interlocutors were human or bots.
  - Reliance on weak outcome metrics (e.g., Reddit's "delta" award).
  - No clear benefit from personalization over generic messages, despite the invasive profiling.
- Some view the data as too confounded to justify the ethical costs.
AI, Scale, and Identity
- One camp: AI is just another tool; the core wrong is lying. This would be equally unethical if done manually.
- Other camp: AI is central because it enables:
  - Massive, cheap, 24/7 persuasion campaigns.
  - Hyper-targeted identity mimicry that humans struggle to perform at scale.
- Strong focus on how personal narratives and “lived experience” drive persuasion; faking these with AI is called a “massive cheat.”
Platforms, Anonymity, and the Future of Discourse
- Many see this as proof Reddit and similar sites are already saturated with bots and shills.
- Split views on how to respond:
  - Harder authentication, fees, ID verification, or small invite-only communities.
  - Recognition that any such system can be subverted (paid humans, rented identities).
- Broader worry: public, anonymous political debate may become so flooded with synthetic content that trust collapses. Others note the internet has always had unverifiable identities; AI merely lowers the cost and exposes that fragility.