'It cannot provide nuance': UK experts warn AI therapy chatbots are not safe
Human vs AI Therapists and Trust
- Many comments highlight discomfort entrusting emotions to opaque AI systems whose creators “don’t really know how they work.”
- Counterpoint: humans are also opaque, biased, and profit-motivated; people may overestimate the trustworthiness of average human therapists.
- Still, some argue humans share lived experience and embodied perception (tone, body language, context) that current LLMs fundamentally lack.
Safety, Harm, and “Better Than Nothing?”
- Strong split: some say an “unsafe” option can be better than no help; others argue it can be much worse than nothing (e.g., reinforcing delusions, encouraging self-harm, or worsening eating disorders).
- Medical ethics framing: “do no harm” vs frustration that fear of causing harm sometimes blocks potentially helpful interventions.
- Several note that subtle context, individual differences, and indeterminate “safe/unsafe” boundaries make automation especially risky.
Evidence, Studies, and Research Integrity
- One commenter alleges a suppressed study in which human therapists performed only slightly better than a waitlist control, while the AI performed worse than doing nothing. Others question the study design and the choice of control.
- Broader claims that psychotherapy research has reproducibility and design issues; concern that AI-related negative results may be buried for financial reasons.
- Others mention newer work where specialized AI reportedly outperforms humans, but details (models, prompts, populations) are unclear.
Capitalism, Profit Motives, and Anthropomorphism
- Some see locally run LLMs as offering “non-transactional” support compared with $100–150/hour therapy.
- Critics respond that most widely used models are deeply shaped by corporate incentives and opaque tuning; they’re not outside capitalist dynamics.
- Widespread worry about people anthropomorphizing chatbots (as with earlier systems like Replika), misreading mimicry as genuine emotion or consciousness.
Use Cases: Tool, Supplement, or Therapist?
- Many suggest LLMs are best as:
  - “Responsive diaries” / rubber-ducking tools to organize thoughts.
  - Educational aids to learn terminology and prepare for real therapy.
  - Between-session support, not a primary clinician.
- Others report positive personal experiences using LLMs as de facto therapists, claiming they feel heard and gain insights.
- Skeptics emphasize sycophancy: LLMs tend to agree with users, may reinforce delusions (“you are the messiah,” “extreme dieting is good”), and lack stable boundaries.
Access, Cost, and Social Context
- Major driver: human therapy is expensive, scarce, and often waitlisted; many people have no supportive family or friends.
- Some argue AI will massively increase the total volume of “therapy-like” interactions and should be judged against no access at all, not against ideal human care.
- Others contend we’re trying to patch deep social and community failures with technology, which may worsen isolation.
Regulation, Liability, and Ethics
- Suggestions include: malpractice insurance for AI therapy providers, industry-wide ethical standards, and clear labeling (“statistical text generator,” not “intelligence”).
- Concern that average users can’t make truly informed choices about AI safety.
- Debate over banning vs tightly regulating AI therapy: bans are politically safer (harms caused by chatbots are visible, while suicides they might prevent are not), but they could block future net benefits.
Experts, Incentives, and Public Perception
- Some distrust warnings from professional therapists, seeing them as protecting their livelihoods.
- Others push back that reflexive anti-expert sentiment is corrosive; many therapists are not highly paid “profiteers” and may still be right about risks.
- A recurring theme: even human therapy quality varies widely; some claim current LLMs may already rival the large mass of mediocre practitioners, but this is contested.