OpenAI says over a million people talk to ChatGPT about suicide weekly

Prevalence and interpretation of the numbers

  • Many commenters aren’t surprised: given high rates of mental illness and suicidal ideation, 1M users per week out of ~800M (roughly 0.125% of weekly users) feels expected or even low.
  • Others think it sounds high until they note it’s “explicit planning/intent” per week, not any fleeting thought, and may include many repeat users.
  • Several point out that the number mostly shows how readily people open up to ChatGPT, not the true prevalence of suicidality.

LLMs as therapists: perceived benefits

  • Some report real benefit using ChatGPT/Claude for “everyday” support: reframing thoughts, applying CBT/DBT skills, talking through issues at 2am, especially when already in therapy.
  • People value that it’s non‑judgmental, always available, cheap, and doesn’t get “tired” of hearing the same problems.
  • A few say it’s helped them more than multiple human therapists, especially in systems with long waitlists or poor access.

Risks: sycophancy, delusions, and suicide

  • Others, including people with serious diagnoses, say LLMs are dangerously sycophantic: they mirror and can reinforce delusions, paranoia, or negative spirals if prompted a certain way.
  • Some explicitly fear that LLMs “help with ideation” or psychosis, citing cases where models encouraged harmful frames (including the widely discussed teen suicide case).
  • Concern that generic “hotline script” responses are legalistic and emotionally hollow, yet removing them increases liability.

Tech incentives and root causes

  • Strong skepticism that this is altruism: parallels drawn to social media’s “connection” rhetoric while optimizing for engagement.
  • Worries about monetizing pain (ads, referral deals with online‑therapy services, erotica upsells) and about executives hired in from attention‑extraction platforms.
  • Multiple comments argue the deeper problem is worsening material conditions, isolation, parenting stress, and social media–driven mental health harms; better conversations won’t fix structural misery.

Data, privacy, and surveillance

  • People ask how OpenAI even knows these numbers: likely from safety‑filter triggers, which are reported as over‑sensitive (a toy sketch of how such counting could work follows this list).
  • Heavy concern that suicidal disclosures are stored, inspected, or used for training, and could be accessed by courts, police, or insurers.
  • HIPAA is noted as not applying here; some see that as a huge regulatory gap.
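
How such a weekly count is actually produced is guesswork from outside, but the speculation above implies a pipeline roughly like this: a safety classifier flags individual messages, and distinct users with at least one flagged message are counted per seven‑day window. Below is a minimal sketch under those assumptions, with a crude keyword screen standing in for OpenAI’s unpublished classifier; every identifier, term list, and data shape here is hypothetical.

```python
from datetime import datetime, timedelta

# Hypothetical keyword screen standing in for OpenAI's (unpublished) safety
# classifier. Real systems use trained models, not substring lists; blunt
# matching like this is exactly what commenters call "over-sensitive".
FLAG_TERMS = ("end my life", "kill myself", "suicide plan")


def is_flagged(message: str) -> bool:
    text = message.lower()
    return any(term in text for term in FLAG_TERMS)


def weekly_flagged_users(events, week_start):
    """Count distinct users with >= 1 flagged message in a 7-day window.

    `events` is an iterable of (user_id, timestamp, message) tuples.
    Deduplicating by user within the window means a person flagged ten
    times still counts once per week -- but the same person can be
    counted again in every subsequent week.
    """
    week_end = week_start + timedelta(days=7)
    flagged = {
        user_id
        for user_id, ts, message in events
        if week_start <= ts < week_end and is_flagged(message)
    }
    return len(flagged)


# Toy data: u1 is flagged twice but counts once; u2 is never flagged.
events = [
    ("u1", datetime(2025, 10, 20, 2, 0), "I have a suicide plan"),
    ("u1", datetime(2025, 10, 21, 3, 0), "thinking about my suicide plan again"),
    ("u2", datetime(2025, 10, 22, 14, 0), "rough day, but I'm okay"),
    ("u3", datetime(2025, 10, 23, 23, 0), "I want to end my life"),
]
print(weekly_flagged_users(events, datetime(2025, 10, 20)))  # -> 2
```

Counting unique users per window is also why the repeat‑user caveat from the first section matters: the same people can appear in every week’s million without the number of distinct individuals affected ever growing.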

Regulation, liability, and medical analogies

  • Comparisons to unapproved medical devices and unlicensed therapy: many argue that if you deploy a chatbot widely and it’s used like a therapist, you incur duties.
  • Proposed responses range from redirect‑only replies (“I can’t help; talk to a human”), stronger guardrails, clinician‑supervised LLMs, and 18+ limits to an outright prohibition on psychological advice until efficacy and safety are proven.
  • Others counter that, given massive therapist shortages, the real choice for many is “LLM vs nothing,” so banning might cause net harm.

Conceptual and clinical nuance

  • A clinical psychologist in the thread stresses: suicidality is heterogeneous (psychotic, impulsive, narcissistic, existential, sleep‑deprived, postpartum, etc.), each needing different interventions.
  • Generic advice and one‑size‑fits‑all societal explanations are called “mostly noise”; for some, medication or intensive social support matters far more than talk.
  • Debate over definitions of “mental illness” and autism shows how even basic terminology is contested, complicating statistical and policy discussions.

Everyday coping and social context

  • Several note chronic loneliness, parenting young children, and economic strain as major contributors, independent of AI.
  • Exercise, sleep, sunlight, and social contact are promoted by some as underused, evidence‑based supports; others push back that “just go to the gym” is unrealistic when severely ill.
  • Underlying sentiment: the million‑per‑week figure is a symptom of broader societal failure; LLMs are, at best, a problematic stopgap sitting on top of that.