ChatGPT Health

Trust, Privacy, and Data Use

  • Many commenters say they would not trust OpenAI with health data; some extend that distrust to any EMR provider, assuming leaks are inevitable.
  • Others openly don’t care who holds their data if it leads to real benefits, arguing current systems underuse valuable health information.
  • A large subthread debates realistic harms: data leaking to brokers, then indirectly affecting hiring, insurance, loans, immigration, or targeting of marginalized groups.
  • Confusion and skepticism around OpenAI’s “purpose-built encryption” and “dedicated space” claims; people want specifics on isolation, tool-calling, and retention rather than marketing language.
  • Concern that ChatGPT Health sits outside HIPAA for consumers, even as it integrates with apps and U.S. providers via partners.
  • Memory and personalization are seen as powerful but dangerous: commenters cite examples of the model “deciding” a user had ADHD or mixing up identity attributes, and worry that wrong health attributes are being silently stored or shared (see the sketch after this list).
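
As a concrete illustration of what auditable memory could look like, here is a minimal sketch of a provenance-tagged health attribute store. This is hypothetical and does not reflect OpenAI's actual design; all names are placeholders. The point is that each attribute records whether the user stated it or the model inferred it, so "silent" inferences can be surfaced rather than quietly persisted.

    # Hypothetical sketch of provenance-tagged health memory.
    # Names are placeholders, not OpenAI's implementation.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class HealthAttribute:
        name: str                    # e.g. "ADHD"
        value: str                   # e.g. "suspected"
        source: str                  # "user_stated" or "model_inferred"
        confirmed_by_user: bool = False
        recorded_at: datetime = field(
            default_factory=lambda: datetime.now(timezone.utc))

    def unconfirmed_inferences(attrs):
        """The 'silently stored' items commenters worry about:
        model-inferred attributes the user never confirmed."""
        return [a for a in attrs
                if a.source == "model_inferred" and not a.confirmed_by_user]

    memory = [
        HealthAttribute("ADHD", "suspected", source="model_inferred"),
        HealthAttribute("penicillin allergy", "yes", source="user_stated",
                        confirmed_by_user=True),
    ]
    for a in unconfirmed_inferences(memory):
        print(f"unconfirmed inference: {a.name} = {a.value}")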

Legality, Liability, and Regulation

  • The “not for diagnosis or treatment” disclaimer is widely doubted; several expect class actions once harm cases accumulate.
  • Some argue existing law shields OpenAI if it markets this as an informational tool, not a provider, analogous to search/WebMD.
  • Others argue providers of such tools should carry malpractice-like liability, including potential jail or large fines for harmful advice.
  • Recent FDA moves to relax oversight of AI and wearables are cited as letting unregulated tools into clinical workflows, which worries some commenters.

Effectiveness: Successes vs Failures

  • Numerous anecdotes where LLMs helped more than doctors: catching missed diagnoses, guiding which specialist/tests to push for, interpreting lab results, optimizing meds or exercise, and sometimes avoiding unnecessary surgery.
  • Others recount harmful or silly suggestions (e.g., worsening an injury, unsafe ear oil advice, overconfident self-diagnosis of rare disease, teen overdose story), highlighting hallucinations and overtrust.
  • Consensus among cautious supporters: use LLMs for education, exploration, and preparing questions, but always confirm with clinicians and verify sources.

Impact on Doctors and the Health System

  • Many describe rushed, inattentive, or outdated care; misdiagnosis and “drink water, take painkillers” stories are pervasive, eroding trust in physicians.
  • Others push back, noting medicine’s complexity, time pressure, and systemic incentives (insurance, throughput) as key problems rather than individual incompetence.
  • Some expect admins and insurers to use AI to cut costs by substituting cheaper staff plus LLMs for physician time.
  • Others see the best value in combination: patients using AI to summarize and research; doctors using AI as a second-opinion engine and for complex reasoning over full records.

Self-Diagnosis, Access, and Inequality

  • For people facing long waits, high costs, or no insurance (notably in the U.S.), ChatGPT is seen as a critical stopgap and “sanity check.”
  • Commenters from countries with free, accessible healthcare worry this will pull people away from qualified local doctors toward a probabilistic, unregulated tool.
  • Debate persists over whether people are sufficiently critical: some insist most will overtrust AI; others argue misuse is inevitable but so is access, especially via local/open models (a minimal local-query sketch follows this list).
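
To make the "access via local/open models" point concrete, here is a minimal sketch of querying a locally hosted open model. It assumes an Ollama server on its default port with a model already pulled (e.g. "ollama pull llama3"); the runtime and model name are assumptions, not details from the thread. Nothing leaves the machine, which is the property this camp values.

    # Minimal sketch: asking a local open model for question-prep help.
    # Assumes Ollama is running locally with the "llama3" model pulled.
    import requests

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama3",
            "prompt": ("What questions should I ask my doctor "
                       "about persistent fatigue?"),
            "stream": False,  # single JSON object, not a token stream
        },
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["response"])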

Product Design, Ecosystem, and Risk Tolerance

  • Early launch rough edges (a 404 on the waitlist page) reinforce perceptions of “vibe-coded” speed over rigor.
  • Many note this directly competes with third-party “AI wrappers,” shrinking their moats. Others imagine the future as AI agents orchestrating many specialized tools and data sources behind one interface (see the dispatch sketch after this list).
  • Several stress that medicine is already probabilistic; the real question is whether AI shifts the error balance net positive, and how much risk society is willing to accept in exchange for broader, cheaper guidance.
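
As a schematic of the "AI agents orchestrating many specialized tools behind one interface" vision, here is a minimal tool-dispatch sketch. Every name in it is a hypothetical placeholder, not any shipping product's API: the model emits a tool name plus arguments, and one routing function hands the call to the matching specialized tool.

    # Schematic sketch of an agent routing model-chosen tool calls.
    # All tool names and functions are hypothetical placeholders.
    from typing import Callable

    def lab_reference_range(test: str) -> str:
        return f"reference range for {test}: (stub)"

    def summarize_record(patient_id: str) -> str:
        return f"summary of record {patient_id}: (stub)"

    TOOLS: dict[str, Callable[..., str]] = {
        "lab_reference_range": lab_reference_range,
        "summarize_record": summarize_record,
    }

    def dispatch(tool_call: dict) -> str:
        """Route {'tool': ..., 'args': {...}} to the matching tool,
        keeping many data sources behind one interface."""
        fn = TOOLS.get(tool_call["tool"])
        if fn is None:
            return "unknown tool: " + tool_call["tool"]
        return fn(**tool_call["args"])

    # e.g. the model decides it needs a lab reference range:
    print(dispatch({"tool": "lab_reference_range", "args": {"test": "TSH"}}))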