ChatGPT's terms disallow its use in providing legal and medical advice to others

What Actually Changed in the Policy

  • Many commenters argue this is mainly a terms-of-service / liability update, not a hard technical block.
  • Distinction emphasized:
    • Still allowed: individuals asking ChatGPT about their own health or legal situation.
    • Disallowed: using ChatGPT to provide advice that requires a professional license to others (e.g., “AI doctor/lawyer” products, or custom GPTs marketed as such).
  • Some users report recent refusals on medical questions; others see no behavioral change, leading to confusion over whether the model itself or only the written terms changed.
  • The article itself was later corrected to say that model behavior has not changed.

Anecdotes of Medical “Success” vs Limits

  • Multiple stories where ChatGPT surfaced rare or overlooked conditions (intestinal defects, congenital conditions, stroke risk), sometimes matching or beating doctors’ diagnostic lists.
  • Others stress these are anecdotes with heavy selection and hindsight bias: prompts are often written with the answer already known, and users may unconsciously steer the model toward it.
  • Several note that the primary value is helping patients understand terminology, tests, and options, and prepare better questions for clinicians.

Hallucinations, Sycophancy, and Self‑Diagnosis

  • Many examples of dangerously wrong advice in construction, electrical work, woodworking, and other trades, cited as warnings against relying on it for medical or legal matters.
  • Concern that LLMs eagerly confirm user biases, especially around mental health or rare diseases, and can be “coaxed” into any diagnosis with iterative prompting.
  • Comparison to WebMD: ChatGPT is more flexible and persuasive, which can amplify hypochondria and encourage bad decisions.

Liability, Licensing, and Professional Protection

  • Broad agreement this is driven by fear of lawsuits and medical‑device / unauthorized‑practice regulations, not “AI doom.”
  • Debate over whether licensing rules protect doctors and lawyers as a guild or legitimately shield the public.
  • Some foresee specialized, regulated “professional” AI products for clinicians and lawyers, with ordinary users pushed to weaker or more constrained tools.

Broader Concerns

  • Worry that restrictions will push people to less constrained (and possibly worse) models or jurisdictions.
  • Frustration that marketing oversells AI as near‑omniscient while the fine print and policies insist outputs are untrustworthy and not to be acted on.