Google removes AI health summaries
Scope of the Problem: Healthcare and “Disruption”
- Many argue healthcare does need radical change, but mainly in policy and structure, not Silicon Valley–style tech “disruption.”
- Critiques focus on:
  - Profit-seeking insurers and hospital executives.
  - Captured markets and corrupt regulation.
  - Adverse selection and the lack of universal coverage.
- Some see single-payer as the obvious solution, citing other developed nations; others counter that single-payer systems have serious inefficiencies and are partially subsidized by high US prices.
- There’s dispute over what really drives costs (see the worked example after this list):
  - One camp blames insurers and perverse incentives (e.g., profit caps that scale with total spend, so a fixed percentage margin yields more absolute profit as premiums grow).
  - Another blames restricted physician supply and high physician pay, plus scope-of-practice lobbying that limits cheaper providers.
- Several view the non-profit status of hospitals and insurers as having little effect on prices.
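A minimal worked sketch of the profit-cap incentive mentioned above, assuming the ACA medical-loss-ratio rule that insurers spend roughly 80–85% of premium revenue on care; the dollar figures are illustrative, not drawn from the thread:

```python
# Worked illustration of the "profit caps that scale with total spend" point.
# Assumes an MLR floor of 85%, capping overhead plus profit at 15% of
# premiums. Dollar figures are illustrative only.

def max_overhead(total_premiums: float, mlr: float = 0.85) -> float:
    """Dollars available for overhead and profit under an MLR floor."""
    return total_premiums * (1 - mlr)

# A fixed-percentage cap yields twice the dollars when total spend doubles,
# so it rewards premium growth rather than cost control.
print(f"${max_overhead(1e9):,.0f}")  # $150,000,000 on $1B in premiums
print(f"${max_overhead(2e9):,.0f}")  # $300,000,000 on $2B in premiums
```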
AI in Health Search: Errors, Confabulation, and Harm
- Multiple anecdotes of Google AI Overviews giving dangerously wrong or invented medical info (medications, conditions, health fads).
- People note the model confidently blends:
  - Authoritative sources (e.g., official wikis, WebMD) with
  - Forums, fan fiction, LARPs, and Reddit speculation.
- This produces surreal but plausible-seeming content (e.g., non-existent APIs, game mechanics, fictional demographics, made-up products).
- Several prefer “confabulation” over “hallucination” to emphasize confident, unintentional fabrication.
- Concern: users treat AI summaries as more authoritative than raw search results, even though they’re just remixing a polluted web (the naive pipeline sketched below illustrates why).
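To make the blending concrete, here is a hypothetical sketch of a naive “web + LLM” summarizer. Nothing here reflects Google’s actual pipeline; all names are invented for illustration:

```python
# Hypothetical sketch of a naive "web + LLM" summarizer, illustrating how a
# dosage answer can blend WebMD with Reddit speculation.

from dataclasses import dataclass

@dataclass
class Snippet:
    url: str
    text: str

def build_prompt(query: str, snippets: list[Snippet]) -> str:
    """Concatenate retrieved snippets into one undifferentiated context block.

    Note what is missing: no source-reliability weighting, no separation of
    medical references from forum chatter, and no instruction to refuse when
    sources conflict.
    """
    context = "\n\n".join(f"[{s.url}]\n{s.text}" for s in snippets)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

snippets = [
    Snippet("https://www.webmd.com/...", "Typical adult dose is X mg daily."),
    Snippet("https://reddit.com/r/...", "I take ten times that and feel great."),
]
print(build_prompt("What is a safe dose?", snippets))
```

A fluent model handed this prompt has no basis for preferring the first snippet over the second, which is exactly the confident blending commenters describe.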
Degradation of Google Search
- Many say AI Overviews have “wrecked” search: more wrong answers, more ads, more scrolling to reach real sites.
- Some still find LLM-style synthesis useful for discovering unknown literature or jargon, provided they verify everything afterward.
- There’s frustration that Google ships low-reliability health answers at all instead of detecting medical intent and backing off; a minimal gating sketch follows this list.
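A minimal sketch of the “detect medical intent and back off” idea. The keyword check is a toy stand-in for whatever trained intent classifier a production system would use; all names are hypothetical:

```python
# Gate the AI overview behind a medical-intent check: for health queries,
# show plain organic links instead of a synthesized answer.

MEDICAL_MARKERS = {
    "dosage", "dose", "mg", "symptom", "treatment", "side effects",
    "diagnosis", "medication", "prescription",
}

def looks_medical(query: str) -> bool:
    """Crude medical-intent check; deliberately errs toward flagging."""
    q = query.lower()
    return any(marker in q for marker in MEDICAL_MARKERS)

def render_results(query: str, ai_summary: str, organic_links: list[str]) -> str:
    """Suppress the synthesized overview for health queries; show links only."""
    if looks_medical(query):
        return "\n".join(organic_links)  # back off: no AI answer
    return ai_summary + "\n\n" + "\n".join(organic_links)

print(render_results(
    "ibuprofen dosage for children",
    ai_summary="(AI overview would render here)",
    organic_links=["https://medlineplus.gov/...", "https://www.nhs.uk/..."],
))
```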
Safety, Regulation, and Liability
- Commenters note that software making medical recommendations can qualify as “Software as a Medical Device” (SaMD), implying FDA oversight and liability that appear absent here.
- Suggestions include:
  - Bans or fines for unlicensed AI medical advice.
  - Holding companies liable until they can prove reliability.
- A strong distinction is drawn between:
  - Professionals using AI as a tool within institutional safeguards, and
  - Laypeople self-diagnosing and self-treating from AI output.
Contrast with OpenAI’s ChatGPT Health
- Some highlight the timing: Google retracts some health summaries while OpenAI launches a branded health assistant.
- Opinions split on whether this reflects different safety cultures or just different marketing for essentially similar “web + LLM” systems.