CEO of largest public hospital says he's ready to replace radiologists with AI

Diagnostic accuracy & risk tradeoffs

  • The quoted claim that an AI mammography system is wrong “3 in 10,000” for low‑risk women raises multiple questions: how that was measured, on what dataset, and compared to what human baseline.
  • Several ask specifically for human false‑negative rates and performance in high‑risk populations; one link suggests human false negatives around 10 in 10,000 in some contexts.
  • Commenters stress that false negatives in cancer are life‑threatening, but excessive false positives also cause harm (unnecessary biopsies/surgeries), so risk must be balanced.
  • Some fear marketing cherry‑picks simple cases; complex anatomy, multiple pathologies, and rare presentations may be where AI fails most.
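The tradeoff the thread is circling can be made concrete with basic screening arithmetic. The sketch below is purely illustrative: the function name and all input numbers are assumptions, not figures from the article, and real mammography statistics vary widely by population and protocol.

```python
def screening_outcomes(n_screened, prevalence, sensitivity, specificity):
    """Expected counts for a screening program (illustrative model only).

    sensitivity: fraction of true cancers the reader flags
    specificity: fraction of healthy patients the reader correctly clears
    """
    cancers = n_screened * prevalence
    healthy = n_screened - cancers
    false_negatives = cancers * (1 - sensitivity)   # missed cancers
    false_positives = healthy * (1 - specificity)   # unnecessary workups/biopsies
    true_positives = cancers - false_negatives
    # Positive predictive value: chance a flagged case is actually cancer
    ppv = true_positives / (true_positives + false_positives)
    return false_negatives, false_positives, ppv

# Hypothetical numbers: 10,000 screens, 0.5% prevalence,
# 90% sensitivity, 95% specificity
fn, fp, ppv = screening_outcomes(10_000, 0.005, 0.90, 0.95)
```

Even with these generous assumed rates, false positives dwarf true positives at screening prevalence, which is why commenters insist that a bare "wrong 3 in 10,000" figure is meaningless without knowing which error type it counts and against what baseline.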

Augment vs replace radiologists

  • Many advocate AI as a second reader or triage tool, not a full replacement: double reads (AI + human), “blind workflows” in which AI and human read independently and then reconcile discrepancies, and similar setups.
  • A practicing radiologist argues current AI cannot replace them, that radiology is more than pattern recognition, and full replacement would require AGI.
  • Others see a likely outcome where top radiologists, aided by AI, handle far more volume, pressuring the rest of the workforce.
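The "blind workflow" commenters describe can be sketched as a simple reconciliation rule. The names and the three-way outcome (`recall`/`routine`/`arbitration`) are assumptions for illustration, not a description of any hospital's actual protocol.

```python
from dataclasses import dataclass

@dataclass
class Read:
    reader: str        # e.g. "ai" or "human"
    suspicious: bool   # flagged for follow-up?

def reconcile(ai_read: Read, human_read: Read) -> str:
    """Blind double read: each reader works independently, then
    concordant reads are accepted and discordant reads escalate."""
    if ai_read.suspicious == human_read.suspicious:
        return "recall" if ai_read.suspicious else "routine"
    return "arbitration"  # disagreement goes to a third reader

decision = reconcile(Read("ai", True), Read("human", False))
```

The point of keeping the reads blind is that neither reader anchors on the other's conclusion; the AI then adds information rather than merely confirming (or being rubber-stamped by) the human.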

Legal, liability, and standard of care

  • Strong concern about who is sued when AI misses a diagnosis and no physician has signed off: the hospital, the CEO, or the vendor?
  • Some propose laws making everyone in the approval chain prima facie liable, including AI vendors.
  • Others note malpractice law follows “standard of care”: if AI becomes standard and a doctor ignores it, that can itself be malpractice.

Economic incentives and reimbursement

  • Commenters view the CEO’s remarks as primarily cost‑cutting and negotiating leverage against radiology groups, not patient‑centric.
  • Several predict insurers will eventually pay less for AI reads than for human interpretation, eroding hospital cost savings.
  • Malpractice insurance dynamics and potential insurer pushback against unsafe AI use are noted but seen as slow‑acting constraints.

Debate over evidence and AI performance

  • One commenter cites very low human detection rates for some subtle findings; others strongly challenge these numbers and demand sources.
  • This sparks a meta‑discussion: if you give precise statistics, you should provide evidence; unsourced bold claims are treated skeptically.

Broader implications: CEOs, HR, and access

  • Many argue AI could more easily replace CEOs or HR than radiologists, and suggest that if executives felt personally automatable, they might treat AI impacts on workers differently.
  • Some imagine AI‑run co‑ops or nonprofits with lower overhead.
  • In systems with multi‑year wait times, several would accept AI screening as an initial step despite risks, while others emphasize the danger of both false positives and negatives.