The A.I. Radiologist Will Not Be with You Soon

Current performance of imaging AI

  • Practicing radiologists and imaging entrepreneurs report that existing tools (mammography CAD, lung‑nodule and hemorrhage detection, vessel CAD, autocontouring) are generally unreliable, miss important findings, or mostly flag “obvious” cases a rested human would catch.
  • Narrow, task‑specific models (e.g., segmentation for radiation oncology) have improved significantly and can speed up workflows, but are far from full interpretation or autonomous diagnosis.
  • Many see AI today as a useful “first‑cut triage” or “smack the radiologist on the head” assistant, not a replacement.

Can AI see what humans can’t?

  • Radiologists highlight “satisfaction of search”/inattentional blindness: humans stop looking after finding one abnormality; AI can still scan the whole image and flag a second lesion.
  • Some commenters argue this means AI “sees” what humans don’t; radiologists counter that it’s not superhuman perception, just not stopping early.
  • Debate centers on studies showing AI can infer race from chest X‑rays: one side treats this as evidence that AI detects non‑obvious features; the other notes that radiologists are never trained on that task and that it doesn’t prove earlier or better pathology detection.
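The “satisfaction of search” point above can be sketched with a toy simulation. Everything here is an illustrative assumption, not clinical data: each positive case is modeled as having one lesion plus a second one with some probability, a “satisficing” reader stops after the first find, and an exhaustive scan checks everything. The gap in lesion‑level recall is exactly the not‑stopping‑early effect, not superhuman perception.

```python
import random

random.seed(0)

def simulate(n_cases=10_000, p_second_lesion=0.3):
    """Toy model of satisfaction of search (all parameters hypothetical).

    Each case has 1 lesion, plus a second lesion with probability
    p_second_lesion. The satisficing reader finds only the first
    lesion; the exhaustive scan finds every lesion.
    """
    human_found = 0
    exhaustive_found = 0
    total_lesions = 0
    for _ in range(n_cases):
        lesions = 1 + (random.random() < p_second_lesion)
        total_lesions += lesions
        human_found += 1              # stops after the first hit
        exhaustive_found += lesions   # keeps scanning the whole image
    return human_found / total_lesions, exhaustive_found / total_lesions

human_recall, ai_recall = simulate()
print(f"satisficing reader lesion recall: {human_recall:.2f}")
print(f"exhaustive scan lesion recall:    {ai_recall:.2f}")
```

With these assumed numbers the satisficing reader recovers roughly 1/(1 + p) of lesions (about 0.77 here) while the exhaustive scan recovers all of them; the point is the shape of the gap, not the specific values.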

Data, models, and technical barriers

  • Lack of massive, high‑quality, labeled imaging datasets is seen as a core blocker; building global cross‑hospital repositories is described as conceptually simple but operationally very hard.
  • Some think large, multimodal transformers trained specifically on radiology could be transformative; others note vision‑language models currently hallucinate badly and that scaling alone hasn’t produced a step change in practice.
  • There’s interest in AI’s ability to use full sensor dynamic range and consistent attention across the image, but no consensus that this has yet translated into superior clinical performance.

Liability, regulation, and gatekeeping

  • Multiple comments emphasize malpractice liability: as long as someone must be sued, systems will require a human clinician “on the hook.”
  • US licensing, board control (e.g., residency slots), and credentialing prevent offshoring reads to cheaper foreign radiologists and would similarly constrain purely automated reading.
  • Some see professional bodies and payment structures as artificially constraining supply; others say residents are net drains and programs aren’t obvious profit centers.

Jobs, productivity, and demand

  • Radiologists report a national shortage and huge backlogs; expectation is that any productivity gains will increase throughput and reduce delays, not create idle radiologists.
  • One side argues that if AI does 80% of the work, long‑term fewer humans will be needed; the counterargument is that latent demand (and “Jevons paradox”–style effects) will absorb efficiency gains.
  • Several radiologists claim their work requires general intelligence (integrating history, talking to clinicians/patients, reasoning through novel findings), so they believe that if AI can truly replace them, it can replace almost everyone.
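The “fewer radiologists” vs. “Jevons paradox” disagreement above is really an argument about demand elasticity, which a few lines of arithmetic make concrete. All figures below are hypothetical placeholders, not actual workforce data:

```python
def radiologists_needed(reads_demanded, reads_per_radiologist):
    """Head count required to clear a given read volume (illustrative)."""
    return reads_demanded / reads_per_radiologist

baseline_demand = 100_000   # hypothetical annual reads in a system
baseline_rate = 10_000      # assumed reads per radiologist per year

baseline = radiologists_needed(baseline_demand, baseline_rate)   # 10 today

# Naive view: AI handles 80% of the work, so each radiologist covers
# 5x the reads, while total demand stays fixed.
ai_rate = baseline_rate * 5
fixed_demand = radiologists_needed(baseline_demand, ai_rate)

# Jevons-style view: cheaper, faster reads clear backlogs and unlock
# latent demand (more preventive scans, fewer deferred problems).
# If demand also grows ~5x, head count is unchanged.
elastic_demand = radiologists_needed(baseline_demand * 5, ai_rate)

print(baseline, fixed_demand, elastic_demand)
```

Under these assumptions the same 5x productivity gain implies either an 80% smaller workforce (fixed demand) or no reduction at all (fully elastic demand); the reported shortage and backlogs are the argument that reality sits closer to the elastic end.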

Patient access, cost, and markets

  • Commenters note that imaging costs are dominated by equipment/technical fees, not the radiologist’s read; insurers already ration MRIs and other scans via step therapy.
  • Some expect cheaper AI‑assisted reading to expand access (more preventive scans, fewer deferred problems); others think US pricing and billing structures will simply add an “AI fee” without reducing totals.
  • Ideas like patient‑owned home scanners or “radiology shops” are dismissed as impractical due to equipment cost, radiation safety, and licensing.

Ethics, data privacy, and geography

  • HIPAA and consent are seen as major constraints on US‑based mass dataset building; some predict countries with centralized systems (e.g., NHS, China) will gain an edge by more freely training on population‑scale data.
  • Others push back that de‑identified data can be used, and that dire predictions about the US being left behind due to privacy rules are common but so far unfulfilled.

Broader AI narratives and analogies

  • Hinton’s 2016 prediction that we should stop training radiologists because AI would outperform them within five years is widely viewed as wrong; commenters generalize this to skepticism of doom forecasts from domain outsiders.
  • Analogies surface to self‑driving cars, chess engines, translators, coders using Copilot: in many fields, AI becomes a powerful tool, not an outright replacement, with cultural, legal, and economic factors often dominating pure technical capability.