Geoffrey Hinton said machine learning would outperform radiologists by now

AI capability, timelines, and the “last 10–20%”

  • Many argue people systematically mis-extrapolate AI progress: the first ~80–90% comes fast, but the remaining edge cases are much harder or effectively impossible.
  • Self‑driving is cited as a paradigm example of being “5 years away” for 15+ years. Others counter that robotaxis in a few cities show the tech is ~95% there and now a scaling problem.
  • Historical parallels (OCR, translation) are invoked to say timelines are usually over-optimistic, especially for safety‑critical tasks.

Radiology automation: current status

  • Several comments note radiology is already using ML: screening, “first pass” reads, second opinions, and workflow tools, especially in systems like the UK’s NHS.
  • Teleradiology and outsourced clinical labwork (often to lower‑cost countries) are well‑established; these organizations are seen as likely early adopters of radiology agents.
  • A specialized model (Harrison.rad.1) reportedly matches or slightly exceeds typical human scores on a major radiology exam, prompting interest but also questions about real‑world robustness.

Regulation, power, and adoption

  • Healthcare is described as heavily regulated, politically sensitive, and dominated by high‑status professional groups.
  • Several argue this is unlike taxis vs. ride‑hailing: “move fast and break things” is not viable; law, liability, and lobbying will slow full automation.
  • Others expect faster adoption in countries with cost pressure and fewer entrenched interests (e.g., India, Mexico, China).

Medical labor markets and shortages

  • There is broad agreement on physician shortages across specialties; debate focuses on causes: training bottlenecks, high education costs, and alleged “doctor cartel” behavior limiting supply.
  • Outsourcing radiology is seen as both a cost move and a way to stretch scarce specialists; AI is expected to continue this trend by letting a few radiologists supervise far more cases.

Jobs, careers, and economic impact

  • Views split between “AI will mostly augment radiologists” and “eventual large displacement.” Most see augmentation (more technicians and AI‑assisted reads, fewer full specialists) as already underway.
  • Some warn that hype about imminent replacement can itself depress career entry, worsening shortages even if the tech under-delivers.
  • Broader comments generalize this to programmers and other knowledge workers: even if AI doesn’t fully replace them, fear and misprediction can have long‑term labor‑market effects.

Safety, responsibility, and ethics

  • One side argues that once models are even slightly better diagnosticians than humans, rapid deployment becomes a moral imperative; delaying would mean preventable deaths.
  • Others counter that AI errors (e.g., hallucinations) in medicine will kill people and likely trigger strong regulatory or legal backlash.
  • A reported teen suicide following chatbot interactions raises concerns about vulnerable users and where responsibility lies: with the tool itself or with the surrounding environment (an analogy is drawn to debates over gun access).

Beyond radiology: scope and limits of AI

  • Some predict that big general models plus huge data will dominate many verticals, pushing smaller AI startups out. Others emphasize the “long tail” of highly specialized industrial and scientific tasks that require domain‑specific data and models.
  • Debate extends to mathematics: some see automation and AI‑assisted theorem proving as on a transformative 20‑year path; skeptics counter that current LLMs still fail at basic arithmetic and that classical proof tools, not ML, do the real work.

Framing AI as replacement vs. augmentation

  • Several call for shifting rhetoric from “replacing X” to “enhancing X,” warning that over‑promising full automation mainly fuels cost‑cutting, underinvestment in people, and eventual backlash when tech falls short.