Experimental surgery performed by AI-driven surgical robot

Safety, Predictability, and “Hallucinations”

  • Several comments express fear about using transformer/LLM-style systems for surgery, seeing them as too fuzzy and unpredictable for a domain that demands reliability.
  • Others counter that the real world isn’t perfectly reproducible and systems must handle unexpected situations; robustness to weird failures is the goal.
  • People worry about what an “AI hallucination” would mean in an operating room (catastrophic, irreversible errors), with some dark satire imagining chatty post‑mortem logs and apologies.

Architecture: LLMs, Transformers, and Tokens

  • Debate over whether this is really “ChatGPT-like.”
  • Clarification: the showcased system (“Surgical Robot Transformer”) uses transformers and tokenization, but its tokens are video/sensor patches and action sequences, not Internet text (a minimal sketch follows this list).
  • A similar point is made about autonomous driving: modern systems like Waymo also use transformer-based, tokenized models for state tracking.
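
As a rough illustration of the tokenization point above, here is a minimal sketch of a transformer policy whose tokens are image patches, a kinematic reading, and action queries rather than text. All class names, dimensions, and interfaces are assumptions for the example, not the showcased system's actual code.

```python
# Illustrative sketch (not the paper's code): a transformer policy whose tokens
# are image patches and kinematic readings rather than Internet text.
# All names and dimensions below are assumptions for the example.
import torch
import torch.nn as nn


class PatchActionTransformer(nn.Module):
    def __init__(self, img_size=224, patch=16, embed_dim=256,
                 kin_dim=14, action_dim=7, horizon=8, depth=4, heads=8):
        super().__init__()
        n_patches = (img_size // patch) ** 2
        # Image patches -> tokens (a simple ViT-style patch embedding).
        self.patchify = nn.Conv2d(3, embed_dim, kernel_size=patch, stride=patch)
        # Kinematic/sensor readings (e.g. joint angles) -> one token per frame.
        self.kin_embed = nn.Linear(kin_dim, embed_dim)
        # Learned query tokens, one per future action step in the chunk.
        self.action_queries = nn.Parameter(torch.zeros(horizon, embed_dim))
        self.pos = nn.Parameter(torch.zeros(n_patches + 1 + horizon, embed_dim))
        layer = nn.TransformerEncoderLayer(embed_dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        # Decode each action query token into a continuous action vector.
        self.action_head = nn.Linear(embed_dim, action_dim)

    def forward(self, image, kinematics):
        b = image.shape[0]
        img_tokens = self.patchify(image).flatten(2).transpose(1, 2)  # (B, N, D)
        kin_token = self.kin_embed(kinematics).unsqueeze(1)           # (B, 1, D)
        queries = self.action_queries.unsqueeze(0).expand(b, -1, -1)  # (B, H, D)
        tokens = torch.cat([img_tokens, kin_token, queries], dim=1) + self.pos
        out = self.encoder(tokens)
        # Only the trailing "query" positions are read out as actions.
        return self.action_head(out[:, -queries.shape[1]:])           # (B, H, A)


if __name__ == "__main__":
    policy = PatchActionTransformer()
    actions = policy(torch.randn(2, 3, 224, 224), torch.randn(2, 14))
    print(actions.shape)  # torch.Size([2, 8, 7])
```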

Training, Edge Cases, and Cascading Complications

  • The model combines a high-level “language policy,” which issues task-level or corrective instructions, with a low-level controller that turns those instructions into trajectories.
  • Training includes standard demonstrations plus deliberately induced failure states and human corrections, so the system learns recovery behaviors (see the sketch after this list).
  • Concerns remain about rare “corner case” surgeries and complex cascades of complications; the expectation is that human surgeons will supervise and intervene, at least for a long time to come.
  • Access to rich kinematic data from existing surgical robots is described as a bottleneck; video is available but motion data is reportedly withheld.
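
Below is a hedged sketch of how a high-level instruction policy, a low-level trajectory controller, and failure-plus-correction data might fit together. The interfaces and names are hypothetical and purely illustrative; the published system's actual design may differ.

```python
# Illustrative sketch (hypothetical interfaces, not the published system):
# a high-level "language policy" chooses a task or corrective instruction,
# and a low-level controller turns that instruction into a short trajectory.
from dataclasses import dataclass


@dataclass
class Observation:
    gripper_slipped: bool      # stand-in for a detected failure state
    phase: str                 # e.g. "grasp_needle", "pull_suture"


def high_level_policy(obs: Observation) -> str:
    """Emit a corrective instruction when a failure is detected,
    otherwise the nominal task instruction for the current phase."""
    if obs.gripper_slipped:
        return "regrasp the needle closer to the tip"
    return f"continue: {obs.phase}"


def low_level_controller(instruction: str, obs: Observation) -> list[list[float]]:
    """Map an instruction to a short end-effector trajectory (placeholder).
    A real controller would be a learned model conditioned on the instruction."""
    return [[0.0, 0.0, 0.001 * step] for step in range(10)]


def build_training_pairs(demos, induced_failures):
    """Combine nominal demonstrations with deliberately induced failure states
    and the human corrections that recovered from them, so recovery is learned."""
    pairs = [(obs, instr) for obs, instr in demos]
    pairs += [(obs, correction) for obs, correction in induced_failures]
    return pairs


if __name__ == "__main__":
    obs = Observation(gripper_slipped=True, phase="grasp_needle")
    instr = high_level_policy(obs)
    traj = low_level_controller(instr, obs)
    data = build_training_pairs(
        demos=[(obs, "continue: grasp_needle")],
        induced_failures=[(obs, instr)],
    )
    print(instr, len(traj), len(data))
```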

Comparisons to Existing Tech and Adoption Path

  • Many see this as an extension of existing robot-assisted surgery (da Vinci, Mako, etc.), which is currently teleoperated, not autonomous.
  • Discussion compares acceptance to Waymo, LASIK, and Invisalign: gradual, data-driven, often starting in tech‑friendly populations.
  • Some argue that fully unsupervised robotic surgeons will face much higher acceptance hurdles than assistive systems.

Ethics, Law, and Accountability

  • Questions raised about legal status, medical licensing, and who is liable when things go wrong: surgeon, hospital, manufacturer, or AI developer.
  • One comment cites recent FDA guidance mandating “human-in-the-loop” oversight and explicit attribution of decision responsibility.
  • There’s a broader worry that complex AI/tech stacks diffuse responsibility, analogous to large industrial accidents.

Socioeconomic and Value Questions

  • Debate over whether robots will be for the rich (more precise, expensive care) or for the poor (cheaper, less human time).
  • Some welcome robots to alleviate surgeon scarcity; others emphasize preserving human experts and using robots as tools, not replacements.
  • Satirical takes imagine optimizing surgery for insurer revenue and “subscription” health, highlighting distrust of profit-driven objectives.