Study shows 'alarming' level of trust in AI for life and death decisions

Accountability and Liability

  • Many see AI as a new way to diffuse or defer responsibility (“just following the AI”), akin to hiding behind shareholder obligations or chain-of-command orders.
  • Debate over who is liable when AI causes harm: individual operator, institution (e.g., hospital), or tech vendor. Expectation that courts and lawsuits will set precedents.
  • Concern that AI tools are marketed as labor replacements rather than as decision support for trained professionals, raising the risk of unexamined, black-box decisions.
  • Historical examples (e.g., faulty IT systems, credit scoring, British Post Office scandal) show institutions often side with “the computer is right” even when it’s wrong.

Study Design and Interpretation

  • Several commenters call the drone-strike study “flawed” or “silly”:
    • Subjects were undergrads in a simulation with no real stakes.
    • “AI advice” was actually random; participants were told the AI was fallible but not that it was useless.
    • No control condition in which the same random advice was labeled as coming from a “human expert,” making it hard to attribute the overtrust specifically to AI.
  • Others defend the study as a valid demonstration of overtrust in automated advice, while criticizing sensationalist headlines.

Trust in AI vs Experts and Authorities

  • Some argue findings mostly show people treat AI like any authoritative second opinion. If they think it works, of course it influences them.
  • Others stress the dangerous assumption that “AI works,” especially amid hype and aggressive deployment.
  • Branding (“artificial intelligence” vs “decision-bot”) and conversational interfaces encourage anthropomorphism and misplaced trust.

High-Stakes Use Cases Already Here

  • Commenters note AI is already involved in life-or-death contexts: drone targeting, surveillance, policing, credit systems, aircraft automation, and medically oriented chatbots or clinical note-generation.
  • Worry that institutions will use AI to short-circuit safeguards in crises or for cost-cutting.

Automation Bias and Human Psychology

  • Automation bias, the tendency to overweight automated outputs and ignore conflicting evidence, is cited as a well-documented phenomenon.
  • Some argue we should deliberately cultivate distrust of automation, especially for edge cases and exceptions.
  • Others counter that machines are often more reliable than humans, so the real problem is designing systems and incentives that preserve human responsibility.

Ethics of Remote Killing and Delegation

  • Strong moral discomfort with drone warfare itself, especially “video game”-like killing at a distance and the temptation to blame the machine.
  • Counter-arguments frame remote, low-risk killing as strategic inevitability, not uniquely unethical compared to artillery or airstrikes.
  • Several note the deeper issue may be how easily people agree to kill strangers on thin information, regardless of AI.

Everyday and Benign Uses

  • Some share positive experiences using AI for developer tooling and documentation lookups, while others warn that trusting its output can be as risky as pasting in unvetted code snippets, or worse.
  • Reports from educators and families suggest many non-experts now default to trusting AI answers, including for health advice, sometimes reinforcing confirmation bias.