"A computer can never be held accountable"
Human vs. Computer Accountability
- Central tension: a computer can’t be punished, deterred, or morally blamed, so accountability must attach to the people and organizations that design, deploy, and rely on it.
- Several commenters stress the original 1979 slide’s qualifier that a computer “must never make a management decision”: computers may assist, but humans must own policy and high‑level choices.
- Others argue responsibility can’t be laundered through tools any more than through hammers or checklists.
AI in High‑Stakes Contexts (War, Cars, Insurance, Healthcare)
- Military examples (drone targeting, “computer says shoot”) raise fears of diffuse responsibility: many actors in the software/command chain but no clear individual answerable for civilian deaths.
- In insurance and healthcare, automated summaries and scoring systems can drive denials that harm or kill; the humans relying on them may hide behind “the algorithm malfunctioned.”
- Self‑driving cars shift liability from driver to manufacturer, prompting debate over whether firms will accept that risk or seek legal shields.
- Commenters note existing practice: organizations often pay fines or settlements while leadership and engineers avoid serious personal consequences.
Chains of Responsibility and Corporate Shields
- Discussion of how accountability dissolves in large systems: corporations, bureaucracies, and “the system” can be blamed while specific decision‑makers escape.
- Some see this as a deliberate design: using algorithms, consultants, or procedures as buffers (“computer says no”) to avoid personal culpability.
- Others emphasize that law already allocates liability (e.g., product defects, bridge collapses, emissions cheating), but is inconsistently enforced, especially for powerful actors.
What Accountability Is For
- Competing views:
  - Preventive/deterrent: making people fear consequences so they think harder before delegating to unsafe systems.
  - Reparative/systemic: priority should be fixing harm and improving systems, not hunting individuals.
- Philosophical clarification: accountability as being required to “give an account” (explain inputs, thresholds, decisions), not just punishment. Many current systems, especially black‑box AI, cannot do this.
Regulation, Governance, and Proposed Fixes
- Suggestions include:
  - Clear legal rules that whoever deploys AI (up to the C‑suite) is fully liable for its decisions.
  - Banning or tightly regulating opaque, high‑risk automated decision systems (citing EU‑style approaches).
  - Requirements for human appeals, audit logs, and explainable criteria.
- Skeptics doubt enforcement: powerful interests, carve‑outs (especially for militaries and law enforcement), and political fragmentation may render such rules toothless.
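One way to make the audit‑log and explainability requirements concrete is a decision record that captures the exact inputs, the criterion actually applied, the outcome, a named human owner, and an appeal path. A minimal sketch, assuming a single score‑against‑threshold rule; all names (`DecisionRecord`, `decide_claim`, `THRESHOLD`) and values are illustrative, not drawn from any commenter’s proposal:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

THRESHOLD = 0.7  # illustrative approval cutoff; the "explainable criterion"

@dataclass
class DecisionRecord:
    inputs: dict        # the exact inputs the system saw
    score: float        # model or rule output
    threshold: float    # the criterion actually applied
    decision: str       # outcome ("approve" or "deny")
    responsible: str    # a named human owner, not "the algorithm"
    timestamp: str      # when the decision was made (UTC)
    appealable: bool = True  # a human appeal path is required

def decide_claim(inputs: dict, score: float, owner: str) -> DecisionRecord:
    """Apply the threshold and emit a record that can 'give an account'."""
    return DecisionRecord(
        inputs=inputs,
        score=score,
        threshold=THRESHOLD,
        decision="approve" if score >= THRESHOLD else "deny",
        responsible=owner,
        timestamp=datetime.now(timezone.utc).isoformat(),
        appealable=True,
    )

record = decide_claim({"claim_id": "C-123", "amount": 950}, 0.62,
                      "jane.doe (claims lead)")
print(json.dumps(asdict(record), indent=2))  # the audit-log entry
```

The point of the structure is that every field a regulator or appellant would ask about (what was seen, what rule fired, who is answerable) is recorded at decision time rather than reconstructed afterward.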
Should Computers Make Management Decisions?
- Most participants endorse the original norm: computers as advisors or tools, not final decision‑makers.
- A minority argues for letting AI make management decisions to escape human politics and finger‑pointing, provoking pushback about bias, control, and the opacity of “superintelligent” reasoning.