Proposed amendment to legal presumption about the reliability of computers
Background: UK Post Office / Horizon scandal
- Fujitsu’s Horizon system for UK Post Offices produced phantom shortfalls in branch accounts, leading to hundreds of wrongful prosecutions and convictions, financial ruin for thousands of sub‑postmasters, and several suicides over roughly 15 years.
- Bugs were numerous and fundamental (transaction handling, distributed‑system consistency, lack of a proper double‑entry ledger/accounting design, foreign‑exchange mishandling).
- Management and Post Office prosecutors knew of bugs and remote “backdoor” interventions, yet maintained the system was robust, hid evidence, and continued prosecutions.
- Scandal is framed as both a software failure and, more importantly, a political/legal/ethical cover‑up and abuse of power.
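One of the cited design failures was the absence of a sound ledger. A minimal sketch of the standard double‑entry discipline (hypothetical code, not Horizon’s actual design) shows why it matters: every transaction must balance to zero, so any “shortfall” is always traceable to specific postings rather than simply asserted by the system.

```python
from dataclasses import dataclass, field
from decimal import Decimal

@dataclass(frozen=True)
class Posting:
    account: str
    amount: Decimal  # positive = debit, negative = credit

@dataclass
class Ledger:
    postings: list = field(default_factory=list)

    def post(self, txn_id: str, postings: list[Posting]) -> None:
        # Double-entry invariant: debits and credits must sum to zero,
        # so money can never silently appear or vanish.
        if sum(p.amount for p in postings) != Decimal("0"):
            raise ValueError(f"unbalanced transaction {txn_id}")
        self.postings.extend((txn_id, p) for p in postings)

    def balance(self, account: str) -> Decimal:
        # A balance is a pure function of the recorded postings,
        # so it can always be independently recomputed and audited.
        return sum((p.amount for _, p in self.postings if p.account == account),
                   Decimal("0"))

ledger = Ledger()
ledger.post("t1", [Posting("cash", Decimal("100.00")),
                   Posting("stamps_sold", Decimal("-100.00"))])
print(ledger.balance("cash"))  # 100.00
```

Under this invariant, an unexplained branch deficit would have to correspond to concrete, inspectable entries rather than an opaque running total.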
Legal presumption that computers are reliable
- UK law moved from presuming mechanical instruments correct, to requiring proof that a computer was operating properly (PACE 1984, s.69), then back to a presumption of correctness in 1999 after a Law Commission review of hearsay evidence.
- Several argue this effectively shifted burden of proof onto defendants, clashing with “innocent until proven guilty.”
- Others note the intent was to avoid endless challenges (speed cameras, tax assessments, tickets) and that courts can still question computer evidence, but in the Horizon cases they largely failed to do so.
- Proposed amendment is seen as an improvement but criticized as too weak if prior government “certification” still creates a strong presumption.
Responsibility: engineers vs management vs justice system
- One view: primary blame lies with management, executives, and prosecutors who ignored reports, suppressed evidence, and lied; software bugs are inevitable.
- Counter‑view: management and engineers share responsibility; basic safeguards such as idempotent financial transactions were missing, and some technical witnesses allegedly misled the courts.
- Many emphasize this was ultimately a justice‑system failure: courts and prosecutors treated computer output as near‑infallible evidence.
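The missing property singled out above, idempotency, is easy to state concretely. A hypothetical sketch (illustrative names, not Horizon code): tagging each transaction with a unique ID makes replaying the same message after a network retry a no‑op instead of a double‑charge.

```python
class Account:
    """Toy account that applies each transaction at most once."""

    def __init__(self) -> None:
        self.balance = 0
        self.seen: set[str] = set()  # IDs of transactions already applied

    def apply(self, txn_id: str, amount: int) -> int:
        # Idempotency: a duplicate delivery of the same txn_id changes nothing.
        if txn_id not in self.seen:
            self.seen.add(txn_id)
            self.balance += amount
        return self.balance

acct = Account()
acct.apply("txn-1", 50)
acct.apply("txn-1", 50)   # duplicate delivery (e.g. after a retry); ignored
print(acct.balance)       # 50
```

Without such a guard, the retries that are routine in distributed systems show up as spurious discrepancies of exactly the kind sub‑postmasters were blamed for.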
Regulation, “engineering” status, and liability
- A strong current of opinion favours treating critical software like civil/aerospace engineering: licensing, standards, personal/professional liability, and insurance for safety‑critical and financial systems.
- Others warn this could entrench incumbents, stifle innovation, shift blame onto individual coders, and be hard to design in a field lacking stable standards.
- Debate over regulating the “engineer” title, mandating certified components, and tiered accreditation for critical vs trivial systems.
Transparency, open source, and evidence
- Calls for: open‑sourcing publicly funded systems, stronger audit trails (full calculation steps, logs), mandatory disclosure of known bugs, and security/process documentation when software evidence is used in court.
- Some argue that without access to source or rigorous independent audits, any right to challenge software in court is hollow.
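One concrete form a stronger audit trail could take (an illustrative assumption, not a mandated design) is a hash‑chained append‑only log: each entry commits to its predecessor, so a retroactive edit, such as a remote “correction” to a branch balance, breaks the chain and is detectable on verification.

```python
import hashlib
import json

def append(log: list, event: dict) -> None:
    # Each entry's hash covers both the event and the previous entry's hash,
    # chaining the log together.
    prev = log[-1]["hash"] if log else "genesis"
    body = json.dumps({"prev": prev, "event": event}, sort_keys=True)
    log.append({"prev": prev, "event": event,
                "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(log: list) -> bool:
    # Recompute every hash from scratch; any silent edit to history
    # invalidates that entry and everything after it.
    prev = "genesis"
    for entry in log:
        body = json.dumps({"prev": prev, "event": entry["event"]}, sort_keys=True)
        if entry["prev"] != prev or entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True

log: list = []
append(log, {"branch": "X1", "op": "sale", "amount": "9.99"})
append(log, {"branch": "X1", "op": "correction", "amount": "-9.99"})
print(verify(log))                    # True
log[0]["event"]["amount"] = "999.99"  # tamper with history after the fact
print(verify(log))                    # False
```

A log like this does not prove the software computed the right answer, but it does make undisclosed interventions, of the remote‑access kind alleged in Horizon, evidentially visible.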
AI and future risks
- Several draw parallels to generative AI: fear that courts or institutions might presume AI outputs reliable, despite non‑determinism and hallucinations.
- Widespread agreement that presuming correctness of opaque, complex systems is dangerous, especially as they gain legal or administrative power.