Accountability sinks
Justice, morality, and divided roles
- Several comments argue that institutions (courts, firing squads, corporations) are explicitly designed to spread moral culpability so individuals can “sleep at night.”
- Some suggest a more “direct” system: the person who sentences should execute, meat‑eaters should be able to kill animals, jurors should perform punishments. Others respond this would attract sadists and destroy safeguards like jury trials and separation of powers.
- There’s tension between seeing role‑splitting as moral evasion and seeing it as protection against arbitrary power and bias.
Definitions: accountability, responsibility, blame
- A long subthread disputes the article’s definition of “accountability” as tied to the ability to change a decision.
- Project‑management people bring up RACI distinctions (responsible vs accountable vs consulted vs informed) and complain the article’s terminology is “mushy.”
- Others say the real issue is not labels but whether decision‑makers actually face consequences and hear the stories of those they affect.
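The RACI distinction the commenters invoke can be made concrete: a responsibility‑assignment matrix labels each role per task as Responsible, Accountable, Consulted, or Informed, and a standard RACI convention is that every task has exactly one Accountable party. A minimal sketch in Python, with hypothetical tasks and roles chosen purely for illustration:

```python
# Hypothetical RACI matrix: task -> {role: letter}.
# R = does the work, A = owns the outcome, C = consulted, I = informed.
RACI = {
    "design feature":  {"engineer": "R", "manager": "A", "legal": "C", "support": "I"},
    "approve release": {"engineer": "C", "manager": "A", "legal": "R", "support": "I"},
}

def tasks_violating_single_accountable(matrix):
    """Return tasks that break the one-Accountable-per-task rule,
    i.e. tasks with zero or multiple 'A' entries."""
    violations = []
    for task, roles in matrix.items():
        accountable = [r for r, letter in roles.items() if letter == "A"]
        if len(accountable) != 1:
            violations.append(task)
    return violations

print(tasks_violating_single_accountable(RACI))  # [] — every task has exactly one "A"
```

The check captures why PM commenters object to “mushy” terminology: in RACI terms, an accountability sink is a matrix where the “A” column is empty or smeared across so many parties that no single one owns the outcome.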
Corporations, government, and limited liability
- Many see large firms and states as archetypal accountability sinks: executives avoid meaningful consequences for deadly products, fraud, data breaches, or policy failures.
- Limited liability and complex org charts diffuse blame up chains; size makes lines of responsibility opaque.
- Some argue this is primarily a problem of scale rather than public vs private; others say government is uniquely bad, and still others that big companies are just as bad.
Automation, algorithms, and software‑mediated sinks
- Algorithms are framed as new “accountability firewalls”: management sets abstract goals; software enforces them on workers and customers; harm is blamed on “the system.”
- Examples: automated hiring, airline and rail self‑service, state e‑filing portals that force users to falsify data, credit/LinkedIn flags no one can fix.
- Debate over whether algorithms themselves can be “accountable” (e.g., by deleting or fixing them) or whether accountability always belongs to humans.
Customer service, scale, and feedback loops
- Many anecdotes: airline seat changes with no recourse, delayed flights/rooms, German rail chaos, ISPs that only offer IVR and chatbots, ticket machines swallowing money, parcel services and banks with no reachable humans.
- These are seen as deliberate walls that prevent feedback from reaching decision‑makers and make redress costly.
- Some point to regulation (e.g., EU flight compensation, mandated human escalation in some jurisdictions) as a partial counterweight.
Blameless culture, incentives, and proposed remedies
- Commenters contrast “blameless postmortems” (no personal punishment but strong learning) with true unaccountability (no learning, no consequences).
- Others stress incentive alignment: many “bad” decisions make perfect sense locally (cost cutting, metrics), but ignore downstream harm.
- Suggested remedies: tighter feedback loops (“intrinsic responsibility”), skin‑in‑the‑game for executives, liability and clawbacks, legal rights to human escalation, small‑claims actions, and smaller or federated organizations.
- Some are pessimistic, predicting that mature societies and large organizations naturally converge toward minimal personal accountability.