Axon’s Draft One is designed to defy transparency
FOIA, Private Contractors, and Record Access
- Several comments note that while FOIA doesn’t apply directly to private companies, records “created or held” by contractors performing governmental functions can still be reached through the agency, subject to vague tests like whether the record “directly relates to the governmental function” and to numerous exemptions.
- Commenters expect Axon/OpenAI/Microsoft materials could be obtainable this way, but note that agencies often resist and requesters must fight hard.
- Past examples (e.g., outsourced NASA software) show that copyright and contracting structures can still be used to block disclosure.
Police Accountability and Existing Structural Problems
- Many argue the core problem isn’t AI but lack of accountability: qualified/sovereign immunity, union power, and political incentives shield officers and departments from consequences.
- Proposals include mandatory private liability insurance for officers, stronger external oversight (state AGs or independent bodies), and NTSB-style safety investigations into use-of-force incidents focused on systemic fixes rather than blame.
- Others warn that shifting liability onto individuals without fixing governance may just create scapegoats, not real reform.
Officer Responsibility vs. AI Authorship
- Some insist that once an officer signs a report, they must be fully responsible regardless of whether AI, dictation, or typing produced the words.
- Others highlight human factors: people routinely rubber‑stamp documents they haven’t read (the EULA analogy); over time, officers will come to trust the tool and review less, especially under workload pressure and knowing they rarely face consequences anyway.
AI-Shaped Narratives and Hidden Bias
- Strong concern that AI systems will standardize “court-optimized” language (e.g., phrases like “furtive movements”) that systematically expands probable cause and legitimizes searches and force.
- Because eyewitness memory is already unreliable and easily biased, having AI generate a narrative and then asking officers to confirm it is seen as “AI prompting the human,” entrenching bias and error.
- Commenters see Axon’s explicit decision not to store drafts or edit history, framed as avoiding “disclosure headaches,” as a deliberate move against transparency and auditability.
Bodycams, Evidence, and the Limits of Recording
- Some think AI reports matter less when bodycams capture everything and defense attorneys can rely on raw video.
- Others push back: cameras are often off, have limited fields of view and audio, and are subject to selective release; meanwhile, federal rules around mandatory recording are weakening.
- AI may hallucinate off‑camera details (gestures, intent, smells) that video can’t disprove, yet courts tend to treat written reports as presumptively truthful.
- A feared future pattern is AI summarizing footage and then footage being discarded, leaving only the AI‑shaped narrative.
Legal, Regulatory, and Litigation Angles
- The EU AI Act is cited as explicitly restricting high‑risk uses like this, in contrast with US permissiveness, though commenters note that EU member states are also eroding privacy through pushes for encryption backdoors.
- Some foresee creative defense strategies: forcing officers to admit AI authorship, then subpoenaing Axon/OpenAI staff and records when AI-generated language leads to wrongful searches or arrests.
Broader Concerns: AI as Bureaucratic, Not Sci‑Fi, Apocalypse
- Multiple comments frame this not as a “Terminator” scenario but as the banal AI dystopia: bureaucratic, opaque decision‑support tools amplifying existing injustices (over‑policing, racial bias, mass incarceration).
- There is debate over whether US policing is a “police state” or simply flawed, but broad agreement that embedding opaque AI into life‑and‑death systems without strong transparency, retention, and accountability mechanisms is dangerous.