Tesla seeks to guard crash data from public disclosure

Access to NHTSA Crash Data & Redactions

  • Many argue that if NHTSA has crash data, taxpayers and crash victims should see it, especially for systems operating on public roads.
  • Others contend that Tesla shouldn’t be singled out and that data should be comparable across all manufacturers.
  • Thread participants inspect the official CSV (a counting sketch follows this list) and find:
    • Tesla, BMW, Subaru, Honda, and others have many redacted or blank ADAS/ADS version fields.
    • Tesla appears to redact nearly all relevant ADAS fields (including narratives and system versions), making serious analysis difficult; some say this is materially worse than peers, others say many brands are similarly opaque.
  • There is consensus that current redaction levels, across multiple makers, significantly weaken the public’s ability to evaluate ADAS safety.
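
A minimal sketch of the redaction count described above, assuming a locally downloaded NHTSA Standing General Order incident CSV; the filename, the column names, and the redaction-marker text are assumptions that vary across data releases:

```python
import pandas as pd

# Assumed local copy of an NHTSA Standing General Order incident CSV.
# The filename and column names are assumptions -- check the data
# dictionary shipped with the release you download.
df = pd.read_csv("SGO-2021-01_Incident_Reports_ADAS.csv", dtype=str)

REDACTION_MARKER = "[REDACTED"               # assumed marker prefix
FIELDS = ["Narrative", "ADAS/ADS Version"]   # assumed column names

def hidden_share(col: pd.Series) -> float:
    """Share of reports where a field is blank or carries a redaction marker."""
    col = col.fillna("")
    return (col.str.startswith(REDACTION_MARKER) | (col.str.strip() == "")).mean()

# Per-manufacturer opacity: fraction of reports with each field hidden.
for field in FIELDS:
    rates = df.groupby("Make")[field].apply(hidden_share).sort_values(ascending=False)
    print(f"\n{field}: share of reports redacted or blank")
    print(rates.head(10).round(2))
```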

Reporting Thresholds, Under‑Reporting & EDR

  • NHTSA’s special crash reporting (the Standing General Order) covers only serious outcomes (fatalities, vulnerable road users, hospital transports, airbag deployment); a predicate sketch of this threshold follows this list.
  • NHTSA has criticized Tesla for telematics gaps and for treating many non‑airbag events as “non-crashes,” likely undercounting incidents.
  • Separately, UN and US EDR rules mostly capture physical vehicle behavior, not who (human vs ADS) controlled it. The contested data here goes beyond legal minimums, into proprietary logs Tesla chooses to keep.
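
For concreteness, the reporting threshold as a predicate, in a minimal sketch; the record fields below are illustrative stand-ins, not the order’s actual schema:

```python
from dataclasses import dataclass

@dataclass
class CrashRecord:
    # Illustrative fields, not the reporting order's actual schema.
    fatality: bool
    vulnerable_road_user: bool     # pedestrian, cyclist, etc.
    hospital_transport: bool
    airbag_deployed: bool
    system_engaged_recently: bool  # ADAS/ADS active around the crash

def reportable(c: CrashRecord) -> bool:
    """A crash enters the dataset only if the system was in use around
    the crash AND at least one severity trigger fired; everything below
    these thresholds never appears in the public data."""
    severe = (c.fatality or c.vulnerable_road_user
              or c.hospital_transport or c.airbag_deployed)
    return c.system_engaged_recently and severe

# A real fender-bender with no airbag deployment is invisible here --
# the under-counting concern raised in the thread:
print(reportable(CrashRecord(False, False, False, False, True)))  # False
```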

Safety, Supervision & Autonomy Levels

  • One camp claims camera-only systems are already much safer than humans, pointing to low absolute death counts; critics reply that with a tiny fleet share low absolute counts are expected, that the relevant metric is rate per mile, and that good independent data is missing (a toy normalization example follows this list).
  • Tesla’s own Autopilot stats are challenged as incomparable (highway vs mixed driving, supervised vs unsupervised).
  • Some cite crowdsourced “miles per disengagement” figures suggesting poor unsupervised performance compared with other AV projects.
  • Long subthread debates SAE Levels 2–4:
    • Level 2 is seen as demanding inhuman vigilance (“pretend to drive”).
    • Level 3’s handover window is viewed by some as inherently risky; others say sufficient seconds of guaranteed control can make it workable and close to Level 4.
    • Many argue anything marketed as “Full Self-Driving” should bear Level‑4‑like liability.
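
The absolute-versus-rate disagreement is a normalization question; a toy calculation (all figures hypothetical, chosen only to illustrate the argument) shows why a tiny fleet share can make absolute death counts misleading:

```python
# All numbers are hypothetical -- none are real Tesla or US fleet statistics.
adas_deaths, adas_miles   = 50, 2e9        # deaths and miles with system engaged
human_deaths, human_miles = 40_000, 3e12   # deaths and miles, all driving

adas_rate  = adas_deaths / adas_miles * 1e8    # deaths per 100M miles
human_rate = human_deaths / human_miles * 1e8

print(f"ADAS:  {adas_rate:.2f} deaths per 100M miles")   # 2.50
print(f"Human: {human_rate:.2f} deaths per 100M miles")  # 1.33
# 50 vs 40,000 looks like an 800x win in absolute terms, yet the per-mile
# rate in this toy example is ~2x worse -- which is why critics insist on
# exposure-normalized, independently verified data. Crowdsourced
# "miles per disengagement" is the same idea inverted: miles / interventions.
```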

Liability, Logging, and Corporate Motives

  • Strong concern that Tesla markets FSD as near‑autonomous while legally treating all failures as driver error, including edge cases where the system disengages just before impact (see the attribution-window sketch after this list).
  • Some call for third‑party code and log audits for safety‑critical systems, comparing the bar to aviation or even casino software.
  • Tesla’s legal stance invokes “competitive harm” if detailed crash logs are released; critics compare this unfavorably to disclosure norms in pharmaceutical trials and to Tesla’s earlier promises about open patents and advancing safety.
  • A few defend Tesla’s right to protect internal data and fear misinterpretation by hostile media, but others respond that data from public roads and public risk should not be treated as trade secrets.
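
The disengage-just-before-impact worry is essentially about the attribution window. NHTSA’s standing order counts a crash as system-involved if the ADAS/ADS was engaged within 30 seconds of impact; a minimal sketch of that rule, with a hypothetical log format:

```python
# Sketch of attribution-window logic. The 30-second window mirrors the
# NHTSA Standing General Order's rule; the log format here is hypothetical.

ATTRIBUTION_WINDOW_S = 30.0

def ads_involved(disengage_times: list[float], impact_time: float,
                 engaged_at_impact: bool) -> bool:
    """True if the system was engaged at impact, or disengaged within the
    attribution window before it. Under this rule a system that hands
    control back one second before the crash is still counted -- closing
    the 'disengage just before impact' loophole the thread worries about."""
    if engaged_at_impact:
        return True
    return any(impact_time - t <= ATTRIBUTION_WINDOW_S and t <= impact_time
               for t in disengage_times)

# A disengagement 1.2 s before impact still counts as system-involved:
print(ads_involved([100.0], impact_time=101.2, engaged_at_impact=False))  # True
```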

User Experiences & Brand Perception

  • Anecdotes diverge:
    • Some HW4/v12 owners say FSD now feels like a genuine safety aid on most trips.
    • Others describe poor object detection (e.g., confusing bins with children), frequent construction-zone failures, and reliance on human “babysitting,” which they find more stressful than simply driving themselves.
  • Subscription pricing for a “safety feature” is widely criticized on principle.
  • Several argue rivals (especially Chinese EVs and Waymo-style L4 systems) now exceed Tesla on quality or safety, with Tesla leaning heavily on stock-market hype and brand politics.