Tesla concealed fatal accidents to continue testing autonomous driving
Article & media quality
- Several commenters say the RTS/SRF piece is vague, “sensationalized,” and fails to clearly show how Tesla hid accidents or from whom.
- Others defend RTS/SRF as generally high‑quality public broadcasters, while noting that European public media vary widely in funding, bias, and independence.
- Some see this as another example of weak investigative standards in Musk/Tesla coverage; others say Tesla’s own behavior has earned skepticism.
Tesla safety record & data disputes
- Multiple references to a study claiming Tesla has the highest fatal crash rate among US brands; critics argue it’s flawed, under‑documented, or lobbyist‑backed.
- Counter‑arguments: even if imperfect, the study aligns with concerns that crash‑test ratings don’t reflect real‑world issues (driver distraction from UI, misleading autonomy marketing, door egress problems).
- Snopes and Tesla’s own mileage claims are cited against the study, but others note that this rebuttal rests largely on Tesla’s own word for fleet mileage (see the sensitivity sketch after this list).
- Some attribute high fatality rates to self‑selecting aggressive drivers; others reject this as an unfalsifiable excuse, especially given Tesla’s scale and high horsepower.
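Much of this dispute reduces to a single rate calculation: fatal crashes divided by miles driven, which is only as trustworthy as the mileage denominator. A minimal sketch (all numbers are hypothetical placeholders, not real Tesla or study figures) of how sensitive the rate is to the fleet-mileage assumption:

```python
# Hypothetical illustration: fatal-crash rate per billion vehicle miles.
# All inputs are made-up placeholders, not real Tesla or study figures.

def fatal_rate_per_billion_miles(fatal_crashes: int, fleet_miles: float) -> float:
    """Fatal crashes normalized per billion vehicle miles traveled."""
    return fatal_crashes / (fleet_miles / 1e9)

fatal_crashes = 100          # hypothetical count over the study window
claimed_miles = 20e9         # manufacturer-claimed fleet mileage (hypothetical)
low_estimate_miles = 10e9    # a skeptic's lower mileage estimate (hypothetical)

print(fatal_rate_per_billion_miles(fatal_crashes, claimed_miles))       # 5.0
print(fatal_rate_per_billion_miles(fatal_crashes, low_estimate_miles))  # 10.0

# Halving the mileage denominator doubles the apparent fatality rate --
# which is why "relies on Tesla's word" for miles matters to the debate.
```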
Insurance, liability, and litigation
- One side: if Teslas were significantly more dangerous, liability insurance premiums would already be higher, since insurers have strong incentives to price risk accurately (see the pricing sketch after this list).
- Other side: unfolding litigation, unclear Tesla data practices, and complex liability (vehicle vs driver vs software) may delay accurate pricing signals.
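The insurer-incentive argument rests on basic actuarial pricing: expected loss per policy is claim frequency times average severity, plus a loading for expenses and profit. A toy sketch (all numbers hypothetical) of how a higher observed crash frequency would feed directly into premiums, and why the counter-argument about delayed data weakens the signal:

```python
# Toy actuarial pricing: all numbers are hypothetical, for illustration only.

def pure_premium(claim_frequency: float, avg_severity: float) -> float:
    """Expected annual loss per policy = frequency x severity."""
    return claim_frequency * avg_severity

def gross_premium(pure: float, loading: float = 0.35) -> float:
    """Add an expense/profit loading on top of the expected loss."""
    return pure * (1 + loading)

baseline = pure_premium(claim_frequency=0.05, avg_severity=12_000)   # $600
riskier  = pure_premium(claim_frequency=0.065, avg_severity=12_000)  # $780

print(gross_premium(baseline))  # 810.0
print(gross_premium(riskier))   # 1053.0

# A 30% higher claim frequency shows up as a ~30% higher premium -- but
# only once insurers have enough credible claims history, which is the
# other side's point about delayed pricing signals.
```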
Autopilot vs FSD and crash handling
- Repeated confusion between “Autopilot” (basic lane‑keeping plus adaptive cruise) and “FSD (Supervised)” (which navigates routes and controls the car end to end).
- Some emphasize Tesla disengaging automation shortly before impact, which can both:
- Undermine safety by dumping control on an unprepared driver.
- Let Tesla claim the system “wasn’t active” at the moment of crash.
- Others note Tesla says it counts crashes where FSD was active within 5 seconds pre‑impact, and that AEB (automatic emergency braking) remains active after disengagement; how this works in practice is unclear (a sketch of the stated counting rule follows below).
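The counting rule Tesla describes is easy to state precisely: a crash counts against the automation system if it was engaged at any point within 5 seconds of impact, even if it disengaged just before. The sketch below is illustrative logic under that stated rule, not Tesla’s actual telemetry pipeline:

```python
# Hypothetical sketch of the attribution rule Tesla describes: a crash is
# counted against automation if the system was engaged within 5 seconds
# before impact, even if it disengaged just beforehand. Illustrative
# logic only -- not Tesla's actual crash-reporting code.

ATTRIBUTION_WINDOW_S = 5.0

def attributed_to_automation(impact_time: float,
                             last_disengage_time: float | None) -> bool:
    """True if automation was engaged within the pre-impact window.

    Assumes automation was engaged at some point during the drive;
    last_disengage_time is None if it was still engaged at impact.
    """
    if last_disengage_time is None:
        return True  # still engaged at the moment of the crash
    return (impact_time - last_disengage_time) <= ATTRIBUTION_WINDOW_S

# Disengaged 1.2 s before impact: still counted under the stated rule.
print(attributed_to_automation(impact_time=100.0, last_disengage_time=98.8))  # True
# Disengaged 30 s before impact: counted as a manual-driving crash.
print(attributed_to_automation(impact_time=100.0, last_disengage_time=70.0))  # False
```

Under this rule, the “disengages right before the crash” behavior would not by itself exclude a crash from the count; the open question in the thread is whether reporting in practice matches the stated rule.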
Human factors, misuse, and ethics
- Many argue SAE Level 2/3 systems inherently overtax human supervision; people zone out and over‑trust the car.
- Anecdotes cut both ways: FSD preventing accidents (e.g., holding at a green light because of a red‑light runner) and drivers misusing it (hands off the wheel, low attention).
- Broader debate over future AI driving: trolley‑problem ethics, whether AVs should ever sacrifice occupants, and whether current focus on such hypotheticals distracts from real engineering failures.