Tesla’s autonomous vehicles are crashing at a rate much higher than human drivers do

Sample size and statistical validity

  • Several commenters argue that 500,000 robotaxi miles with 9 crashes is too small a sample; a couple of outlier months could swing the rate wildly.
  • Others counter that 500,000 miles is roughly a lifetime of driving for one person, so it is enough to show that 9 crashes would be unlikely if performance were human-like.
  • Poisson / confidence-interval arguments are used both ways: critics say the uncertainty is huge; defenders say the article’s “3x” or “9x” framing overstates what can be inferred (a worked sketch follows this list).
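
A minimal sketch of the Poisson argument both sides invoke. The 9 crashes over ~500,000 miles come from the discussion; the human baseline of roughly one police-reported crash per 500,000 miles is an illustrative assumption, not a figure from the article:

```python
# Exact Poisson (Garwood) confidence interval for 9 crashes in 500,000 miles,
# compared against an ASSUMED human-like baseline rate.
from scipy import stats

crashes = 9
miles = 500_000
baseline_rate = 1 / 500_000            # assumption: human crashes per mile

expected = baseline_rate * miles       # ~1 crash expected if human-like

# Exact 95% CI on the observed count, converted to a rate ratio.
lo = stats.chi2.ppf(0.025, 2 * crashes) / 2
hi = stats.chi2.ppf(0.975, 2 * (crashes + 1)) / 2
print(f"point estimate: {crashes / expected:.1f}x the baseline")
print(f"95% CI: {lo / expected:.1f}x to {hi / expected:.1f}x")

# Chance of seeing 9+ crashes if the true rate really were human-like.
p = stats.poisson.sf(crashes - 1, expected)
print(f"P(>=9 crashes | human-like rate) = {p:.1e}")
```

Under these assumptions both camps have a point: the interval is wide (roughly 4x to 17x), yet even its low end sits well above 1x.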

Crash comparisons and definitions

  • Dispute over whether incidents being compared are “like for like”:
    • AV reports include very low-speed contact events that humans often never police‑report.
    • Human baselines include only police‑reported crashes, then are adjusted upward with rough estimates for unreported minor incidents (a back-of-envelope sketch follows this list).
  • Some note that only a subset of Tesla’s 9 crashes sounds clearly severe; others argue even “minor” hits (curbs, bollards) matter if they reflect sensor/perception failures.
  • City-only, low‑speed Austin usage is contrasted with national human‑driving stats that include many safer highway miles, likely making Tesla’s numbers look worse than a like-for-like urban comparison would.
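
A back-of-envelope sketch of why the baseline definition moves the headline ratio so much. The 3x underreporting multiplier is an assumed round number standing in for the “rough estimates” commenters mention:

```python
# Same 9 crashes, two baselines: police-reported only vs. adjusted upward
# for unreported minor contacts. The multiplier is an ASSUMPTION.
reported_rate = 1 / 500_000        # assumed police-reported human crash rate
underreport_multiplier = 3.0       # assumed: total crashes ~3x reported ones

tesla_rate = 9 / 500_000
print(f"vs police-reported baseline: {tesla_rate / reported_rate:.1f}x")
print(f"vs adjusted baseline:        {tesla_rate / (reported_rate * underreport_multiplier):.1f}x")
```

This is one way the same dataset can support both a “9x” and a “3x” headline.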

Safety drivers, interventions, and human factors

  • Because the vehicles are supervised, people want to know how many near‑misses were prevented by human/remote intervention; that data isn’t public (a toy counterfactual follows this list).
  • Some say the presence of monitors makes the observed crash rate especially damning; others note that automation with humans “on watch” is known to cause vigilance/complacency problems.
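
A toy counterfactual for the intervention question; the prevented-crash counts below are hypothetical placeholders, since Tesla has not published intervention data:

```python
# If safety monitors prevented some crashes, the unsupervised rate would be
# higher than the observed one. The 'prevented' values are HYPOTHETICAL.
miles = 500_000
for prevented in (0, 5, 20):
    rate = (9 + prevented) / miles * 1_000_000
    print(f"{prevented:>2} prevented -> ~{rate:.0f} crashes per million unsupervised miles")
```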

Transparency, burden of proof, and trust

  • Strong theme: Tesla withholds detailed crash and disengagement data, unlike other AV operators; many see this as a red flag.
  • One side says the article’s analysis is necessarily rough because Tesla is opaque; therefore the burden is on Tesla to release data if it wants public trust.
  • The opposing side criticizes drawing hard conclusions (“confirms 3x worse”) from partial, ambiguous data.

Electrek’s framing and perceived bias

  • Multiple commenters call the piece a “hit job” or “clickbait,” citing a long run of negative Tesla headlines.
  • Others respond that negative headlines may simply reflect deteriorating performance, overpromises, and a documented history of missed FSD timelines.

Broader debates: autonomy, safety, and Tesla’s strategy

  • Some argue any self‑driving system must be far safer than humans (not just comparable) to justify deployment; a rough sample-size sketch follows this list.
  • Others defend driver‑assist and FSD as valuable safety tools that reduce fatigue and errors, if used responsibly.
  • There is significant skepticism that Tesla can pivot from a troubled FSD/robotaxi effort to humanoid robots and justify its valuation.
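
A rough sense of the evidentiary bar the “must be far safer” camp is setting, using the rule of three (with zero crashes in m miles, the 95% upper bound on the crash rate is about 3/m) and the same illustrative human baseline assumed above:

```python
# How many crash-free miles are needed before a 95% upper bound falls
# below a given multiple of an ASSUMED human baseline rate.
baseline = 1 / 500_000                 # assumed human crashes per mile

for target in (1.0, 0.5, 0.1):         # match humans, 2x safer, 10x safer
    miles_needed = 3 / (baseline * target)
    print(f"to demonstrate <= {target:.1f}x baseline: ~{miles_needed:,.0f} crash-free miles")
```

Even matching the assumed baseline takes about three times the mileage driven so far; demonstrating “10x safer” takes an order of magnitude more.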