Tesla 'Robotaxi' adds 5 more crashes in Austin in a month – 4x worse than humans

Crash severity and what “4x worse” means

  • Many point out that most listed incidents are very low‑speed bumps (1–4 mph) or cases of the vehicle being hit while stationary, arguing these are more like parking scrapes than “crashes” and would rarely be reported at all if a human driver were involved.
  • Others counter that routinely backing into objects at any speed is still poor driving and can injure vulnerable people; dismissing them as trivial is unsafe framing.
  • Several note that with so few total events, any “4x worse than humans” claim has large statistical uncertainty.
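The small-numbers caveat can be made concrete with an exact (Garwood) Poisson confidence interval on the observed crash count. The sketch below is illustrative, not from the article: it assumes 5 observed crashes and uses only the standard library, finding the interval bounds by bisection on the Poisson CDF.

```python
import math

def poisson_cdf(k: int, lam: float) -> float:
    """P(X <= k) for X ~ Poisson(lam), summing pmf terms with a running product."""
    term = math.exp(-lam)
    total = term
    for i in range(1, k + 1):
        term *= lam / i
        total += term
    return total

def exact_poisson_ci(k: int, alpha: float = 0.05):
    """Garwood exact two-sided CI for a Poisson mean, found by bisection."""
    def solve(cdf_k: int, target: float, hi: float) -> float:
        lo = 0.0
        for _ in range(100):
            mid = (lo + hi) / 2
            # The CDF decreases in lam, so cdf > target means lam is still too small.
            if poisson_cdf(cdf_k, mid) > target:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    # Lower bound L solves P(X >= k; L) = alpha/2, i.e. CDF(k-1; L) = 1 - alpha/2.
    lower = 0.0 if k == 0 else solve(k - 1, 1 - alpha / 2, 3.0 * k + 10)
    # Upper bound U solves P(X <= k; U) = alpha/2.
    upper = solve(k, alpha / 2, 3.0 * k + 20)
    return lower, upper

# Illustrative: 5 crashes observed in the reporting window.
lo, hi = exact_poisson_ci(5)
print(f"95% CI on the crash count: [{lo:.2f}, {hi:.2f}]")  # ≈ [1.62, 11.67]
```

Dividing both bounds by the same mileage exposure gives a per‑mile rate whose upper and lower limits differ by roughly 7x, which is why any single “4x worse” point estimate built from a handful of events is fragile.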

Data definitions, transparency, and Tesla’s own numbers

  • A big thread focuses on mismatched definitions: NHTSA “crash” reports include any property‑damage contact, while Tesla’s own “minor collision” metric is based on much higher‑severity telemetry triggers.
  • Critics argue comparing those two produces a bogus “4x” ratio; a valid comparison would match severity thresholds, geography, and exposure, with confidence intervals.
  • Commenters highlight Tesla’s systematic redaction of crash narratives in NHTSA filings, unlike Waymo, Zoox, and others, which makes it impossible to assign fault or assess context.
  • Some contrast Robotaxi’s roughly 57k miles per “crash” with Tesla’s marketing claim of ~1.5M miles per minor collision for customer FSD, calling the 25–30x gap strong evidence that Tesla’s public safety stats are selectively framed.
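The 25–30x gap quoted above is a straightforward ratio of the two cited figures; a minimal check (both inputs approximate and as cited in the discussion — and, per the definitional point above, not measured at matching severity thresholds):

```python
# Ratio of Tesla's marketing figure for customer FSD to the NHTSA-derived
# Robotaxi figure. Both numbers are approximate, as cited in the thread.
fsd_miles_per_minor_collision = 1_500_000
robotaxi_miles_per_crash = 57_000

ratio = fsd_miles_per_minor_collision / robotaxi_miles_per_crash
print(f"~{ratio:.0f}x")  # ~26x, inside the quoted 25-30x range
```

The arithmetic holds, but because the numerator and denominator use different crash definitions, the ratio shows inconsistent framing rather than a like-for-like safety comparison.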

Comparison to humans and other AVs

  • Several say the data are too thin to reliably rate Robotaxi against the average human driver, especially since humans don’t report low‑speed bumps.
  • Others emphasize that even under professional supervision, Robotaxi appears clearly worse than human benchmarks and far worse than Waymo on similar NHTSA data, where Waymo also reports many low‑speed, no‑fault incidents.
  • There’s concern that Tesla’s weaker performance and higher‑profile missteps tarnish the reputation of the entire AV sector.

Sensors, design choices, and technical limits

  • Repeated criticism of Tesla’s camera‑only approach; many argue LIDAR+radar+camera is intrinsically more robust, and point to minor backing crashes that simple parking sensors likely would have avoided.
  • Some see Tesla as having boxed itself into a hardware corner: admitting cameras‑only is insufficient would effectively declare millions of cars “defective.”

Safety drivers and supervision model

  • Multiple comments stress that all Austin Robotaxi miles are supervised; safety drivers should catch most major accidents, so the observed incidents largely reflect low‑speed misses that are hard for a supervisor to anticipate.
  • There’s skepticism of supervision by a passive “emergency brake” operator; humans are known to be very poor at long‑term vigilance when not actively driving.

Regulation, liability, and business ethics

  • One side argues these are pilot programs and experimentation is expected; others reply that deploying unsafe systems on public roads without full transparency is ethically equivalent to large‑scale experimentation on bystanders.
  • Commenters link this to broader US norms of externalizing risk (pollution, etc.) and doubt current US institutions will meaningfully constrain Tesla, though they expect many large cities and some jurisdictions to resist deployments.

Media bias, Musk, and polarized reactions

  • Several accuse Electrek of anti‑Tesla spin and sensationalism; others respond that the underlying crash data are federally reported and the real issue is Tesla’s secrecy.
  • Many express fatigue with highly polarized “pro‑Elon vs anti‑Elon” discourse that makes nuanced safety analysis difficult.