Tesla reports another Robotaxi crash
Accident Rates and Statistical Debates
- Thread centers on claims that Tesla’s Robotaxis in Austin crash about once every 40k miles vs ~500k miles per reported human crash.
- Several commenters question the 500k figure, noting it likely only covers police‑reported crashes and excludes minor “hit a pole / garbage can” incidents, making the comparison imperfect.
- Others stress that, even with those caveats, 7–8 crashes across ~20 robotaxis in a short period is an alarmingly high rate, especially with professional supervisors present.
- Some argue the sample is too small for strong conclusions, and that confidence intervals and better stratification (e.g., urban vs highway) are missing; the sketch below shows how wide the interval is at this sample size.
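To make the small-sample caution concrete, here is a minimal sketch (mine, not from the thread) of an exact Poisson 95% confidence interval around a crash count of 8; the mileage figure is a hypothetical placeholder chosen so the point estimate matches the ~40k-miles-per-crash claim.

```python
# Minimal sketch: exact (Garwood) 95% CI for a Poisson-distributed crash count.
# `miles` is a hypothetical placeholder, not a figure from the thread.
from scipy.stats import chi2

crashes = 8          # roughly the count discussed in the thread
miles = 320_000      # hypothetical: yields ~1 crash per 40k miles

# Exact bounds on the underlying Poisson mean for an observed count k:
#   lower = chi2.ppf(0.025, 2k) / 2,  upper = chi2.ppf(0.975, 2(k + 1)) / 2
lo = chi2.ppf(0.025, 2 * crashes) / 2          # ~3.45
hi = chi2.ppf(0.975, 2 * (crashes + 1)) / 2    # ~15.76

print(f"point estimate: 1 crash per {miles / crashes:,.0f} miles")
print(f"95% CI: 1 per {miles / hi:,.0f} to 1 per {miles / lo:,.0f} miles")
```

Even before any stratification, the interval spans roughly one crash per 20k miles to one per 93k miles, which is why several commenters resist drawing strong conclusions from this sample.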
Sensors: Cameras vs Lidar/Radar
- Major debate over Tesla’s camera‑only approach vs multi‑sensor (lidar/radar) stacks.
- Critics say starting with fewer sensors is backwards: use everything, make it work, then simplify. They highlight cameras’ weaknesses in fog, rain, glare, and low light.
- Defenders argue simplifying the sensor stack reduces engineering complexity, avoids ambiguous multi‑sensor conflicts, and can speed development; they claim cameras are the most versatile sensor.
- Counterpoint: multi‑sensor fusion is a mature field and can yield strictly better perception despite added latency and complexity (see the toy fusion sketch below).
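As a toy illustration of that counterpoint (hypothetical, not any vendor's actual stack): inverse-variance weighting of two independent measurements, the one-dimensional static case of a Kalman update, always yields a fused variance no larger than the better sensor's, which is the sense in which fusion can be "strictly better."

```python
# Toy sketch: fuse two independent Gaussian range estimates of the same object.
# Sensor names and noise figures are hypothetical illustrations.

def fuse(z_cam: float, var_cam: float, z_lidar: float, var_lidar: float):
    """Inverse-variance-weighted fusion of two independent measurements."""
    w_cam, w_lidar = 1.0 / var_cam, 1.0 / var_lidar
    z = (w_cam * z_cam + w_lidar * z_lidar) / (w_cam + w_lidar)
    var = 1.0 / (w_cam + w_lidar)   # always <= min(var_cam, var_lidar)
    return z, var

# Example: camera degraded by fog (variance 4.0 m^2), lidar still tight (0.25 m^2).
z, var = fuse(z_cam=52.0, var_cam=4.0, z_lidar=50.3, var_lidar=0.25)
print(f"fused range = {z:.2f} m, variance = {var:.3f} m^2")  # ~50.40 m, ~0.235 m^2
```

The fused estimate automatically leans toward the lower-noise sensor, which is also where the "ambiguous multi-sensor conflicts" objection bites: the weighting is only as good as each sensor's noise model.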
Tesla vs Waymo Approaches and Performance
- Waymo is cited as having >100M driverless miles and large reductions in crash and injury rates vs humans; some users report frequent, reliable use.
- Others note Waymo still has issues (school bus recall, parade and crime‑scene blockages) and operates only in limited geofenced areas and benign climates, so “problem solved” is disputed.
- Disagreement over whether Waymo counts as “completely self‑driving” given its remote “fleet response” system that can propose paths or interventions.
- Comparisons in Austin suggest Tesla’s incident rate is roughly 2x Waymo’s per mile; one commenter claims ~20x if only Robotaxi miles are counted, and the dispute is not firmly resolved (the sketch below shows how sensitive the multiplier is to the choice of denominator).
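All numbers in the sketch below are hypothetical placeholders; the point is only to show why the multiplier can swing between ~2x and ~20x depending on which miles land in the denominator.

```python
# Illustration only: the Tesla-vs-Waymo ratio depends entirely on the denominator.
# Every figure here is made up for the example, not sourced from the thread.

def crashes_per_million_miles(crashes: int, miles: float) -> float:
    return crashes / (miles / 1_000_000)

waymo          = crashes_per_million_miles(crashes=100, miles=100_000_000)  # 1.0
tesla_broad    = crashes_per_million_miles(crashes=40,  miles=20_000_000)   # 2.0  -> ~2x
tesla_robotaxi = crashes_per_million_miles(crashes=8,   miles=400_000)      # 20.0 -> ~20x

print(f"Waymo: {waymo:.1f} | broad Tesla fleet: {tesla_broad:.1f} "
      f"| Robotaxi only: {tesla_robotaxi:.1f}")
```

Counting a large supervised fleet's miles dilutes the rate, while restricting to Robotaxi-only miles concentrates it, so both multipliers can be "correct" answers to different questions.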
Data Transparency and Trust
- Tesla’s heavy redaction of its NHTSA incident reports is widely criticized; it prevents detailed severity analysis and fuels suspicion that the company is hiding worse outcomes.
- Waymo’s more granular public data enables peer‑reviewed safety studies; similar analysis is impossible for Tesla, eroding trust in its safety claims.
Ethics, Regulation, and Public Risk
- Some see deploying high‑crash‑rate systems on public roads as prioritizing corporate interests over public safety, likening it to earlier auto safety scandals.
- Others view a limited, supervised rollout with only one hospitalization so far as a “not terrible” early phase of an iterative technology, but many insist that anything less safe than a human driver violates the core social contract for self‑driving cars.
Media Bias and Source Skepticism
- A subset of commenters argue the outlet behind the article is hostile to Tesla, cherry‑picks Tesla incidents, and uses misleading language (e.g., calling every contact a “crash”).
- Opponents reply that Tesla could easily dispel doubts by releasing unredacted data and that the bigger unsubstantiated claims are Tesla’s own FSD safety numbers, which lack independent audit.