US probes Tesla's Full Self-Driving software after fatal crash
Safety record and statistics
- Multiple commenters debate whether FSD is safer than human drivers.
- Pro-FSD voices cite billions of FSD miles, very few alleged fatalities, and Tesla’s own safety reports (millions of miles per crash on FSD versus a US average of roughly one crash per ~70k miles), arguing the system is already much safer than humans and still improving.
- Skeptics call Tesla’s numbers “apples to oranges”: most FSD/Autopilot miles are driven on safer highways in good weather, and only airbag‑deploying crashes are counted, excluding many lesser incidents; a toy example of this mileage-mix confounding follows the list.
- Government crash databases exist, but Tesla often requests heavy redaction, making independent analysis hard.
- Several note that the ongoing NHTSA probe exists specifically to detect systematic failures (e.g., in low visibility), not to compute global safety parity.
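
To make the confounding argument concrete, here is a minimal Python sketch. Every number in it is hypothetical, chosen only to expose the effect; none are Tesla’s or NHTSA’s real figures. It shows how a system that is worse than humans on every road type can still post a better aggregate miles-per-crash figure if most of its miles are easy highway miles.

```python
# Minimal sketch of the "apples to oranges" critique. All numbers are
# hypothetical, chosen only to expose the confounding effect; none are
# Tesla's or NHTSA's real figures.

def miles_per_crash(miles: float, crashes: float) -> float:
    return miles / crashes

# Hypothetical human baseline crash rates by road type (crashes per mile):
human_rate = {"highway": 1 / 1_000_000, "city": 1 / 200_000}

# A hypothetical assisted system that is WORSE than humans on both
# road types, but logs 90% of its miles on easy highway driving:
system_miles   = {"highway": 9_000_000, "city": 1_000_000}
system_crashes = {"highway": 12,        "city": 8}
# -> 750k miles/crash on highways (humans: 1M)
# -> 125k miles/crash in cities   (humans: 200k)

agg_system = miles_per_crash(sum(system_miles.values()),
                             sum(system_crashes.values()))

# Humans with a typical 50/50 highway/city exposure over the same mileage:
human_miles = {"highway": 5_000_000, "city": 5_000_000}
human_crashes = {road: human_miles[road] * human_rate[road]
                 for road in human_miles}
agg_human = miles_per_crash(sum(human_miles.values()),
                            sum(human_crashes.values()))

print(f"system aggregate: {agg_system:,.0f} miles/crash")  # 500,000
print(f"human aggregate:  {agg_human:,.0f} miles/crash")   # ~333,333
# The system "wins" in aggregate despite being worse on every road type:
# Simpson's paradox, driven purely by the mileage mix.
```

This is why several commenters insist on exposure-matched comparisons before accepting any aggregate safety claim.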
Technical approach and limitations
- Tesla’s vision‑only strategy is heavily criticized, especially its performance in sun glare, fog, dust, and on poorly marked roads; some see it as a dead end without lidar/radar.
- Supporters argue camera‑only is sufficient if neural nets and data scale, and note recent versions (12.x) are “night and day” better, with far fewer interventions.
- Others counter that each software update can introduce new, unseen failure modes, and that “works 99% of the time” is nowhere near acceptable for unsupervised driving (a back-of-envelope sketch of this objection follows the list).
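
As a rough illustration of that objection, the arithmetic below uses a hypothetical fleet size, annual mileage, and per-mile success rates (not measured FSD statistics) to show how even tiny per-mile failure probabilities multiply across fleet mileage.

```python
# Back-of-envelope sketch of the "99% is not enough" objection.
# Fleet size, mileage, and success rates are hypothetical placeholders,
# not measured FSD statistics.

fleet_vehicles = 1_000_000      # hypothetical unsupervised fleet
miles_per_year = 12_000         # rough annual mileage per vehicle
annual_miles = fleet_vehicles * miles_per_year   # 12 billion miles

for per_mile_success in (0.99, 0.9999, 0.999999):
    failures = annual_miles * (1 - per_mile_success)
    print(f"{per_mile_success:.4%} per-mile success -> "
          f"{failures:,.0f} uncaught failures/year")
# 99.0000% -> 120,000,000 failures/year
# 99.9900% ->   1,200,000 failures/year
# 99.9999% ->      12,000 failures/year
# With a supervising human, most of these become interventions;
# without one, they are incidents.
```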
User experiences
- Positive anecdotes: some owners say they drive almost entirely on FSD with only rare interventions, especially on long highway trips, and report reduced fatigue and stress.
- Negative anecdotes: others report frequent phantom braking, lane confusion, dangerous turns, aggressive acceleration, and near‑collisions; several disabled FSD after trials, calling it “too scary.”
- Many describe Tesla’s system as capable “driver assistance” on highways but unreliable in complex city driving and edge‑case conditions.
Marketing, naming, and alleged deception
- Strong criticism of the “Full Self‑Driving” branding while legally classifying it as Level 2 with required supervision.
- Commenters list a long history of missed autonomy timelines, staged demos, and shifting “hardware is enough” claims; some see this as fraud, not just optimism.
- The recent rename to “Full Self‑Driving (Supervised)” is widely mocked as an oxymoron.
Regulation, liability, and ethics
- The NHTSA investigation is broadly welcomed by skeptics, who see Tesla as beta‑testing on the public.
- Others argue regulators should compare FSD against real‑world human driving rather than demand perfection; some propose permitting it once it is demonstrably safer than the median human driver (a toy statistical sketch follows the list).
- A recurring point of agreement is that true autonomy should be allowed only when the manufacturer assumes full legal liability, as some limited systems (e.g., Mercedes L3, Waymo in geofenced areas) reportedly do.
- Concerns are raised about OTA software changes, lack of transparency, and the difficulty of certifying a constantly evolving system.
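
For what a “significantly safer” determination might look like in practice, here is a hedged sketch of an exact conditional test comparing two Poisson crash rates, using scipy’s binomtest. The crash counts and mileages are invented for illustration; nothing here is a proposed regulatory standard.

```python
# Hedged sketch of one way to operationalize "significantly safer than
# the median human": an exact conditional test comparing two Poisson
# crash rates. Crash counts and mileages are invented for illustration.
from scipy.stats import binomtest

def safer_p_value(crashes_sys: int, miles_sys: float,
                  crashes_hum: int, miles_hum: float) -> float:
    """Under equal per-mile crash rates, crashes_sys conditioned on the
    total is Binomial(total, miles_sys / (miles_sys + miles_hum))."""
    total = crashes_sys + crashes_hum
    p0 = miles_sys / (miles_sys + miles_hum)
    # alternative="less": is the system's share of crashes smaller than
    # its share of exposure, i.e., is it the safer party?
    return binomtest(crashes_sys, total, p0, alternative="less").pvalue

# Hypothetical: 40 crashes in 50M system miles vs 120 crashes in 60M
# exposure-matched human miles (matching the mix is the hard part,
# per the "apples to oranges" critique above).
p = safer_p_value(40, 50_000_000, 120, 60_000_000)
print(f"one-sided p-value that the system is safer: {p:.2e}")
```

Even a clean statistical win of this kind would only address aggregate rates, not the systematic failure modes (e.g., low visibility) that the NHTSA probe targets.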
Comparisons and broader themes
- Waymo is frequently cited as actually operating driverless robotaxis, relying on lidar/radar and detailed mapping; its approach is seen as safer but far more geographically limited.
- Many technologists object less to autonomy in principle than to Tesla’s culture, opacity, and “move fast and break things” attitude in a safety‑critical domain.