Tesla: Failure of the FSD's degradation detection system [pdf]

Degradation Detection and Crash Concerns

  • Central issue: NHTSA notes FSD often failed to detect camera-visibility degradation or only did so moments before crashes.
  • Commenters see this as especially serious because the human supervisor cannot see the system’s internal confidence.
  • Some share anecdotes of FSD confidently driving toward unseen obstacles or losing track of lead vehicles in degraded conditions.
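The "detected degradation only moments before crashes" complaint is essentially about alert thresholds. As a purely illustrative sketch (not Tesla's actual method, which is proprietary; the metric and thresholds here are invented), a visibility check could score each frame and warn the driver well before the system gives up:

```python
import numpy as np

def visibility_score(frame: np.ndarray) -> float:
    """Crude sharpness/contrast proxy: mean gradient magnitude of a
    grayscale frame. Fog, rain, or a dirty lens tends to flatten
    image gradients, lowering the score. Illustrative only -- real
    systems use learned per-camera degradation estimators."""
    gy, gx = np.gradient(frame.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

def check_degradation(frame: np.ndarray,
                      warn_at: float = 5.0,
                      disengage_at: float = 2.0) -> str:
    """Two-threshold policy (thresholds invented): warn the driver
    early rather than disengaging moments before trouble."""
    score = visibility_score(frame)
    if score < disengage_at:
        return "disengage"
    if score < warn_at:
        return "warn"
    return "ok"
```

The point of the two thresholds is the gap between them: the driver gets a "warn" band to take over calmly instead of a cliff-edge handoff.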

Behavior in Adverse Conditions

  • Several owners report FSD/AP shutting off entirely in heavy rain, fog, or snow, reverting to manual control.
  • Others note inconsistent alerting: warnings sometimes appear for dirt or sun glare on cameras, but none for fog.
  • One describes an automatic “clean camera” wiper-fluid routine with no explicit warning that vision is degraded.
  • Some say speed is auto-limited in low visibility; others say they rarely see automatic disabling.

Camera-Only Approach vs. LIDAR and Other Sensors

  • Strong recurring critique: forgoing LIDAR is framed as cost-cutting that sacrifices safety; many call it “shameful engineering.”
  • Supporters argue vision-only can work in principle, citing Tesla’s occupancy networks and improved HW4 performance.
  • Critics stress camera limitations vs. human eyes (dynamic range, low light, depth, glare) and lack of binocular, movable sensors.
  • “Wile E. Coyote attacks” (painted tunnel entrances, fake roads, puddle illusions) are raised as failure modes for camera-only.
  • Some ask why jurisdictions haven’t mandated or incentivized LIDAR-based systems.

FSD Capability, Safety, and “Supervised” Autonomy

  • Experiences vary: some say FSD handles ~97–99.9% of their driving and is often “better than me,” especially on newer hardware.
  • Others call “FSD (Supervised)” a scam: if constant human supervision is required, it isn’t truly self-driving.
  • Waymo is cited as handling 100% of driving within its operational design domain (ODD), highlighting the gap between “almost works” and a fully driverless service.
  • Concerns that FSD’s crash statistics are skewed by selection: it disengages (or is disabled) in exactly the bad conditions where crashes are most likely, so those miles are attributed to manual driving.
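The selection-bias worry in the last bullet can be made concrete with a toy simulation (every number below is invented for illustration; none reflects real Tesla data): if the system hands off during the riskiest miles, its per-mile crash rate looks good regardless of whether it is actually safer.

```python
import random

def simulate(miles: int = 1_000_000, seed: int = 0) -> dict:
    """Toy model: 10% of miles are in bad conditions, where per-mile
    crash risk is 10x higher. The automated system disengages in bad
    conditions, so those miles -- and their crashes -- are logged
    against the human driver. All probabilities are made up."""
    rng = random.Random(seed)
    c = {"auto_miles": 0, "auto_crashes": 0,
         "manual_miles": 0, "manual_crashes": 0}
    for _ in range(miles):
        bad = rng.random() < 0.10
        p_crash = 1e-4 if bad else 1e-5   # invented per-mile risks
        crashed = rng.random() < p_crash
        if bad:  # system hands off exactly when risk is highest
            c["manual_miles"] += 1
            c["manual_crashes"] += crashed
        else:
            c["auto_miles"] += 1
            c["auto_crashes"] += crashed
    return c
```

In this toy world the automated system's crash rate comes out far lower than the manual rate purely because it avoided the risky miles, not because it drove better in comparable conditions.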

AI Reasoning and Reliability

  • Broader debate on whether modern AI has robust logical/common-sense reasoning.
  • Some argue frontier models still fail basic physical reasoning and numerical/logical tasks, implying risk in edge driving cases.
  • Examples given: odd LLM failures, long-tail road events (e.g., animals or debris falling onto highways) that require novel reasoning.

Regulation, Reporting, and Recalls

  • NHTSA report described as “preliminary” and “vague”; some think discussion is premature.
  • Others counter that experts have warned about these issues for a decade; the report simply formalizes known risks.
  • Concern that Tesla’s internal data/labeling limitations may undercount FSD-related crashes.
  • Mention that Tesla is highly recall-prone relative to other automakers (per a linked article).

Product Positioning and User Experience

  • Disagreement on whether Tesla is still “premium”: many describe interiors and build quality as spartan or cheap for the price.
  • Some argue a “premium” or expensive product with FSD should include the most comprehensive sensors (e.g., LIDAR).
  • Complaints about missing features (e.g., CarPlay) and inconsistent communication about camera cleanliness and limitations.

Meta: Polarization and Discussion Quality

  • Multiple comments note Tesla/Elon topics quickly become polarized, with both heavy upvoting and flagging of negative stories.
  • Some lament that much of the thread rehashes entrenched pro/anti-Tesla views rather than engaging with new specifics from the report.