Tesla drives into Wile E. Coyote fake road wall in camera vs. Lidar test

Cameras vs. Lidar (and Radar)

  • Many argue camera-only + neural nets are inherently insufficient for safe autonomy; lidar (and at least proximity radar) is seen as an obvious, relatively cheap extra safety layer.
  • Others counter that vision-only must be possible in principle, since humans drive with vision alone and hardware/ML will keep improving; declaring it impossible is dismissed as absurd.
  • Critics respond that humans have richer perception (stereo, motion cues, vestibular, tactile feedback, theory-of-mind) and vastly more capable “neural hardware,” so extra sensors are a practical necessity, especially for safety-critical edge cases.
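The “cheap extra safety layer” argument above is essentially a claim about independent failure modes. A toy OR-fusion sketch (hypothetical numbers, not any vendor’s actual logic) shows why commenters treat a second modality as high-leverage:

```python
# Toy sketch of redundant sensing: brake if EITHER modality reports an
# obstacle. If the two sensors fail in roughly independent ways, the
# fused system's miss probability is the PRODUCT of the individual
# miss probabilities, not either one alone.

def should_brake(camera_sees_obstacle: bool, lidar_sees_obstacle: bool) -> bool:
    """OR-fusion: any single detection triggers emergency braking."""
    return camera_sees_obstacle or lidar_sees_obstacle

def fused_miss_probability(p_camera_miss: float, p_lidar_miss: float) -> float:
    """Chance both (assumed independent) sensors miss the same obstacle."""
    return p_camera_miss * p_lidar_miss

# Illustrative numbers only: a photorealistic painted wall fools the
# camera almost surely (miss ~1.0) but not a ranging sensor (miss ~0.001),
# so the fused miss probability stays near 0.001 rather than near 1.0.
```

The counterargument in the thread is that real sensor failures are not fully independent and that fusion adds disagreement-resolution complexity of its own; the sketch only captures the optimistic case.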

Human vs. Machine Performance

  • Several note that humans would likely slow in heavy rain/fog and in “weird” situations, whereas the Tesla in the video barrels through limited visibility.
  • Some think many humans might also hit a photorealistic fake-road wall; others insist driver-assist systems exist precisely to exceed human limitations in such scenarios.

Autopilot vs. FSD and Marketing

  • A big subthread disputes that the video “tests FSD”: the car was running basic Autopilot/AEB, not the latest FSD build on newer hardware. Tesla supporters call this a misrepresentation; critics respond that emergency braking should work regardless of paid software tier.
  • There’s lengthy debate about the names “Autopilot” and “Full Self Driving”:
    • One side: terminology mirrors aviation/nautical autopilots that still require human oversight.
    • Other side: to typical drivers, “Autopilot” and “Full Self Driving” reasonably imply autonomous capability and reduce vigilance; this is seen as intentionally confusing marketing.

Fairness and Design of the Test

  • Some see the Wile E. Coyote wall and extreme conditions as contrived, optimized to showcase lidar and generate clicks.
  • Others say poor visibility and visually deceptive obstacles are exactly where redundant sensing should shine.
  • Dispute over whether Autopilot was manually disengaged before impact; later raw footage suggests it auto-disengaged shortly before the crash. Exact behavior remains contentious/unclear.
  • Several wish the test had included multiple production vehicles (with radar-based AEB or production lidar cars like Volvo/Polestar) for a more balanced comparison.

Safety Data and Real-World Crashes

  • Commenters cite Tesla failures to detect white tractor-trailers against a bright sky, along with lawsuits and accident maps, as evidence that vision-only has serious blind spots.
  • Others point to studies showing AEB in general cuts crashes significantly and claim Teslas are statistically safer, while acknowledging Tesla’s own safety reports are marketing and methodologically debatable.

Lidar Adoption and Future Directions

  • Lidar-equipped consumer cars are still rare and often ship with sensors inactive or in data-collection mode; most mainstream systems use camera + radar.
  • Some expect improved depth-from-vision models may eventually reach “human parity,” but many argue that until then, adding lidar/radar is the prudent engineering tradeoff.
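As context for the “human parity” depth claim, textbook stereo triangulation (illustrative numbers, not tied to any specific vehicle) shows why vision-based ranging degrades quickly with distance:

```python
# Classical stereo depth: Z = f * B / d, for focal length f (pixels),
# camera baseline B (meters), and disparity d (pixels). The hyperbolic
# relationship means small disparity errors cause large depth errors
# at long range.

def stereo_depth_m(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point from its pixel disparity between two cameras."""
    if disparity_px <= 0:
        raise ValueError("non-positive disparity: point at infinity or a matching error")
    return focal_px * baseline_m / disparity_px

# Illustrative: f = 1000 px, B = 0.54 m, d = 5 px gives 108 m, but a
# single-pixel matching error (d = 4 or 6) swings the estimate to
# 135 m or 90 m -- a ~25% error band at highway braking distances.
```

Monocular neural depth models face the same range-sensitivity problem without even a physical baseline, which is the core of the argument that lidar/radar remains the prudent tradeoff for now.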