Tesla changes meaning of 'Full Self-Driving', gives up on promise of autonomy
Redefining “Full Self-Driving” and broken promises
- Many see adding “(Supervised)” to “Full Self-Driving” as an implicit admission Tesla won’t deliver unsupervised autonomy to existing buyers, after nearly a decade of “next year” claims.
- Others argue the change is mostly legal/PR framing: Tesla is now describing what the system does today without explicitly abandoning long‑term Level 4/5 ambitions.
- Several commenters point to early marketing (e.g., the demo-video caption that the person in the driver’s seat is “only there for legal reasons”) as clearly implying unsupervised operation, now walked back in practice.
Fraud, regulation, and refunds
- A large contingent calls this straightforward fraud or securities/false‑advertising abuse, noting stock gains and FSD sales driven by undelivered autonomy promises.
- Skepticism that US regulators (SEC/FTC, states) will act; some blame “late‑stage capitalism” and weak consumer protection, though others say agencies have probably pushed as far as they can.
- People who bought FSD years ago feel cheated; talk of class actions is tempered by Tesla’s arbitration clauses and spotty enforcement history.
- A minority insists early timelines were naïve rather than malicious, but acknowledges they were “irresponsible.”
Waymo, autonomy levels, and what counts as FSD
- Repeated comparison: Waymo is geofenced Level 4 with remote assistance, Tesla is still Level 2. Debate whether L4 in limited cities “counts” as full self‑driving.
- Some argue “full” should mean “can drive nearly everywhere humans can”; others say transformative tech doesn’t need universal coverage (analogy to early cell phones and gas stations).
- There’s disagreement over how often remote assistance occurs and whether that undermines “full” autonomy.
Sensors: vision‑only vs lidar/radar
- Big fault line: critics say Tesla’s vision‑only bet was “short‑sighted” and rejected decades of sensor‑fusion research, and that it is now effectively being abandoned.
- Many engineers and practitioners in the thread argue lidar/radar + cameras are clearly superior for safety, redundancy, and latency; several cite Waymo and Chinese systems as evidence.
- Defenders counter that:
  - Humans drive mostly on vision, so vision‑only is theoretically sufficient.
  - Extra sensors add cost, complexity, and failure modes; the “best part is no part.”
- Strong pushback: cameras are not human eyes, current ML is far from human semantics, and engineering safety normally favors redundant modalities.
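The redundancy argument above can be illustrated with a toy majority‑vote sketch. Everything here is an illustrative assumption — the function name, the 2‑of‑3 policy, and the boolean per‑sensor detections are not any vendor’s actual fusion logic, which operates on raw measurements rather than votes:

```python
# Toy sketch of cross-modal redundancy (hypothetical policy):
# an obstacle is accepted when at least two of three independent
# modalities agree, so a single blinded or spurious sensor cannot
# by itself cause a missed obstacle or a phantom detection.

def fuse_obstacle_votes(camera: bool, lidar: bool, radar: bool) -> bool:
    """2-of-3 majority vote across independent sensor modalities."""
    # In Python, bools sum as 0/1, so this counts agreeing sensors.
    return (camera + lidar + radar) >= 2

# A camera washed out by glare is outvoted by lidar and radar:
assert fuse_obstacle_votes(camera=False, lidar=True, radar=True)
# A single spurious radar return does not trigger a reaction on its own:
assert not fuse_obstacle_votes(camera=False, lidar=False, radar=True)
```

A vision‑only stack has no second modality to outvote a fooled camera, which is the crux of the redundancy critique; defenders would reply that adding modalities also adds disagreement cases that the fusion layer must resolve.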
Human driving, edge cases, and environment
- Long subthreads on how well humans adapt to foreign driving cultures and conditions vs how localized today’s AVs are.
- Severe weather (snow, ice, heavy rain, glare, fog) and chaotic traffic (e.g. parts of India, Africa, rural icy roads) are repeatedly cited as unsolved for all vendors.
- Some argue the bar for machines should be “better than humans,” not merely “as good,” given existing human crash rates.
Current Tesla FSD performance
- Some owners report FSD now handles 90–95% of their driving, including complex Bay Area/Boston routes, with rare safety interventions. They see rapid progress and consider Tesla far ahead of legacy OEMs.
- Others report phantom braking, poor behavior in unusual geometry, and camera reliability issues, saying they must intervene every few miles and find it terrifying in real use.
- There’s a clear split between “it’s already better than average rideshare drivers” anecdotes and “I wouldn’t trust it in bad weather or unfamiliar areas.”
Broader views on Tesla and Musk
- One camp argues Musk’s leadership created enormous value (EV market, rockets, energy storage) and that overpromising is typical of ambitious tech.
- Another emphasizes poor governance, hype‑driven valuation, the trillion‑dollar pay package, and a pattern of big, undelivered narratives (robotaxis, cheap cars, tunnels) as warning signs.
- Some suggest Tesla’s real long‑term play is batteries/energy, with cars and FSD as a bootstrapping and hype vehicle.