When Tesla's FSD works well, it gets credit. When it doesn't, you get blamed
Marketing, Definitions, and Blame-Shifting
- Commenters argue Tesla has continually moved the goalposts: “robotaxi” now includes cars with human “safety drivers,” which some say amounts to a rebranded traditional taxi service.
- Many see a broader “AI pattern”: when FSD works, Tesla/AI gets the credit; when it fails, the human gets blamed. Comparisons are made to “agentic coding” and “you didn’t prompt it right.”
- Several point out the asymmetry: Tesla markets FSD as a finished product, but after crashes frames it as a mere driver-assistance feature, pushing liability back onto users.
Safety, Reliability, and Data vs Anecdotes
- There’s heavy criticism of the lack of transparent, third‑party safety statistics (e.g., collisions per mile, not just disengagements or user testimonials).
- Some users report big improvements in v13/v14 and say it completes long highway or mixed trips with few or no interventions; others report persistent dangerous behavior in city driving and have stopped using it.
- Multiple people emphasize that anecdotes (“it drove me 2,000 miles”) are irrelevant to public safety; what matters is a rigorously measured incident rate, much as one would evaluate a medical treatment.
- Concern is raised that intermediate reliability (e.g., tens of thousands of miles per serious incident) is especially dangerous: drivers relax and treat the system as unsupervised autonomy even though it still performs worse than an average human driver (see the sketch after this list).
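A back-of-envelope sketch of the arithmetic behind that concern, using purely hypothetical placeholder numbers (none are cited in the thread): a system that goes tens of thousands of miles between serious incidents feels flawless to any individual driver, yet across a large fleet it can still produce an order of magnitude more incidents than a baseline with a higher miles-per-incident figure.

```python
# Hypothetical back-of-envelope comparison of incident rates.
# All figures are illustrative placeholders, not data from the discussion,
# chosen only to show why "tens of thousands of miles per serious incident"
# can feel safe to one driver yet be far worse at fleet scale.

FLEET_MILES_PER_YEAR = 5_000_000_000  # assumed annual fleet mileage (hypothetical)

# Miles driven per serious incident (hypothetical rates).
miles_per_incident = {
    "hypothetical human baseline": 500_000,
    "intermediate automation (per the comment)": 50_000,
}

for label, rate in miles_per_incident.items():
    expected_incidents = FLEET_MILES_PER_YEAR / rate
    print(f"{label}: ~{expected_incidents:,.0f} serious incidents "
          f"per {FLEET_MILES_PER_YEAR:,} fleet miles")
```

With these placeholder rates, the 50,000-miles-per-incident system produces ten times as many serious incidents as the 500,000-miles-per-incident baseline over the same fleet mileage, even though an individual driver might go years without seeing one.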
Edge Cases, Sensors, and Technical Limits
- Sun glare, night driving, seasonal conditions, and poor lane visibility are cited as reasons not to trust FSD; some claim newer hardware and software versions help, while others with the newest hardware say the issues remain.
- Debate over Tesla’s camera‑only approach vs adding lidar/radar. Critics say vision-only is brittle and that shipping systems with known limitations without clear user warnings is unethical.
Liability, Regulation, and Legal Cases
- Several are perplexed by weak regulatory action in the US/Canada, likening FSD vehicles to “unlicensed drivers on the road.”
- Discussion of a Florida Autopilot crash verdict: jury split fault between Tesla and the driver. Some argue Tesla deserves zero blame if the driver pressed the accelerator; others say branding (“Autopilot,” “FSD”) and design choices make shared liability appropriate.
- Some propose banning “Level 3” style systems entirely because they invite exactly this ambiguity about who is responsible.
Competition, Business Model, and User Sentiment
- Comparison with Waymo, Nuro, Baidu, Zoox, etc.: others are operating true robotaxis at limited scale, while Tesla is seen either as still catching up or as “maxed out and mostly hype,” depending on the commenter.
- There’s debate whether Tesla’s low‑cost, camera‑only robotaxi vision could eventually crush higher‑cost stacks economically, if it ever works as promised.
- Multiple early tech‑enthusiast owners say they won’t buy another Tesla: they see FSD as oversold and underdelivered, are frustrated by the lack of new models or meaningful upgrades, and express growing distaste for the company’s leadership and brand image.
Broader AI and Incentive Structures
- Parallels are repeatedly drawn to generative AI tools: they can be impressively helpful but also produce bizarre failures, still requiring expert supervision.
- Some frame FSD and similar systems as part of a wider economy of plausible deniability and “chickenization,” where companies capture upside while systematically offloading risk and blame onto individual users.