The comparison isn't really like-for-like. NHTSA SGO AV reports can include very minor, low-speed contact events that would often never show up as police-reported crashes for human drivers, meaning the Tesla crash count may be drawing from a broader category than the human baseline it's being compared to.
There's also a denominator problem. The mileage figure appears to be cumulative miles "as of November," while the crashes are drawn from a specific July-November window in Austin. It's not clear that those miles line up with the same geography and time period.
The sample size is tiny (nine crashes), uncertainty is huge, and the analysis doesn't distinguish between at-fault and not-at-fault incidents, or between preventable and non-preventable ones.
Also, the comparison to Waymo is stated without harmonizing crash definitions and reporting practices.
All of these points are addressed in the article itself, and its conclusions still hold based on the publicly available data.
The 3x figure in the title is based on a comparison of the Tesla reports with estimated average human driver miles without an incident, not based on police report data. The comparison with police-report data would lead to a 9x figure instead, which the article presents but quickly dismisses.
The denominator problem is made up. Tesla Robotaxi has only been launched in one location, Austin, and only since July (well, June 28th, so maybe there's a few days' discrepancy?). So the crash data and the miles data can only refer to this same period. Furthermore, if the miles driven are actually based on some additional length of time, then the picture gets even worse for Tesla, as the miles attributable to those 9 incidents get smaller.
The analysis indeed doesn't distinguish between the types of accidents, but this is irrelevant. The human driver estimates for miles driven without incident also don't distinguish between the types of incidents, so the comparison is still very fair (unless you believe people intentionally tried to get the Tesla cars to crash, which makes little sense).
The comparison to Waymo is also done based on incidents reported by both companies under the same reporting requirements, to the same federal agency. The crash definitions and reporting practices are already harmonized, at least to a good extent, through this.
Overall, there is no way to look at this data and draw a conclusion significantly different from the article's: Tesla is bad at autonomous driving and has a long way to go until it can be considered safe on public roads. We should also remember that these robotaxis are not even fully autonomous! Each car has a human safety monitor ready to step in and take control of the vehicle at any time to avoid incidents - so the real incident rate, if the safety monitor weren't there, would certainly be even worse than this.
I'd also mention that 5 months of data is not that small a sample size, despite your trying to make it sound so ("only 9 crashes").
To add to this, more data from more regions means the estimate of average human miles without an incident is more accurate, simply because it is estimated from a larger sample, so more likely to be representative.
I agree with most of your points and your conclusion, but to be fair, OP was asserting that human drivers under-report incidents, which I believe. Super minor bumps where the drivers get out, determine there's barely a scratch, and go on. Or solo low-speed collisions with garage walls or trees.
I don’t think it invalidates the conclusion, but it seems like one fair point in an otherwise off-target defense.
Sure, but the 3x comparison is not based on reported incidents, it's based on estimates of incidents that occur. I think it's fair to assume such estimates are based on data about repairs and other such market stats, that don't necessarily depend on reporting. We also have no reason a priori to believe the Tesla reports include every single incident either, especially given their history from FSD incident disclosures.
> The 3x figure in the title is based on a comparison of the Tesla reports with estimated average human driver miles without an incident, not based on police report data. The comparison with police-report data would lead to a 9x figure instead, which the article presents but quickly dismisses.
I think OP's point still stands here. Who are people reporting minor incidents to that would be publicly available that isn't the police? This data had to come from somewhere and police reports is the only thing that makes sense to me.
If I bump my car into a post, I'm not telling any government office about it.
I don't know, since they unfortunately don't cite a source for that number, but I can imagine some sources of data - insurers, vehicle repair and paint shops. Since average miles driven without incident seems plausible to be an important factor for insurance companies to know (even minor incidents will typically incur some repair costs), it seems likely that people have studied this and care about the accuracy of the numbers.
Of course, I fully admit that for all I know it's possible the article entirely made up these numbers, I haven't tried to look for an alternative source or anything.
The article lists the crashes right at the top. One of 9 involved hitting a fixed object. The rest involved collisions with people, cars, animals, or injuries.
So, let's exclude hitting fixed objects as you suggest (though the incident we'd be excluding might have been anything from a totaled car and huge fire to zero damage), and also assume that humans fail to report injury / serious property damage accidents more often than not (as the article assumes).
That gets the crash rate down from an unbiased 9x to a lowball 2.66x higher than human drivers. That's with human monitors supervising the cars.
2.66x is still so poor they should be pulled off the streets IMO.
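For what it's worth, the 2.66x figure checks out under the assumption that the crash-rate ratio scales linearly with the number of crashes counted (the 3.0 here is the article's estimate against all human incidents, reported and unreported - illustrative, not official, numbers):

```python
# Sketch of the rescaling above: dropping the one fixed-object incident
# scales the article's 3x ratio by 8/9.
crashes_all = 9
crashes_excl = 8  # exclude the single fixed-object incident

ratio_all = 3.0  # article's headline multiple, all 9 crashes
ratio_excl = ratio_all * crashes_excl / crashes_all
print(round(ratio_excl, 2))  # 2.67, the "lowball" figure above
```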
> So, let's exclude hitting fixed objects as you suggest (though the incident we'd be excluding might have been anything from a totaled car and huge fire to zero damage)
I don't know what data is available but what I really care about more than anything is incidents where a human could be killed or harmed, followed by animals, then other property and finally, the car itself. So I'm not arguing to exclude hitting fixed objects, I'm arguing that severity of incident is much more important than total incidents.
Even when comparing it to human drivers, if Tesla autopilot gets into 200 fender benders and 0 fatal crashes, I'd prefer that over a human driver getting into 190 fender benders and 10 fatal crashes. Directionally, though, I suspect the numbers would go the other way: more major incidents from automated cars, because when they succeed they usually handle situations perfectly, and when they fail they just don't see the stopped car ahead and hit it at full speed.
> That gets the crash rate down from an unbiased 9x to a lowball 2.66x higher than human drivers. That's with human monitors supervising the cars.
> 2.66x is still so poor they should be pulled off the streets IMO.
I'm really not here to argue they are safe or anything like that. It just seems clear to me that every assumption in this article is made in the direction that makes Tesla look worse.
>> However, that figure doesn’t include non-police-reported incidents. When adding those, or rather an estimate of those, humans are closer to 200,000 miles between crashes, which is still a lot better than Tesla’s robotaxi in Austin.
I can't be certain about auto insurers, but healthcare insurers just straight up sell the insurance claims data. I would be surprised if auto insurers haven't found that same "innovation."
That's a fair point, but I'll note that the one time I hit an inanimate object with my car I wasn't about to needlessly involve anyone. Fixed the damage to the vehicle myself and got on with life.
So I think it's reasonable to wonder about the accuracy of the estimates for humans. We (i.e., society) could really use a rigorous dataset for this.
Tesla could just share their datasets with researchers and NHTSA and the researchers can do all the variable controls necessary to make it apples to apples.
TFA does a comparison with an estimated average that includes low-speed contact events not police-reported by humans: one incident every 200,000 miles. I think that's high - if you're including backing into static objects in car parks and the like, you can look at workshop data and extrapolate that a lower figure might be closer to the mark.
TFA also does a comparison with other self-driving car companies, which you acknowledge, but dismiss: however, we can't harmonize crash definitions and reporting practices as you would like, because Tesla is obfuscating their data.
TFA's main point is that we can't really know what this data means because Tesla keep their data secret, but others like Waymo disclose everything they can, and are more transparent about what happened and why.
TFA is actually saying Tesla should open up their data to allow for better analysis and comparison, because at the moment their reporting practices make them look crazy bad.
> TFA does a comparison with an estimated average that includes low-speed contact events not police-reported by humans: one incident every 200,000 miles.
Where does it say that? I see "However, that figure doesn’t include non-police-reported incidents. When adding those, or rather an estimate of those, humans are closer to 200,000 miles between crashes, which is still a lot better than Tesla’s robotaxi in Austin."
All but one of the Tesla crashes obviously involved significant property damage or injuries (the remaining one is ambiguous).
So, based on the text of the article, they're assuming only 2/5ths of property damage / injury accidents are reported to the police. That's lower than I would have guessed (don't people use their car insurance, which requires the police report?), but presumably backed by data.
Because the bad title is the point, the author has made it his life’s purpose to troll the Elon sycophants on X. For that reason there’s no reason to take him any more seriously than you would take those guys as he’s just their mirror image. I’m enough of an Elon skeptic to suspect the Austin robotaxis don’t have a real path to operating autonomously for several reasons, doesn’t mean I have to listen to Fred Lambert. He’s peddling clickbait/ragebait and I don’t understand how it’s taken as anything more.
Yes, that's very often the case with things that would very likely be shared if they looked good.
There are things that don't get shared on principle - anonymous votes, for example, or behind-the-scenes negotiations without commitment, or security-critical data.
But given that Musk has been parading around vague promises for a very long time, sharing data that looked very good would certainly be something they would do.
It's a public company making money off of some claims. Not being transparent about the data supporting those claims is already a huge red flag and failure on their part regardless of what the data says.
I've actually started ignoring all these reports. There is so much bad faith going on in self-driving tech on all sides, it is nearly impossible to come up with clean and controlled data, much less objective opinions. At this point the only thing I'd be willing to base an opinion on is if insurers ask for higher (or lower) rates for self-driving. Because then I can be sure they have the data and did the math right to maximise their profits.
The biggest indicator for me that this headline isn't accurate is that Lemonade insurance just reduced the rate for Tesla FSD by 50%. They probably have accurate data and decided that Tesla's are significantly safer than human drivers.
Thank you. Everyone is hiding disengagements and settling claims to hide accidents. This will not be fixed or standardized without changes to the laws, which for self-driving have been largely written by the handful of companies in the space. Total, complete regulatory capture.
I think it's fair to put the burden of proof here on Tesla. They should convince people that their Robotaxis are safe. If they redact the details about all incidents so that you cannot figure out who's at fault, that's on Tesla alone.
While I think Tesla should be transparent, this article doesn't really make sure it is comparing apples to apples either.
I think it's weird to characterize it as legitimate and then say "Go on, Tesla, convince me otherwise", as if the same audience would ever be reached by Tesla, or as if people would care to do their due diligence.
It’s not weird. They have a history of over promising to the point that one could say they just straight up lie on a regular basis. The bar is higher for them because they have abused the public’s trust and it has to be earned again.
The results have to speak for Tesla very loudly and very clearly. And so far they don’t.
But this is more your feelings than actual fact.
I mean, sure, you can say that the timelines slipped a lot, but that doesn't really have anything to do with the rest of what's insinuated here.
I would argue a timeline slipping doesn't mean you go on to kill people and lie about it next. I would even go so far as to say that the timelines slipped precisely to avoid that.
Tesla continues to overpromise, about safety, about timelines that slip due to safety.
We should be a bit more hard-nosed and data-based when dealing with these things, rather than dismissing the core question as "feelings" while Tesla doesn't release the sort of data that allows fair analysis.
> But this is more your feelings than actual fact
Seems to be the other way around, though I find it kind of rude to assert that rather than asking what informs my opinion. Other comments have answered that very well.
You're generous with your words to the point they sound like apologism. Musk has been promising fully autonomous driving "within 1-3 years" since 2013. And he's been charging customers money for that promise for just as long. Timelines keep slipping for more than half of the company's existence now, that's not a slipup anymore.
Tesla has never been transparent with the data on which they base their claims of safety and performance of the system. They tout some nice looking numbers but when anyone like the NHTSA requests the real data they refuse to provide it.
When NHTSA shows you numbers, you say they're lying. If I tell you I have evidence Tesla is lying, you'll tell me to show it or STFU. But when it's Tesla, after so many people have died, you go all soft and claim everyone else is lying. That's very one-sided behavior, more about feelings than facts.
> But this is more your feelings than actual fact.
The article is about "NHTSA crash data, combined with Tesla’s new disclosure of robotaxi mileage". Sounds factual enough. If Tesla is sitting on a trove of data that proves otherwise but refuse to publish it that's on them. If anyone is about the feels and not the facts here, it's you.
Tesla (Elon Musk really) has a long history of distorting the stats or outright lying about their self driving capabilities and safety. The fact that folks would be skeptical of any evidence Tesla provided in this case is a self-inflicted problem and well-deserved.
He did promise his electric trucks would be more cost-effective than trains (still nothing in 2026...). And the "world's fastest supercar". And, in 2015, full self-driving by "next year". None of these are on offer in 2026.
There have never been truthful statements from his companies, only hype & fluff for monetary gains.
There used to be [EDIT: still is] a website[1] that listed all of Musk's promises and predictions about his businesses and showed you how long it's been since he said the promise would materialize. It's full of mostly old statements, probably because it's impossible to keep up with the amount of content being generated monthly.
This has nothing to do with burden of proof, it has to do with journalistic accuracy, and this is obviously a hit piece. HN prides itself on being skeptical and then eats up "skeptic slop."
>I think it's fair to put the burden of proof here on Tesla.
That just sounds like a cope. The OP's claim is that the article rests on shaky evidence, and you haven't really refuted that. Instead, you just retreated from the bailey of "Tesla's Robotaxi data confirms crash rate 3x worse ..." to the motte of "the burden of proof here on Tesla".
More broadly, I think the internet is going to be a better place if comments/articles with bad reasoning are rebuked from both sides, rather than getting a pass from one side because they're directionally correct, e.g. "the evidence of WMDs in Iraq is flimsy, but that doesn't matter because Hussein was still a bad dictator".
The point is this: the article writer did what research they could do given the available public data. It's true that their title would be much more accurate if it said something like "Tesla's Robotaxi data suggests crash rate may be up to 3x worse than human drivers". It's then 100% up to Tesla to come up with cleaner data to help dispel this.
But so far, if all the data we have points in this direction, even if the certainty is low, it's fair to point this out.
It's not a Motte and Bailey fallacy at all; it's a statement of a belief about what should be expected if something is to be allowed as a matter of public health and safety implications.
They're saying that Tesla should be held to a very high standard of transparency if they are to be trusted. I can't speak to OP, but I'd argue this should apply to any company with aspirations toward autonomous driving vehicles.
The title might be misleading if you don't read the article, but the article itself at some level is about how Tesla is not being as transparent as other companies. The "shaky evidence" is due to Tesla's own lack of transparency, which is the point of stating that the burden of proof should be on Tesla. The article is about how, even with lack of transparency, the data doesn't look good, raising the question of what else they might not be disclosing.
From the article: "Perhaps more troubling than the crash rate is Tesla’s complete lack of transparency about what happened... If Tesla wants to be taken seriously as a robotaxi operator, it needs to do two things: dramatically improve its safety record, and start being honest about what’s happening..."
I'd argue the central thesis of the article isn't one of statistical estimation; it's a statement about evidentiary burden.
You don't have to agree with the position that Tesla should be held a high transparency standard. But the article is taking the position that you should, and that if you do agree with that position, that you might say that even by Tesla's unacceptable standards they are failing. They're essentially (if implicitly) challenging Tesla to provide more data to refute the conclusions, saying "prove us wrong", knowing that if they do, then at least Tesla would be improving transparency.
I don't think it's a motte and bailey fallacy, because the motte is not well established. Tesla clearly does not believe that the burden of proof is on them, and, by extension, neither do regulators and legislators.
a) Teslas are unsafe. The sparse data they're legally obligated to provide shows this clearly.
b) Elon Musk is sitting on a treasure trove of safety data showing that FSD finally works safely + with superhuman crash avoidance, but is deciding not to share it.
You're honestly going with (b)? We're talking about the braggart that purchased Twitter so he could post there with impunity. To put it politely, it would be out of character for him to underpromise + overdeliver.
Btw, do you happen to know why electrek.co changed their tune in such a way? I was commenting on a similarly negative story by the same site and said that they are always anti-Tesla. But then somebody pointed out that this wasn't always the case - that they were actually supportive, but then suddenly turned.
Fred Lambert was an early Tesla evangelist - he constantly wrote stories praising Tesla and Elon for years. He had some interactions with Elon on Twitter, got invited to Tesla events, referred enough people to earn free Tesla cars, etc.
If we assume the best (per HN guidelines): Up to about 2018 Tesla was the market-leading EV company, and the whole thesis of Electrek is that EVs are the future. So, of course they covered Tesla frequently and in a generally positive light.
Since then, the facts have changed. Elon's become increasingly erratic, and has been making increasingly unhinged claims about Tesla's current and future products. At the same time, Tesla's offerings are far behind domestic standards, which are even further behind international competition. Also, many people have died due to obvious Tesla design flaws (like the door handles, and false advertising around FSD).
Journalistic integrity explains the difference in coverage over the years. Coverage from any fact-based outlet would have a similar shift in sentiment.
Good analysis. Just over a month ago, Electrek was posted here claiming that Teslas supervised by humans were crashing 10x more than humans alone.
That was based on a sample size of 9 crashes. In the month following that, they've added one more crash while also increasing the miles driven per month.
The headline could just as easily be about the dramatic decline in their crash rate! Or perhaps the dataset is just too small to analyze like this, and the Electrek authors are being their usual overly dramatic selves.
Previous article: Tesla with human supervisor at wheel: 10x worse than human alone.
Current article: Tesla with remote supervisor: 3-9x worse than human alone.
Given the small sample sizes, this shows a clear trend: Tesla's autopilot stuff (or perhaps vehicle design) is causing a ton of accidents, regardless of whether it's being operated locally by customers or remotely by professionals.
I'd like to see similar studies broken down by vehicle manufacturer.
The ADAS in one of our cars is great, but occasionally beeps when it shouldn't.
The ADAS in our other car cannot be disabled and false positives every 10-20 miles. Every week or so it forces the vehicle out of lane (either left of double yellow line center, or into another car's lane).
If the data on crash rates for those two models were public, I guarantee the latter car would have been recalled by now.
That is an overly optimistic way to phrase an apparent decrease in crashes, when Tesla is not being upfront about data that at best looks like it's worse than human crash rates.
Unless one were a Tesla insider, or had a huge interest in Tesla over other people on the road, proposing such spin would not be a normal thing to do.
Media outlets, even ones devoted to EVs, should not adopt the very biased framing you propose.
I don't think statistics work that way. A study of all Teslas and all humans in Austin for 5 months is invalid because Electrek ran a ridiculous "study", and this headline could "just as easily" have presented the flawed Electrek story as a legit baseline?
The 10x would be 9x if the methodology were the same. 9x->3x is going from reported accidents to inferred true accident rate, as the article points out.
This is a statement of fact but based on this assumption:
> low-speed contact events that would often never show up as police-reported crashes for human drivers
Assumptions work just as well both ways. Musk and Tesla have been consistently opaque when it comes to the real numbers they base their advertising on. Given this past history of total lack of transparency and outright lies it's safe to assume that any data provided by Tesla that can't be independently verified by multiple sources is heavily skewed in Tesla's favor. Whatever safety numbers Tesla puts out you can bet your hat they're worse in reality.
> the fleet has traveled approximately 500,000 miles
Let's say they average 10mph, and say they operate 10 hours a day, that's 5,000 car-days of travel, or to put it another way about 30 cars over 6 months.
That's tiny! That's a robotaxi company that is literally smaller than a lot of taxi companies.
One crash in this context is going to just completely blow out their statistics. So it's kind of dumb to even talk about the statistics today. The real takeaway is that the robotaxis don't really exist; they're in an experimental phase, and we won't get real statistics until they're doing 1,000x that mileage, which won't happen until they've built something that actually works - and that may never happen.
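The fleet-size arithmetic above checks out; note the 10 mph average speed and 10 hours/day of operation are the comment's assumptions, not published figures:

```python
# Back-of-envelope fleet estimate from ~500,000 total miles,
# assuming 10 mph average speed and 10 hours of operation per day.
total_miles = 500_000
avg_speed_mph = 10
hours_per_day = 10

car_days = total_miles / (avg_speed_mph * hours_per_day)
fleet_size = car_days / (6 * 30)  # ~6 months of operation
print(car_days, round(fleet_size))  # 5000.0 car-days, ~28 cars
```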
The more I think about your comment on statistics, the more I change my mind.
At first, I think you’re right - these are (thankfully) rare events. And because of this, the accident rate is Poisson distributed. At this low of a rate, it’s really hard to know what the true average is, so we do really need more time/miles to know how good/bad the Teslas are performing. I also suspect they are getting safer over time, but again… more data required. But, we do have the statistical models to work with these rare events.
But then I think about your comment about it only being 30 cars operating over 6 months. Which, makes sense, except for the fact that it’s not like having a fleet of individual drivers. These robotaxis should all be running the same software, so it’s statistically more like one person driving 500,000 miles. This is a lot of miles! I’ve been driving for over 30 years and I don’t think I’ve driven that many miles. This should be enough data for a comparison.
If we are comparing the Tesla accident rate to people in a consistent manner (accident classification), it’s a valid comparison. So, I think the way this works out is: given an accident rate of 1/500000, we could expect a human to have 9 accidents over the same miles with a probability of ~ 1 x 10^-6. (Never do live math on the internet, but I think this is about right).
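The ~1e-6 figure above holds up. Assuming a human rate of 1 crash per 500,000 miles (the comment's assumption), crashes over 500,000 miles are roughly Poisson with mean 1, and the chance of 9 or more is:

```python
import math

# Human crashes over 500,000 miles, modeled as Poisson with
# mean = miles * rate = 500,000 * (1 / 500,000) = 1.
lam = 1.0
p_at_most_8 = sum(math.exp(-lam) * lam**k / math.factorial(k) for k in range(9))
p_9_or_more = 1 - p_at_most_8
print(f"{p_9_or_more:.1e}")  # ~1.1e-06, in line with the estimate above
```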
500,000 miles / 30 years is ~16,667 mi/yr. While it's a bit above the US average, it's not incredibly so. Tons of normal commuters will have driven more than that many miles in 30 years.
That’s not quite the point. I’m a bit of an outlier, I don’t drive much daily, but make long trips fairly often. The point with focusing on 500,000 miles is that that should be enough of an observation period to be able to make some comparisons. The parent comment was making it seem like that was too low. Putting it into context of how much I’ve driven makes me think that 500,000 miles is enough to make a valid comparison.
But that's the thing: in many ways it is a pretty low number. It's less than the number of miles a single average US commuter will have driven in their working years. So in some ways it's like trying to draw lifetime crash statistics while only looking at a single person in your study.
It's also kind of telling that, despite supposedly having this tech ready to go for years, they've only bothered rolling out a few cars, which are still supervised. If this tech were really ready for prime time, wouldn't they have driven more than 500,000 miles in six months? If they were really confident in the safety of their systems, wouldn't they have expanded greatly?
I mean, FFS, they don't even trust their own cars to be unsupervised in the Las Vegas Loop. An enclosed, well-lit, single-lane, private access loop and they can't even automate that reliably enough.
Waymo is already doing over 250,000 weekly trips.[0] The trips average ~4mi each. With those numbers, Waymo is doing 1 million miles a week. Every week, Waymo is doing twice as many miles unsupervised than Tesla's robotaxi has done supervised in six months.
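Putting those figures side by side (the trip count and ~4 mi average trip length are as cited in the comment, so treat them as rough):

```python
# Waymo's weekly mileage vs. Tesla robotaxi's cumulative mileage,
# using the figures cited above.
waymo_weekly_trips = 250_000
avg_trip_miles = 4
waymo_weekly_miles = waymo_weekly_trips * avg_trip_miles  # 1,000,000 mi/week

tesla_total_miles = 500_000  # ~6 months, supervised
print(waymo_weekly_miles / tesla_total_miles)  # 2.0: twice Tesla's total, every week
```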
Wait, so your argument is there's only 9 crashes so we should wait until there's possibly 9,000 crashes to make an assessment? That's crazy dangerous.
At least 3 of them sound dangerous already, and it's on Tesla to convince us they're safe. It could be a statistical anomaly so far, but hovering at 9x the alternative doesn't provide confidence.
No, my argument is you shouldn't draw a statistical conclusion with this data. That's all. I'm kind of pushing in the direction you were pointing in the second part - it's not enough data to make statistical inferences. We should examine each incident, identify the root cause and come to a conclusion as to whether that means the system is not fit for purpose. I just don't think the statistics are useful.
We've known for a long time now that their "robotaxi" fleet in Austin is about 30-50 vehicles. It started off much lower and has grown to about 50 today. There's actually a community project to track individual vehicles that has more exact figures.
Currently it's at 58 unique vehicles (based on license plates) with about 22 that haven't been seen in over a month
>One crash in this context is going to just completely blow out their statistics.
One crash in 500,000 miles would merely put them on par with a human driver.
One crash every 50,000 miles would be more like having my sister behind the wheel.
I’ll be sure to tell the next insurer that she’s not a bad driver - she’s just one person operating an itty bitty fleet consisting of one vehicle!
If the cybertaxi were a human driver accruing double points 7 months into its probationary license, it would never have made it to 9 accidents: its license would have been revoked and suspended after the first two or three accidents in her state, and it would have been thrown in JAIL as a "scofflaw" if it kept driving.
From the tone, it seems that the poster's sister is a particularly bad driver (or at least they believe her to be). While having an autonomous car that can drive as well as even a bad human driver is definitely a major accomplishment technologically, we all know that threshold was passed a long time ago. However, if Tesla's robotaxis (with human monitors on board, let's not forget - these are not fully autonomous cars like Waymo's!) are at best as good as some of the worse human drivers, then they have no business being allowed on public roads. Remember that human drivers can also lose their license if [caught] driving too poorly.
Elon promised self-driving cars in 12 months back in 2017? He's also promising Optimus robots doing surgery on humans in 3 years? Extrapolating… Optimus is going to kill some humans and it will all be worth it!
Elon is aware that Tesla's insane market valuation would crash 10x if it stays a car company.
There isn't enough money and most importantly margin in the car industry to warrant such a valuation, so he has to pivot away from cars into the next thing.
Just to give an example of how risky it is for Tesla to be a car company:
In 2025, Toyota had 3.5 times Tesla's revenue, 8 times the net income, and twice the margin.
And Toyota has a market cap that is 6 times lower than Tesla.
It would take Tesla a gargantuan effort to match Toyota's numbers and margins, and even if it did match them... it would be a disaster for Tesla's stock.
Hell, Tesla makes much less money than Mercedes-Benz, and with a smaller margin. Mercedes has 60% more revenue and twice the net income. Yet Tesla is valued at around 40 times Mercedes-Benz.
Tesla *must* pivot away from cars and make them a side business, or sooner or later that stock is crashing, and it will crash fast and hard.
Musk understands that, which is why he is focusing on robotaxis and robots. It's the only way to sell Tesla to naive investors.
And then they will pivot away from humanoid robots too. To justify the valuation, they have already pivoted from an electric-car company to a self-driving-taxi company without delivering self-driving taxis; they are now pivoting to robots, and before delivering the robots they will pivot to the next shiny thing. Maybe the pivot is the real Tesla product that justifies the crazy valuation.
That is why all the crazy promises and moves: hyping X.ai, Robotaxis, Optimus, data centers in space. If you are constantly promising the future and some radical moves, optimistic investors believe you, and you can keep increasing the "potential future valuation".
But when you look at it:
- X.ai is basically getting into the race by throwing money at the problem and using your name to get funding in a hyped industry.
- Do a buyout of your own company with it, get access to data that you restricted to everyone else.
- Merge it with SpaceX for "datacenters in space", do an IPO for a huge valuation
- Probably merge it with Tesla, overhype everything
- As the humanoid, AI and space industry grows, so will the valuation just because of the market growth, not necessarily because of great/revolutionary products
At that point, nobody can even consider what the valuation is, as it is a mishmash of promises, fudged numbers, real numbers, potential numbers, contracts, hype and everything else. It allows moving financials around and tuning things to get him his 1T package and hype things even more.
I mean congrats to Elon, just by overhyping his products he shifts the timeline narrative more towards techno-optimism and earns himself more money. The financial shenanigans to follow in the next few years will be an interesting period for future financial archeologists.
I dislike Tesla/Elon but would prefer the reality where they innovate until their worth matches their current price. I suspect yours is more likely to happen.
> Elon is aware that Tesla insane market valuation would crash 10x if it stays a car company.
I see nothing wrong here, correction back to reality.
I understand why people adored him blindly in the early days, but liking him now, after it's clear what sort of person he is and always will be, is the same as liking Trump. Many people still do it, but it's hardly a defensible position unless one is already invested in his empire.
It’d be best for everyone outside of the company but he and the board would be buried in lawsuits for the rest of their lives. They have a strong personal interest in avoiding that even if it’s well-deserved based on sober data analysis, so they’re pushing the Hail Mary play trying to jump into a bigger new market which they haven’t already ceded to the competition.
We need to bring back the concept of seppuku for situations like that. "We have to lie more because it would be too painful to admit our lies" should be a moment that makes any leader question where they have been, where they are, where they're going, and all of their motivations and reasoning. It should be the sort of "what have I done?" moment that sends a person to a monastery, an asylum, or the grave.
I saw a pretty convincing argument that Musk fried his brain with ketamine, written by a former ketamine abuser who saw a lot of familiar behavior. I don't think Musk is the same guy now that he was in the early days.
Sounds like a reach. Many people just loved the stuff he was doing and didn't really know much about him as a person. When a more complete picture (specifically his politics) emerged, and people decided they really didn't like the person, they had to resolve their cognitive dissonance by finding reasons they were right then and right now, instead of admitting they projected the person they imagined onto the person they really didn't know. Tech enthusiasts just never imagined that a guy doing so many cool things could turn out to be a right-winger.
They also reported tiny profits that are just slightly above what they get in as subsidies. P/E compared to other similar companies is also through the roof.
The best part of all of this is that, given their history and the state of robotaxis as a whole, they will fail, and Tesla will crash. And it'll be a great day. The hype and obscene overvaluation of them is utterly moronic.
Look how much longer and more experienced Waymo is, and they still have multiple issues a week popping up online, and that's with running in a very small, well-mapped and planned-out area. Musk wants robotaxis globally; that's just not happening, not any time soon and certainly not by the 10-year limit for him to get his trillion-dollar bonus from Tesla, which is the only reason he's pushing so hard to make it happen.
This comes after a recent iSeeCars study that found that Tesla as a brand had the highest fatal crash rate in the US (with Kia being a very close second)
Yes, on one side Tesla is not transparent, but on the other side the author of the article is a hypocrite, given they went with the click-bait title "Tesla’s own Robotaxi data confirms crash rate 3x worse than humans even with monitor".
Tesla's secrecy is likely meant to stop journalists from taking any chance they can to sell more news by writing an autonomous-vehicle horror story.
Given the secrecy we don't know what happened, yet the journalist chose to go with the worst-case title.
While the title is slightly biased, it's completely fair to analyze all of the public data a company provides about a very public problem (how safe their autonomous cars are), and show what the risks are. If Tesla wants us to believe their robotaxis are safe (which they implicitly do by putting these on public roads), it's entirely on them to publish data that supports that claim. If the data they themselves publish suggests that they are much worse than human drivers, then I want journalists to report on that.
It's also extremely implausible that Tesla has data that their cars are very safe, but choose to instead publish vague data that makes them seem much worse. It's for example much more likely that these 9 incidents reported are just the bad incidents that they think they won't be able to hide, rather than assuming these are all or mostly minor incidents like lightly bumping into a static object.
Secrecy clearly doesn't avoid that kind of story though. The question is if their numbers were really good, or at least as good as Waymo, why wouldn't they share them for the positive press? Waymo doesn't get as many negative pieces like this.
It's a pretty logical conclusion to say that numbers they won't share must make them look bad in some way.
The human accident count per mile is brought down by a lot of highway miles. The Robotaxi is, at present, geofenced. It's not going to be getting a lot of highway miles. Most crashes happen on city streets.
Tesla has completely fumbled a spectacular lead in EVs and managed to snatch defeat from the jaws of victory. And instead of turning it around, we're supposed to believe they are going to completely pivot and then take over a market with far more developed competitors (e.g. Boston Dynamics).
That Elon is riding this wave amidst the transparency of the whole thing is the funniest part. It's like watching people lose money at the "three cup" game but the cups are clear.
Do you remember EVs before Tesla? They were glorified golf carts using lead-acid batteries. The performance and range were awful. The Roadster and the Model S changed all that. I'm not saying I remember this perfectly but as I recall, Tesla's original objective was to show that an EV could be a real car, look attractive (or at least normal), and to create demand for EVs that would force all manufacturers to start making them. The ultimate value in Tesla was supposed to be batteries, which all cars would eventually need.
I'm not sure how defensible the lead was. The only reason BYD isn't the only game in town is tariffs. The pivot to Optimus is ridiculous though. They can't get a car to drive truly autonomously after more than a decade and they want to expand the degrees of freedom?
Tesla had a good brand image in the early 2010s; they could have positioned themselves as the quality/luxury brand for EVs and had people buy Tesla for the brand itself, like people do for Apple.
Instead they let Elon make their brand so toxic that people are actively avoiding it.
That was 10-15 years ago, but back then Musk appeared different, and Tesla was new. Today you can buy a Tesla, but they are no longer the status symbol they once were. A 15-year-old Mercedes is a status symbol in the US; a 15-year-old Tesla is not. Tesla didn't capture the status symbol market (which might have been a good decision - what wasn't a good decision was for the CEO to go public with political views that a lot of his potential base does not support).
No, it's the reverse. Someone who finds Musk's behavior so abhorrent they fear being affiliated with it will actually find reasons they don't really want a Tesla.
It doesn't help that Tesla, making extremely low quality and uncomfortable cars for the price point, provides plenty of dislikable things to find.
Facebook is a monopoly of a sort and so is hard to get away from. If I don't like Tesla there are many other options. Even if you only buy EVs, there are a lot of options that you can buy today. The only people who have to buy a Tesla are the type who are buying 10-year-old EVs (the limited range of a 10-year-old Nissan rules them out).
Difference is their product is so good as to be basically irreplaceable (good = strong network effects, which is the only flavor of "good" that matters)
I'm glad Tesla is pivoting to a product that can drop your bag of groceries in the worst case, instead of one that can slam you into a concrete divider at 75mph.
In general, any robot with servos powerful enough to be of any use is surprisingly dangerous to be around. While it's much easier to apply various limiters, the raw power in those motors will always pose a significant level of risk if anything goes wrong. If you're hovering above a human who sits up suddenly, you might get your nose broken. If it's a robot instead, it will have the strength and mass to easily mutilate you in the same kind of accident.
The robot could leave the iron standing on your clothes and walk away; it could leave your empty pan on the stove at max heat; it could take a nice hard grip of your throat for a few minutes.
…and the guy shuffling the cups is Dave Chappelle’s crackhead character.
My theory has always been that Trump's grand con was acting like what poor people think a rich person is like; Elon acts like what morons think a genius is like.
This is not good, but the point is that this can be improved much more easily than the human accident rate can. Both are very difficult problems, but one is certainly harder.
That's not really true. There are huge discrepancies in human driver accident data across different countries, which shows that there are clear practices one could deploy to significantly reduce driving incidents - people just choose not to implement them.
As far as I understand, those Robotaxis are only available within Austin so far. That is slow city traffic; the number of miles per ride is very small. However, the numbers for human drivers seem to take all kinds of roads into account. Of course, highways are the roads where you drive most of the distance at the least risk of an accident. Has this been taken into account in the evaluation?
It would be ironic if people claimed the Tesla numbers for Autopilot are too optimistic because it is used on highways only, while at the same time not noticing that city-only numbers for FSD would be pessimistic, statistics-wise.
It does look extremely pessimistic. For example, one of the 'incidents' is that they hit a curb in a parking lot at 6 mph.
No human driver would report this kind of incident. A human driver would probably forget it after the next traffic light.
While it's clearly Tesla's fault (if you hit any static object it's your fault), when you take this kind of 'incident' into account of course it'd look worse than humans.
The human data estimate they compare to in order to get the 3x number also includes this type of incident - even if of course no one reports it, you can get some idea of the number of such incidents from service and paint shop data.
More importantly: it seems like Austin is mostly a typical US city grid of wide streets. Nothing comparable with an average old inner city, or narrow countryside roads with a ditch or cliff or quay on one or both sides. Probably not many pedestrians & cyclists roaming the streets either?
- A lot of people fervently hoped they wouldn't need to drive for much longer and their kids wouldn't even need to learn
- So much progress had been made with deep learning in a fairly short length of time that surely we were on the cusp of broadly deployed autonomous vehicles.
TBH, the comments here amaze me. The claim is that a human being paid to monitor a driver assistance feature is 3x more likely to crash than a human alone.
That needs extraordinary evidence. Instead the evidence is misleading guesses.
That... is not really an extraordinary claim. That has been many people's null hypothesis since before this technology was even deployed, and the rationale for it is sufficiently borne out to play a role in vigilance systems across nearly every other industry that relies on automation.
A safety system with blurry performance boundaries is called "a massive risk." That's why responsible system designers first define their ODD, then design their system to perform to a pre-specified standard within that ODD.
Tesla's technology is "works mostly pretty well in many but not all scenarios, and we can't tell you which is which".
It is not an extraordinary claim at all that such a system could yield worse outcomes than a human with no assistance.
As long as there are still safety drivers, the data doesn't really tell you if the AI is any good. Unless you had reliable data about the number of interventions by the driver, which I assume Tesla doesn't provide.
Still damning that the data is so bad even then. Good data wouldn't tell us anything, the bad data likely means the AI is bad unless they were spectacularly unlucky. But since Tesla redacts all information, I'm not inclined to give them any benefit of the doubt here.
> As long as there are still safety drivers, the data doesn't really tell you if the AI is any good. Unless you had reliable data about the number of interventions by the driver, which I assume Tesla doesn't provide.
Sorry that does not compute.
It tells you exactly if the AI is any good, as, despite the fact that there were safety drivers on board, 9 crashes happened. Which implies that more crashes would have happened without safety drivers. Over 500,000 miles, that's pretty bad.
Unless you are willing to argue, in bad faith, that the crashes happened because of safety driver intervention.
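The implied rate is easy to check. A minimal back-of-envelope sketch on the figures quoted in this thread (the human baseline of ~500,000 miles per police-reported crash is an assumed round number for illustration, not a figure from the article):

```python
# Figures from the thread: 9 crashes over ~500,000 robotaxi miles.
robotaxi_miles = 500_000
robotaxi_crashes = 9

miles_per_crash = robotaxi_miles / robotaxi_crashes   # ~55,556 miles per crash

# Assumed baseline for illustration: roughly one police-reported crash
# per 500,000 human-driven miles (not a number from the article).
human_miles_per_reported_crash = 500_000

ratio = human_miles_per_reported_crash / miles_per_crash
print(f"~{miles_per_crash:,.0f} miles per crash, ~{ratio:.0f}x the assumed baseline")
```

Under that assumed baseline the robotaxi fleet crashes roughly nine times as often per mile, which matches the 9x figure discussed elsewhere in the thread.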
The problem is we don't know how many incidents would have happened if there was no safety driver. How many times did the driver have to intervene to prevent an accident? IMO, that should count towards the number of AI-driven accidents
I'm a bit hesitant to draw strong conclusions here because there is so little data. I would personally assume that it means the AI isn't ready at all, but without knowing any details at all about the crashes this is hard to state for sure.
But if the number of crashes had been lower than for human drivers, this would tell us nothing at all.
The "safety drivers" do nothing. They sit in the passenger seat and the only thing they have is a button that presumably stops the car and lets a remote operator take over.
> As long as there are still safety drivers, the data doesn't really tell you if the AI is any good.
I think we're on to something. You imply that good here means the AI can do its thing without human interference. But that's not how we view, say, LLMs being good at coding.
In the first context we hope for AI to improve safety whereas in the second we merely hope to improve productivity.
In both cases, a human is in the loop which results in second order complexity: the human adjusts behaviour to AI reality, which redefines what "good AI" means in an endless loop.
As much as I'd love to pile in on Tesla, it's unclear to me the severity of the incidents (I know they are listed) and if human drivers would report such things.
"Rear collision while backing" could mean they tapped a bollard. Doesn't sound like a crash. A human driver might never even report this. What does "Incident at 18 mph" even mean?
By my own subjective count, only three descriptions sound unambiguously bad, and only one mentions a "minor injury".
I'm not saying it's great, and I can imagine Tesla being selective in publishing, but based on this I wouldn't say it seems dire.
For example, roundabouts in cities (in Europe anyway) tend to increase the number of crashes, but they are overall of lower severity, leading to an overall improvement of safety. Judging by TFA alone I can't tell this isn't the case here. I can imagine a robotaxi having a different distribution of frequency and severity of accidents than a human driver.
He compared against the estimated statistics for non-reported accidents (typically your example: incidents that involve only one vehicle and only result in scratched paint) to get the 3x figure. Otherwise the title would have been 9x (which is in line with the 10x a data-analyst blogger wrote about ~3 months ago).
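The arithmetic behind the two headline numbers can be sketched like this. Both human baselines below are assumed round numbers for illustration, not the article's exact estimates:

```python
# Same 9 incidents over ~500,000 miles, two different human baselines.
tesla_miles_per_incident = 500_000 / 9        # ~55,556 miles per incident

# Assumed baselines (illustrative, not from the article):
police_reported_baseline = 500_000            # miles per police-reported crash
all_incident_baseline = 167_000               # miles per incident incl. unreported scrapes

print(police_reported_baseline / tesla_miles_per_incident)  # ~9x
print(all_incident_baseline / tesla_miles_per_incident)     # ~3x
```

The same crash count yields 9x against police-reported crashes only, or about 3x once the far more frequent unreported fender-benders are folded into the human denominator.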
> roundabouts in cities (in Europe anyway) tend to increase the number of crashes
Not in France, according to the data. It depends on the speed limit, but they decrease accidents by 34% overall, and by almost 20% when the speed limit is 30 or 50 km/h.
They reduce accidents in general, but bring us some “entertaining” new ones where a (usually) drunk driver crashes into the statue/fountain/whatever in the middle or uses the little “hill” in the middle as a jump ramp…
If a human had eyes on every angle of their car and they still did that it would represent a lapse in focus or control -- humans don't have the same advantages here.
With that said : i would be more concerned about what it represents when my sensor covered auto-car makes an error like that, it would make me presume there was an error in detection -- a big problem.
A bollard at three feet might look like a grain silo at 400 yards. I could see angles getting to where the camera sees "beige rectangle (wall), red cylinder (bollard)" and it's basically an abstract modern art piece.
I see things on security cameras a lot that in low resolution are nearly impossible for me to decipher.
> showing cumulative robotaxi miles, the fleet has traveled approximately 500,000 miles as of November 2025.
Comparing stats from this many miles to just over 1 trillion miles driven collectively in the US in a similar time period is a bad idea. Any noise in Tesla's data will change the ratio a lot. You can already see it from the monthly numbers varying between 1 and 4.
This is a bad comparison with not enough data. Like my household average for the number of teeth per person is ~25% higher than world average! (Includes one baby)
Edit: feel free to actually respond to the claim rather than downvote
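The noise claim is easy to quantify. A rough sketch using a normal approximation to the 95% Poisson interval for a count of 9 (the exact interval for counts this small is even wider, so this understates the uncertainty if anything):

```python
import math

# With only 9 observed crashes, the count itself is very noisy.
crashes = 9
half_width = 1.96 * math.sqrt(crashes)        # normal approx., ~5.9

low, high = crashes - half_width, crashes + half_width
print(f"95% interval on the count: ~{low:.1f} to ~{high:.1f}")
# i.e. the true per-mile rate could plausibly be ~3x lower or ~1.6x
# higher than the point estimate, before any definitional mismatches.
```

So even taking Tesla's figures at face value, the "3x worse" point estimate carries an uncertainty band wide enough to swallow much of the headline.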
It's always possible to deny the relevance of a comparison based on some quality of the compared data. The autonomous car pilot trials will, by their very nature, be restricted to certain locations, with specific weather patterns, etc., so even once the mileage is 1000x the current figure, there will still be objections.
At which point will the comparison be considered relevant?
I think what you say would have been fair if Elon's and his fanboys' stance were "we need more data" rather than "we will be able to scale self-driving cars very quickly, very soon".
Of course, it could be no other way for a company that unleashed "FSD Beta" onto the streets and allowed all of us to be subjected to their bloody (literally) beta test. You don't get a safer future with a "move fast and break things" mentality. Especially when the CEO is as illiterate as Musk about his own technology, discounting the results of actual experts in the field.
I mean, just look at the trail of headless corpses (there actually are multiple) left by Tesla during this beta test. Weren't we all here to witness a previous version of the thing running straight through a cartoon wall? Of course this thing was always going to end in disappointment - it's sucked its whole existence. It's never been serious; it's always been an 80/20 play, hoping to get away with the con without delivering the rest of the 20% that makes it work.
Tesla's technology is bunk, their entire FSD thesis of "vision only" has been a dismal failure, and it's actually going to tank the entire Tesla car brand. I've been saying this for a while and it looks like it's finally starting to happen: Tesla is going to exit the car business never having delivered FSD in any viable capacity (although they'll claim total success), and Musk will retarget his empire to running the same FSD grift but with robots. Musk learned the bigger the promise, the more runway people give you to make it a reality. Spin a big enough yarn and Musk can live the rest of his life delivering nothing -- not Mars, not FSD, not AI, nada -- and people will still call him a genius.
I am so tired of people defending Tesla. I wrote Tesla off a long time ago, but what gets me are the people defending their tech. We can all go see the products and experience them.
The tech needs to be at least 100x more error-free than humans. It cannot be on par with the human error rate.
Maybe? For years the highest selling EV was the Leaf.
I agree Tesla kind of increased the desirability of EVs at least in the US, but I'm not convinced it wouldn't have happened anyway.
It's a hard question to answer, because you're talking about a counterfactual.
I feel like there's probably some broader type of cognitive bias at play (where we assume something common wouldn't have been common otherwise, because it is common) but I don't know what the term for it might be.
We tend to defend companies that push the frontiers of self-driving cars, because the technology has the potential to save lives and make life easier and cheaper for everyone.
As engineers, we understand that the technology will go from unsafe, to par-with-humans, to safer-than-humans, but in order for it to get to the latter, it requires much validation and training in an intermediate state, with appropriate safeguards.
Tesla's approach has been more risk averse and conservative than others. It has compiled data and trained its models on billions of miles of real world telemetry from its own fleet (all of which are equipped with advanced internet-connected computers). Then it has rolled out the robotaxi tech slowly and cautiously, with human safety drivers, and only in two areas.
I defend Tesla's tech, because I've owned and driven a Tesla (Model S) for many years, and its ten-year-old Autopilot (autosteer and cruise control with lane shift) is actually smoother and more reliable than many of its competitors' current offerings.
I've also watched hours of footage of Tesla's current FSD on YouTube, and seen it evolve into something quite remarkable. I think the end-to-end neural net with human-like sensors is more sensible than other approaches, which use sensors like LIDAR as a crutch for their more rudimentary software.
Unlike many commenters on this platform I have no political issues with Elon, so that doesn't colour my judgement of Tesla as a company, and its technological achievements. I wish others would set aside their partisan tribalism and recognise that Tesla has completely revolutionised the EV market and continues to make significant positive contributions to technology as a whole, all while opening all its patents and opening its Supercharger network to vehicles from competitors. Its ethics are sound.
> but in order for it to get to the latter, it requires much validation and training in an intermediate state, with appropriate safeguards.
I expect self-driving cars to be launched unsupervised on public roads only once they are an order of magnitude safer than human drivers. Or not launched at all.
One can pay thousands of people to babysit these cars with their hands on the wheel for many years until that threshold is reached, and if no one is ready to pay for that effort then we'll just drive ourselves until the end of time.
> I wish others would set aside their partisan tribablism and recognise that Tesla has completely revolutionised the EV market and continues to make significant positive contributions to technology as a whole, all while opening all its patents and opening its Supercharger network to vehicles from competitors.
The problem is, they lost their drive. The competition has caught up - Mercedes Benz has an actually certified Level 4 Autonomous Driving system, on the high-class end pretty much every major manufacturer has something competitive with Tesla, the low budget end has something like the Dacia Spring starting at 12.000€, and the actual long-haul truck (i.e. not the fake "truck" aka Cybertruck) segment has (at least) Volvo, MAN and DAF making full-size trucks.
Where is the actual unique selling point that Tesla has now?
Note: this is in response to https://news.ycombinator.com/item?id=46823760 which is from the same commenter but got killed before there was time to post any links refuting its claims.
> The "salute" in particular is simply a politically-expedient freeze-frame from a Musk speech, where he said "my heart goes out to you all" and happened to raise his arm. I could provide freeze-frame images of Obama and Hilary Clinton doing similar "salutes" and claim this makes them "far right fascists" but I would never insult the reader's intelligence by doing so.
For Obama and Clinton you can find freeze frames showing their arm in a similar position, but when you look at the full video it was in the middle of something that does not match a Nazi salute. Here are several examples: https://x.com/ExposingNV/status/1881647306724049116?t=CGKtg0...
If you had a camera in my kitchen you could find similar freeze frames of me whenever I make a sausage/egg/cheese on an English muffin breakfast sandwich, because the ramekin I use to shape the egg patty is on the top shelf.
Human-piloted planes have altimeters and airspeed indicators, the failure of which has caused many accidents.
Tesla cars have speed sensors as well as GPS. (Altimeter and ILS not being relevant). I agree with Musk's claim they don't need LIDAR because human drivers don't; it's self-evidently true. But I think they _should_ have it because they can then be safer than humans; why settle for our current accident and death rate?
> Tesla's approach has been more risk averse and conservative than others.
You lost me here. Tesla's approach has absolutely not been risk averse or conservative. They've allowed random public "testers" to beta test their self driving stack while even they called it a "beta". They've irresponsibly called the feature "full self driving" when it wasn't able to do any such thing. They've made completely outlandish promises (like FSD driving you from coast to coast in 2016). Finally they've staged marketing videos of FSD "working"[1]. Just deplorable stuff, using the public as their guinea pigs (and piggy bank).
Edit: Forgot another Tesla chonker of a promise. Remember when Elon said a Tesla car would be an appreciating asset because it would make you money by acting as a robotaxi when you're not using it? That was in 2019[2]. Has your Model S appreciated? Are you able to sell it for more today than the purchase price?
So I'm assuming you're fine with regular drivers using basic lane keep systems from other companies, which honestly don't even work well, even in the latest cars. (There's a reason Comma.ai exists.) At least people who are using FSD are enthusiasts and understand the tech. You have some people using lane keep with adaptive cruise control who think the car is "self driving". That's dangerous.
electrek.co recent Tesla headline summary with sentiment:
(negative) Tesla to stop selling Full Self-Driving package, moves to subscription-only
(negative) Elon Musk says Tesla 'almost done' with AI5 design, 6 months after saying it was 'finished'
(negative) Tesla's full 2025 data from Europe is in, and it is a total bloodbath
(neutral) Tesla updates 2026 Model Y with new features, launches tiny third row in the US
(positive) Tesla launches US-made solar panel, a rare sign of life for its solar business
(negative) Elon Musk moves goalpost again: admits Tesla needs 10 billion miles for 'safe unsupervised' FSD
(negative) Are Tesla Gigafactory Berlin's days numbered?
(negative) Elon Musk shows total ignorance of Tesla's current falling sales trajectory
(negative) Tesla rolls out 0% financing to boost declining sales
(negative) Tesla (TSLA) releases Q4 delivery results: confirms decline in sales is accelerating
(negative) Tesla Cybercabs spotted testing, unsurprisingly with steering wheels
(negative) Elon Musk's top 5 Tesla predictions for 2025 that didn't happen
(negative) Tesla (TSLA) does something unusual ahead of Q4 delivery results
(negative) Elon Musk drops 'sustainable' from Tesla's mission
(negative) Tesla's Robotaxi project in Austin is much smaller than Musk claims
(neutral) Tesla Robotaxi spotted without a safety driver in Austin; Musk confirms testing begins
(negative) Tesla US sales drop to under 40,000 units following tax credit expiration
(neutral) Tesla CEO Elon Musk claims driverless Robotaxis coming to Austin in 3 weeks
(positive) Tesla announces 2025 holiday update with a few cool features
(negative) Tesla (TSLA) sales keep crashing in Europe with a single market temporarily saving it
All these self driving and "drivers assistance" features like lane keeping exist to satisfy consumer demand for a way to multitask when driving. Tesla's is particularly cancerous, but all of them should be banned. I don't care how good you think your lane keeping in whatever car you have is, you won't need it if you keep your hands on the wheel, eyes on the road, and don't drive when drowsy. Turn it off and stop trying to delegate your responsibility for what your two ton speeding death machine does!
I think it’s unfair to group all those features into “things for people who want to multitask while driving”.
I’m a decent driver, I never use my phone while driving and actively avoid distractions (sometimes I have to tell everyone in the car to stop talking), and yet features like lane assist and automatic braking have helped me avoid possible collisions simply because I’m human and I’m not perfect. Sometimes a random thought takes my attention away for a moment, or I’m distracted by sudden movement in my peripheral vision, or any number of things. I can drive very safely, but I can not drive perfectly all the time. No one can.
These features make safe drivers even safer. They even make the dangerous drivers (relatively) safer.
There are two layers, both relating to concentration.
Driving a car takes effort. ADAS features (or even just plain regular "driving systems") can reduce the cognitive load, which makes for safer driving. As much as I enjoy driving with a manual transmission, an automatic is less tiring for long journeys. Not having to occupy my mind with gear changes frees me up to pay more attention to my surroundings. Adaptive cruise control further reduces cognitive load.
The danger comes when assistance starts to replace attention. Tesla's "full self-driving" falls into this category, where the car doesn't need continuous inputs but the driver is still de jure in charge of the vehicle. Humans just aren't capable of concentrating on monitoring for an extended period.
Have you ever driven more than 200km at an average of 80km/h with enough turns on the highway? Perhaps after work, just to see your family once a month?
Driver fatigue is real, no matter how much coffee you drink.
Lane-keep is a game changer if the UX is well done. I'm way more rested when I arrive at destination with my Model 3 compared to when I use the regular ICE with bad lane-assist UX.
EDIT: the fact that people that look at their phones will still look at their phones with lane-keep active, only makes it a little safer for them and everyone else, really.
If you're on a road trip, pull the fuck over and sleep. Your schedule isn't worth somebody else's life. If that's your commute, get a new apartment or get a new job. Endangering everybody else with drowsy driving isn't an option you should ever find tenable.
A couple of friends with Teslas have told me it's not perfect and you still have to pay attention, but they do regular long drives, say it mostly works, and use it all the time.
(They also say there's still the handoff issue if a human needs to take control but it's still a big net win.)
We made drunk driving super illegal and that still doesn't stop people. I would rather they didn't in the first place, but since they're going to anyway, I'd really rather they have a computer that does it better than they do. FSD will pull over and stop if the driver has passed out.
If we could ensure that only drunk people use driver assistance features, I'd be all for that. The reality is that 90% of the sober public are now driving like chronic drunks because they think their car has assumed the responsibility of watching the road. Ban it ALL.
What I'm hearing here is anecdotal and largely based on feelings. The facts are that automatic emergency braking (which should not activate under normal driving circumstances as it is highly uncomfortable) and lane-keeping are basic safety features that have objectively improved safety on the roads. Everything you've said is merely conjecture.
A car that calls the cops on you. Great. It could also park, lock the doors and hold you in while the police take their sweet time knowing you're already in a cell?
Elon knows FSD will still take time, and that is the reason he is now ramping up robot production. Who else would he turn to to steer his upcoming fleet of taxis?
The comparison isn't really like-for-like. NHTSA SGO AV reports can include very minor, low-speed contact events that would often never show up as police-reported crashes for human drivers, meaning the Tesla crash count may be drawing from a broader category than the human baseline it's being compared to.
There's also a denominator problem. The mileage figure appears to be cumulative miles "as of November," while the crashes are drawn from a specific July-November window in Austin. It's not clear that those miles line up with the same geography and time period.
The sample size is tiny (nine crashes), uncertainty is huge, and the analysis doesn't distinguish between at-fault and not-at-fault incidents, or between preventable and non-preventable ones.
Also, the comparison to Waymo is stated without harmonizing crash definitions and reporting practices.
All of your arguments are addressed in the article itself, and its conclusions still hold based on the publicly available data.
The 3x figure in the title is based on a comparison of the Tesla reports with estimated average human driver miles without an incident, not based on police report data. The comparison with police-report data would lead to a 9x figure instead, which the article presents but quickly dismisses.
The denominator problem is made up. Tesla Robotaxi has only been launched in one location, Austin, and only since July (well, June 28th, so maybe there's a few days' discrepancy). So the crash data and the miles data can only refer to this same period. Furthermore, if the miles driven actually cover some additional length of time, then the picture gets even worse for Tesla, as the denominator for those 9 incidents gets smaller.
The analysis indeed doesn't distinguish between the types of accidents, but this is irrelevant. The human driver estimates for miles driven without incident also don't distinguish between the types of incidents, so the comparison is still very fair (unless you believe people intentionally tried to get the Tesla cars to crash, which makes little sense).
The comparison to Waymo is also done based on incidents reported by both companies under the same reporting requirements, to the same federal agency. The crash definitions and reporting practices are already harmonized, at least to a good extent, through this.
Overall there is no way to look at this data and draw a conclusion that is significantly different from the article: Tesla is bad at autonomous driving, and has a long way to go until it can be considered safe on public roads. We should also remember that robotaxis are not even autonomous, in fact! Each car has a human safety monitor that is ready to step in and take control of the vehicle at any time to avoid incidents - so the real incident rate, if the safety monitor weren't there, would certainly be even worse than this.
I'd also mention that 5 months of data is not that small a sample size, despite you trying to make it sound so (only 9 crashes).
To add to this, more data from more regions means the estimate of average human miles without an incident is more accurate, simply because it is estimated from a larger sample, so more likely to be representative.
I agree with most of your points and your conclusion, but to be fair OP was asserting that human drivers under-report incidents, which I believe. Super minor bumps where the drivers get out, determine there’s barely a scratch, and go on. Or solo low speed collisions with walls in garage or trees.
I don’t think it invalidates the conclusion, but it seems like one fair point in an otherwise off-target defense.
Sure, but the 3x comparison is not based on reported incidents, it's based on estimates of incidents that occur. I think it's fair to assume such estimates are based on data about repairs and other such market stats, that don't necessarily depend on reporting. We also have no reason a priori to believe the Tesla reports include every single incident either, especially given their history from FSD incident disclosures.
"estimates" (with air quotes)
> The 3x figure in the title is based on a comparison of the Tesla reports with estimated average human driver miles without an incident, not based on police report data. The comparison with police-report data would lead to a 9x figure instead, which the article presents but quickly dismisses.
I think OP's point still stands here. Who are people reporting minor incidents to that would be publicly available that isn't the police? This data had to come from somewhere and police reports is the only thing that makes sense to me.
If I bump my car into a post, I'm not telling any government office about it.
I don't know, since they unfortunately don't cite a source for that number, but I can imagine some sources of data - insurers, vehicle repair and paint shops. Since average miles driven without incident seems plausible to be an important factor for insurance companies to know (even minor incidents will typically incur some repair costs), it seems likely that people have studied this and care about the accuracy of the numbers.
Of course, I fully admit that for all I know it's possible the article entirely made up these numbers, I haven't tried to look for an alternative source or anything.
The article lists the crashes right at the top. One of 9 involved hitting a fixed object. The rest involved collisions with people, cars, animals, or injuries.
So, let's exclude hitting fixed objects as you suggest (though the incident we'd be excluding might have been anything from a totaled car and huge fire to zero damage), and also assume that humans fail to report injury / serious property damage accidents more often than not (as the article assumes).
That gets the crash rate down from an unbiased 9x to a lowball 2.66x higher than human drivers. That's with human monitors supervising the cars.
2.66x is still so poor they should be pulled off the streets IMO.
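For anyone checking the arithmetic, here's one hypothetical path to the ~2.66x figure; the comment doesn't spell out its assumptions, so the "1 in 3 human crashes gets police-reported" input below is a guess chosen to reproduce the number, not a figure from the article:

```python
# Hedged reconstruction of the ~2.66x figure. Assumed inputs:
# 9 robotaxi crashes in ~500,000 miles (from the article), with the one
# fixed-object incident excluded, and a guessed share of human crashes
# that are police-reported.
tesla_crashes = 9 - 1               # exclude the fixed-object incident
tesla_miles = 500_000
human_reported_interval = 500_000   # miles per police-reported human crash
reported_fraction = 1 / 3           # assumed share of human crashes reported

tesla_rate = tesla_crashes / tesla_miles                    # crashes per mile
human_rate = 1 / (human_reported_interval * reported_fraction)
print(round(tesla_rate / human_rate, 2))                    # ≈ 2.67
```

With those inputs the ratio comes out to 8/3 ≈ 2.67, close to the 2.66x quoted above; a different assumed reporting fraction shifts it accordingly.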
> So, let's exclude hitting fixed objects as you suggest (though the incident we'd be excluding might have been anything from a totaled car and huge fire to zero damage)
I don't know what data is available but what I really care about more than anything is incidents where a human could be killed or harmed, followed by animals, then other property and finally, the car itself. So I'm not arguing to exclude hitting fixed objects, I'm arguing that severity of incident is much more important than total incidents.
Even when comparing it to human drivers, if Tesla autopilot gets into 200 fender benders and 0 fatal crashes, I'd prefer that over a human driver getting into 190 fender benders and 10 fatal crashes. Directionally though, I suspect the numbers would probably go the other direction: more major incidents from automated cars because, when they are successful, they usually handle situations perfectly, and when they fail, they just don't see the stopped car ahead and hit it at full speed.
> That gets the crash rate down from an unbiased 9x to a lowball 2.66x higher than human drivers. That's with human monitors supervising the cars.
> 2.66x is still so poor they should be pulled of the streets IMO.
I'm really not here to argue they are safe or anything like that. It just seems clear to me that every assumption in this article is made in the direction that makes Tesla look worse.
I'm using the data listed immediately after the introductory paragraph of the article.
FTA:
>> However, that figure doesn’t include non-police-reported incidents. When adding those, or rather an estimate of those, humans are closer to 200,000 miles between crashes, which is still a lot better than Tesla’s robotaxi in Austin.
Insurers?
I can't be certain about auto insurers, but healthcare insurers just straight up sell the insurance claims data. I would be surprised if auto insurers haven't found that same "innovation."
That's a fair point, but I'll note that the one time I hit an inanimate object with my car I wasn't about to needlessly involve anyone. Fixed the damage to the vehicle myself and got on with life.
So I think it's reasonable to wonder about the accuracy of estimates for humans. We (ie society) could really use a rigorous dataset for this.
Tesla could just share their datasets with researchers and NHTSA and the researchers can do all the variable controls necessary to make it apples to apples.
Tesla doesn't because presumably the data is bad.
TFA does a comparison with an average (estimated) human rate, including low-speed contact events that are not police-reported, of one incident every 200,000 miles. I think that's high - if you're including backing into static objects in car parks and the like, you can look at workshop data and extrapolate that a lower figure might be closer to the mark.
TFA also does a comparison with other self-driving car companies, which you acknowledge, but dismiss: however, we can't harmonize crash definitions and reporting practices as you would like, because Tesla is obfuscating their data.
TFA's main point is that we can't really know what this data means because Tesla keep their data secret, but others like Waymo disclose everything they can, and are more transparent about what happened and why.
TFA is actually saying Tesla should open up their data to allow for better analysis and comparison, because at the moment their current reporting practices make them look crazy bad.
> TFA does a comparison with average (estimated), low-speed contact events that are not police-reported by humans, of one incident every 200,000 miles.
Where does it say that? I see "However, that figure doesn’t include non-police-reported incidents. When adding those, or rather an estimate of those, humans are closer to 200,000 miles between crashes, which is still a lot better than Tesla’s robotaxi in Austin."
All but one of the Tesla crashes obviously involved significant property damage or injuries (the remaining one is ambiguous).
So, based on the text of the article, they're assuming only 2/5ths of property damage / injury accidents are reported to the police. That's lower than I would have guessed (don't people use their car insurance, which requires the police report?), but presumably backed by data.
> TFA's main point is that we can't really know what this data means because Tesla keep their data secret
If that's so, then the article title is very poor.
Because the bad title is the point: the author has made it his life's purpose to troll the Elon sycophants on X. For that reason there's no reason to take him any more seriously than you would take those guys, as he's just their mirror image. I'm enough of an Elon skeptic to suspect the Austin robotaxis don't have a real path to operating autonomously for several reasons, but that doesn't mean I have to listen to Fred Lambert. He's peddling clickbait/ragebait and I don't understand how it's taken as anything more.
Tesla could share real/complete data at any time. The fact that they don't is likely an indicator the data does not look good.
You can do this with every topic. XYZ does not share this, so IT MUST BE BAD.
Yes, that's very often the case with things that would very likely be shared if it looked good.
There are things that don't get shared out of principle. For example there are anonymous votes or behind the scenes negotiations without commitment or security critical data.
But given that Musk tends to parade around vague promises since a very long time, it seems sharing data that looks very good would certainly be something they would do.
And it usually is.
It's a public company making money off of some claims. Not being transparent about the data supporting those claims is already a huge red flag and failure on their part regardless of what the data says.
I've actually started ignoring all these reports. There is so much bad faith going on in self-driving tech on all sides, it is nearly impossible to come up with clean and controlled data, much less objective opinions. At this point the only thing I'd be willing to base an opinion on is if insurers ask for higher (or lower) rates for self-driving. Because then I can be sure they have the data and did the math right to maximise their profits.
The biggest indicator for me that this headline isn't accurate is that Lemonade insurance just reduced the rate for Tesla FSD by 50%. They probably have accurate data and decided that Tesla's are significantly safer than human drivers.
Thank you. Everyone is hiding disengagement and settling to hide accidents. This will not be fixed or standardized without changes to the laws, which for self driving have been largely written by the handful of companies in the space. Total, complete regulatory capture.
I think it's fair to put the burden of proof here on Tesla. They should convince people that their Robotaxis are safe. If they redact the details about all incidents so that you cannot figure out who's at fault, that's on Tesla alone.
While I think Tesla should be transparent, this article doesn't really make sure it is comparing apples to apples either.
I think it's weird to characterize it as legitimate and then say "Go Tesla convince me otherwise," as if the same audience would ever be reached by Tesla, or people would care to do their due diligence.
It’s not weird. They have a history of over promising to the point that one could say they just straight up lie on a regular basis. The bar is higher for them because they have abused the public’s trust and it has to be earned again.
The results have to speak for Tesla very loudly and very clearly. And so far they don’t.
But this is more your feelings than actually factual.
I mean sure, you can say that the timelines did slip a lot, but that doesn't really have anything to do with the rest that is insinuated here.
I would argue a timeline slipping doesn’t mean you go about killing people and lie about it next. I would even go so far as to say that the timelines did slip to exactly avoid that.
That's not "feelings" that's reputational data.
Tesla continues to overpromise, about safety, about timelines that slip due to safety.
We should be a bit more hard-nosed and data-based when dealing with these things, rather than dismissing the core question due to "feelings" and due to Tesla not releasing the sort of data that allows fair analysis.
> But this is more your feelings than actually factual
Seems to be the other way, though I find that kind of rude to assert as opposed to asking me what informs my opinion. Other comments have answered that very well
https://en.wikipedia.org/wiki/Criticism_of_Tesla,_Inc.
https://www.tesladeaths.com/
https://elonmusk.today/
The data on this matter of lies, fraud, and bad faith is robust.
> a timeline slipping
You're generous with your words to the point they sound like apologism. Musk has been promising fully autonomous driving "within 1-3 years" since 2013. And he's been charging customers money for that promise for just as long. Timelines keep slipping for more than half of the company's existence now, that's not a slipup anymore.
Tesla has never been transparent with the data on which they base their claims of safety and performance of the system. They tout some nice looking numbers but when anyone like the NHTSA requests the real data they refuse to provide it.
When NHTSA shows you numbers, they're lying. If I tell you I have evidence Tesla is lying you'll tell me to show it or STFU. When Tesla does the same after so many people died, you go all soft and claim everyone else is lying. That's very one sided behavior, more about feelings than facts.
> But this is more your feelings than actually factual.
The article is about "NHTSA crash data, combined with Tesla’s new disclosure of robotaxi mileage". Sounds factual enough. If Tesla is sitting on a trove of data that proves otherwise but refuse to publish it that's on them. If anyone is about the feels and not the facts here, it's you.
Tesla (Elon Musk really) has a long history of distorting the stats or outright lying about their self driving capabilities and safety. The fact that folks would be skeptical of any evidence Tesla provided in this case is a self-inflicted problem and well-deserved.
He did promise his electric trucks would be more cost-effective than trains (still nothing in 2026...). And the "world's fastest supercar". And full self-driving by "next year" in 2015. None of these are offered in 2026.
There have never been truthful statements from his companies, only hype & fluff for monetary gains.
There used to be [EDIT: still is] a website[1] that listed all of Musk's promises and predictions about his businesses and showed you how long it's been since he said the promise would materialize. It's full of mostly old statements, probably because it's impossible to keep up with the amount of content being generated monthly.
1: https://elonmusk.today
The burden of proof is on the article writer.
This has nothing to do with burden of proof, it has to do with journalistic accuracy, and this is obviously a hit piece. HN prides itself on being skeptical and then eats up "skeptic slop."
>I think it's fair to put the burden of proof here on Tesla.
That just sounds like a cope. The OP's claim is that the article rests on shaky evidence, and you haven't really refuted that. Instead, you just retreated from the bailey of "Tesla's Robotaxi data confirms crash rate 3x worse ..." to the motte of "the burden of proof here on Tesla".
https://en.wikipedia.org/wiki/Motte-and-bailey_fallacy
More broadly I think the internet is going to be a better place if comments/articles with bad reasoning are rebuked from both sides, rather than getting a pass from one side because it's directionally correct, eg. "the evidence WMDs in Iraq is flimsy but that doesn't matter because Hussein was still a bad dictator".
The point is this: the article writer did what research they could do given the available public data. It's true that their title would be much more accurate if it said something like "Tesla's Robotaxi data suggests crash rate may be up to 3x worse than human drivers". It's then 100% up to Tesla to come up with cleaner data to help dispel this.
But so far, if all the data we have points in this direction, even if the certainty is low, it's fair to point this out.
It's not a Motte and Bailey fallacy at all; it's a statement of a belief about what should be expected if something is to be allowed as a matter of public health and safety implications.
They're saying that Tesla should be held to a very high standard of transparency if they are to be trusted. I can't speak to OP, but I'd argue this should apply to any company with aspirations toward autonomous driving vehicles.
The title might be misleading if you don't read the article, but the article itself at some level is about how Tesla is not being as transparent as other companies. The "shaky evidence" is due to Tesla's own lack of transparency, which is the point of stating that the burden of proof should be on Tesla. The article is about how, even with lack of transparency, the data doesn't look good, raising the question of what else they might not be disclosing.
From the article: "Perhaps more troubling than the crash rate is Tesla’s complete lack of transparency about what happened... If Tesla wants to be taken seriously as a robotaxi operator, it needs to do two things: dramatically improve its safety record, and start being honest about what’s happening..."
I'd argue the central thesis of the article isn't one of statistical estimation; it's a statement about evidentiary burden.
You don't have to agree with the position that Tesla should be held a high transparency standard. But the article is taking the position that you should, and that if you do agree with that position, that you might say that even by Tesla's unacceptable standards they are failing. They're essentially (if implicitly) challenging Tesla to provide more data to refute the conclusions, saying "prove us wrong", knowing that if they do, then at least Tesla would be improving transparency.
I don’t think it’s a motte-and-bailey fallacy because the motte is not well established. Tesla clearly does not believe that the burden of proof is on them, and by extension neither do regulators or legislators.
So, there are two theories:
a) Teslas are unsafe. The sparse data they're legally obligated to provide shows this clearly.
b) Elon Musk is sitting on a treasure trove of safety data showing that FSD finally works safely + with superhuman crash avoidance, but is deciding not to share it.
You're honestly going with (b)? We're talking about the braggart that purchased Twitter so he could post there with impunity. To put it politely, it would be out of character for him to underpromise + overdeliver.
You're not replying to the author of the article.
electrek.co has a beef with Tesla, at least in the recent years.
Absolutely.
Let's examine the Elektrek editor's feed, to understand how "impartial" he is about Tesla:
https://x.com/FredLambert
Yup.
Btw, do you happen to know, why electrek.co changed their tune in such a way? I was commenting on a similarly negative story by the same site, and said that they are always anti-Tesla. But then somebody pointed out that this wasn't always the case, that they were actually supportive, but then suddenly turned.
Fred Lambert was an early Tesla evangelist - he constantly wrote stories praising Tesla and Elon for years. He had some interactions with Elon on Twitter, got invited to Tesla events, referred enough people to earn free Tesla cars, etc.
People roasted him for being a Tesla/Elon fanboy: https://www.thedrive.com/tech/21838/the-truth-behind-electre...
Fred gradually started asking tougher questions when Tesla's schedule slipped on projects and Elon ended up feuding with Fred (and I think blocking him) on Twitter: https://www.reddit.com/r/teslamotors/comments/bgmwk8/twitter...
Since then Fred has had a more realistic (IMHO) outlook on Tesla, although some might call it a "beef" since he's no longer an Elon sycophant.
I think you're being a bit unfair to Lambert.
If we assume the best (per HN guidelines): Up to about 2018 Tesla was the market-leading EV company, and the whole thesis of Electrek is that EVs are the future. So, of course they covered Tesla frequently and in a generally positive light.
Since then, the facts have changed. Elon's become increasingly erratic, and has been making increasingly unhinged claims about Tesla's current and future products. At the same time, Tesla's offerings are far behind domestic standards, which are even further behind international competition. Also, many people have died due to obvious Tesla design flaws (like the door handles, and false advertising around FSD).
Journalistic integrity explains the difference in coverage over the years. Coverage from any fact-based outlet would have a similar shift in sentiment.
Good analysis. Just over a month ago, Electrek was posted here claiming that Teslas with human supervisors at the wheel were crashing 10x more than humans alone.
That was based on a sample size of 9 crashes. In the month following that, they've added one more crash while also increasing the miles driven per month.
The headline could just as easily be about the dramatic decline in their crash rate! Or perhaps the data is just too small to analyze like this, and the Electrek authors are being their usual overly dramatic selves.
I don't understand your claim.
Previous article: Tesla with human supervisor at wheel: 10x worse than human alone.
Current article: Tesla with remote supervisor: 3-9x worse than human alone.
Given the small sample sizes, this shows a clear trend: Tesla's autopilot stuff (or perhaps vehicle design) is causing a ton of accidents, regardless of whether it's being operated locally by customers or remotely by professionals.
I'd like to see similar studies broken down by vehicle manufacturer.
The ADAS in one of our cars is great, but occasionally beeps when it shouldn't.
The ADAS in our other car cannot be disabled and false positives every 10-20 miles. Every week or so it forces the vehicle out of lane (either left of double yellow line center, or into another car's lane).
If the data on crash rates for those two models were public, I guarantee the latter car would have been recalled by now.
That is an overly optimistic way to phrase an apparent decrease in crashes, when Tesla is not being upfront about data that at best looks like it's worse than human crash rates.
Unless one was a Tesla insider, or had a huge interest in Tesla over other people on the road, such spin would not be a normal thing to propose saying.
Media outlets, even ones devoted to EVs, should not adopt the very biased framing you propose.
I don’t think statistics work that way. A study of all Teslas and all humans in Austin for 5 months is valid because Electrek ran a ridiculous “study”, and this headline could “just as easily” have presented the flawed Electrek story as a legit baseline?
The 10x would be 9x if the methodology were the same. 9x->3x is going from reported accidents to inferred true accident rate, as the article points out.
Oh. Well then. May we see the details of these minor contact events so that people don’t have to come here and lie for them anymore?
How corrupt and unaccountable to the public is the city of Austin Texas, even, for allowing them to turn in incident reports like this?
I find it interesting that Lemonade insurance just began offering a 50% discount for Teslas with FSD.
Insurance companies are known for analytics and don't survive if they use bad data. This points to your comment being correct.
That's a completely different scenario than fully autonomous driving.
"insurance-reported" or "damage/repair-needed" would be a better criteria for problematic events than "police-reported".
> The comparison isn't really like-for-like.
This is a statement of fact but based on this assumption:
> low-speed contact events that would often never show up as police-reported crashes for human drivers
Assumptions work just as well both ways. Musk and Tesla have been consistently opaque when it comes to the real numbers they base their advertising on. Given this past history of total lack of transparency and outright lies it's safe to assume that any data provided by Tesla that can't be independently verified by multiple sources is heavily skewed in Tesla's favor. Whatever safety numbers Tesla puts out you can bet your hat they're worse in reality.
oh hacker news, never change. "crashes 3x as much as human driven cars" but is that REALLY bad? who knows? pure gold
Humans driving cars crash more than humans walking on side walks. But is humans driving cars really bad?
To be honest I think the true story here is:
> the fleet has traveled approximately 500,000 miles
Let's say they average 10mph, and say they operate 10 hours a day, that's 5,000 car-days of travel, or to put it another way about 30 cars over 6 months.
That's tiny! That's a robotaxi company that is literally smaller than a lot of taxi companies.
One crash in this context is going to just completely blow out their statistics. So it's kind of dumb to even talk about the statistics today. The real take away is that the Robotaxis don't really exist, they're in an experimental phase and we're not going to get real statistics until they're doing 1,000x that mileage, and that won't happen until they've built something that actually works and that may never happen.
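The back-of-envelope above can be sketched in a few lines (all inputs are this comment's assumptions - the 10mph average and 10-hour operating day - not Tesla figures):

```python
# Back-of-envelope fleet size from reported mileage.
total_miles = 500_000     # reported fleet mileage (from the article)
avg_speed_mph = 10        # assumed urban average speed
hours_per_day = 10        # assumed daily operating hours
days = 180                # roughly 6 months

car_days = total_miles / (avg_speed_mph * hours_per_day)   # 5,000 car-days
fleet_size = car_days / days                               # ~28 cars
print(round(car_days), round(fleet_size))                  # 5000 28
```

Roughly 28 cars, consistent with the "about 30 cars" figure above and with the fleet-tracker counts mentioned elsewhere in this thread.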
The more I think about your comment on statistics, the more I change my mind.
At first, I think you’re right - these are (thankfully) rare events. And because of this, the accident rate is Poisson distributed. At this low of a rate, it’s really hard to know what the true average is, so we do really need more time/miles to know how good/bad the Teslas are performing. I also suspect they are getting safer over time, but again… more data required. But, we do have the statistical models to work with these rare events.
But then I think about your comment about it only being 30 cars operating over 6 months. Which, makes sense, except for the fact that it’s not like having a fleet of individual drivers. These robotaxis should all be running the same software, so it’s statistically more like one person driving 500,000 miles. This is a lot of miles! I’ve been driving for over 30 years and I don’t think I’ve driven that many miles. This should be enough data for a comparison.
If we are comparing the Tesla accident rate to people in a consistent manner (accident classification), it’s a valid comparison. So, I think the way this works out is: given an accident rate of 1/500000, we could expect a human to have 9 accidents over the same miles with a probability of ~ 1 x 10^-6. (Never do live math on the internet, but I think this is about right).
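The Poisson check above can be reproduced directly (a sketch; the human rate of 1 expected crash per 500,000 miles is this comment's assumption):

```python
import math

# If human crashes over 500,000 miles follow a Poisson distribution with
# lambda = 1 (one expected crash per 500,000 miles, as assumed above),
# how likely is observing 9 or more crashes in that distance?
lam = 1.0  # assumed expected human crashes per 500,000 miles

def poisson_sf(k, lam):
    """P(X >= k) for X ~ Poisson(lam), via the complement of the CDF."""
    return 1.0 - sum(math.exp(-lam) * lam**i / math.factorial(i)
                     for i in range(k))

p = poisson_sf(9, lam)
print(f"{p:.2e}")  # ~1.13e-06, matching the comment's ~1e-6 estimate
```

So under that assumed human rate, nine crashes in 500,000 miles would indeed be a roughly one-in-a-million outcome for a human driver.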
Hopefully they will get better.
500,000 / 30 years is ~16,667mi/yr. While it's a bit above the US average, it's not incredibly so. Tons of normal commuters will have driven more than that many miles in 30 years.
That’s not quite the point. I’m a bit of an outlier, I don’t drive much daily, but make long trips fairly often. The point with focusing on 500,000 miles is that that should be enough of an observation period to be able to make some comparisons. The parent comment was making it seem like that was too low. Putting it into context of how much I’ve driven makes me think that 500,000 miles is enough to make a valid comparison.
But that's the thing: in many ways it is a pretty low number. It's less than the number of miles a single average US commuter will have driven in their working years. So in some ways it's like trying to draw lifetime crash statistics while only looking at a single person in your study.
It's also kind of telling that despite supposedly having this tech ready to go for years, they've only bothered rolling out a few cars, which are still supervised. If this tech were really ready for prime time, wouldn't they have driven more than 500,000mi in six months? If they were really confident in the safety of their systems, wouldn't they have expanded greatly?
I mean, FFS, they don't even trust their own cars to be unsupervised in the Las Vegas Loop. An enclosed, well-lit, single-lane, private access loop and they can't even automate that reliably enough.
Waymo is already doing over 250,000 weekly trips.[0] The trips average ~4mi each. With those numbers, Waymo is doing 1 million miles a week. Every week, Waymo is doing twice as many miles unsupervised than Tesla's robotaxi has done supervised in six months.
[0] https://waymo.com/sustainability/
Wait, so your argument is there's only 9 crashes so we should wait until there's possibly 9,000 crashes to make an assessment? That's crazy dangerous.
At least 3 of them sound dangerous already, and it's on Tesla to convince us they're safe. It could be a statistical anomaly so far, but hovering at 9x the alternative doesn't provide confidence.
No, my argument is you shouldn't draw a statistical conclusion with this data. That's all. I'm kind of pushing in the direction you were pointing in the second part - it's not enough data to make statistical inferences. We should examine each incident, identify the root cause and come to a conclusion as to whether that means the system is not fit for purpose. I just don't think the statistics are useful.
> The real take away is that the Robotaxis don't really exist
More accurately, the real takeaway is that Tesla's robo-taxis don't really exist.
Because it is fraud meant to inflate Tesla's stock price.
The real term is “marketing puffery.” It’s a fun, legally specific way to describe a company bullshitting to hype its product.
The Robotaxi service might be puffery, selling "full self driving" is just fraud.
What's even more unbelievable is that a significant number of people are still falling for it
We've known for a long time now that their "robotaxi" fleet in Austin is about 30-50 vehicles. It started off much lower and has grown to about 50 today. There's actually a community project to track individual vehicles that has more exact figures.
Currently it's at 58 unique vehicles (based on license plates), with about 22 that haven't been seen in over a month.
https://robotaxitracker.com/
But deep learning is also about statistics.
So if the crash statistics are insufficient, then we cannot trust the deep learning.
I suspect Tesla claims they do the deep learning on sensor data from their entire fleet of cars sold, not just the robotaxis.
No, they exist, but they are called Waymo
>One crash in this context is going to just completely blow out their statistics.
One crash in 500,000 miles would merely put them on par with a human driver.
One crash every 50,000 miles would be more like having my sister behind the wheel.
I’ll be sure to tell the next insurer that she’s not a bad driver - she’s just one person operating an itty bitty fleet consisting of one vehicle!
If the cybertaxi were a human driver accruing double points 7 months into its probationary license, it would never have made it to 9 accidents: it would have been revoked and suspended after the first two or three accidents in her state, and then thrown in JAIL as a "scofflaw" if it continued driving.
> One crash in 500,000 miles would merely put them on par with a human driver.
> One crash every 50,000 miles would be more like having my sister behind the wheel.
I'm not sure if that leads to the conclusion that you want it to.
From the tone, it seems that the poster's sister is a particularly bad driver (or at least they believe her to be). While having an autonomous car that can drive as well as even a bad human driver is definitely a major accomplishment technologically, we all know that threshold was passed a long time ago. However, if Tesla's robotaxis (with human monitors on board, let's not forget - these are not fully autonomous cars like Waymo's!) are at best as good as some of the worse human drivers, then they have no business being allowed on public roads. Remember that human drivers can also lose their license if [caught] driving too poorly.
It does. She just ran over a bus shelter, like she was vibe driving a Tesla on autopilot or something.
Elon promised self-driving cars in 12 months back in 2017? He's also promising Optimus robots doing surgery on humans in 3 years? Extrapolating… Optimus is going to kill some humans and it will all be worth it!
Elon is aware that Tesla's insane market valuation would crash 10x if it stays a car company.
There isn't enough money and, most importantly, not enough margin in the car industry to warrant such a valuation, so he has to pivot away from cars into the next thing.
Just to give an example of how risky it is for Tesla to be a car company:
In 2025 Toyota has had: 3.5 times Tesla's revenue, 8 times the net income and twice the margin.
And Toyota has a market cap that is 6 times lower than Tesla.
It would take Tesla a gargantuan effort to match Toyota's numbers and margins, and even if it did, it would be a disaster for Tesla's stock.
Hell, Tesla makes much less money than Mercedes-Benz, and with a smaller margin.
Mercedes has 60% more revenue and twice the net income. Yet, Tesla is valued around 40 times Mercedes-Benz.
Tesla *must* pivot away from cars and make it a side business or sooner or later that stuff is crashing, and it will crash fast and hard.
Musk understands that, which is why he's focusing on robotaxis and robots. It's the only way to sell Tesla to naive investors.
And then they will pivot away from humanoid robots. To justify the valuation, they have already pivoted from an electric-car company to a self-driving-taxi company without delivering self-driving taxis; they are now pivoting to robots, and before delivering the robots they will pivot to the next shiny thing. Maybe Pivot is the real Tesla product that justifies the crazy valuation.
Before people believe the Tesla robot hype they should probably consider that Toyota has been making cars AND robots for longer than Tesla exists.
And they started making electric cars in the mid-90s. They almost sold 400 of them, so that experience has definitely paid off.
That is why all the crazy promises and moves: hyping X.ai, Robotaxis, Optimus, data centers in space. If he is constantly promising the future and radical moves, the optimistic investors believe him and he can keep increasing the "potential future valuation".
But when you look at it:
- X.ai is basically getting into the race by throwing money at the problem and using your name to get funding in a hyped industry.
- Do a buyout of your own company with it, get access to data that you restricted to everyone else.
- Merge it with SpaceX for "datacenters in space", do an IPO for a huge valuation
- Probably merge it with Tesla, overhype everything
- As the humanoid, AI and space industry grows, so will the valuation just because of the market growth, not necessarily because of great/revolutionary products
At that point, nobody can even work out what the valuation is, as it is a mishmash of promises, fudged numbers, real numbers, potential numbers, contracts, hype and everything else. It allows moving financials around and tuning things to get him his 1T package and hype things even more.
I mean congrats to Elon, just by overhyping his products he shifts the timeline narrative more towards techno-optimism and earns himself more money. The financial shenanigans to follow in the next few years will be an interesting period for future financial archeologists.
> - X.ai is basically getting into the race by throwing money at the problem and using your name to get funding in a hyped industry.
Well, everyone is doing that. That's what AI research is.
I really hope I live to see Tesla stock crash to a reasonable valuation.
I dislike Tesla/Elon but would prefer the reality where they innovate until their worth matches their current price. I suspect yours is more likely to happen.
> Elon is aware that Tesla insane market valuation would crash 10x if it stays a car company.
I see nothing wrong here, correction back to reality.
I understand why people adored him blindly in the early days, but liking him now, after it's clear what sort of person he is and always will be, is the same as liking Trump. Many people still do it, but it's hardly a defensible position unless one is already invested in his empire.
It’d be best for everyone outside of the company but he and the board would be buried in lawsuits for the rest of their lives. They have a strong personal interest in avoiding that even if it’s well-deserved based on sober data analysis, so they’re pushing the Hail Mary play trying to jump into a bigger new market which they haven’t already ceded to the competition.
We need to bring back the concept of seppuku for situations like that. "We have to lie more because it would be too painful to admit our lies" should be a moment that makes any leader question where they have been, where they are, where they're going, and all of their motivations and reasoning. It should be the sort of "what have I done?" moment that sends a person to a monastery, an asylum, or the grave.
I saw a pretty convincing argument that Musk fried his brain with ketamine, written by a former ketamine abuser who saw a lot of familiar behavior. I don't think Musk is the same guy now that he was in the early days.
Sounds like a reach. Many people just loved the stuff he was doing and didn't really know much about him as a person. When a more complete picture (specifically his politics) emerged, and people decided they really didn't like the person, they had to resolve their cognitive dissonance by finding reasons they were right then and right now, instead of admitting they projected the person they imagined onto the person they really didn't know. Tech enthusiasts just never imagined that a guy doing so many cool things could turn out to be a right-winger.
They also reported tiny profits that are just slightly above what they receive in subsidies. Their P/E compared to other similar companies is also through the roof.
The best part of all of this is that, given their history and the state of robotaxis as a whole, they will fail, and Tesla will crash. And it'll be a great day. The hype and obscene overvaluation of them is utterly moronic.
Look how much longer Waymo has been at this, and how much more experience they have, and they still have multiple issues a week popping up online, and that's with running them in a very small, well-mapped and planned-out area. Musk wants robotaxis globally; that's just not happening, not any time soon, and certainly not by the 10-year limit for him to get his trillion-dollar bonus from Tesla, which is the only reason he's pushing so hard to make it happen.
https://en.wikipedia.org/wiki/List_of_predictions_for_autono...
Optimus could do surgery on humans right now if it wasn't for regulation that prohibits killing people off with robots.
Your filthy Earth laws don't apply on Mars.
This comes after a recent iSeeCars study that found that Tesla as a brand had the highest fatal crash rate in the US (with Kia being a very close second)
https://www.iseecars.com/most-dangerous-cars-study#:~:text=T...
Most of us are very well aware of Tesla's shortcomings with FSD and inflated valuations.
But electrek's reporting is biased and in bad faith when it comes to Tesla/Musk.
A recent iSeeCars study that found that Tesla as a brand had the highest fatal crash rate in the US
https://www.iseecars.com/most-dangerous-cars-study#:~:text=T...
Yes, on one side Tesla is not transparent, but on the other side the author of the article is a hypocrite, given that they went with the click-bait title "Tesla's own Robotaxi data confirms crash rate 3x worse than humans even with monitor". Tesla's secrecy is likely meant to keep journalists from taking any chance they can to sell more news by writing an autonomous-vehicle horror story. Given the secrecy, we don't know what happened, yet the journalist chose to go with the worst-scenario title.
While the title is slightly biased, it's completely fair to analyze all of the public data a company provides about a very public problem (how safe their autonomous cars are), and show what the risks are. If Tesla wants us to believe their robotaxis are safe (which they implicitly do by putting these on public roads), it's entirely on them to publish data that supports that claim. If the data they themselves publish suggests that they are much worse than human drivers, then I want journalists to report on that.
It's also extremely implausible that Tesla has data that their cars are very safe, but choose to instead publish vague data that makes them seem much worse. It's for example much more likely that these 9 incidents reported are just the bad incidents that they think they won't be able to hide, rather than assuming these are all or mostly minor incidents like lightly bumping into a static object.
Secrecy clearly doesn't avoid that kind of story though. The question is if their numbers were really good, or at least as good as Waymo, why wouldn't they share them for the positive press? Waymo doesn't get as many negative pieces like this.
It's a pretty logical conclusion to say that numbers they won't share must make them look bad in some way.
> Waymo doesn't get as many negative pieces like this.
Elon Musk is a divisive figure.
Humans average one police reported accident per 500,000 miles?!
TIL I'm incredibly unlucky.
The human accident count per mile is brought down by a lot of highway miles. The Robotaxi is, at present, geofenced. It's not going to be getting a lot of highway miles. Most crashes happen on city streets.
Tesla has completely fumbled a spectacular lead in EVs and managed to snatch defeat from the jaws of victory. And instead of turning it around, we're supposed to believe they are going to completely pivot and then take over a market with far more developed competitors (e.g. Boston Dynamics).
That Elon is riding this wave amidst the transparency of the whole thing is the funniest part. It's like watching people lose money at the "three cup" game but the cups are clear.
Do you remember EVs before Tesla? They were glorified golf carts using lead-acid batteries. The performance and range were awful. The Roadster and the Model S changed all that. I'm not saying I remember this perfectly but as I recall, Tesla's original objective was to show that an EV could be a real car, look attractive (or at least normal), and to create demand for EVs that would force all manufacturers to start making them. The ultimate value in Tesla was supposed to be batteries, which all cars would eventually need.
I'm not sure how defensible the lead was. The only reason BYD isn't the only game in town is tariffs. The pivot to Optimus is ridiculous though. They can't get a car to drive truly autonomously after more than a decade and they want to expand the degrees of freedom?
Tesla had a good brand image in the early 2010s; they could have positioned themselves as the quality/luxury brand for EVs and had people buy Tesla for the brand itself, like people do for Apple.
Instead they let Elon make their brand so toxic that people are actively avoiding it.
I'm skeptical. If someone really wants a Tesla, my guess is that they'll rationalize Musk's actions or least compartmentalize them.
That was 10-15 years ago, but back then Musk appeared different, and Tesla was new. Today you can buy a Tesla, but they are no longer the status symbol they once were. A 15-year-old Mercedes is a status symbol in the US; a 15-year-old Tesla is not. Tesla didn't capture the status-symbol market (which might have been a good decision; what wasn't a good decision was for the CEO to go public with political views that a lot of his potential base doesn't support).
A 15 year old Mercedes is a falling apart PoS maintenance nightmare driven by someone with a lack of common sense.
No, it's the reverse. Someone who finds Musk's behavior so abhorrent they fear being affiliated with it will actually find reasons they don't really want a Tesla.
It doesn't help that Tesla, making extremely low quality and uncomfortable cars for the price point, provides plenty of dislikable things to find.
I think Facebook is even more universally thought of as a bad company, and everyone still engages there, too.
Facebook is a monopoly of a sort, and so is hard to get away from. If I don't like Tesla, there are many other options. Even if you only buy EVs, there are a lot of options you can buy today. The only people who have to buy a Tesla are the type who are buying 10-year-old EVs (the limited range of a 10-year-old Nissan rules them out).
Well, lots of people don't use Facebook. But you're right that there aren't any real like for like replacements.
Of course, Twitter was a quasi-monopoly as well. That said, Bluesky emerged but only as an alternative with much less critical mass.
Difference is their product is so good as to be basically irreplaceable (good = strong network effects, which is the only flavor of "good" that matters)
There are network effects to social networks that do not apply to choice of vehicle
Some people. Others are actively embracing it.
Most people don't know about the political aspect.
I'm glad Tesla is pivoting to a product that can drop your bag of groceries in the worst case, instead of one that can slam you into a concrete divider at 75mph.
In general, any robot with servos powerful enough to be of any use is surprisingly dangerous to be around. While it's easy enough to apply various limiters, the raw power in those motors will always pose a significant level of risk if anything goes wrong. If you're hovering above a human who sits up suddenly, you might get your nose broken. If it's a robot instead, it will have the strength and mass to easily mutilate you in the same kind of accident.
The robot could leave the iron standing on your clothes and walk away; it could leave your empty pan on the stove at max heat; it could take a nice hard grip of your throat for a few minutes.
…and the guy shuffling the cups is Dave Chappelle's crackhead character.
My theory has always been that Trump's grand con was acting like what poor people think a rich person is like; Elon acts like what morons think a genius is like.
Wow, that's interesting to know, and quite concerning.
This is not good, but the point is that this can be improved much more easily than the human accident rate can. Both are very difficult problems, but one is certainly harder.
That's not really true. There are huge discrepancies in human-driver accident data across different countries, which shows that there are clear practices one could deploy to significantly reduce driving incidents; people just choose not to implement them.
By the law of large numbers, it's not a significant distance.
As far as I understand, those Robotaxis are only available within Austin so far. That is slow city traffic; the number of miles per ride is very small. However, the numbers for human drivers seem to take all kinds of roads into account. Of course, highways are the roads where you drive most of the distance at the least risk of an accident. Has this been taken into account in the evaluation?
It would be ironic for people to claim that the Tesla numbers for Autopilot are too optimistic because it is used on highways only, while at the same time not noticing that city-only numbers for FSD would be pessimistic, statistics-wise.
It does look extremely pessimistic. For example, one of the 'incidents' is that they hit a curb in a parking lot at 6 mph.
No human driver would report this kind of incident. A human driver would probably forget it after the next traffic light.
While it's clearly Tesla's fault (if you hit any static object it's your fault), when you take this kind of 'incident' into account of course it'd look worse than humans.
The human-data estimate they compare against to get the 3x number also includes this type of incident: even if no one reports it, you can get some idea of the number of such incidents from service and paint-shop data.
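For what it's worth, the arithmetic behind the 3x vs 9x split is straightforward. The baseline rates below are the article's estimates as I understand them -- 1 police-reported crash per 500,000 miles, and roughly 3 incidents per 500,000 miles once estimated unreported minor contact is included -- so treat them as assumptions, not verified figures:

```python
tesla_miles = 500_000
tesla_crashes = 9
tesla_rate = tesla_crashes / tesla_miles     # roughly 1 crash per ~55,600 miles

# Assumed human baselines (per the article's estimates, not independently verified):
human_police_reported = 1 / 500_000          # police-reported crashes only
human_all_incidents = 3 / 500_000            # including estimated unreported minor contact

ratio_vs_reported = tesla_rate / human_police_reported
ratio_vs_all = tesla_rate / human_all_incidents
print(f"{ratio_vs_reported:.0f}x vs police-reported, {ratio_vs_all:.0f}x vs all incidents")
```

So the headline 3x is already the generous comparison; against police reports alone it's 9x.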
More importantly: it seems like Austin is mostly a typical US city grid of wide streets. Nothing comparable to an average old inner city, or to narrow countryside roads with a ditch, cliff, or quay on one or both sides. Probably not many pedestrians and cyclists roaming the streets either?
Hard to believe that in 2017, I was utterly convinced that self-driving cars would be the majority of all cars on the road within 5 years.
- There was a lot of hype
- A lot of people fervently hoped they wouldn't need to drive for much longer and their kids wouldn't even need to learn
- So much progress had been made with deep learning in a fairly short length of time that surely we were on the cusp of broadly deployed autonomous vehicles.
I really wish we'd ban electrek articles on HN, they're constantly false, misleading, or just grinding an axe.
TBH, the comments here amaze me. The claim is that a human being paid to monitor a driver assistance feature is 3x more likely to crash than a human alone.
That needs extraordinary evidence. Instead the evidence is misleading guesses.
That... is not really an extraordinary claim. That has been many people's null hypothesis since before this technology was even deployed, and the rationale for it is sufficiently borne out to play a role in vigilance systems across nearly every other industry that relies on automation.
A safety system with blurry performance boundaries is called "a massive risk." That's why responsible system designers first define their ODD, then design their system to perform to a pre-specified standard within that ODD.
Tesla's technology is "works mostly pretty well in many but not all scenarios and we can't tell you which is which"
It is not an extraordinary claim at all that such a system could yield worse outcomes than a human with no assistance.
As long as there are still safety drivers, the data doesn't really tell you if the AI is any good. Unless you had reliable data about the number of interventions by the driver, which I assume Tesla doesn't provide.
Still damning that the data is so bad even then. Good data wouldn't tell us anything, the bad data likely means the AI is bad unless they were spectacularly unlucky. But since Tesla redacts all information, I'm not inclined to give them any benefit of the doubt here.
> As long as there are still safety drivers, the data doesn't really tell you if the AI is any good. Unless you had reliable data about the number of interventions by the driver, which I assume Tesla doesn't provide.
Sorry that does not compute.
It tells you exactly whether the AI is any good: despite the fact that there were safety drivers on board, 9 crashes happened, which implies that more crashes would have happened without safety drivers. Over 500,000 miles, that's pretty bad.
Unless you are willing to argue, in bad faith, that the crashes happened because of safety-driver intervention.
The problem is we don't know how many incidents would have happened if there was no safety driver. How many times did the driver have to intervene to prevent an accident? IMO, that should count towards the number of AI-driven accidents
I'm a bit hesitant to draw strong conclusions here because there is so little data. I would personally assume that it means the AI isn't ready at all, but without knowing any details at all about the crashes this is hard to state for sure.
But if the number of crashes had been lower than for human drivers, this would tell us nothing at all.
The "safety drivers" do nothing. They sit in the passenger seat and the only thing they have is a button that presumably stops the car and lets a remote operator take over.
> As long as there are still safety drivers, the data doesn't really tell you if the AI is any good.
I think we're on to something. You imply that good here means the AI can do its thing without human interference. But that's not how we view, say, LLMs being good at coding.
In the first context we hope for AI to improve safety whereas in the second we merely hope to improve productivity.
In both cases, a human is in the loop which results in second order complexity: the human adjusts behaviour to AI reality, which redefines what "good AI" means in an endless loop.
As much as I'd love to pile in on Tesla, it's unclear to me the severity of the incidents (I know they are listed) and if human drivers would report such things.
"Rear collision while backing" could mean they tapped a bollard. Doesn't sound like a crash. A human driver might never even report this. What does "Incident at 18 mph" even mean?
By my own subjective count, only three descriptions sound unambiguously bad, and only one mentions a "minor injury".
I'm not saying it's great, and I can imagine Tesla being selective in publishing, but based on this I wouldn't say it seems dire.
For example, roundabouts in cities (in Europe anyway) tend to increase the number of crashes, but they are overall of lower severity, leading to an overall improvement of safety. Judging by TFA alone I can't tell this isn't the case here. I can imagine a robotaxi having a different distribution of frequency and severity of accidents than a human driver.
He compared against the estimated statistics for non-reported accidents (typically your example: accidents that involve only one vehicle and only result in scratched paint) to get the 3x figure. Otherwise the title would have been 9x (which is in line with the 10x a data-analyst blogger arrived at ~3 months ago).
> roundabouts in cities (in Europe anyway) tend to increase the number of crashes
Not in France, according to the data. It depends on the speed limit, but overall they decrease accidents by 34%, and by almost 20% when the speed limit is 30 or 50 km/h.
They reduce accidents in general, but bring us some “entertaining” new ones where a (usually) drunk driver crashes into the statue/fountain/whatever in the middle or uses the little “hill” in the middle as a jump ramp…
>they tapped a bollard
If a human had eyes on every angle of their car and they still did that, it would represent a lapse in focus or control -- humans don't have the same advantages here.
With that said: I would be more concerned about what an error like that represents when my sensor-covered autonomous car makes it -- it would make me presume there was an error in detection, which is a big problem.
I wonder if slow speeds affect the detection?
A bollard at three feet might look like a grain silo at 400 yards. I could see angles getting to where the camera sees "beige rectangle (wall), red cylinder (bollard)" and it's basically an abstract modern art piece.
I see things on security cameras a lot that in low resolution are nearly impossible for me to decipher.
> showing cumulative robotaxi miles, the fleet has traveled approximately 500,000 miles as of November 2025.
Comparing stats from this many miles to the just over 1 trillion miles driven collectively in the US in a similar time period is a bad idea. Any noise in Tesla's data will change the ratio a lot. You can already see it in the monthly numbers varying between 1 and 4.
This is a bad comparison with not enough data. Like my household average for the number of teeth per person is ~25% higher than world average! (Includes one baby)
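To put a number on how noisy 9 events is, here's a sketch of an exact two-sided 95% confidence interval for a Poisson count of 9, done with plain bisection on the Poisson CDF (a standard Garwood-style interval; no claim this is the article's method):

```python
from math import exp, factorial

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam)."""
    return sum(exp(-lam) * lam**i / factorial(i) for i in range(k + 1))

def bisect(f, lo, hi, tol=1e-9):
    """Root of f on [lo, hi], assuming f changes sign exactly once there."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

observed = 9  # crashes reported over ~500,000 miles

# Exact two-sided 95% interval for the Poisson mean:
#   lower bound: the lam with P(X >= 9 | lam) = 0.025
#   upper bound: the lam with P(X <= 9 | lam) = 0.025
low = bisect(lambda lam: (1 - poisson_cdf(observed - 1, lam)) - 0.025, 0.01, observed)
high = bisect(lambda lam: poisson_cdf(observed, lam) - 0.025, observed, 50)

print(f"95% CI for expected crashes per 500,000 miles: [{low:.1f}, {high:.1f}]")
# roughly [4.1, 17.1]
```

So with 9 observed crashes, the underlying expected rate per 500,000 miles could plausibly be anywhere from ~4 to ~17 -- the headline ratio could easily move by a factor of 2 in either direction.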
Edit: feel free to actually respond to the claim rather than downvote
It's always possible to deny the relevance of a comparison based on some quality of the compared data. Autonomous-car pilot trials will, by their very nature, be restricted to some locations, with specific weather patterns, etc., so even after the mileage reaches 1000x the current figure there will still be such options.
At which point will the comparison be considered relevant?
I think what you say would have been fair if Elon's and his fanboys' stance were "we need more data" rather than "we will be able to scale self-driving cars very quickly, very soon".
Of course, it could be no other way for a company that unleashed "FSD Beta" onto the streets and allowed all of us to be subjected to their bloody (literally) beta test. You don't get a safer future with a "move fast and break things" mentality, especially when the CEO is so illiterate about his own technology that he discounts the results of actual experts in the field.
I mean, just look at the trail of headless corpses (there actually are multiple) left by Tesla during this beta test. Weren't we all here to witness a previous version of the thing running straight through a cartoon wall? Of course this thing was always going to end in disappointment -- it has sucked its whole existence. It's never been serious; it's always been an 80/20 play, hoping to get away with the con without delivering the rest of the 20% that makes it work.
Tesla's technology is bunk, their entire FSD thesis of "vision only" has been a dismal failure, and it's actually going to tank the entire Tesla car brand. I've been saying this for a while and it looks like it's finally starting to happen: Tesla is going to exit the car business never having delivered FSD in any viable capacity (although they'll claim total success), and Musk will retarget his empire to running the same FSD grift but with robots. Musk learned the bigger the promise, the more runway people give you to make it a reality. Spin a big enough yarn and Musk can live the rest of his life delivering nothing -- not Mars, not FSD, not AI, nada -- and people will still call him a genius.
I am so tired of people defending Tesla. I wrote off Tesla a long time ago, but what gets me are the people defending their tech. We can all go see the products and experience them.
The tech needs to be at least 100x more error-free than humans. It cannot be on par with the human error rate.
Tesla was THE company that started the EV-revolution, while VW was actively manipulating emission data.
I don't like Elon and his politics, but I'm very grateful for Tesla to have shaken up the car industry. Everyone is better for it.
Maybe? For years the highest selling EV was the Leaf.
I agree Tesla kind of increased the desirability of EVs at least in the US, but I'm not convinced it wouldn't have happened anyway.
It's a hard question to answer, because you're talking about a counterfactual.
I feel like there's probably some broader type of cognitive bias at play (where we assume something common wouldn't have been common otherwise, because it is common) but I don't know what the term for it might be.
We tend to defend companies that push the frontiers of self-driving cars, because the technology has the potential to save lives and make life easier and cheaper for everyone.
As engineers, we understand that the technology will go from unsafe, to par-with-humans, to safer-than-humans, but in order for it to get to the latter, it requires much validation and training in an intermediate state, with appropriate safeguards.
Tesla's approach has been more risk averse and conservative than others. It has compiled data and trained its models on billions of miles of real world telemetry from its own fleet (all of which are equipped with advanced internet-connected computers). Then it has rolled out the robotaxi tech slowly and cautiously, with human safety drivers, and only in two areas.
I defend Tesla's tech, because I've owned and driven a Tesla (Model S) for many years, and its ten-year-old Autopilot (autosteer and cruise control with lane shift) is actually smoother and more reliable than many of its competitors current offerings.
I've also watched hours of footage of Tesla's current FSD on YouTube, and seen it evolve into something quite remarkable. I think the end-to-end neural net with human-like sensors is more sensible than other approaches, which use sensors like LIDAR as a crutch for their more rudimentary software.
Unlike many commenters on this platform I have no political issues with Elon, so that doesn't colour my judgement of Tesla as a company, and its technological achievements. I wish others would set aside their partisan tribalism and recognise that Tesla has completely revolutionised the EV market and continues to make significant positive contributions to technology as a whole, all while opening all its patents and opening its Supercharger network to vehicles from competitors. Its ethics are sound.
> but in order for it to get to the latter, it requires much validation and training in an intermediate state, with appropriate safeguards.
I expect self-driving cars to be launched unsupervised on public roads only once they are an order of magnitude safer than human drivers, or not launched at all.
One can pay thousands of people to babysit these cars with their hands on the wheel for many years until that threshold is reached, and if no one is ready to pay for that effort then we'll just drive ourselves until the end of time.
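To give a sense of why "many years" of babysitting is plausible: here is a minimal back-of-envelope sketch using the statistical "rule of three" (zero events observed in N trials puts the 95% upper confidence bound on the event rate at roughly 3/N). The human crash rate used is an assumed round figure (~1 fatal crash per 100 million miles, in the ballpark of US averages), not a sourced statistic.

```python
# Rough estimate: crash-free miles needed to demonstrate, at ~95% confidence,
# a crash rate 10x lower than human drivers. Assumed round figure for the
# human rate; the real number varies by crash type and region.

HUMAN_FATAL_RATE = 1 / 100_000_000   # assumed: ~1 fatal crash per 100M miles
TARGET_RATE = HUMAN_FATAL_RATE / 10  # "an order of magnitude safer"

# Rule of three: observing zero crashes in N miles bounds the true rate
# at about 3/N with 95% confidence, so we need N >= 3 / TARGET_RATE.
miles_needed = 3 / TARGET_RATE

print(f"Crash-free miles needed: {miles_needed:,.0f}")  # 3,000,000,000
```

Three billion crash-free miles is why fleet size and years of supervised operation matter: at, say, 100,000 miles per day fleet-wide, that threshold takes decades, which is the trade-off the comment above is pointing at.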
> I wish others would set aside their partisan tribalism and recognise that Tesla has completely revolutionised the EV market and continues to make significant positive contributions to technology as a whole, all while opening all its patents and opening its Supercharger network to vehicles from competitors.
The problem is, they lost their drive. The competition has caught up: Mercedes-Benz has an actually certified Level 3 autonomous driving system, on the high-class end pretty much every major manufacturer has something competitive with Tesla, the low-budget end has something like the Dacia Spring starting at €12,000, and the actual long-haul truck segment (i.e. not the fake "truck" aka the Cybertruck) has at least Volvo, MAN and DAF making full-size trucks.
Where is the actual unique selling point that Tesla has now?
Note: this is in response to https://news.ycombinator.com/item?id=46823760 which is from the same commenter but got killed before there was time to post any links refuting its claims.
> The "salute" in particular is simply a politically-expedient freeze-frame from a Musk speech, where he said "my heart goes out to you all" and happened to raise his arm. I could provide freeze-frame images of Obama and Hilary Clinton doing similar "salutes" and claim this makes them "far right fascists" but I would never insult the reader's intelligence by doing so.
For Obama and Clinton you can find freeze frames showing their arm in a similar position, but when you look at the full video it was in the middle of something that does not match a Nazi salute. Here are several examples: https://x.com/ExposingNV/status/1881647306724049116?t=CGKtg0...
If you had a camera in my kitchen you could find similar freeze frames of me whenever I make a sausage/egg/cheese on an English muffin breakfast sandwich, because the ramekin I use to shape the egg patty is on the top shelf.
With Musk the full video shows it matches from when his arm starts moving to the end of the gesture. See https://x.com/BartoSitek/status/1882081868423860315?t=8F0hL-...
When Musk starts invading neighbouring countries and rounding up ethnic minorities for extermination you can call him a Nazi with some legitimacy.
On the other hand, what you're trying to extrapolate here seems somewhat contrived.
> other approaches, which use sensors like LIDAR as a crutch for their more rudimentary software.
Do me a favor: take Musk and get on a plane with just a bunch of cameras instead of sensors like radar, an airspeed sensor, an altimeter, GPS, ILS, etc.
No need for those crutches. Do autopiloting like a real man!
Human-piloted planes have altimeters and airspeed indicators, the failure of which has caused many accidents.
Tesla cars have speed sensors as well as GPS. (Altimeter and ILS not being relevant). I agree with Musk's claim they don't need LIDAR because human drivers don't; it's self-evidently true. But I think they _should_ have it because they can then be safer than humans; why settle for our current accident and death rate?
> Tesla's approach has been more risk averse and conservative than others.
You lost me here. Tesla's approach has absolutely not been risk averse or conservative. They've allowed random public "testers" to beta-test their self-driving stack while they themselves were still calling it a "beta". They've irresponsibly called the feature "Full Self-Driving" when it wasn't able to do any such thing. They've made completely outlandish promises (like FSD driving you from coast to coast in 2016). Finally, they've staged marketing videos of FSD "working"[1]. Just deplorable stuff, using the public as their guinea pigs (and piggy bank).
> Its ethics are sound.
You've got to be joking. Where's the "/s"?
[1] https://techcrunch.com/2023/01/17/tesla-engineer-testifies-t...
Edit: Forgot another Tesla chonker of a promise. Remember when Elon said a Tesla car would be an appreciating asset because it would make you money by acting as a robotaxi when you're not using it? That was in 2019[2]. Has your Model S appreciated? Are you able to sell it for more today than the purchase price?
[2] https://electrek.co/2025/03/18/elon-musk-biggest-lie-tesla-v...
So I'm assuming you're fine with regular drivers using basic lane-keep systems from other companies, which honestly don't even work well, even in the latest cars (there's a reason Comma.ai exists). At least people who are using FSD are enthusiasts and understand the tech. You have some people using lane keep with adaptive cruise control who think the car is "self driving". That's dangerous.
Cite?
electrek.co recent Tesla headline summary with sentiment:
But is that electrek's or Tesla's fault?
All these self driving and "drivers assistance" features like lane keeping exist to satisfy consumer demand for a way to multitask when driving. Tesla's is particularly cancerous, but all of them should be banned. I don't care how good you think your lane keeping in whatever car you have is, you won't need it if you keep your hands on the wheel, eyes on the road, and don't drive when drowsy. Turn it off and stop trying to delegate your responsibility for what your two ton speeding death machine does!
I think it’s unfair to group all those features into “things for people who want to multitask while driving”.
I’m a decent driver, I never use my phone while driving and actively avoid distractions (sometimes I have to tell everyone in the car to stop talking), and yet features like lane assist and automatic braking have helped me avoid possible collisions simply because I’m human and I’m not perfect. Sometimes a random thought takes my attention away for a moment, or I’m distracted by sudden movement in my peripheral vision, or any number of things. I can drive very safely, but I can not drive perfectly all the time. No one can.
These features make safe drivers even safer. They even make the dangerous drivers (relatively) safer.
There are two layers, both relating to concentration.
Driving a car takes effort. ADAS features (or even just plain regular "driving systems") can reduce the cognitive load, which makes for safer driving. As much as I enjoy driving with a manual transmission, an automatic is less tiring for long journeys. Not having to occupy my mind with gear changes frees me up to pay more attention to my surroundings. Adaptive cruise control further reduces cognitive load.
The danger comes when assistance starts to replace attention. Tesla's "full self-driving" falls into this category, where the car doesn't need continuous inputs but the driver is still de jure in charge of the vehicle. Humans just aren't capable of concentrating on monitoring for an extended period.
What about lane assist and follow technology in other cars? Do they also fall into the category of things that replace attention?
Have you ever driven more than 200 km at an average of 80 km/h on a highway with enough turns? Perhaps after work, just to see your family once a month?
Driver fatigue is real, no matter how much coffee you drink.
Lane-keep is a game changer if the UX is well done. I'm way more rested when I arrive at my destination with my Model 3 compared to when I use a regular ICE car with bad lane-assist UX.
EDIT: the fact that people who look at their phones will still look at their phones with lane-keep active only makes it a little safer for them and everyone else, really.
If you're on a road trip, pull the fuck over and sleep. Your schedule isn't worth somebody else's life. If that's your commute, get a new apartment or get a new job. Endangering everybody else with drowsy driving isn't an option you should ever find tenable.
You are correct - but the reality is many humans do those stupid things.
But this is why people bought Teslas. Musk promised that the car would drive itself.
Don’t be silly. Why would a reasonable person think “Full Self Driving” meant that a car would fully drive itself?
My current FSD percentage is 90% over 2k miles (recorded since the v14 update).
FSD is not perfect, but it is everyday amazing and useful.
A couple of friends with Teslas have told me it's not perfect and you do still have to pay attention but they do regular long drives and say it mostly works and they use it all the time.
(They also say there's still the handoff issue if a human needs to take control but it's still a big net win.)
We made drunk driving super illegal and that still doesn't stop people. I would rather they didn't in the first place, but since they're going to anyway, I'd really rather they have a computer that does it better than they do. FSD will pull over and stop if the driver has passed out.
If we could ensure that only drunk people use driver assistance features, I'd be all for that. The reality is that 90% of the sober public are now driving like chronic drunks because they think their car has assumed the responsibility of watching the road. Ban it ALL.
No, remove their licenses if they can’t drive safely. Let safe and responsible drivers use these safety-enhancing features.
If someone is driving dangerously despite these safety features, they should not have a license to operate a motor vehicle on public roads.
These features are still valuable even to safe drivers simply because safe drivers are human and will still make mistakes.
What I'm hearing here is anecdotal and largely based on feelings. The facts are that automatic emergency braking (which should not activate under normal driving circumstances as it is highly uncomfortable) and lane-keeping are basic safety features that have objectively improved safety on the roads. Everything you've said is merely conjecture.
Modern cars could easily detect drunk-style driving and stop or call the cops.
A car that calls the cops on you. Great. It could also park, lock the doors and hold you in while the police take their sweet time knowing you're already in a cell?
Elon knows FSD still takes time, and that is the reason he is now ramping up robot production. Who else would he turn to to steer his upcoming fleet of taxis?