484 comments

  • bastawhiz 10 hours ago

    Lots of people are asking how good the self driving has to be before we tolerate it. I got a one month free trial of FSD and turned it off after two weeks. Quite simply: it's dangerous.

    - It failed with a cryptic system error while driving

    - It started making a left turn far too early that would have scraped the left side of the car on a sign. I had to manually intervene.

    - In my opinion, the default setting accelerates way too hard. I'd call myself a fairly aggressive driver, and it's still too aggressive for my taste.

    - It tried to make way too many right turns on red when it wasn't safe to. It would creep into the road, almost into the path of oncoming vehicles.

    - It didn't merge left to make room for vehicles merging onto the highway. The vehicles then tried to cut in. The system should have avoided an unsafe situation like this in the first place.

    - It would switch lanes to go faster on the highway, but then missed an exit on at least one occasion because it couldn't make it back into the right lane in time. Stupid.

    After the system error, I lost all trust in FSD from Tesla. Until I ride in one and feel safe, I can't have any faith that this is a reasonable system. Hell, even autopilot does dumb shit on a regular basis. I'm grateful to be getting a car from another manufacturer this year.

    • TheCleric 8 hours ago

      > Lots of people are asking how good the self driving has to be before we tolerate it.

      There’s a simple answer to this. As soon as it’s good enough for Tesla to accept liability for accidents. Until then if Tesla doesn’t trust it, why should I?

      • genocidicbunny 8 hours ago

        I think this is probably both the most concise and most reasonable take. It doesn't require anyone to define some level of autonomy or argue about specific edge cases of how the self driving system behaves. And it's easy to apply this principle to not only Tesla, but to all companies making self driving cars and similar features.

      • concordDance 7 hours ago

        What's the current total liability cost for all Tesla drivers?

        The average for all USA cars seems to be around $2000/year, so even if FSD were half as dangerous, Tesla would still be paying the equivalent of $1000/year per car (not sure how big insurance margins are; assuming they're nominal).

        Now, if legally the driver could avoid paying insurance for the few times they want/need to drive themselves (e.g. snow? Dunno what FSD supports atm) then it might make sense economically, but otherwise I don't think it would work out.
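
        A minimal back-of-envelope sketch of that math (the $2000/year figure and the one-half risk factor are the rough assumptions above, not real Tesla or insurer data; the fleet size is purely hypothetical):

          AVG_US_PREMIUM = 2000      # assumed average annual premium per car, USD
          FSD_RISK_RATIO = 0.5       # assume FSD is half as dangerous as a human driver
          FLEET_WITH_FSD = 400_000   # hypothetical number of cars driving on FSD

          per_car = AVG_US_PREMIUM * FSD_RISK_RATIO     # ~$1000/year equivalent per car
          fleet_total = per_car * FLEET_WITH_FSD        # ~$0.4B/year across the fleet

          print(f"~${per_car:,.0f}/year per car, ~${fleet_total/1e9:.1f}B/year fleet-wide")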

        • Retric 7 hours ago

          Liability alone isn’t nearly that high.

          Car insurance payments include people stealing your car, uninsured motorists, rental cars, and other issues that aren't the driver's fault. Premiums also cover the insurance company's profits, advertising, billing, and other overhead from running a business.

          Also, if Tesla was taking on these risks you’d expect your insurance costs to drop.

          • TheCleric 7 hours ago

            Yeah any automaker doing this would just negotiate a flat rate per car in the US and the insurer would average the danger to make a rate. This would be much cheaper than the average individual’s cost for liability on their insurance.

            • thedougd 2 hours ago

              And it would be supplementary to the driver's insurance, only covering incidents that happen while FSD is engaged. Arguably they would self-insure and only purchase insurance for Tesla as a backstop to their liability, maybe through a reinsurance market.

            • ryandrake 3 hours ago

              Somehow I doubt those savings would be passed along to the individual car buyer. Surely buying a car insured by the manufacturer would be much more expensive than buying the car plus your own individual insurance, because the car company would want to profit from both.

          • concordDance 7 hours ago

            Good points, thanks.

      • bdcravens 6 hours ago

        The liability for killing someone can include prison time.

        • TheCleric 6 hours ago

          Good. If you write software that people rely on with their lives, and it fails, you should be held liable for that criminally.

          • viraptor an hour ago

            That's a dangerous line and I don't think it's correct. Software I write shouldn't be relied on in critical situations. If someone makes that decision then it's on them not on me.

            The line should be where a person tells others that they can rely on the software with their lives - as in the integrator for the end product. Even if I was working on the software for self driving, the same thing would apply - if I wrote some alpha level stuff for the internal demonstration and some manager decided "good enough, ship it", they should be liable for that decision. (Because I wouldn't be able to stop them / may have already left by then)

          • dmix 4 hours ago

            Drug companies and the FDA (circa 1906) play a very dangerous and delicate dance all the time releasing new drugs to the public. But for over a century now we've managed to figure it out without holding pharma companies criminally liable for every death.

            > If you write software that people rely on with their lives, and it fails, you should be held liable for that criminally.

            Easier to type those words on the internet than to make them policy IRL. That sort of policy would likely result in either a) killing off all commercial efforts to solve traffic deaths via technology, along with vast amounts of other semi-autonomous technology like farm equipment, or b) governments/car companies mandating filming the driver every time they turn it on, because it's technically supposed to be human-assisted autopilot in these testing stages (outside restricted pilot programs like Waymo taxis). Those distinctions would matter in a criminal courtroom, even if humans can't always be relied upon to follow the instructions on the bottle's label.

            • ryandrake 3 hours ago

              Your take is understandable and not surprising on a site full of software developers. Somehow, the general software industry has ingrained this pessimistic and fatalistic dogma that says bugs are inevitable and there’s nothing you can do to prevent them. Since everyone believes it, it is a self-fulfilling prophecy and we just accept it as some kind of law of nature.

              Holding software developers (or their companies) liable for defects would definitely kill off a part of the industry: the very large part that YOLOs code into production and races to get features released without rigorous and exhaustive testing. And why don’t they spend 90% of their time testing and verifying and proving their software has no defects? Because defects are inevitable and they’re not held accountable for them!

              • everforward 24 minutes ago

                It is true of every field I can think of. Food gets salmonella and what not frequently. Surgeons forget sponges inside of people (and worse). Truckers run over cars. Manufacturers miss some failures in QA.

                Literally everywhere else, we accept that the costs of 100% safety are just unreasonably high. People would rather have a mostly safe device for $1 than a definitely safe one for $5. No one wants to pay to have every head of lettuce tested for E Coli, or truckers to drive at 10mph so they can’t kill anyone.

                Software isn’t different. For the vast majority of applications where the costs of failure are low to none, people want it to be free and rapidly iterated on even if it fails. No one wants to pay for a formally verified Facebook or DoorDash.

              • viraptor 40 minutes ago

                > that says bugs are inevitable and there’s nothing you can do to prevent them

                I don't think people believe this as such. It may be the short way to write it, but what devs actually mean is "bugs are inevitable at the funding/time available". I often say "bugs are inevitable" when in practice it means "you're not going to pay a team for formal specification, validated implementation and enough reliable hardware".

                Which business will agree to making the process 5x longer and require extra people? Especially if they're not forced there by regulation or potential liability?

            • hilsdev 3 hours ago

              We should hold pharma companies liable for every death. They make money off the success cases. Not doing so is another example of privatized profits and socialized risks/costs. Something like a program with reduced costs for those willing to sign away liability could help balance the social good against the risk.

          • bdcravens 3 hours ago

            Assuming there's the kind of guard rails as in other industries where this is true, absolutely. (In other words, proper licensing and credentialing, and the ability to prevent a deployment legally)

            I would also say that if something gets signed off on by management, that carries an implicit transfer of accountability up the chain from the individual contributor to whoever signed off.

          • _rm 4 hours ago

            What a laugh, would you take that deal?

            Upside: you get paid a 200k salary, if all your code works perfectly. Downside: if it doesn't, you go to prison.

            The users aren't compelled to use it. They can choose not to. They get to choose their own risks.

            The internet is a gold mine of creatively moronic opinions.

            • thunky 2 hours ago

              You can go to prison or die for being a bad driver, yet people choose to drive.

              • ukuina 18 minutes ago

                Systems evolve to handle such liability: drivers pass theory and practical tests to get licensed to drive (and periodically thereafter), and an insurance framework gauges your risk level and charges you accordingly.

            • moralestapia 3 hours ago

              Read the site rules.

              And also, of course some people would take that deal, and of course some others wouldn't. Your argument is moot.

          • beej71 5 hours ago

            And such coders should carry malpractice insurance.

          • dansiemens 4 hours ago

            Are you suggesting that individuals should carry that liability?

      • renewiltord 5 hours ago

        This is how I feel about nuclear energy. Every single plant should need to form a full insurance fund dedicated to paying out if there's trouble. And the plant should have strict liability: anything that happens from materials it releases is its responsibility.

        But people get upset about this. We need corporations to take responsibility.

        • idiotsecant 4 hours ago

          While we're at it, why not apply the same standard to coal and natural gas plants? For some reason, when we start talking about nuclear plants we all of a sudden become averse to the idea of unfunded externalities, but when we're talking about 'old' tech that has been steadily irradiating your community and changing the gas composition of the entire planet, it becomes less concerning.

    • modeless 10 hours ago

      Tesla jumped the gun on the FSD free trial earlier this year. It was nowhere near good enough at the time. Most people who tried it for the first time probably share your opinion.

      That said, there is a night and day difference between FSD 12.3 that you experienced earlier this year and the latest version 12.6. It will still make mistakes from time to time but the improvement is massive and obvious. More importantly, the rate of improvement in the past two months has been much faster than before.

      Yesterday I spent an hour in the car over three drives and did not have to turn the steering wheel at all except for parking. That never happened on 12.3. And I don't even have 12.6 yet, this is still 12.5; others report that 12.6 is a noticeable improvement over 12.5. And version 13 is scheduled for release in the next two weeks, and the FSD team has actually hit their last few release milestones.

      People are right that it is still not ready yet, but if they think it will stay that way forever they are about to be very surprised. At the current rate of improvement it will be quite good within a year and in two or three I could see it actually reaching the point where it could operate unsupervised.

      • jvanderbot 9 hours ago

        I have yet to see a difference. I let it drive the highway for an hour and it cut off a semi, coming within 9 to 12 inches of the bumper for no reason. I heard about that one, believe me.

        It got stuck in a side street trying to get to a target parking lot, shaking the wheel back and forth.

        It's no better so far and this is the first day.

        • modeless 9 hours ago

          You have 12.6?

          As I said, it still makes mistakes and it is not ready yet. But 12.3 was much worse. It's the rate of improvement I am impressed with.

          I will also note that the predicted epidemic of crashes from people abusing FSD never happened. It's been on the road for a long time now. The idea that it is "irresponsible" to deploy it in its current state seems conclusively disproven. You can argue about exactly what the rate of crashes is but it seems clear that it has been at the very least no worse than normal driving.

          • jvanderbot 9 hours ago

            Hm. I thought that was the latest release, but it looks like it's not. There seems to be no improvement from the last trial, though, so maybe 12.6 is magically better.

            • modeless 9 hours ago

              A lot of people have been getting the free trial with 12.3 still on their cars today. Tesla has really screwed up on the free trial for sure. Nobody should be getting it unless they have 12.6 at least.

              • jvanderbot 9 hours ago

                I have 12.5. Maybe 12.6 is better, but I've heard that before.

                Don't get me wrong: without a concerted data team building maps a priori, this is pretty incredible. But from a pure performance standpoint it's a shaky product.

                • KaoruAoiShiho 9 hours ago

                  The latest version is 12.5.6, I think he got confused by the .6 at the end. If you think that's bad then there isn't a better version available. However it is a dramatic improvement over 12.3, don't know how much you tested on it.

                  • modeless 9 hours ago

                    You're right, thanks. One of the biggest updates in 12.5.6 is transitioning the highway Autopilot to FSD. If he has 12.5.4 then it may still be using the old non-FSD Autopilot on highways which would explain why he hasn't noticed improvement there; there hasn't been any until 12.5.6.

        • hilux 9 hours ago

          > ... coming within 9 to 12 inches of the bumper for no reason. I heard about that one believe me.

          Oh dear.

          Glad you're okay!

        • eric_cc 8 hours ago

          Is it possible you have a lemon? Genuine question. I’ve had nothing but positive experiences with FSD for the last several months and many thousands of miles.

          • ben_w 7 hours ago

            I've had nothing but positive experiences with ChatGPT-4o; that doesn't make people wrong to criticise either system for modelling its training data too much and generalising too little when it's used for something where the inference domain is too far outside the training domain.

      • wstrange 8 hours ago

        I have a 2024 Model 3, and it's a great car. That being said, I'm under no illusion that the car will ever be self-driving (unsupervised).

        12.5.6 still fails to read very obvious signs for 30 km/h playground zones.

        The current vehicles lack sufficient sensors, and likely do not have enough compute power and memory to cover all edge cases.

        I think it's a matter of time before Tesla faces a lawsuit over continual FSD claims.

        My hope is that the board will grow a spine and bring in a more focused CEO.

        Hats off to Elon for getting Tesla to this point, but right now they need a mature (and boring) CEO.

        • pelorat 4 hours ago

          The board is family and friends, so them ousting him will never happen.

      • jeffbee 7 hours ago

        If I had a dime for every hackernews who commented that FSD version X was like a revelation compared to FSD version X-ε I'd have like thirty bucks. I will grant you that every release has surprisingly different behaviors.

        Here's an unintentionally hilarious meta-post on the subject https://news.ycombinator.com/item?id=29531915

        • modeless 7 hours ago

          Sure, plenty of people have been saying it's great for a long time, when it clearly was not (looking at you, Whole Mars Catalog). I was not saying it was super great back then. I have consistently been critical of Elon for promising human level self driving "next year" for like 10 years in a row and being wrong every time. He said it this year again and I still think he's wrong.

          But the rate of progress I see right now has me thinking that it may not be more than two or three years before that threshold is finally reached.

          • ben_w 7 hours ago

            The most important lesson I've taken from incorrectly predicting in 2009 that we'd have cars without steering wheels by 2018, and from thinking that the progress I saw each year up to then was consistent with that prediction, is that it's really hard to guess how long it takes to walk the fractal path that is software R&D.

            How far are we now, 6 years later than I expected?

            Dunno.

            I suspect it's gonna need an invention on the same level as Diffusion or Transformer models to be able to handle all the edge cases, and that might mean we only get it with human-level AGI.

            But I don't know that, it might be we've already got all we need architecture-wise and it's just a matter of scale.

            Only thing I can be really sure of is we're making progress "quite fast" in a non-objective use of the words — it's not going to need a re-run of 6 million years of mammalian evolution or anything like that, but even 20 years of wall-clock time would be a disappointment.

            • modeless 6 hours ago

              Waymo went driverless in 2020, maybe you weren't that far off. Predicting that in 2009 would have been pretty good. They could and should have had vehicles without steering wheels anytime since then, it's just a matter of hardware development. Their steering wheel free car program was derailed when they hired traditional car company executives.

              • ben_w 5 hours ago

                Waymo for sure, but I meant also without any geolock etc., so I can't claim credit for my prediction.

                They may well beat Tesla to it, though.

        • Laaas 7 hours ago

          Doesn’t this just mean it’s improving rapidly which is a good thing?

          • jeffbee 5 hours ago

            No, the fact that people say FSD is on the verge of readiness constantly for a decade means there is no widely shared benchmark.

      • bastawhiz 8 hours ago

        > At the current rate of improvement it will be quite good within a year

        I'll believe it when I see it. I'm not sure "quite good" is the next step after "feels dangerous".

      • snypher 8 hours ago

        So just a few more years of death and injury until they reach a finished product?

      • delusional 8 hours ago

        > That said, there is a night and day difference between FSD 12.3 that you experienced earlier this year and the latest version 12.6

        >And I don't even have 12.6 yet, this is still 12.5;

        How am I supposed to take anything you say seriously when your only claim is a personal anecdote that doesn't even apply to your own argument? Please think about what you're writing, and please stop repeating information you heard on YouTube as if it were fact.

        This is one of the reasons (among many) that I can't take Tesla boosters seriously. I have absolutely zero faith in your anecdote that you didn't touch the steering wheel. I bet it's a lie.

        • jsjohnst 4 hours ago

          > I have absolutely zero faith in your anecdote that you didn't touch the steering wheel. I bet it's a lie.

          I’m not GP, but I can share video showing it driving across residential, city, highway, and even gravel roads all in a single trip without touching the steering wheel a single time over a 90min trip (using 12.5.4.1).

          • jsjohnst 3 hours ago

            And if someone wants to claim I’m cherry picking the video, happy to shoot a new video with this post visible on an iPad in the seat next to me. Is it autonomous? Hell no. Can it drive in Manhattan? Nope. But can it do >80% of my regular city (suburb outside nyc) and highway driving, yep.

        • eric_cc 8 hours ago

          I can second this experience. I rarely touch the wheel anymore. I’d say I’m 98% FSD. I take over in school zones, parking lots, and complex construction.

        • modeless 8 hours ago

          The version I have is already a night and day difference from 12.3 and the current version is better still. Nothing I said is contradictory in the slightest. Apply some basic reasoning, please.

          I didn't say I didn't touch the steering wheel. I had my hands lightly touching it most of the time, as one should for safety. I occasionally used the controls on the wheel as well as the accelerator pedal to adjust the set speed, and I used the turn signal to suggest lane changes from time to time, though most lane choices were made automatically. But I did not turn the wheel. All turning was performed by the system. (If you turn the wheel manually the system disengages). Other than parking, as I mentioned, though FSD did handle some navigation into and inside parking lots.

      • misiti3780 8 hours ago

        I have the same experience: 12.5 is insanely good. HN is full of people that don't want self-driving to succeed for some reason. Fortunately, it's clear as day to some of us that Tesla's approach will work.

        • ethbr1 8 hours ago

          Curiosity about why they're against it, and explaining why you think it will work, would be more helpful.

          • misiti3780 7 hours ago

            It's evident to Tesla drivers using Full Self-Driving (FSD) that the technology is rapidly improving and will likely succeed. The key reason for this anticipated success is data: any reasonably intelligent observer recognizes that training exceptional deep neural networks requires vast amounts of data, and Tesla has accumulated more relevant data than any of its competitors. Tesla recently held a robotaxi event, explicitly informing investors of their plans to launch an autonomous competitor to Uber. While Elon Musk's timeline predictions and politics may be controversial, his ability to achieve results and attract top engineering and management talent is undeniable.

            • ryandrake 3 hours ago

              Then why have we been just a year or two away from actual working self-driving, for the last 10 years? If I told my boss that my project would be done in a year, and then the following year said the same thing, and continued that for years, that’s not what “achieving results” means.

        • eric_cc 8 hours ago

          Completely agree. It’s very strange. But honestly it’s their loss. FSD is fantastic.

      • seizethecheese 9 hours ago

        If this is the case, the calls for heavy regulation in this thread will lead to many more deaths than otherwise.

    • dreamcompiler 8 hours ago

      > It didn't merge left to make room for vehicles merging onto the highway. The vehicles then tried to cut in. The system should have avoided an unsafe situation like this in the first place.

      This is what bugs me about ordinary autopilot. Autopilot doesn't switch lanes, but I like to slow down or speed up as needed to allow merging cars to enter my lane. Autopilot never does that, and I've had some close calls with irate mergers who expected me to work with them. And I don't think they're wrong.

      Just means that when I'm cruising in the right lane with autopilot I have to take over if a car tries to merge.

      • bastawhiz 3 hours ago

        Agreed. Automatic lane changes are the only feature of enhanced autopilot that I think I'd be interested in, solely for this reason.

    • browningstreet an hour ago

      Was this the last version, or the version released today?

      I’ve been pretty skeptical of FSD and didn’t use the last version much. Today I used the latest test version, enabled yesterday, and rode around SF, to and from GGP, and it did really well.

      Waymo well? Almost. But whereas I haven’t ridden Waymo on the highway yet, FSD got me from Hunters Point to the east bay with no disruptions.

      The biggest improvement I noticed was its optimization of highway progress: it'll change lanes, nicely, when the lane you're in is slower than the surrounding lanes. And when you're in the fast/passing lane it'll return to the next closest lane.

      Definitely better than the last release.

      • bastawhiz 38 minutes ago

        I'm clearly not using the FSD today because I refused to complete my free trial of it a few months ago. The post of mine that you're responding to doesn't mention my troubles with Autopilot, which I highly doubt are addressed by today's update (see my other comment for a list of problems). They need to really, really prove to me that Autopilot is working reliably before I'd even consider accepting another free trial of FSD, which I doubt they'd do anyway.

    • frabjoused 10 hours ago

      The thing that doesn't make sense is the numbers. If it is dangerous in your anecdotes, why don't the reported numbers show more accidents when FSD is on?

      When I did the trial on my Tesla, I also noted these kinds of things and felt like I had to take control.

      But at the end of the day, only the numbers matter.

      • timabdulla 10 hours ago

        > If it is dangerous in your anecdotes, why don't the reported numbers show more accidents when FSD is on?

        Even if it is true that the data show that with FSD (not Autopilot) enabled, drivers are in fewer crashes, I would be worried about other confounding factors.

        For instance, I would assume that drivers are more likely to engage FSD in situations of lower complexity (less traffic, little construction or other impediments, overall lesser traffic flow control complexity, etc.) I also believe that at least initially, Tesla only released FSD to drivers with high safety scores relative to their total driver base, another obvious confounding factor.

        Happy to be proven wrong though if you have a link to a recent study that goes through all of this.

        • valval 8 hours ago

          Either the system causes less loss of life than a human driver or it doesn’t. The confounding factors don’t matter, as Tesla hasn’t presented a study on the subject. That’s in the future, and all stats that are being gathered right now are just that.

          • unbrice 8 hours ago

            > Either the system causes less loss of life than a human driver or it doesn’t. The confounding factors don’t matter.

            Confounding factors are what allow one to tell apart "the system causes less loss of life" from "the system causes more loss of life yet it is only enabled in situations where fewer lives are lost".
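
            A toy numeric sketch of that selection effect (all rates and mixes are invented for illustration): a system that is worse in every condition can still post a lower headline crash rate if it is mostly engaged on the easy miles.

              # Invented crashes per million miles, split by driving condition.
              human = {"easy_highway": 1.0, "complex_city": 8.0}
              fsd   = {"easy_highway": 1.5, "complex_city": 12.0}   # worse in BOTH conditions

              # Humans drive a mix of conditions; FSD is mostly engaged on easy miles.
              human_mix = {"easy_highway": 0.50, "complex_city": 0.50}
              fsd_mix   = {"easy_highway": 0.95, "complex_city": 0.05}

              human_rate = sum(human[c] * human_mix[c] for c in human)   # ~4.5
              fsd_rate   = sum(fsd[c] * fsd_mix[c] for c in fsd)         # ~2.0, despite being worse everywhere

              print(f"human: {human_rate:.2f}, fsd: {fsd_rate:.2f} crashes per million miles")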

      • rvnx 10 hours ago

        There is an easy way to know what is really behind the numbers: look who is paying in case of accident.

        You have a Mercedes, Mercedes takes responsibility.

        You have a Tesla, you take the responsibility.

        Says a lot.

        • sebzim4500 8 hours ago

          Mercedes had the insight that if no one is able to actually use the system then it can't cause any crashes.

          Technically, that is the easiest way to get a perfect safety record and journalists will seemingly just go along with the charade.

        • diebeforei485 an hour ago

          While I don't disagree with your point in general, it should be noted that there is more to taking responsibility than just paying. Even if Mercedes Drive Pilot was enabled, anything that involves court appearances and criminal liability is still your problem if you're in the driver's seat.

        • tensor 8 hours ago

          You have a Mercedes, and you have a system that works virtually nowhere.

          • therouwboat 7 hours ago

            Better that way than "Oh it tried to run red light, but otherwise it's great."

            • tensor 6 hours ago

              "Oh we tried to build it but no one bought it! So we gave up." - Mercedes before Tesla.

              Perhaps FSD isn't ready for city streets yet, but it's great on the highways, and I'd 1000x prefer we make progress rather than settle for the status quo garbage that the legacy makers put out. Also, human drivers are the most dangerous by far; we need to make progress so we can eventually phase them out.

              • meibo 4 hours ago

                2-ton blocks of metal that go 80mph next to me on the highway is not the place I would want people to go "fuck it let's just do it" with their new tech. Human drivers might be dangerous but adding more danger and unpredictability on top just because we can skip a few steps in the engineering process is crazy.

                Maybe you have a deathwish, but I definitely don't. Your choices affect other humans in traffic.

      • jsight 10 hours ago

        Because it is bad enough that people really do supervise it. I see people who say that wouldn't happen because the drivers become complacent.

        Maybe that could be a problem with future versions, but I don't see it happening with 12.3.x. I've also heard that driver attention monitoring is pretty good in the later versions, but I have no first hand experience yet.

        • valval 9 hours ago

          Very good point. The product that requires supervision and tells the user to keep their hands on the wheel every 10 seconds is not good enough to be used unsupervised.

          I wonder how things are inside your head. Are you ignorant or affected by some strong bias?

          • jsight 2 hours ago

            Yeah, it definitely isn't good enough to be used unsupervised. TBH, they've switched to eye and head tracking as the primary mechanism of attention monitoring now. It seems to work pretty well, now that I've had a chance to try it.

            I'm not quite sure what you meant by your second paragraph, but I'm sure I have my blind spots and biases. I do have direct experience with various versions of 12.x though (12.3 and now 12.5).

      • bastawhiz 10 hours ago

        Is Tesla required to report system failures or the vehicle damaging itself? How do we know they're not optimizing for the benchmark (what they're legally required to report)?

        • rvnx 10 hours ago

          If the question is: “was FSD activated at the time of the accident: yes/no”, they can legally claim no, for example if luckily the FSD disconnects half a second before a dangerous situation (eg: glare obstructing cameras), which may coincide exactly with the times of some accidents.

          • diebeforei485 an hour ago

            > To ensure our statistics are conservative, we count any crash in which Autopilot was deactivated within 5 seconds before impact, and we count all crashes in which the incident alert indicated an airbag or other active restraint deployed.

            Scroll down to Methodology at https://www.tesla.com/VehicleSafetyReport

        • Uzza 7 hours ago

          All manufacturers have for some time been required by regulators to report any accident where an autonomous or partially autonomous system was active within 30 seconds of an accident.

          • bastawhiz 44 minutes ago

            My question is better rephrased as "what is legally considered an accident that needs to be reported?" If the car scrapes a barricade or curbs it hard but the airbags don't deploy and the car doesn't sense the damage, clearly they don't. There's a wide spectrum of issues up to the point where someone is injured or another car is damaged.

      • akira2501 10 hours ago

        You can measure risks without having to witness disaster.

      • nkrisc 10 hours ago

        What numbers? Who’s measuring? What are they measuring?

      • johnneville 7 hours ago

        Are there even transparent reported numbers available?

        For whatever does exist, it is also easy to imagine how the numbers could be misleading. For instance, I've disengaged FSD when I noticed I was about to be in an accident. If I couldn't recover in time, the accident would not have happened while FSD was on and, depending on the metric, would not be reported as an FSD-induced accident.

      • throwaway562if1 7 hours ago

        AIUI the numbers are for accidents where FSD is in control. Which means if it does a turn into oncoming traffic and the driver yanks the wheel or slams the brakes 500ms before collision, it's not considered a crash during FSD.

        • Uzza 7 hours ago

          That is not correct. Tesla counts any accident within 5 seconds of Autopilot/FSD turning off as the system being involved. Regulators extend that period to 30 seconds, and Tesla must comply with that when reporting to them.

        • concordDance 7 hours ago

          Several people in this thread have been saying this or similar. It's incorrect, from Tesla:

          "To ensure our statistics are conservative, we count any crash in which Autopilot was deactivated within 5 seconds before impact"

          https://www.tesla.com/en_gb/VehicleSafetyReport

          Situations which inevitably cause a crash more than 5 seconds later seem like they would be extremely rare.
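
          A rough sketch of how the attribution window changes what gets counted (the event timings below are hypothetical, not real report data):

            # Seconds between system disengagement and impact for some hypothetical
            # crashes; None means the system was still engaged at impact.
            crashes = [None, None, 0.5, 3.0, 12.0, 45.0]

            def attributed(window_s):
                """Crashes counted against the system for a given window."""
                return sum(1 for gap in crashes if gap is None or gap <= window_s)

            print(attributed(5))    # 4 under Tesla's stated 5-second methodology
            print(attributed(30))   # 5 under the 30-second regulatory reporting window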

      • ForHackernews 10 hours ago

        Maybe other human drivers are reacting quickly and avoiding potential accidents from dangerous computer driving? That would be ironic, but I'm sure it's possible in some situations.

      • gamblor956 8 hours ago

        The numbers collected by the NHTSA and insurance companies do show that FSD is dangerous... that's why the NHTSA started investigating, and it's why most insurance companies won't insure Tesla vehicles, or will only do so at significantly higher rates.

        Also, Tesla is known to disable self-driving features right before collisions to give the appearance of driver fault.

        And the coup de grace: if Tesla's own data showed that FSD was actually safer, they'd be shouting it from the moon, using that data to get self-driving permits in CA, and offering to assume liability if FSD actually caused an accident (like Mercedes does with its self driving system).

      • lawn 8 hours ago

        > The thing that doesn't make sense is the numbers.

        Oh? Who are presenting the numbers?

        Is a crash that fails to trigger the airbags still not counted as a crash?

        What about the car turning off FSD right before a crash?

        How about adjusting for factors such as age of driver and the type of miles driven?

        The numbers don't make sense because they're not good comparisons and are made to make Tesla look good.

    • dchichkov 7 hours ago

      > I'm grateful to be getting a car from another manufacturer this year.

      I'm curious, what is the alternative that you are considering? I've been delaying an upgrade to electric for some time. And now, a car manufacturer that is contributing to the making of another Jan 6th, 2021 is not an option, in my opinion.

      • bastawhiz 3 hours ago

        I've got a deposit on the Dodge Charger Daytona EV

    • mike_d 8 hours ago

      > Lots of people are asking how good the self driving has to be before we tolerate it.

      When I feel as safe as I do sitting in the back of a Waymo.

    • geoka9 6 hours ago

      > It didn't merge left to make room for vehicles merging onto the highway. The vehicles then tried to cut in. The system should have avoided an unsafe situation like this in the first place.

      I've been on the receiving end of this with the offender being a Tesla so many times that I figured it must be FSD.

      • bastawhiz 3 hours ago

        Probably autopilot, honestly.

    • thomastjeffery 10 hours ago

      It's not just about relative safety compared to all human driving.

      We all know that some humans are sometimes terrible drivers!

      We also know what that looks like: Driving too fast or slow relative to surroundings. Quickly turning every once in a while to stay in their lane. Aggressively weaving through traffic. Going through an intersection without spending the time to actually look for pedestrians. The list goes on..

      Bad human driving can be seen. Bad automated driving is invisible. Do you think the people who were about to be hit by a Tesla even realized that was the case? I sincerely doubt it.

      • bastawhiz 8 hours ago

        > Bad automated driving is invisible.

        I'm literally saying that it is visible, to me, the passenger. And for reasons that aren't just bad vibes. If I'm in an Uber and I feel unsafe, I'll report the driver. Why would I pay for my car to do that to me?

        • wizzwizz4 8 hours ago

          GP means that the signs aren't obvious to other drivers. We generally underestimate how important psychological modelling is for communication, because it's transparent to most of us under most circumstances, but AI systems have very different psychology to humans. It is easier to interpret the body language of a fox than a self-driving car.

        • thomastjeffery 3 hours ago

          We are talking about the same thing: unpredictability. If you and everyone else can't predict what your car will do, then that seems objectively unsafe to me. It also sounds like we agree with each other.

    • paulcole 8 hours ago

      > Until I ride in one and feel safe, I can't have any faith that this is a reasonable system

      This is probably the worst way to evaluate self-driving for society though, right?

      • bastawhiz 3 hours ago

        Why would I be supportive of a system that has actively scared me for objectively scary reasons? Even if it's the worst reason, it's not a bad reason.

    • pbasista 7 hours ago

      > I'm grateful to be getting a car from another manufacturer this year.

      I have no illusions about Tesla's ability to deliver an unsupervised self-driving car any time soon. However, as far as I understand, their autosteer system, in spite of all its flaws, is still the best out there.

      Do you have any reason to believe that there actually is something better?

      • bastawhiz 3 hours ago

        Autopilot has not been good. I have a cabin four hours from my home and I've used autopilot for long stretches on the highway. Some of the problems:

        - Certain exits are not detected as such and the car violently veers right before returning to the lane. I simply can't believe they don't have telemetry to remedy this.

        - Sometimes the GPS becomes miscalibrated. This makes the car think I'm taking an exit when I'm not, causing the car to abruptly reduce its speed to the speed of the ramp. It does not readjust.

        - It frequently slows for "emergency lights" that don't exist.

        - If traffic comes to a complete stop, the car accelerates way too hard and brakes hard when the car in front moves any substantial amount.

        At this point, I'd rather have something less good than something which is an active danger. For all intents and purposes, my Tesla doesn't have reliable cruise control, period.

        Beyond that, though, I simply don't have trust in Tesla software. I've encountered so many problems at this point that I can't possibly expect them to deliver a product that works reliably at any point in the future. What reason do I have to believe things will magically improve?

        • absoflutely 2 hours ago

          I'll add that it randomly brakes hard on the interstate because it thinks the speed limit drops to 45. There aren't speed limit signs anywhere nearby on different roads that it could be mistakenly reading either.

          • bastawhiz an hour ago

            I noticed that this happens when the triangle on the map is slightly offset from the road, which I've attributed to miscalibrated GPS. It happens consistently when I'm in the right lane and pass an exit when the triangle is ever so slightly misaligned.

      • throwaway314155 7 hours ago

        I believe they're fine with losing auto steering capabilities, based on the tone of their comment.

    • eric_cc 8 hours ago

      That sucks that you had that negative experience. I’ve driven thousands of miles in FSD and love it. Could not imagine going back. I rarely need to intervene and when I do it’s not because the car did something dangerous. There are just times I’d rather take over due to cyclists, road construction, etc.

      • itsoktocry 6 hours ago

        These "works for me!" comments are exhausting. Nobody believes you "rarely intervene", otherwise Tesla themselves would be promoting the heck out of the technology.

        Bring on the videos of you in the passenger seat on FSD for any amount of time.

        • eric_cc 4 hours ago

          It’s the counter-point to the “it doesn’t work for me” posts. Are you okay with those ones?

      • windexh8er 7 hours ago

        I don't believe this at all. I don't own one, but I know about a half dozen people who got suckered into paying for FSD. None of them use it, and 3 of them have said it's put them in dangerous situations.

        I've ridden in an X, S and Y with it on. Talk about vomit inducing when letting it drive during "city" driving. I don't doubt it's OK on highway driving, but Ford Blue Cruise and GM's Super Cruise are better there.

        • eric_cc 4 hours ago

          You can believe what you want to believe. It works fantastic for me whether you believe it or not.

          I do wonder if people who have wildly different experiences than I have are living in a part of the country that, for one reason or another, Tesla FSD does not yet do as well in.

      • bastawhiz 3 hours ago

        I'm glad for you, I guess.

        I'll say the autopark was kind of neat, but parking has never been something I have struggled with.

    • concordDance 7 hours ago

      This would be more helpful with a date. Was this in 2020 or 2024? I've been told FSD had a complete rearchitecting.

    • potato3732842 9 hours ago

      If you were a poorer driver who did these things you wouldn't find these faults so damning because it'd only be say 10% dumber than you rather than 40% or whatever (just making up those numbers).

      • bastawhiz 8 hours ago

        That just implies FSD is as good as a bad driver, which isn't really an endorsement.

        • potato3732842 5 hours ago

          I agree it's not an endorsement but we allow chronically bad drivers on the road as long as they're legally bad and not illegally bad.

    • dekhn 10 hours ago

      I don't think you're supposed to merge left when people are merging onto the highway into your lane; you have the right of way. I find that even with the right of way many people merging aren't paying attention, but I deal with that by slightly speeding up (so they can see me in front of them).

      • sangnoir 10 hours ago

        You don't have a right of way over a slow moving vehicle that merged ahead of you. Most ramps are not long enough to allow merging traffic to accelerate to highway speeds before merging, so many drivers free up the right-most lane for this purpose (by merging left)

        • SoftTalker 9 hours ago

          If you can safely move left to make room for merging traffic, you should. It’s considerate and reduces the chances of an accident.

        • dekhn 7 hours ago

          Since a number of people are giving pushback, can you point to any (California-oriented) driving instructions consistent with this? I'm not seeing any. I see people saying "it's courteous", but when I'm driving I'm managing hundreds of variables, and changing lanes is often risky, given motorcycles lane-splitting at high speed (quite common).

          • davidcalloway 5 hours ago

            Definitely not California but literally the first part of traffic law in Germany says that caution and consideration are required from all partaking in traffic.

            Germans are not known for poor driving.

            • dekhn 5 hours ago

              Right, but the "consideration" here is the person merging onto the highway actually paying attention and adjusting, rather than pointedly not even looking (this is a very common merging behavior where I live). Changing lanes isn't without risk even on a clear day with good visibility. Seems like my suggestion of slowing down or speeding up makes perfect sense because it's less risky overall, and is still being considerate.

              Note that I personally do change lanes at times when it's safe, convenient, I am experienced with the intersection, and the merging driver is being especially unaware.

          • sangnoir 4 hours ago

            It's not just courteous, it's self-serving; AFAIK it's a self-emergent phenomenon. If you're driving at 65 mph and anticipate a slowdown in your lane due to merging traffic, do you stay in your lane and slow down to 40 mph, or do you change lanes (if it's safe to do so) and maintain your speed?

            Texas highways allow for much higher merging speeds at the cost of far larger (in land area), 5-level interchanges rather than the 35 mph offramps and onramps common in California.

            Any defensive driving course (which falls under instruction, IMO) states that you don't always have to exercise your right of way, and indeed it may be unsafe to do so in some circumstances. Anticipating the actions of other drivers around you and avoiding potentially dangerous situations are the other aspects of being a defensive driver, and those concepts are consistent with freeing up the lane slower-moving vehicles are merging onto when it's safe to do so.

        • potato3732842 9 hours ago

          Most ramps are more than long enough to accelerate close enough to traffic speed if one wants to, especially in most modern vehicles.

          • wizzwizz4 8 hours ago

            Unless the driver in front of you didn't.

      • bastawhiz 8 hours ago

        Just because you have the right of way doesn't mean the correct thing to do is to remain in the lane. If remaining in your lane is likely to make someone else do something reckless, you should have been proactive. Not legally, for the sake of being a good driver.

        • dekhn 7 hours ago

          Can you point to some online documentation that recommends changing lanes in preference to speeding up when a person is merging at too slow a speed? What I'm doing is following CHP guidance in this post: https://www.facebook.com/chpmarin/posts/lets-talk-about-merg... """Finally, if you are the vehicle already traveling in the slow lane, show some common courtesy and do what you can to create a space for the person by slowing down a bit or speeding up if it is safer. """

          (you probably misinterpreted what I said. I do sometimes change lanes, even well in advance of a merge I know is prone to problems, if that's the safest and most convenient. What I am saying is the guidance I have read indicates that staying in the same lane is generally safer than changing lanes, and speeding up into an empty space is better for everybody than slowing down, especially because many people who are merging will keep slowing down more and more when the highway driver slows for them)

          • jazzyjackson 2 hours ago

            I read all this thread and all I can say is not everything in the world is written down somewhere

          • bastawhiz 3 hours ago

            > recommends changing lanes in preference to speeding up when a person is merging at too slow a speed

            It doesn't matter, Tesla does neither. It always does the worst possible non-malicious behavior.

  • AlchemistCamp 11 hours ago

    The interesting question is how good self-driving has to be before people tolerate it.

    It's clear that having half the casualty rate per distance traveled of the median human driver isn't acceptable. How about a quarter? Or a tenth? Accidents caused by human drivers are one of the largest causes of injury and death, but they're not newsworthy the way an accident involving automated driving is. It's all too easy to see a potential future where many people die needlessly because technology that could save lives is regulated into a greatly reduced role.

    • Arainach 10 hours ago

      This is about lying to the public and stoking false expectations for years.

      If it's "fully self driving" Tesla should be liable for when its vehicles kill people. If it's not fully self driving and Tesla keeps using that name in all its marketing, regardless of any fine print, then Tesla should be liable for people acting as though their cars could FULLY self drive and be sued accordingly.

      You don't get to lie just because you're allegedly safer than a human.

      • jeremyjh 10 hours ago

        I think this is the answer: the company takes on full liability. If a Tesla is Fully Self Driving then Tesla is driving it. The insurance market will ensure that dodgy software/hardware developers exit the industry.

        • blagie 10 hours ago

          This is very much what I would like to see.

          The price of insurance is baked into the price of a car. If the car is as safe as I am, I pay the same price in the end. If it's safer, I pay less.

          From my perspective:

          1) I would *much* rather have Honda kill someone than kill someone myself. If I did, the psychological impact on me would be horrible. In the city I live in, I dread ageing; as my reflexes get slower, I'm more and more likely to kill someone.

          2) As a pedestrian, most of the risk seems to come from outliers -- people who drive hyper-aggressively. Replacing all cars with a median driver would make me much safer (and traffic, much more predictable).

          If we want safer cars, we can simply raise insurance payouts, and vice-versa. The market works everything else out.

          But my stress levels go way down, whether in a car, on a bike, or on foot.

          • gambiting 10 hours ago

            >> I would much rather have Honda kill someone than myself. If I killed someone, the psychological impact on myself would be horrible.

            Except that we know that it doesn't work like that. Train drivers are ridden with extreme guilt every time "their" train runs over someone, even though they know that logically there was absolutely nothing they could have done to prevent it. Don't see why it would be any different here.

            >>If we want safer cars, we can simply raise insurance payouts, and vice-versa

            In what way? In the EU the minimum covered amount for any car insurance is 5 million euro, and it has had no impact on the safety of cars. And of course the recent increase in payouts (due to the general increase in labour and parts costs) has led to a dramatic increase in insurance premiums, which in turn has led to a drastic increase in the number of people driving without insurance. So now that needs increased policing and enforcement, which we pay for through taxes. So no, the market doesn't "work everything out".

            • blagie 7 hours ago

              > Except that we know that it doesn't work like that. Train drivers are ridden with extreme guilt every time "their" train runs over someone, even though they know that logically there was absolutely nothing they could have done to prevent it. Don't see why it would be any different here.

              It's not binary. Someone dying -- even with no involvement -- can be traumatic. I've been in a position where I could have taken actions to prevent someone from being harmed. Rationally not my fault, but in retrospect, I can describe the exact set of steps needed to prevent it. I feel guilty about it, even though I know rationally it's not my fault (there's no way I could have known ahead of time).

              However, it's a manageable guilt. I don't think it would be if I knew rationally that it was my fault.

              > So no, market doesn't "work everything out".

              Whether or not a market works things out depends on issues like transparency and information. Parties will offload costs wherever possible. In the model you gave, there is no direct cost to a car maker making less safe cars or vice-versa. It assumes the car buyer will even look at insurance premiums, and a whole chain of events beyond that.

              That's different if it's the same party making cars, paying money, and doing so at scale.

              If Tesla pays for everyone damaged in any accident a Tesla car has, then Tesla has a very, very strong incentive to make safe cars to whatever optimum is set by the damages. Scales are big enough -- millions of cars and billions of dollars -- where Tesla can afford to hire actuaries and a team of analysts to make sure they're at the optimum.

              As an individual car buyer, I have no chance of doing that.

              Ergo, in one case, the market will work it out. In the other, it won't.

        • KoolKat23 8 hours ago

          That's just reducing the value of a life to a number. It can be gamed to a situation where it's just more profitable to mow down people.

            What counts as an acceptable number/financial cost is also just an indirect, approximated way of implementing a more direct/scientific regulation. Not everything needs to be reduced to money.

          • jeremyjh 7 hours ago

            There is no way to game it successfully; if your insurance costs are much higher than your competitors you will lose in the long run. That doesn’t mean there can’t be other penalties when there is gross negligence.

            • KoolKat23 4 hours ago

                Who said management and shareholders are in it for the long run? There are plenty of examples of businesses being run purely for the short term. Bonuses and stock pumps.

        • stormfather 8 hours ago

          That would be good because it would incentivize all FSD cars communicating with each other. Imagine how safe driving would be if they are all broadcasting their speed and position to each other. And each vehicle sending/receiving gets cheaper insurance.

          • Terr_ 7 hours ago

              It goes kinda dystopian if access to the network becomes a monopolistic barrier.

        • tensor 8 hours ago

          I’m for this as long as the company also takes on liability for human errors they could prevent. I’d want to see cars enforcing speed limits and similar things. Humans are too dangerous to drive.

      • mrpippy 7 hours ago

        Tesla officially renamed it to “Full Self Driving (supervised)” a few months ago, previously it was “Full Self Driving (beta)”

        Both names are ridiculous, for different reasons. Nothing called a “beta” should be tested on public roads without a trained employee supervising it (i.e. being paid to pay attention). And of course it was not “full”, it always required supervision.

        And “Full Self Driving (supervised)” is an absurd oxymoron. Given the deaths and crashes that we’ve already seen, I’m skeptical of the entire concept of a system that works 98% of the time, but also needs to be closely supervised for the 2% of the time when it tries to kill you or others (with no alerts).

        It's an abdication of duty that NHTSA has let this continue for so long. They've picked up the pace recently, and I wouldn't be surprised if they come down hard on Tesla (unless Trump wins, in which case Elon will be put in charge of NHTSA, the SEC, and the FAA).

        • ilyagr 31 minutes ago

          I hope they soon rename it into "Fully Supervised Driving".

      • SoftTalker 9 hours ago

        It’s your car, so ultimately the liability is yours. That’s why you have insurance. If Tesla retains ownership, and just lets you drive it, then they have (more) liability.

    • triyambakam 10 hours ago

      Hesitation around self-driving technology is not just about the raw accident rate, but the nature of the accidents. Self-driving failures often involve highly visible, preventable mistakes that seem avoidable by a human (e.g., failing to stop for an obvious obstacle). Humans find such incidents harder to tolerate because they can seem fundamentally different from human error.

      • crazygringo 10 hours ago

        Exactly -- it's not just the overall accident rate, but the rate per accident type.

        Imagine if self-driving is 10x safer on freeways, but on the other hand is 3x more likely to run over your dog in the driveway.

        Or it's 5x safer on city streets overall, but actually 2x worse in rain and ice.

        We're fundamentally wired for loss aversion. So I'd say it's less about what the total improvement rate is, and more about whether it has categorizable scenarios where it's still worse than a human.

    • akira2501 10 hours ago

      > traveled of the median human driver isn't acceptable.

      It's completely acceptable. In fact the numbers are lower than they have been since we've started driving.

      > Accidents caused by human drivers

      Are there any other types of drivers?

      > are one of the largest causes of injury and death

      More than half the fatalities on the road are actually caused by the use of drugs and alcohol. The statistics are very clear on this. Impaired people cannot drive well. Non impaired people drive orders of magnitude better.

      > technology that could save lives

      There is absolutely zero evidence this is true. Everyone is basing this off of a total misunderstanding of the source of fatalities and a willful misapprehension of the technology.

      • blargey 10 hours ago

        > Non impaired people drive orders of magnitude better.

        That raises the question - how many impaired driver-miles are being baked into the collision statistics for "median human" driver-miles? Shouldn't we demand non-impaired driving as the standard for automation, rather than "averaged with drunk / phone-fiddling /senile" driving? We don't give people N-mile allowances for drunk driving based on the size of the drunk driver population, after all.

        • akira2501 3 hours ago

          Motorcycles account for a further 15% of all fatalities in a typical year. Weather is often a factor. Road design is sometimes a factor; I remember several rollover crashes that ended in a body of water with no one in the vehicle surviving. Likewise, ejections due to lack of seatbelt use are a noticeable factor in fatalities.

          Once you dig into the data you see that almost every crash, at this point in history, is really a mini-story detailing the confluence of several factors that turned a basic accident into something fatal.

          Also, and I only saw this once, but if you literally have a heart attack behind the wheel, you are technically a roadway fatality. The driver was 99. He just died while sitting in slow moving traffic.

          Which brings me to my final point which is the rear seats in automobiles are less safe than the front seats. This is true for almost every vehicle on the road. You see _a lot_ of accidents where two 40 to 50 year old passengers are up front and two 70 to 80 year old passengers are in back. The ones up front survive. One or both passengers in the back typically die.

    • Terr_ 9 hours ago

      > It's clear that having half the casualty rate per distance traveled of the median human driver isn't acceptable.

      Even if we optimistically assume no "gotchas" in the statistics [0], distilling performance down to a casualty/injury/accident rate can still be dangerously reductive when the systems have a different distribution of failure modes which do/don't mesh with our other systems and defenses.

      A quick thought experiment to prove the point: Imagine a system which compared to human drivers had only half the rate of accidents... But many of those are because it unpredictably decides to jump the sidewalk curb and kill a targeted pedestrian.

      The raw numbers are encouraging, but it represents a risk profile that clashes horribly with our other systems of road design, car design, and what incidents humans are expecting and capable of preventing or recovering-from.

      [0] Ex: Automation is only being used on certain subsets of all travel which are the "easier" miles or circumstances than the whole gamut a human would handle.

    • gambiting 10 hours ago

      >>. How about a quarter? Or a tenth?

      The answer is zero. An airplane autopilot has increased the overall safety of airplanes by several orders of magnitude compared to human pilots, but literally no errors in its operation are tolerated, whether they are deadly or not. The exact same standard has to apply to cars or any automated machine for that matter. If there is any issue discovered in any car with this tech then it should be disabled worldwide until the root cause is found and eliminated.

      >> It's all too easy to see a potential future where many people die needlessly because technology that could save lives is regulated into a greatly reduced role.

      I really don't like this argument, because we could already prevent literally all automotive deaths tomorrow through existing technology and legislation and yet we are choosing not to do this for economic and social reasons.

      • esaym 10 hours ago

        You can't equate airplane safety with automotive safety. I worked at an aircraft repair facility doing government contracts for a number of years. In one instance, somebody lost the toilet paper holder for one of the aircraft. This holder was simply a piece of 10 gauge wire that was bent in a way to hold it and supported by wire clamps screwed to the wall. Making a new one was easy but since it was a new part going on the aircraft we had to send it to a lab to be certified to hold a roll of toilet paper to 9 g's. In case the airplane crashed you wouldn't want a roll of toilet paper flying around I guess. And that cost $1,200.

        • gambiting 10 hours ago

          No, I'm pretty sure I can in this regard - any automotive "autopilot" has to be held to the same standard. It's either zero accidents or nothing.

          • murderfs 4 hours ago

            This only works for aerospace because everything and everyone is held to that standard. It's stupid to hold automotive autopilots to the same standard as a plane's autopilot when a third of fatalities in cars are caused by the pilots being drunk.

      • travem 10 hours ago

        > The answer is zero

        If autopilot is 10x safer then preventing its use would lead to more preventable deaths and injuries than allowing it.

        I agree that it should be regulated and incidents thoroughly investigated, however letting perfect be the enemy of good leads to stagnation and lack of practical improvement and greater injury to the population as a whole.

        • penjelly 9 hours ago

          I'd challenge the legitimacy of the claim that it's 10x safer, or even safer at all. The safety data provided isn't compelling to me; it can be gamed or misrepresented in various ways, as pointed out by others.

          • yCombLinks 8 hours ago

            That claim wasn't made. It was a hypothetical, what if it was 10x safer? Then would people tolerate it.

        • gambiting 10 hours ago

          >>If autopilot is 10x safer then preventing its use would lead to more preventable deaths and injuries than allowing it.

          And yet whenever there is a problem with any plane autopilot it's preemptively disabled fleet wide and pilots have to fly manually even though we absolutely beyond a shadow of a doubt know that it's less safe.

          If an automated system makes a wrong decision and it contributes to harm/death then it cannot be allowed on public roads full stop, no matter how many lives it saves otherwise.

          • Aloisius 5 hours ago

            Depends on what one considers a "problem." As long as the autopilot's failure conditions and mitigation procedures are documented, the burden is largely shifted to the operator.

            Autopilot didn't prevent slamming into a mountain? Not a problem as long as it wasn't designed to.

            Crashed on landing? No problem, the manual says not to operate it below 500 feet.

            Runaway pitch trim? The manual says you must constantly be monitoring the autopilot and disengage it when it's not operating as expected and to pull the autopilot and pitch trim circuit breakers. Clearly insufficient operator training is to blame.

          • exe34 10 hours ago

            > And yet whenever there is a problem with any plane autopilot it's preemptively disabled fleet wide and pilots have to fly manually even though we absolutely beyond a shadow of a doubt know that it's less safe.

            just because we do something dumb in one scenario isn't a very persuasive reason to do the same in another.

            > then it cannot be allowed on public roads full stop, no matter how many lives it saves otherwise.

            ambulances sometimes get into accidents - we should ban all ambulances, no matter how many lives they save otherwise.

          • CrimsonRain 6 hours ago

            So your only concern is, when something goes wrong, need someone to blame. Who cares about lives saved. Vaccines can cause adverse effects. Let's ban all of them.

            If people like you were in charge of anything, we'd still be hitting rocks for fire in caves.

      • V99 6 hours ago

        Airplane autopilots follow a lateral & sometimes vertical path through the sky prescribed by the pilot(s). They are good at doing that. This does increase safety, because it frees up the pilot(s) from having to carefully maintain a straight 3d line through the sky for hours at a time.

        But they do not listen to ATC. They do not know where other planes are. They do not keep themselves away from other planes. Or the ground. Or a flock of birds. They do not handle emergencies. They make only the most basic control-loop decisions about the control-surface and power changes (if even equipped with autothrottle; otherwise that's still the meatbag's job) needed to follow the magenta line drawn by the pilot, given a very small set of input data (position, airspeed, current control positions, etc).

        The next nearest airplane is typically at least 3 miles laterally and/or 500' vertically away, because the errors allowed with all these components are measured in hundreds of feet.

        None of this is even remotely comparable to a car using a dozen cameras (or lidar) to make real-time decisions to drive itself around imperfect public streets full of erratic drivers and other pedestrians a few feet away.

        What it is a lot like is what Tesla actually sells (despite the marketing name). Yes it's "flying" the plane, but you're still responsible for making sure it's doing the right thing, the right way, and not going to hit anything or kill anybody.

      • peterdsharpe 6 hours ago

        > literally no errors in its operation are tolerated

        Aircraft designer here, this is not true. We typically certify to <1 catastrophic failure per 1e9 flight hours. Not zero.

      • Aloisius 6 hours ago

        Autopilots aren't held to a zero error standard let alone a zero accident standard.

    • croes 10 hours ago

      > It's clear that having half the casualty rate per distance traveled of the median human driver isn't acceptable.

      Were the Teslas driving under all weather conditions, at any location, like humans do, or is the data cherry-picked from easy travelling conditions?

    • jakelazaroff 10 hours ago

      I think we should not be satisfied with merely “better than a human”. Flying is so safe precisely because we treat any casualty as unacceptable. We should aspire to make automobiles at least that safe.

      • aantix 8 hours ago

        Before FSD is allowed on public roads?

        It’s a net positive, saving lives right now.

    • smitty1110 10 hours ago

      There are two things going on here with the average person that you need to overcome: that when Tesla dodges responsibility all anyone sees is a liar, and that people amalgamate all the FSD crashes and treat the system like a dangerous local driver that nobody can get off the road.

      Tesla markets FSD like it’s a silver bullet, and the name is truly misleading. The fine print says you need attention and all that. But again, people read “Full Self Driving” and all the marketing copy and think the system is assuming responsibility for the outcomes. Then a crash happens, Tesla throws the driver under the bus, and everyone gets a bit more skeptical of the system. Plus, doing that to a person rubs people the wrong way, and is in some respects a barrier to sales.

      Which leads to the other point: people are tallying up all the accidents and treating the system like a person, and wondering why this dangerous driver is still on the road. Most accidents with a dead pedestrian start with someone doing something stupid, which is when they assume all responsibility, legally speaking. Drunk, speeding, etc. Normal drivers in poor conditions slow down and drive carefully. People see this accident and treat FSD like a serial drunk driver. It's to the point that I know people who openly say they treat Teslas on the road like erratic drivers just for existing.

      Until Elon figures out how to fix his perception problem, the calls for investigations and for keeping his robotaxis off the road will only grow.

    • becquerel 10 hours ago

      My dream is of a future where humans are banned from driving without special licenses.

      • gambiting 10 hours ago

        So.........like right now you mean? You need a special licence to drive on a public road right now.

        • nkrisc 10 hours ago

          The problem is it’s obviously too easy to get one and keep one, based on some of the drivers I see on the road.

          • gambiting 10 hours ago

            That sounds like a legislative problem where you live. Sure, it could be fixed with overbearing technology, but we already have all the tools we need to fix it; we're just choosing not to for some reason.

        • seizethecheese 10 hours ago

          Geez, clearly they mean like a CDL

      • FireBeyond 10 hours ago

        And yet Tesla's FSD never passed a driving test.

    • danans 9 hours ago

      > The interesting question is how good self-driving has to be before people tolerate it.

      It's pretty simple: as good as it can be given available technologies and techniques, without sacrificing safety for cost or style.

      With AVs, function and safety should obviate concerns of style, cost, and marketing. If that doesn't work with your business model, well tough luck.

      Airplanes are far safer than cars yet we subject their manufacturers to rigorous standards, or seemingly did until recently, as the 737 max saga has revealed. Even still the rigor is very high compared to road vehicles.

      And AVs do have to be way better than people at driving because they are machines that have no sense of human judgement, though they operate in a human physical context.

      Machines run by corporations are less accountable than human drivers, not least because of the wealth and legal armies of those corporations, who may have interests other than making the safest possible AV.

      • mavhc 9 hours ago

        Surely the number of cars that can do it, and the price, also matter, unless you're going to ban private cars

        • danans 8 hours ago

          > Surely the number of cars that can do it, and the price, also matter, unless you're going to ban private cars

          Indeed, like this: the more cars sold that claim fully autonomous capability, and the more affordable they get, the higher the standards should be compared to their AV predecessors, even if they have long since eclipsed human drivers' safety record.

          If this is unpalatable, then let's assign 100% liability with steep monetary penalties to the AV manufacturer for any crash that happens under autonomous driving mode.

    • aithrowawaycomm 10 hours ago

      Many people don't (and shouldn't) take the "half the casualty rate" at face value. My biggest concern is that Waymo and Tesla are juking the stats to make self-driving cars seem safer than they really are. I believe this is largely an unintentional consequence of bad actuarial science built on bad qualitative statistics; the worst kind of lying with numbers is lying to yourself.

      The biggest gap in these studies: I have yet to see a comparison with human drivers that filters out DUIs, reckless speeding, or mechanical failures. Without doing this it is simply not a fair comparison, because:

      1) Self-driving cars won't end drunk driving unless it's made mandatory by outlawing manual driving or ignition is tied to a breathalyzer. Many people will continue to make the dumb decision to drive themselves home because they are drunk and driving is fun. This needs regulation, not technology. And DUIs need to be filtered from the crash statistics when comparing with Waymo.

      2) A self-driving car which speeds and runs red lights might well be more dangerous than a similar human, but the data says nothing about this since Waymo is currently on their best behavior. Yet Tesla's own behavior and customers prove that there is demand for reckless self-driving cars, and manufacturers will meet the demand unless the law steps in. Imagine a Waymo competitor that promises Uber-level ETAs for people in a hurry. Technology could in theory solve this but in practice the market could make things worse for several decades until the next research breakthrough. Human accidents coming from distraction are a fair comparison to Waymo, but speeding or aggressiveness should be filtered out. The difficulty of doing so is one of the many reasons I am so skeptical of these stats.

      3) Mechanical failures are a hornets' nest of ML edge cases that might work in the lab but fail miserably on the road. Currently it's not a big deal because the cars are shiny and new. Eventually we'll have self-driving clunkers owned by drivers who don't want to pay for the maintenance.

      And that's not even mentioning that Waymos are not self-driving: they rely on close remote oversight to guide the AI through the many billions of common-sense problems that computers will not be able to solve for at least the next decade, probably much longer. True self-driving cars will continue to make inexplicably stupid decisions: these machines are still much dumber than lizards. Stories like "the Tesla slammed into an overturned tractor trailer because the AI wasn't trained on overturned trucks" are a huge problem, and society will not let Tesla try to launder it away with statistics.

      Self-driving cars might end up saving lives. But would they save more lives than adding mandatory breathalyzers and GPS-based speed limits? And if market competition overtakes business ethics, would they cost more lives than they save? The stats say very little about this.
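      To make the filtering point concrete, here is a minimal sketch (with entirely made-up numbers) of how removing impaired-driving crashes from the human baseline changes the comparison; the mileages, crash counts, and impaired share below are illustrative assumptions, not real data.

      ```python
      # Minimal sketch: how the human-driver baseline shifts once impaired-driving
      # crashes are excluded. All numbers are invented for illustration only.

      def crash_rate_per_100m_miles(crashes: float, miles: float) -> float:
          """Crashes per 100 million miles driven."""
          return crashes / miles * 100_000_000

      # Hypothetical aggregate human statistics.
      human_miles = 3.2e12        # total miles driven (assumed)
      human_crashes = 6.0e6       # total reported crashes (assumed)
      impaired_share = 0.30       # share of crashes involving impairment (assumed)

      # Hypothetical AV statistics, mostly "easy" (fair-weather, mapped) miles.
      av_miles = 5.0e7
      av_crashes = 75

      baseline_all = crash_rate_per_100m_miles(human_crashes, human_miles)
      baseline_unimpaired = crash_rate_per_100m_miles(
          human_crashes * (1 - impaired_share), human_miles
      )
      av_rate = crash_rate_per_100m_miles(av_crashes, av_miles)

      print(f"human baseline (all drivers): {baseline_all:.0f} per 100M miles")        # 188
      print(f"human baseline (unimpaired):  {baseline_unimpaired:.0f} per 100M miles")  # 131
      print(f"AV rate (easy miles only):    {av_rate:.0f} per 100M miles")              # 150
      # The AV looks better than the all-driver baseline but worse than the
      # unimpaired one, which is exactly the ambiguity described above.
      ```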

    • alkonaut 8 hours ago

      > How about a quarter? Or a tenth?

      Probably closer to the latter. The "skin in the game" (physically) argument makes me more willing to accept drunk drivers than greedy manufacturers when it comes to making mistakes or being negligent.

    • __loam 8 hours ago

      The problem is that Tesla is way behind the industry standards here and it's misrepresenting how good their tech is.

    • sebzim4500 8 hours ago

      >It's clear that having half the casualty rate per distance traveled of the median human driver isn't acceptable.

      Are you sure? Right now FSD is active with no one actually knowing its casualty rate, and for the most part the only people upset about it are terminally online people on Twitter or luddites on HN.

    • iovrthoughtthis 10 hours ago

      at least 10x better than a human

      • becquerel 10 hours ago

        I believe Waymo has already beaten this metric.

        • szundi 10 hours ago

          Waymo is limited to cities that their engineers have to map, and those maps have to be maintained.

          You cannot put a Waymo in a new city before that. With Tesla, what you get is universal.

          • RivieraKid 10 hours ago

            Waymo is robust to removing the map / lidars / radars / cameras or adding inaccuracies to any of these 4 inputs.

            (Not sure if this is true for the production system or the one they're still working on.)

  • alexjplant 11 hours ago

    > The collision happened because the sun was in the Tesla driver's eyes, so the Tesla driver was not charged, said Raul Garcia, public information officer for the department.

    Am I missing something or is this the gross miscarriage of justice that it sounds like? The driver could afford a $40k vehicle but not $20 polarized shades from Amazon? Negligence is negligence.

    • smdyc1 11 hours ago

      Not to mention that when you can't see, you slow down? Does the self-driving system do that sufficiently in low visibility? Clearly not if it hit a pedestrian with enough force to kill them.

      The article mentions that Teslas only use cameras in their system and that Musk believes they are enough, because humans only use their eyes. Well firstly, don't you want self-driving systems to be better than humans? Secondly, humans don't just respond to visual cues as a computer would. We also hear, and we respond to feelings, like the sudden surge of anxiety or fear as our visibility is suddenly reduced at high speed.

      • jsight 10 hours ago

        Unfortunately there is also an AI training problem embedded in this. As Mobileye says, there are a lot of driver decisions that are common, but wrong. The famous example is rolling stops, but also failing to slow down for conditions is really common.

        It wouldn't shock me if they don't have nearly enough training samples of people slowing appropriately for visibility with eyes, much less slowing for the somewhat different limitations of cameras.

      • hshshshshsh 10 hours ago

        I think one of the reasons they focus only on vision is that basically the entire transportation infrastructure is designed with human eyes as the primary channel for information.

        Useful information for driving is communicated through images, in the form of road signs, traffic signals, etc.

        • nkrisc 10 hours ago

          I dunno, knowing the exact relative velocity of the car in front of you seems like it could be useful and is something humans can’t do very well.

          I’ve always wanted a car that shows my speed and the relative speed (+/-) of the car in front of me. My car’s cruise control can maintain a set distance so obviously it’s capable of it but it doesn’t show it.
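          The underlying computation is trivial given the following-distance the adaptive cruise sensor already measures; a minimal sketch (the function name and the 0.1 s sampling interval are assumptions for illustration):

          ```python
          # Minimal sketch: estimate the lead car's relative speed from successive
          # following-distance samples, like the ones adaptive cruise already has.

          def relative_speed_mps(distance_now_m: float, distance_prev_m: float,
                                 dt_s: float = 0.1) -> float:
              """Positive: lead car pulling away. Negative: gap closing."""
              return (distance_now_m - distance_prev_m) / dt_s

          # Example: the gap shrinks from 30.0 m to 29.4 m over 0.1 s.
          closing = relative_speed_mps(29.4, 30.0)   # -6.0 m/s
          print(f"relative speed: {closing:+.1f} m/s ({closing * 3.6:+.1f} km/h)")
          # A real display would low-pass filter this to suppress sensor noise.
          ```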

        • SahAssar 10 hours ago

          We are "designed" (via evolution) to perceive and understand the environment around us. The signage is designed to be easily readable for us.

          The models that drive these cars clearly either have some more evolution to do or for us to design the world more to their liking.

          • hshshshshsh 6 hours ago

            Yes. I was talking about why Tesla chose to use vision, since they can't design the transport infra to their liking, at least for now.

      • pmorici 10 hours ago

        The Tesla knows when its cameras are blinded by the sun and acts accordingly or tells the human to take over.

        • kelnos 10 hours ago

          Except when it doesn't actually do that, I guess? Like when this pedestrian was killed?

        • eptcyka 6 hours ago

          If we were able to know when a neural net is failing to categorize something, wouldn’t we get AGI for free?

      • plorg 10 hours ago

        I would think one relevant factor is that human vision is different than and in some ways significantly better than cameras.

    • jabroni_salad 10 hours ago

      Negligence is negligence but people tend to view vehicle collisions as "accidents", as in random occurrences dealt by the hand of fate completely outside of anyone's control. As such, there is a chronic failure to charge motorists with negligence, even when they have killed someone.

      If you end up in court, just ask for a jury and you'll be okay. I'm pretty sure this guy didn't even go to court; it sounds like it got prosecutor's discretion.

    • theossuary 11 hours ago

      You know what they say, if you want to kill someone in the US, do it in a car.

      • littlestymaar 11 hours ago

        Crash Course: If You Want to Get Away With Murder Buy a Car, by Woodrow Phoenix

      • immibis 8 hours ago

        In the US it seems you'd do it with a gun, but in Germany it's cars.

        There was this elderly driver who mowed down a family in a bike lane waiting to cross the road in Berlin, driving over the barriers between the bike lane and the car lane because the cars in the car lane were too slow. Released without conviction - it was an unforeseeable accident.

    • renewiltord 5 hours ago

      Yeah, I have a couple of mirrors placed around my car that reflect light into my face so that I can get out of running into someone. Tbh I understand why they do this. Someone on HN explained it to me: Yield to gross tonnage. So I just drive where I want. If other people die, that’s on them: the graveyards are full of people with the right of way, as people say.

    • macintux 11 hours ago

      I have no idea what the conditions were like for this incident, but I’ve blown through a 4-way stop sign when the sun was setting. There’s only so much sunglasses can do.

      • eptcyka 11 hours ago

        If environmental factors incapacitate you, should you not slow down or stop?

      • ablation 10 hours ago

        Yet so much more YOU could have done, don’t you think?

      • singleshot_ 8 hours ago

        > There’s only so much sunglasses can do.

        For everything else, you have brakes.

      • alexjplant 10 hours ago

        ¯\_(ツ)_/¯ If I can't see because of rain, hail, intense sun reflections, frost re-forming on my windshield, etc. then I pull over and put my flashers on until the problem subsides. Should I have kept the 4700 lb vehicle in fifth gear at 55 mph without the ability to see in front of me in each of these instances? I submit that I should not have and that I did the right thing.

      • vortegne 10 hours ago

        You shouldn't be on the road then? If you can't see, you should slow down. If you can't handle driving in the given conditions safely for everyone involved, you should slow down or stop. If everybody drove like you, there'd be a whole lot more death on the roads.

      • Doctor_Fegg 9 hours ago

        Yes, officer, this one right here.

      • IshKebab 40 minutes ago

        I know right? Once I got something in my eye so I couldn't see at all, but I decided that since I couldn't do anything about it the best thing was to keep driving. I killed a few pedestrians but... eh, what was I going to do?

  • rootusrootus 31 minutes ago

    I'm on my second free FSD trial, which just started for me today. I gave it another shot, and it seems largely similar to the last free trial they gave: fun party trick, surprisingly good, right up until it's not. A hallmark of AI everywhere is how great it is, and just how abruptly and catastrophically it occasionally fails.

    Please, if you're going to try it, keep both hands on the wheel and your foot ready for the brake. When it goes off the rails, it usually does so in surprising ways with little warning and little time to correct. And since it's so good much of the time, you can get lulled into complacence.

    I never really understand the comments from people who think it's the greatest thing ever and makes their drive less stressful. Does the opposite for me. Entertaining but exhausting to supervise.

  • massysett a day ago

    "Tesla says on its website its FSD software in on-road vehicles requires active driver supervision and does not make vehicles autonomous."

    Despite it being called "Full Self-Driving."

    Tesla should be sued out of existence.

    • bagels a day ago

      It didn't always say that. It used to be more misleading, and claim that the cars have "Full Self Driving Hardware", with an exercise for the reader to deduce that it didn't come with "Full Self Driving Software" too.

    • hedora a day ago

      Our non-Tesla has steering assist. In my 500 miles of driving before I found the buried setting that let me completely disable it, the active safety systems never made it more than 10-20 miles without attempting to actively steer the car left-of-center or into another vehicle, even when it was "turned off" via the steering wheel controls.

      When it was turned on according to the dashboard UI, things were even worse. It'd disengage at least once every ten miles. However, there wasn't an alarm when it disengaged, just a tiny gray blinking icon on the dash. A second or so after the blinking, it'd beep once and then pull crap like attempting a sharp left on an exit ramp that curved to the right.

      I can't imagine this model kills fewer people per mile than Tesla FSD.

      I think there should be a recall, but it should hit pretty much all manufacturers shipping stuff in this space.

      • noapologies a day ago

        I'm not sure how any of this is related to the article. Does this non-Tesla manufacturer claim that their steering assist is "full self driving"?

        If you believe their steering assist kills more people than Tesla FSD then you're welcome, encouraged even, to file a report with the NHTSA here [1].

        [1] https://www.nhtsa.gov/report-a-safety-problem

      • HeadsUpHigh 21 hours ago

        I've had a similar experience with a Hyundai with steering assist. It would get confused by messed-up road lines all the time. Meanwhile it had no problem climbing an unmarked road curb. And it would constantly try to nudge the steering wheel, meaning I had to put force into holding it in place all the time, which was extra fatigue.

        Oh and it was on by default, meaning I had to disable it every time I turned the car on.

        • shepherdjerred 6 hours ago

          What model year? I'm guessing it's an older one?

          My Hyundai is a 2021 and I have to turn on the steering assist every time which I find annoying. My guess is that you had an earlier model where the steering assist was more liability than asset.

          It's understandable that earlier versions of this kind of thing wouldn't function as well, but it is very strange that they would have it on by default.

      • shepherdjerred a day ago

        My Hyundai has a similar feature and it's excellent. I don't think you should be painting with such a broad brush.

      • gamblor956 a day ago

        If what you say is true, name the car model and file a report with the NHTSA.

    • m463 a day ago

      I believe it's called "Full Self Driving (Supervised)"

      • maeil 21 hours ago

        The part in parentheses has only recently been added.

        • tharant 10 hours ago

          Prior to that, FSD was labeled ‘Full Self Driving (Beta)’ and enabling it triggered a modal that required two confirmations explaining that the human driver must always pay attention and is ultimately responsible for the vehicle. The feature also had/has active driver monitoring (via both vision and steering-torque sensors) that would disengage FSD if the driver ignored the loud audible alarm to “Pay attention”. Since changing the label to ‘(Supervised)’, the audible nag is significantly reduced.

        • rsynnott 14 hours ago

          And is, well, entirely contradictory. An absolute absurdity; what happens when the irresistible force of the legal department meets the immovable object of marketing.

    • fhdsgbbcaA a day ago

      “Sixty percent of the time, it works every time”

    • systemvoltage a day ago

      That seems extreme. They’re the biggest force to combat Climate Change. Tesla existing is good for the world.

      • mbernstein a day ago

        Nuclear power adoption is the largest force to combat climate change.

        • Retric a day ago

          Historically, hydro has prevented far more CO2 than nuclear by a wide margin. https://ourworldindata.org/grapher/electricity-prod-source-s...

          Looking forward, nuclear isn't moving the needle. Solar grew more in 2023 alone than nuclear has grown since 1995. Worse, nuclear can't ramp up significantly in the next decade simply due to construction bottlenecks. 40 years ago nuclear could have played a larger role, but we wasted that opportunity.

          It’s been helpful, but suggesting it’s going to play a larger role anytime soon is seriously wishful thinking at this point.

          • dylan604 a day ago

            > Historically, hydro has

            done harm to the ecosystems where they are installed. This is quite often overlooked and brushed aside.

            There is no single method of generating electricity without downsides.

            • Retric a day ago

              We've been building dams since long before we knew about electricity. At that point, tacking hydropower onto a dam that would exist either way has basically zero environmental impact.

              Pure hydropower dams definitely do have significant environmental impact.

              • dylan604 9 hours ago

                I just don't get the premise of your argument. Are you honestly saying that stopping the normal flow of water has no negative impact on the ecosystem? What about the area behind the dam that is now flooded? What about the area in front of the dam where there is now no way to traverse back up stream?

                Maybe you're just okay with and willing to accept that kind of change. That's fine, just as some people are okay with the risk of nuclear or the use of land for solar/wind. But to flat out deny that it has an impact is dishonest discourse at best.

                • Retric 7 hours ago

                  It’s the same premise as rooftop solar. You’re building a home anyway so adding solar panels to the roof isn’t destroying pristine habitat.

                  People build dams for many reasons not just electricity.

                  Having a reserve of rainwater is a big deal in California, Texas, etc. Letting millions of cubic meters more water flow into the ocean would make the water problems much worse in much of the world. Flood control is similarly a serious concern. Blaming 100% of the issues from dams on Hydropower is silly if outlawing hydropower isn’t going to remove those dams.

          • mbernstein a day ago

            History is a great reference, but it doesn't solve our problems now. Just because hydro has prevented more CO2 until now doesn't mean that hydro plus solar is the combination that delivers abundant, clean energy. There are power storage challenges, and storage mechanisms aren't carbon neutral. Even if we assume that nuclear, wind, and solar (without storage) all have the same carbon footprint - I believe nuclear is lower than solar and pretty much equivalent to wind - you have to add the storage mechanisms for scenarios where there's no wind, sun, or water.

            All of the above are significantly better than burning gas or coal - but nuclear is the clear winner from an CO2 and general availability perspective.

            • Retric a day ago

              Seriously scaling nuclear would involve batteries. Nuclear has issues being cost effective at 80+% capacity factors. When you start talking sub 40% capacity factors the cost per kWh spirals.

              The full cost of operating a nuclear reactor for just 5 hours per day is higher than that of a power plant running at an 80% capacity factor and charging batteries.
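              A rough way to see why capacity factor dominates: most of a nuclear plant's cost is fixed, so cost per kWh scales roughly inversely with the hours it actually runs. A minimal sketch with illustrative numbers (the plant size and cost figures are assumptions, not actual plant economics):

              ```python
              # Minimal sketch: fixed costs spread over fewer operating hours drive up
              # cost per kWh. All dollar figures and the plant size are illustrative.

              def cost_per_kwh(annual_fixed_cost_usd: float, capacity_mw: float,
                               capacity_factor: float, variable_cost_per_kwh: float) -> float:
                  kwh_per_year = capacity_mw * 1_000 * 8_760 * capacity_factor
                  return annual_fixed_cost_usd / kwh_per_year + variable_cost_per_kwh

              plant_mw = 1_000
              fixed = 600e6      # assumed annual capital recovery + fixed O&M
              variable = 0.01    # assumed fuel + variable O&M per kWh

              for cf in (0.90, 0.40, 5 / 24):   # baseload, mid, ~5 hours per day
                  print(f"capacity factor {cf:.2f}: "
                        f"${cost_per_kwh(fixed, plant_mw, cf, variable):.3f}/kWh")
              # Roughly $0.086/kWh at 90%, $0.181/kWh at 40%, $0.339/kWh at ~21%.
              ```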

              • mbernstein a day ago

                > Seriously scaling nuclear would involve batteries. Nuclear has issues being cost effective at 80+% capacity factors.

                I assume you mean that sub 80% capacity nuclear has issues being cost effective (which I agree is true).

                You could pair the baseload nuclear with renewables during peak times and reduce battery dependency for scaling and maintaining higher utilization.

                • Retric a day ago

                  I meant even if you’re operating nuclear as baseload power looking forward the market rate for electricity looks rough without significant subsidies.

                  Daytime you’re facing solar head to head which is already dropping wholesale rates. Off peak is mostly users seeking cheap electricity so demand at 2AM is going to fall if power ends up cheaper at noon. Which means nuclear needs to make most of its money from the duck curve price peaks. But batteries are driving down peak prices.

                  Actually cheap nuclear would make this far easier, but there’s no obvious silver bullet.

          • UltraSane a day ago

            That just goes to show how incredibly short-sighted humanity is. We knew about the risk of massive CO2 emissions from burning fossil fuels but just ignored it while irrationally demonizing nuclear energy because it is scawy. If humans were sane and able to plan, Earth would be getting 100% of its electricity from super-efficient 7th-generation nuclear reactors.

            • mbernstein a day ago

              When talking to my parents, I hear a lot about Jane Fonda and the China Syndrome as far as the fears of nuclear power.

              She's made the same baseless argument for a long time: "Nuclear power is slow, expensive — and wildly dangerous"

              https://ourworldindata.org/nuclear-energy#:~:text=The%20key%....

              CO2 issues aside, it's just outright safer than all forms of coal and gas and about as safe as solar and wind, all three of which are a bit safer than hydro (still very safe).

            • Retric a day ago

              I agree costs could have dropped significantly, but I doubt 100% nuclear was ever going to happen.

              Large scale dams will exist to store water, tacking hydroelectric on top of them is incredibly cost effective. Safety wise dams are seriously dangerous, but they also save a shocking number of lives by reducing flooding.

            • valval 21 hours ago

              There was adequate evidence that nuclear is capable of killing millions of people and causing large scale environmental issues.

              It’s still not clear today what effect CO2 or fossil fuel usage has on us.

              • UltraSane 5 hours ago

                Nuclear reactors are not nuclear bombs. Nuclear reactors are very safe on a joules-per-death basis.

        • porphyra a day ago

          I think solar is a lot cheaper than nuclear, even if you factor in battery storage.

        • ivewonyoung a day ago

          Are you proposing that cars should have nuclear reactors in them?

          Teslas run great on nuclear power, unlike fossil fuel ICE cars.

          • mbernstein a day ago

            Of course not.

            • dylan604 a day ago

              Why not? We just need to use Mr Fusion in everything

              https://backtothefuture.fandom.com/wiki/Mr._Fusion

            • ivewonyoung a day ago

              A world where nuclear power helped with climate change would also be a world where Teslas eliminate a good chunk of harmful pollution by allowing cars to be powered by nuclear, so I'm not sure what point you were trying to make.

              Even at this minute, Teslas are moving around powered by nuclear power.

      • gamblor956 a day ago

        Every year Musk personally flies enough in his private jet to undo the emissions savings of over 100,000 EVs...

        Remember that every time you get in your Tesla that you're just a carbon offset for a spoiled billionaire.

        • enslavedrobot a day ago

          Hmmmm, the average car uses 489 gallons a year. A large private jet uses 500 gallons an hour. There are 8,760 hours in a year.

          So even if Elon lived in a jet that flew 24/7, you'd only be very wrong. Since that's obviously not the case, you're colossally and completely wrong.
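          Writing the arithmetic out, using gallons of fuel as a crude proxy for emissions (an approximation, since jet fuel and gasoline differ somewhat per gallon, and an EV's savings aren't exactly one car's annual fuel burn):

          ```python
          # Back-of-the-envelope check of the figures above. Gallons serve as a
          # rough emissions proxy; all inputs come from the comment, not real data.

          car_gallons_per_year = 489     # average US car (from the comment above)
          jet_gallons_per_hour = 500     # large private jet (from the comment above)
          hours_per_year = 365 * 24      # 8,760

          jet_gallons_if_flying_nonstop = jet_gallons_per_hour * hours_per_year
          cars_equivalent = jet_gallons_if_flying_nonstop / car_gallons_per_year

          print(f"{jet_gallons_if_flying_nonstop:,} gallons/year if flying 24/7")
          print(f"~ {cars_equivalent:,.0f} cars' worth of fuel, not 100,000")
          # A realistic few hundred flight hours a year is a small fraction of this.
          ```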

          Remember that the next time you try to make an argument that Tesla is not an incredible force for decarbonization.

          • briansm 18 hours ago

            I think you missed the 'EV' part of the post.

        • valval 21 hours ago

          As opposed to all the other execs whose companies aren’t a force to combat climate change and still fly their private jets.

          But don’t get me wrong, anyone and everyone can fly their private jets if they can afford such things. They will already have generated enough taxes at that point that they’re offsetting thousands or millions of Prius drivers.

          • gamblor956 5 hours ago

            > As opposed to all the other execs

            Yes, actually.

            Other execs fly as needed because they recognize that in this wondrous age of the internet, teleconferencing can replace most in-person meetings. Somehow, only a supposed technology genius like Elon Musk thinks that in-person meetings are required for everything.

            Other execs also don't claim to be trying to save the planet while doing everything in their power to exploit its resources or destroy natural habitats.

      • gitaarik 11 hours ago

        As I understand it, electric cars are more polluting than non-electric ones: first, the manufacturing and resource footprint is larger; and second, because they are heavier (due to the batteries), the tires wear down much faster and need replacing more often, supposedly by enough that their emission-free driving doesn't compensate for it.

        Besides, electric vehicles still seem to be very impractical compared to normal cars, because they can't drive very far without needing a lengthy recharge.

        So I think the eco-friendliness of electric vehicles is maybe like the full self-driving system: nice promises but no delivery.

        • theyinwhy 9 hours ago

          That has been falsified by more studies than I can keep track of. And yes, if you charge your EV with electricity produced from oil, the climate effect will be non-optimal.

        • djaychela 5 hours ago

          Pretty much everything you've said here isn't true. You are just repeating tropes that are fossil fuel industry FUD.

  • rKarpinski 11 hours ago

    'Pedestrian' in this context seems pretty misleading

    "Two vehicles collided on the freeway, blocking the left lane. A Toyota 4Runner stopped, and two people got out to help with traffic control. A red Tesla Model Y then hit the 4Runner and one of the people who exited from it. "

    edit: Parent article was changed... I was referring to the title of the NPR article.

    • danans 11 hours ago

      > Pedestrian' in this context seems pretty misleading

      What's misleading? The full quote:

      "A red Tesla Model Y then hit the 4Runner and one of the people who exited from it. A 71-year-old woman from Mesa, Arizona, was pronounced dead at the scene."

      If you exit a vehicle, and are on foot, you are a pedestrian.

      I wouldn't expect FSD's object recognition system to treat a human who has just exited a car differently than a human walking across a crosswalk. A human on foot is a human on foot.

      However, from the sound of it, the object recognition system didn't even see the 4Runner, much less a person, so perhaps there's a more fundamental problem with it?

      Perhaps this is something that lidar or radar, if the car had them, would have helped the OR system to see.

      • potato3732842 9 hours ago

        Teslas were famously poor at detecting partial lane obstructions for a long time. I wonder if that's what happened here.

      • jfoster 10 hours ago

        The description has me wondering if this was definitely a case where FSD was being used. There have been other cases in the past where drivers had an accident and claimed they were using autopilot when they actually were not.

        I don't know for sure, but I would think that the car could detect a collision. I also don't know for sure, but I would think that FSD would stop once a collision has been detected.

        • pell 10 hours ago

          > There have been other cases in the past where drivers had an accident and claimed they were using autopilot when they actually were not.

          Wouldn’t this be protocoled by the event data recorder?

        • danans 9 hours ago

          > There have been other cases in the past where drivers had an accident and claimed they were using autopilot when they actually were not.

          If that were the case here, there wouldn't be a government probe, right? It would be a normal "multi car pileup with a fatality" and added to statistics.

          With the strong incentive on the part of both the driver and Tesla to lie about this, there should be strong regulations around event data recorders [1] for self-driving systems, and huge penalties for violating them. A search across that site doesn't return a hit for the word "retention" but it's gotta be expressed in some way there.

          1. https://www.ecfr.gov/current/title-49/subtitle-B/chapter-V/p...

        • FireBeyond 10 hours ago

          > FSD would stop once a collision has been detected.

          Fun fact, at least until very recently, if not even to this moment, AEB (emergency braking) is not a part of FSD.

          • modeless 5 hours ago

            I believe AEB can trigger even while FSD is active. Certainly I have seen the forward collision warning trigger during FSD.

        • bastawhiz 10 hours ago

          Did the article say the Tesla didn't stop after the collision?

          • jfoster 10 hours ago

            If it hit the vehicle and then hit one of the people who had exited the vehicle with enough force for it to result in a fatality, it sounds like it might not have applied any braking.

            Of course, that depends on the speed it was traveling at to begin with.

    • Retric 11 hours ago

      More clarity may change people’s opinion of the accident, but IMO pedestrian meaningfully represents someone who is limited to human locomotion and lacks any sort of protection in a collision.

      Which seems like a reasonable description of the type of failure involved in the final few seconds before impact.

      • rKarpinski 10 hours ago

        Omitting that the pedestrian was on a freeway meaningfully mis-represents the situation.

        • Retric 7 hours ago

          People walking on freeways may be rare from the perspective of an individual driver but not a self driving system operating on millions of vehicles.

          • rKarpinski 7 hours ago

            What does that have to do with the original article's misleading title?

            • Retric 7 hours ago

              I don't think it's misleading. It's a title, not some hundred-word description of what exactly happened.

              Calling them motorists would definitely be misleading by comparison. Using the simple "fatal crash" of the linked title implies the other people might be responsible, which is misleading.

              Using accident but saying Tesla was at fault could open them up to liability and therefore isn’t an option.

              • rKarpinski 6 hours ago

                > I don’t think it’s misleading. It’s a tile not some hundred word description of what exactly happened.

                "Pedestrian killed on freeway" instead of "pedestrian killed" doesn't take 100 words and doesn't give the impression Tesla's are mowing people down on crosswalks (although that's a feature to get clicks, not a bug).

                • Retric 6 hours ago

                  Without context that implies the pedestrians shouldn’t have been on the freeway.

                  It’s not an issue for Tesla, but it does imply bad things about the victims.

                  • rKarpinski 6 hours ago

                    A title of "U.S. to probe Tesla's 'Full Self-Driving' system after pedestrian killed on freeway" would in no way imply bad things about the pedestrian who was killed.

                    • Retric 4 hours ago

                      It was my first assumption when I read "pedestrian on freeway" in someone's comment without context. Possibly due to the Uber self-driving fatality.

                      Stranded motorists who exit their vehicle, construction workers, first responders, tow truck drivers, etc are the most common victims but that’s not the association I had.

      • potato3732842 9 hours ago

        This sort of framing you're engaging in is exactly what the person you're replying to is complaining about.

        Yeah, the person who got hit was technically a pedestrian, but just using that word with no other context doesn't convey that it was a pedestrian on a limited-access highway vs somewhere pedestrians are allowed and expected. Without additional explanation, people assume normalcy and think that the pedestrian was crossing a city street or doing something pedestrians do all the time and are expected to do, when that is very much not what happened here.

        • Retric 8 hours ago

          Dealing with people on freeways is the kind of edge case humans aren’t good at but self driving cars have zero excuses. It’s a common enough situation that someone will exit a vehicle after a collision to make it a very predictable edge case.

          Remember all of the bad press Uber got when a pedestrian was struck and killed walking their bike across the middle of a street at night? People are going to be on limited access freeways and these systems need to be able to deal with it. https://www.bbc.com/news/technology-54175359

          • potato3732842 5 hours ago

            I'd make the argument that people are very good at dealing with random things that shouldn't be on freeways as long as they don't coincide with blinding sun or other visual impairment.

            Tesla had a long standing issue detecting partial lane obstructions. I wonder if the logic around that has anything to do with this.

    • neom 11 hours ago

      That is the correct use of pedestrian as a noun.

      • echoangle 11 hours ago

        Sometimes using a word correctly is still confusing because it’s used in a different context 90% of the time.

      • szundi 11 hours ago

        I think parent commenter emphasized the context.

        Leaving out context that would otherwise change the interpretation most people (or the targeted people) would make is the main way to mislead those people without technically lying.

      • sebzim4500 8 hours ago

        That's why he said misleading rather than an outright lie. He is not disputing that it is technically correct to refer to the deceased as a pedestrian, but this scenario (someone out of their car on a freeway) is not what is going to spring to the mind of someone just reading the headline.

      • varenc 10 hours ago

        By a stricter definition, a pedestrian is one who travels by foot. Of course, they are walking, but they’re traveling via their car, so by some interpretations you wouldn’t call them a pedestrian. You could call them a “motorist” or a “stranded vehicle occupant”.

        For understanding the accident it does seem meaningful that they were motorists that got out of their car on a highway and not pedestrians at a street crossing. (Still inexcusable of course, but changes the context)

        • bastawhiz 10 hours ago

          Cars and drivers ideally shouldn't hit people who exited their vehicles after an accident on a highway. Identifying and avoiding hazards is part of driving.

        • neom 10 hours ago

          As far as I am aware, pes doesn't carry an inherent meaning of travel. Pedestrian just means "on foot"; they don't need to be moving, they're just not in a carriage. As an aside, distinguishing a person's mode of presence is precisely what reports aim to capture.

          (I also do tend to avoid this level of pedantry, the points here are all well taken to be clear. I do think the original poster was fine in their comment, I was just sayin' - but this isn't a cross I would die on :))

  • frabjoused 11 hours ago

    I don't understand why this debate/probing is not just data driven. Driving is all big data.

    https://www.tesla.com/VehicleSafetyReport

    This report does not include fatalities, which seems to be the key point in question. Unless the above report has some bias or is false, Teslas in autopilot appear 10 times safer than the US average.

    Is there public data on deaths reported by Tesla?

    And otherwise, if the stats say it is safer, why is there any debate at all?

    • jsight 10 hours ago

      The report from Tesla is very biased. It doesn't normalize for the difficulty of the conditions involved, and is basically for marketing purposes.

      IMO, the challenge for NHTSA is that they can get tremendous detail from Tesla but not from other makes. This will make it very difficult for them to get a solid baseline for collisions due to glare in non-FSD equipped vehicles.

    • JTatters 10 hours ago

      Those statistics are incredibly misleading.

      - It is safe to assume that the vast majority of autopilot miles are on highways (although Tesla don't release this information).

      - By far the safest roads per mile driven are highways.

      - Autopilot will engage least during the most dangerous conditions (heavy rain, snow, fog, nighttime). (A toy illustration of this mix effect follows below.)
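      A toy illustration of the mix effect (all numbers invented): even if the system were no better than humans on either road type, a highway-heavy mileage mix alone makes its aggregate rate look far better.

      ```python
      # Toy example with invented numbers: identical per-road-type crash rates,
      # but a highway-heavy mileage mix makes the aggregate "autopilot" figure
      # look several times better than the overall human average.

      rates = {"highway": 0.5, "city": 4.0}        # crashes per million miles (assumed)
      human_mix = {"highway": 0.4, "city": 0.6}    # share of miles driven (assumed)
      autopilot_mix = {"highway": 0.95, "city": 0.05}

      def aggregate_rate(mix: dict) -> float:
          return sum(rates[road] * share for road, share in mix.items())

      print(f"human aggregate:     {aggregate_rate(human_mix):.2f} per million miles")
      print(f"autopilot aggregate: {aggregate_rate(autopilot_mix):.2f} per million miles")
      # 2.60 vs 0.68 here, even though the per-road rates are exactly the same.
      ```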

    • notshift 10 hours ago

      Without opening the link, the problem with every piece of data I’ve seen from Tesla is they’re comparing apples to oranges. FSD won’t activate in adverse driving conditions, aka when accidents are much more likely to occur. And/or drivers are choosing not to use it in those conditions.

    • bastawhiz 10 hours ago

      Autopilot is not FSD.

      • frabjoused 10 hours ago

        That's a good point. Are there no published numbers on FSD?

    • FireBeyond 10 hours ago

      > Unless the above report has some bias or is false

      Welcome to Tesla.

      The report measures accidents in FSD mode. Qualifiers to FSD mode: the conditions, weather, road, location, traffic all have to meet a certain quality threshold before the system will be enabled (or not disable itself). Compare Sunnyvale on a clear spring day to Pittsburgh December nights.

      There's no qualifier to the "comparison": all drivers, all conditions, all weather, all roads, all location, all traffic.

      It's not remotely comparable, and Tesla's data people are not that stupid, so it's willfully misleading.

      > This report does not include fatalities

      It also doesn't consider any incident where there was not airbag deployment to be an accident. Sounds potentially reasonable until you consider:

      - first gen airbag systems were primitive: collision exceeds threshold, deploy. Currently, vehicle safety systems consider duration of impact, speeds, G-forces, amount of intrusion, angle of collision, and a multitude of other factors before deciding what, if any, systems to fire (seatbelt tensioners, airbags, etc.) So hit something at 30mph with the right variables? Tesla: "this is not an accident".

      - Tesla also does not consider "incident was so catastrophic that airbags COULD NOT deploy" to be an accident, because "airbags didn't deploy". This umbrella could also include egregious cases of "systems failed to deploy for any reason, up to and including poor assembly-line quality control" as also not an accident and also "not counted".

      > Is there public data on deaths reported by Tesla?

      They do not publish any.

      They also refuse to give the public much of any data beyond these carefully curated numbers. Hell, NHTSA/NTSB also mostly have to drag heavily redacted data kicking and screaming out of Tesla's hands.

  • testfrequency 10 hours ago

    I was in a Model 3 Uber yesterday and my driver had to swerve onto and up a curb to avoid an idiot who was trying to turn into traffic going in the other direction.

    The Model 3 had every opportunity in the world to brake and it didn’t, we were probably only going 25mph. I know this is about FSD here, but that moment 100% made me realize Tesla has awful obstacle avoidance.

    I just happened to be looking forward, and it was a very plain and clear T-bone avoidance, yet at no point did the car react or trigger anything.

    Thankfully everyone was ok, but the front lip got pretty beat up from driving up the curb. Of course the driver at fault that caused the whole incident drove off.

  • drodio 7 hours ago

    I drive a 2024 Tesla Model Y and another person in my family drives a 2021 Model Y. Both cars are substantially similar (the 2021 actually has more sensors than the 2024, which is strictly cameras-only).

    Both cars are running 12.5 -- and I agree that it's dramatically improved over 12.3.

    I really enjoy driving. I've got a #vanlife Sprinter that I'll do 14-hour roadtrips in with my kids. For me, the Tesla's self-driving capability is a "nice to have" -- it sometimes drives like a 16-year-old who just got their license, especially around braking. Somehow it's really hard to nail the "soft brake at a stop sign," which seems like it should be easy. I find that passengers in the car are most uncomfortable when the car brakes like this -- and I'm the most embarrassed, because they all look at me like I completely forgot how to do a smooth stop at a stop sign.

    Other times, the Tesla's self-driving is magical and nearly flawless -- especially on long highway road trips, like up to Tahoe. Even someone like me who loves doing road trips really appreciates the ability to relax and not have to be driving.

    But here's one observation I've had that I don't see quite sufficiently represented in the comments:

    The other person in my family with the 2021 Model Y does not like to drive like I do, and they really appreciate that the Tesla is a better driver than they feel themselves to be. And as a passenger in their car, I also really appreciate that when the Tesla is driving, I generally feel much more comfortable in the car. Not always, but often.

    There's so much variance in us as humans around driving skills and enjoyment. It's easy to lump us together and say "the car isn't as good as the human." And I know there's conflicting data from Tesla and NHTSA about whether in aggregate, Teslas are safer than human drivers or not.

    But what I definitely know from my experience is that the Tesla is already a better driver than many humans are -- especially those that don't enjoy driving. And as @modeless points out, the rate of improvement is now vastly accelerating.

  • gitaarik 20 hours ago

    It concerns me that these Teslas can suddenly start acting differently after a software update. Seems like a great target for a cyber attack. Or just a failure from the company: a little bug that is accidentally spread to millions of cars all over the world.

    And how is this regulated? Say the software gets to a point that we deem it safe for full self driving, then it gets approved on the road, and then Tesla adds a new fancy feature to their software and rolls out an update. How are we to be confident that it's safe?

    • boshalfoshal 4 hours ago

      > how are we to be confident that it's safe?

      I hope you realize that these companies don't just push updates to your car like VS Code does.

      Every change has to be unit tested, integration tested, tested in simulation, and driven on multiple cars in an internal fleet (in multiple countries) for multiple days/weeks; then it is sent out in waves, and finally, once a bunch of metrics/feedback comes back, they start sending it out wider.

      Admittedly you pretty much have to just trust that the above catches most egregious issues, but there will always be unknown unknowns that will be hard to account for, even with all that. Either that or legitimately willful negligence, in which case, yes they should be held accountable.

      These aren't scrappy startups pushing fast and breaking things, there is an actual process to this.
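
      As a rough sketch of what that kind of staged gating can look like (purely illustrative; the stage names, metrics, and thresholds are assumptions, not any automaker's actual process):

          # Illustrative only: gate a build through progressively wider rollout stages,
          # promoting it only while each stage's metrics stay clean.
          STAGES = ["simulation", "internal_fleet", "1_percent_wave", "10_percent_wave", "full_fleet"]

          def rollout(build, metrics_for):
              for stage in STAGES:
                  m = metrics_for(build, stage)                   # e.g. pulled from telemetry
                  if m["critical_issues"] > 0 or m["disengagements_per_1k_mi"] > 5.0:
                      return f"{build}: halted at {stage}"        # stop, fix, re-test
              return f"{build}: released to full fleet"

          # Fake metrics just to show the flow.
          fake = lambda build, stage: {"critical_issues": 0, "disengagements_per_1k_mi": 1.2}
          print(rollout("build_42", fake))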

    • rightbyte 15 hours ago

      Imagine all Teslas swerving hard left right now. Or hard right in countries that drive on the left.

      OTA updates, and auto-updates in general, are just something that should not be in vehicles. The ECUs should have to be air-gapped from the internet to be considered roadworthy.

  • lrvick 5 hours ago

    All these self driving car companies are competing to see whose proprietary firmware and sensors kill the fewest people. This is insane.

    I will -never- own a self driving car unless the firmware is open source, reproducible, remotely attestable, and built/audited by several security research firms and any interested security researchers from the public before all new updates ship.

    It is the only way to avoid greedy execs from cutting corners to up profit margins like VW did with faking emissions tests.

    Proprietary safety tech is evil, and must be made illegal. Compete with nicer looking more comfortable cars with better miles-to-charge, not peoples lives.

    • boshalfoshal 4 hours ago

      You are conflating two separate problems (security vs functionality).

      "Firmware" can be open source and secure, but how does this translate to driving performance at all? Why does it matter if the firmware is validated by security researchers, who presumably don't know anything about motion planning, perception, etc? And this is even assuming that the code can be reasonably verified statically. You probably need to run that code on a car for millions of miles (maybe in simulation) in an uncountable number of scenarios to hit every edge case.

      The other main problem with what you're asking is that most of the "alpha" of these self-driving companies is in proprietary _models_, not software. No one is giving up their models. That is a business edge.

      As someone who has been at multiple AV companies, no one is cutting corners on "firmware" or "sensors" (apart from making them reasonably cost effective so normal people can buy the cars). It's just that AV is a really really really difficult problem with no closed-form solution.

      Your normal car has all the same pitfalls of "unverified software running on a safety-critical system," except that it's easier to verify that straightforward device firmware works than a very complex engine whose job is to ingest sensor data and output a trajectory.

  • daghamm a day ago

    While at it, please also investigate why it is sometimes impossible to leave a damaged vehicle. This has resulted in people dying more than once:

    https://apnews.com/article/car-crash-tesla-france-fire-be8ec...

    • MadnessASAP a day ago

      The why is pretty well understood, no investigation needed. I don't like the design but it's because the doors are electronic and people don't know where the manual release is.

      In a panic people go on muscle memory, which is push the useless button. They don't remember to pull the unmarked unobtrusive handle that they may not even know exists.

      If it was up to me, sure have your electronic release, but make the manual release a big handle that looks like the ejection handle on a jet (yellow with black stripes, can't miss it).

      * Or even better, have the standard door handle mechanically connected to the latch through a spring loaded solenoid that disengages the mechanism. Thus when used under normal conditions it does the thing electronically but the moment power fails the door handle connects to the manual release.

      • Clamchop a day ago

        Or just use normal handles, inside and outside, like other cars. What they've done is made things worse by any objective metric in exchange for a "huh, nifty" that wears off after a few weeks.

        • nomel a day ago

          I think this is the way. Light pull does the electronic thing, hard pull does the mechanical thing. They could have done this with the mechanical handle that's already there (which I have pulled almost every time I've used a Tesla, earning me anger and a weather-stripping inspection from the owner).

      • daghamm a day ago

        There are situations where the manual release has not worked:

        https://www.businessinsider.com/how-to-manually-open-tesla-d...

        • willy_k 11 hours ago

          The article you provided does not say that. The only failure related to the manual release it mentions is that using it breaks the window.

          > Exton said he followed the instructions for the manual release to open the door, but that this "somehow broke the driver's window."

      • carimura a day ago

        It's worse than that: at least in ours, the backseat latches are under some mat, literally hidden. I had no idea they were there for the first 6 months.

      • amluto a day ago

        I’ve seen an innovative car with a single door release. As you pull it, it first triggers the electronic mechanism (which lowers the window a bit, which is useful in a door with no frame above the window) and then, as you pull it farther, it mechanically unlatches the door.

        Tesla should build their doors like this. Oh, wait, the car I’m talking about is an older Tesla. Maybe Tesla should remember how to build doors like this.

        • crooked-v 8 hours ago

          It's not very 'innovative' these days. My 2012 Mini Cooper has it.

      • Zigurd a day ago

        The inside trunk release on most cars has a glow-in-the-dark fluorescent color handle

  • botanical a day ago

    Only the US government can allow corporations to beta test unproven technology on the public.

    Governments should carry out comprehensive tests of a self-driving car's claimed capabilities, the same way cars without proven passenger safety (Euro NCAP) aren't allowed on roads carrying passengers.

    • krasin a day ago

      > Only the US government can allow corporations to beta test unproven technology on the public.

      China and Russia do it too. It's not an excuse, but definitely not just the US.

    • akira2501 a day ago

      > Only the US government

      Any legislative body can do so. There's no reason to limit this strictly to the federal government. States and municipalities should have a say in this as well. The _citizens_ are the only ones who get to _decide_ whether beta technology can be used or not.

      > comprehensive tests on a self-driving car's claimed capabilities.

      This presupposes the government is naturally capable of performing an adequate job at this task, or that the automakers won't sue the government to interfere with the testing regime and the efficacy of its standards.

      > aren't allowed to be on roads carrying passengers.

      According to Wikipedia Euro NCAP is a _voluntary_ organization and describes the situation thusly "legislation sets a minimum compulsory standard whilst Euro NCAP is concerned with best possible current practice." Which effectively highlights the above problems perfectly.

    • CTDOCodebases a day ago

      Meh. Happens all around the world. Even if the product works there is no guarantee that it will be safe.

      Asbestos products are a good example of this. A more recent one is Teflon made with PFOAs or engineered stone like Caesarstone.

    • dzhiurgis a day ago

      If it takes 3 months to get approval on where a steel rocket falls, you might as well give up on iterating something as complex as FSD.

      • AlotOfReading a day ago

        There are industry standards for this stuff. ISO 21448, UL-4600, UNECE R157 for example, and even commercial certification programs like the one run by TÜV Süd for European homologation. It's a deliberate series of decisions on Tesla's part to make their regulatory life as difficult as possible.

      • bckr a day ago

        Drive it in larger and larger closed courses. Expand to neighboring areas with consent of the communities involved. Agree on limited conditions until enough data has been gathered to expand those conditions.

        • romon a day ago

          While controlled conditions promote safety, they do not yield effective training data.

          • AlotOfReading a day ago

            That's how all autonomous testing programs currently work around the world. That is, every driverless vehicle system on roads today was developed this way. You're going to have to be more specific when you say that it doesn't work.

  • dietsche a day ago

    I would like more details. There are definitely situations on the road where neither a car nor a human could respond quickly enough.

    For example, I recently hit a deer. The dashcam shows that, because of the terrain, I had less than 100 feet from when the deer became visible to impact while driving at 60 mph. Keep in mind that stopping a car in 100 feet at 60 mph is impossible; most vehicles need more than triple that, without even accounting for human reaction time.
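
    For a rough back-of-the-envelope check (my assumptions: dry pavement, a friction coefficient around 0.7, and a 1.5 s reaction time; real numbers vary):

        # Rough stopping-distance estimate: braking distance d = v^2 / (2 * mu * g),
        # plus the distance covered during the driver's reaction time.
        MPH_TO_MPS = 0.44704
        v = 60 * MPH_TO_MPS            # ~26.8 m/s
        mu, g = 0.7, 9.81              # assumed tire-road friction, gravity
        reaction_time = 1.5            # seconds

        braking = v**2 / (2 * mu * g)  # ~52 m (~172 ft)
        reaction = v * reaction_time   # ~40 m (~132 ft)
        print(round((braking + reaction) / 0.3048))  # ~304 ft total vs. the 100 ft available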

    • ra7 a day ago

      Unfortunately, Tesla requests NHTSA to redact almost all useful information from their crash reports. So it's impossible to get more details.

      Here is the public database of all ADAS crashes: https://static.nhtsa.gov/odi/ffdd/sgo-2021-01/SGO-2021-01_In...

    • arcanemachiner a day ago

      This is called "overdriving your vision", and it's so common that it boggles my mind. (This opinion might have something to do with the deer I hit when I first started driving...)

      Drive according to the conditions, folks.

      • thebruce87m 20 hours ago

        There is a difference between driving too fast around a corner to stop for something stationary on the road and driving through countryside where something might jump out.

        I live in a country with deer, but the number of incidents of them interacting with road users is so low that it does not factor into my risk tolerance.

        • Zigurd 12 hours ago

          The risks vary with speed. At 30mph a deer will be injured and damage your car, and you might have to call animal control to find the deer if it was able to get away. At 45mph there is a good chance the deer will impact your windshield. If it breaks through, that's how people die in animal collisions. They get kicked to death by a frantic, panicked, injured animal.

      • Zigurd a day ago

        We will inevitably see "AVs are too cautious! Let me go faster!" complaints as AVs drive in more places. But, really humans just suck at risk assessment. And at driving. Driving like a human is comforting in some contexts, but that should not be a goal when it trades away too much safety.

      • Kirby64 a day ago

        On many roads if a deer jumps across the road at the wrong time there’s literally nothing you can do. You can’t always drive at 30mph on back country roads just because a deer might hop out at you.

        • seadan83 a day ago

          World of difference between 30, 40, 50 and 60. Feels like something I have noticed between west and east coast drivers. The latter really send it on country turns and just trust the road. West coast, particularly Montana, when vision is reduced, speed slows down. Just too many animals or road obstacles (e.g. rocks, planks of wood) to just trust the road.

          • Kirby64 a day ago

            Road obstacles are static and can be seen by not “out driving your headlights”. Animals flinging themselves into the road cannot, in many instances.

            • amenhotep 14 hours ago

              You are responding in a thread about a person saying they were driving at 60 when the deer only became visible "due to terrain" at 100 feet away, and therefore hitting it is no reflection on their skill or choices as a driver.

              I suppose we're meant to interpret charitably here, but it really seems to me like there is a big difference between the scenario described and the one you're talking about, where the deer really does fling itself out in front of you.

              • dietsche 5 hours ago

                op here. you hit the nail on the head. also, the car started braking before i could!

                incidentally, i’ve also had the tesla dodge a deer successfully!

                autopilot has improved in BIG ways over the past 2 years. went 700 miles in one day on autopilot thru the mountains. no issues at all.

                that said expecting perfection from a machine or a human is a fools errand.

          • dragonwriter a day ago

            > West coast, particularly montana

            Montana is not "West coast".

            • seadan83 13 hours ago

              Yeah, I was a bit glib. My impression is more specifically of the greater northwest vs rest. Perhaps just "the west" vs "the east".

              Indiana drivers for example really do send it (in my experience). Which is not east coast of course.

              There is a good bit of nuance... I would perhaps say more simply east of Mississippi vs west, but Texas varies by region and so-Cal drivers vary a lot as well, particularly compared to nor-Cal and central+eastern california. (I don't have an impression for nevada and new mexico drivers - I dont have any experience on country roads in those states)

    • nomel a day ago

      I've had a person, high on drugs, walk out from between bushes along the road. I screeched to a halt in front of them, but one second later and physics would have made stopping impossible, regardless of reaction time (at any non-negligible speed).

    • freejazz a day ago

      The article explains that the investigation is based on visibility issues... what is your point? I don't think any reasonable person doubts there are circumstances where nothing could respond in time to prevent a crash. It seems odd to assume these crashes fall into one of those scenarios, especially when the report itself says they do not.

    • Log_out_ a day ago

      Just have a drone fly ahead and put the lidar point cloud on a HUD. These are very bio-logic excuses :)

  • Aeolun a day ago

    I love how the image in the article has a caption that says it tells you to pay attention to the road, but I had to zoom in all the way to figure out where that message actually was.

    I’d expect something big and red with a warning triangle or something, but it’s a tiny white message in the center of the screen.

    • valine a day ago

      It gets progressively bigger and louder the longer you ignore it. After 30ish seconds it sounds an alarm and kicks you out.

      • FireBeyond a day ago

        > After 30ish seconds it sounds an alarm and kicks you out.

        That's much better. When AP functionality was introduced, the alarm was fifteen MINUTES.

    • taspeotis a day ago

      Ah yes, red with a warning like “WARNING: ERROR: THE SITUATION IS NORMAL!”

      Some cars that have cruise control but an analog gauge cluster that can't display WARNING ERRORs even hide stuff like "you still have to drive the car" in a manual you have to read, yet nobody cares about that.

      Honestly driving a car should require some sort of license for a bare minimum of competence.

  • UltraSane a day ago

    I'm astonished at how long Musk has been able to keep his autonomous driving con going. He has been lying about it to inflate Tesla shares for 10 years now.

    • ryandrake a day ago

      Without consequences, there is no reason to stop.

      • UltraSane a day ago

        When is the market going to realize Tesla is NEVER going to have real level 4 autonomy where Tesla takes legal liability for crashes the way Waymo has?

        • tstrimple a day ago

          Market cares far more about money than lives. Until the lives lost cost more than their profit, they give less than zero fucks. Capitalism. Yay!

    • jjmarr a day ago

      There's no con. It works. You can buy a Tesla that can drive itself places and it's been technically capable of doing so for several years. The current restrictions on autonomous cars are legal, not technical.

      • UltraSane a day ago

        It "works" if you mean often does incredibly stupid and dangerous things and requires a person to be ready to take over for it at any moment to prevent a crash. So far no Tesla car has ever legally driven even a single mile without a person in the driver's seat.

        • jjmarr 8 hours ago

          And? How does that make Elon Musk a con artist?

          It's possible to physically get in a Tesla and have it drive you from point A to point B. That's a self-driving car. You're saying it's unreliable, makes mistakes, and can be used illegally. That doesn't mean the car can't drive itself, just that it doesn't do a very good job at "self-driving"

          • UltraSane 5 hours ago

            "How does that make Elon Musk a con artist?"

            Because since 2014 he has made wildly unrealistic claims that he is smart enough to know were BS.

            December 2015: “We’re going to end up with complete autonomy, and I think we will have complete autonomy in approximately two years.”

            January 2016 In ~2 years, summon should work anywhere connected by land & not blocked by borders, eg you're in LA and the car is in NY

            June 2016: “I really consider autonomous driving a solved problem, I think we are less than two years away from complete autonomy, safer than humans.”

            October 2016 By the end of next year, said Musk, Tesla would demonstrate a fully autonomous drive from, say, a home in L.A., to Times Square ... without the need for a single touch, including the charging.

            "A 2016 video that Tesla used to promote its self-driving technology was staged to show capabilities like stopping at a red light and accelerating at a green light that the system did not have, according to testimony by a senior engineer."

            January 2017 The sensor hardware and compute power required for at least level 4 to level 5 autonomy has been in every Tesla produced since October of last year.

            March 2017: “I think that [you will be able to fall asleep in a Tesla] in about two years.”

            May 2017 Update on the coast to coast autopilot demo? - Still on for end of year. Just software limited. Any Tesla car with HW2 (all cars built since Oct last year) will be able to do this.

            March 2018 I think probably by end of next year [end of 2019] self-driving will encompass essentially all modes of driving and be at least 100% to 200% safer than a person.

            February 2019 We will be feature complete full self driving this year. The car will be able to find you in a parking lot, pick you up, take you all the way to your destination without an intervention this year. I'm certain of that. That is not a question mark. It will be essentially safe to fall asleep and wake up at their destination towards the end of next year

            April 2019 We expect to be feature complete in self driving this year, and we expect to be confident enough from our standpoint to say that we think people do not need to touch the wheel and can look out the window sometime probably around the second quarter of next year.

            May 2019 We could have gamed an LA/NY Autopilot journey last year, but when we do it this year, everyone with Tesla Full Self-Driving will be able to do it too

            December 2020 I'm extremely confident that Tesla will have level five next year, extremely confident, 100%

            January 2021 FSD will be capable of Level 5 autonomy by the end of 2021

      • two_handfuls a day ago

        I tried it. It drives worse than a teenager.

        There is absolutely no way this can safely drive a car without supervision.

        • valval 21 hours ago

          It’s been safer than a human driver for years. It’s also not meant to be unsupervised.

          • two_handfuls 5 hours ago

            "safer than a human driver for years" can be misleading, since the system is supervised - it assists the human driver. So what we're comparing is human+FSD vs avg car (with whatever driver assist it has).

            The claim that FSD+human is safer than an average car is old and has since been debunked: if instead of comparing vs all cars (old and new, with and without driver assistance) you compare like for like: other cars of similar price also with cruise control and lanekeeping assistance, then the Tesla cars are as safe as the others.

            And to be clear, none of those are autonomous. There is a certification process for autonomous cars, followed by Waymo Mercedes and others. Tesla has not even started this process.

          • gitaarik 11 hours ago

            Something about these two statements seem to be in conflict with each other, but maybe that is just kinda Tesla PR talk.

            • UltraSane 5 hours ago

              It is cultish doublespeak.

            • valval 9 hours ago

              It’s quite easy to be safer than a human driver, since humans are just human. Supervision is required because the system can face edge cases.

              • gitaarik 9 hours ago

                Ah ok so if humans would be supervised for their edge cases then humans would actually be safer!

              • UltraSane 5 hours ago

                Edge cases like intersections?

          • lawn 10 hours ago

            Safer than a human driver...

            According to Tesla.

      • peutetre a day ago

        Musk believes Tesla without FSD is "worth basically zero": https://www.businessinsider.com/elon-musk-tesla-worth-basica...

        Musk is now moving the FSD work to xAI, taking what supposedly makes the public company Tesla valuable and placing it into his private ownership: https://www.wsj.com/tech/tesla-xai-partnership-elon-musk-30e...

        Seems like a good way to privatize shareholder capital.

    • porphyra a day ago

      Just because it has taken 10 years longer than promised doesn't mean that it will never happen. FSD has made huge improvements this year and is on track to keep up the current pace so it actually does seem closer than ever.

      • UltraSane a day ago

        The current vision-only system is a clear technological dead-end that can't go much more than 10 miles between "disengagements". To be clear, "disengagements" would be crashes if a human wasn't ready to take over. And not needing a human driver is THE ENTIRE POINT! I will admit Musk isn't a liar when Tesla has FSD at least as good as Waymo's system and Tesla accepts legal liability for any crashes.

        • valval 21 hours ago

          You’re wrong. Nothing about this is clear, and you’d be silly to claim otherwise.

          You should explore your bias and where it’s coming from.

          • UltraSane 14 hours ago

            No Tesla vehicle has legally driven even a single mile with no driver in the driver's seat. They aren't even trying to play Waymo's game. The latest FSD software's failure rate is at least 100 times higher than it needs to be.

            • fallingknife 13 hours ago

              That's a stupid point. I've been in a Tesla that's driven a mile by itself. It makes no difference if a person is in the seat.

              • UltraSane 12 hours ago

                "It makes no difference if a person is in the seat." It does when Musk is claiming that Tesla is going to sell a car with no steering wheel!

                The current Tesla FSD fails so often that a human HAS to be in the driver seat ready to take over at any moment.

                You really don't understand the enormous difference between the current crappy level 2 Tesla FSD and Waymo's level 4 system?

                • valval 9 hours ago

                  The difference is that Tesla has a general algorithm, while Waymo is hard coding scenarios.

                  I never really got why people bring Waymo up every time Tesla’s FSD is mentioned. Waymo isn’t competing with Tesla’s vision.

                  • porphyra 7 hours ago

                    Waymo uses a learned planner and is far from "hardcoded". In any case, imo both of these can be true:

                    * Tesla FSD works surprisingly well and improving capabilities to hands free actual autonomy isn't as far fetched as one might think.

                    * Waymo beat them to robotaxi deployment and scaling up to multiple cities may not be as hard as people say.

                    It seems that self driving car fans are way too tribal and seem to be convinced that the "other side" sucks and is guaranteed to fail. In reality, it is very unclear as both strategies have their merits and only time will tell in the long run.

                    • UltraSane 5 hours ago

                      " Tesla FSD works surprisingly well and improving capabilities to hands free actual autonomy isn't as far fetched as one might think"

                      Except FSD doesn't work surprisingly well and there is no way it will get as good as Waymo using vision-only.

                      "It seems that self driving car fans are way too tribal and seem to be convinced that the "other side" sucks and is guaranteed to fail."

                      I'm not being tribal, I'm being realistic based on the very public performance of both systems.

                      If Musk was serious about his Robotaxi claims then Tesla would be operating very differently. Instead it is pretty obvious it is all a con to inflate Tesla shares beyond all reason.

                  • UltraSane 5 hours ago

                    The difference is that Waymo has a very well engineered system using vision, LIDAR, and millimeter wave RADAR that works well enough in limited areas to provide tens of thousands of actual driver-less rides. Tesla has a vision only system that sucks so bad a human has to be ready to take over for it at any time like a parent monitoring a toddler near stairs.

      • gitaarik 11 hours ago

        Just like AGI and the year of the Linux desktop ;P

        • porphyra 7 hours ago

          Honestly LLMs were a big step towards AGI, and gaming on Linux is practically flawless now. Just played through Black Myth Wukong with no issues out of the box.

          • UltraSane 5 hours ago

            LLMs are to AGI

            as

            A ladder is to getting to orbit.

            I can see LLMs serving as a kind of memory for an AGI, but something fundamentally different will be needed for true reasoning and continuous self-improvement.

  • jqpabc123 2 days ago

    By now, most people have probably heard that Tesla's attempt at "Full Self Driving" is really anything but --- after a decade of promises. The vehicle owners manual spells this out.

    As I understand it, the contentious issue is the fact that unlike most others, their attempt works mostly from visual feedback.

    In low visibility situations, their FSD has limited feedback and is essentially driving blind.

    It appears that Musk may be seeking a political solution to this technical problem.

    • whamlastxmas 2 days ago

      It’s really weird how much you comment about FSD being fake. My Tesla drives me 10+ miles daily and the only time I touch any controls is pulling in and out of my garage. Literally daily. I maybe disengage once every couple of days just to be on the safe side in uncertain situations, but I’m sure it’d likely do fine there too.

      FSD works. It drives itself fine 99.99% of the time. It is better than most human drivers. I don’t know how you keep claiming it doesn’t or doesn’t exist.

      • sottol 2 days ago

        The claim was about _full_ self-driving being anything but, i.e. not _fully_ self-driving, not about it being completely fake. Disengaging every 10-110 miles is just not "full", it's partial.

        And then the gp went into details in which specific situations fsd is especially problematic.

      • jqpabc123 2 days ago

        So you agree with Musk, the main problem with FSD is political?

        Tesla says on its website its "Full Self-Driving" software in on-road vehicles requires active driver supervision and does not make vehicles autonomous.

        https://www.reuters.com/business/autos-transportation/nhtsa-...

      • peutetre 2 days ago

        The problem is Tesla and Musk have been lying about full self-driving for years. They have made specific claims of full autonomy with specific timelines and it's been a lie every time: https://motherfrunker.ca/fsd/

        In 2016 a video purporting to show full self-driving with the driver there purely "for legal reasons" was staged and faked: https://www.reuters.com/technology/tesla-video-promoting-sel...

        In 2016 Tesla said that "as of today, all Tesla vehicles produced in our factory – including Model 3 – will have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver." That was a lie: https://electrek.co/2024/08/24/tesla-deletes-its-blog-post-s...

        Musk claimed there would be 1 million Tesla robotaxis on the road in 2020. That was a lie: https://www.thedrive.com/news/38129/elon-musk-promised-1-mil...

        Tesla claimed Hardware 3 would be capable of full self-driving. When asked about Hardware 3 at Tesla's recent robotaxi event, Musk didn't want to "get nuanced". That's starting to look like fraud: https://electrek.co/2024/10/15/tesla-needs-to-come-clean-abo...

        Had Tesla simply called it "driver assistance" that wouldn't be a lie. But they didn't do that. They doubled, tripled, quadrupled down on the claim that it is "full self-driving" making the car "an appreciating asset" that it would be "financially insane" not to buy:

        https://www.cnbc.com/2019/04/23/elon-musk-any-other-car-than...

        https://edition.cnn.com/2024/03/03/cars/musk-tesla-cars-valu...

        It's not even bullshit artistry. It's just bullshit.

        Lying is part of the company culture at Tesla. Musk keeps lying because the lies keep working.

        • whamlastxmas a day ago

          Most of this is extreme hyperbole, and it’s really hard to believe this is a genuine good-faith attempt at conversation instead of weird astroturfing, because these tired, inaccurate talking points come up in literally everything even remotely associated with Elon. It’s like there’s a dossier of talking points everyone is sharing

          The car drives itself. This is literally undeniable. You can test it today for free. Yeah it doesn’t have the last 0.01% done yet and yeah that’s probably a lot of work. But commenting like the GP is exhausting and just not reflective of reality

          • peutetre a day ago

            > bc these tired inaccurate talking points are what come up in literally every single even remotely associated to Elon

            You understand that the false claims, the inaccuracies, and the lies come from Elon, right? They're associated with him because he is the source of them.

            They're only tired because he's been telling the same lie year after year.

          • jqpabc123 a day ago

            ... not reflective of reality

            Kinda like repeated claims of "Full Self Driving" for over a decade.

    • enslavedrobot 2 days ago

      Here's a video of FSD driving the same route as a waymo 42% faster with zero interventions. 23 min vs 33. This is my everyday. Enjoy.

      https://youtu.be/Kswp1DwUAAI?si=rX4L5FhMrPXpGx4V

      • jqpabc123 a day ago

        Can it drive the same route without a human behind the wheel?

        Not legally and not according to Tesla either --- because Tesla's FSD is not "Fully Self Driving" --- unlike Waymo.

      • ck2 2 days ago

        There are also endless videos of teslas driving into pedestrians, plowing full speed into emergency vehicles parked with flashing lights, veering wildly from strange markings on the road, etc. etc.

        "works for me" is a very strange response for someone on Hacker News if you have any coding background - you should realize you are a beta tester unwittingly if not a full blown alpha tester in some cases

        All it will take is a non-standard event happening on your daily drive. Most certainly not wishing it on you, quite the opposite, trying to get you to accept that a perfect drive 99 times out of 100 is not enough.

        • enslavedrobot a day ago

          Those are Autopilot videos; this discussion is about FSD. FSD has driven ~2 billion miles at this point and had potentially 2 fatal accidents.

          The US average is 1.33 deaths/100 million miles. Tesla on FSD is easily 10x safer.

          Every day it gets safer.
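
          For what it's worth, the arithmetic behind that (taking the ~2 billion miles and 2 fatalities above at face value, which is itself an assumption):

              # Back-of-the-envelope check of the claim above; inputs are the figures
              # claimed in this thread, not independently verified.
              us_rate  = 1.33 / 100_000_000       # US average deaths per mile
              fsd_rate = 2 / 2_000_000_000        # claimed FSD deaths per mile

              print(us_rate / fsd_rate)           # ~13.3x, i.e. the "easily 10x safer" figure
              print(us_rate * 2_000_000_000 - 2)  # ~24.6 fewer deaths than the US average predicts
              # Caveat raised elsewhere in the thread: FSD miles skew toward easy roads and
              # good conditions, so this is not a like-for-like comparison.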

          • hilux 8 hours ago

            Considering HN is mostly technologists, the extent of Tesla-hate in here surprises me. My best guess is that it is sublimated Elon-hate. (Not a fan of my former neighbor myself, but let's separate the man from his creations.)

            People seem to be comparing Tesla FSD to perfection, when the more fair and relevant comparison is to real-world American drivers. Who are, on average, pretty bad.

            Sure, I wouldn't trust data coming from Tesla. But we have government data.

          • diggernet a day ago

            How many miles does it have on the latest software? Because any miles driven on previous software are no longer relevant. Especially with that big change in v12.

            • enslavedrobot a day ago

              The miles driven are rising exponentially as the versions improve according to company filings. If the miles driven on previous versions are no longer relevant how can the NHTSA investigation of previous versions impact FSD regulation today?

              Given that the performance has improved dramatically over the last 6 months, it is very reasonable to assume that the miles driven to fatality ratio also improving.

              Using the value of 1.33 deaths per 100 million miles driven vs 2 deaths in 2 billion miles driven, FSD has saved approximately 24 lives so far.

  • graeme a day ago

    Will the review assess overall mortality of the vehicles compared to similar cars, and overall mortality while FSD is in use?

    • dekhn a day ago

      No, that is not part of a review. They may use some reference aggregated industry data, but it's out of scope to answer the question I think you're trying to imply.

    • akira2501 a day ago

      Fatalities per passenger mile driven is the only statistic that would matter. I actually doubt this figure differs much, either way, from the overall fleet of vehicles.

      This is because "inattentive driving" is _rarely_ the cause of fatalities on the road. The winner there is, and probably always will be, Alcohol.

    • bbor a day ago

      I get where you’re coming from and would also be interested to see, but based on the clips I’ve seen that wouldn’t be enough in this case. Of course the bias is inherent in what people choose to post (not normal and not terrible/litigable), but I think there’s enough at this point to perceive a stable pattern.

      Long story short, my argument is this: it doesn’t matter if you reduce serious crashes from 100PPM to 50PPM if 25PPM of those are new crash sources, speaking from a psychological and sociological perspective. Everyone should know that driving drunk, driving distracted, driving in bad weather, and in rural areas at dawn or dusk is dangerous, and takes appropriate precautions. But what do you do if your car might crash because someone ahead flashed their high beams, or because the sun was reflecting off another car in an unusual way? Could you really load up your kids and take your hands off the wheel knowing that at any moment you might hit an unexpected edge condition?

      Self driving cars are (presumably!) hard enough to trust already, since you’re giving away so much control. There’s a reason planes have to be way more than “better, statistically speaking” — we expect them to be nearly flawless, safety-wise.

      • dragonwriter a day ago

        > But what do you do if your car might crash because someone ahead flashed their high beams, or because the sun was reflecting off another car in an unusual way?

        These are -- like drunk driving, driving distracted, and driving in bad weather -- things that actually do cause accidents with human drivers.

        • hunter-gatherer a day ago

          The point is the choice of taking precaution part that you left out of the quote. The other day I was taking my kid to school, and when we turned east the sun was in my eyes and I couldn't see anything, so I pulled over as fast as I could and changed my route. Had I chosen to press forward and been in an accident, it is explainable (albeit still unfortunate and often unnecessary!). However, if I'm under the impression that my robot car can handle such circumstances because it does most of the time and then it glitches, that is harder to explain.

        • paulryanrogers a day ago

          Indeed, yet humans can anticipate such things and rely on their experience to reason about what's happening and how to react, like slowing down, shifting lanes, or just moving their head for a different perspective. A Tesla with only two cameras ("because that's all humans need") is unlikely to provably match that performance for a long time.

          Tesla could also change its software without telling the driver at any point.

        • dfxm12 a day ago

          This language is a bit of a sticking point for me. If you're drunk driving or driving distracted, there's no "accident". You're intentionally doing something wrong and committing a crime.

    • FireBeyond a day ago

      If you're trying to hint at Tesla's own stats, then at this point those are hopelessly, and knowingly, misleading.

      All they compare is "On the subsets of driving on only the roads where FSD is available, active, and has not or did not turn itself off because of weather, road, traffic or any other conditions" versus "all drivers, all vehicles, all roads, all weather, all traffic, all conditions".

      There's a reason Tesla doesn't release the raw data.

      • rblatz a day ago

        I have to disengage FSD multiple times a day and I’m only driving 16 miles round trip. And routinely have to stop it from doing dumb things like stopping at green traffic lights, attempting to do a u turn from the wrong turn lane, or switching to the wrong lane right before a turn.

        • rad_gruchalski 9 hours ago

          Why would you even turn it on at this point…

    • infamouscow a day ago

      Lawyers are not known for their prowess in mathematics, let alone statistics.

      Making these arguments from the standpoint of an engineer is counterproductive.

      • fallingknife a day ago

        Which is why they are the wrong people to run the country

        • paulryanrogers a day ago

          Whom? Because math is important and so is law, among a variety of other things.

          s/ Thankfully the US presidential choices are at least rational, of sound mind, and well rounded people. Certainly no spoiled man children among them. /s

    • johnthebaptist a day ago

      Yes, if tesla complies and provides that data

  • quitit 9 hours ago

    "Full Self-Driving" but it's not "full" self-driving, as it requires active supervision.

    So it's marketed with a nod and wink, as if the supervision requirement is just a peel away disclaimer to satisfy old and stuffy laws that are out of step with the latest technology. When in reality it really does need active supervision.

    But the nature of the technology is that this approach invites the driver to distraction, because what's the use of "full self driving" if one needs to have their hands on the wheel and feet near the pedals, ready to take control at a moment's notice? Worsening this problem, Teslas have shown themselves to drive erratically at unexpected times, such as phantom braking or misidentifying natural phenomena as traffic lights.

    One day people will look back on letting FSD exist in the market and roll their eyes in disbelief of the recklessness.

  • aanet a day ago

    About damn time NHTSA opened this full scale investigation. Tesla's "autonowashing" has gone on for far too long.

    Per Reuters [1] "The probe covers 2016-2024 Model S and X vehicles with the optional system as well as 2017-2024 Model 3, 2020-2024 Model Y, and 2023-2024 Cybertruck vehicles. The preliminary evaluation is the first step before the agency could seek to demand a recall of the vehicles if it believes they pose an unreasonable risk to safety."

    Roughly 2.4 million Teslas in question, with "Full Self Driving" software after 4 reported collisions and one fatality.

    NHTSA is reviewing the ability of FSD’s engineering controls to "detect and respond appropriately to reduced roadway visibility conditions."

    Tesla has, of course, rather two-facedly classified its FSD as SAE Level 2 for regulatory purposes, while selling it as "full self driving" but also requiring supervision. ¯\_(ツ)_/¯

    No other company has been so irresponsible to its users, and without a care for any negative externalities imposed on non-consenting road users.

    I treat every Tesla driver as a drunk driver, steering away whenever I see them on highways.

    [FWIW, yes, I work in automated driving and know a thing or two about automotive safety.]

    [1] https://archive.is/20241018151106/https://www.reuters.com/bu...

    • ivewonyoung a day ago

      > Roughly 2.4 million Teslas in question, with "Full Self Driving" software after 4 reported collisions and one fatality.

      45,000 people die yearly in auto accidents just in the US. Those numbers and the timeline you quoted seem insignificant at first glance, magnified by people with an axe to grind, like that guy running anti-Tesla Super Bowl ads, who makes self-driving software like you do.

  • wg0 a day ago

    In all the hype of AI etc, if you think about it then the foundational problem is that even Computer Vision is not a solved problem at the human level of accuracy and that's at the heart of the issue of both Tesla and that Amazon checkout.

    Otherwise as thought experiment, imagine just a tiny 1 Inch tall person glued to the grocery trolley and another sitting on each shelf - just these two alone are all you need for "automated checkout".

    • vineyardmike a day ago

      > Otherwise as thought experiment, imagine just a tiny 1 Inch tall person glued to the grocery trolley and another sitting on each shelf - just these two alone are all you need for "automated checkout".

      I don’t think this would actually work, as silly a thought experiment as it is.

      The problem isn’t the vision, it’s state management and cost. It was very easy (but expensive) to see and classify via CV whether a person picked something up; it just requires hundreds of concurrent high-resolution streams and a way to stitch the global state together from all the videos.

      A little 1-inch person on each shelf needs a good way to communicate to every other tiny person what they saw, and to come to consensus. If 5 people/cameras detect person A picking something up, you need to differentiate between 5 discrete actions and 1 action seen 5 times (toy sketch at the end of this comment).

      In case you didn’t know, Amazon actually hired hundreds of people in India to review the footage and correct mistakes (for training the models). They literally had a human on each shelf. And they still had issues with the state management. With people.
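
      Here's the toy sketch of that cross-camera consensus problem (purely illustrative; the time window and the assumption that person/item IDs are already consistent across cameras are doing a lot of the work):

          # Toy illustration: merge per-camera "person X picked item Y" detections into
          # single events, so five cameras seeing one pickup don't count as five pickups.
          from collections import defaultdict

          def merge_events(detections, window_s=2.0):
              """detections: list of (timestamp_s, camera_id, person_id, item_id)."""
              merged = defaultdict(list)             # (person, item) -> event timestamps
              for t, _cam, person, item in sorted(detections):
                  events = merged[(person, item)]
                  if events and t - events[-1] <= window_s:
                      continue                       # same event, seen by another camera
                  events.append(t)                   # a genuinely new pickup
              return {k: len(v) for k, v in merged.items()}

          # Five cameras report the same pickup at the same moment: counted once.
          print(merge_events([(10.0, cam, "A", "soda") for cam in range(5)]))  # {('A', 'soda'): 1}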

      • wg0 a day ago

        Yeah - that's exactly my point: humans were required to do the recognition, and computer vision is NOT a solved problem, regardless of tech bros' misleading techno-optimism.

        Distributed communication and state management, on the other hand, are mostly solved problems with known parameters. How else do you think thousands and thousands of Kubernetes clusters work in the wild?

  • gnuser a day ago

    I worked at an 18-wheeler automation unicorn.

    Never rode in one once for a reason.

    • akira2501 a day ago

      Automate the transfer yards, shipping docks, and trucking terminals. Make movement of cargo across these limited use areas entirely automated and as smooth as butter. Queue drivers up and have their loads automatically placed up front so they can drop and hook in a few minutes and get back on the road.

      I honestly think that's the _easier_ problem to solve by at least two orders of magnitude.

      • porphyra a day ago

        There are a bunch of companies working on that. So far off the top of my head I know of:

        * Outrider: https://www.outrider.ai/

        * Cyngn: https://www.cyngn.com/

        * Fernride: https://www.fernride.com/

        Any ideas what other ones are out there?

        • akira2501 a day ago

          Promising. I'm actually more familiar with the actual transportation and logistics side of the operation and strictly within the USA. I haven't seen anything new put into serious operation out here yet but I'll definitely be watching for them.

      • dylan604 a day ago

        Did you miss the news about the recent strike by the very people you are suggesting to eliminate? This automation was one of the points of contention.

        Solving the problem might not be as easy as you suggest as long as there are powerful unions involved.

        • akira2501 a day ago

          This automation is inevitable. The ports are a choke point created by unnatural monopoly and a labor union is the incorrect solution. Particularly because their labor actions have massive collateral damage to other labor interests.

          I believe that if trucking were properly unionized the port unions would be crushed. They're not that powerful they've just outlived this particular modernization the longest out of their former contemporaries.

          • dylan604 14 hours ago

            So a union is okay for the trucking industry, but not for the dock workers?

            And what exactly will the truckers be trucking if the ports are crushed?

  • bastloing 10 hours ago

    It was way safer to ride a horse and buggy

  • 23B1 a day ago

    "Move fast and kill people"

    Look, I don't know who needs to hear this, but just stop supporting this asshole's companies. You don't need internet when you're camping, you don't need a robot to do your laundry, you don't need twitter, you can find more profitable and reliable places to invest.

  • dzhiurgis a day ago

    What is FSD uptake rate. I bet it’s less than 1% since in most countries it’s not even available…

  • whiplash451 a day ago

    Asking genuinely: is FSD enabled/accessible in EU?

    • AlotOfReading a day ago

      FSD is currently neither legal nor enabled in the EU. That may change in the future.

  • Animats a day ago

    If Trump is elected, this probe will be stopped.

  • JumpinJack_Cash 9 hours ago

    Unpopular take: Even with perfect FSD which is much better than the average human driver (say having the robotic equivalent of a Lewis Hamilton in every car) the productivity and health gains won't be as great as people anticipate.

    Sure, way fewer traffic deaths, but the spike in depression, especially among males, would be something very big. Life events are mostly outside of our control; having a 5,000 lb thing that can get to 150 mph if needed and responds exactly to accelerator, brake, and steering wheel input... well, that makes people feel in control and very powerful while behind the aforementioned steering wheel.

    Also, productivity... I don't know... people think a whole lot and do a whole lot of self-reflection while they are driving, and when they arrive at their destination they just implement the thoughts they had while driving. The ability to talk on the phone has been there for quite some time now too, so thinking and communicating can already be done while driving; what would FSD add?

    • HaZeust 7 hours ago

      As a sports car owner, I see where you're coming from -- but MANY do not. We are the 10%, the other 90% see their vehicle as an A-B tool, and you can clearly see that displayed with the average, utilitarian car models that the vast majority of the public buy. There will be no "spike" in depression; simply put, there's not enough people that care about their car, how it gets from point A to point B, or what contribution they give, if any, into that.

      • JumpinJack_Cash 7 hours ago

        Maybe they don't care about their car being a sports car, but they surely get some pleasure out of being at the helm of something powerful like a car (even if it's not a sports car).

        Also, even people in small cars already think a lot while driving, and they can already communicate; how much more productive could they be with FSD?

        • HaZeust 6 hours ago

          I really don't think you're right about the average person, or even a notable size of people, believing in the idea of their car being their "frontier of freedom" as was popular in the 70-80's media. I don't think that many people care about driving nowadays.

  • knob a day ago

    Didn't Uber have something similar happen? Ran over a woman in Phoenix?

    • BugsJustFindMe a day ago

      Yes. And Uber immediately shut down the program in the entire state of Arizona, halted all road testing for months, and then soon later eliminated their self driving unit entirely.

  • Rebuff5007 10 hours ago

    Tesla testing and developing FSD with normal consumer drivers frankly seems criminal. Test drivers for AV companies get advanced driver training, need to file detailed reports about the car's response to various driving scenarios, and generally are paid to be as attentive as possible. The fact that any old tech bro or unassuming old lady can buy this thing and be on their phone when the car could potentially turn into oncoming traffic is mind-boggling.

  • jgalt212 10 hours ago

    The SEC is clearly afraid of Musk. I wonder what the intimidation factor is at NHTSA.

  • DoesntMatter22 a day ago

    Each version has improved. FSD is realistically the hardest thing humanity has ever tried to do. It involves an enormous amount of manpower, compute power, and human discoveries, and has to work right in billions of scenarios.

    Building a self flying plane is comically easy by comparison. Building Starship is easier by comparison.

    • gitaarik 11 hours ago

      Ah ok, first it is possible within 2 years, and now it is humanity's hardest problem? If it's really that hard I think we better put our resources into something more useful, like new energy solutions, seems we have an energy crisis.

  • xvector a day ago

    My Tesla routinely tries to kill me on absolutely normal California roads in normal sunny conditions, especially when there are cars parked on the side of the road (it often brakes thinking I'm about to crash into them, or even swerves into them thinking that's the "real" lane).

    Elon's Unsupervised FSD dreams are a good bit off. I do hope they happen though.

    • jrflowers a day ago

      > My Tesla routinely tries to kill me

      > Elon's Unsupervised FSD dreams are a good bit off. I do hope they happen though.

      It is very generous that you would selflessly sacrifice your own life so that others might one day enjoy Elon’s dream of robot taxis without steering wheels

      • massysett a day ago

        Even more generous to selflessly sacrifice the lives and property of others that the vehicle "self-drives" itself into.

      • judge2020 a day ago

        If the data sharing checkboxes are clicked, OP can still help send in training data while driving on his own.

    • gitaarik 11 hours ago

      I wonder, how are you "driving"? Are you sitting behind the wheel doing nothing except watching really closely everything the car does so you can take over when needed? Isn't that a stressful experience? Wouldn't it be more comfortable to just do everything yourself so you know nothing weird can happen?

      Also, if the car does something crazy, how much time do you have to react? I can imagine in some situations you might have too little time to prevent the accident the car is creating.

      • xvector 2 hours ago

        > Isn't that a stressful experience?

        It's actually really easy and kind of relaxing. For long drives, it dramatically reduces cognitive load leading to less fatigue and more alertness on the road.

        My hand is always on the wheel so I can react as soon as I feel the car doing something weird.

    • left-struck a day ago

      That’s hilariously ironic, because I have a pretty standard newish Japanese petrol car (I’m not mentioning the brand because my point isn’t that brand x is better than brand y), and it has no AI self-driving functions, just pretty basic radar adaptive cruise control and emergency brake assist that will stop if a car brakes hard in front of you... and it does a remarkable job at rejecting cars that are slowing down or stopped in other lanes, even when you’re going around a corner and the car is pointing straight at the other cars but not actually heading towards them since it’s turning. I assume they are using the steering input to help reject other vehicles and Doppler effects to detect differences in speed, but it’s remarkable how accurate it is at matching the speed of the car in front of you, and only the car in front of you, even when that car is over 15 seconds ahead. If Teslas can’t beat that, it’s sad.

    • bogantech a day ago

      > My Tesla routinely tries to kill me

      Why on earth would you continue to use it? If it does succeed someday that's on you

      • newdee 15 hours ago

        > that’s on you

        They’d be dead, doubt it’s a concern at that point.

    • delichon a day ago

      Why do you drive a car that routinely tries to kill you? That would put me right off. Can't you just turn off the autopilot?

      • ddingus a day ago

        My guess is the driver tests it regularly.

        How does it do X, Y, ooh Z works, etc...

      • xvector 21 hours ago

        It's a pretty nice car when it's not trying to kill me

    • Renaud a day ago

      And what if the car swerves, and you aren't able to correct in time and end up killing someone?

      Is that your fault or the car's?

      I would bet that since it's your car, and you're knowingly using an unproven technology, it would be your fault?

      • ra7 a day ago

        The driver’s fault. Tesla never accepts liability.

        • LunicLynx a day ago

          And they have been very clear about that

  • fortran77 7 hours ago

    I have FSD in my Plaid. I don't use it. Too scary.

  • yieldcrv 10 hours ago

    Come on US, regulate interstate commerce and tell them to delete these cameras

    Lidar is goated, and if Tesla didn’t want that, they could pursue a different perception solution, allowing for innovation.

    But visual cameras alone, aiming to replicate us? Ban that.

  • diebeforei485 8 hours ago

    This feels like more lawfare from the Biden administration.

    They're doing this based on four collisions? Really?

  • ivewonyoung a day ago

    > NHTSA said it was opening the inquiry after four reports of crashes where FSD was engaged during reduced roadway visibility like sun glare, fog, or airborne dust. A pedestrian was killed in Rimrock, Arizona, in November 2023 after being struck by a 2021 Tesla Model Y, NHTSA said. Another crash under investigation involved a reported injury

    > The probe covers 2016-2024 Model S and X vehicles with the optional system as well as 2017-2024 Model 3, 2020-2024 Model Y, and 2023-2024 Cybertruck vehicles.

    This is good, but also for context, 45 thousand people are killed in auto accidents in just the US every year, making 4 reported crashes and 1 reported fatality for 2.4 million vehicles over 8 years look minuscule by comparison, or even better than many human drivers.
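
    A rough back-of-the-envelope sketch of that comparison, with the big caveat that the share of those miles actually driven with FSD engaged is unknown (every figure below is an assumption, not from the article):

      # Naive rate implied by 4 reported crashes across ~2.4M vehicles over ~8 years
      vehicles = 2.4e6
      years = 8
      miles_per_vehicle_year = 12_000          # assumed fleet average; FSD-engaged share unknown
      total_miles = vehicles * years * miles_per_vehicle_year
      print(total_miles / 4)                   # ~5.8e10 miles per *reported* crash

      # US baseline (assumed aggregates): ~40,000 deaths over ~3.2e12 vehicle-miles per year
      print(3.2e12 / 40_000)                   # ~8e7 miles per fatality

      # The naive ratio looks favorable, but it counts only crashes reported to NHTSA
      # and wrongly attributes every mile to FSD.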

    • enragedcacti a day ago

      > making 4 reported crashes and 1 reported fatality for 2.4 million vehicles over 8 years look minuscule by comparison, or even better than many human drivers.

      This is exactly what people were saying about the NHTSA Autopilot investigation when it started back in 2021 with 11 reported incidents. When that investigation wrapped up earlier this year, it had identified 956 Autopilot-related crashes between early 2018 and August 2023, 467 of which were confirmed to be the fault of Autopilot combined with an inattentive driver.

      • fallingknife a day ago

        So what? How many miles were driven, and what is the record vs. human drivers? Also, Autopilot is a standard feature that is much less sophisticated than FSD and has nothing to do with it.

    • dekhn a day ago

      Those numbers aren't all the fatalities associated with Tesla cars; i.e., you can't compare the 45K/year (roughly 1 per 100M miles driven) to the limited number of reports.

      What they are looking for is whether there are systematic issues with the design and implementation that make it unsafe.

      • moduspol a day ago

        Unsafe relative to what?

        Certainly not to normal human drivers in normal cars. Those are killing people left and right.

        • AlexandrB a day ago

          No, they're not. And if you do look at human drivers, you're likely to see a Pareto distribution where 20% of drivers cause most of the accidents. This is completely unlike something like FSD, where accidents would be more evenly distributed. It's entirely possible that FSD would make 20% of drivers safer and ~80% less safe even if the overall accident rate were lower.
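
          A toy illustration of that shape, with entirely made-up rates just to show how the average can mislead:

            # Hypothetical: 20% of "risky" drivers crash at 10x the rate of the other 80%
            risky_share, safe_share = 0.2, 0.8
            risky_rate, safe_rate = 10.0, 1.0     # crashes per 1M miles (made up)
            human_average = risky_share * risky_rate + safe_share * safe_rate   # 2.8

            fsd_rate = 2.0                        # hypothetical uniform FSD rate
            # FSD beats the fleet average (2.0 < 2.8) and helps the risky 20%,
            # yet it is still worse than the 80% of drivers who crash at 1.0 per 1M miles.
            print(human_average, fsd_rate)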

        • dekhn a day ago

          I don't think the intent is to compare it to normal human drivers, although having some estimate of accident/injury/death rates (for the driver, passengers, and people outside the car) with FSD enabled/disabled would be very interesting.

          • moduspol a day ago

            > I don't think the intent is to compare it to normal human drivers

            I think our intent should be focused on where the fatalities are happening. To keep things comparable, we could maybe do 40,000 studies on distracted driving in normal cars for every one or two caused by Autopilot / FSD.

            Alas, that's not where our priorities are.

        • llamaimperative a day ago

          Those are good questions. We should investigate to find out. (That would be a different investigation from this one, but it raises a good question: what is FSD safe compared to?)

        • Veserv a day ago

          What? Humans are excellent drivers. Humans go ~70 years between injury-causing accidents and ~5,000 years between fatal accidents even if we count the drunk drivers. If you started driving when the Pyramids were still new, you would still have half a millennium until you reach the expected value between fatalities.
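
          A quick sanity check on those orders of magnitude, using rough public aggregates (assumed figures, not from the article):

            annual_fatalities = 40_000       # approx. US traffic deaths per year (assumed)
            annual_vmt = 3.2e12              # approx. US vehicle-miles traveled per year (assumed)
            miles_per_driver_year = 13_000   # approx. miles a typical driver covers per year (assumed)

            miles_per_fatality = annual_vmt / annual_fatalities             # ~8e7 miles
            years_per_fatality = miles_per_fatality / miles_per_driver_year
            print(round(years_per_fatality))  # ~6,000 driver-years per fatality, same ballpark as above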

          The only people pumping the line that human drivers are bad are the people trying to sell a dream that they can make a self-driving car in a weekend, or "next year", if you just give them a pile of money and ignore all the red flags and warning signs that they are clueless. The problem is shockingly hard and underestimating it is the first step to failure. Reckless development will not get you there safely with known technology.

    • tapoxi a day ago

      I don't agree with this comparison. The drivers are licensed; they have met a specific set of criteria to drive on public roads. The software has not.

      We don't know when FSD was engaged across all of those miles driven, or whether FSD is making mistakes a licensed human driver would not. I would at the very least expect radical transparency.

      • fallingknife a day ago

        I too care more about bureaucratic compliance than about the actual chances of something killing me. When I'm in that ambulance, I'll be thinking, "at least that guy met the specific set of criteria to be licensed to drive on public roads."

        • tapoxi a day ago

          Are we really relegating drivers licenses to "bureaucratic compliance"?

          If FSD is being used in a public road, it impacts everyone on that road, not just the person who opted-in to using FSD. I absolutely want an independent agency to ensure it's safe and armed with the data that proves it.

          • fallingknife 14 hours ago

            What else are they? You jump through hoops to get a piece of plastic from the government that declares you "safe." And then holders of those licenses go out and kill 40,000 people every year just in the US.

            • tapoxi 2 hours ago

              And you're comparing that against what? That's 40,000 with regulation in place. Imagine if we let anyone drive without training.

    • throwup238 a day ago

      > The agency is asking if other similar FSD crashes have occurred in reduced roadway visibility conditions, and if Tesla has updated or modified the FSD system in a way that may affect it in such conditions.

      Those four crashes are just the ones that sparked the investigation.

    • insane_dreamer a day ago

      > making 4 reported crashes and 1 reported fatality for 2.4 million vehicles over 8 years look minuscule by comparison,

      that's the wrong comparison

      the correct comparison is the number of reported crashes and fatalities per __unsupervised FSD__ mile driven (not counting Tesla pilot tests, but actual customers)

      • jandrese a day ago

        That seems like a bit of a chicken and egg problem where the software is not allowed to go unsupervised until it racks up a few million miles of successful unsupervised driving.

        • AlotOfReading a day ago

          There are a number of state programs to solve this problem with testing permits. The manufacturer puts up a bond and does testing in a limited area, sending reports on any incidents to the state regulator. The largest of these, California's, has several dozen companies with testing permits.

          Tesla currently does not participate in any of these programs.

        • insane_dreamer 12 hours ago

          Similar to a Phase 3 clinical trial (and for similar reasons).

    • whiplash451 a day ago

      Did you scale your numbers in proportion to miles driven autonomously vs. manually?

      • josephg a day ago

        Yeah, that’d be the interesting figure: How many deaths per million miles driven? How does Tesla’s full self driving stack up against human drivers?

        • gostsamo a day ago

          Even that is not good enough, because the "autopilot" usually is not engaged in challenging conditions, which makes any direct comparison unreliable. You would need similar roads, similar weather, and a similar time of day to approximate a good comparison.

          • ivewonyoung a day ago

            How many of the 45,000 deaths on US roads (and an order of magnitude more injuries) occur due to 'challenging conditions'?

  • xqcgrek2 10 hours ago

    Lawfare by the Biden administration

    • jsight 10 hours ago

      In this case, I do not think so. NHTSA generally does an excellent job of looking at the big picture without bias.

      Although I must admit that their last investigation felt like an exception. The changes that they enforced seemed to be fairly dubious.

  • xqcgrek2 a day ago

    Lawfare can work both ways, and this administration is basically a lame duck.

  • soerxpso 13 hours ago

    For whatever it's worth, Teslas with Autopilot enabled crash about once every 4.5M miles driven, whereas the overall rate in the US is roughly one crash every 70K miles driven. Of course, the selection effects around that stat can be debated (people probably enable autopilot in situations that are safer than average, the average tesla owner might be driving more carefully or in safer areas than the average driver, etc), but it is a pretty significant difference. (Those numbers are what I could find at a glance; DYOR if you'd like more rigor).
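
    Taking those two figures at face value (they're rough, per the caveat), the implied gap before any adjustment for selection effects:

      autopilot_miles_per_crash = 4.5e6    # figure quoted above, not independently verified
      overall_miles_per_crash = 7.0e4      # figure quoted above, not independently verified
      print(autopilot_miles_per_crash / overall_miles_per_crash)   # ~64x fewer crashes per mile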

    We have a lot of traffic fatalities in the US (in some states, an entire order of magnitude worse than in some EU countries), but it's generally not considered an issue. Nobody asks, "These agents are crashing a lot; are they really competent to drive?" when the agent is human, but when the agent is digital it becomes a popular question even with a much lower crash rate.

    • deely3 11 hours ago

      > Gaps in Tesla's telematic data create uncertainty regarding the actual rate at which vehicles operating with Autopilot engaged are involved in crashes. Tesla is not aware of every crash involving Autopilot even for severe crashes because of gaps in telematic reporting. Tesla receives telematic data from its vehicles, when appropriate cellular connectivity exists and the antenna is not damaged during a crash, that support both crash notification and aggregation of fleet vehicle mileage. Tesla largely receives data for crashes only with pyrotechnic deployment, which are a minority of police reported crashes.3 A review of NHTSA's 2021 FARS and Crash Report Sampling System (CRSS) finds that only 18 percent of police-reported crashes include airbag deployments.
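
      A rough sense of how much that reporting gap could matter, assuming (a big assumption) that Tesla's telematics capture mostly airbag-deployment crashes and that Autopilot crashes deploy airbags at a similar ~18% rate:

        reported_miles_per_crash = 4.5e6   # Autopilot figure quoted upthread
        deployment_share = 0.18            # share of police-reported crashes with airbag deployment, from the quote
        adjusted = reported_miles_per_crash * deployment_share
        print(adjusted)                    # ~8.1e5 miles per crash as a pessimistic lower bound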