Autopilot was on in fatal Model X crash

Discussion in 'Tesla' started by Feed The Trees, Mar 31, 2018.


  1. Feed The Trees

    Feed The Trees Active Member

    It gave the warning signal well in advance but the driver didn't respond.

    https://techcrunch.com/2018/03/30/tesla-says-fatal-crash-involved-autopilot/

    Sounds initially similar to the Volvo crash: Autopilot disengages the human more than the car is capable of handling itself. Writing checks the tech can't cash.

    I've never driven an Autopilot Tesla. Does it give out warnings and then self-correct, conditioning drivers to ignore the warnings altogether?
     
    Last edited: Mar 31, 2018

  3. WadeTyhon

    WadeTyhon Well-Known Member

    Very disappointing and sad...

    He trusted the system more than he should have. We could blame him for not paying enough attention, but it happens to everyone. We're used to trusting technology that works *most* of the time. He probably got the new OTA update, heard about how much better it performs in all situations, and tried it out on a problem area.

    I've been very concerned about Tesla's self-driving and autopilot software for a while.

    They've made next to no progress on the fully autonomous front. Autopilot upgrades have been slow to materialize, and the system isn't full-featured enough; it lulls people into a false sense of security that they shouldn't have. And they stopped testing autonomous driving in California, just like Uber, because they didn't want to conform to strict regulations for safety and transparency.

    I dunno, I think Tesla is doing self driving wrong and it's going to hurt their brand.

    They need to focus on what they actually do well! Making great electric cars. :)
     
    Last edited: Mar 31, 2018
  4. Feed The Trees

    Feed The Trees Active Member

    Two similar autopilot deaths in the same week, similar in that the driver stopped paying attention. The Uber was a total radar fail; the Tesla did not fail on radar. But in both cases the inattentive driver is a major contributing factor, if not the cause.

    So someone will say, 'yeah yeah yeah, so what, people drive distracted all the time.' Yes, sure they do. But who is so distracted that they look away for as long as the Uber driver did, or ignore 5 seconds of warning signals on top of however long they weren't watching before that? It's simply too much. In the Tesla, at least, the inattentive driver took only himself out and not those around him.
     
  5. WadeTyhon

    WadeTyhon Well-Known Member

    The audible warning signal was ‘earlier in the drive,’ not prior to the wreck. Tesla only brought it up to point out that the driver was not using the system correctly.

    From what I recall there is a significant amount of time you can let go of the wheel before the audible warning goes off. Initially the car only gives a ‘put hands on wheel’ icon on the dash. It’s tiny - if you have looked away, there is no chance of seeing it.

    If an audible warning was going off 5 seconds prior to the crash, Tesla would have said so in the blog. I don’t recall how long it takes for the audible warning.

    https://forums.tesla.com/forum/forums/how-much-hands-wheel-time-needed

    From Tesla Driver last year: “I think it's about every 2 minutes. I just shake the wheel or slightly turn it to one side, but not enough to disengage it. It takes me about 1/10 of a second, and then I do it again when the screen starts flashing again. I've only had it give me an audio warning once because I pay enough attention to notice when it starts flashing. One should always be paying attention, autopilot isn't perfect yet.”
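
    To make that escalation concrete, here's a rough sketch of the logic as owners describe it. The thresholds below are assumptions pieced together from accounts like the one above; Tesla has never published the exact timing, and it varies with speed and firmware version:

        # Rough sketch of the hands-on-wheel "nag" escalation as owners describe
        # it. All thresholds are assumptions -- Tesla has not published the exact
        # timing, which also varies with speed and firmware version.

        VISUAL_NAG_S = 120      # ~2 minutes hands-off before the dash icon flashes
        AUDIBLE_AFTER_NAGS = 3  # several ignored visual nags before the chime

        def warning_state(hands_off_seconds: float, ignored_visual_nags: int) -> str:
            """Return which warning, if any, the driver would be getting."""
            if hands_off_seconds < VISUAL_NAG_S:
                return "none"     # any torque on the wheel resets the timer
            if ignored_visual_nags < AUDIBLE_AFTER_NAGS:
                return "visual"   # the tiny flashing icon on the dash
            return "audible"      # the chime; continued inaction slows the car

        # Under these assumptions, a driver 6 seconds hands-off sees nothing at all:
        print(warning_state(hands_off_seconds=6, ignored_visual_nags=0))  # -> none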
     
  6. We don't know that he was overly trusting of the system, because we don't know why he had his hands off the wheel. Was he conscious? Maybe, but we've no way of knowing.

    As we reported yesterday, his brother claims he'd complained about the car veering at this spot 7 or 8 times. Tesla claims otherwise, and I wasn't there to know what was really communicated, but if the brother is right, it strikes me as odd that he would not have been holding the wheel in anticipation of the possible veer.

    Autonomous system-wise, I believe the improvements in AP are part and parcel of the eventual autonomous system, but it's difficult to know how far (or not far) along they are.
     

  8. WadeTyhon

    WadeTyhon Well-Known Member

    Well, that’s certainly true. We don’t know for sure.

    However, Tesla went out of their way to indicate that he had been intentionally ignoring the text based warnings and audible warnings earlier in the drive.

    “The driver had received several visual and one audible hands-on warning earlier in the drive and the driver’s hands were not detected on the wheel for six seconds prior to the collision,” Tesla wrote in a blog post. “The driver had about five seconds and 150 meters of unobstructed view of the concrete divider with the crushed crash attenuator, but the vehicle logs show that no action was taken.”

    Note that they indicate the audible warning was ‘earlier in the drive’. It takes several warnings before an audible warning goes off. S/X owners give different accounts of how long it takes for the audible warning, but 1-2 minutes seems to be the period.

    So if he had only had his hands off the wheel for 6 seconds prior to the accident, then that means at some point between the initial set of warnings and the time of the crash, he put his hands back on the steering wheel.

    Meaning he was repeatedly removing his hands from the wheel.
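
    Put as arithmetic, Tesla's own numbers imply the re-engagement. A back-of-envelope check, using the low-end owner estimate of the audible-warning threshold:

        # Back-of-envelope check of the timeline implied by Tesla's blog post.
        AUDIBLE_THRESHOLD_S = 60   # low-end owner estimate of continuous hands-off
                                   # time needed to trigger the chime (assumption)
        HANDS_OFF_AT_CRASH_S = 6   # from Tesla's vehicle logs

        # The earlier audible warning required a long continuous hands-off stretch,
        # yet the hands-off timer at impact was only 6 s. The timer must have been
        # reset in between -- i.e., hands went back on the wheel at least once.
        print(HANDS_OFF_AT_CRASH_S < AUDIBLE_THRESHOLD_S)  # True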
     
  9. Not to be pedantic, but the Volvo fatality involved an 'autonomous' system. Autopilot is not an autonomous system. It's an advanced driver assistance system (ADAS), and the car is meant to be driven with hands on the wheel at all times, just like any other car. There's a warning about this that appears on the screen, and the company has stated this repeatedly.
     
  10. Feed The Trees

    Feed The Trees Active Member

    My point on the similarities is that in both cases the human flesh bot was lulled into believing they could ignore what's happening. A human failure? Sure. But it also demonstrates that the self-driving tech isn't where it needs to be. This weird interim phase is dangerous.
    Either full Level 5 or simple things like brake assist and lane-keep warnings, not a system that drives the car for you.
     
    WadeTyhon likes this.
  11. If it is, as Tesla contends, resulting in a far lower death rate, then maybe it's fine.

    I would bet regular old cruise control is involved with some traffic deaths, but I haven't heard anyone say we should stop using it.
     
    Last edited: Mar 31, 2018
    WadeTyhon likes this.

  13. Pushmi-Pullyu

    Pushmi-Pullyu Well-Known Member

    Yes, that's one of the things that jumped out at me.

    If, as the victim's brother claims, AutoSteer had failed several times at that exact place on the highway, and nearly caused an accident multiple times, then any reasonable person would have at the very least kept his hands on the wheel when driving past the spot. Speaking as a computer programmer, it seems to me that an Apple engineer, who would supposedly have been more aware of the limitations of the AutoSteer controlling software than the average person, would be more likely to turn off AutoSteer before that point, and not turn it back on until past that section of road.

    The fact that the driver, according to Tesla's data logs, was not touching the steering wheel for at least 6 seconds prior to the accident, leads me to think there are two likely possibilities:

    1. That the driver had fallen asleep, or was in some way physically or mentally incapacitated at the time. (I hate to suggest he might have been drunk, since we have no indication at all of that, but that is a frequent contributor to accidents.)

    2. The brother may have been at least exaggerating when he said that the victim had had the same problem at the same place numerous times before. Even if we set aside the possibility that the brother was "improving the story" to improve the chances for being awarded a lot of money in a wrongful death lawsuit, there is the natural human tendency to "remember" things after the event rather differently than we perceived them at the time. For example, when someone says "Oh, I just knew that something bad was going to happen!" ...Well, almost certainly they didn't have any feeling at the time strong enough to be described as "knowing" something bad would happen. Maybe they had a feeling of unease on previous occasions, and after the fact they "remembered" that as happening right before the event, and in their grief in the aftermath, "remembered" the feeling of unease or worry as being much stronger than it actually had been.

    In this case, perhaps it's more that the survivor remembered his brother complaining that AutoSteer seemed to be malfunctioning, and that he had taken it in for service to deal with the problem. Maybe it wasn't that AutoSteer had actually "tried to" run the car into that same concrete barrier multiple times before.

    * * * * *

    Hopefully I don't need to point out that all this is mere speculation on my part.
    -
     
  14. Pushmi-Pullyu

    Pushmi-Pullyu Well-Known Member

    Or maybe it's actually that AutoSteer reduces the risk of accident significantly, as the NHTSA has stated, and it's just human nature that when we rely on a machine to do something, we expect near-perfection instead of the mere improvement we actually get.
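
    For scale, the NHTSA report (if memory serves, the January 2017 ODI report) put Tesla's airbag-deployment crash rate at roughly 1.3 per million miles before Autosteer and 0.8 after. Treat the exact figures as from memory; the arithmetic is the point:

        # Crash-rate comparison behind the NHTSA finding (figures from memory).
        before = 1.3   # airbag-deployment crashes per million miles, pre-Autosteer
        after = 0.8    # same metric, post-Autosteer

        reduction = (before - after) / before
        print(f"{reduction:.0%} reduction")  # ~38%: an improvement, not perfection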

    At any rate, I cannot see it as anything but an emotional reaction, rooted in the fear of giving up control, when lots of people, both before and after this accident, assert that no self-driving systems should be used in cars until they are "perfected".

    An emotional reaction rather than a realistic, pragmatic, or logical one. By any objective standard, if you're safer using such a system than not using it, then you should start using it immediately, not wait until it is "perfected". This is doubly true for something as complex as driving on public roads. It's absurd to claim we should wait until self-driving cars are "perfected"; if that's the goal, it will never happen! I hope someday we can reduce the accident rate by as much as 90%, and perhaps over the course of several decades we might reduce it by 98-99%. But it's physically, scientifically impossible to make traveling down the road at highway speed, alongside multiple other vehicles all moving just as fast, perfectly safe. Inertia and momentum will always rule, and entropy can never be entirely eliminated.

    Or, to put it more simply: Sometimes things go wrong. When things go wrong in vehicles moving at highway speed, deaths will occur.

    Murphy was right: Anything which can go wrong, will go wrong.

    Speed kills.
    -
     
  15. Pushmi-Pullyu

    Pushmi-Pullyu Well-Known Member

    I think we need to do some careful parsing here, and distinguish between what Tesla actually said and what it didn't. This isn't all aimed at you, Wade; I've noticed several comments indicating readers have not parsed Tesla's carefully worded statement as carefully as they should have!

    1. Tesla didn't say the driver had his hands on the wheel until 6 seconds before the crash. Tesla merely said that the driver didn't have his hands on the wheel for the 6 seconds prior to the crash. We don't know if he had his hands on the wheel 7 seconds before. Quite possibly Tesla's logs don't record data for that on a second-by-second basis; quite possibly the log only retains the last 6 seconds of that data.

    2. It seems to me that all Tesla's comments about earlier warnings to the driver were intended to convey the message that "See, we do all this to remind the driver to pay attention and stay in control." The fact that the driver had to be given an audio warning earlier in the same trip, an audio warning which only follows repeatedly ignored visual warnings, does suggest that at least once, the driver allowed his attention to wander away from driving for some time. Tesla may have intended to imply that the driver was inattentive at the time of the accident, but there is no direct evidence of that other than the fact that the accident occurred without the driver grabbing the wheel.

    3. There is no suggestion that the car gave any warning to the driver about the impending accident. A careless reading of Tesla's statement may mislead the reader into inferring that, but that's not what it actually says.
    -
     
  16. WadeTyhon

    WadeTyhon Well-Known Member

    Agreed, although I think it is the emergency braking, lane keep, blind-spot warnings and other active safety features that truly reduce accidents. They only enhance your existing senses or step in if you screw up. None of them allow you to 'check out' the way systems like Autopilot or Super Cruise do.

    Whereas I think of Cruise Control, Adaptive Cruise Control and Autosteer as convenience features in their current application.

    I'd be curious to know: Were L1 safety features available on a Tesla without Autopilot at the time of the NHTSA study? And was the study conducted on mostly Autopilot 1 or Autopilot 2 vehicles?
     
  17. WadeTyhon

    WadeTyhon Well-Known Member

    Lol, no worries, I don't take it personally. You and everyone else are making very good points.

    I think the above is what I'm getting at. If used correctly this will become a very useful tool or aid.

    But humans gonna human. :rolleyes:

    These days it's already hard enough for people to pay attention while driving. Now they're supposed to remain engaged and ready to take over at any moment, even when the car can drive itself in most situations? If I'm gonna need to pay attention either way, I'd much rather actively drive for hours than watch the car drive itself for hours. With my ADD, I know I'll zone out and totally not pay attention to the road!

    Theoretically, and in a scientific testing environment, human + autopilot is probably super safe. But the limitations of autopilot plus the limitations of humans might mean that, in practice, L2/L3 features are not much safer on the road than actual humans giving driving their full attention.

    I feel like lane keep/AEB/blind-spot monitoring should be on every new car. Systems like Autopilot or Super Cruise should take over only when something is wrong with the driver: if the driver has not been driving for more than a minute, is unconscious, or is intoxicated, the vehicle should just pull over to the side of the road safely. This would make the system purely about safety, and more features could be added as the tech improves! A rough sketch of what I mean is below.
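
    Something like this, purely as an illustration; the state inputs, thresholds, and function names here are all invented, not any shipping system's API:

        # Hypothetical driver-monitoring fallback: automation intervenes only
        # when the driver appears unresponsive. Everything here is invented
        # for illustration; it describes no real vehicle's behavior.
        from dataclasses import dataclass

        @dataclass
        class DriverState:
            seconds_without_input: float  # no steering/pedal/torque input
            conscious: bool               # e.g. from a driver-facing camera
            impaired: bool                # e.g. an erratic-input heuristic

        def fallback_action(d: DriverState) -> str:
            if not d.conscious or d.impaired:
                return "pull over safely and stop"
            if d.seconds_without_input > 60:  # the one-minute rule above
                return "warn, then pull over if still no response"
            return "no intervention"

        # A driver who has given no input for 90 seconds gets warned first:
        print(fallback_action(DriverState(90.0, conscious=True, impaired=False)))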

    We should baby-step drivers into this new tech because humans cannot and should not be trusted!

    This is just my opinion of course. There aren't many studies to back up an opinion on this one way or the other! And I am not referring to fully self driving vehicles that are coming up like Tesla FSD, Waymo, Cruise etc. I think those are the systems that will truly save lives!
     
  18. Feed The Trees

    Feed The Trees Active Member

    I think it would be even better if they kept all the fancy gizmos that handle the safety aspects and dropped the whole 'it will drive for you' part. There's certainly some great stuff that can be done with the tech itself that falls short of driving. Don't they bounce radar under the vehicle in front of you to know what's happening ahead? The most dangerous situation is when a driver coming up on slowed or stopped traffic just casually moves into the open lane, exposing the driver behind them to the stopped cars they couldn't see. Leave it there.
     
    Domenick likes this.
  19. So, no autosteer?
    Yes, the radar looks past the car immediately in front of it.
     
  20. bwilson4web

    bwilson4web Well-Known Member Subscriber

    I remain concerned that the crash barrier's end cap protection was damaged in a previous accident but had not been repaired. So using Google Maps, I generated this:
    [Image: annotated Google Maps view of the crash location]
    The vehicle entered the 'triangle of death' zone, where a left-side exit (the upper lane) separates from the main Highway 101 (the lower lane). The impact suggests Autopilot treated the left-side line as the lane marker until the car hit the unrepaired barrier end cap.

    My initial speculation was that a sloping barrier end cap, a lengthwise wedge, might have added another layer of protection. But reading the highway engineering literature, the concern is that such a barrier might roll the car, which could be as bad for the occupants as tearing the car in two.

    Checking on another source:
    Safety Evaluation of Cable Median Barriers in Combination with Rumble Strips on Divided Roads

    The empirical Bayes before–after method was used to evaluate the safety effectiveness of cable median barriers in combination with rumble strips on the inside shoulder of divided roads, using Illinois, Kentucky, and Missouri data. In Illinois and Kentucky, cable median barriers were introduced many years after the inside shoulder rumble strips were installed, while in Missouri, the inside shoulder rumble strips and cable barrier were implemented at about the same time. Hence, the evaluation in Missouri estimated the combined safety effect of inside shoulder rumble strips and cable barriers, while the analysis in Illinois and Kentucky estimated the effect of cable barriers installed on roads with existing inside shoulder rumble strips. The combined Illinois and Kentucky results indicate approximately a 27% increase in total crashes, a 24% decrease in fatal, incapacitating, non-incapacitating, and possible injury crashes (KABC), a 22% decrease in fatal, incapacitating, and non-incapacitating injury crashes (KAB), and a 48% decrease in head-on plus opposite-direction sideswipe crashes (used as a proxy for cross-median crashes). The results from Missouri for total and KABC crashes were very similar to the combined Illinois and Kentucky results. However, the reduction in cross-median crashes in Missouri was much more dramatic, showing a 96% reduction (based on the cross-median indicator only) and an 88% reduction (based on the cross-median indicator plus head-on). An economic analysis showed that this strategy is cost-beneficial.
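
    For anyone curious what the "empirical Bayes before–after method" in that abstract actually computes, here is a minimal sketch with made-up numbers. The study's real safety performance functions and data are not reproduced; this only shows the shape of the calculation:

        # Minimal sketch of the empirical Bayes before-after method. All numbers
        # are invented for illustration; nothing here reproduces the study's data.

        def eb_cmf(obs_before, obs_after, spf_before, spf_after, overdispersion):
            """Crash modification factor from a simplified EB before-after study."""
            # Weight between the SPF prediction and the site's own crash history
            w = 1.0 / (1.0 + overdispersion * spf_before)
            eb_before = w * spf_before + (1.0 - w) * obs_before
            # Expected after-period crashes had the treatment NOT been installed
            expected_after = eb_before * (spf_after / spf_before)
            return obs_after / expected_after  # < 1.0 means crashes went down

        # Example: 40 crashes before, 25 after; SPF predicts 35 and 34; k = 0.2
        print(round(eb_cmf(40, 25, 35.0, 34.0, 0.2), 2))  # ~0.65, a ~35% reduction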

    Now I'm thinking rumble strips in the triangle of death might be a better answer: rumble strips that become louder and rougher closer to the barrier, plus a hatch pattern of painted yellow lines to mark the 'no go' zone.

    Bob Wilson
     
    Last edited: May 11, 2018
    Domenick and WadeTyhon like this.
  21. Pushmi-Pullyu

    Pushmi-Pullyu Well-Known Member

    @bwilson4web

    Thanks for taking the time and effort to do that analysis!

    And certainly that location could use some rumble strips.

     
