Uber autonomous car fatality

Discussion in 'Off Topic' started by Domenick, Mar 22, 2018.


  1. Feed The Trees

    Feed The Trees Active Member

    Perhaps something like a Phase One or a Nikon D4 or D5 could come close, but even great cameras simply don't have enough range to pick up a fully lit subject, high or low beam, and all the detail surrounding it in an otherwise dark scene. It will all come out mostly black. Ask any wedding photographer about the importance of dynamic range and ISO.

    What's going to happen with a camera is that it will see the fully lit area, treat that as the point to expose for, and lose most of the surrounding detail.

    Digital cameras also gather light better at the high end of the exposure range than at the low end, which makes the dark areas very grainy or simply black.

    The only way to know what was really visible in a scene like this is to have been there with the human eye.
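
    A crude way to see that effect, if anyone wants to play with it. A quick Python sketch; the scene luminances are made-up numbers, not measurements from the Tempe video:

    Code:
    import numpy as np

    # Made-up relative luminances: a headlight-lit patch plus dark surroundings.
    scene = np.array([2000.0, 1500.0, 3.0, 1.5, 0.5])

    # "Expose for" the brightest area: scale so the lit patch sits at the top of an
    # 8-bit output, then clip and quantize. Everything else falls to the bottom codes.
    exposure = 255.0 / scene.max()
    image = np.clip(scene * exposure, 0, 255).round()

    print(image)  # roughly [255. 191. 0. 0. 0.]: the dark areas come out essentially black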
     
    Last edited: Apr 1, 2018

  3. Cypress

    Cypress Active Member

    PNW
    Ford reported problems with their safety engineers falling asleep while testing these systems on long drives. They tried numerous methods to keep them alert. It's why they want to skip Levels 3/4 and just go to 5.
     
    bwilson4web likes this.
  4. Feed The Trees

    Feed The Trees Active Member

    Good on them then, that's how it should be.
     
  5. Feed The Trees

    Feed The Trees Active Member

    I'm going to tack on one more thing to the camera bit. The best cameras are never, ever going to do in low light what a human eye can do. Even if you set up the best camera in a dark room like a wedding reception and take a photo without a flash, it will be junk. Meanwhile, you remember from that party that you could tell perfectly well what was going on, who you were looking at, and so on. It's not an issue for the human eye; it's a big issue for the camera sensor.

    A human eye is capable of detecting about 30 stops of light; the best $40k camera maybe 11-14. It's true the eye doesn't use all 30 at the same time (that would be some crazy-high dynamic range!), but what it can do is adjust quickly. So while you're looking at the headlight area and frequently scanning the darker areas, your eyes adapt and you can effectively see across a pretty wide range of light.

    So that's all to say: forget what the craptastic dash cam shows you, and forget any picture from a camera where the headlight-lit area is what's being exposed for.
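
    To put rough numbers on it (Python; using the 30-stop and roughly 12-stop figures above, and each stop being a doubling of light):

    Code:
    # Contrast ratio covered by N stops of dynamic range (each stop = 2x in luminance).
    def contrast_ratio(stops):
        return 2 ** stops

    eye    = contrast_ratio(30)   # human eye, counting its ability to adapt over time
    camera = contrast_ratio(12)   # roughly what a very good sensor captures in one frame

    print(f"eye (with adaptation): {eye:,} : 1")      # 1,073,741,824 : 1
    print(f"camera, one exposure:  {camera:,} : 1")   # 4,096 : 1
    print(f"difference: {eye // camera:,}x")          # 262,144x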
     
    Pushmi-Pullyu and bwilson4web like this.
  6. bwilson4web

    bwilson4web Well-Known Member Subscriber

    Very early Saturday and Sunday nights I tried to spot Tiangong 1 by driving up the ridgeline road to an observation/parking area. Sad to say, a full moon and distant clouds, along with the relatively low elevation of Tiangong 1, blocked direct viewing. But I did spot the outline of an owl patrolling the road.

    One trick was to stand where I could see the right sector of sky but stay shaded from the moon. But any unexpected light from a passing car, or a wrong glance, and my night vision had to come back all over again. It seemed to be an area effect more than a focus issue.
    We'll have to agree to disagree about this. Mobileye claims their Volvo-based system would have detected the pedestrian: https://www.reuters.com/article/us-autos-selfdriving-uber-mobileye/mobileye-says-its-software-would-have-seen-pedestrian-in-uber-fatality-idUSKBN1H22LM

    Mobileye said it took the dashboard camera video released last week by police and ran it through Mobileye’s advanced driver assistance system (ADAS), a building block of even more sophisticated full self-driving systems that is currently found in 24 million vehicles around the world.

    Despite the low quality imaging from the police video, Mobileye’s ADAS technology was able to detect the pedestrian, Elaine Herzberg, and the bicycle she was pushing across the road “approximately one second before impact,” Shashua wrote in the blog, which was published on Intel’s website.

    My rule of thumb is a human has about a 200-250 ms reaction time. I would have expected the Mobileye to apply the brakes within 50-75 ms. But the reports suggest the Uber system did not respond.
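
    For scale, here is what those latencies mean in distance (Python; the ~40 mph speed is my assumption for that stretch of road, not a figure from any report):

    Code:
    # Distance covered between detection and the start of braking at an assumed ~40 mph.
    v = 40 * 0.44704   # mph -> m/s, about 17.9 m/s

    for label, latency in [("human, 250 ms", 0.250), ("automated system, 75 ms", 0.075)]:
        print(f"{label}: {v * latency:.1f} m before the brakes even begin to bite")
    # human, 250 ms: 4.5 m;  automated system, 75 ms: 1.3 m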

    Bob Wilson
     

  8. Feed The Trees

    Feed The Trees Active Member

    Well, Mobileye detecting it from a crappy video is great, but it doesn't change the point that the image isn't an accurate representation of the ambient light that would actually have been visible. In fact, their point was that even from the crappy footage their system was able to detect her. Their point wasn't that the person saw the same thing the camera saw; I can tell you with almost full certainty the person saw far more.
     
  9. Pushmi-Pullyu

    Pushmi-Pullyu Well-Known Member

    Despite the low quality imaging from the police video, Mobileye’s ADAS technology was able to detect the pedestrian, Elaine Herzberg, and the bicycle she was pushing across the road “approximately one second before impact,” Shashua wrote in the blog, which was published on Intel’s website.

    Detection one second before impact might have given an autonomous car time to brake a bit, but it certainly would not be sufficient to stop it from a speed of... what was it, something like 35 MPH?
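
    A rough check on that (Python; the 35 mph figure and a hard-braking deceleration of about 7 m/s^2, roughly 0.7 g, are my assumptions, and this ignores any reaction or actuation delay):

    Code:
    # Can the car stop within the distance covered during a 1-second warning at ~35 mph?
    v = 35 * 0.44704            # ~15.6 m/s
    t_warning = 1.0             # detection about one second before impact
    a = 7.0                     # assumed emergency deceleration, m/s^2 (~0.7 g)

    distance_available = v * t_warning        # ~15.6 m to the point of impact
    distance_to_stop   = v**2 / (2 * a)       # ~17.5 m needed to reach a full stop

    remaining = max(0.0, v**2 - 2 * a * distance_available) ** 0.5
    print(f"available {distance_available:.1f} m, needed {distance_to_stop:.1f} m, "
          f"impact speed if braking instantly: {remaining / 0.44704:.0f} mph")

    Under those assumptions the car can't quite stop, but the impact speed drops from 35 to roughly 11 mph.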

    I can't imagine why an autonomous car would be relying on camera images at night, when it has an active lidar scanner. Perhaps the car is set up to ignore anything smaller than a car (or a moose, to quote Elon Musk) on returns from the lidar scanner? That's pure speculation on my part, though.
     
  10. bwilson4web

    bwilson4web Well-Known Member Subscriber

    We have dynamic cruise control in both our 2014 BMW i3-REx and our 2017 Prius Prime. Both the Uber and Tesla accidents have shown that the detection range is not enough to handle the relative closing-rate problem.

    Our human eyes have longer range and pick up clues about the closing rate that the automated systems lack. But if I keep the dynamic cruise control set speed higher than the lead car's speed, we remain in range. This means we can come to a safe, efficient full stop at a traffic light and easily resume speed; it is an electronic link to the lead car.

    We also see side incursions, like the one in the Uber crash, when merging traffic triggers a collision warning. This is not theory; I can record video if proof is needed.

    I'm in the reality-based, engineering world. If others wish to play Luddite, no problem, as long as it is their choice and does not impact mine.

    Bob Wilson
     
  11. Pushmi-Pullyu

    Pushmi-Pullyu Well-Known Member

    I don't see how that relates to the recent fatal Tesla car accident. Seems to me that the problem there is simply that Tesla cars are not programmed to react to stationary obstacles, period. That's why one Tesla car under control of Autopilot + Autosteer (I'm going to abbreviate that as A+A) ran into a fire truck stopped in a lane of traffic, and it's why in this latest horrible accident, a Tesla car under control of A+A ran into a collapsed barrier in front of the edge of a concrete wall separating lanes of traffic.

    If you haven't seen it before, just look at this official Tesla video from November 2016, demonstrating an improvement to AutoSteer. Watch the small "side view" windows, and note how frequently we see trees and other stationary objects which are well to the side of the road -- or even behind and to the side of the car! -- outlined in green boxes, which according to Tesla's own caption, means Autopilot is detecting them as "in-path" objects. There are literally hundreds of false positives in that ~10 minute drive. My conclusion: If Tesla's Autopilot stopped every time it detected an "in-path" object, then the car would never move at all!



    So, it seems to me that the reason the Tesla car ran into that barrier wasn't because it didn't "see" it soon enough, but because it is programmed to ignore all stationary objects.

    Or, stepping back and taking the broader view, the problem is that Tesla is stubbornly relying on video cameras and very, very unreliable optical object recognition software, rather than moving to active scanning, which will require lidar or high-resolution radar, or both. Camera-based optical object recognition systems just ain't reliable enough, as Tesla's own video very clearly shows. And anyone who knows much about the research being done in robotics should also know that roboticists have been trying for decades to improve optical object recognition to the point that it's actually reliable, without much success. If Tesla thinks it can do in a few months what roboticists have not been able to do in decades... well, I think that's not a realistic goal. As it is, other companies working on autonomous cars are making progress, while Tesla appears to be stalled, beating its head against a brick wall by stubbornly sticking with camera-based systems rather than moving to lidar and/or high-resolution radar.
    -
     

  13. bwilson4web

    bwilson4web Well-Known Member Subscriber

    It relates because I see similar symptoms with:
    • 2014 BMW i3-REx - uses a Mobileye optically based, single-camera system. When it first detects an object, it doesn't know the relative speed, so a stopped car, firetruck, or wall is still approached at the current speed, which may not leave a margin to stop.
    • 2017 Prius Prime - uses Mobileye for lane-keep assist and radar for distance ranging and speed. Yet it has the same problem: detection does not give relative velocity fast enough to leave a safe stopping distance.
    These are important to me because I drive both cars, so I'm not surprised that Tesla has similar issues. In effect, the Tesla and Uber accidents have given me a clue about dynamic cruise control strengths and weaknesses (rough sketch below).
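
    Here is the margin problem in toy form (Python; the ranges, the 0.1 s sample interval, and the 7 m/s^2 braking figure are just illustrative, not measured from either car):

    Code:
    # Estimate the closing rate from two range samples, then check the stopping margin.
    def stopping_margin(range_now, range_prev, dt, decel=7.0):
        closing = (range_prev - range_now) / dt     # m/s toward the obstacle
        needed = closing**2 / (2 * decel)           # distance needed to stop from that rate
        return range_now - needed                   # > 0 means a stop is still possible

    # Stopped obstacle first resolved at 60 m while closing at ~27 m/s (about 60 mph):
    print(stopping_margin(range_now=60.0, range_prev=62.7, dt=0.1))   # ~ +7.9 m of margin
    # Same closing rate, but not resolved until 45 m out:
    print(stopping_margin(range_now=45.0, range_prev=47.7, dt=0.1))   # ~ -7.1 m: too late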

    As for Lidar and the 2016 video, I'm fairly sanguine about them. In January 2009 I saw a Denso booth display of their automated driving system.
    [Image: Denso automated driving system display]
    This is pretty much what our Prius Prime uses.

    Since humans drive using biological optics, I suspect camera-based systems will eventually work. I do like the idea of a phased-array radar with a tight, steerable beam, because it can see through fog better than strictly optical systems, including Lidar. Also, I'm not a fan of the mechanical scanning that Lidar uses.

    Bob Wilson
     
  14. Pushmi-Pullyu

    Pushmi-Pullyu Well-Known Member

    I don't think engineers should be trying to imitate the way the human eye and the human brain see things. First of all, current or near-future computers and software can't match the sophistication and image-processing capability of the visual cortex of the human brain. Secondly, what we should be aiming for is self-driving cars that are better at driving than humans. Humans can't see in the dark, and neither can ordinary video cameras. Of course they could use infrared cameras, but those come with their own set of problems in pattern recognition.

    Better to use active scanning, with high-resolution radar or lidar, which will work just as well in pitch black as in bright sunlight. Is high-resolution radar the same as phased-array radar?

    What's the issue? Does the scanner rotate too slowly for safety, in your opinion?

    Anyway, I think it's pretty clear that the industry will be moving towards multiple (lower cost) solid-state lidar scanners pointed in various directions. I don't know if that would ease your concern with "mechanical scanning" or not. Since lidar uses a laser beam, obviously (in most configurations) it can't scan an arc all at once; it has to sweep back and forth across the arc. (But see "flash lidar" in the infographic at the bottom of this post.) The advantage of a laser beam, of course, is much better accuracy; a radar return image is more "fuzzy", even for high-resolution radar.
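
    As a side note on sweep rates: even a fast spinning lidar only revisits a given bearing a handful of times per second, so there is real motion between looks. A quick sketch (Python; the 10 Hz spin rate is a commonly quoted figure for rotating units, and the 40 mph closing speed is my assumption):

    Code:
    # Closing distance between successive looks at the same bearing for a spinning lidar.
    sweep_hz = 10.0                     # assumed rotation rate
    closing = 40 * 0.44704              # assumed closing speed, ~17.9 m/s

    print(f"{closing / sweep_hz:.1f} m of closing per sweep")   # ~1.8 m between looks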

    High resolution radar looks to be a good backup to see thru fog and rain, but it seems pretty clear to me that lidar should be the primary system. Even in fog and rain, lidar will still be useful for detecting relatively small objects close to the front of the car.

    As I understand it, with solid state lidar, several detectors will be mounted at various points around the car, looking in different directions. Four wide-angle scanners will cover all points of the compass, while a scanner with a narrower forward arc will be able to "see" farther ahead. In the diagram below, there is also a rear-pointed longer range scanner:

    [Image: diagram of solid-state lidar scanner coverage around the car]

    I found this interesting, too:

    [Infographic: lidar types, including flash lidar]
     
  15. bwilson4web

    bwilson4web Well-Known Member Subscriber

    You might find this April 3, 2018 YouTube video interesting, starting at 7:15:


    Yes, I treat phased-array radar and high-resolution radar as the same. The advantage is no moving parts; switching diodes do the beam forming. There is another, more obscure technique that uses extremely short radar pulses and reconstructs the targets with multiple, phase-sensitive receivers; in effect, the radar equivalent of a camera, based on: https://en.wikipedia.org/wiki/Interferometry
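
    For anyone curious what the "beam forming" amounts to, it is just a controlled phase offset across the elements. A minimal sketch (Python; the 8-element array and half-wavelength spacing are illustrative choices, not any particular product):

    Code:
    import math

    # Per-element phases to steer a uniform linear array toward angle theta off boresight:
    #   phase_n = 2*pi * n * (d / wavelength) * sin(theta)
    def steering_phases(n_elements, d_over_wavelength, theta_deg):
        theta = math.radians(theta_deg)
        return [2 * math.pi * n * d_over_wavelength * math.sin(theta)
                for n in range(n_elements)]

    # 8 elements at half-wavelength spacing, beam steered 20 degrees off boresight:
    for n, phi in enumerate(steering_phases(8, 0.5, 20.0)):
        print(f"element {n}: {math.degrees(phi) % 360:6.1f} deg")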

    A firm in Huntsville has done pioneering work in this field: https://timedomain.com/technology/

    Bob Wilson
     
    Last edited: Apr 4, 2018
  16. Pushmi-Pullyu

    Pushmi-Pullyu Well-Known Member

    I've seen Elon's argument against lidar before. It boils down to "Since we have to use cameras to read traffic signs, we need to use them for everything."

    That is, of course, a fallacy. It's the "If all you have is a hammer, then everything looks like a nail" mindset. Trouble is, a good tool box needs a lot more than just a hammer in it. Some things need a screwdriver. There are people who use a hammer as a screwdriver, but they don't get good results!

    Furthermore, I don't know that anyone has ever shown that you can't use lidar to read traffic signs. Tesla has shown they can reliably read traffic signs with cameras, but that's not the same as proving it can't be done with lidar.

    It seems to me -- and I am very far from the only person saying this -- that Elon Musk doesn't want to put lidar in Tesla cars because it's expensive. And he's only arguing against it to justify a cost-based decision, not because it's actually best for autonomous driving.

    Here's my prediction: So long as Tesla stubbornly sticks to its "no lidar and no high resolution radar" policy, it's going to continue to fall behind other developers of autonomous driving systems. You will notice that, over the past year or so, Tesla's autonomous driving advancements have slowed to mere incremental improvements on what they've already got. Elon touted a coast-to-coast autonomous Model S drive by end of 2017. Obviously that didn't happen. And I predict it won't until the car is equipped with either lidar scanners, high-resolution radar scanners, or both!
    -
     
  17. bwilson4web

    bwilson4web Well-Known Member Subscriber

    FYI, from the video I got the impression he wants millimeter-wave radar that can work through fog, snow, and rain. He points out, and I agree, that IR has problems that are cured by moving to the longer wavelengths of radar.

    At 9:30, Elon points out:

    . . . if you are going to pick an active photon generator, doing so in the 400-700 nanometer wavelength is pretty silly, since you are getting those passively ... you would want to do active photon generation in the radar frequencies, approximately around 4 millimeters, because that is (..) penetrating. You can essentially see through snow, rain, dust, fog, anything ... it is just ... I find it quite puzzling that a company would choose to do an active photon system in the wrong wavelength of light.
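
    (For reference, that ~4 mm wavelength lands in the 76-81 GHz band automotive radar already uses; a one-line check in Python:)

    Code:
    # Convert the ~4 mm wavelength he mentions to a frequency: f = c / wavelength.
    c = 299_792_458        # speed of light, m/s
    wavelength = 0.004     # 4 mm
    print(f"{c / wavelength / 1e9:.0f} GHz")   # ~75 GHz, i.e. the automotive radar band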

    A phased-array radar uses solid-state switching to form a narrow beam to send and receive the signal; it does not require moving parts. That is why I enjoyed this announcement:
    http://www.greencarcongress.com/2018/01/20180116-magna.html

    Magna’s ICON RADAR continuously scans its full environment 50 times faster than the time it takes a human to blink an eye, which helps a vehicle make instantaneous decisions in response to complex surroundings. It can detect vehicles at distances that well exceed any current requirements.

    Its state-of-the art imaging capability pulls from 192 virtual receivers incorporated into a single compact system. These virtual receivers are applied to deliver both horizontal and vertical resolution, achieving new benchmark levels for each. In addition, the technology is naturally immune to interference, which will become critical as the number of radar-enhanced vehicles on the road increases.

    Also: https://www.magna.com/media/press-releases-news/releases-news/2018/01/15/news-release---magna-unveils-high-definition-icon-radar---scans-environment-in-four-dimensions

    Bob Wilson
     
  18. Pushmi-Pullyu

    Pushmi-Pullyu Well-Known Member

    I completely agree. I did some online reading about the possibility of using infrared (IR) lidar, but as you say, that's only a partial solution to the problem of seeing thru rain and fog. Plus, if my understanding is correct, it requires active cooling on the IR sensor, to be able to detect any IR return signal cooler than the sensor itself. So that's a physical limitation, and would make the IR lidar unit bulkier, more expensive, and require more power to operate. Seems to me that would rule it out for use in mass produced automobiles. (However, that may be an over-generalization on my part. Electronic cooling systems exist, and are much less bulky.)

    I'm guessing that solid state lidar uses the same approach; no moving parts, much less bulky. Also far, far lower priced. That and resolution superior to high-res radar are the reasons why I think solid state lidar is going to be the go-to tech of choice for fully autonomous vehicles, with high-resolution radar as a backup system to see thru rain and fog.

    My prediction is that Tesla is either gonna get on board the lidar bandwagon, or else get left hopelessly far behind. But we will have to wait and see how my prediction turns out. Certainly some of mine have been entirely wrong!
    -
     
  19. bwilson4web

    bwilson4web Well-Known Member Subscriber

    Source: https://www.ntsb.gov/news/press-releases/Pages/NR20180524.aspx

    WASHINGTON (May 24, 2018) — The National Transportation Safety Board released Thursday its preliminary report for the ongoing investigation of a fatal crash involving a pedestrian and an Uber Technologies, Inc., test vehicle in Tempe, Arizona.

    The modified 2017 Volvo XC90, occupied by one vehicle operator and operating with a self-driving system in computer control mode, struck a pedestrian March 18, 2018. The pedestrian suffered fatal injuries, the vehicle operator was not injured.

    The NTSB’s preliminary report, which by its nature does not contain probable cause, states the pedestrian was dressed in dark clothing, did not look in the direction of the vehicle until just before impact, and crossed the road in a section not directly illuminated by lighting. The pedestrian was pushing a bicycle that did not have side reflectors and the front and rear reflectors, along with the forward headlamp, were perpendicular to the path of the oncoming vehicle. The pedestrian entered the roadway from a brick median, where signs facing toward the roadway warn pedestrians to use a crosswalk, which is located 360 feet north of the Mill Avenue crash site. The report also notes the pedestrian’s post-accident toxicology test results were positive for methamphetamine and marijuana.
    . . .

    More details in the report.

    Bob Wilson
     
    Domenick likes this.
  20. Key part for me:
    "At 1.3 seconds before impact, the self-driving system determined that an emergency braking maneuver was needed to mitigate a collision. According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator."
     
