Ex-Navy SEAL is first person to die while using self-driving vehicle

“A former Navy SEAL has become the first person to die at the wheel of a self-driving car – and a witness has claimed he was watching a Harry Potter movie when his Tesla collided with a truck while on autopilot,” Ollie Gillman reports for The Daily Mail. “Joshua Brown, 40, died after his computer-guided Tesla Model S plowed into a tractor trailer on a freeway in Williston, Florida, in May.”

“Despite the claim, Tesla says it is not possible to play films on the vehicle’s touchscreen – however it is possible Brown was using another device,” Gillman reports. “The driver of the truck, Frank Baressi, said the Tesla driver was ‘playing Harry Potter on the TV screen’ at the time of the crash,” Gillman reports. “‘It was still playing when he died and snapped a telephone pole a quarter mile down the road,’ Baressi, 62, said.”

“Tesla said its autopilot system failed to detect the truck because its white color was similar to that of the bright sky, adding that the driver also made no attempt to hit the brakes. The U.S. National Highway Traffic Safety Administration (NHTSA) is investigating,” Gillman reports. “Tesla said its cars come with autopilot disabled and that owners have to acknowledge the system is new when they use it.”

Read more in the full article here.

MacDailyNews Take: And, unfortunately, he won’t be the last. (But Tesla’s system, even in its current nascent state, is still safer on average than vehicles overall.)

Our condolences to Brown’s family and friends.

[Thanks to MacDailyNews Reader “Lynn Weiler” for the heads up.]

25 Comments

  1. A tractor trailer made a left turn in front of him. The car was confused by the white color of the truck and did not brake. What is surprising to me is that the car doesn’t have some kind of echo location to augment its visual capabilities.

    It is not surprising that some idiot in a tractor trailer would make a left turn in front of another moving vehicle.

    Still, this flaw, as odd as it is, should be easy to compensate for. I wonder how other systems like Google’s would have handled it. Google’s cars have driven back and forth across the country without incident.

    Even with this accident, autonomous vehicles remain better drivers than human beings.

    1. It’s not just that the truck was white. It was frontlit against a very bright hazy sky (Florida in summer) which made it nearly invisible to both the autopilot and the human driver. The trailer was empty and riding four feet or so above ground level, so the black road surface and white stripes on the straight divided highway were visible for a long distance past it. Any forward obstruction scan, whether radar, sonar, or visual, would have passed under the trailer (as would the driver’s forward vision). The reason this was a fatal crash is that the trailer passed over the car’s crumple zones and took off the roof.

      There is no real reason to believe that a vigilant human driver without an autopilot could have avoided this collision or that any non-Tesla vehicle other than a tank could have avoided the fatality. The only evidence that the driver was distracted is the Harry Potter story from the trucker who turned left in front of approaching traffic and therefore has every possible interest in shifting blame elsewhere.

      Tesla does not claim that its cars are self-driving. The autopilot is only to be used under suitable conditions with driver supervision. Even if it _were_ self-driving, no reasonable person would expect a 0% accident rate. The same day as this accident, there were about 100 other fatal collisions in America that did not involve cars with driver assistance. One could reasonably expect a significant reduction in fatal collisions over cars driven by average human drivers (who are not exactly NASCAR material). In fact, the driver who died in this accident had told all his friends that the autopilot saved him from a major accident a week earlier when someone swerved into his lane from his blind spot.

      As you say, this accident hardly proves that autonomous vehicles are any less safe than typical human drivers.

      1. Sounds like you’ve read the police report and know all the details? He didn’t brake because he was distracted. He was distracted because he took responsibility for his life out of his own hands and handed it to computer software. A vigilant driver would have reacted. What concerns me is that apparent idiots are admitted to the Navy SEALs.

    2. Elon mentioned that the crossing truck may not have triggered automatic braking because it could be interpreted as an overhead road sign… solid up high but not in front of the car at road level. The car is designed to reject these signals so you don’t have unexpected braking as you pass under a sign. Obviously, this needs to be tweaked a little…

    3. I completely understand the camera confusing the sky with the truck, but what about ultrasonic sensors? Tesla has those, right? Or laser detection, like the sensors many cars use to detect obstacles while parking. Please don’t tell me that Tesla was so stupid as to rely only on images (light and vision) and not laser and ultrasonics too.
      My Avalon has a laser system to detect the car in front of it and maintain a safe distance while using cruise control. How is it possible Teslas don’t have alternative detection systems for such a critical function where lives depend on it?

      1. Again, the problem was that the forward collision monitors were looking for something near the road surface. You don’t want them to trigger panic braking (and a possible collision from behind) for overhead signs, low bridges, and roof beams in a parking garage. In this case, the obstacle was just low enough to catch the top of the windshield.

        Trucks without autopilot hit bridges every day. This was the same sort of miscalculation. That’s why Tesla tells drivers to keep their hands on the wheel and maintain a proper lookout… which still might not have prevented this collision.

        The Tesla driver may also have been distracted by a portable DVD player. Again, accidents caused by cellphones, texting, and other distractions kill people every day. Most of those lives would be saved by wider adoption of collision avoidance technology, but motor transport will never be 100% safe.

  2. I wrote the long post below when the article about the Google car hitting the bus came out (“the Google car in autonomous mode re-entered the center of the lane, it struck the side of the bus, causing damage to the left front fender, front wheel and a driver side sensor. No one was injured in the car or on the bus.

    Google said in a statement on Monday that “we clearly bear some responsibility, because if our car hadn’t moved, there wouldn’t have been a collision”).

    Current tech does not seem to indicate that self-driving cars are safe. Many of the details released by Google are fudged, as can be seen below:

    —–

    Eventually there will doubtless be self-driving cars, but not in the NEAR future. Google has done a splendid PR job (as usual) on the effectiveness of its cars, but if you drill down it is troubling.

    Google touts its driving record, but its cars usually drive on selected, carefully mapped-out routes at unnaturally low speeds, and have been stopped by police for it:

    Telegraph:
    “An officer in Mountain View, California, near Google’s headquarters, stopped one of the company’s prototype vehicles after it was holding up traffic by driving 24 mph”

    Even so, Google cars have TWICE the average rate of accidents compared with human-driven vehicles. Google dismisses those numbers (i.e., does not put them on record) because it claims they were mostly caused by the OTHER driver.

    But that’s just it. I don’t know how many accidents I’ve avoided, accidents which would have been caused by the OTHER driver: making sudden turns, going through red lights, etc.

    When I see a kid playing with an unleashed dog on the SIDE of the road (not on the road itself), I slow down. Does the Goog car understand millions of issues like that?

    Apparently not:

    New York Times: “One Google car, in a test in 2009, couldn’t get through a four-way stop because its sensors kept waiting for other (human) drivers to stop completely and let it go.”

    (Perhaps Goog cars will ONLY succeed when ALL other cars are computer driven, like a closed rail line?)

    Also, Google has fudged, or at least padded, its Goog car reports:

    For example, it doesn’t talk much about the fact that there would have been 13 accidents in ONE year (2014-2015) if its HUMAN test drivers had not intervened:

    Guardian:
    “Between September 2014 and November 2015, Google’s autonomous vehicles in California experienced 272 failures and would have crashed at least 13 times if their human test drivers had not intervened, according to a document filed by Google with the California Department of Motor Vehicles (DMV)”

    (That would have blown its ‘safe driving’ record sky high.)

    It also doesn’t talk much about what it calls ‘disengagements’:

    article:
    “The figures show that during the 14-month period… the cars unexpectedly handed control back to their test drivers, or the drivers intervened of their own accord…
    In 272 (out of 341) of those disengagements, the car detected a technology failure such as a communications breakdown, a strange sensor reading or a problem in a safety-critical system such as steering or braking”

    (Goog does not rate those as ‘accidents’ or talk about them much in public.)

    (I have to note that when I drive around, I see cars with busted tail lights and headlights practically every day. What would it take to maintain a Goog car, which now contains hundreds of thousands of dollars’ worth of finicky sensors and cameras? Photos show Goog cars being serviced by groups like a pit crew.)

    “In the remaining 69 disengagements, the human driver took control of the car on their own initiative, simply by grabbing the steering wheel or pressing the accelerator or brake pedal.”
    “However, Google admits that its drivers actually took over from their vehicles “many thousands of times” during the period. The company is reporting only 69 incidents because Google thinks California’s regulations require it only to report disengagements where drivers were justified in taking over, and not those where the car would have coped on its own”

    (How many of these would have resulted in problems or accidents if the human driver hadn’t taken over? Note, over and over, it is Google that is the ONE WHO DECIDES WHAT IT RECORDS AS “SAFE”.)

    When pressed by various consumer groups, it said in a report: “we identified some aspect of the [car]’s behavior that could be a potential cause of contacts in other environments or situations if not addressed. This includes proper perception of traffic lights, yielding properly to pedestrians and cyclists, and violations of traffic laws.”

    NOTE THIS WAS BEFORE THE CURRENT ACCIDENT WITH THE BUS. I guess Goog was right about the “contacts in other environments” the car might get into… luckily it was going 2 mph.

    Google has been extremely cagey about giving info about its cars, releasing stuff only when pressed. It still refuses to divulge data from before 2014 (people speculate that earlier models had huge numbers of problems that would skew Goog’s ‘great safety’ data).

    Google: “While Google has been testing its self-driving cars since 2008, the company will not be releasing disengagement data from before 2014. ‘This is the period we’re required to share with the DMV. Any data we would have from before that is just outdated.’”

    Once again, I believe that there WILL be self-driving cars and they will be safer than many humans, but it’s a long way off, and we have to watch the B.S. from companies like Goog (an ADVERTISING company which is primarily good at spin).

    Goog’s OTHER hardware adventures: the blimp that crashed, the Google Glass that was going to take over the world, and Schmidt in 2011: “Google TV will be built into most TVs next summer.”

    1. Autonomous cars are already safer than human drivers. Automobile deaths number in the tens of thousands in America alone every year. Just how safe do they need to be, considering how ridiculously dangerous human drivers already are?

      “We should be concerned about automated vehicles,” says Bryant Walker Smith, a University of South Carolina law professor who studies the technology. “But we should be terrified about today’s drivers.”

      http://phys.org/news/2016-03-autonomous-cars-safe.html

      1. eh?

        Your link seems to back up what I’m saying rather than the reverse.

        The only info in the link showing Google cars are safer is a study PAID FOR by GOOGLE, and even that is not very convincing.

        I quote:

        “A Virginia Tech University study commissioned by Google found that the company’s autonomous cars crashed 3.2 times per million miles compared with 4.2 times for human drivers. But the study had limitations. The human figures were increased to include an estimate of minor crashes that weren’t reported to police. All autonomous car crashes in California, however, must be reported. The study also didn’t include potential crashes that were avoided when human backup drivers took control.”

        1) The Google-paid institute INCREASED the accident rates of humans (“The human figures were increased”) by ESTIMATING (guessing) the number of NOT-reported accidents! There is NO real way to verify this, as they were “NOT REPORTED”. This is the proof?

        2) “The study also didn’t include potential crashes that were avoided when human backup drivers took control.”

        Er…
        even in the post it reads, “the past two years, drivers took control 13 times when its cars likely would have hit something.”

        Actually, the time period was closer to ONE year, a 14-month period between Sept 2014 and Nov 2015.
        TWELVE crashes out of 56 cars (NOT all cars driving all the time)… IF THE HUMAN HAD NOT INTERVENED.
        TWELVE crashes from 50+ cars (one quarter?)… how would THAT have skewed Google’s data?

        And that TWELVE (not recorded in the stats given out by the institute) was Google’s OWN admission that they would have caused accidents. There were HUNDREDS of OTHER human interventions which they recorded but claimed were irrelevant because (according to Google) they would not have caused accidents…
        (Malfunctions of sensors, i.e. cases where the car would have gone “off the rails” if the human had not intervened, were among these OTHER interventions not deemed relevant.
        I dunno, but that seems worrisome…)

        3) Note that even with ALL of the ABOVE (estimating non-reported human accidents, etc.), 3.2 vs. 4.2 crashes per million miles is a very small difference. (A rough back-of-the-envelope comparison is sketched after this comment.)

        Especially since, as I’ve shown from other articles, the Goog cars were sometimes traveling at like 24 mph.

        I can go on and on.


        Note, friends, I’m not against self-driving cars, and like I said, the problems will be solved. But the Goog car today is half-cooked and PR-spun, like Goog Glass.
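
A minimal Python sketch of the back-of-the-envelope comparison referenced above: it takes the 3.2 and 4.2 crashes-per-million-miles figures from the quoted study at face value and shows how counting the 13 crashes that test drivers averted would shift the autonomous rate. The fleet mileage below is a deliberately made-up placeholder, not a real Google figure.

```python
# Rough sketch only: how adding crashes that test drivers averted would move a
# crashes-per-million-miles rate. The fleet mileage is a hypothetical placeholder.

def rate_per_million(crashes: float, miles: float) -> float:
    """Crashes per million miles driven."""
    return crashes * 1_000_000 / miles

ASSUMED_FLEET_MILES = 1_500_000  # placeholder, not an actual Google mileage figure

# Number of crashes implied by the quoted 3.2-per-million-miles rate at that mileage.
implied_crashes = 3.2 * ASSUMED_FLEET_MILES / 1_000_000

# Now also count the 13 crashes the DMV filing says would have happened
# without human intervention.
adjusted = rate_per_million(implied_crashes + 13, ASSUMED_FLEET_MILES)

print("quoted autonomous rate: 3.2 per million miles")
print(f"with averted crashes counted: {adjusted:.1f} per million miles "
      f"(human drivers in the study: 4.2)")
```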

  3. Any aircraft pilot familiar with autopilots and their MALFUNCTIONS will tell you that “autopilot cars” will not be the answer.

    A competent driver/pilot will always be needed to take over immediately because of malfunctions/miscompares/hiccups, etc.

    Remember, aircraft operate with miles of separation… cars do not!

  4. That’s the problem with this technology. It can’t assume every other car is likewise obeying the rules of the road and the technology has to be on the lookout for aberrant behavior from human drivers. At least until every car is driven by computers and they all are talking to each other. But that won’t happen for a mighty long time.

      1. Probably both. Won’t be the last instance of someone killed by auto-auto driving. Personally, I think it won’t be until it’s accompanied by some overseeing fuzzy-logic AI that it might be anywhere in the realm of “safe.” They’ve still got a long way to go.

  5. Well… it’s Florida. Big truck involved. And now the media will pound on Tesla because… they’re super successful, taking risks, pushing the world’s auto industry in a new and necessary direction? I don’t get it. I’m surprised it’s taken this long for such an accident. Hopefully Musk will come out with some stats on the millions of miles per death for a Tesla as compared to the next safest car on the road. Not said in the news: it’s still the safest car ever built, period.

  6. These technologies are for driver assistance, not driver replacement, at this early developmental point.

    As to his being an ex-S.E.A.L., I never quite got the need for soldiers in the Navy. Of course, the Army has divers, but does not try to make Sailors out of them. Obviously, if he was watching a movie in a moving car instead of driving he was either stupid or showing off for a passenger.

    The braking-failure explanation sounds pretty shallow, and I am a shareholder in Tesla. I bought a new car last week that has driver-assist technologies, and not a word is mentioned about the color of a car affecting braking. Maybe Tesla should contact Fuji Heavy Industries to license Subaru’s technology, which so far is reported to outperform competing systems.

  7. CONCEPT: RADAR

    We’re in the baby days of autonomous driving cars. If all that’s being used is a camera system that’s supposed to ID this from that: Not gonna fly!

    Add a radar system for judging:
    A) Distance
    B) The difference between white and white. The sky is at infinity. The truck is at zero. Better brake for the truck.

    1. For the third time, I will repeat that the Tesla DID have radar, but the software was trained to ignore returns above eye level:

      Tesla press release: “Since January 2016, Autopilot activates automatic emergency braking in response to any interruption of the ground plane in the path of the vehicle that cross-checks against a consistent radar signature. In the case of this accident, the high, white side of the box truck, combined with a radar signature that would have looked very similar to an overhead sign, caused automatic braking not to fire.”
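
A minimal sketch, purely for illustration, of the kind of cross-check that statement describes: automatic braking fires only when a vision-detected interruption of the ground plane is confirmed by a radar return low enough to be in the car’s path, which is exactly the filter a high trailer side resembling an overhead sign can slip through. Every name, field, and threshold below is an assumption, not Tesla’s actual code.

```python
# Hypothetical illustration of a vision/radar cross-check for emergency braking.
# Names and thresholds are invented; this only shows the general shape of the logic.

from dataclasses import dataclass

@dataclass
class RadarReturn:
    distance_m: float        # range to the reflecting object
    bottom_height_m: float   # estimated height of the object's lower edge above the road

@dataclass
class VisionObstacle:
    distance_m: float        # range estimated from the camera's ground-plane model
    in_path: bool            # does it interrupt the ground plane ahead of the car?

OVERHEAD_CLEARANCE_M = 2.0   # returns whose lower edge sits above this are treated as signs/bridges
RANGE_AGREEMENT_M = 5.0      # vision and radar must agree on range within this tolerance

def should_emergency_brake(vision: VisionObstacle, radar: RadarReturn) -> bool:
    if not vision.in_path:
        return False
    # Reject returns that look like overhead structures; per the statement above,
    # this is the filter that also rejected the high, white trailer side.
    if radar.bottom_height_m > OVERHEAD_CLEARANCE_M:
        return False
    # Require the two sensors to agree on range before firing the brakes.
    return abs(vision.distance_m - radar.distance_m) <= RANGE_AGREEMENT_M

# Example: a high, sign-like radar return is rejected even though vision flags something ahead.
print(should_emergency_brake(VisionObstacle(40.0, True), RadarReturn(41.0, 2.4)))  # False
```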

      1. but the software was trained to ignore returns above eye level

        Thank you! But this is the first time I’ve seen ‘radar’ in any quotes from Tesla in the press. Apologies.

        Obviously, the radar software has a deadly problem.

        And as usual, we aren’t anywhere near having actual ‘artificial intelligence’ worth mentioning. Baby steps, slowly plodding forward.

        1. My guess is that they will be modifying the software to detect obstacles like this above the roadway, but that isn’t as easy as it sounds. You need to avoid false negatives that allow frontal collisions like this—assuming it was avoidable at the speed the vehicle was going—but also false positives that cause unexpected braking with the risk of a skid or rear-end collision. The calculation of whether there is adequate clearance under a potential obstacle like a bridge or sign must allow not only for flat roads, but also for the car traveling up or down a hill.

          There is a reason that Tesla tells drivers to keep alert while autopilot is engaged, and to obey speed limits and other traffic laws. Existing driver assistance systems are almost certainly safer than riding with a typical human driver, but they aren’t foolproof. How many people would be killed by distracted or negligent driving if we waited for automatic systems to be perfect?
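
To make the clearance problem in the comment above concrete, here is a toy sketch of an overhead-clearance check that allows for road grade. The vehicle height, safety margin, and flat-plane trigonometry are all simplifying assumptions for illustration, not any vendor’s production logic.

```python
# Toy sketch: whether the car can pass under an overhead object depends on the road's
# grade between here and the object, not just the object's measured height.

import math

VEHICLE_HEIGHT_M = 1.45   # assumed height of the car's roofline
SAFETY_MARGIN_M = 0.30    # extra clearance demanded before ignoring a return

def can_pass_under(obstacle_bottom_m: float, distance_m: float, grade_deg: float) -> bool:
    """obstacle_bottom_m: lower edge of the overhead object, measured above the road
    at the car's current position; grade_deg: uphill (+) or downhill (-) pitch ahead."""
    # If the road climbs toward the obstacle, the car's roof will be higher relative
    # to the obstacle by roughly distance * tan(grade) when it arrives.
    rise_m = distance_m * math.tan(math.radians(grade_deg))
    effective_clearance = obstacle_bottom_m - rise_m - VEHICLE_HEIGHT_M
    return effective_clearance >= SAFETY_MARGIN_M

# A return 2.2 m up reads as passable on a flat road, but not on a 2-degree climb 40 m out.
print(can_pass_under(2.2, 40.0, 0.0))   # True
print(can_pass_under(2.2, 40.0, 2.0))   # False
```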

  8. There seems to be a bit of information being left out of the arguments here.

    1. The people that got to the Tesla first stated that a video was still playing on the large screen.

    2. The truck driver stated that he’d waited for another car to pass and started his turn when no other cars were visible.

    3. The Truck Driver stated that the Tesla appeared over a ridge after he was into his turn.

    4. Another driver stated that the Tesla had passed her at a high rate of speed,… and she was doing 85 mph at the time!

    5. The Tesla was in the passing lane, but changed to the slow lane before striking the truck.

    I think there is too little information at this time to be jumping to conclusions.

  9. How many fatal human-driver accidents were there on that same day across the globe because of visibility problems? HUNDREDS. So let’s everyone get their panties in a bunch about 1 Autopilot accident, which will probably turn out to be that the driver was speeding and did not have his hands on the wheel.
