Alphabet Inc’s Google says it bears ‘some responsibility’ after self-driving car hit bus

“Alphabet Inc’s Google said on Monday it bears ‘some responsibility’ after one of its self-driving cars struck a municipal bus in a minor crash earlier this month,” David Shepardson reports for Reuters. “The crash may be the first case of one of its autonomous cars hitting another vehicle and making an error. The Mountain View, California-based Internet search leader and tech firm said it updated its software after the crash to avoid future incidents.”

“In a Feb. 23 report filed with California regulators, Google said the crash took place in Mountain View on Feb. 14 when a self-driving Lexus RX450h sought to get around some sandbags in a wide lane,” Shepardson reports. “Google said in the filing the autonomous vehicle was traveling at less than 2 miles per hour, while the bus was moving at about 15 miles per hour. The vehicle and the test driver ‘believed the bus would slow or allow the Google (autonomous vehicle) to continue,’ it said. But three seconds later, as the Google car in autonomous mode re-entered the center of the lane, it struck the side of the bus, causing damage to the left front fender, front wheel and a driver side sensor.”

“The crash comes as Google has been making the case that it should be able to test vehicles without steering wheels and other controls,” Shepardson reports. “In December, Google criticized California for proposing regulations that would require autonomous cars to have a steering wheel, throttle and brake pedals when operating on public roads. A licensed driver would need to be ready to take over if something went wrong.”

Read more in the full article here.

MacDailyNews Take: Shocking.

Hey, what’s a tougher sell, a car without a steering wheel, gas pedal and brakes or glasses that make you look like an asshole?

SEE ALSO:
U.S. tells Google computers can qualify as drivers – February 10, 2016
Why driverless cars will screech to a halt – February 9, 2016
Google’s self-driving cars hit the road this summer – May 15, 2015
Google acknowledges 11 accidents with its self-driving cars – May 11, 2015
Apple vs. Google in self-driving cars: To map or not to map? – March 6, 2015

[Thanks to MacDailyNews Readers “Fred Mertz” and “Lynn Weiler” for the heads up.]

28 Comments

    1. I’m going to guess the bus driver didn’t think the Google car would move into the lane, as it was travelling so slowly. A human driver would probably have entered the lane at a higher speed (maybe 5+ mph), which the bus driver would have recognized as ‘intent’ to enter the lane to go around the sandbags.

    1. One ginormous drawback for me, and I’m amazed this hasn’t been brought to the forefront: I can’t drive 55!!! Do we think these self-driving cars will, at any time, exceed the speed limit? I’m guessing that, in theory, passing on a highway won’t exist in a driverless-car world. Just merge and coast (yawn).
      I can see a driverless car being used as a service, such as a cab or Uber. But as for a personal purchase, I’m not gonna sacrifice my driving-seat sanctuary.
      I luuuuuuuuv driving. I would be all for alternative fuel options, anything to dramatically reduce emissions. But I ain’t giving up the pure joy of driving.

  1. Eventually there will doubtless be self-driving cars, but not in the NEAR future. Google has done a splendid PR job (as usual) on the effectiveness of its cars, but if you drill down it is troubling.

    Google touts its driving record, but its cars usually drive on carefully selected, mapped-out routes at unnaturally low speeds, and have been stopped by police for it:

    Telegraph:
    “An officer in Mountain View, California, near Google’s headquarters, stopped one of the company’s prototype vehicles after it was holding up traffic by driving 24 mph”

    Even so, Google cars have TWICE the average accident rate of human-driven vehicles. Google dismisses the numbers by noting that the accidents were mostly caused by the OTHER driver.

    But that’s just it. I don’t know how many accidents I’ve avoided, accidents which would have been caused by the OTHER driver: making sudden turns, running red lights, etc.

    When I see a kid playing with an unleashed dog at the SIDE of the road (not in the road), I slow down. Does the Goog car understand the millions of issues like that?

    Apparently not:

    New York Times:
    “One Google car, in a test in 2009, couldn’t get through a four-way stop because its sensors kept waiting for other (human) drivers to stop completely and let it go.”

    (Perhaps Goog cars will ONLY succeed when ALL other cars are computer-driven, like a closed rail line?)

    Also, Google has fudged, or at least padded, its Goog car reports:

    For example, it doesn’t talk much about the fact that there would have been 13 accidents in ONE year (2014–2015) if its HUMAN test drivers had not intervened:

    Guardian:
    “Between September 2014 and November 2015, Google’s autonomous vehicles in California experienced 272 failures and would have crashed at least 13 times if their human test drivers had not intervened, according to a document filed by Google with the California Department of Motor Vehicles (DMV)”

    (That would have blown its ‘safe driving’ record sky high!)

    It also doesn’t talk much about what it calls ‘disengagements’:

    article:
    “The figures show that during the 14-month period… the cars unexpectedly handed control back to their test drivers, or the drivers intervened of their own accord…
    In 272 (out of 341) of those disengagements, the car detected a technology failure such as a communications breakdown, a strange sensor reading or a problem in a safety-critical system such as steering or braking”

    (Goog does not rate those as ‘accidents’ or talk much about them in public.)

    (I have to note that when I drive around I see, practically every day, cars going around with busted tail lights and headlights. What would it take to maintain a Goog car, which now contains hundreds of thousands of dollars’ worth of finicky sensors and cameras? Photos show Goog cars being serviced by groups like a pit crew.)

    “In the remaining 69 disengagements, the human driver took control of the car on their own initiative, simply by grabbing the steering wheel or pressing the accelerator or brake pedal
    “However, Google admits that its drivers actually took over from their vehicles “many thousands of times” during the period. The company is reporting only 69 incidents because Google thinks California’s regulations require it only to report disengagements where drivers were justified in taking over, and not those where the car would have coped on its own”

    (How many of these would have resulted in problems or accidents if the human driver didn’t take over? Note, over and over, it is Google who decides what it records as “SAFE”.)
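    For what it’s worth, the breakdown can be sanity-checked from the quoted figures alone. A minimal sketch (every number below comes straight from the Guardian excerpt above; the percentages are just simple arithmetic, not additional data):

```python
# Tally the DMV-reported disengagement figures quoted above
# (September 2014 - November 2015). No outside data is used.
total_disengagements = 341   # all disengagements Google reported
tech_failures = 272          # car detected a technology failure
driver_initiated = 69        # driver grabbed the wheel or hit a pedal
would_have_crashed = 13      # estimated crashes absent intervention

# The two reported categories should account for every disengagement.
assert tech_failures + driver_initiated == total_disengagements

tech_share = tech_failures / total_disengagements       # roughly 0.80
crash_share = would_have_crashed / total_disengagements  # roughly 0.04

print(f"{tech_share:.0%} of disengagements were technology failures")
print(f"{crash_share:.1%} of disengagements could have ended in a crash")
```

    So by Google’s own filing, about four in every hundred disengagements were potential crashes, which puts the “safe driving” framing in context.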

    When pressed by various consumer groups, Google said in a report: “we identified some aspect of the [car]’s behavior that could be a potential cause of contacts in other environments or situations if not addressed. This includes proper perception of traffic lights, yielding properly to pedestrians and cyclists, and violations of traffic laws.”

    NOTE THIS WAS BEFORE THIS CURRENT ACCIDENT WITH THE BUS. I guess Goog was right about the “contacts in other environments” the car might get into… luckily it was going 2 m.p.h.

    Google has been extremely cagey about giving info about its cars, releasing stuff only when pressed. It still refuses to divulge data from before 2014 (people speculate that earlier models had huge numbers of problems that would skew Goog’s ‘great safety’ data):

    Google: “While Google has been testing its self-driving cars since 2008, the company will not be releasing disengagement data from before 2014. ‘This is the period we’re required to share with the DMV. Any data we would have from before that is just outdated.’”

    Once again, I believe that there WILL be self-driving cars and they will be safer than many humans, but it’s a long way off, and we have to watch the B.S. from companies like Goog (an ADVERTISING company which is primarily good at spin).

    Goog’s OTHER hardware adventures: the blimp that crashed, the Google Glass that would take over the world, and Schmidt in 2011: “Google TV will be built into most TVs next summer.”

      1. Google admitting it “would have crashed at least 13 times if their human test drivers had not intervened” in one year (and that’s from fewer than 50 Goog cars going at unnaturally low speeds) doesn’t bother you?

        well ok, whatever.

        I’ve never had an accident despite driving for two decades, not even a speeding ticket. Maybe I “think different”.

        I BET YOU THAT IF YOU HAD ENGINE GOVERNORS (speed limiters) LIMITING ALL HUMAN-DRIVEN CARS TO UNDER 30 M.P.H. (i.e., driving like Goog cars), THE NUMBER OF HUMAN ACCIDENTS WOULD BE EXTREMELY LOW.

    1. If available in the NEAR future, I would expect the automated system to work only on highways/freeways. Basically anywhere you would expect to use ‘cruise control’.

    2. “One Google car, in a test in 2009, couldn’t get through a four-way stop because its sensors kept waiting for other (human) drivers to stop completely and let it go.”

      Some might say that’s a long time ago and Google has ‘improved’ its car, but reports show that issues like this are very hard to solve, because it would mean programming the car to ‘take risks’ or ‘disobey rules’. (People say legally it can’t do that.)

      The four-way-stop problem is due to ‘other drivers’, as Google keeps pointing out, but like I said, that’s just it: the roads and sidewalks are filled with humans and weird problems (RE: BUS CRASH). Can the Goog car deal with them as effectively as human drivers?

      Recently there were rumours that Ford wants to start a completely NEW COMPANY to help work on the Goog car. The new company is to INSULATE parent Ford from LAWSUITS… (think about that).

    3. They are testing and refining the thing, so it is not ready for prime time. If they have that many issues after mass-market release, then it’s a problem.

      I once witnessed a very old, hunched-over lady slowly walk twenty feet to her car in about twenty minutes. She finally got in the thing and raced out of the parking lot like a speed demon. The point is, people are crazy and they are driving weapons. Don’t get me wrong, I loved driving my Porsche with the top down, twisting and turning over the country roads at speeds that could get me arrested, but the sooner self-driving cars arrive, the safer it will be for everybody.

      1. you have a point

        Self-driving cars will come, as I acknowledged in my first post, and they will help a lot of people who don’t want to drive: the physically challenged, the elderly, etc.

        I’m just pointing out that Google is a ‘hype’ machine (see Schmidt’s “Goog TV will be on most TV sets by 2012” prediction) and we have to take what they claim and promise with a large dose of salt.

        I believe, though, that before completely self-driving cars arrive there still needs to be an intermediate stage of ‘assisted’ cars (self-parking, collision avoidance, assisted braking, fog radar, etc.) to work out the kinks. Although most of this tech is available now, very few cars have it; it needs refinement to get better and keep costs down. The Goog car is like the Goog Glasses: too soon, with undercooked tech.

        My hope is that Apple will make a good mid priced electric vehicle and then eventually move to self driving.

  2. If the thing can’t deal with a bus, why in the world would anyone think it could deal with an ice patch, or a child stooping down to pick up a toy?

    Tech “experts” love to talk about this stuff being right around the corner. See you in twenty years: it will STILL be right around the corner.

    1. You’ve just described the history of AI, artificial intelligence. It’s been ‘almost here’ for a few decades, with a number of come-and-gone prediction dates.

      We’re certainly making progress. The success and gradual improvement of competing expert systems on smartphones is clear. But as I point out below, it is HUBRIS to believe what we can program today is going to be adequate for anything involving public safety. It’s also outright silly to think we’re going to overcome our profound problem with secure software coding. Code only becomes more complex, with an accompanying increase in the density of coding errors, particularly regarding memory management.

      Real artificial insanity? No doubt. Real artificial intelligence? Maybe in a few more decades. ⏳

  3. The Word Of The Day Is:

    HUBRIS
    – n. 1: overbearing pride or presumption

    Seriously, Google? You’re that in love with your AI that you’d risk the lives of riders and pedestrians on its quality? I know better. Why don’t you? 😛

    1. To be fair, I’d bet my life on the quality of their car AI over the actual “intelligence” of booze-besotted drivers driving around Friday and Saturday nights.

  4. Do not call it self-driving; it was not driving itself. It was driven by the code produced by a driver/coder. Arrest that driver/coder for the sake of accountability. It’s the coder, stupid. Do not use the Nazi excuse, “I was only following orders” (to make the code), so it’s not my fault.
