U.S. tells Google computers can qualify as drivers

“U.S. vehicle safety regulators have said the artificial intelligence system piloting a self-driving Google car could be considered the driver under federal law, a major step toward ultimately winning approval for autonomous vehicles on the roads,” David Shepardson and Paul Lienert report for Reuters. “The National Highway Traffic Safety Administration told Google, a unit of Alphabet Inc., of its decision in a previously unreported Feb. 4 letter to the company posted on the agency’s website this week.”

“Google’s self-driving car unit on Nov. 12 submitted a proposed design for a self-driving car that has ‘no need for a human driver,’ the letter to Google from National Highway Traffic Safety Administration Chief Counsel Paul Hemmersbaugh said,” Shepardson and Lienert report. “‘NHTSA will interpret ‘driver’ in the context of Google’s described motor vehicle design as referring to the (self-driving system), and not to any of the vehicle occupants,’ NHTSA’s letter said. ‘We agree with Google its (self-driving car) will not have a ‘driver’ in the traditional sense that vehicles have had drivers during the last more than one hundred years.'”

“Google told NHTSA that the real danger is having auto safety features that could tempt humans to try to take control,” Shepardson and Lienert report. “Google ‘expresses concern that providing human occupants of the vehicle with mechanisms to control things like steering, acceleration, braking… could be detrimental to safety because the human occupants could attempt to override the (self-driving system’s) decisions,’ the NHTSA letter stated. NHTSA’s Hemmersbaugh said federal regulations requiring equipment like steering wheels and brake pedals would have to be formally rewritten before Google could offer cars without those features.”

Read more in the full article here.

MacDailyNews Take: The problem is mixing these vehicles on the road with unpredictable human drivers. If autonomous vehicles are to come online, the two will have to coexist for quite a long period of time.

Why driverless cars will screech to a halt – February 9, 2016
Apple vs. Google in self-driving cars: To map or not to map? – March 6, 2015


  1. Watch and understand the car wreck scene in the Will Smith movie version of I, Robot. Will the cars decide that hitting the baby carriage with one occupant is a better choice than possibly wrecking the GooCar?

    1. There have been various instances in aviation where both human intervention overriding autonomous decisions, and humans failing to question those autonomous decisions despite signs of error, have led to serious accidents. The fact is, though, that many aircraft have been saved by human intervention when autonomous decision-making goes seriously wrong. Those systems will rarely, if ever, put things right thereafter, because in the present state of the technology they would rarely understand, at that stage, that they had performed erroneously in the first place, since in theory the error should not happen. In the best-case scenario, such a system could only run through a preset regime of routines, which would be too late even if potentially effective, or might even make things worse.

      A few points. Can the software driver be arrested and legally punished? Will it ever be conceded that it could make a mistake? If it is held responsible, who will be prosecuted in its place? Or are we all to be cannon fodder for its unfettered development? Will we get video images of the panic inside the vehicle as it heads for disaster with no way for the occupants to take control? Interesting times.

      1. Computers mostly handle major portions of an airplane’s flight profile. Yet we still have a crew of two on the flight deck. Why? Because neither machine nor man alone is perfect.

        When the FAA eliminates crews from the flight deck, then cars will drive themselves.

        The thought of self-driving, robotic cars following a precise, logical approach to driving, mixed with a slew of aholes on the road at the same time, is a recipe for disaster.

      2. Comparing the AI technology currently being developed for self-driving cars (which monitors multiple cameras and sensors, assesses the traffic and conditions around the vehicle, and makes decisions based on them) to the fairly simple and straightforward systems automating flight is a bit disingenuous. None of the autopilot systems are AI; none are designed to independently control all aspects of flight, making decisions along the way. All they are designed to do is fly the aircraft along a predefined path and land it on a runway. Mind you, even this requires proper equipment on the ground, in addition to specific and precise input from the pilot.

        This is completely uncharted territory. Nothing from our past (or present time) compares; certainly not the current state of automation in the aircraft cockpits.

        1. Something could be said about the difference in complexity between driving on a road (basically movement in two dimensions) vs. flying (movement in three dimensions). Significant factors for aircraft would include drift, drag, and turbulence, vs. handling intersections, obstacles, road conditions, and braking/accelerating for road vehicles. Other factors, such as avoiding traffic, apply to both. In some ways, handling an aircraft from take-off to landing is more complex than handling a car from one destination to another, especially since you can’t stop a plane in mid-air, whereas you can brake a car to a stop and avoid an accident.

    2. I am so sick with America’s and Silicon Valley’s obsession with self-driving cars.

      -A lot of people probably DON’T want or need a self-driving car. People have the freedom to be in control and can also enjoy driving. Being in control can be better, too (quick changes of direction, Sunday drives, shortcuts, etc.)
      -Anyone who thinks cars will drive themselves with no human required is a complete idiot. We’ve had “self-driving” airplanes for decades, but at least two pilots are still required to pilot the aircraft. Even with this technology, lots of manual overrides occur all of the time — dealing with turbulence, busy runways, take-off and landing, approaches, etc. A road full of cars and people is EXTREMELY challenging, and self-driving cars without human intervention are way far into the future. Not happening. Just watch the YouTube videos of Tesla’s Autopilot to see cars about to crash with the driver just barely able to recover.

      This is all absurd. People don’t need this. What they need is the basics: primarily an electric car with long range at a reasonable cost. Those are the biggest problems to solve in the auto industry, not self-driving.

      Self-driving mode is simply a feature that can be turned on and off, nothing more.

      And people really get way too geeky about this. We’ve had self-driving cars for decades. It’s called cruise control and it works well. If you’re that much of a geek that you need a car to drive you in other respects, then have fun with that. But that won’t happen for a long time yet anyway.

      1. The attraction of the autonomous car is the free time it bestows on the person being transported. A person with an hour’s commute each way gets two hours a day to read, work, or whatever.

        I agree with getting the basics right first. Assists and autonomy are for later. OBTW, cruise control isn’t really self-driving in this context.

        1. You are delusional if you think a car is going to be able to drive you and you don’t have to do anything.

          You will be mandated to be seated in the driver’s seat, paying attention to the road, and ready to take over. You will not be watching a movie or playing on your smartphone.

          True self-driving cars are still far in the future. And by then, the infrastructure and vehicles will have changed to the point where it makes sense.

          1. Yet that was the premise of the article, and Google got this response: ‘We agree with Google its (self-driving car) will not have a ‘driver’ in the traditional sense that vehicles have had drivers during the last more than one hundred years,’ NHTSA’s letter said. So maybe the regulatory agency and Google are delusional.

            1. quiviran:

              Let me go really slowly and break this down for you. This is Google PR at work. These articles you’re reading are for promotional purposes for Google to advance their cause. It’s no secret they’ve been lobbying government for a number of things.

              What Google is trying to do is get the regulatory changes required to make real self-driving a reality. This article means nothing. There are no legal changes. It’s simply something Google is trying to start with.

              And if you read what you posted, it DOES NOT in any fashion whatsoever state or imply that a self-driving car will not require a person to be in the care or control of the vehicle in question.

              If you actually understood the technology required for autonomous driving, you wouldn’t be arguing about it. You’d know that we are so far off from a real self-driving car it’s not worth talking about. You’d know that there need to be infrastructure changes to complement self-driving cars (e.g., more pedestrian overpasses so cars can flow better). You’d know that even the rules of the road would change. And so forth.

              And finally, you work off of the invalid assumption that people want self-driving cars. There is ZERO EVIDENCE that people want self-driving cars.

              What people want is a nice, efficient car. What needs to happen is innovation in battery technology, electric drivetrains, and manufacturing at scale.

              I’ve been involved with an electric car startup company and have been through this rodeo. I agree with the person who attacked us in the media by saying that electric cars are about $1 trillion in R&D away from being perfected and manufactured at scale.

              In other words, there’s a long way to go on the basics of the next generation of vehicles. Real self-driving cars are at least 20-30 years away given the requirements of advancements in technology, infrastructure changes, and regulatory changes.

          2. Perhaps a completely automated drive from endpoint to endpoint will not occur for a long time. I think, however, that in the near future, cars that can run automated on freeways or long stretches of properly equipped roads are a definite possibility. The most complex part of driving is within towns and cities, where intersections, pedestrians, and other conditions need constant reassessment. The freeway removes the vast majority of those conditions, making automating just that portion of travel much easier.

  2. Some of the technologies being used and developed depend upon being able to see the edges of the road. Out in the real world, away from Silicon Valley, many roads are poorly maintained, pockmarked with potholes, and either lack edge stripes or have badly faded ones.

    Last winter we went through a repeated cycle of hard freezes and quick thaws that opened up huge potholes on the Interstate I commute by and caused poorly done temporary patches to open up.

    I hit one of these deep holes at 70 MPH, which sheared off three of the five lugs on one of my wheels and severely damaged a tire. By the time I got to a mechanic a few miles away, the belts in the tire were shot. I got to buy a new wheel assembly and a new tire, thanks to a nasty pothole.

    Tell me how these autonomous systems avoid these damned things. Tell me how well they will do in deteriorating weather, where sleet and a roadway at freezing may produce patchy ice. Tell me how they will do in a frog-strangler rain, where you cannot see the end of your hood and the cars ahead of you disappear. Tell me how they will do when you are on a remote stretch of Interstate with no exits, a Tornado Warning is issued, and you can see the twister from the highway.

    I have been in these situations before. Working in healthcare we do not get to not respond when people need us. Sometimes that means coming early and staying on campus, but sometimes that means having to drive when most people would be absolutely terrified to do so.

    1. Well, I suppose we could ask the software designers of the F-35, who have taken endless time (five years late so far) trying to write and rewrite the software just to make the aircraft fly safely, let alone see threats and deliver weapons. Not convinced Google has superior experience or capabilities to them.

      Not convinced either that there is an easy answer to that question, for as you say, there are almost endless variations of given, and often unexpected, scenarios: no matter how many lines of code you write, there simply isn’t a chance of covering all possibilities, while failing to do so could result in disaster. It will take years of experience to get anywhere near that ability, and that doesn’t include internal error. Testing will in itself never be able to accomplish that sufficiently, as experience has long shown. So do we build separate roads for these autonomous vehicles that create ideal conditions for them to operate, or do we act as expendable test dummies for many years in the name of progress? In North Korea that would be easy to answer; not quite so easy in the West.

      1. The F-35 was not designed to fly by itself; otherwise, there would be no need for the pilot. None of today’s aircraft have automation that flies the aircraft completely by itself, making decisions along the way depending on the various factors around the flight. No serious effort has ever been made to develop a self-flying plane (in the way effort is being made to develop a self-driving car). The extent of current flight automation is the autopilot — a simple system allowing the airplane to automatically fly a path that was specified by a human. No collision warnings or avoidance (unless a separate, independent anti-collision system is installed), no autonomous decision-making (weather below minimums? seek an alternate landing field). It would in fact be much easier to develop a self-flying aircraft than a self-driving car. After all, air traffic is much safer, much more predictable, and much more regulated and controlled, with much less independent and unpredictable behaviour by pilots and aircraft. This doesn’t mean that it is impossible to make self-driving cars. In fact, I’m pretty convinced that every kind of situation an average driver can come across during a car-driving life can be accounted for and properly programmed into a car-driving AI. For those who need to drive into a tornado, there will likely be a simple manual override. After all, 99.9% of people will never drive into one.

      2. Parts of self-driving technology have been making their way into current cars: from the ability to parallel-park, to engaging the brakes when sudden obstacles appear, to staying in a lane and slowing down or speeding up to maintain position in moving traffic. And these technologies are showing that they can make these kinds of decisions more efficiently than humans. After all, humans are extremely unreliable and inconsistent when driving; there are numerous distractions out there that pretty much completely neutralise all those years of human experience — from pestering children (or spouses), to billboards (still or video), to other signage, movement, and events taking place within the driver’s field of view that require his brain to process them and discard them as not relevant to driving, all the while reducing the amount of attention given to the actual driving and relevant events. A sufficiently fast processor, coupled with sufficiently fast code, can effortlessly process all this non-relevant information, objectively, meticulously, and efficiently sorting the information by priority, analysing it, determining its effects on the driving, and making decisions based on all that data.

        The point is, humans are far from perfect, and even with all their experience, they can still be very easily distracted from the main task of assessing dangers around the driving path and planning to avoid them. AI has the potential to do a much more consistent job of it.

    2. Sounds like you expect the automated system to be better than a human driver in the same situation and completely avoid the pothole that you couldn’t avoid. Perhaps we can settle for it maintaining control of the car better than a human in that case.

      1. I could not avoid it because I was hemmed in by traffic and barriers. To the right and behind me was a truck at 70 MPH; to the left, a concrete barrier. In my lane was a pothole deep enough to swallow a chair.

        You could not see it until you were on top of it.

        My point is that automated cars will be driving under such conditions and I will remain skeptical until I see proof they can deal with roads out in Flyover country.

        Subaru EyeSight, considered to be the most effective lane-departure and automatic-braking system currently on the market, is dependent upon visual cues from cameras mounted near the mirror on the windshield. Not sure how well it can deal with old beater roads with faded lines, or fog, or black ice, or patchy slush.

        Many of us with considerable experience driving in bad weather have learned how to deal with such things, but I wonder if the technology has developed that far — especially at a cost that consumers can afford.

    1. I would venture a guess that one of the first features added to the self-driving technology is to record a ‘black box’ of every single event related to driving, especially the data regarding which part of the input control came from the AI and which from the driver, precisely to avoid the possibility of a human driver ‘fiddling with the auto-drive’, for whatever reasons. Any claim that “it is the computer’s fault” will be difficult to make when there is a complete and detailed record of every aspect of the driving leading up to the event.

  3. It would be more cost-effective and safe to outsource the driving of these cars to “remote driving centers” in India. The cars would be driven like drones are remotely piloted. Liberal drones inside the cars could read Twitter and express their love for Sanders instead of driving, boring!

  4. Wait, so you guys didn’t know the test versions of the self-driving cars have already been on US roads for a few years now? They’ve clocked hundreds of thousands of miles and apparently have been involved in less than a handful of accidents, none apparently the fault of the Google car.

  5. I want to see how this is going to work legally… What if there’s contributory negligence by the Google mobile?

    The legal questions of liability in an accident are myriad. Are the car manufacturers opening themselves up to massive amounts of liability?

    Instead of suing the “driver” when there is none, can an injured party sue the car manufacturer and/or Google under the strict liability doctrine, since it would be a product failure, NOT human error?

    I’m not convinced that this has been well thought-through. The legal system will probably be left to deal with figuring it out.

    If it is product liability, the car manufacturers and Google have opened themselves up to massive exposure under this scenario.
