The Dark Secret of Artificial Intelligence: No one really knows how the most advanced algorithms do what they do

“Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey. The experimental vehicle, developed by researchers at the chip maker Nvidia, didn’t look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence,” Will Knight writes for MIT Technology Review. “The car didn’t follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.”

“Getting a car to drive this way was an impressive feat. But it’s also a bit unsettling, since it isn’t completely clear how the car makes its decisions,” Knight writes. “Information from the vehicle’s sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems. The result seems to match the responses you’d expect from a human driver. But what if one day it did something unexpected—crashed into a tree, or sat at a green light? As things stand now, it might be difficult to find out why. The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action. And you can’t ask it: there is no obvious way to design such a system so that it could always explain why it did what it did.”

“The mysterious mind of this vehicle points to a looming issue with artificial intelligence. The car’s underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries,” Knight writes. “But this won’t happen — or shouldn’t happen — unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users… ‘It is a problem that is already relevant, and it’s going to be much more relevant in the future,’ says Tommi Jaakkola, a professor at MIT who works on applications of machine learning. ‘Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.'”
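To make the article's point concrete, here is a minimal sketch, in PyTorch, of the kind of end-to-end "learn by watching a human" setup Knight describes: camera frames go in, a steering command comes out, and the only training signal is what the human driver actually did. Every layer size, variable name, and number below is an illustrative assumption, not Nvidia's actual architecture.

    # Minimal sketch (not Nvidia's system; all sizes and names are invented)
    # of the end-to-end setup the article describes: camera frames go in,
    # a steering command comes out, and training imitates a human driver.
    import torch
    import torch.nn as nn

    class DrivingNet(nn.Module):
        def __init__(self):
            super().__init__()
            # Convolutions extract visual features from the camera frame.
            self.features = nn.Sequential(
                nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(24, 48, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(48, 64, kernel_size=3, stride=2), nn.ReLU(),
                nn.Flatten(),
            )
            # Fully connected layers turn those features into one number:
            # the steering angle. 8832 = 64 * 6 * 23 for a 66x200 input.
            self.head = nn.Sequential(
                nn.Linear(8832, 100), nn.ReLU(),
                nn.Linear(100, 1),
            )

        def forward(self, frame):
            return self.head(self.features(frame))

    model = DrivingNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()

    # One hypothetical training step: the only "instruction" the car ever
    # gets is the steering angle a human chose for each recorded frame.
    frames = torch.randn(8, 3, 66, 200)    # batch of dummy camera frames
    human_steering = torch.randn(8, 1)     # what the human driver did
    loss = loss_fn(model(frames), human_steering)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

After training, everything such a network "knows" is spread across its numeric weights; there is no rule anywhere that an engineer could point to, which is exactly the opacity the researchers quoted above are worried about.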

Much more in the full article – highly recommended here.

MacDailyNews Take: Oh, relax. What could possibly go wrong?

9 Comments

  1. Tldr – we need to trust other intelligences, human or otherwise. Trust is not a function of rational thought, but of communication established through social protocols.

  2. You kids and your fancy terms like “AI” and “neural networks”. In my day we called it spaghetti code and pitied the poor SOB who had to maintain it after the guy who wrote the crap got fired.

    1. The problem here isn’t a human programmer who is writing spaghetti code or failing to comment it. It is new code (or the equivalent thereof) that is being written by the program itself as it induces or deduces new heuristic rules from its interactions with the environment.

      Examples: “Human driver stops when the intersection has an octagonal sign, so I should too.” “The last time I entered a curve on wet pavement I went into a skid, so I should slow down this time.”

      When I was learning programming in the 1960s, academic computer scientists were even more horrified by self-modifying code than by the dreaded GOTO statement. Their preferred languages like Algol, Pascal, and Modula were designed to make it almost impossible. Languages like LISP and FORTH that made writing self-modifying programs easy were more fun, but the results were often impossible to debug.

      That is the problem here: not that the human-written code is unclear, but that the heuristics the computer evolves for itself will be nearly impossible to read after they cause a (literal) crash. (A toy illustration of the difference follows the comments below.)

  3. At SOME point, something with "artificial intelligence" (a contradiction in terms?) will, for "its" own reasons, kill one or many humans or animals, and we are supposed to accept it just because the damn thing is a more highly developed robot than its predecessor?
    Really? Accept it as a normal part of life that we have to tolerate because that is the machine’s “culture”?

    Really?
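As a toy contrast for the induced-heuristics point made in the comments above (all numbers invented for illustration): a hand-written rule can be read and audited line by line, while an induced rule is just tuned weights that answer "what" but never "why".

    # Toy contrast (all numbers invented) between a rule a human wrote down
    # and the kind of rule a system induces for itself from experience.
    import math

    # (a) Hand-written heuristic: readable, auditable, explainable.
    def should_stop(sign_is_octagonal: bool) -> bool:
        return sign_is_octagonal   # "octagonal sign => stop"

    # (b) Induced heuristic: nothing but tuned numbers. In a real system
    # there are millions of them, and none corresponds to a sentence like
    # "slow down on wet pavement".
    learned_weights = [0.73, -1.18, 2.04, 0.06]

    def should_slow_down(features):
        score = sum(w * x for w, x in zip(learned_weights, features))
        return 1.0 / (1.0 + math.exp(-score)) > 0.5   # logistic threshold

    print(should_stop(True))                          # True, and we know why
    print(should_slow_down([0.9, 0.2, 0.4, 0.7]))     # True, but why, exactly?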
