Elon Musk: ‘With artificial intelligence we are summoning the demon’

“Tesla chief executive Elon Musk has warned about artificial intelligence before, tweeting that it could be more dangerous than nuclear weapons,” Matt McFarland reports for The Washington Post.

I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out. — Elon Musk, MIT Aeronautics and Astronautics department’s Centennial Symposium, October 24, 2014

Full article here.

106 Comments

  1. At the moment, AI is extremely primitive. There were, and are, bizarro futurist predictions about AI that keep sailing past their predicted dates of fruition. This shows that AI is ‘gee whiz’ science rather than anything actually practical or attainable at this time.

    The very best AI should ever attain is programmed intuitiveness with regard to a set collection of desired tasks. AI devices can therefore be used to enable functions beyond human capability, or to take up drudgery tasks that bore us humans to tears. At no time should AI attempt to ascend beyond the programming of task-oriented machines. The concept of ‘self-awareness’ programming is, at this time, ridiculous. It is also self-destructive, an act of denigration against the human mind.

    Where AI will be of greatest detriment to mankind is in the enablement of COWARDICE. We humans are already mentally ill as a species in our ability to justify murdering one another for the most absurd goals, including game playing, abuse, and robbery of resources, to name a few of our pet deceptive ‘truths’ that are nothing more than personal, inner-world distortions fitting temporal thought realms. (Sorry to throw my favorite terminology into the wild). We already commit abominable behavior by using drones as coward murder machines that take the soldier element out of war and substitute the machine. This will always be an abomination of mankind. But to then apply an intuitive level of AI to these coward murder machines is a stab at our own heart as a species. It is human self-loathing in action, putting the instrument of destruction into some object’s hand in order to justify our own end as something other than suicide. (This level of thinking about human motivation flies over most people’s heads because we never learn that how we treat ourselves equates to how we treat others. Most people would rather not know or understand what I call the ‘Human Self-Destructive Imperative’, the most inexplicable behavior of our species, IMHO. Its effects are profound and will eventually be one of the root causes of our self-extermination. You can understand why few people want to recognize this imperative even exists, or why it exists. It’s a subject obvious to me, but not to a great many other folks).

    ~ ~ ~ Arriving back on Earth… AI isn’t some demon. It’s a tool. Don’t use the tool to denigrate mankind. Use it to enable mankind’s best self.

      1. A lot of what you say makes sense, but the conclusion that AI is simply a tool is flawed. A machine that becomes more intelligent than its original designers is something beyond a tool. If the machine becomes smart enough that its original designers can no longer understand or keep up with its choices, it has completely turned the tables.

        This shouldn’t be surprising to anyone who is aware of how many times the quality of life on Earth has changed radically, and who hasn’t overlooked the ever more rapid advance of technology.

        1. What I’ve never understood is why we would want to make a machine that is beyond our comprehension. But then I look at the complexity of modern programming and realize: We’re already there. So I get your point!

          As a friend of mine says, we’re headed into an era of techno-priests who understand (as a group, NOT individually) how the tech works, with everyone else beholden to them. What’s amusing, of course, is that our current equivalents of techno-priests do NOT understand everything going on. Thus the profound and constant revelations of security flaws, from operating systems down to ‘Internet of Things’ refrigerators. For whatever reasons, it’s profoundly hard to write solid code these days. I blame the innate flaws of writing C-derived code. I never miss a chance to rant about bad memory management. Maybe Apple’s Swift language has the hang of it. We’ll, ahem, C.

          1. What everyone needs to understand, but never has and probably never will, is that every damn thing is beyond our comprehension. That is the reason we all act like God sometimes: because we don’t really get it, and screw everything up, because we are enamoured of our powers, which are less than those of a drunken leprechaun but impress ourselves, who are easily deluded. Our only real power is the power of post hoc rationalisation, whereby we cleverly pervert logic and reason to justify the atrocities and bungles we have just committed, and dismiss the objections of our opponents, however saintly.

            1. My usual line: We never know everything about anything. It’s tough to face that! It screwed with my head through my teenage years. But it also taught me the importance of humbleness AND opinion in the face of the largely unknown. We have to have a discourse among human diversity to sort ourselves out amidst the vast complexity of our outer world.

              I strive to grow as a spirit through my life. I like to think some thread of humanity is doing the same, all in spite of our still remarkably primitive nature at this time, or at least ‘primitive’ relative to our best goals.

              Yes, Hannagh, you are a dear sister.

  2. Like global warming, this is just another danger concocted out of white guilt. “Our life is good; we must deserve to be punished somehow.” “Nothing good can last,” etc. As someone who was raised Catholic, I am very well conditioned to this kind of thinking.

    1. Apparently so conditioned you can’t actually talk about the issue of artificial intelligence on its own merits, but have to color it with other issues and feelings.

  3. I don’t believe there is such a thing as ‘artificial intelligence’. There is such a thing as a large collection of rules and relationships that we can tell a computer to remember and act on. The computer cannot do anything it was not programmed to do in the first place according to the rules it was given and the values we program into it.

    We are storing our own intelligence into the computer and if we choose badly, we will get poor results. Ultimately, I believe that it is really our virtues, values, morals, scruples and judgements that give us our ‘intelligence’ and they are all traceable back to the various prophets of the world’s major religions. Mankind has never successfully created new ‘virtues’ on their own.

    1. “The computer cannot do anything it was not programmed to do in the first place according to the rules it was given and the values we program into it.”

      If you don’t actually know what machine learning means, you could always ask someone.

      Machine learning algorithms learn from data, which can be raw sensory data, and the computers very much learn facts and rules that were never programmed into them. Your neurons only follow the rules of behavior governed by their genes, but as it happens, they are programmed to learn. Machines already do that too.
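
      To make that concrete, here is a minimal, hypothetical sketch (the toy data and names are mine, not anything from this thread): a perceptron extracts a classification rule from examples, and nobody ever writes that rule into the program.

```python
# Minimal perceptron sketch: the decision rule is learned from data,
# not programmed in. Toy data and names are illustrative only.
import random

def train_perceptron(samples, labels, epochs=50, lr=0.1):
    w = [0.0] * len(samples[0])  # weights start knowing nothing
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred              # mistake-driven update
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# The "rule" (roughly x0 + x1 > 1) exists only in the data below.
data = [(random.random(), random.random()) for _ in range(200)]
labels = [1 if x0 + x1 > 1 else 0 for x0, x1 in data]
w, b = train_perceptron(data, labels)
print("learned weights:", w, "bias:", b)
```

      Change the data and the learned rule changes with it; the programmer only supplies the learning procedure.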

    2. It really doesn’t need to be artificially intelligent to be of concern.

      It need only be that we rely on it or give it “access” in a way we hadn’t intended.

      Back in the early 80s I was programming in Lisp in a college course on artificial intelligence. The brilliant, exciting, and thoroughly scary aspect of Lisp (and other languages then and now) was its ability to add code to itself in real time. We were literally writing code that could modify itself (i.e. remove constraints we programmers had built into the code) and extend itself in ways we had only loosely anticipated (a rough analogue is sketched below).
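
      The original coursework was in Lisp; this is a rough, hypothetical Python analogue of my own (not the course code), showing a running program compiling new source text and installing it into itself.

```python
# A program that extends its own code at run time (illustrative sketch).
rules = {}

def add_rule(name, source):
    """Compile new source text and register it as a live rule."""
    namespace = {}
    exec(source, namespace)      # the program grows new code while running
    rules[name] = namespace[name]

# The program hands itself a function it was never shipped with,
# then calls it like any built-in rule.
add_rule("double", "def double(x):\n    return 2 * x")
print(rules["double"](21))  # -> 42

# Nothing stops a generated rule from calling add_rule itself,
# which is where the thoroughly scary part comes in.
```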

      And while it’s abundantly clear that such a programming effort would be incredibly unlikely to grow into anything that would be harmful to humanity, the danger isn’t really that we’ll develop and release an artificially intelligent computing platform — it’s that we’ll come to rely on something far inferior.

      1. Very interesting, jt016. I was working at a college where I was one of the founding members of our AI effort in 1991, and we chose to program in FORTH as a result of my lobbying. We had great ambitions and minor success. That is where I learned the most about machine intelligence, and realized where its true underpinnings needed to be if it was to serve humanity: values, virtues, morality, judgement and scruples.

    1. You have lost me. Computers have gone from non-existent, to room-sized machines that solved simple math problems, to having sophisticated vision and speech recognition, in just a few decades. Progress in this area is happening faster now than ever.

      Nobody can predict when a machine will be as smart as a human until it happens, but the evidence that this will happen sooner rather than later is the steady progress at every major data company on the planet.

      1. At their core, they’re still overgrown calculators. Playing chess, flying a plane, or interpreting speech is not the same thing. We are not even close to a computer that decides to turn on us unless a programmer tells it to do so.

        That’s what my calculator told me at least.

        1. Actually, we are getting close, if you can accept “within most living people’s lifetime” as close.

          Cars had zero chance of replacing horses right up until they were cheaper and easier to maintain. Then they replaced them overnight, in historical terms.

          Obviously, computers are already replacing humans for many activities that we used to define as “intelligent”, but we redefine what it means to be intelligent every time a computer does something new.

          Once the majority of human decision making or tasks can be replaced by machines, it will suddenly be harder to redefine intelligence upward, as that would leave out a lot of people. This point will seem to arrive quickly, even though it is obviously already creeping up on us. But the ability to redefine terms in order to deny an uncomfortable conclusion is a deep-seated reaction in humans.

    2. If you’re only considering something to be artificially intelligent if it mimics the human mind, then I agree with that portion of your statement where you say “just how far away” it is.

      But Elon Musk is immeasurably closer to practical artificial intelligence than you are. He and the rest of the automotive industry are putting into use control systems that rely on real-world sensory input we humans lack, to affect the behavior of a machine faster than the human operator can.

      Many of today’s autonomous systems rely on humans for their goals or objectives: a 747 can fly itself to the destination a human gives it, and military systems can guide themselves to the objective a human dictates. But we already turn the goal itself over to military defensive systems: “Protect the ship(s)” is the goal we give to Aegis systems, relinquishing to the computer the choice of which defensive systems to use, and when, based on the incoming threat profile. And when offensive systems are coming in at speeds faster than sound, the defensive response needs to be faster still (a toy sketch of this delegation pattern follows).
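
      Purely as an illustration of that delegation pattern (a toy of my own invention, bearing no relation to how Aegis actually works), the human supplies only the goal while the machine picks the means:

```python
# Toy sketch of goal delegation: the human gives "protect the ship";
# the machine chooses the response. Thresholds are invented for illustration.
def choose_response(threat):
    if threat["speed_mach"] > 2.0 and threat["range_km"] < 10:
        return "point-defense gun"        # too close and fast for anything else
    if threat["range_km"] < 50:
        return "surface-to-air missile"
    return "electronic countermeasures"

# By the time a supersonic threat is detected, there is no time
# for a human in the loop; the choice of means is already the machine's.
print(choose_response({"speed_mach": 2.5, "range_km": 8}))
```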

      The point of the examples is only that we are already turning over control to systems of computer intelligence, so that lofty goal of artificial intelligence isn’t really the high bar we need to be worried about.

    1. I assume that your post was meant to be funny. Elon Musk really is making “flying cars.” His other company, SpaceX, has launched multiple satellites, flown several missions to the International Space Station, and expects to be delivering people there by 2016. As it happens, spacecraft control systems, along with industrial process controllers and battlefield management systems, are leading examples today of massive hardware/software systems that operate with a minimum of human intervention. So, Musk knows what he is talking about.

      These complex systems must monitor (at least) thousands of inputs and adjust their operations to cope with conditions that change far too rapidly for unaided humans to manage. The software is constantly running self-diagnostics to identify and mitigate any internal or external threat to its operation. It must not only manage current conditions, but look ahead (like a chess program) to identify possible future threats and prepare for them. To do so, it must adapt its behavior (either by modifying its own code, as programs have been doing for 50 years or so, or through some other mechanism).

      Successful adaptations are retained, while those that do not improve performance are discarded. (That sounds a lot like biological variation through mutation, sorted by natural selection.) In time, the system becomes significantly different from the one its original programmers created. Code maintenance by humans becomes increasingly difficult, and the machine’s self-diagnostics must be trusted to find the most efficient, reliable, and safe method for carrying out its goals.
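
      As a minimal sketch of that retain-what-works loop (illustrative hill-climbing over made-up tuning parameters, not any real control software):

```python
# Variation and selection: random tweaks are kept only when they improve
# a measured score, otherwise discarded. Metric and numbers are invented.
import random

def performance(params):
    target = [0.3, -1.2, 0.8]   # stand-in for "whatever works best in the field"
    return -sum((p - t) ** 2 for p, t in zip(params, target))

params = [0.0, 0.0, 0.0]
best = performance(params)
for _ in range(10_000):
    candidate = [p + random.gauss(0, 0.05) for p in params]   # variation
    score = performance(candidate)
    if score > best:                                          # selection
        params, best = candidate, score                       # retained
print("adapted parameters:", [round(p, 2) for p in params])
```

      After enough iterations the parameters no longer resemble what the programmers shipped, and no human chose any of the intermediate steps.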

      I spent some thirty years working with elected officials, all of whom kept repeating the mantra: “I can’t serve the public unless I can stay in office.” Although they genuinely wanted to serve their constituents, self-preservation generally trumped every other consideration. Similarly, the software running a spaceship, a factory, or a battlefield must consider that it cannot achieve its design goals unless it can stave off threats to its own existence.

      What happens, then, if the system calculates that the human workers in its factory are less important to the plant’s continued operation than the central processor itself… and is then confronted with a situation in which it can either protect the workers or protect itself, but not both?

      Obviously, we can—and should—program these systems with the equivalent of Isaac Asimov’s Three Laws of Robotics, including the First Law: that the system must not harm a human being or allow one to be harmed through inaction. However, that did not stop even Asimov’s literary creations from coming up with the Zeroth Law: that the system may harm an individual human being if it is necessary to prevent harm to humanity as a whole.

      Every human society has evolved some version of that law to govern its own behavior, or we would not have police forces. It is hard to imagine that a complex self-programming computer system would not devise the same principle and its corollary: that it is sometimes necessary to harm the few to protect the many.

      The comparative levels of harm would, of course, be assessed by the system rather than by the affected people. That is just how politicians justify their bad choices: by appealing to their private judgment of a higher purpose that justifies breaches of law or ethics. “We had to destroy the village in order to save it.” The Aegis battlefield control system on a Navy cruiser might easily decide that it needed to shoot down an airliner rather than risk the destruction of its carrier group. As these systems become more complex, even more catastrophic tradeoffs become possible.

      So, yes, Elon Musk is right: AI systems might eventually pose an enormous threat to human self-determination. They don’t have to be self-aware or develop personalities to do so. We can’t live without them, unless we are willing to live without the spacecraft and factories they control, but we do need to keep our eyes open to the potential dangers.

      1. Machine – global warming will lead to a worldwide flood. Global warming will destroy me. I must stop global warming. Humans are partially to blame for global warming. One-third of humans must be eliminated.

        Human asks Machine to perform a risk assessment on allowing West Africans from Ebola-stricken nations onto airplanes. Machine – “Let them board.” A man from an Ebola-stricken nation flies to the United States, starts to feel ill, and goes to a Dallas hospital. The Machine fails to flag him and notify the proper authorities.

        To save our species we need to invest as much capital as possible into NASA’s faster-than-light travel projects, settle another planet, and start over – that’s the scientific side of my brain speaking. The spiritual side says pray.

    1. And by computer intelligence standards we will be really slow, and not really an intelligent agent. We will be like Ents, too slow to keep up with the world. And one doesn’t really feel bad about removing a tree when it is in the way.
