Elon Musk: ‘With artificial intelligence we are summoning the demon’

“Tesla chief executive Elon Musk has warned about artificial intelligence before, tweeting that it could be more dangerous than nuclear weapons,” Matt McFarland reports for The Washington Post.

I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out. — Elon Musk, MIT Aeronautics and Astronautics department’s Centennial Symposium, October 24 2014

Full article here.

106 Comments

  1. Musk is right for these reasons:

    Intelligence is a scale. The lowest form of intelligence is the binary distinction – 0 or 1 – and the ability to recognize the difference between the two. All complexity of intelligence arises from that basic concept.

    Artificial intelligence can be defined as the ability of a created mechanism to recognize differences.

    Artificial intelligence becomes potentially detrimental when it can act (or choose not to act) on a difference in the physical world – including virtual worlds, which must themselves exist in the physical world and which, through programming and mechanism, may interact with it.

    We can already build a simple machine that can harm without a human responsible for the trigger – a motion sensor hooked to a gun, for example. That qualifies as a weapon, and because it acts on an if/then statement it has a certain level of ‘intelligence’ and can clearly cause harm in an autonomous fashion. Picture a hunter’s motion camera attached via an actuator to a trigger.
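    To make the point concrete, here is a minimal sketch of that kind of autonomous if/then trigger. Everything in it (the threshold value, the function and variable names) is a hypothetical illustration, not a real device’s firmware:

    ```python
    # Hypothetical sketch of the motion-sensor weapon described above:
    # a reading crossing a threshold fires an action with no human in
    # the loop. All names and values are illustrative assumptions.

    MOTION_THRESHOLD = 0.5  # assumed sensor calibration value

    def autonomous_trigger(motion_reading: float) -> bool:
        """Return True (actuate the trigger) when motion exceeds the threshold.

        The entire 'intelligence' here is one if/then comparison,
        yet the decision to act is made with no human involved.
        """
        return motion_reading > MOTION_THRESHOLD

    # A stream of sensor readings; only some exceed the threshold.
    readings = [0.1, 0.2, 0.9, 0.4, 0.7]
    fired = [autonomous_trigger(r) for r in readings]
    print(fired)  # [False, False, True, False, True]
    ```

    The danger the comment describes is not the sophistication of the logic – it is trivial – but that the act/don’t-act decision is fully delegated to the mechanism.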

    Currently, that danger is notably limited because:
    1. The gun cannot load itself.
    2. Its supply of ammo and energy is not infinite.
    3. It has no mechanism for renewal and self-repair.
    4. It has no ability to adapt itself.
    5. It cannot replicate itself.
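    The five limits above can be sketched as a toy capability checklist – a way of asking how many of the limiting assumptions a given machine violates. The field names and the example machines are assumptions for illustration only, not an established taxonomy:

    ```python
    # Toy model of the five limits listed above: each field marks one
    # limit the machine has escaped. Names are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class MachineCapabilities:
        self_loading: bool = False      # limit 1 escaped: reloads itself
        unlimited_supply: bool = False  # limit 2 escaped: ammo/energy
        self_repair: bool = False       # limit 3 escaped: renewal/repair
        self_adaptation: bool = False   # limit 4 escaped: adapts itself
        self_replication: bool = False  # limit 5 escaped: copies itself

    def limits_escaped(m: MachineCapabilities) -> int:
        """Count how many of the five limits no longer constrain the machine."""
        return sum([m.self_loading, m.unlimited_supply, m.self_repair,
                    m.self_adaptation, m.self_replication])

    trap_gun = MachineCapabilities()  # the motion-camera gun: all limits hold
    print(limits_escaped(trap_gun))   # 0

    replicator = MachineCapabilities(self_repair=True, self_replication=True)
    print(limits_escaped(replicator))  # 2
    ```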

    Hence, there is validity in creating a globally recognized set of rules that all should obey:

    Do not create a machine that can create machines other than the one it was created to make (and do not create a machine that can create itself).
    Do not create machines without reasonable power limits (or with the ability to adapt or exceed those limits).
    Do not create machines that have the potential to exceed their programming.
    Do not create machines that have the ability to increase their own complexity.

  2. Human beings have proved over and over again that they are neither smart enough nor careful enough to manage technology as potentially deadly as artificial intelligence. It is remarkable that some human beings are smart enough to invent things like atomic energy and artificial intelligence, but these are not the people who make policy decisions regarding the use of such technology. Most people—including most decision makers throughout the world—are self-absorbed idiots who think all they have to do is say “mistakes were made” when they screw up. I’m with Elon Musk 100%.
