Stephen Hawking: A.I. could be ‘worst event in the history of our civilization’

“The emergence of artificial intelligence could be the ‘worst event in the history of our civilization’ unless society finds a way to control its development, high-profile physicist Stephen Hawking said Monday,” Arjun Kharpal reports for CNBC. “He made the comments during a talk at the Web Summit technology conference in Lisbon, Portugal, in which he said, ‘computers can, in theory, emulate human intelligence, and exceed it.'”

“He admitted the future was uncertain,” Kharpal reports. “‘Success in creating effective AI, could be the biggest event in the history of our civilization. Or the worst. We just don’t know. So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it,’ Hawking said during the speech. ‘Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy.'”

Kharpal reports, “Hawking explained that to avoid this potential reality, creators of AI need to ’employ best practice and effective management.'”

MacDailyNews Take: Good luck with that. Have you ever visited this planet?

“It’s not the first time the British physicist has warned on the dangers of AI. And he joins a chorus of other major voices in science and technology to speak about their concerns,” Kharpal reports. “Tesla and SpaceX CEO Elon Musk recently said that AI could cause a third world war, and even proposed that humans must merge with machines in order to remain relevant in the future.”

Read more in the full article here.

MacDailyNews Take: Alright, for the heck of it, let’s try to stay optimistic:

Asimov’s Three Laws of Robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

• A powerful AI system tasked with ensuring your safety might imprison you at home. If you asked for happiness, it might hook you up to life support and ceaselessly stimulate your brain’s pleasure centers. If you don’t provide the AI with a very big library of preferred behaviors or an ironclad means for it to deduce what behavior you prefer, you’ll be stuck with whatever it comes up with. And since it’s a highly complex system, you may never understand it well enough to make sure you’ve got it right. – James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era

• You realize that there is no free will in what we create with AI. Everything functions within rules and parameters. – Clyde DeSouza, MAYA

• But on the question of whether the robots will eventually take over, [Rodney A. Brooks] says that this will probably not happen, for a variety of reasons. First, no one is going to accidentally build a robot that wants to rule the world. He says that creating a robot that can suddenly take over is like someone accidentally building a 747 jetliner. Plus, there will be plenty of time to stop this from happening. Before someone builds a “super-bad robot,” someone has to build a “mildly bad robot,” and before that a “not-so-bad robot.” – Michio Kaku, The Future of the Mind: The Scientific Quest to Understand, Enhance, and Empower the Mind

Protesters stage anti-artificial intelligence rally at SXSW – March 15, 2015


  1. What is interesting to me is how close intelligent systems, robotics, and 3D printing will bring us to a post-scarcity society. In a post-scarcity society, all of our bickering ends because scarcity is diminished to the point of being negligible. Suddenly capitalism vs. socialism means nothing. Both are replaced by a technologically advanced society where the production of all human needs is handled by machines.

    The problem here becomes what drives human beings. I have to get my ass off to a job today, but I would much rather sit around thinking about this, tinkering with neural networks.

    In this brave new world, what jobs will people do? Why will people get up in the morning?

    In the most successful societies, people are motivated by the pursuit of income in order to pay for their existence. They sell their services and in return receive money to purchase goods and services from others.

    What happens when all of that becomes effectively free? What happens when material need no longer exists?

      1. When anything disappears, there is obviously a difference, so the more precise question has to be: What entity would “care” — experience a significantly different state of being — if humans went extinct? It wouldn’t matter to us anymore, because we’d be gone. But it might matter very much to other life forms, or to God, or to the Universe itself, for any number of reasons. Any such possibility connects us to them, and those connections form our meanings.

    1. Outside of your first-world gated community, there is a planet with 5 or 6 billion people who suffer from resource scarcity every day. AI will not resolve this. AI, designed by first-world capitalists for first-world capitalists, will predictably ignore the needs of the unwashed masses. That’s what Wall Street and its zealots do.

      Interesting that someone here is claiming to be a Christian and proposing that, despite what his good book says, he has doubt that his overlord really wants the most capable life form on Earth to persist or to steward a healthy planet for all life. If nothing matters, then you might as well quit everything in despair right now. Conversely, if you claim to be a Christian and you lead your life as a greedhead, harming everyone to maximize your wealth in your mortal existence, you would have to be delusional to think your blonde-haired, blue-eyed Jesus is going to save your imaginary soul. Nobody knows. I propose that improving quality of life in perpetuity for all living creatures, and not profit maximization or religious/political domination, is the best goal for all. Too bad the regular trolls on this site only care about themselves and their partisan politics.

  2. Never been impressed by Mr. Hawking’s vision or predictions. I don’t care how big his brain is. For someone who doesn’t or didn’t believe in God, then supposedly said he does, I have to question his “authority” over mankind, earth, and space. In a hundred years I would bet the next genius will disprove most of what he has said or written. Science cannot explain even 10% of the human brain, mankind, or the Universe, so I’ll pass until they can explain over 50%. As Alan Parsons wrote in a song, “It’s all psychobabble to me.”
