Stephen Hawking: A.I. could be ‘worst event in the history of our civilization’

“The emergence of artificial intelligence could be the ‘worst event in the history of our civilization’ unless society finds a way to control its development, high-profile physicist Stephen Hawking said Monday,” Arjun Kharpal reports for CNBC. “He made the comments during a talk at the Web Summit technology conference in Lisbon, Portugal, in which he said, ‘computers can, in theory, emulate human intelligence, and exceed it.'”

“He admitted the future was uncertain,” Kharpal reports. “‘Success in creating effective AI, could be the biggest event in the history of our civilization. Or the worst. We just don’t know. So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it,’ Hawking said during the speech. ‘Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy.'”

Kharpal reports, “Hawking explained that to avoid this potential reality, creators of AI need to ‘employ best practice and effective management.'”

MacDailyNews Take: Good luck with that. Have you ever visited this planet?

“It’s not the first time the British physicist has warned on the dangers of AI. And he joins a chorus of other major voices in science and technology to speak about their concerns,” Kharpal reports. “Tesla and SpaceX CEO Elon Musk recently said that AI could cause a third world war, and even proposed that humans must merge with machines in order to remain relevant in the future.”

Read more in the full article here.

MacDailyNews Take: Alright, for the heck of it, let’s try to stay optimistic:

Asimov’s Three Laws of Robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
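
The Laws amount to a strict priority ordering: each law applies only when no higher-ranked law is violated. Purely for fun, here is a minimal Python sketch of that precedence; the Action fields and the permitted function are hypothetical illustrations, not anything from Asimov or the article:

```python
from dataclasses import dataclass

@dataclass
class Action:
    # Hypothetical attributes, for illustration only.
    harms_human: bool = False             # would injure a human (First Law)
    allows_harm_by_inaction: bool = False # would let a human come to harm (First Law)
    ordered_by_human: bool = False        # a human has ordered it (Second Law)
    endangers_robot: bool = False         # risks the robot itself (Third Law)

def permitted(action: Action) -> bool:
    """Evaluate an action against the Three Laws, in strict priority order."""
    # First Law: never injure a human, or allow harm through inaction.
    if action.harms_human or action.allows_harm_by_inaction:
        return False
    # Second Law: obey human orders that don't conflict with the First Law.
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the first two Laws.
    return not action.endangers_robot

# A human order to do something harmless is obeyed even at risk to the robot:
print(permitted(Action(ordered_by_human=True, endangers_robot=True)))  # True
```

Note how the early returns encode the hierarchy: a First Law violation ends the evaluation before obedience or self-preservation is ever considered.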

• A powerful AI system tasked with ensuring your safety might imprison you at home. If you asked for happiness, it might hook you up to life support and ceaselessly stimulate your brain’s pleasure centers. If you don’t provide the AI with a very big library of preferred behaviors or an ironclad means for it to deduce what behavior you prefer, you’ll be stuck with whatever it comes up with. And since it’s a highly complex system, you may never understand it well enough to make sure you’ve got it right. – James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era

• You realize that there is no free will in what we create with AI. Everything functions within rules and parameters. – Clyde DeSouza, MAYA

• But on the question of whether the robots will eventually take over, [Rodney A. Brooks] says that this will probably not happen, for a variety of reasons. First, no one is going to accidentally build a robot that wants to rule the world. He says that creating a robot that can suddenly take over is like someone accidentally building a 747 jetliner. Plus, there will be plenty of time to stop this from happening. Before someone builds a “super-bad robot,” someone has to build a “mildly bad robot,” and before that a “not-so-bad robot.” – Michio Kaku, The Future of the Mind: The Scientific Quest to Understand, Enhance, and Empower the Mind

SEE ALSO:
Protesters stage anti-artificial intelligence rally at SXSW – March 15, 2015
