Protesters stage anti-artificial intelligence rally at SXSW

“‘I say robot, you say no-bot!’ The chant reverberated through the air near the entrance to the SXSW tech and entertainment festival here,” Jon Swartz reports for USA Today. “About two dozen protesters, led by a computer engineer, echoed that sentiment in their movement against artificial intelligence. ‘This is about morality in computing,’ said Adam Mason, 23, who organized the protest.”

“Signs at the scene reflected the mood. ‘Stop the Robots.’ ‘Humans are the future,’” Swartz reports. “The dangers of more developed artificial intelligence, which is still in its early stages, have created some debate in the scientific community. Tesla founder Elon Musk donated $10 million to the Future of Life Institute because of his fears. Stephen Hawking and others have contributed to the proverbial wave of AI paranoia with dire predictions of its risk to humanity.”

Swartz reports, “As nonplussed witnesses wandered by, another chant went up. ‘A-I, say goodbye.’”

Read more in the full article here.

MacDailyNews Take: We can do this in four quotes (really, we could do it in one, thanks to Asimov, but we like all of these because together they read like a conversation that could have been overheard at the rally):

• A powerful AI system tasked with ensuring your safety might imprison you at home. If you asked for happiness, it might hook you up to a life support and ceaselessly stimulate your brain’s pleasure centers. If you don’t provide the AI with a very big library of preferred behaviors or an ironclad means for it to deduce what behavior you prefer, you’ll be stuck with whatever it comes up with. And since it’s a highly complex system, you may never understand it well enough to make sure you’ve got it right. — James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era

• You realize that there is no free will in what we create with AI. Everything functions within rules and parameters. — Clyde DeSouza, MAYA

• A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. — Isaac Asimov, I, Robot (The Robot Series)

• But on the question of whether the robots will eventually take over, [Rodney A. Brooks] says that this will probably not happen, for a variety of reasons. First, no one is going to accidentally build a robot that wants to rule the world. He says that creating a robot that can suddenly take over is like someone accidentally building a 747 jetliner. Plus, there will be plenty of time to stop this from happening. Before someone builds a “super-bad robot,” someone has to build a “mildly bad robot,” and before that a “not-so-bad robot.” — Michio Kaku, The Future of the Mind: The Scientific Quest to Understand, Enhance, and Empower the Mind
