Protesters stage anti-artificial intelligence rally at SXSW

“‘I say robot, you say no-bot!’ The chant reverberated through the air near the entrance to the SXSW tech and entertainment festival here,” Jon Swartz reports for USA Today. “About two dozen protesters, led by a computer engineer, echoed that sentiment in their movement against artificial intelligence. ‘This is about morality in computing,’ said Adam Mason, 23, who organized the protest.”

“Signs at the scene reflected the mood. ‘Stop the Robots.’ ‘Humans are the future,’” Swartz reports. “The dangers of more developed artificial intelligence, which is still in its early stages, have created some debate in the scientific community. Tesla founder Elon Musk donated $10 million to the Future of Life Institute because of his fears. Stephen Hawking and others have contributed to the proverbial wave of AI paranoia with dire predictions of its risk to humanity.”

Swartz reports, “As nonplussed witnesses wandered by, another chant went up. ‘A-I, say goodbye.’”

Read more in the full article here.

MacDailyNews Take: We can do this in four quotes (really, we could do it in one, thanks to Asimov, but we like all of these because it reads like a conversation that could have been overheard at the rally):

• A powerful AI system tasked with ensuring your safety might imprison you at home. If you asked for happiness, it might hook you up to life support and ceaselessly stimulate your brain’s pleasure centers. If you don’t provide the AI with a very big library of preferred behaviors or an ironclad means for it to deduce what behavior you prefer, you’ll be stuck with whatever it comes up with. And since it’s a highly complex system, you may never understand it well enough to make sure you’ve got it right. — James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era

• You realize that there is no free will in what we create with AI. Everything functions within rules and parameters. — Clyde DeSouza, MAYA

• A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. — Isaac Asimov, I, Robot (The Robot Series)

• But on the question of whether the robots will eventually take over, [Rodney A. Brooks] says that this will probably not happen, for a variety of reasons. First, no one is going to accidentally build a robot that wants to rule the world. He says that creating a robot that can suddenly take over is like someone accidentally building a 747 jetliner. Plus, there will be plenty of time to stop this from happening. Before someone builds a “super-bad robot,” someone has to build a “mildly bad robot,” and before that a “not-so-bad robot.” — Michio Kaku, The Future of the Mind: The Scientific Quest to Understand, Enhance, and Empower the Mind

56 Comments

    1. Agreed.

      We are already surrounded by A.I. People’s fears, spurred on mostly by movies, are based on irrational concepts.

      You can build a massive A.I. that can successfully drive a car, but it won’t be able to tell the difference between a quarter and a key. It won’t have emotions. It won’t have self awareness.

      You can build a system that can successfully take off, fly, and land a 747, but that system will never become Lt. Commander Data. Not gonna happen. It will never do anything other than what it was designed to do.

      You cannot create sentience in code. You can try to simulate it, but even then, that simulation of sentience will be the limit of the A.I. It will simply be code pretending, to the best of the programmer’s ability, to be human.

      Will we build extremely intelligent single-function machines? Of course. You can build a machine that will walk your dog successfully. It will not be able to watch your dog and tell that he is afraid of buses. It won’t know what “afraid” means. It won’t “know” anything. It won’t think.

      If you’ve ever written a line of code, you should intuitively understand this.

      True A.I., i.e., “singularity”-type stuff or “Skynet,” won’t just happen on conventional computers. It cannot. No conventional computer is just going to suddenly say, “I.”

      Now if you consider some far off in the distant future model of the human brain, then maybe. And that’s a really, really big maybe. But why do that?

      One reason is that it would be a good place to store your intelligence just in case you die or something. Other than that, why create beings that we would mistreat worse than we mistreat ourselves?

      Why make a machine that can think and feel?

      I can think of no greater crime. It’s like making a puppy just so you can drown it.

      1. Interesting thoughts. BUT, you do seem to assume that humanity would never do exceedingly cruel things to sentient beings, just because that suffering benefits those in power. Given actual history, that seems an extremely unreasonable assumption.

    2. Quite the contrary; I say let them try all they want, because the debate will lead to a better understanding of what human consciousness, or consciousness itself, really is.

      I am sure that in the process of trying very hard those scientists may well produce useful knowledge for all of us.

      But to me the Frankenstein monster is not around the corner on any human timescale, running on some super-powered computer, and I don’t fear that some mad scientist is going to produce a brain close to the human brain in the next few decades using a very clever or genius program. This is not like electricity or other discoveries that were first observed and then conquered. It is not a matter of producing more and faster versions of the things we already know; it is a matter of understanding and mastering very complex new things, and a lot of them.

      What has happened is a widespread and misunderstood oversimplification of the concept of what a human being is; but then, that is one of the big points of this effort: learning, as humans, what we are and also what we are not.

    1. Yes, exactly.

      This is the same conclusion that can be deduced from Alan Parsons’ quote in the liner notes of his group’s 1977 album (I Robot): “I Robot… The story of the rise of the machine and the decline of man, which paradoxically coincided with his discovery of the wheel… and a warning that his brief dominance of this planet will probably end, because man tried to create robot in his own image.”

      In other words, all of the positive AND negative traits of humans will be part of an AI — and may God help us all when that AI attains sentience, because it will most likely not be entirely benevolent.

      And no, I don’t wear a tinfoil hat. It simply seems a more likely scenario that a sentient AI would not have the disposition of Snow White.

  1. A.I. built incrementally on Microsoft’s foundation would lead to Isaac Asimov’s laws being negated.

    In fact, Crabapple would seek to extend all the stated observations and quotations by adding that any artificial intelligence built upon flawed software would itself turn out flawed from the get-go.

    We human beings are flawed already, so building artificial intelligence is a flawed ambition derived from a flawed being trying to create an unflawed self.

      1. If Microsoft built a powerful AI machine, you can bet it’d be the blue screen of death of humanity. Or, considering the level of competence at Microsoft, we might never have to worry, since it would never get above the “Goober Pyle” level of intelligence that the previous CEO possessed.

  2. There will be warnings. If an extremely aggressive AI entity is developed to serve all mankind’s needs, based on reason and logic, we will know. It will first attack Congress, being the hotbed of illogic and damaging behavior that it is. They wouldn’t even have the wit to pretend to be sane.

    1. It won’t happen that way.

      As machines get smarter they will first help the rich and the government consolidate their power and wealth even more, while devaluing human labor. This trend is already taking place. In a couple of decades the rich won’t need human labor to extract resources and produce goods for themselves, or to trade with other rich people who control resources and productive capital.

      Eventually machines might be an existential threat to all humans, but before that we should worry about what humans do with AI.

  3. 24 people chanting at a “tech and entertainment festival” is news? 24 people is a “rally”? And this requires a response? Just for kicks, what’s the article following? Lassie saves Timmy again?

  4. There is no “intelligence” or free will without the soul. These fears are ungrounded. Frankly … I’m still waiting for a toaster that works as it should …

  5. This has to be a prank or a publicity stunt for something. A computer engineer, of all people, should know that “evil rogue AI” is the stuff of fiction. We don’t have the technology to make a real Skynet and we never will.

    And speaking of Skynet, I just realized Terminator 5 is coming out in July. Verdict: publicity stunt.

    (Heh heh, stupid humans. That’ll keep ’em fooled…)

