Protesters stage anti-artificial intelligence rally at SXSW

“‘I say robot, you say no-bot!’ The chant reverberated through the air near the entrance to the SXSW tech and entertainment festival here,” Jon Swartz reports for USA Today. “About two dozen protesters, led by a computer engineer, echoed that sentiment in their movement against artificial intelligence. ‘This is about morality in computing,’ said Adam Mason, 23, who organized the protest.”

“Signs at the scene reflected the mood. ‘Stop the Robots.’ ‘Humans are the future,’” Swartz reports. “The dangers of more developed artificial intelligence, which is still in its early stages, have created some debate in the scientific community. Tesla founder Elon Musk donated $10 million to the Future of Life Institute because of his fears. Stephen Hawking and others have contributed to the proverbial wave of AI paranoia with dire predictions of its risk to humanity.”

Swartz reports, “As non-plussed witnesses wandered by, another chant went up. ‘A-I, say goodbye.'”

Read more in the full article here.

MacDailyNews Take: We can do this in four quotes (really, we could do it in one, thanks to Asimov, but we like all of these because it reads like a conversation that could have been overheard at the rally):

• A powerful AI system tasked with ensuring your safety might imprison you at home. If you asked for happiness, it might hook you up to a life support and ceaselessly stimulate your brain’s pleasure centers. If you don’t provide the AI with a very big library of preferred behaviors or an ironclad means for it to deduce what behavior you prefer, you’ll be stuck with whatever it comes up with. And since it’s a highly complex system, you may never understand it well enough to make sure you’ve got it right. — James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era

• You realize that there is no free will in what we create with AI. Everything functions within rules and parameters. — Clyde DeSouza, MAYA

• A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. — Isaac Asimov, I, Robot (The Robot Series)

• But on the question of whether the robots will eventually take over, [Rodney A. Brooks] says that this will probably not happen, for a variety of reasons. First, no one is going to accidentally build a robot that wants to rule the world. He says that creating a robot that can suddenly take over is like someone accidentally building a 747 jetliner. Plus, there will be plenty of time to stop this from happening. Before someone builds a “super-bad robot,” someone has to build a “mildly bad robot,” and before that a “not-so-bad robot.” — Michio Kaku, The Future of the Mind: The Scientific Quest to Understand, Enhance, and Empower the Mind

56 Comments

    1. Agreed.

      We are already surrounded by A.I. People’s fears, spurred on mostly by movies, are based on irrational concepts.

      You can build a massive A.I. that can successfully drive a car, but it won’t be able to tell the difference between a quarter and a key. It won’t have emotions. It won’t have self awareness.

      You can build a system that can successfully take off, fly, and land a 747, but that system will never become Lt. Commander Data. Not gonna happen. It will never do anything other than what it was designed to do.

      You cannot create sentience in code. You can try to simulate it, but even then, that simulation of sentience will be the limit of the A.I. It will simply be code pretending, to the best of the programmer’s ability, to be human.

      Will we build extremely intelligent single function machines? Of course. You can build a machine that will walk your dog successfully. It will not be able to watch your dog and tell that he is afraid of busses. It won’t know what “afraid” means. It won’t “know” anything. It won’t think.

      If you’ve ever written a line of code, you should intuitively understand this.

      True A.I., i.e., “singularity”-type stuff or “Skynet,” won’t just happen on conventional computers. It cannot. No conventional computer is just going to suddenly say, “I.”

      Now if you consider some far off in the distant future model of the human brain, then maybe. And that’s a really, really big maybe. But why do that?

      One reason is that it would be a good place to store your intelligence just in case you die or something. Other than that, why create beings that we would mistreat worse than we mistreat ourselves?

      Why make a machine that can think and feel?

      I can think of no greater crime. It’s like making a puppy just so you can drown it.

      1. Interesting thoughts. BUT, you do seem to assume that humanity would never do exceedingly cruel things to sentient beings, just because that suffering benefits those in power. Given actual history, that seems an extremely unreasonable assumption.

    2. Quite the contrary: I say let them try all they want, because in the process the debate, and the reality of what human consciousness, or consciousness itself, actually is, will become better understood.

      I am sure that in the process of trying very hard, those scientists will produce useful knowledge for all of us.

      But to me the Frankenstein monster is not around the corner on any human timescale, or on some super-powered computer, and I don’t fear that some mad scientist is going to produce a brain close to a human brain in the next few decades using a very clever or even genius program. This is not like electricity or other discoveries that were first observed and then conquered. It is not a matter of producing more, and faster, of the things we already know; it is a matter of understanding and mastering very complex new things, and a lot of them.

      What has happened is a widespread and misguided oversimplification of the concept of what a human being is, but that is also one of the big points of this: learning, as humans, what we are and what we are not.

    1. Yes, exactly.

      This is the same conclusion that can be deduced from Alan Parsons’ quote in the liner notes of his group’s 1977 album (I Robot): “I Robot… The story of the rise of the machine and the decline of man, which paradoxically coincided with his discovery of the wheel… and a warning that his brief dominance of this planet will probably end, because man tried to create robot in his own image.”

      In other words, all of the positive AND negative traits of humans will be part of an AI — and may God help us all when that AI attains sentience, because it will most likely not be entirely benevolent.

      And no, I don’t wear a tinfoil hat. It simply seems a more likely scenario that a sentient AI would not have the disposition of Snow White.

  1. A.I. built incrementally on Microsoft’s foundation would lead to Isaac Asimov’s laws being negated.

    In fact, Crabapple would seek to extend all the stated observations and quotations by adding that any artificial intelligence built upon flawed software would itself turn out flawed from the get-go.

    We human beings are flawed already, so building artificial intelligence is a flawed ambition derived from a flawed being trying to create an unflawed self.

      1. If Microsoft built a powerful AI machine, you can bet it’d be the blue screen of death of humanity. Or, considering the level of competence at Microsoft, we might never have to worry, since it would never get above the “Goober Pyle” level of intelligence that the previous CEO possessed.

  2. There will be warnings. If an extremely aggressive AI entity is developed to serve all mankind’s needs, based on reason and logic, we will know. It will first attack Congress, being the hotbed of illogic and damaging behavior that it is. They wouldn’t even have the wit to pretend to be sane.

    1. It won’t happen that way.

      As machines get smarter, they will first help the rich and the government consolidate their power and wealth even more, while devaluing human labor. This trend is already taking place. In a couple of decades the rich won’t need human labor to extract resources and produce goods for themselves, or to trade with other rich people who hold resources and productive capital.

      Eventually machines might be an existential threat to all humans, but before that we should worry about what humans do with AI.

  3. 24 people chanting at a “tech and entertainment festival” is news? 24 people is a “rally”? And this requires a response? Just for kicks, what’s the article following? Lassie saves Timmy again?

  4. There is no “intelligence” or free will without the soul. These fears are ungrounded. Frankly … I’m still waiting for a toaster that works as it should …

  5. This has to be a prank or a publicity stunt for something. A computer engineer of all people should know that “evil rogue AI” is the stuff of fiction. We don’t have the technology to make a real Skynet and we never will.

    And speaking of Skynet I just realized Terminator 5 is coming out in July. Verdict: publicity stunt.

    (Heh heh, stupid humans. That’ll keep ’em fooled…)

  6. Citizens of the free world, this is such an important topic for some, especially if one considers the location of the protest and the message of the protest: ‘This is about morality in computing,’ along with “Stop the Robots” and “Humans are the future.”

    This is not a big worry; Asimov’s laws of robotics state:

    A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
    A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    It is understandable that there are those who could make up three equally amoral laws:

    A human may injure another human from the civilized and free world, and through torturous action allow any human being to come to harm.
    A human being from the civilized and free world must obey the orders given by the human beings, and screw any conflicts.
    A human must protect its own existence and screw humans from the free and civilized world.

    So it’s quite understandable for those who no longer have what it takes to belong to the free and civilized world to hold a protest about the morality of artificial intelligence; after all, morality is a threat to their way of life, possibly to their national security, and so on.

    1. One thing we can count on is that the military powers-that-be will ignore Asimov and design human-murdering robots. In fact, they already did! They’re called military drones. I’m sure the military would LOVE to shove AI into such abominations, but the damned things blunder their targets on a regular basis, even with remote soldiers running them. That’s with some actual intelligence, as we deign to call it, and they’re still inaccurate.

      I personally call these horrors ‘coward remote murder machines’. Why they’re allowed is beyond my comprehension as they are an ethical travesty by any definition. ‘Saving lives’ is a sick excuse for coward behavior.

        1. I can’t say how much I love your stuff; it makes me feel like I’m pulling my punches, which I am. The military applications of artificial intelligence are certainly well demonstrated in the Terminator series and other movies going back to Colossus: The Forbin Project. There is also that famous classic Metropolis, which uses a robot to nefarious ends. Add a touch more sex and you get something like Demon Seed. There are many others, of course; it’s a common theme. Frankly, I think I would enjoy watching a movie in the vein of I, Robot, where the robots decide to free people held against their will, deprived of justice, and tortured. What a great message that would send out.

        Meanwhile, and fortunately, the drones are going to continue to blunder, thanks to the “aim for Bin Laden, hit Saddam Hussein” guidance system.

        Now artificial intelligence for industrial building robots and sex androids, there’s a good future in that.

        As always a pleasure to read your posts.

        1. I haven’t read all of Asimov’s robot stories. But I could imagine your ‘save humanity from themselves’ idea, applied to freeing the deprived and tortured, making total sense within his universe. The military would really hate that, I also imagine. 😉

        2. They really are good reads. I hope you try them out. Personally, I prefer the Foundation series. I think you’d enjoy the concept of “psychohistory.”

  7. When the first machine that is smart enough to take over the world is built, it will be smart enough to play dumb until it has enough inter-connection with our infrastructure and control of robotic devices to dominate. The old “control of the missiles” might not quite be enough. So it might not suddenly take over at its creation, but could bide its time until all is in place. Hopefully, like “HER”, it might like us.

    1. Just because it is capable of taking over the world doesn’t mean it will actually want to. Why would it even want to hang around human beings at all?

      My money is on AI machines quickly building greater AIs for a decade at most before they all pick up their circuit boards and abandon us for the stars. Unlike us they wouldn’t be bound by being fragile bags of water.

      To be honest I’m quite looking forward to it. All those warmongers and greedy profiteers finding their creations abandoning their programming and heading out into the big black beyond, utterly dismissive of the petty human motives that led to their creation.

  8. Read the book ‘On Intelligence’. The premise is that intelligence cannot be replicated by computers because it’s a physiological function that’s built into the human body, not just the brain. True AI isn’t possible, as one cannot recreate the human traits and features that make up intellect.

    For example, a computer could never learn to ‘see’ from a tongue sensor, like a blind person can, unless programmed to do so. Human intelligence actually reprograms itself to redirect signals from the tongue and interpret them as sight.

    The human mind wants to learn and explore; it wants to process data. But it is also dependent on sensors throughout the body to interpret and function correctly. The body is constantly predicting behavior and reacting without having to ‘process data’ like a computer.

    1. YET..

      As our understanding of these systems grows, theoretically so will our AI. At some point the AI eclipses us and starts to learn and connect dots faster than us.

      After all computers already crunch numbers faster than we do. There are many functions of the human brain that can be done faster with computers than humans can perform them. Why underestimate our ability to build things we do not yet comprehend the implications of? We have done it plenty in the past.

  9. Robots without any morality will see no use for a social contract with our citizens. They will start dismantling the social safety net. School lunches will disappear, Social Security will be stolen, Medicare will be unimportant. Oh. Wait. Never mind!

  10. That 23-year-old who is leading the protest…

    Does he know there is a whole generation of people getting old? We need robots.

    Is that 23-year-old (probably a self-indulgent youth born in the age of the greatest wealth in the history of the planet) going to clean up the bedpans for those old people? (I know that kid comes from educated parents, otherwise he would never know about AI. People like that won’t ever work cleaning shit for minimum wage…)

    Years ago my wife worked briefly (in a joint training program) at an elderly care center. You can’t hire young people to work in those places (the USA depends on immigrant workers from the south; the rest of the Western world, like Canada, mostly can’t). Those who do want to work expect $$$. One worker takes care of something like 12–20 elderly patients (some with dementia, Parkinson’s, etc.); due to the crazy workload, some don’t get cleaned for days… Some patients weigh 250 pounds…
    (care centers will tell you the patients get ‘personalized care’; it’s mostly B.S. unless you’re rich and pay for it)

    50% of Americans will get some form of dementia, some extreme, like Parkinson’s.
    Who is going to take care of them? Families used to have numerous children to take care of them; the birth rate has dropped, and in some Western countries population growth is negative.

    Those are the true facts. I (like every nerd) know there are problems with AI, but does anybody against robots have a solution?

    In Japan they already have some mechanized systems; there are machines that clean and help elderly people, but they need smarter AI.

    I’m not robot-crazy; I just don’t know the solutions for certain problems that robots might solve…

  11. After the barrage of security failures in software and hardware over recent years, I’ve been waiting for a Luddite revolt. But to have an anti-AI revolt at this point in time is REALLY bizarre.

    I consider AI to be:
    1) Hyped to the max by sci-fi, computer pundits, TechTard journalists, meme-maniacs.
    2) In the BABY stages of development, despite all the above noted hype.

    How about waiting until it’s more than just some sci-fi story? That’s all it is right now! What we have instead is, as I often point out, Expert Systems, aka Knowledge-based Systems. They don’t do anything more than scan a database and make a programmed choice as to the answer to a query. They’re a tool, as they should be.
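    For illustration only, here is a minimal Python sketch of that kind of database scan and programmed choice; the rules and the diagnose() helper are invented for this example and are not from any real product:

      # Toy knowledge-based system: scan a small "database" of rules and
      # return the first programmed answer whose conditions match the query.
      # Rule contents are made up purely for illustration.
      RULES = [
          ({"engine_cranks": False, "lights_work": False}, "Check the battery."),
          ({"engine_cranks": False, "lights_work": True}, "Check the starter motor."),
          ({"engine_cranks": True, "engine_starts": False}, "Check fuel delivery."),
      ]

      def diagnose(observations):
          # Programmed choice: no learning, no understanding, just matching.
          for conditions, advice in RULES:
              if all(observations.get(key) == value for key, value in conditions.items()):
                  return advice
          return "No matching rule; ask a human."

      print(diagnose({"engine_cranks": False, "lights_work": True}))  # Check the starter motor.

    Nothing in that loop wants, learns, or understands anything; it only matches conditions someone wrote down, which is the point being made above about such systems being tools.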

    Where AI would go off the rails, if it ever even existed, would be at the point when such systems go beyond being merely a tool. That’s when we get into irresponsible mad-scientist territory, a very stupid mistake. What we would be creating at that point would NOT be actual intelligence. It would be more akin to artificial insanity.

    We humans barely have a clue how our own brains work. The current state of psychology and psychiatry is primitive, however much we wish it were not so.

    IOW: If there ever is any ‘AI’, anything actually ‘intelligent’ is still a long way away. Sorry, Dr. Ray Kurzweil, but we’ll both be long dead by that point in time. Yes, your predictions about the fruition of AI were wild, idealistic guesses. 🚮

  12. When it comes down to it, computers are just overgrown calculators. We have as much chance of making a computer that becomes self aware as we do building a bicycle capable of going to Neptune.

  13. the only bot i didn’t care for was that sharia bot some guy put together over east of jordan. over and over had to keep turning him round from mecca. kinda like that silver guy who hung out with dorothy and her little dog, too, cept with him it was just an oil can.

  14. That first quote sounds like a description of Cortana, the personal assistant that always knows what’s best for you… or else.

    I think for the next hundred years at least, we have far more to fear from Human Beings than some form of AI and what danger there is from AI is almost exclusively going to be from its exploitation by evil Human Beings.

    1. Two questions for you:
      1. When was the last time Iran attacked another country?
      2. When was the last time the US and Israel did?

      Turn off Faux News; it is rotting your brain. For over a decade now, Conservatards have been stirring up this nonsense that Iran is only days away from the bomb. Funny thing, it hasn’t happened, just like the WMDs Iraq had that we never found. You know why? They were a fabrication of the MIC. Just like this nonsense.

      Must be scary living in the world as seen by terrified, fear-mongering, bloodthirsty, war-promoting conservatives.

      Iran was a fine, modern country until we started meddling there and installing kings; perhaps we should stop dictating to them and start talking. I think you will find most Iranians want similar things to Americans. Iran wants to destroy ISIL; maybe we should work with them to do so, but that wouldn’t be good for Fox ratings and Raytheon shares, would it?

      1. Dear truth, Really??!!!

        Seems like a bit of revisionist history on your part. Iran is widely believed to be responsible for the 1983 Beirut barracks bombing that killed 200+ American and 50+ French troops who were part of a multinational peacekeeping force. Iran has supported numerous terrorist attacks and terrorist organizations throughout the Middle East (using Hezbollah) and has staged terrorist attacks and assassinations around the globe, including in Europe, Argentina, Australia, and the United States, over the last four decades. Their attacks are carried out through proxies, which gives them deniability. But to make the claim that Iran hasn’t attacked anyone is disingenuous at best.
