Woz explains why he no longer agrees with Elon Musk, Stephen Hawking on Artificial Intelligence threat

“Tesla and SpaceX chief Elon Musk and renowned physicist Stephen Hawking have issued some terrifying predictions about the future of artificial intelligence,” Catherine Clifford reports for CNBC. “Musk has said AI is more dangerous than North Korea. He’s also warned it is a ‘fundamental risk to the existence of human civilization.’ Hawking has said ‘AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy.'”

“Steve Wozniak used to agree with Musk and Hawking’s foreboding about AI, but he doesn’t anymore, said the Apple co-founder at the Nordic Business Forum in Stockholm on January 24,” Clifford reports. “‘Artificial intelligence doesn’t scare me at all,’ Wozniak said… ‘I reversed myself.'”

“That’s because in cognitive ability, machines are eons behind even a young child. He’s not impressed,” Clifford reports. “In the end, Wozniak decided the human brain is so much more powerful than machines that he is not worried about the existential threat to society that Musk and Hawking have talked about.”

Read more in the full article here.

MacDailyNews Take: Okey-dokey, we’re with Woz!

And, conveniently, it’s the perfect excuse for the interns to do their one important job of the week: Tap That Keg!

To not becoming our robots’ pets: Hoist!

SEE ALSO:
Stephen Hawking: A.I. could be ‘worst event in the history of our civilization’ – November 7, 2017
Tech billionaires including Elon Musk think we live in a simulation of the real world – October 6, 2016
Woz: One day, robots will make us their pets – June 25, 2015
Woz: We made machines too important; that makes us the family pet – May 2, 2015

27 Comments

  1. Just for argument’s sake, I note it doesn’t take a lot of brains to kill people.

    Already we’ve got Predator drones; how hard is it to make an AI to self-pilot them?

    Every industrialized country is experimenting with robotic weapons and training AI to outthink humans in tactical situations; already you can set up AI networks to supervise things like anti-missile batteries.

    Look at those AI-controlled drones that can do intricate acrobatics like clouds of bees; they can outfly any human pilot (no matter what Battlestar Galactica would have us believe).

    I believe in AI, as I don’t think we can go forward, explore planets and so on, without it, but the tech linked to robots will be disruptive.

    1. You saved me some time. Fundamentally, the danger is more a matter of the degree of control that we cede to automation than of the innate intelligence of that AI. IMO, a disaster is far more likely to occur because of an automation failure or a flaw in the design of that automation than because of an AI with evil intent, at least in the next few decades.

      If we get to the point of real AI, like in an Asimov novel, then all bets are off unless we figure out an immutable way to encode the Three Laws of Robotics.

  2. We are in danger of destroying ourselves by our greed and stupidity. We cannot remain looking inwards at ourselves on a small and increasingly polluted and overcrowded planet.
    – Stephen Hawking

    At the moment, I ☁ abstractly ☁ consider the almighty authority of judge-and-jury Gort to be better than the clown show of idiocracy politics we have in the USA, both worthless parties, as Peterblood71 points out above.

    Here on Earth: Much as I admire Ray Kurzweil, the progress of “Artificial Intelligence” is incredibly slow. The enthusiasm for its progress has been immense; its results have been meagre. We’re still only working with Expert Systems, which by no means equate to AI, except in the imagination.

    We humans have this primitive science we call ‘Psychology’ that is mired in the muck of religion, culture, embarrassment, totalitarianism, propaganda, modesty, bombast, manipulation, fear, reproduction, abuse, HYPE and severe over-simplification. We literally cannot comprehend, as humans, our own ‘intelligence’. I don’t know what it is. We’re consistently stymied when we attempt to understand it because of the above-noted muck.

    Then there’s the Dark Age of Computing we’re living in, which we increasingly understand, day by day, to be fraught with errors and error-prone methods of coding. I won’t even go into the profound ‘LUSER’ factor amidst our species.

    Result: If there were an AI that stood up and took over humanity today, it would be a fizzling, shorted-out pile of blown circuitry tomorrow, either from tripping over bungles in its own code or from a massive attack of human hackery.

    So, thank you Woz for stepping out of the Kurzweil reality distortion field. We may well leap out of the Dark Age of Computing one day and have to worry about something legitimately artificially intelligent. But that day is not now!

    What we REALLY have to worry about is psychopathy politics, whereby we humans use our new techno-tools to kill one another on a grander and more COWARDLY scale. I foresee ever more sophisticated MURDER DRONES/ROBOTS in our future. Then, as usual, there’s the consequence of our continual defecation inside our own house. Let’s face these very human horrors instead of worrying about faerie changelings, witches, vampires, demons, zombies and rogue-bots.

    🔮 👻 👸 🤖 🎭 🦁 🐯 🐻 😱 🙀

    1. I think you have to understand a bit about AI before making any kind of assertion. I would think that Musk might understand it, but it is possible that Hawking does not. I would expect that Woz does.

      It is possible to be brilliant beyond categorization in the natural sciences and yet still be wrong about certain things, and Hawking is wrong here, as are all the people predicting that AI will destroy the world. It won’t happen. At least not in the “Skynet” way.

      The phrase A.I. has gotten out of hand, as most nomenclature does the moment marketing people get hold of it.

      The same thing happened back in the 80s when we thought A.I. was going to take over the world, and that the Japanese were far ahead of everyone with their “5th Generation” computers. It essentially fizzled out when it became clear that our imaginations had far exceeded the capabilities of the hardware.

      What has happened is that the same ideas and techniques from the 80s have been recycled. Machine Learning, Neural Networks, Expert Systems, Speech Synthesis, Vision Systems, Natural Language Processing, Knowledge Engineering, robotics, and so forth, are being revived.

      This time, however, the hardware has become fast enough, small enough, powerful enough, and inexpensive enough to apply the science.

      It wasn’t possible in 1986, for instance, to build a computer small enough to be contained within an automobile yet fast enough, with enough storage, to house the code and knowledge base needed to drive it. Coding, however, hasn’t changed *all that much.*

      So A.I. from the 80s is back, but now it can actually be implemented. My $1000 handheld supercomputer takes the place of a massive computer room full of giant systems from DEC, Data General, CDC, or IBM costing millions of dollars.

      The thing is, “A.I.,” as we are referring to it, is still focused on highly discrete, single-purpose intelligent systems. In fact, it wouldn’t be so scary if we just said “Smart” instead of A.I., but “A.I.” fuels the imaginations of investors.

      It also conjures up images of Skynet, Terminators, and HAL, but all you have to do to see how close we are to that is look at Siri or Alexa or any natural language bot out there. They’re utterly stupid.

      What worries many people is AGI or “Artificial General Intelligence.” It’s a holy grail and we’re no closer to building a system that has human-like general intelligence than we are to Warp Drive, Transporters, and Food Replicators.

      You can write a system that can take off, fly, and land a 747, but it doesn’t know what a penny is. It isn’t self-aware, and has no knowledge or instructions outside of flying that plane. It’s a zillion dollar automaton.

      It is true we will see “A.I.” techniques utilized throughout our lives, from self-driving cars to every device in our houses sharing data about our behavior, to make us happier and more comfortable, but again, these are discrete systems.

      I.e., your toaster will learn to make toast exactly the way you like it. Your coffee maker will make the best damn cup of coffee you’ve ever had. Your bathroom scales will share data with your Apple Watch about your behavior, along with your food-buying habits, and some disembodied but completely stupid voice will tell you to lay off the Twinkies.

      But… your toaster isn’t going to wake up in the morning and tell your car “If you crash him into a pole today, we can be rid of him.”

      All of the A.I. in the world will still not produce a single “THOUGHT.” It will still be stupid machines doing what stupid machines do, following instructions. And your super advanced self-driving car will have no idea what toast is, but it will drive a car better than any human being possibly could.

      The only thing to be apprehensive about will be how fast human beings lose jobs and purpose to these systems. In this respect, no one will be safe.

      I can already foresee, for instance, completely automated home construction, or completely automated Uber-like services. Robots will be preparing and serving food. Systems will be taking all that information your home gathers about your health and communicating it to your automated “Intelligent” physician. Your toilet will be doing waste analysis and sending data to the auto-Doctor as well.

      With all that monitoring, you’d be getting a free checkup every day!

      Of course, you will have access and control over all of this via your iPad or other such devices. Conventional desktop and laptop computers are dying out.

      But as far as some monolithic human-like intelligence taking over the world goes… meh.

      With one caveat… if people learn to connect a computer to the human brain… maybe. If we learn to copy a human brain and convert it into software somehow, maybe. But that’s just crazy talk right now; hundreds-of-years-away crazy talk.

      1. Exactly. To sum it up, the systems you are talking about do not have the capability to answer a question that someone has not programmed them to answer. They can only “choose” the answer from a list they have been given (see the sketch below).
        Siri and all the others are better than they have been, but a smart six-year-old can confuse them.

        And that’s good in the overall scheme of things.

        1. People also need to learn to recognize when so-called experts are paving the way for their government grants and government agencies are paving the way for their own budget increases. Nothing sounds so stupid as a Congressperson quoting someone who is asking them for research money. The easiest way to get government money is to establish nationwide moral panic. A moral panic is a feeling of fear spread among a large number of people that some evil threatens the well-being of society. … Stanley Cohen states that moral panic happens when “a condition, episode, person or group of persons emerges to become defined as a threat to societal values and interests.”

      2. ⭐Thank you, thetheloniousmac.

        In keeping with my concept of ‘The Age Of Trivia’: Yes, this is all First World Thinking. To save space I’ll simply point to the book ‘Brave New World’ by Aldous Huxley to get an idea of the growing separation of thinking between the First and Third World. Meanwhile, the natural systems of Earth dictate our survival, no matter how much we ignore them and wreck them. We live on Miracle Planet Earth, Our Only Home. <–As I call it.

  3. I have mixed comments. I strongly doubt that strong AI will be achieved soon. The short answer is that we don’t know how intelligence and the brain work, so we can’t reverse-engineer them. On the other hand, computers and expert systems will get faster and more powerful. Abuse of this technology will hurt people. Failures of it will be hard to deal with. This will be akin to the large-database problem: wrong information gets in the database and you go through hell getting it corrected. An AI system will produce a result that can’t be explained or tested. What to do?

    If self-driving cars produce a quarter of the fatalities that human drivers do, then we’d probably accept that. What if an AI system mostly works well but has a once-in-a-hundred-years chance of causing a huge catastrophe? Is that acceptable?

    What about repair and maintenance? The Golden Gate Bridge is 80 years old. We can easily maintain it for another 80 years. When an AI system is 80 years old, who will check the software? What if the AI system is writing its own software and starts behaving a little badly? Some years ago a navy ship running Windows got hit with a massive BSOD and had to be towed back to port. Are we smart enough to build massive systems that will be as resilient as we need them to be?

    Complex life has managed itself on this planet for something like 600 million years. In that time no fatal crash of DNA coding and replication has brought this system to a halt. Very impressive.

  4. Here is where he needs to rethink his position again. “That’s because in cognitive ability, machines are eons behind even a young child. He’s not impressed.” So would it be OK to put a young child in command of weapons of mass destruction? Because they are putting these weapons under the control of AI.

  5. Steve Wozniak is a weathercock who changes his opinion whenever the wind changes. Elon Musk is a hypocrite with a prejudicial interest in regulating AI as long as it benefits his own AI interests. Musk is dangerous because he rolls out products without much thought about the consequences; for example, his flamethrowers, which can be misused as weapons.

    Authority figures are dangerous, rather than AI. Musk wants to plant chips in the brains of consumers, thus allowing corporations and governments to have greater access to and control over their citizens. Regulation also gives governments and a select group of interested parties a monopoly on and control over a technology.
