Woz: One day, robots will make us their pets

“Steve Wozniak foresees a world far in the future controlled by artificial intelligence,” Teena Hammond reports for TechRepublic. “This used to scare him, but after reasoning his way through it in a uniquely Woz way, he’s decided it will benefit humanity in the long run.”

“‘They’re going to be smarter than us and if they’re smarter than us then they’ll realize they need us,’ Wozniak told the audience of 2,500 people today at The Moody Theater in Austin, Texas, as part of the Freescale Technology Forum 2015, which runs June 22-25,” Hammond reports. “Wozniak said he used to lie awake and worry about the concept. But he finally realized, ‘It’s actually going to turn out really good for humans. And it will be hundreds of years down the stream before they’d even have the ability. They’ll be so smart by then that they’ll know they have to keep nature, and humans are part of nature. So I got over my fear that we’d be replaced by computers. They’re going to help us. We’re at least the gods originally.'”

I want the Internet of Things. It does things for me. I don’t have to think. The Internet of Things, if it ever did want to take over the world, would send a message to the computers of today saying, ‘build us the Internet of Things, that’s what we need.’ It makes things nice for humans, so we want this. If it turned on us, it would surprise us. But we want to be the family pet and be taken care of all the time. — Apple co-founder Steve Wozniak

Tons more, from Apple to Segway to Android and more, in the full article – recommended – here.

MacDailyNews Take: We should be so lucky. 😉

[Thanks to MacDailyNews Readers “Fred Mertz” and “Lynn Weiler” for the heads up.]

32 Comments

    1. One thing we should never do is create something that can feel. If we build robots that are as intelligent or more than we are, knowing how cruel we can be, the most cruel thing in the world would be to give them feelings.

      1. T-Mac,

        I haven’t been much into computer games since the early ’90s, for almost the same reason. It dawned on me one day that the degree of difficulty or ease of winning was programmed into the games themselves by the programmers. (BTW, back then I didn’t mind taking a text editor to a game, finding the “score” section, and “bumping” the score up.) Games have lost all of their fun for me since then.

        You are right. When you can’t control the negative or positive aspect, the results can be horrible. That is what feelings do.

        Your reasoning is unclear. Modern humans have existed for a relatively short period of time, perhaps a few hundred thousand years (pick your number, but it is well under a million). Technology has only become advanced in the past couple of centuries. Computers and robotics have only existed, even in rudimentary form, for about 70 years.

        Progress toward advanced robotics continues. The big question is whether biology somehow has a monopoly on self-awareness, creativity, and emotions, or whether these are all mathematical constructs like the rest of the universe. If it is the latter, then AI is inevitable.

        I get tired of people casually disparaging Woz. He says some interesting things and has some interesting points of view. You don’t have to agree with him, or even like him. But he has made some significant contributions to our technological progress – probably a lot more than the vast majority of the people who post disparaging remarks about him.

  1. Woz, we already have corporations that make us their pets. Witness ‘Fast Track’ passing in the US Congress. Here come TPP and TTIP, the treaties that will make corporatocracy the law of the human globe. And these same people will be sending robots to your door to shut us all up one of these days. We have a lot more immediate and dangerous things to worry about than SkyNet or the need for a Butlerian Jihad. [Dune reference]. IOW: Psychopathic humans are NOT our friends.

    1. Frank Herbert’s Dune is an interesting reference, because it takes place in a distant future where “thinking machines” are outlawed. How that came to be is all backstory to the Dune story, where there are “guilds” of humans evolved and trained to take the place of thinking machines.

      So, that’s the twist (still just the backstory to the actual Dune story). First, humans create machines that replace humans and more, and things are good for a long time. But it leads to conflict; humans eventually prevail. To avoid repeating history, humans evolve into enhanced “breeds” to replace the thinking machine. Some have mega-computational ability, some can navigate spaceships by “folding space,” some have abilities like the “Jedi” in Star Wars, etc.

      That might be the third option, but without the A.I. takeover. It’s not US against THEM. Technology gradually enhances humans over the generations. The “new normal” is established with each generation, so there is no resistance (from the general population) to the ongoing change. In this way, humans incorporate the tech of artificial intelligence, without ever spawning it as a separate entity. Humans gradually become the “thinking machines.”

  2. Everyone has an opinion and Woz does too. Unfortunately, Woz thinks his opinion matters, just as I think mine does. I lost respect for him years ago.

  3. Woz was a wonderful engineer, a genius at making chips and hardware perform far beyond their original design, but his conversations are merely fun and beyond his narrow area of expertise. And I’m sure he would agree.

  4. There are at least 5 separate but complementary modes of intelligence that are all necessary for the level of “intelligence” that has been achieved by man, and arguably to some extent by other living critters we share Gaia with.

    1. Logical (left brain/cortex) intelligence, related to attention, and using reason to break problems down into parts and pieces that facilitate solving the problem. This mode is highly dependent on framing problems within an artificial “closed system”.
    2. Holistic (right brain/cortex) intelligence, related to awareness, and seeing the big picture and where we and our problems are integrated into that big picture. This mode is able to visualize a “problem” within the broader context of the reality of open systems.
    3. Emotional intelligence, related to wants and needs and how to manipulate our own emotions and the emotions of others to get what we want and need – located in the limbic system and “wired up” by the time we are 5 or 6 years old.
    4. Physical intelligence, related to physical action, partially located in the brain stem and throughout the body’s nervous system and related to physical talents.
    5. Spiritual intelligence, related to universal awareness and grasping the difference between right and wrong, good and bad, proper and improper, etc….

    All of these modes of intelligence function using memory, which is arguably the only function where computers MAY have an advantage.

    Machine intelligence is just scratching the surface of #1. We have a very, very long way to go before we approach anything like the artificial intelligence depicted in entertainment-focused programs like Halle Berry’s “Extant” – an enjoyable work of dramatic fiction. In my own personal opinion, machines will never reach the level of intelligence of humans, or even of many other forms of life.

    1. THANK YOU 🙂

      You perfectly described the elements of what we all “feel”, so never underestimate common sense!

      But where do we go without Jon Stewart 🙁

  5. If you watch the DARPA Robotics Challenge, you will realize that robots are a long long way off.

    Simply getting out of a car is a challenge. Why????

    Humans have spent millions of years (and life in general, billions of years) moving through space. We have taken something very complicated and turned it into something that feels very simple to us. Because it feels so simple, and because we carry so much built-in machinery for dealing with space, we find it difficult to comprehend just how hard it is to teach a robot to pick up where we left off.

    We seem to forget, math is very very simple. Yet it is very hard for us, mostly because we haven’t spent millions of years doing math. It’s not our thing. But since it’s so simple, it didn’t take much for us to teach computers how to do it.

    The human perception that MATH = DIFFICULT does not imply that IF COMPUTERS + MATH = EASY, THEN COMPUTERS + EVERYTHING ELSE = EASY. It’s our own misunderstanding of the fundamental problems of describing the world around us…

    Instincts are better than our programming abilities.

    The solution to a robot hunting you down, is to close the door behind you.

  6. I think the process of developing AI should also include showing respect and spirituality. We should teach AI as we teach our own children, so that they learn love and empathy as we do.

    But that is far far down the road. Woz did say hundreds of years.

      1. Religion is important in order to understand the human condition. There are so many variable perspectives that unless you know all of them, people will seem a lot more crazy than they are. But maybe they really are crazy. What if Chappie landed in a religious group? (Chappie is a robot from a recent movie, for those who don’t know.) “I need some moneies for a new battery!”

  7. Woz’s iPhone told him to say “One day…” so we wouldn’t realize it’s already happened. Just look at all the people who’ve learned to sit, fetch, and roll over on iPhone release day.

  8. I think people are slightly misunderstanding Woz. He says that we ‘want to be the family pet’, which suggests a wish for robots to create a support system in which they do everything for us, but one in which we have ultimate control. I would be happy with that. There are many instances in life where we give other human beings the feeling of independence and direction, when we, in fact, are directing their ‘choices’. Bring on the era of the truly intelligent robot and make it soon, please.

  9. Given the most logical examination of the past and present “achievements” and limitations of humanity, we are far more likely to be destroyed by our creations than given paradise by them.

    Before ultra-powerful machines and robots achieve self-awareness and the ability to self-replicate (not to mention a humanitarian view of humanity), we will likely have used them for our own multiple and distinct agendas to destroy civilization and/or wipe ourselves out.

  10. I agree with Woz but disagree as well. We humans are going to dabble with AI, and it will benefit mankind for a while, but since man will make AI, there will be some imperfections. That is, if we get it done before WWIII, around 2040 to 2050. After that there will be no AI, and three quarters of the world’s population will be gone.
