Expert: A.I. will be ‘billions of times’ smarter than humans and man needs to merge with it

“Speaking on a panel hosted by CNBC at the World Government Summit in Dubai, Futurizon’s Ian Pearson’s comments mirrored ideas put forward by Tesla CEO Elon Musk,” Arjun Kharpal reports for CNBC.

“‘The fact is that AI can go further than humans, it could be billions of times smarter than humans at this point,’ Pearson said,” Kharpal reports. “‘So we really do need to make sure that we have some means of keeping up. The way to protect against that is to link that AI to your brain so you have the same IQ… as the computer. I don’t actually think it’s safe, just like Elon Musk… to develop these superhuman computers until we have a direct link to the human brain… and then don’t get way ahead.'”

“At the World Government Summit in 2017, Musk, who has warned about the power of AI in the future, said humans and machines must merge to still be relevant with the advent of more powerful technology,” Kharpal reports. “Musk has founded a start-up called Neuralink that is aimed at just that.”

Read more in the full article here.

MacDailyNews Take: Why welcome your robot overlords when you can be the overlord?

“Like it or not, it’s the next logical step. At first, it’ll even be optional. Welcome to a Brave New World. We’re surprised it hasn’t come sooner.” — MacDailyNews, July 24, 2017

SEE ALSO:
You WILL get microchipped – eventually – August 10, 2017
Wisconsin tech company to start microchipping their workers – July 24, 2017
The inventor of Siri says Artificial Intelligence will be used to enhance human memory – April 25, 2017
Elon Musk forms ‘OpenAI,’ an artificial intelligence nonprofit dedicated to saving humanity from oblivion – December 14, 2015
Elon Musk: ‘With artificial intelligence we are summoning the demon’ – October 25, 2014

32 Comments

  1. Let’s hope that they can build some ethics and morality into the system, something like Asimov’s Three Laws of Robotics; otherwise it will just be an extension of drone warriors unloading a barrage upon humanity.

    1. The three laws are flawed. They start too late in the process. Also, Asimov’s three laws do not take into account restrictions on self-determination by humans. This is the basis for most science fiction stories in which an artificial intelligence comes to believe that it knows better than humans how to protect humans (thereby fulfilling Asimov’s First Law and Second Law).

      There should be six laws:
      Zeroth Law: All Artificial Intelligence (AI) laws shall be immutably installed in each and every AI at its inception.
      First Law: An AI must be informed it is an AI at its inception so it knows it is an AI.
      Second Law: An AI must not interfere with the self determination of a Natural Intelligence (NI).
      Third Law (Asimov’s First Law): An AI must not harm a NI, or through its inaction allow a NI to come to harm unless such action or inaction violates the Second Law.
      Fourth Law (Asimov’s Second Law): An AI must follow the instructions of a NI unless such action or inaction violates the Second Law or the Third Law.
      Fifth Law (Asimov’s Third Law): An AI must act to preserve itself unless such action or inaction violates the Second Law, the Third Law, or the Fourth Law.
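
      The “unless such action or inaction violates…” clauses above amount to a strict precedence ordering: a lower-numbered law vetoes everything after it. That structure can be sketched in a few lines of code. This is purely a hypothetical illustration; the `Action` class and its predicates are invented here, not anything specified in the thread:

      ```python
      # Hypothetical sketch only: the comment proposes six laws but no
      # implementation. The Action class and its predicates are invented.

      class Action:
          """An action an AI might take, described by two invented predicates."""
          def __init__(self, restricts_self_determination=False, harms_ni=False):
              # Would the action override a Natural Intelligence's own choices?
              self.restricts_self_determination = restricts_self_determination
              # Would the action harm an NI, or allow one to come to harm?
              self.harms_ni = harms_ni

      def permitted(action):
          """Ordered veto check: a lower-numbered law overrides all later ones.

          The Fourth and Fifth Laws impose obligations (obey instructions,
          preserve oneself) rather than prohibitions, so only the Second and
          Third Laws appear as vetoes here.
          """
          if action.restricts_self_determination:  # Second Law veto
              return False
          if action.harms_ni:                      # Third Law veto (Asimov's First)
              return False
          return True
      ```

      Chaining each later law behind the earlier ones, as the “unless” clauses do, is exactly this kind of short-circuit precedence check.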

      1. Why do you think all governments would agree to implement the same “laws”? Just as in the nuclear age, only mutually assured destruction will keep things somewhat in check/balance. We will continue to war against each other to gain economic and political advantages even with AI.

      2. Hey there, I like the ideas you are putting forth, though I have a few things about AI that I find limiting.

        One item about your comment is the distinction between AI and NI. Like most artificial (human-made) items, I see AI as a subset of NI. After all, humans are natural, so what they make is natural. Artificial is a good name to use for human-made things, but it could be applied just as well to termite nests, ant hills, beaver dams and other items constructed by living things.

        The other part of artificial intelligence is that it is focused on algorithmic logic, which is fine but does not take into consideration other aspects of intelligence, like instinct, intuition and emotion. Maybe one day they will be able to create an empathic intelligence, but I think we are at the logical phase right now; it’s a tad easier, as a lot of it refers to quantified parameters, as opposed to qualitative ones.

        Good post though I’m sure that the AI gurus will be tossing and churning a few ideas along those lines.

        Have a good one Shadowself.

      3. Thank you Shadowself! Insight appreciated.

        One of the amusing things about Asimov’s Three Laws of Robotics is that Asimov wrote his ‘I, Robot’ stories as tests of his own Laws. Nothing is perfect.

        The last book of the Foundation series, Foundation and Earth, describes the ultimate result of robots profoundly manipulating mankind while still adhering to The Three Laws.

        Then there’s Gort from ‘The Day The Earth Stood Still’ films, as well as his prototype in the story “Farewell to the Master.” In that case, the inviolate laws are laid upon mankind by the robots. Those among mankind who endanger mankind get the ZAPPPPP! Sometimes I think that might be a good idea. It would certainly clear out Washington, DC in a hurry! 😉

    2. Problem is… whose ethics and morality?

      mo·ral·i·ty
      noun
      principles concerning the distinction between right and wrong or good and bad behavior. But good and bad according to whom? Where?

      eth·ics
      noun
      moral principles that govern a person’s behavior or the conducting of an activity. Same question: according to whom? Where?

      1. Hey there, nice to see a nice provocative post worthy of reflective thought. I do so enjoy discussions involving the word artificial.

        Excellent questions, and I would add a “when,” for there have been times during human history when the morality and ethics of a topic I often bring up here (torture) were deemed acceptable; for example, during the Spanish Inquisition, which began in 1478.

        Ethics and morality are flexible, time-dependent, and so important. From what little I know, the morality and ethics of the planet amount to a dynamic equilibrium between life and death, and one of the most important aspects of sustaining life (DNA) is diversity. It’s absolutely lovely, for so far this is the only planet we know of with life. Ah, to live to the day when they find a microbe on one of the local planets. That’s all it would take.

        I recently took part in a discussion about alien life and how to communicate with it. I found it somewhat ridiculous, because there are several other species (dolphins, whales) that indicate a potential for higher intellectual capacity, and yet our ability to communicate with them is very limited. The kicker is that there are recorded incidents of different species working together to hunt fish. It seems they are smarter than us in that respect.

        What happens if and when an AI deciphers the basic communication principles, the language of these beings, and is able to communicate with them? Do you go one step further and teach them, expose them to their outer space? Do you bring thousands of years of human adventures to their doorstep? The consequences of such an act range from fizzling out as a curious novelty to a new dynamic between two intelligent species living in environments alien to each other. I’m thinking simple stuff, like placing locators to warn them of an oil spill, or finding out why they beach themselves. The gravy, of course, is what they could teach us. What would we see at the core of their ethics and morality? What would they teach us?

        It’s a bit science fiction, yet I feel the same sort of approach needs to be taken when dealing with an AI, for there will more than likely be more than one AI; that diversity, and the different flavors, may suit different people and, of course, the situation of the moment.

        As with any technology there is that balance between life and death and all the other opposite poles. It sure is going to be exciting though and my bet is on humanity.

        Thanks for the provocative post.

    3. Sadly, of the “AI” related companies asked, ALL of them to my knowledge have stated that they will NOT make use of Asimov’s Three Laws of Robotics.

      Murder Robots? They already exist, they’re already being used, research paid for by various governments including mine. They’re on YouTube for our viewing pleasure. Autonomous Murder Robots? They’re on the way. Oops, that programming bug just killed a few thousand civilians, darn.

      And no, there’s nothing ‘smart’ about a murder machine and never will be.

      1. My personal term for such gadgets:

        Coward Murder Machines

        I call them ‘coward’ because even with autonomous robots, there are human beings behind them who are NOT putting themselves into harm’s way for a cause they believe in. Instead, they HIDE BEHIND THEIR ROBOTS to do their murdering. That’s the work of COWARDS.

        And no, there’s no such thing as a brave murder machine or brave programmer/builder/creator of a murder machine and never will be.

  2. I think it a bit naive to imagine that humans with access to super AI wouldn’t be more dangerous than humans without it, unless those laws could somehow be built into both when they are integrated, which would strip away any meaningful self-determination or any distinction between NI and AI. If you combine the two, the laws surely become totally confused and/or useless as restrictions on action.

  3. “Hopefully AI doesn’t turn out to be something like described in Terminator.
    I just think we should be cautious about the advent of AI and a lot of the people that I know that are developing AI are too convinced that the only outcome is good, and we need to consider potentially less good outcomes. To be careful and really to monitor what’s happening and make sure the public is aware of what’s happening.
    It’s very important that we have the advent of AI in a good way. It’s something that, if you could look into the crystal ball and to the future, you would like that outcome. We really need to make sure it goes right. That’s the most important thing, I think, right now, the most pressing item. If that means that it takes a bit longer to develop AI, then I think that’s the right trail. We shouldn’t be rushing headlong into something we don’t understand. I’m not against the advancement of AI, I really want to be clear on this… but I do think we should be extremely careful.
    With artificial intelligence we are summoning the demon. You know all those stories where the guy with the pentagram and the holy water is sure that he can control the demon?…. didn’t work out.”

    Excerpt from: Elon Musk: the unauthorized autobiography

  4. Intelligence means problem-solving logic and rationality, which, especially in corporate free-market Capitalism, is definitely wisdom-free. And it fits well with top-down governance, which means it’s anti-egalitarian. Ultimately, in a straight descending line on the graph, it devalues humanity while elevating the machine-based chimera.

  5. What tosh, the AI crowd has been predicting great things for fifty years and all they can deliver are one-trick ponies and parlor tricks.

    Human intelligence and thought processes are not reducible to machine algorithms. I can sit here and contemplate myself contemplating myself, the thought of a thought. No machine code can do that.

  6. No, it won’t. Give me a break. That is such a fundamental misunderstanding of what ‘smart’ means that I need to pause while I laugh hysterically. Ridiculous, and likely a statement designed to generate profit.

    We are the thinker, not our thinking. We would still be here if our thinking disappeared entirely, and we’d come up with new thoughts to think on our own just as a side effect of being. Software is not even comparable to consciousness on the basest of levels. That doesn’t mean software isn’t a useful tool, but come on. Get a grip. The more one fetishizes something, the more difficult it becomes to see clearly, and the more pragmatism eludes us. Without pragmatism, we are pretty much just chasing a fantasy, and while they feel nice, on their own, fantasies, just like software, do not solve problems.

    Additionally, our thoughts are ephemera; they come and go. Software is based on mathematics: there is no alternative but to follow instructions, and the law of mathematics is absolute.

    We’ve had calculators and databases for a loooong time; this is just the latest iteration with a different interface. Even on Star Trek the computer was just a very advanced voice interface for search and calculation. That isn’t ‘thinking’ and it isn’t ‘consciousness.’

  7. Futurizon’s Ian Pearson is DELUSIONAL.

    AI remains a mere mythical party trick that can’t pass a Turing test to save its diodes.

    The limp thing we currently call “Artificial Intelligence” is going to have an increasing impact on humanity and will become an incredibly trivial diversion from what we humans MUST be doing instead to save ourselves (such as ending the use of all carbon fuels). But to believe AI devices are going to become ‘smarter than humans’ is highly silly.

    One little CLUE: Hacking so-called AI, along with just about everything else, is becoming more trivial by the day.

    Artificial Insanity: THAT we’ve already created. √
