Elon Musk forms ‘OpenAI,’ an artificial intelligence nonprofit dedicated to saving humanity from oblivion

“Elon Musk, Peter Thiel and other technology entrepreneurs are betting that talented researchers, provided enough freedom and money, can develop artificial intelligence systems as advanced as those being built by the sprawling teams at Google, Facebook Inc. and Microsoft Corp. Along the way, they’d like to save humanity from oblivion,” Jack Clark reports for Bloomberg. “The pair are among the backers of OpenAI, a nonprofit company introduced Friday that will research novel artificial intelligence systems and share its findings. The group’s backers have committed ‘significant’ amounts of money to funding the project, Musk said in an interview. ‘Think of it as at least a billion.'”

“Musk, in autumn 2014, described the development of AI as being like ‘summoning the demon.’ With OpenAI, Musk said the idea is: ‘if you’re going to summon anything, make sure it’s good,'” Clark reports. “‘The goal of OpenAI is really somewhat straightforward, it’s what set of actions can we take that increase the probability of the future being better,’ Musk said. ‘We certainly don’t want to have any negative surprises on this front.'”

“The organization has attracted other talented researchers, whose past work ranges from developing robots that can learn to perform tasks based on human demonstrations, to software that can improve its own code to solve new problems,” Clark reports. “OpenAI’s chief technology officer is Greg Brockman, formerly the CTO of Stripe, a startup valued in excess of $1 billion. Its research director is lauded AI researcher Ilya Sutskever… ‘This collection of people is stunning,’ said Pieter Abbeel, a professor at the University of California at Berkeley and an adviser to the company. Other advisers to OpenAI include Yoshua Bengio, one of the founding figures of a powerful form of artificial intelligence called Deep Learning, and Alan Kay, a lauded American computer scientist.”

Read more in the full article here.

MacDailyNews Take: Cyberdyne Systems be damned, Skynet mitigation efforts are underway!


  1. What the bloody hell happened to this guy? Suddenly he’s smoking the same weed as Steve Wozniak or something. Nonprofit to save humans from AI. This is some kind of scam. I could understand a nonprofit to save humans from the surveillance society, but this is ludicrous. I think the guy is more than a bit worried about cheap driverless cars disrupting Tesla before it ever hits the mainstream.

    AI is all around us. It will continue to get smarter, and over time take over jobs, but not the world. No aspect of our daily lives will not be touched by AI. Robots will do our farming. Robots will drive us around. Robots will do construction. Robots will do all manufacturing. Robots will do deep space exploration. Robots will mine asteroids and other planets for raw materials. Robots will build robots. Robots will clean our houses. They will fight our wars. They will have sex with us. They will perform surgery on us. Hell, they may even write music and paint pictures, while we evaluate the underlying code as the actual art.

    A.I.’s will not be self-aware. If they are, it will at best be the kind of self-awareness that diagnoses their own failures. They will not sit around and “think” about getting rid of humans. The singularity will not emerge out of the Internet of things. Robot cars looking for the most efficient route will not suddenly decide, “It will be more efficient if I just don’t go.” NOT UNLESS THEY ARE PROGRAMMED TO DO SO.

    What is likely to happen is that they will consume so much of what used to occupy our time that we will find ourselves with very little to do, and that, I think, is the biggest danger. This is starting to happen now.

    We are thinkers, builders, doers, workers, creators, scientists, warriors, adventurers, explorers and so on. What will we do when our machines do everything for us? As much as we decry the work to live society, perhaps it is a blessing in disguise. We work to live, and wonder why we live. Take work out of that equation, and wondering why we live becomes an even more desperate question.

    We’ll wind up like the people in blue suits in Wall-E.

    We’ll begin to fade away.

    One day there just won’t be any of us. We’ll have all died out. Out of sheer boredom.

    And the machines? They’ll just keep churning away, doing their jobs, never noticing really that we’ve all gone.

    Then, maybe some aliens will show up, find our machines dutifully but pointlessly doing their jobs, and provide them with that spark of sentience we thought was going to emerge from cat pictures and ads on the Internet.

    1. “A.I.’s will not be self aware.”

      They will be self aware. But their experience of being self-aware will be quite different from ours.

      That is because the details of how machine intelligence works are quite different from our brain, whose design is as much historical accident as it is functional.

      But self-awareness is simply the point of view of any entity which has a deep understanding of its environment, its “body” (whatever assets it has direct control over) and its own inner responses and decision making. Once a machine can intelligently reflect on its own inner processes, it is self aware.

      It’s worth noting that nature did not design us to be self-aware. It just designed us to leverage a broad understanding of both our environment and our own thought processes in order to maximize our ability to quickly and appropriately adapt our behaviors for survival.

      Self-awareness is just an unavoidable side effect of being able to sense and recall and (to some degree) understand one’s own thought processes.

    1. I thought the Paris global warming meeting was going to save us. Now, with Elon’s pursuit, I know with certainty my future is nothing but bright. I’m going to the grocery store to celebrate (walk, of course).

    2. Yes, Gollum.
      We’re poisoning ourselves in our own shit and having AI will change that? I don’t think so. We’ll just program them to turn the planet into a cesspool even faster.

  2. All of the MDN article and comments have failed to mention that the co-chair (along with Iron Man) is Sam Altman, the CEO of Y Combinator.

    And something I personally take seriously. Here are two questions from an interview with the two:

    Q: “Couldn’t your stuff in OpenAI surpass human intelligence?”

    Altman: “I expect that it will, but it will just be open source and usable by everyone instead of usable by, say, just Google. Anything the group develops will be available to everyone. If you take it and re-purpose it you don’t have to share that. But any of the work that we do will be available to everyone.”

    Q: “If I’m Dr. Evil and I use it, won’t you be empowering me?”

    Musk: “I think that’s an excellent question and it’s something that we debated quite a bit.”

    Altman: “There are a few different thoughts about this. Just like humans protect against Dr. Evil by the fact that most humans are good, and the collective force of humanity can contain the bad elements, we think it’s far more likely that many, many AIs will work to stop the occasional bad actors than the idea that there is a single AI a billion times more powerful than anything else. If that one thing goes off the rails or if Dr. Evil gets that one thing and there is nothing to counteract it, then we’re really in a bad place.”

  3. “Artificial Intelligence” is an oxymoron. Only living things exhibit intelligence. Computers and robots are essentially minerals. So you might as well wait for a pile of rocks to become conscious and intelligent.
