Woz, Musk, AI experts urge pause on training of AI systems that can outperform GPT-4

In an open letter, Apple co-founder Steve Wozniak, Elon Musk, and a group of artificial intelligence experts and industry executives are advocating for a six-month pause in the training of AI systems more powerful than GPT-4, citing potential risks to society and humanity.

Reuters:

The letter, issued by the non-profit Future of Life Institute and signed by more than 1,000 people including Musk, Apple co-founder Steve Wozniak and Stability AI CEO Emad Mostaque, called for a pause on advanced AI development until shared safety protocols for such designs were developed, implemented and audited by independent experts.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter said.

The letter also detailed potential risks to society and civilization posed by human-competitive AI systems in the form of economic and political disruptions, and called on developers to work with policymakers on governance and regulatory authorities.

MacDailyNews Take: Good luck with this noble effort, but this proposed “pause” runs contrary to human nature and, as in all endeavors that attempt to do so, will fail.

As we wrote in 2007, “Business models that fly in the face of human nature are doomed to failure.”

Systems of government that fly in the face of human nature are doomed to failure, too. — MacDailyNews, October 19, 2019

Anything that flies in the face of human nature is unsustainable and will therefore fail at some point.

“Pause Giant AI Experiments: An Open Letter,” verbatim:

AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1] and acknowledged by top AI labs.[2] As stated in the widely-endorsed Asilomar AI Principles, “Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.” Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.

Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects. OpenAI’s recent statement regarding artificial general intelligence, states that “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.” We agree. That point is now.

Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.[4] This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.

AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an “AI summer” in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society.[5] We can do so here. Let’s enjoy a long AI summer, not rush unprepared into a fall.

Notes and references here.
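For readers wondering what the letter's call for “provenance and watermarking systems to help distinguish real from synthetic” might look like in practice, below is a minimal sketch of one approach from the research literature: “green list” statistical watermarking (e.g., Kirchenbauer et al., 2023). A generator deterministically derives a favored subset of the vocabulary from each preceding token and biases its sampling toward it; a detector can later test whether a suspect text over-uses those favored tokens. The toy vocabulary, function names, and 50% green fraction here are illustrative assumptions, not any lab's actual scheme.

```python
import hashlib
import random

# A minimal, illustrative sketch of "green list" statistical text
# watermarking (in the spirit of Kirchenbauer et al., 2023). The
# vocabulary, parameters, and names are hypothetical toy choices.

VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "under",
         "mat", "rug", "big", "small", "quickly", "slowly"]

def green_list(prev_token, vocab, fraction=0.5):
    """Deterministically derive a 'green' subset of the vocabulary,
    seeded by the previous token. A watermarking generator would bias
    its sampling toward this subset at every step."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = sorted(vocab)  # fixed order so the seed is all that varies
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * fraction)])

def green_fraction(tokens, vocab):
    """Detector side: the share of tokens that fall inside their
    predecessor's green list. Unwatermarked text hovers near the
    green-list fraction (0.5 here); watermarked text scores higher."""
    if len(tokens) < 2:
        return 0.0
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in green_list(prev, vocab))
    return hits / (len(tokens) - 1)

if __name__ == "__main__":
    sample = "the cat sat on the mat".split()
    print(f"green fraction: {green_fraction(sample, VOCAB):.2f}")
```

A production system would apply the bias at the logit level inside the model and use a proper hypothesis test on the detector side, but the principle, a secret statistical signal that survives in ordinary-looking text, is the same.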


18 Comments

  1. MacDailyNews Take: “Good luck with this noble effort, but this proposed ‘pause’ runs contrary to human nature and, as in all endeavors that attempt to do so, will fail.”

    About that…why haven’t we gone back to the moon?

    1. Humans didn’t seem to have the wherewithal, or the concern, NOT to create and unleash mRNA, likely with aims/hopes to cull the population. The same evil is at work everywhere. Why in the world, then, would one think an admonition to “hold your AI horses” is going to stop those with malicious intent?

  2. Contrary to popular belief, perhaps, AI may not be good for human nature. There can be significant negative impacts from this tech, and they require careful consideration, which is what Woz, Musk, and others are warning about. To blow it off like that, MDN, shows ignorance on your part.

    1. Accusing others of “ignorance” when you’re so ignorant that you can’t read, or understand what you’ve just read, is richly ironic.

      Nowhere in their Take does MDN “blow off” the warning; they simply state that it won’t be heeded and explain why.

      1. “Blow it off” is not specifically stated in exact words, but it is implied. True reading comprehension, which you don’t seem to grasp, also lies in two things: one, reading between the lines; and two, noticing what is not said…

        1. MDN is very clear, but I will explain it for those who need help.

          Your first clue: MDN calls the letter a “noble effort.”

          Meaning: They side with the letter writers and signatories.

          Your next clue: MDN then explains why the pause won’t happen.

          Reason: Such a pause would run contrary to human nature, and therefore at least some humans will ignore the warning and proceed with their AI development. (In this case, you can infer that those who won’t pause their work would naturally fear that others wouldn’t heed the pause and would then leapfrog those who paused, so pretty much nobody will pause. HUMAN NATURE.)

          It’s very simple to understand if you possess, at the very least, basic reading comprehension capability.

        2. Obviously you flunked reading comprehension, AGAIN. I was talking about @Why’s characterization, which you don’t seem to grasp. As for “noble effort” in the first line: it is NEGATED by the rest of the Take, which also implies don’t waste your time, it will never happen. Pay attention: you can’t have it BOTH WAYS and ALL WAYS unless you’re a Democrat. Any journalist or writer is taught that the conclusion is the last sentence of news reports and stories. You must have received a participation trophy in school; it shows…

        3. You replied to me, “Reading Comprehension is your Friend,” not to “Why?,” in your first reply above, so I naturally responded to you.

          If you were replying to “Why?,” you should’ve used the “Reply” link under the comment made by “Why?,” not the “Reply” link under my comment, genius.

        4. By the way: “noble effort” is in no way “negated” by the rest of MDN’s Take.

          Perhaps English is not your first language? If so, your confusion is understandable.

          A “noble effort” is, by definition, an admirable action that is based on high moral principles.

          The “noble effort” does not become less admirable just because it is destined to fail (because human nature will not allow it to succeed).

          Many “noble efforts” fail. They remain admirable and laudable nonetheless.

  3. Meanwhile, at the government’s super-secret Non-BinaryHatten Project, the nation’s greatest scientists, who saved us from BatFever and Global WhateverIsGoingOnToday Weather, are still at work to create THEIR AI that can one day “save the verld!” by replacing human common sense with Leftist Reality…

  4. I asked Open Chat AI if AI should be regulated. Its response:

    The question of whether AI should be regulated is a complex one with no easy answer. On the one hand, AI has the potential to bring about tremendous benefits, such as improved efficiency and safety, and better decision-making. However, there are also concerns about the potential negative consequences of AI, including job displacement, biased decision-making, and even the possibility of AI systems becoming uncontrollable and posing a threat to humanity.

    Given these risks, many experts believe that AI should be regulated in some way. Some have called for specific regulations around the development and deployment of AI systems, such as requirements for transparency and accountability, and the establishment of ethical guidelines for their use. Others have argued for more general regulations around the use of AI, such as limits on how much data can be collected and used to train AI systems.

    Ultimately, the question of whether and how AI should be regulated will depend on a variety of factors, including the nature and scope of the AI systems in question, the potential risks and benefits they pose, and the values and priorities of the society in which they are being developed and deployed. It is likely that a balanced and thoughtful approach to AI regulation will be needed, taking into account the potential benefits and risks of these technologies, as well as the interests of various stakeholders.

  5. What happens when the majority of humanity are useless mouths or, worse, useless minds? So far, AI and automation have only augmented human labor and increased its productivity, and many believe that augmentation will continue to be their role. But some are now convinced that AI is so far superior to humanity in certain fields that humans would simply have nothing to add…

    Once AI can get up on its own, walk around, and observe, the world becomes its training set and its capabilities will expand dramatically. That possibility is now at hand with the advent of plugins that will allow it to interact with other services, and potentially with drones, mobile robots, and other sources of information. Were this to happen, “augmentation” could rapidly give way to replacement. Some people would literally become economically obsolete…

    There is the danger of resource overconsumption. If artificial intelligence becomes better each year, the economy, and by implication the UBI, would keep growing unless some artificial cap on the resources it could access were imposed. The useless mouths would just get better games, more food, and better public housing until political pressure to reduce population comes from the Greens (assuming humans are still involved in politics).

    When economies grow, they typically require more resources, such as energy, water, and materials, and produce more waste and pollution. This will stoke fears of climate change, air and water pollution, loss of biodiversity, etc., among the Woke, who will presumably be advising the state, all for the sake of people who are no longer even economically relevant. Various approaches, such as circular-economy models, renewable energy, and zero-waste strategies, will be tried, but ultimately there will be a conflict between resource use and the generosity of the UBI. The answer is fewer people.

    MORE: “Useless Minds” by Richard Fernandez, March 28, 2023

  6. Technology is not neutral. Every technological innovation comes with positive and negative consequences: think of guns, nuclear energy, the internet, the automobile, and so on.

    Whatever your take on the origin of mankind and human nature, even a cursory understanding of world history demonstrates that mankind is fully capable of creating/building/doing/inventing amazing good for the benefit of everyone. The same historical record shows mankind is fully capable of great lust for power and money, and of doing evil/violence/destruction.

    Often mankind in general and individuals in particular achieve their greatest good when they are restraining their natural impulses.

    1. You’re thinking along the same track that I was. Humans are dangerous to humans, too. AI is a human invention, and thus it carries danger, because it is built by humans and based on humans.

  7. It’s getting to the point where:

    I might be movin’ to Montana soon
    Just to raise me up a crop of
    Dental Floss
    Raisin’ it up
    Waxin’ it down
    In a little white box
    I can sell uptown
