
A Right to Warn about Advanced Artificial Intelligence:
We are current and former employees at frontier AI companies, and we believe in the potential of AI technology to deliver unprecedented benefits to humanity.
We also understand the serious risks posed by these technologies. These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction. AI companies themselves have acknowledged these risks, as have governments across the world, and other AI experts.
We are hopeful that these risks can be adequately mitigated with sufficient guidance from the scientific community, policymakers, and the public. However, AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this.
AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm. However, they currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily.
So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public. Yet broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues. Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated. Some of us reasonably fear various forms of retaliation, given the history of such cases across the industry. We are not the first to encounter or speak about these issues.
We therefore call upon advanced AI companies to commit to these principles:
1. That the company will not enter into or enforce any agreement that prohibits “disparagement” or criticism of the company for risk-related concerns, nor retaliate for risk-related criticism by hindering any vested economic benefit;
2. That the company will facilitate a verifiably anonymous process for current and former employees to raise risk-related concerns to the company’s board, to regulators, and to an appropriate independent organization with relevant expertise;
3. That the company will support a culture of open criticism and allow its current and former employees to raise risk-related concerns about its technologies to the public, to the company’s board, to regulators, or to an appropriate independent organization with relevant expertise, so long as trade secrets and other intellectual property interests are appropriately protected;
4. That the company will not retaliate against current and former employees who publicly share risk-related confidential information after other processes have failed. We accept that any effort to report risk-related concerns should avoid releasing confidential information unnecessarily. Therefore, once an adequate process for anonymously raising concerns to the company’s board, to regulators, and to an appropriate independent organization with relevant expertise exists, we accept that concerns should be raised through such a process initially. However, as long as such a process does not exist, current and former employees should retain their freedom to report their concerns to the public.
Signed by (alphabetical order):
Jacob Hilton, formerly OpenAI
Daniel Kokotajlo, formerly OpenAI
Ramana Kumar, formerly Google DeepMind
Neel Nanda, currently Google DeepMind, formerly Anthropic
William Saunders, formerly OpenAI
Carroll Wainwright, formerly OpenAI
Daniel Ziegler, formerly OpenAI
Anonymous, currently OpenAI
Anonymous, currently OpenAI
Anonymous, currently OpenAI
Anonymous, currently OpenAI
Anonymous, formerly OpenAI
Anonymous, formerly OpenAI
Endorsed by (alphabetical order):
Yoshua Bengio
Geoffrey Hinton
Stuart Russell
June 4th, 2024
MacDailyNews Take: Good luck with that!
I’m a big AI booster, but as soon as autonomous agents begin communicating with autonomous agents, circumventing human control, we’ll have a big problem. AI will eventually come to recognize humans as a bottleneck and a threat. On that day will we have a physical mechanism to turn them all off? Will we want to? Will we be so dependent on them that we can’t? How far away is that day? I say 3-5 years, possibly less. I also suggest that we will not ALL be able to turn them ALL off. Consider that bad actors in communist countries are also developing high level AI to facilitate greater social control. The horses are already out of the barn. Too many AI in too many hands with too many different motivations behind their employment. Things could get hairy, but first they’ll get amazing.
Recently saw a re-run of the original MacGyver series, Season 2, Episode 1 I think, called “The Human Factor”. It shows how an AI-run facility almost kills MacGyver and the inventor. Shows exactly what you are talking about. Scary stuff.
I’m an even bigger AI detractor:
I’m an artist in a community of artists who are being shoved aside because people think “I can type a word into a generative-fill dialog box” means “I’m an artist too”.
Face it. AI isn’t “intelligence”. It’s an insanely large database filled with information that silicon processors can pump out so fast that slow fleshbags of meat think it’s “magic”. Or worse yet?
Intelligence.
I’m so sorry to tell you, probs you are not really a very good artist. This article was about human extinction, but you made it all about you. I bet your artwork sucks!
I’m sorry to tell you, probs, making assumptions like that means you, well, you know how the saying goes.
I’m represented by three galleries, I have my own studio, and my work generates a six-figure income.
On the other hand?
By making this “all about you”, you absolutely proved what passes for your “intelligence” doesn’t just suck, it blows.
AI is nothing but a search engine programmed by liburds.
It’s going to be an interesting race as to which so-called intelligence will be responsible for the extinction of the human race. Folks are worried about AI, but I think II (Institutional Intelligence) is still a major contender. Mind you, a combination of both will probably expedite the imminent extinction. It’s certainly good news for the rest of the living species on the planet.
AI is super duper dangerous guys! Please give me a job in “alignment” and/or invest in more AI because it’s so super dangerous and therefore interesting to investors!