As early as the end of March 2023, the Future of Life Institute published an open letter pointing out the potential risks posed by systems like ChatGPT and calling for a six-month pause in development so that the potential negative consequences could be investigated.
On May 30, 2023, a one-sentence statement warning of the risk of human extinction from AI was published. The statement reads: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Signatories include:
- Demis Hassabis, CEO, Google DeepMind
- Sam Altman, CEO, OpenAI
- Dario Amodei, CEO, Anthropic
- Ilya Sutskever, Co-Founder and Chief Scientist, OpenAI
- Mustafa Suleyman, CEO, Inflection AI
- Shane Legg, Chief AGI Scientist and Co-Founder, Google DeepMind
- James Manyika, SVP, Research, Technology & Society, Google-Alphabet
- Eric Horvitz, Chief Scientific Officer, Microsoft
- Stuart Russell, Professor of Computer Science, UC Berkeley
- Peter Norvig, Education Fellow, Stanford University
- Geoffrey Hinton, Professor of Computer Science, University of Toronto
- Yoshua Bengio, Professor of Computer Science, U. Montreal / Mila
The signatories are thus heads of large IT and AI companies as well as highly renowned AI scientists such as Stuart Russell and Peter Norvig, authors of what has for many years been the world’s most influential AI textbook.
The signatories of the one-sentence statement are genuine AI experts, and their warning should be taken seriously, just as the warnings of climate scientists a few decades ago should have been. Compared to climate change, however, we have far less time to address AI risks. The signatories, including the heads of major AI companies, are urging regulation of AI.