Over 1,100 AI experts, including influential figures such as Elon Musk and Yoshua Bengio, have signed an open letter calling for a pause on the development of state-of-the-art artificial intelligence (AI) models. The letter warns that society is unprepared for the advanced systems now being developed and calls for a moratorium on training AI systems more powerful than GPT-4, a threshold that would include a future GPT-5.
Signatories Express Concerns and Call for Safe AI Tools
The signatories argue that we can and should slow AI progress in order to decide which types of AI to build or not build, and how fast or slow to go. The open letter questions whether we should develop nonhuman minds that could replace us, flood our information channels with propaganda and untruth, and automate away all jobs. Moreover, the risks of advanced AI go beyond posing an existential threat to humanity: such systems could also change the world in ways many people would reject.
The open letter recommends that independent experts use the pause to develop and implement shared protocols for safe AI tools. The signatories express concerns about biased responses, misinformation, consumer privacy, and the impact on professions and technology.
Governments Should Step in if Pause Is Not Enacted Quickly
If a pause cannot be enacted quickly, the signatories say, governments should step in and institute a moratorium. Congress and the federal government lack deep expertise in AI, but that very uncertainty strengthens the case for taking a breath and developing standards to control the technology.
Labs Should Be Involved In The Pause And Release Of New Protocols
The pause should be public and verifiable, and it must involve all key actors: every artificial intelligence lab working on advanced AI models. The wave of attention around ChatGPT has reignited an arms race among tech companies to build similar AI tools into their products, and concerns about biased responses, misinformation, consumer privacy, and the impact on professions and technology are growing.
Some governing agencies in China, the EU, and Singapore have already introduced early versions of AI governance frameworks, and the signatories agree that it is time for a pause in which new protocols can be developed.
OpenAI’s CEO Agrees It’s Time For A Pause
OpenAI CEO Sam Altman has said he is “a little bit scared” of the technology his company is creating, a concern that echoes the signatories’ call for a pause.
Microsoft and Google did not respond to requests for comment on the letter, but the signatories appear to include employees of numerous tech companies building advanced language models, Microsoft and Google among them. OpenAI spent more than six months working on the safety and alignment of GPT-4 after training the model and says it is not currently training GPT-5.
Conclusions
In conclusion, AI experts around the world are calling for a moratorium on developing AI models more powerful than GPT-4. They argue that society is unprepared for the consequences of these systems and that progress should slow until we can decide which types of AI to build and how quickly to deploy them. Governments should step in if a pause cannot be enacted quickly, and all key actors must be involved in developing protocols for safe AI tools. The risks of advanced AI are significant, and governments must act now to reduce them before it is too late.
Image Source: Wikimedia Commons