Labs have been “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control,” said an open letter signed by the likes of Apple cofounder Steve Wozniak and politician Andrew Yang.
Elon Musk and Steve Wozniak are among hundreds of high-profile technologists, entrepreneurs and researchers calling on AI labs to immediately stop work on powerful AI systems, urging developers to step back from the “out-of-control race” to deploy ever more advanced products while we better assess the risks advanced artificial intelligence poses to humanity.
Any AI lab working on systems more powerful than GPT-4—the engine driving OpenAI’s ChatGPT—should “immediately pause” work for at least six months so humanity can take stock of the risks such advanced AI systems pose, urged an open letter published Wednesday by the Future of Life Institute and signed by more than 1,000 people.
Any pause should be “public and verifiable” and include all key players, the letter said, urging governments to “step in” and force the issue for those who are too slow or unwilling to stop.
The letter said the fast-paced developments of recent months underscore the need for drastic action, with “labs locked in an out-of-control race” to develop and deploy increasingly powerful systems that no one—including their creators—can understand, predict or control.
The letter said labs and independent experts should use the pause to develop a set of shared safety protocols, audited and overseen by outside experts, to ensure AI systems “are safe beyond a reasonable doubt.”
Signatories include a bevy of well-known computer scientists like Yoshua Bengio and Stuart Russell, researchers from academic and industrial heavyweights like Oxford, Cambridge, Stanford, Caltech, Columbia, Google, Microsoft and Amazon, as well as prominent tech entrepreneurs like Skype cofounder Jaan Tallinn, Pinterest cofounder Evan Sharp and Ripple cofounder Chris Larsen.
Because anyone can ask to add their name to the letter (signatories also include author Yuval Noah Harari and politician Andrew Yang), the list should be treated with a degree of skepticism; The Verge reported that OpenAI chief Sam Altman had seemingly been added as a joke.
AI rush sparks concern
The tremendous success of ChatGPT, an artificial intelligence chatbot created by U.S.-based OpenAI, triggered a frantic rush to get new AI products to market. Tech’s biggest players and countless startups are now jostling to hold or claim space in the fast-emerging market, which could shape the future of the entire sector, and labs are working to develop ever more capable products. In the near term, experts warn AI systems risk exacerbating existing bias and inequality, promoting misinformation, disrupting politics and the economy, and aiding hackers.
In the longer term, some experts warn AI may pose an existential risk to humanity and could wipe us out. Though such scenarios remain speculative, these experts argue the prospect of superintelligent AI must be addressed before it is developed, and that ensuring systems are safe should be a key factor of development today.
The open letter ends on a positive note: “Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an ‘AI summer’ in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here. Let’s enjoy a long AI summer, not rush unprepared into a fall.”
Billionaire philanthropist Bill Gates, founder and former CEO of Microsoft, which is heavily invested in OpenAI, was not named as a signatory on the letter. Gates has previously acknowledged the transformative impact AI will have on society, lauded the “stunning” advances seen in the field over recent months and said a key focus of his is ensuring its benefits are enjoyed equitably, and particularly by those most in need of support.
In a recent blog post, Gates identified many of the same issues raised in the open letter signed by the likes of Musk. Gates said the social concerns surrounding AI should be worked out between governments and the private sector pushing the technology, to ensure it is used for good. On technical problems, Gates said recent progress has not made any issue more “urgent today than it was before,” and that researchers are already working to fix other pressing technical issues and are likely to succeed within a few years.
He said issues surrounding superintelligence (an AI that exceeds human capabilities across the board, a prospect that has divided the AI community over whether it is a genuine risk or hopelessly speculative) are legitimate, but no closer given recent developments. Such concerns “will get more pressing with time,” Gates added.
This article was first published on forbes.com