AI experts and entrepreneurs have signed an open letter calling for a pause in AI development.
In the letter, the Future of Life Institute warns of a “runaway” race to deploy the new technology.
The letter raises concerns about misinformation and the automation of the labor market.
We are currently testing machine translations of articles by our US colleagues at Insider as an additional service to our readers. This article has been automatically translated and reviewed by an editor. We welcome feedback at the end of the article.
Artificial intelligence heavyweights are calling for a pause in advanced AI development. Alongside Elon Musk, Apple co-founder Steve Wozniak, Pinterest co-founder Evan Sharp, and Stability AI CEO Emad Mostaque have all signed an open letter from the Future of Life Institute. The non-profit organization is dedicated to reducing existential risks from powerful technologies.
The letter warns that AI systems like OpenAI’s GPT-4 “can compete with humans in general tasks” and pose a potential danger to humanity and society. It calls on AI labs to suspend the training of systems more powerful than GPT-4 for six months. In the meantime, the dangers of the new technology should be properly assessed.
Industry experts like Yoshua Bengio, sometimes referred to as one of the “godfathers of AI,” and influential computer scientist Stuart Russell have also put their names behind the letter. At the time of publication, no representatives from OpenAI appear to have signed it.
The letter raises concerns about the spread of misinformation, the risk automation poses to the labor market, and the possibility of a loss of control over civilization. We have summarized the most important points for you.
Runaway AI
The non-profit organization raises the possibility that developers could lose control of powerful new AI systems and of the impact those systems have on civilization. It argues that companies are racing to build AI technologies so advanced that not even their developers “can understand, predict, or reliably control” them.
The letter states: “Should we risk losing control of our civilization? Such decisions must not be delegated to unelected tech leaders.”
A “dangerous race”
The letter warns that AI companies are locked in a “runaway race to develop and deploy” ever more advanced systems. In recent months, the viral popularity of OpenAI’s ChatGPT has seemed to drive other companies to rush their own AI products to market.
Instead, the letter urges the industry to enjoy the rewards of an “AI summer” and give society a chance to adapt to the new technology, rather than rushing unprepared into a sudden “fall.”
AI automation and misinformation
The letter highlights several risks of the new technology, including the possibility that non-human minds will eventually “outnumber, outsmart, obsolete, and replace us.”
The authors also write that AI systems can now compete with humans at certain tasks, and they raise concerns about misinformation and the automation of work: “Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones?”
A six-month pause
The open letter calls for a six-month break from developing AI systems that are more powerful than those already on the market.
The authors urge developers to work with policymakers to create AI governance systems. And they emphasize the need for regulators as well as AI “watermarks” to help people distinguish between human and AI-created content. The letter also notes the need for “well-resourced institutions” to deal with the economic and political disruptions caused by AI.
The open letter stresses that the pause should be a step back from a “dangerous race” toward ever more advanced technologies, not a complete halt to AI development overall.