Elon Musk and dozens of other technology leaders have called on AI labs to pause the development of systems that can compete with human-level intelligence.
In an open letter from the Future of Life Institute, signed by Musk, Apple co-founder Steve Wozniak and 2020 presidential candidate Andrew Yang, AI labs were urged to pause training models more powerful than GPT-4, the latest version of the large language model software developed by U.S. startup OpenAI.
“Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth?” the letter read.
“Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”
The letter added, “Such decisions must not be delegated to unelected tech leaders.”
The Future of Life Institute is a nonprofit organization based in Cambridge, Massachusetts, that campaigns for the responsible and ethical development of artificial intelligence. Its founders include MIT cosmologist Max Tegmark and Skype co-founder Jaan Tallinn.
The organization has previously gotten the likes of Musk and Google-owned AI lab DeepMind to promise never to develop lethal autonomous weapons systems.
The institute said it was calling on all AI labs to “immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”
GPT-4, which was released earlier this month, is thought to be far more advanced than its predecessor GPT-3.

“If such a pause cannot be enacted right away, governments should step in and institute a moratorium,” it added.
ChatGPT, the viral AI chatbot, has stunned researchers with its ability to produce humanlike responses to user prompts. By January, ChatGPT had amassed 100 million monthly active users just two months after its launch, making it the fastest-growing consumer application in history.
The technology is trained on huge amounts of information from the internet, and has been used to create everything from poetry in the style of William Shakespeare to drafts of judicial opinions on court cases.
But AI ethicists have also raised concerns about potential abuses of the technology, such as plagiarism and misinformation.
In the Future of Life Institute letter, technology leaders and academics said AI systems with human-competitive intelligence pose “profound risks to society and humanity.”
“AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal,” they said.
OpenAI was not immediately available for comment when contacted by CNBC.
OpenAI, which is backed by Microsoft, reportedly received a $10 billion investment from the Redmond, Washington, technology giant. Microsoft has also integrated the company’s GPT natural language processing technology into its Bing search engine to make it more conversational.
Google subsequently announced its own competing conversational AI product for consumers, called Google Bard.
Musk has previously said he thinks AI represents one of the “biggest risks” to civilization.
The Tesla and SpaceX CEO co-founded OpenAI in 2015 with Sam Altman and others, though he left OpenAI’s board in 2018 and no longer holds a stake in the company.
He has criticized the organization a number of times recently, saying he believes it is diverging from its original purpose.
Regulators are also racing to get a handle on AI tools as the technology advances at a lightning-fast pace. On Wednesday, the U.K. government published a white paper on AI, deferring to different regulators to supervise the use of AI tools in their respective sectors by applying existing laws.
