‘Everyone on Earth will die,’ warns artificial intelligence researcher — RT World News

Humanity Unprepared To Survive Encounter With Much Smarter Artificial Intelligence, Says Eliezer Yudkowsky

Stopping the development of advanced artificial intelligence systems worldwide and harshly punishing those who violate the moratorium is the only way to save humanity from extinction, a top artificial intelligence researcher has warned.

Eliezer Yudkowsky, co-founder of the Machine Intelligence Research Institute (MIRI), wrote an op-ed for TIME magazine on Wednesday explaining why he had not signed a petition calling on all AI labs to “immediately suspend for at least six months the training of AI systems more powerful than GPT-4”, the large multimodal language model released by OpenAI earlier this month.

Yudkowsky argued that the letter, signed by the likes of Elon Musk and Apple co-founder Steve Wozniak, was “asking too little to solve” the problem posed by the rapid and uncontrolled development of AI.

“The most likely outcome of building an AI of superhuman intelligence, in anything remotely like current circumstances, is that literally everyone on Earth will die,” Yudkowsky wrote.

Surviving an encounter with a computer system that “doesn’t care about us or sentient life in general” would require “precision and preparation and new scientific knowledge” that humanity currently lacks and is unlikely to obtain in the foreseeable future, he argued.

“Sufficiently intelligent AI will not remain confined to computers for long,” Yudkowsky warned. He explained that because it is already possible to email strings of DNA to labs that will produce proteins on demand, an AI could likely “build artificial life forms or directly initiate postbiological molecular manufacturing” and get out into the world.

According to the researcher, an indefinite, worldwide moratorium on new large AI training runs must be introduced immediately. “There can be no exceptions, including for governments or the military,” he underlined.

International agreements should be signed to cap the computing power anyone may use to train such systems, Yudkowsky insisted.

“If intelligence says that a country outside the agreement is building a GPU (graphics processing unit) cluster, be less afraid of an armed conflict between nations than of a violation of the moratorium; be prepared to destroy a rogue data center with an airstrike,” he wrote.


The threat posed by artificial intelligence is so great, he added, that it should be made “explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange”.
