After GPT-4, tech leaders call for 6-month break in AI race : NPR


The OpenAI logo is seen on a mobile phone in front of a computer screen showing the release of ChatGPT on March 21 in Boston. A group of prominent computer scientists and other tech industry notables are calling for a 6-month break to review the risks of powerful artificial intelligence technology.

Michael Dwyer/AP



Are tech companies moving too fast in deploying powerful artificial intelligence technology that could one day outsmart humans?

That’s the conclusion of a group of leading computer scientists and other tech industry notables such as Elon Musk and Apple co-founder Steve Wozniak, who are calling for a 6-month break to review the risks.

Their petition, published Wednesday, is a response to San Francisco startup OpenAI's recent release of GPT-4, a more advanced successor to its widely used AI chatbot ChatGPT that helped spark a race between tech giants Microsoft and Google to roll out similar applications.

What do they say?

The letter warns that AI systems with "human-competitive intelligence can pose profound risks to society and humanity" – from flooding the internet with misinformation and automating away jobs to more catastrophic future risks out of the realm of science fiction.

It states that “the past few months have seen AI labs locked in an uncontrollable race to develop and deploy ever more powerful digital minds that no one – not even their creators – can reliably understand, predict or control.”

“We call on all AI labs to immediately suspend for at least 6 months the training of AI systems more powerful than GPT-4,” the letter reads. “This pause must be public and verifiable, and include all key players. If such a pause cannot be enacted quickly, governments must step in and institute a moratorium.”

A number of governments are already working to regulate high-risk AI tools. The UK published a paper on Wednesday outlining its approach, which it says will "avoid heavy-handed legislation which could stifle innovation." Lawmakers in the 27-nation European Union have been negotiating sweeping AI rules.

Who signed it?

The petition was organized by the nonprofit Future of Life Institute, which says confirmed signatories include Turing Award-winning AI pioneer Yoshua Bengio and other leading AI researchers such as Stuart Russell and Gary Marcus. Also joining were Wozniak, former U.S. presidential candidate Andrew Yang, and Rachel Bronson, president of the Bulletin of the Atomic Scientists, a science-oriented advocacy group known for its warnings against humanity-ending nuclear war.

Musk, who runs Tesla, Twitter and SpaceX and was a co-founder of OpenAI and an early investor, has long expressed concerns about the existential risks of AI. A more surprising inclusion is Emad Mostaque, CEO of Stability AI, maker of the Stable Diffusion AI image generator that partners with Amazon and competes with OpenAI’s similar generator known as DALL-E.

What is the response?

OpenAI, Microsoft and Google did not respond to requests for comment on Wednesday, but the letter already has plenty of doubters.

"A pause is a good idea, but the letter is vague and doesn't take the regulatory problems seriously," says James Grimmelmann, a professor of digital and information law at Cornell University. "It's also deeply hypocritical for Elon Musk to sign on given how hard Tesla has fought against accountability for the faulty AI in its self-driving cars."

Is it AI hysteria?

While the letter raises the specter of nefarious AI far more intelligent than what actually exists, it's not "superhuman" AI that has some of the signers worried. While impressive, a tool such as ChatGPT is simply a text generator that makes predictions about what words would answer the prompt it was given, based on what it has learned from ingesting massive troves of written works.

Gary Marcus, a New York University professor emeritus who signed the letter, said in a blog post that he disagrees with others who worry about the near-term prospect of intelligent machines so smart they can improve themselves beyond humanity's control. What worries him more is widely deployed "mediocre AI," used by criminals or terrorists to trick people or spread dangerous misinformation.

“Current technology already poses enormous risks for which we are ill-prepared,” Marcus wrote. “With future technology, things may well get worse.”
