A few months before OpenAI board member Ilya Sutskever became famous for his key role in ousting CEO Sam Altman, Sutskever co-wrote a little-noticed but apocalyptic warning about the threat posed by artificial intelligence.
Superintelligent AI, Sutskever co-wrote on a company blog, could lead to “the disempowerment of humanity or even human extinction,” since engineers are unable to prevent AI from “going rogue.” The message echoes OpenAI’s charter, which calls for avoiding uses of AI that “harm humanity.”
Sutskever’s call for caution, however, came at a time of dizzying growth for OpenAI. A $10 billion investment from Microsoft earlier this year helped develop GPT-4, the model behind a viral chatbot that the company says now has 100 million weekly users.
Altman’s forced departure arose in part from frustration between him and Sutskever over a tension at the heart of the company: heightened awareness of the risks posed by AI, on the one hand, and explosive growth driven by the launch and marketing of new products, on the other, the New York Times reported.
To be sure, details remain scarce about the reasons for Altman’s departure. The move came after a review undertaken by the company’s board of directors, OpenAI announced Friday.
“Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities,” the company said in a statement.
Altman was hired by Microsoft days after his departure, prompting a letter Monday, signed by nearly every OpenAI employee, calling for the company’s board to resign and for Altman to return, according to a copy of the letter obtained by ABC News.
OpenAI’s board of directors, the letter states, “informed the management team that allowing the destruction of the company ‘would be consistent with the mission’ of the company.”
Stuart Russell, an AI researcher at the University of California, Berkeley and co-author of a study on the societal dangers of the technology, said OpenAI faces a tension centered on its mission to develop artificial general intelligence, or AGI, a form of AI that could imitate human intelligence and potentially surpass it.
“If you’re funding a multibillion-dollar company to pursue AGI, that seems to be an inherent conflict with the goal of ensuring the safety of AI systems,” Russell told ABC News, noting that it remains unknown exactly why Altman left the company.
Division over the existential threat posed by AI looms industry-wide as the technology spreads across sectors from manufacturing to mass entertainment, sparking disagreements over the pace of development and the direction of possible regulation.
An open letter released in May by the Center for AI Safety warned that AI posed a “risk of extinction” akin to a pandemic or nuclear war, drawing signatures from hundreds of researchers and industry leaders, including Altman and Demis Hassabis, the CEO of Google DeepMind, the AI division of the tech giant.
For his part, Altman has said that rapid deployment of AI allows products to be stress-tested and is the best way to avoid significant harm.
However, other AI luminaries have balked at the supposed risk. Yann LeCun, chief AI scientist at Meta, told the MIT Technology Review that the fear of an AI takeover is “absurdly ridiculous.”
Warnings from industry titans about the risks of AI have emerged alongside an increasingly competitive industry in which rapid product development requires massive investment, pressuring companies to pursue commercial uses of the technology, Anjana Susarla, a professor at Michigan State University’s Broad College of Business who studies the responsible deployment of AI, told ABC News.
“The very large investments required to build these types of technologies mean that companies have to make a trade-off between the profits they would generate from these investments and thinking about an abstract benefit from artificial intelligence,” Susarla said.
Microsoft’s multibillion-dollar investment earlier this year deepened a long-standing relationship between Microsoft and OpenAI, which began with a billion-dollar investment from the tech giant four years ago.
OpenAI was founded as a nonprofit organization in 2015. As of last month, the company was on track to generate more than $1 billion in revenue over a one-year period from the sale of its artificial intelligence products, The Information reported.
In addition to uniting OpenAI employees behind Altman, his ouster appears to have eased some of the tension with Sutskever.
“I deeply regret my participation in the board’s actions,” Sutskever, a longtime AI researcher and co-founder of OpenAI, said in a post on X on Monday. “I never intended to harm OpenAI. I love everything we’ve built together and I will do everything I can to reunite the company.”
The choice of Altman’s replacement, meanwhile, could offer insight into the company’s future approach to safety.
OpenAI has named Emmett Shear, the former CEO of video game streaming platform Twitch, as interim CEO.
In a July interview on “The Logan Bartlett Show” podcast, Shear described AI as “pretty dangerous in and of itself” and placed the odds of a massive AI-related disaster somewhere between 5% and 50%, an estimate he called the “probability of catastrophe.”
In September, Shear said on X that he favors “slowing down” AI development.
“If we’re at a speed of 10 right now, a pause is reducing to 0,” Shear wrote. “I think we should aim for a 1-2 instead.”