
OpenAI, rising from the ashes, has a lot to prove even with the return of Sam Altman


Image credits: Darrell Etherington, with files from Getty Images under license

The power struggle at OpenAI that captivated the tech world following the firing of co-founder Sam Altman has finally reached its end – at least for now. But what to make of it?

It’s almost as if a eulogy is in order – as if OpenAI is dead and a new, but not necessarily improved, startup stands in its place. Former Y Combinator president Altman is back at the helm, but is his return justified? OpenAI’s new board is off to a less diverse start (i.e., it’s entirely white and male), and the company’s founding philanthropic aims are at risk of being co-opted by more capitalistic interests.

This is not to say that the old OpenAI was perfect – far from it.

As of Friday morning, OpenAI had a six-person board of directors: Altman, OpenAI chief scientist Ilya Sutskever, OpenAI president Greg Brockman, tech entrepreneur Tasha McCauley, Quora CEO Adam D’Angelo and Helen Toner, director of strategy at Georgetown’s Center for Security and Emerging Technology. The board was technically tied to a nonprofit that held a majority stake in OpenAI’s for-profit arm, with absolute decision-making power over the for-profit side’s activities, investments and overall direction.

OpenAI’s unusual structure was established by the company’s co-founders, including Altman, with the best of intentions. The nonprofit’s exceptionally brief (roughly 500-word) charter specifies that the board make decisions ensuring “that artificial general intelligence benefits all of humanity,” leaving it to board members to decide how best to interpret that. Neither “profit” nor “revenue” is mentioned in this North Star document; Toner reportedly once told Altman’s executive team that triggering the collapse of OpenAI “would actually be consistent with [the nonprofit’s] mission.”

Perhaps the arrangement would have worked in a parallel universe; indeed, it seemed to work well enough at OpenAI for years. But once powerful investors and partners got involved, things became… trickier.

Altman firing unites Microsoft and OpenAI employees

After the board abruptly fired Altman on Friday without telling almost anyone, including the bulk of OpenAI’s 770 employees, the startup’s backers began expressing their displeasure both privately and publicly.

Satya Nadella, CEO of Microsoft, a major OpenAI collaborator, was reportedly “furious” to learn of Altman’s departure. Vinod Khosla, founder of Khosla Ventures, another OpenAI backer, said on X (formerly Twitter) that the fund wanted Altman back. Meanwhile, Thrive Capital, Khosla Ventures, Tiger Global Management and Sequoia Capital were reportedly considering legal action against the board if the weekend negotiations to reinstate Altman proved unsuccessful.

Now, OpenAI’s employees were not unaligned with these investors, at least from outward appearances. On the contrary, nearly all of them – including Sutskever, in an apparent change of heart – signed a letter threatening the board with mass resignations if it chose not to reverse course. But consider that these employees had a lot to lose if OpenAI collapsed – setting aside the standing job offers from Microsoft and Salesforce.

OpenAI was in talks, led by Thrive, to potentially sell employee shares in a deal that would have boosted the company’s valuation from $29 billion to somewhere between $80 billion and $90 billion. Altman’s sudden exit – and the revolving door of questionable interim CEOs that followed – gave Thrive cold feet, putting the sale in jeopardy.

Altman won the five-day battle, but at what cost?

But now, after several breathless, trying days, some form of resolution has been reached. Altman – along with Brockman, who resigned Friday in protest of the board’s decision – is back, albeit subject to an investigation into the concerns that precipitated his removal. OpenAI has a new transitional board, satisfying one of Altman’s demands. And OpenAI will reportedly maintain its structure, with investors’ profits capped and the board free to make decisions that aren’t driven by revenue.

Salesforce CEO Marc Benioff posted on X that “the good guys” won. But it might be premature to say so.

Of course, Altman “won,” defeating a board that accused him of “not [being] consistently candid” with board members and, according to some reports, of prioritizing growth over the mission. In one example of this alleged duplicity, Altman reportedly criticized Toner over a paper she co-authored that cast OpenAI’s approach to safety in a critical light – to the point where he attempted to push her off the board. In another, Altman “infuriated” Sutskever by rushing the launch of AI-powered features at OpenAI’s first developer conference.

The board has declined to explain itself, even when given repeated opportunities, citing possible legal challenges. And it’s safe to say that it fired Altman in an unnecessarily histrionic way. But there’s no denying that the directors might have had valid reasons for letting Altman go, at least based on how they interpreted their humanist directive.

The new board appears likely to interpret this directive differently.

Currently, OpenAI’s board of directors consists of Bret Taylor, former co-CEO of Salesforce; D’Angelo (the only holdover from the original board); and Larry Summers, economist and former Harvard president. Taylor is a serial entrepreneur, having co-founded numerous companies, including FriendFeed (acquired by Facebook) and Quip (through whose acquisition he joined Salesforce). Meanwhile, Summers has deep ties to business and government – an asset for OpenAI, the thinking around his selection likely went, at a time when regulatory scrutiny of AI is intensifying.

The directors don’t strike this reporter as an outright “win,” however – not if the intention was diversity of viewpoints. Even if six seats remain to be filled, the first four set a rather homogeneous tone; such a board would actually be illegal in Europe, which requires companies to reserve at least 40% of their board seats for female candidates.

Why some AI experts are worried about OpenAI’s new board

I’m not the only one to be disappointed. A number of AI academics took to X to express their frustrations earlier today.

Noah Giansiracusa, a mathematics professor at Bentley University and the author of a book on social media recommendation algorithms, takes issue both with the all-male makeup of the board and with the appointment of Summers, who he notes has a history of making unflattering remarks about women.

“Whatever one thinks of these incidents, the optics are not good, to say the least – especially for a company that has led the way in AI development and has reshaped the world we live in,” Giansiracusa said via text message. “What I find particularly troubling is that OpenAI’s primary goal is to develop artificial general intelligence that ‘benefits all of humanity.’ Since half of humanity is made up of women, the recent events don’t give me much confidence on that front. Toner most directly represents the safety side of AI, and this is so often the position women have been placed in, throughout history but especially in tech: protecting society from great harm while the men get the credit for innovating and ruling the world.”

Christopher Manning, the director of Stanford’s AI Lab, is somewhat more charitable than Giansiracusa in his assessment, though broadly in agreement:

“The new OpenAI board is probably still incomplete,” he told TechCrunch. “Nevertheless, the current board, devoid of people with deep knowledge of the responsible use of AI in human society and composed entirely of white men, is not a promising start for such an important and influential AI company.”

Inequity plagues the AI industry, from the annotators who label the data used to train generative AI models to the harmful biases that often emerge in those trained models, including OpenAI’s own. Summers, to be fair, has expressed concern about the potentially harmful ramifications of AI – at least as they relate to livelihoods. But the critics I spoke with find it hard to believe that a board like OpenAI’s current one will consistently prioritize these challenges, at least not in the way a more diverse board would.

This begs the question: why didn’t OpenAI try to recruit a well-known AI ethicist like Timnit Gebru or Margaret Mitchell for the original board? Were they “not available”? Did they refuse? Or did OpenAI not make an effort in the first place? Maybe we’ll never know.

OpenAI has a chance to be wiser and more worldly in selecting the remaining five board seats – or three, if Altman and a Microsoft executive take one each (as rumor has it). Unless it takes the more diverse path, what Daniel Colson, director of the think tank AI Policy Institute, said on X may well prove true: a few people or a single lab can’t be trusted to ensure that AI is developed responsibly.




