The OpenAI power struggle that captivated the tech world after co-founder Sam Altman was fired has finally reached its end — at least for the time being. But what to make of it?
It feels almost as though some eulogizing is called for — like OpenAI died and a new, but not necessarily improved, startup stands in its place. Ex-Y Combinator president Altman is back at the helm, but is his return justified? OpenAI’s new board of directors is getting off to a less diverse start (i.e. it’s entirely white and male), and the company’s founding philanthropic aims are in jeopardy of being co-opted by more capitalist interests.
That’s not to suggest that the old OpenAI was perfect by any stretch.
As of Friday morning, OpenAI had a six-person board — Altman, OpenAI chief scientist Ilya Sutskever, OpenAI president Greg Brockman, tech entrepreneur Tasha McCauley, Quora CEO Adam D’Angelo and Helen Toner, director at Georgetown’s Center for Security and Emerging Technology. The board technically belonged to a nonprofit that held a majority stake in OpenAI’s for-profit arm, with absolute decision-making power over the for-profit OpenAI’s activities, investments and overall direction.
OpenAI’s unusual structure was established by the company’s co-founders, including Altman, with the best of intentions. The nonprofit’s exceptionally brief (500-word) charter directs the board to make decisions ensuring “that artificial general intelligence benefits all humanity,” leaving it to the board’s members to decide how best to interpret that. Neither “profit” nor “revenue” gets a mention in this North Star document; Toner reportedly once told Altman’s executive team that triggering OpenAI’s collapse “would actually be consistent with the [nonprofit’s] mission.”
Maybe the arrangement would have worked in some parallel universe; for years, it appeared to work well enough at OpenAI. But once investors and powerful partners got involved, things became… trickier.
Altman’s firing unites Microsoft, OpenAI’s employees
After the board abruptly canned Altman on Friday without notifying just about anyone, including the bulk of OpenAI’s 770-person workforce, the startup’s backers began voicing their discontent both privately and publicly.
Satya Nadella, the CEO of Microsoft, a major OpenAI collaborator, was allegedly “furious” to learn of Altman’s departure. Vinod Khosla, the founder of Khosla Ventures, another OpenAI backer, said on X (formerly Twitter) that the fund wanted Altman back. Meanwhile, Thrive Capital, the aforementioned Khosla Ventures, Tiger Global Management and Sequoia Capital were said to be contemplating legal action against the board if negotiations over the weekend to reinstate Altman didn’t go their way.
From outside appearances, OpenAI’s employees were aligned with these investors. Close to all of them — including Sutskever, in an apparent change of heart — signed a letter threatening the board with mass resignation if it didn’t reverse course. But one must consider that these employees had a lot to lose should OpenAI crumble — job offers from Microsoft and Salesforce aside.
OpenAI had been in discussions, led by Thrive, to possibly sell employee shares in a move that would have boosted the company’s valuation from $29 billion to somewhere between $80 billion and $90 billion. Altman’s sudden exit — and OpenAI’s rotating cast of questionable interim CEOs — gave Thrive cold feet, putting the sale in jeopardy.
Altman won the five-day battle, but at what cost?
Now, after several breathless, hair-pulling days, some form of resolution has been reached. Altman — along with Brockman, who resigned on Friday in protest of the board’s decision — is back, albeit subject to a background investigation into the concerns that precipitated his removal. OpenAI has a new transitional board, satisfying one of Altman’s demands. And OpenAI will reportedly retain its structure, with investors’ profits capped and the board free to make decisions that aren’t revenue-driven.
Salesforce CEO Marc Benioff posted on X that “the good guys” won. But it may be premature to say that.
Sure, Altman “won,” besting a board that accused him of “not [being] consistently candid” with board members and, according to some reporting, of putting growth over mission. In one example of this alleged rogue behavior, Altman was said to have criticized Toner over a paper she co-authored that cast OpenAI’s approach to safety in a critical light — to the point where he attempted to push her off the board. In another, he reportedly “infuriated” Sutskever by rushing the launch of AI-powered features at OpenAI’s first developer conference.
The board didn’t explain itself even after repeated chances, citing possible legal challenges. And it’s safe to say that the directors dismissed Altman in an unnecessarily histrionic way. But it can’t be denied that they might have had valid reasons for letting him go, at least depending on how they interpreted their humanistic directive.
The new board seems likely to interpret that directive differently.
Currently, OpenAI’s board consists of former Salesforce co-CEO Bret Taylor, D’Angelo (the only holdover from the original board) and Larry Summers, the economist and former Harvard president. Taylor is an entrepreneur’s entrepreneur, having co-founded numerous companies, including FriendFeed (acquired by Facebook) and Quip (through whose acquisition he came to Salesforce). Meanwhile, Summers has deep business and government connections — an asset to OpenAI, the thinking around his selection probably went, at a time when regulatory scrutiny of AI is intensifying.
The directors don’t seem like an outright “win” to this reporter, though — not if diverse viewpoints were the intention. While six seats have yet to be filled, the initial three set a rather homogenous tone; such a board would in fact be illegal in Europe, which mandates that companies reserve at least 40% of their board seats for women.
Why some AI experts are worried about OpenAI’s new board
I’m not the only one who’s disappointed. A number of AI academics turned to X to air their frustrations earlier today.
Noah Giansiracusa, a math professor at Bentley University and the author of a book on social media recommendation algorithms, takes issue with both the board’s all-male makeup and the nomination of Summers, who he notes has a history of making unflattering remarks about women.
“Whatever one makes of these incidents, the optics are not good, to say the least — particularly for a company that has been leading the way on AI development and reshaping the world we live in,” Giansiracusa said via text. “What I find particularly troubling is that OpenAI’s main aim is developing artificial general intelligence that ‘benefits all of humanity.’ Since half of humanity are women, the recent events don’t give me a ton of confidence about this. Toner most directly represented the safety side of AI, and this has so often been the position women have been placed in, throughout history but especially in tech: protecting society from great harms while the men get the credit for innovating and ruling the world.”
Christopher Manning, the director of Stanford’s AI Lab, is slightly more charitable than — but in agreement with — Giansiracusa in his assessment:
“The newly formed OpenAI board is presumably still incomplete,” he told TechCrunch. “Nevertheless, the current board membership, lacking anyone with deep knowledge about responsible use of AI in human society and comprising only white males, is not a promising start for such an important and influential AI company.”
I’m thrilled for OpenAI employees that Sam is back, but it feels very 2023 that our happy ending is three white men on a board charged with ensuring AI benefits all of humanity. Hoping there’s more to come soon.
— Ashley Mayer (@ashleymayer) November 22, 2023
Inequity plagues the AI industry, from the annotators who label the data used to train generative AI models to the harmful biases that often emerge in those trained models, including OpenAI’s models. Summers, to be fair, has expressed concern over AI’s possibly harmful ramifications — at least as they relate to livelihoods. But the critics I spoke with find it difficult to believe that a board like OpenAI’s present one will consistently prioritize these challenges, at least not in the way that a more diverse board would.
It raises the question: Why didn’t OpenAI attempt to recruit a well-known AI ethicist like Timnit Gebru or Margaret Mitchell for the initial board? Were they “not available”? Did they decline? Or did OpenAI not make an effort in the first place? Perhaps we’ll never know.
Reportedly, OpenAI considered Laurene Powell Jobs and Marissa Mayer for board roles, but they were deemed too close to Altman. Condoleezza Rice’s name was also floated, but ultimately passed over.
OpenAI says the board will have women but they just can’t find them! It’s so hard because the natural makeup of a board is all white men, and it is especially important to include the men who had to step down from previous positions for their statements about women’s aptitude. https://t.co/QiiDd6Se18
— Timnit Gebru (@timnitGebru) November 23, 2023
OpenAI has a chance to prove itself wiser and worldlier in selecting the six remaining board seats — or four, should Altman and a Microsoft executive take one each (as has been rumored). If it doesn’t go a more diverse way, what Daniel Colson, director of the think tank the AI Policy Institute, said on X may well hold true: a few people or a single lab can’t be trusted with ensuring AI is developed responsibly.
Updated 11/23 at 11:26 a.m. Eastern: Embedded a post from Timnit Gebru and added information from a report about women passed over for OpenAI’s board.