OpenAI and the Limitations of Non-Profit Boards
A board's failed attempt at governance hastens AI's dominance over humankind
It seems that AI will conquer humanity sooner than expected in part because of a board failure.
A few weeks ago, the non-profit board of OpenAI tried to fulfill its oversight function, firing Sam Altman, the CEO, for not being “consistently candid in his communications with the board.” (Let’s join the speculation that Altman pursued AI development faster, or with less oversight, than the board desired; balancing AI’s development against its serious risks had been OpenAI’s mission in the first place.) Four members of the board, including OpenAI’s chief scientist and co-founder, Ilya Sutskever, voted to fire Mr. Altman, “claiming that he could no longer be trusted with the company mission to build artificial intelligence that ‘benefits all of humanity.’” That seems important!
Five days after his firing, Altman was restored as CEO. Every board member who voted for his ouster has since been replaced. New board members, all men (including, ominously, Larry Summers), are now in charge; or, more likely, they are in charge in name only, having clearly acquired their board seats in exchange for acquiescing to whatever Altman wants going forward.
Wrong or right, the OpenAI board did what boards are supposed to do: oversee the CEO. Given the considerable risks in AI development, nothing could be more important. That’s why OpenAI put an unusual governance structure in place to begin with: OpenAI is controlled by the board of a nonprofit, and says that “its investors have no formal way of influencing decisions.” (See OpenAI’s description of its structure here.) But as AI development accelerated and the stakes increased, so did the problematic power dynamics that plague all non-profit boards. Over the past year,
“Mr. Altman, the chief executive . . . made a move to push out one of the board’s members because he thought a research paper she had co-written was critical of the company.
Another member, Ilya Sutskever, who is also OpenAI’s chief scientist, thought Mr. Altman was not always being honest when talking with the board. And board members worried that Mr. Altman was too focused on expansion while they wanted to balance that growth with A.I. safety.”
I spend a lot of time and energy on issues with non-profit boards, and this potentially catastrophic situation illustrates why. There is so much money and power concentrated in the hands of certain non-profit entities, and their philanthropic benefactors, who make decisions that impact all of us. AI is a threat that I can barely comprehend. And who is watching over it? Our government has displayed a severe lack of engagement, oversight, or even basic understanding of the threats posed by the tech sector. (Remember Senator Orrin Hatch questioning Mark Zuckerberg about how Facebook makes money?)1 So the board acts as the main protective barrier against hubris, poor judgment, selfishness, evil, etc.
Sam Altman is a 38-year-old tech guy, lifted up as the second coming by an older generation of tech guys who are neither trustworthy nor well-suited to safeguarding humanity’s future (Eric Schmidt, for example). It’s strange that we as a society consistently put so much faith in arrogant young tech prodigies. Over and over again we are blinded by a combination of confidence and mathematical or technical prowess, equating that skill set with overall intelligence and good judgment. (Sam Bankman-Fried is another example of this misplaced deification.)
Tech skills represent one narrow, if admirable, expression of intelligence. And I think we all know that people who possess this skill are often excused from demonstrating maturity or judgment or empathy because we hold their quantitative abilities in such high regard. Shockingly selfish or inexcusable behavior is accepted because they are “geniuses.” These are not the men (I’m going to say men here because tech companies are by and large run by men) I want making the decisions on humanity’s future.
Admirably, OpenAI’s board actually tried to impose rules on tech’s latest messiah. That’s the main function of a board. I can’t say whether the firing was justified or smart or too late, but it was certainly the board’s decision to make. In response, the tech world weighed in, Microsoft came on strong, and it became clear that many people do not want Sam Altman to abide by any rules but his own, because there is too much money and power at stake.
Now, despite (in fact because of) the prior OpenAI board’s attempt at ethical governance, that board has been dissolved. The new board ensures Altman is accountable to no one. A man-boy is in charge of humanity’s future, free to make whatever decisions he wants, and many want to keep it that way. In the future, it won’t even matter whether or not the board was right to fire him. We will be in a different world then.
“How do you sustain a business model in which users don’t pay for your service?” Sen. Orrin Hatch (R-UT) asked Zuckerberg early on in the hearing.
“Senator, we run ads,” Zuckerberg replied.