The All-Too-Familiar Melodrama of OpenAI

Sam Altman crossing the proverbial Delaware for his triumphant return to OpenAI.

One of the biggest tech stories of the year unfolded just before the Thanksgiving break with the firing and subsequent re-hiring of Sam Altman as CEO of OpenAI, the company behind ChatGPT.

The coverage swiftly jumped from the business pages to front-page news, elevating Altman briefly to a Musk level of notoriety (i.e. just below the Kardashians) and causing me to wonder if the entire ordeal might have been a well-orchestrated PR stunt.

As I consumed the minute-by-minute drama—from fired, to forming a competitive company, to joining Microsoft, to employee mutiny, to social media uprising, to speculative re-hire, to definitely not getting re-hired, to full-blown board showdown, to board purge, and, ultimately, his triumphant return—something struck me as oddly familiar.

Finally, I realized why: I’ve seen many of the scenes in this movie before.

Like others who have spent their careers in venture-backed tech startups, I’ve lived through variants of this melodrama in many of my previous professional experiences. Granted, that comparison might be like observing similarities between one of my high school romantic relationships and the courtship of Taylor Swift and Travis Kelce, but the pattern recognition is definitely there—just at a much more modest scale. Also, admittedly, I’m observing all this from afar through the distorted lens of media coverage, but allow me to elaborate on the parallels:

Founder cult of personality—Since Steve Jobs and Bill Gates, one of the most enduring characteristics of tech companies is the cult of personality around a perceived-to-be-irreplaceable founder. Altman solidified his celebrity status with his successful board showdown. But every founder I’ve ever worked with (including myself) perceived themselves to be irreplaceable. The number of tech founders who actually possess the skillset necessary to scale with their business is extraordinarily small. Like, count-on-one-hand small. Yet every founder thinks they are that one in a million. Many, maybe even most, are narcissists who prioritize their own survival ahead of the company’s. The same traits that made them good founders—such as stubbornness, audaciousness, and not-taking-no-for-an-answer—often make them shitty leaders. In almost every case I’ve witnessed, the founder fought, screamed, lied, cheated, and generally brought the company to the brink of self-inflicted implosion before stepping aside. Altman is apparently not an exception.

Inexperience around the table—The OpenAI executive team immediately declared themselves ready to jump ship with Sam. Perhaps that’s not surprising given their limited experience. The Information noted that “none of the six people who report to CEO Sam Altman…has held a significant executive position in past jobs.” Which points to another enduring, if unfortunate, trait in tech companies: inexperienced “executive” teams composed of loyalists, valued more for their allegiance than their experience. People with C-level titles in their twenties often behave more like fraternity brothers than thoughtful, diligent executives. Yet the prevailing wisdom in Silicon Valley, tainted by ageism, is that youth and inexperience are important assets, not potential liabilities. When there’s no adult in the room, drama ensues.

Technical leaders who want to be business leaders—Perhaps the most consequential player in the initial revolt to overthrow Altman was co-founder and Chief Scientist Ilya Sutskever. Sutskever’s motives for allegedly orchestrating the coup and personally delivering the termination notice, only to reverse himself days later and tweet, “I deeply regret my participation in the board’s actions,” are unclear. But he wouldn’t be the first technical leader to think he should be the one in charge. Perhaps he changed his tune when he was not named interim CEO? Who knows. But the pattern is a recognizable one. Tech startups and technical people frequently dismiss the “soft skills” of business leadership and roll their eyes at the “suits” with MBAs. Too many startups have failed because technologists—rightly revered for their technical acumen—don’t see the limitations of their own social, interpersonal, and decision-making skills.

Nonexistent board governance—OpenAI had a highly unusual corporate structure in which a nonprofit board governed a for-profit corporation, ostensibly to ensure the ethical development of AI technology. In a move I did not see coming (italics = sarcasm), when push came to shove, the for-profit corporation won. The board’s stated justification for Altman’s firing, that he had “failed to be consistently candid,” was ridiculously weak. But the very purpose of a board of directors is to provide fiduciary and ethical oversight of the company, and of the CEO specifically. As at most tech companies, though, the veneer of independent governance had no teeth, leading to multiple governance mistakes. Independent board members Helen Toner and Tasha McCauley, also the only two women on the board, were both dismissed. In their place are Bret Taylor, who helped orchestrate the sale of Twitter to Elon Musk (enough said), and former Treasury Secretary Lawrence Summers, an addition reminiscent of George Shultz’s role on the Theranos board, i.e. flashy name, no relevant experience. Silicon Valley startup boards are conspiracies between the founder(s) and the venture investor(s), neither of whom is remotely independent. It’s time to reintroduce the concept of boards as independent overseers rather than rubber stampers.

Greed leads to infighting—OpenAI’s otherwise routine founder drama played out as national news because of the sheer shit tons of money at stake. With a private valuation north of $80 billion, OpenAI is already among the 100 most valuable companies in the U.S.—comparable in valuation to Citigroup, Boston Scientific, and KKR, but with fewer than 800 employees and only a handful of private investors. When venture capitalists, executives, and average employees all believe they are destined to be billionaires, the fangs come out. This so-called unicorn problem of insanely valued private companies is a hard one to remedy. At some level, billionaire investors and corporations are entitled to do what they want with their money. But with no regulatory or free market mechanism to keep the internal hype from bubbling over, public cage matches like this one are a predictable byproduct. Indeed, one major reason rank-and-file employees allegedly rallied behind Altman is his plan to allow employee stock sales.

Survival trumps integrity—The other fundamental dynamic at OpenAI that is indicative of most tech startups is the need to move recklessly full-speed ahead or risk death at the hands of a competitor. Despite its billions of dollars raised and enormous valuation, OpenAI faces stiff competition, most notably from Anthropic but also from hundreds, if not thousands, of well-funded AI startups and corporate tech titans. Indeed, OpenAI’s eye-popping success has spawned an irrational venture capital orgy of throwing money at any competent programmer who happens to use the term “AI” in any conversational context. The open letter last March, signed by multiple tech leaders and calling for a slowdown in AI development, is but a distant memory, if it was ever anything beyond a cynical ploy—like a group of gangsters pointing guns at each other, each expecting someone else to put down their weapon first. Many see the letter and its surrounding publicity as little more than an effort to catch up to OpenAI’s lead. Altman himself largely dismissed the letter, and his no-holds-barred pursuit of AI leadership is a major reason for the company’s success. But while this win-at-all-costs mindset can generate outsized returns on investment, it also fosters misbehavior, corner-cutting, deliberate lack of oversight, and, occasionally, outright fraud (see 2022’s greatest hits here).

Regardless of these observations, it’s certainly not my place to opine on whether Altman should stay or go. By most accounts, he’s intelligent, thoughtful, articulate, and well-liked. He’s also been accused of pushing the envelope, alienating close colleagues, and potentially developing AI technology in an unsafe way. That question—about the ethical and moral obligations behind the development of AI, much like the development of atomic energy—seemed to be at the center of this kerfuffle and was exactly why the board was constituted as it was in the first place. Now, with that guardrail gone, Altman will have to rely on his own moral compass.

While many of the stories emanating from OpenAI may be triggering for me, I’ve already processed those experiences through the therapy of writing. All of the above are themes in my first novel, Bit Flip. Although that story is completely fictionalized, mashed up, and remixed, much of what I wanted to portray is how the structure of venture-backed startups encourages these patterns of misbehavior. Many close friends who have worked in the tech industry have told me the novel portrays that world so accurately that it induced PTSD for them.

I offer the above observations not as a criticism of anyone I’ve ever worked with or for—the vast majority of whom are close, lifelong friends and colleagues—but instead as a critique of the structure, expectations, incentives, and precedents of the venture-backed startup model: a preconceived notion of how businesses should be built that has become so deeply ingrained in the Silicon Valley ethos that we can’t imagine any other way. OpenAI tried a different corporate structure, and it broke almost immediately under the first pressure test. Now, with all the hype around OpenAI and Altman himself, a new generation of would-be founders is learning the same bad habits.

Michael Trigg