It’s mystifying. I’m not a fan of the company, OpenAI — they’re the ones hyping up ChatGPT, they’re 49% owned by Microsoft, which, as usual, wants to take over everything, and their once and future CEO Sam Altman seems like a sleazy piece of work. But he has his fans. He was abruptly fired this past week (and what’s up with that?), there was some kind of internal revolt, and now he’s being rehired? Appointed to a new position? Confusion and chaos! It’s a hell of a way to run a company.
Here, though, is a hint of illumination.
Sam Altman, the CEO of OpenAI, was unexpectedly fired by the board on Friday afternoon. CTO Mira Murati is filling in as interim CEO.
OpenAI is a nonprofit with a commercial arm. (This is a common arrangement when a nonprofit finds it’s making too much money. Mozilla is set up similarly.) The nonprofit controls the commercial company — and they just exercised that control.
Microsoft invested $13 billion to take ownership of 49% of the OpenAI for-profit — but not of the OpenAI nonprofit. Microsoft found out Altman was being fired one minute before the board put out its press release, half an hour before the stock market closed on Friday. MSFT stock dropped 2% immediately.
Oh. So this is a schism between the controlling nonprofit side of the company and the money-making for-profit side. It’s an ideological split! But what are their differences?
The world is presuming that there’s something absolutely awful about Altman just waiting to come out. But we suspect the reason for the firing is much simpler: the AI doom cultists kicked Altman out for not being enough of a cultist.
There were hints that this split was coming, dating back to March.
In the last few years, Silicon Valley’s obsession with the astronomical stakes of future AI has curdled into a bitter feud. And right now, that schism is playing out online between two people: AI theorist Eliezer Yudkowsky and OpenAI Chief Executive Officer Sam Altman. Since the early 2000s, Yudkowsky has been sounding the alarm that artificial general intelligence is likely to be “unaligned” with human values and could decide to wipe us out. He worked aggressively to get others to adopt the prevention of AI apocalypse as a priority — enough that he helped convince Elon Musk to take the risk seriously. Musk co-founded OpenAI as a nonprofit with Altman in 2015, with the goal of creating safer AI.
In the last few years, OpenAI has adopted a for-profit model and churned out bigger, faster, and more advanced AI technology. The company has raised billions in investment, and Altman has cheered on the progress toward artificial general intelligence, or AGI. “There will be scary moments as we move towards AGI-level systems, and significant disruptions, but the upsides can be so amazing that it’s well worth overcoming the great challenges to get there,” he tweeted in December.
Yudkowsky, meanwhile, has lost nearly all hope that humanity will handle AI responsibly, he said on a podcast last month. After the creation of OpenAI, with its commitment to advancing AI development, he said he cried by himself late at night and thought, “Oh, so this is what humanity will elect to do. We will not rise above. We will not have more grace, not even here at the very end.”
Given that background, it certainly seemed like rubbing salt in a wound when Altman tweeted recently that Yudkowsky had “done more to accelerate AGI than anyone else” and might someday “deserve the Nobel Peace Prize” for his work. Read a certain way, he was trolling Yudkowsky, saying the AI theorist had, in trying to prevent his most catastrophic fear, significantly hastened its arrival. (Yudkowsky said he could not know if Altman was trolling him; Altman declined to comment.)
Yudkowsky is a kook. What is he doing having any say at all in the operation of any company? Why would anyone sane let the LessWrong cultists anywhere near their business? It does explain what’s going on with all this chaos — it’s a squabble within a cult. You can’t expect it to make sense.
This assessment, though, helps me understand a little bit about what’s going on.
Sam Altman was an AI doomer — just not as much as the others. The real problem was that he was making promises that OpenAI could not deliver on. The GPT series was running out of steam. Altman was out and about questing for yet more funding for the OpenAI company, in ways that upset the true believers.
A boardroom coup by the rationalist cultists is quite plausible, as well as being very funny. Rationalists’ chronic inability to talk like regular humans may even explain the statement calling Altman a liar. It’s standard for rationalists to call people who don’t buy their pitch liars.
So what from normal people would be an accusation of corporate war crimes is, from rationalists, just how they talk about the outgroup of non-rationalists. They assume non-believers are evil.
It is important to remember that Yudkowsky’s ideas are dumb and wrong, he has zero technological experience, and he has never built a single thing, ever. He’s an ideas guy, and his ideas are bad. OpenAI’s future is absolutely going to be wild.
There are many things to loathe Sam Altman for — but not being enough of a cultist probably isn’t one of them.
We think more comedy gold will be falling out over the next week.
Should I look forward to that? Or dread it?
It’s already getting worse. Altman is back at the helm, there’s been an almost complete turnover of the board, and they’ve brought in…Larry Summers? Why? It’s a regular auto-da-fé, with the small grace that we don’t literally torture and burn people at the stake when the heretics are dethroned.