97 Comments

Damn, I wish our continued survival didn't depend upon the whims of a few political actors.

I feel like there is often an implicit assumption that everyone with serious concerns about AI risk should in some sense be on the same side, or at least not literally be trying to suppress concerns expressed by other activists. After all, this is often true for other movements. Even though climate change activists may disagree over the details of how reducing carbon emissions should be accomplished (is nuclear part of it? do you use carbon taxes, cap and trade, etc.?), those disagreements are for the most part minor relative to what they agree on, so practically they can function as policy/political allies.

This just isn't true regarding AI. If your concerns are about an accidental self-improving supervillain-style paperclip maximizer, you may favor exactly the opposite policies from someone concerned about AI-fueled threats to democracy/society, someone who is worried about a slow takeoff, or someone who thinks most AI will be safe but fears the incentives to make an unsafe one. Equally impactful are your theories on the extent to which it's politically plausible to limit development. If you're convinced someone will build AGI within the next 20 years no matter what, then your best play is probably to try to make sure it's you, so you can shift the probability by a few percent in the right direction.

And while I don't think the risk is that high, if you judge the probability of doom to be quite high with substantial certainty, that's exactly the sort of situation where the usual arguments about the importance of avoiding censorship and not suppressing ideas fail. Indeed, I fear part of the attraction of AI doomerism is that it really does offer a good justification for breaking all the usual norms of behavior to save the world, and we've grown up on comics lionizing exactly that.

Indeed, I'm wondering how seriously EAs really take their concern about AI in the short term, and how selfless they really are, since if the answer to those questions is "very" then actions like trying to assassinate Sam Altman start to sound plausible. (I'd call myself an EA, but I'm neither selfless nor concerned about AI.)

I fear it won't be too long before EA/rationalism has its own Ted Kaczynski.

How insightful!

Excellent rundown as usual, Zvi. I was looking forward to this after seeing the news about Sam returning last night.

On a tangent, I've been trying to get a sense of how this is being perceived more widely, and it's remarkable how popular the following sentiment (paraphrased) is:

> This was a struggle between profit-driven capitalist incentives for closed models and _ethical_ open-source models leading the future, and Altman returning is the capitalists winning.

This is not only an incorrect framing of the events; it's a complete misunderstanding of how OAI is structured as a non-profit and of what would happen if OAI were to break apart. Not to mention a distressingly optimistic view of open-source models.

I'm curious about the base rate for not being consistently candid among CEOs. It must be extremely high relative to the base population: you have to make people like your company, whether that person is a customer, investor, or employee. As I see it, we don't, and probably never will, know what actually explains the board's actions. The plausible theories are:

1. Actual dangerous capability increases

2. Sam trying to create dangerous capability increases via hardware improvements

3. More normal breakdowns in communication, followed by frustration within the board over Sam's actions

4. The board basically went insane and did something really stupid for no good reason

I was worried about 1 for a while, but we're all still here, so not much reason to worry about that.

2 seems plausible. Sam will acknowledge that AI could be dangerous when asked, then go back to increasing capabilities.

3 seems most likely, but I think we wouldn't actually know unless OpenAI becomes more open than it has been so far, and they don't have to do that, so they probably won't.

4 could happen. Let him who has never done something stupid for no reason cast the first stone. This also seems like the most popular explanation on X, but nobody on X knows anything more than anyone else. They're just posting while mad.

If this was the story, I would have more sympathy for the board if they had straight up said it was about a policy conflict between the non-profit's objective and maximizing profits, particularly with respect to the board being allowed to publish AI risk research that might run counter to profit maximization. That sounds like a stand they could have defended.

Instead, they implied Altman lied about something, but it was never clear what, and people imagined it must have been something really bad. As Yudkowsky said, "shot somebody" was the kind of thing that was inferred.

The alternative theory is that the board got the feeling that Altman was a sociopath, and got spooked. But their sense of this was more a vibe from talking to him than anything they could strictly prove.

Counterargument: tech CEO? Of course he's a sociopath. How could you possibly be surprised by this?

What does this mean with respect to Microsoft if Sam regains (and keeps) his job as CEO of OpenAI? Part of his ploy to regain his position was to be hired in the interim directly by Microsoft. Was that _just_ a ploy, or is he now (in some sense) still working for them? Your diagram shows that Microsoft is a non-controlling owner, but it seems to me that they must now have a lot more leverage (implicitly or explicitly) than they used to.

I think your allegiances are a bit too on display here. You're being far more charitable to the inexplicably silent high-p(doom) people who agree with you, while painting Altman as a careless chessmaster, but there really isn't enough evidence to determine who was wrong, especially when the only saving grace for Toner is an article by Cade Metz, of all people.

I'm disappointed, you're usually more neutral than this.

Great and important article. Sad that you have to spin a narrative, but it appears to be the most likely way this went, and doing so is necessary in this situation.

"Once the board pulled the trigger firing him in response, Altman had a choice on what to do next, even if we all knew what choice he would make."

This was not clear to me, and it's a huge update regarding his trustworthiness and dedication to the official OpenAI mission. I'd guess others feel the same.

> This is a Fight For Control; Altman Started it

There is an exceedingly large amount of speculation regarding Altman's motives in this essay. It's worth noting that he a) has delayed the release of GPT models multiple times, b) has no monetary incentive, and c) has testified frankly multiple times that he cares about safety. People can claim they know better than everyone else what's in his heart, but his actions are out there to be seen, and those are what ought to be used when writing things like this.

At this point, pointing the finger back at him seems like motivated reasoning.

Need for non-competes, anyone?

Zvi, you say Sam Altman is "CEO of OpenAI with zero equity", which is the story I normally see. However, Matt Levine in his newsletter recently said that "OpenAI Global LLC is an $86 billion startup with employee and VC and strategic shareholders", which presumably includes Sam as a big shareholder. Any idea whether Sam really has zero equity, or do he and the employees share in the billions of dollars of equity?
