Irrespective of whether or not the OpenAI board's removal of Sam Altman was driven by AGI doom fears, has it now been established that even a pure-of-heart non-profit is not in a defensible position to safely steward the building of an AGI?

If a non-profit AI lab seems to be succeeding and indeed pumps the brakes, a megacorp will offer approximately $infinity to hire everyone away. We don't have to speculate; this weekend proved it.

Maybe not every AI lab is run by a charismatic, plucky VC CEO who could easily enlist all of his employees to follow him when opposed by a board, but there should be serious concern now that "the people dynamics necessary to build a competent non-profit AI lab" and "incorruptibility by material wealth and glory" are an impossible combination.


Given the existence of Facebook's open-source models and of numerous other AI labs with capabilities rapidly approaching those of GPT-4, I don't think it would have mattered much even if OpenAI's board had somehow prevailed in the conflict. Only the government can stop GPT-6 and GPT-7 from being developed; everyone else lacks sufficient coordination capabilities.


The best hope is that the government takes notice of this and intervenes appropriately before it is too late.


Well said. Here's a thought, though.

Most markets have more than one player. Yes, AI may be very different.

But just maybe, commercial success by Anthropic will fuel them enough that they can make substantial safety contributions before it is too late.

With a lot of luck, maybe they outrun Moloch. No, you're right, probably not. Well, maybe Moloch will start partying too hard with investor money and the good guys can sneak past.


Moloch does not, alas, rest.


I don't think this follows. One thing that did not happen was the board saying anything like "This is an AI safety decision; we're hitting the big red 'stop' button." If they had said anything like that, I think the current situation would look a lot different.

Instead, the secrecy, surprise, unconvincing justifications, attempts to walk it back, and resignation talk make me think that the board was not competent to even run a gaming guild. (Seriously, I've seen stuff like this happen.) I'm sure the board members are fine people otherwise, but sometimes good, thoughtful nerds, when placed in positions of responsibility, can get their heads so far up their asses (i.e., hyperfocus) that they lose sight of ground truth. (And here I'm speaking of myself, in a different situation, as a director of a 501(c)(3).) The rapid hiring of Altman (or his founding of a competitor), and the mass revolt of employees, should have been predictable consequences of their actions, if the board had any situational awareness. The failure to articulate a convincing reason for their actions was simple incompetence.

This isn't a judgement on the directors' "value" as human beings. Not everyone is cut out to be a director, or a CEO, or a startup founder. And sometimes we only find out when we try and fail. That's life. (Or perhaps, of course, in this case, apocalypse.)


The board could have explained why they fired Sam, but did not. Note that all the explanations about AI safety, power grabs, etc. are pure speculation. If they found a dangerous capabilities advance, they could say that. If Sam lied to them about such an advance, they could say that. The fact that they don't defend themselves or their reasoning makes them seem crazy. Ilya's tweet in particular, the one about how he regrets his actions, seems very foolish. What was he originally thinking? Why did he change his mind? This looks like handing the most powerful models to MSFT without trying to make anything different happen.


Also, there was the internal memo from Lightcap:

"We can say definitively that the board's decision was not made in response to malfeasance or anything related to our financial, business, safety, or security/privacy practices. This was a breakdown in communication between Sam and the board."

Which would seem to rule out many of the theories for why Sam might have been fired.


As a skeptic of the doomer narrative, this feels much, much more like the children running the show than anything else. Too much of the actions and rhetoric of the doomer community continue to look insane to someone not steeped in the lore and nuance, no matter how much I read any of the thinkers on the subject. It all reads like a religious cult explaining how obvious the return of their alien overlords is, and how you just need to have a cup of this really nice Kool-Aid they made so that the aliens know you're cool when they get back.

Further, this ties in quite nicely with a clean narrative of the rationalist and EA communities eating themselves by becoming so disconnected from reality and so focused on weird concerns that outsiders are entirely lost. The response from those communities seems to be "Good, if you weren't sufficiently devoted to the cause then we're glad to lose you," which reads far too much like a No True Scotsman. Even worse, the deeper into the doom cult this all goes, the more it seems to draw regulatory scrutiny in the worst possible ways.

No one wins in this clown show and I'm sorry to see it, but at least it exposes the "rationality" of the people involved. We'll see if the MS subsidiary is actually a stable relationship, or if we're about to get ten new Anthropics when the lead engineers run into big-corp roadblocks, but either way this is going to be a wild ride for another several months at least.


"Lets making something more intelligent than humans!"

"Surely there is no risk to it!"

Doom is the default reality: 99% of all species have gone extinct. I am not an EA, just an ordinary guy who, like 80% of humanity, would like to continue living.


> Too much of the actions and rhetoric of the doomer community continue to look insane to someone not steeped in the lore and nuance, no matter how much I read any of the thinkers on the subject.

Believing the basic "doomer" idea, that AI could be extremely dangerous to humanity, is in fact the norm. It's considered obvious by most people, and would be even without the warnings from most relevant experts and world leaders. The group that believes it could be safe, while not small enough to call a "bubble," is definitely a minority.


Thanks for the list. A commenter on Hacker News points out the lack of dates and times in your list, even though you announce "All times stated here are eastern." Adding them back would probably be helpful.


I'm not intelligent enough to understand the AI situation, but I do love the Gettysburg Address. For those who are not familiar:

https://www.abrahamlincolnonline.org/lincoln/speeches/gettysburg.htm

Four score and seven years ago our fathers brought forth on this continent, a new nation, conceived in Liberty, and dedicated to the proposition that all men are created equal.

Now we are engaged in a great civil war, testing whether that nation, or any nation so conceived and so dedicated, can long endure. We are met on a great battle-field of that war. We have come to dedicate a portion of that field, as a final resting place for those who here gave their lives that that nation might live. It is altogether fitting and proper that we should do this.

But, in a larger sense, we can not dedicate -- we can not consecrate -- we can not hallow -- this ground. The brave men, living and dead, who struggled here, have consecrated it, far above our poor power to add or detract. The world will little note, nor long remember what we say here, but it can never forget what they did here. It is for us the living, rather, to be dedicated here to the unfinished work which they who fought here have thus far so nobly advanced. It is rather for us to be here dedicated to the great task remaining before us -- that from these honored dead we take increased devotion to that cause for which they gave the last full measure of devotion -- that we here highly resolve that these dead shall not have died in vain -- that this nation, under God, shall have a new birth of freedom -- and that government of the people, by the people, for the people, shall not perish from the earth.


I've read virtually every word you've written since I discovered your COVID posts just before the Omicron wave, and I just want to say: once again, you've managed to research and write a much more informative and useful perspective on this situation than any media source I've read, mainstream or otherwise. Thank you for what you're doing, and Happy Thanksgiving, Zvi, to you and your family. The world needs you and what you're doing right now.


To the point and clear as usual. If I hadn't been biting my nails watching every move through the weekend, this would've made a great summary. It's a great resource for anyone who's out of the loop and wants some info.

As an erratum: Altman's joke at the end of point 14 was ‘AGI has been achieved internally’, not ‘AI has been achieved internally’.


Looks like it’s all over, as Altman and Brockman return. Presumably we’ll see the new board restructured or stuffed in such a way that this won’t happen again? Now to try and deduce why it happened!
