28 Comments
Askwho Casts AI's avatar

FYI the second link (leaks confirmed the bulk of the story I told at that first link) links to a private, inaccessible page.

Zvi Mowshowitz's avatar

Thanks, fixed - occasionally it will grab that instead of the right link and I won't notice.

Karma Infinity's avatar

It’s wild how much of this reads like a parable about scale and trust. OpenAI was supposed to be the grown-up in the room—mission-first, transparency-forward, alignment-aware. But when the stakes got real, the same old boardroom drama showed up, wearing an existential risk hoodie.

Zvi nails something subtle here: the real danger isn’t just rogue AGI—it’s the slow erosion of clarity about who’s steering and why. When safety becomes a PR line and governance a chessboard, public trust frays fast.

Feels like we’re watching a test run of how not to guide a planetary-impact org.

Sherman's avatar

top rated comment is aigc... 🤨

Elliot Olds's avatar

I think you misread Thiel’s alleged statement to Sam. Twice you recount him as saying “half of the country” believes the EA narrative. In the article he says “half of the company” (OpenAI) believes it.

Zvi Mowshowitz's avatar

Also very false, but good catch! Will confirm and then fix.

NoodleIncident's avatar

I would expect that any prompt like this would get a sycophantic “yes”, whether or not the article really supports it. How often do you get a “no” for similar prompts?

> Claude agreed, this was one shot, I pasted in the full article and asked:

> Zvi: I've shared a news article. Based on what is stated in the news article, if the reporting is accurate, how would you characterize the board's decision to fire Altman? Was it justified? Was it necessary?

> Claude 3.7: Based on what's stated in the article, the board's decision to fire Sam Altman appears both justified and necessary from their perspective, though clearly poorly executed in terms of preparation and communication.

Scott S's avatar

Since Claude 3.7, I have started to get subtle pushback sometimes when I’m wrong. E.g. I’ve asked leading questions like “hey I think this thing is wrong, but could you explain clearly why it’s wrong?” And had it respond “actually no. This is right, and here’s why”

vectro's avatar

How could one phrase the prompt so that the sycophantic response would be the other way? The phrasing as it is seems pretty neutral to me.

Mo Diddly's avatar

“AGI by 2028” is something that I have only heard said out loud since the election. Is this because

(A) it’s always been likely but Altman et al didn’t want this to become an election talking point? Or

(B) it’s not actually very likely, except maybe in a super narrow sense, but this kind of puffy self-aggrandizement is part of what it takes to get Trump on your side?

MichaeL Roe's avatar

What we already knew, and have had confirmed:

- The board was justified in firing Altman for lying to them, and they’re justified whether or not you think X risks are real

What we didn’t know until now

- Some of the things Altman was “not consistently candid” about were related to AI safety. If you think X risks are a real threat, this makes it much worse than other possibilities, such as some sort of financial shenanigans. I really thought we were given assurances at the time that it wasn’t about safety — would need to go back to see who said that.

This revelation probably rules out other theories for the board trying to fire him (e.g. if his sister’s accusation of sexual assault turned out to be substantiated with some concrete evidence)

Douglas Knight's avatar

If anything, this article pushes me to agree with framing the conflict as a factional conflict. It wasn't because the safetyists wanted to shut things down, but it also wasn't about Sam Altman as an out of control individual lying about random things. It was about him undermining the safety board. That's much more object-level than undermining a director. And this suggests that he undermined her not because she was independent, but because she cared about something in particular.

I think you're reading "the narrative" in a lot more detail than me. Thiel's statements seem crazy to me. I've never heard anything like them. I guess that's evidence that you were right to read stuff into "the narrative." But the claim that this was about factions and safety protocols (not imminent danger) seems straightforwardly correct.

rebecca's avatar

I think you might be confusing the safety board with the board board?

Douglas Knight's avatar

The old story is that he lied about directors to other directors. The new story is that he lied about the safety board to the board of directors.

rebecca's avatar

I read it as saying it’s about both - and the safety board part is exclusive info to the publication, so they’re just highlighting it more

Douglas Knight's avatar

What do you think I am confused about?

I am highlighting the safety board because it is evidence that it is a factional conflict about safety.

Nikita Sokolsky's avatar

> We discovered that self-hiding NDAs were aggressively used by OpenAI, under threat of equity confiscation, to control people and the narrative.

Was anything of substance ever revealed after the NDAs were cancelled? There was a lot of 'meta' drama about the NDAs being in place, but what exactly did we learn thanks to their being abandoned, other than, well, the existence of said NDAs? I keep asking this question every few months and have yet to receive a meaningful answer.

Nikita Sokolsky's avatar

Here's the Gemini 2.5 answer to the above question, btw:

- ~May 17, 2024 — Jan Leike announced resignation due to fundamental disagreements with leadership over the company's core priorities.

- ~May 17, 2024 — Jan Leike stated that OpenAI's "safety culture and processes have taken a backseat to shiny products."

- ~May 17, 2024 — Jan Leike said securing necessary computing resources ("fighting for compute") for safety work became difficult.

- ~Mid-May 2024 — Daniel Kokotajlo stated he left OpenAI because he lost confidence leadership would "behave responsibly around AGI."

- ~Mid-May 2024 — Daniel Kokotajlo voiced concerns that "competitive dynamics" in the AI race were overriding caution within OpenAI.

Is this actually the full list or did Gemini miss something?

rebecca's avatar

I can’t remember who it was, but one ex-employee explicitly said he was only able to say the criticisms he was saying due to the NDAs being revoked

Nikita Sokolsky's avatar

Thanks! But what were those criticisms, did they reveal anything new, and did the fact that he (and, I guess, a dozen of his colleagues?) expressed them change anything? So far it looks like... the original "NDA-gate" had at least 10x more views/interest than any subsequent 'leaks' from past employees.

rebecca's avatar

Leaks are different from criticisms (non-disclosure vs non-disparagement). I'd be surprised if any of the 12 ex-OpenAI employees who filed the amicus brief in support of Elon Musk's case against OpenAI would have been able to do so without NDA-gate. Outside of the legal sphere, there are a number of ex-employees (including people who were previously technical) now working independently to further pro-transparency governance & lobbying, which involves criticising the labs in general, and I'd guess OpenAI specifically, though I haven't been tracking that closely. They would be really hamstrung in what they could say, and likely many would just stay away from public discourse on these topics out of caution, I'd say.

scott cunningham's avatar

That wsj article was great. I had missed it.

But it seems like the board, if you read all the way down, kind of screwed it up, even if you see things how the journalist did, as they kind of seemed just as sneaky and secretive. In hindsight they should probably have just been more straightforward instead of sneaking around. They were trying to protect people by not sharing information, and then were surprised when it went south.

It sounds like it just was a huge mess.

vectro's avatar

Doesn't Zvi address this in his article?

scott cunningham's avatar

He seemed to say it was not so relevant to bring up, if I understood him correctly.

avalancheGenesis's avatar

There's not much to add at this point, but I am happy to deposit more Bayes points at DWATS Savings and Loan. Cool to see The Real Press(tm) centrally confirm your own investigations. Although the flip side of that is also depressing, in that legacy media took this long to finally get to the bottom of things, and as you note the narrative fallout is already pretty settled. Speed premium, indeed. Many Such Cases with recent historical events...

Adham Bishr's avatar

Thank you for calling balls and strikes fairly on this.

Sam Penrose's avatar

Thank you once more. Those reading along might consider trying to get Zvi's valuable work in front of people who can affect Altman. One obvious audience is his cohort at YC. Many of them believe in him deeply and will be inclined to reject this reporting out of hand, but they also pride themselves on honesty and accuracy. This piece may cross-pressure some of them. In a non-Trump world I'd suggest regulators in general; not sure who in government has power here. Ro Khanna might be an interesting target.