Discussion about this post

Karma Infinity:

It’s wild how much of this reads like a parable about scale and trust. OpenAI was supposed to be the grown-up in the room—mission-first, transparency-forward, alignment-aware. But when the stakes got real, the same old boardroom drama showed up, wearing an existential risk hoodie.

Zvi nails something subtle here: the real danger isn’t just rogue AGI—it’s the slow erosion of clarity about who’s steering and why. When safety becomes a PR line and governance a chessboard, public trust frays fast.

Feels like we’re watching a test run of how not to guide a planetary-impact org.

NoodleIncident:

I would expect that almost any prompt like this would get a sycophantic "yes," whether or not the article really supports it. How often do you get a "no" for similar prompts?

> Claude agreed, this was one shot, I pasted in the full article and asked:

> Zvi: I've shared a news article. Based on what is stated in the news article, if the reporting is accurate, how would you characterize the board's decision to fire Altman? Was it justified? Was it necessary?

> Claude 3.7: Based on what's stated in the article, the board's decision to fire Sam Altman appears both justified and necessary from their perspective, though clearly poorly executed in terms of preparation and communication.

26 more comments...
