32 Comments

The equity clawback is not totally unheard of. Skype and Zynga and others have “played hardball” with equity contracts. It’s disappointing from an employee point of view though.

One important note: the whole nonprofit PPU structure is already much more controlled by the company than in most cases. You don't just exit at an IPO or acquisition; you are relying on future cooperation from OpenAI. If they want to play hardball with these contracts in the future, to make them less valuable to ex-employees, they will have many more opportunities to do so.

It does all seem like one thing. OpenAI has almost been torn apart by the struggle between the doomer faction and the product faction. Now the product faction seems to have won and is consolidating its political control.

The AI risk undeniers, you mean. Our current path leads to extremely bad outcomes.

#PauseAI

less of this please

The post you were replying to seemed intentionally nonhostile along political lines, and this seems like an attempt to turn it into a fight.

I just don't think "doomer" is an accurate word. I mean no argument otherwise and apologize if it seemed so.

This is one of the closest matches for the catch-22 in the novel Catch-22 that I've ever seen:

"If you were truly worried about this you would blow up your life and savings, in this way that I say would make sense, despite all explanations why it doesn’t.

You didn’t.

So clearly you are not worried.

Nothing to worry about.

You did.

So clearly you are an idiot.

Nothing to worry about."

Also, at this point, I'm fairly certain Sam Altman is a sociopath. If I were an AI doomer, this would really scare me.

Technically, this is Morton's Fork, not Catch-22. :)

I've never heard of Morton's Fork. Nice. It feels like we have variations of this kind of thing all over the place. Is there a special one for the increasingly intensifying Douche vs Turd sandwich elections we keep having?

In some sense, it's impressive how much content Sam has produced over the last few weeks. It's for someone, isn't it? Even if I personally wish he would answer more interesting questions.

Minor point: I'm pretty sure Dario could at this point easily afford to give up his OpenAI equity if he had something important to say. There are of course plenty of other reasons for him not to talk now.

Interesting to see that Karpathy doesn't warrant a mention among the non-safety people who have also recently left, since his departure came after the board coup d'état. See https://news.ycombinator.com/item?id=39365935.

author

This was only looking at non-safety people in the last few weeks; that happened a few months ago.

Fair enough; the context made me think you were looking at the longer time frame post-coup.

I'd like to hear a lawyer's take on Sam's exact wording about the NDAs. My guess is that he was careful to steer clear of any promissory estoppel situations if they plan to continue along the same path, but that's for someone with an actual JD to guess at better than I can.

Have a JD. I am not your lawyer nor that of any readers of this post. Neither you nor anyone else may rely on this for legal advice. Here is my take purely as a legally-educated outsider offering commentary based on general principles of contract law and without reference to anything California-specific or otherwise subject to any applicable conflict-of-laws principles:

I would say that this skirts close enough to the line that it may not have been written in the expectation of dodging promissory estoppel, but it's very far from what I would call an ironclad guarantee of being treated as PE. In particular, the statement "Vested equity is vested equity, full stop" and the offer to "fix" the agreement (implicitly acknowledging its deficiency) are suggestive of the possibility of PE. Were I advising Altman I would have told him to strip the phrase "Vested equity is vested equity, full stop" in particular.

However, the other language makes this a tougher argument: specifically, the phrasing "nor will we do that if people do not sign a separation agreement (or don't agree to a non-disparagement agreement)" is weasel-wording *par excellence* in its conditional framing. I mean, why not just say that those clauses are non-binding and will be removed? Why would anyone sign the agreements absent some other, presumably enforceable, threat? Note that it's phrased purely prospectively. The offer to reach out privately also undermines an argument for PE and offers nothing that sounds to me like an enforceable stricture.

On the whole, my take is that this tweet is much closer to sweet-sounding but largely vacuous damage control than to enforceable promissory estoppel, but it contains statements less compatible with pure lawyer-ese, which suggests some degree of non-lawyer input or authorship.

(There's also a very outside chance that PPUs are being parsed so finely that they don't count as "equity," so the "full stop" language is vapid because it's inapposite, but that's not the kind of game that the lawyers I know would typically play in this context. If it goes bad it goes *real* bad, and there's no way to interpret that kind of parsing in context as anything but deliberately misleading.)

Still only halfway through reading, but you’re doing God’s work, Zvi! Thank you 💚

Sorry if I missed it, but I didn't see a discussion of this: what exactly was the super-alignment team doing that warrants tens or hundreds of millions of dollars, and was it actually useful?

I feel like this is the elephant in the room. What were they doing that was so important? Did their work even matter?

EY keeps saying over and over that all attempts to solve alignment are futile and we should just shut down attempts to create AGI. So does it matter if we spend $0, $1b or $10b on alignment projects? It seems like we either shut it down and things are fine (at least in terms of AGI risk) or we don't shut down and they're not.

EY could be wrong. For instance, I'm personally more optimistic about prosaic alignment than he is. In that case, progress on "simple" alignment, i.e. the genre of RLHF, DPO, etc., would be very valuable.

My guess is that they are starting to see a flatline in capabilities, and a lot of the people are asking "why are we spending all this money if it isn't even a realistic concern?" I think, obviously, the people on the safety side don't want to lose their jobs, so they argue they should keep them, but they also don't believe OpenAI is actually close enough to warrant giving up their equity, since the most likely outcome in the medium term is that their AI is just a useful tool and nothing more.

They lay out pretty clearly here what approaches they thought were promising:

https://openai.notion.site/Research-directions-0df8dd8136004615b0936bf48eb6aeb8

Your daily reminder that:

1. If you can get hired by Big Tech, you'll always lose money by choosing to work for a startup instead: https://danluu.com/startup-tradeoffs/. OpenAI's stock got a pretty nice boost, but so did Nvidia's, for example, and Nvidia's employees are 100% liquid while OAI's employees can't sell their options willy-nilly. (A rough expected-value sketch follows after this list.)

2. If you're smart enough to get hired by Big Tech and still want to work for a startup, you should be the founder, not an employee.
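
For point 1, a back-of-the-envelope sketch of the expected-value comparison. Every number below is a hypothetical placeholder, not a real offer or a real exit probability; the point is only to show where "in expectation" does the work:

```python
# Back-of-the-envelope comparison of Big Tech vs. startup compensation.
# All figures are hypothetical placeholders, not real offers.

big_tech_salary = 250_000   # cash per year
big_tech_rsus = 150_000     # liquid RSUs per year (sellable immediately)
big_tech_per_year = big_tech_salary + big_tech_rsus

startup_salary = 180_000    # cash per year
option_payoff = 2_000_000   # what the options are worth in a good exit
p_good_exit = 0.05          # assumed chance of such an exit
vesting_years = 4
startup_per_year = startup_salary + (p_good_exit * option_payoff) / vesting_years

print(f"Big Tech: ${big_tech_per_year:,.0f} / year, liquid")
print(f"Startup:  ${startup_per_year:,.0f} / year in expectation, mostly illiquid")
```

With these placeholders the startup trails even before discounting for illiquidity and dilution; whether the claim is "always" or merely "in expectation" turns on which numbers you plug in.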

Not always. In expectation.

Sam Altman is lining himself up to be God.

This is why fixing the non-disparagement clause would be only a partial solution.

It's Roko's basilisk, but the sociopathic entity with possibly limitless future power is already here.

He thinks he knows who he is, and other people think he might be right.

My hope (or cope) from now on: the AI industry being clearly irresponsible will lead to the government cracking down with regulations.

I really want to live past 35.

Have a JD. I am not your lawyer nor that of any readers of this post. Neither you nor anyone else may rely on this for legal advice. Here is my take purely as a legally-educated outsider offering commentary based on general principles of contract law and without reference to anything California-specific or otherwise subject to any applicable conflict-of-laws principles:

Regarding the release, my view is that the obligation to sign a "general release" without making the other clauses in such a release explicit (such as an NDA and non-disparagement), ideally by including the separation agreement or at least its substantive obligations in the onboarding agreement, does indeed vitiate consideration vis-à-vis the vested equity if this gets sprung on employees only after they leave (and doubtless without legal advice or practical awareness of the relevant laws, although that's likely to be of limited relevance, since you don't need a lawyer to sign or not sign a contract). The basic logic: employees agree *ex ante* to condition their equity grant's validity on a general release of claims but do not agree to have such equity be conditional on non-disparagement -- the marginal rights granted to OpenAI on exit are being given "for free."

However, employment releases may contain other promises (potentially, though not necessarily, token or "peppercorn" promises) by the company (e.g. maybe the right to show up and sell your PPUs at the yearly meeting which might be a discretionary right, or some kind of severance pay) that might constitute valid consideration for any additional promises in the document by the employee, including nondisparagement -- I would need to see the whole contract to assess.

Practically speaking, nobody not in imminent danger of Judgment Day is going to litigate this, given the expense and risk, and thus it's basically free leverage and a chilling effect for OpenAI (as Zvi notes). An unfortunate quality of the American legal system is that there's essentially no sanction for writing blatantly unenforceable provisions into contracts in bad faith (other than the counterparty refusing to do business with you if they're offended enough). I've had multiple leasing agents candidly acknowledge that half the stuff in the standardized NYC lease form is explicitly void or unenforceable, but there's no incentive for them not to include it to intimidate less educated lessees.

>it wouldn’t be the fear of losing my equity holding me back from whistleblowing so much as fear of Sam Altman himself.

Honestly the scariest sentence in the post. It reminded me of stories with the trope where the devil comes to town and nobody can tell at first; there are just signs. I am of course not saying Sam Altman is the literal devil, but from my very distant point of view everything I am seeing is consistent with the trope.

“Needful Things”…

Have you ever tried to combine your Asimov's Foundation Imperial Diplomat chats with some attempt to classify statements according to the Simulacra levels? I spent ~30 mins using GPT-4o to try to classify statements from an Ezra Klein Show podcast transcript according to the Simulacra levels, with underwhelming results: basically 90% of statements it classified as L1, 10% as L3. I then tried a Moldbug essay, which it still classified as only L1 or L3 (though split roughly evenly), and a Trump speech (~80% classified L2 and the rest L1 or L3). I only got it to classify any statement L4 when I gave it some random LLM-generated content.
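
For anyone wanting to try this kind of experiment, a minimal sketch, assuming the official `openai` Python client (v1.x) with an API key in the environment. The prompt wording and the level glosses are illustrative guesses, not what the commenter actually used:

```python
# Minimal sketch of classifying statements by Simulacra level with GPT-4o.
# Assumes the `openai` v1 Python client and OPENAI_API_KEY in the environment;
# the prompt and level glosses are illustrative, not the original experiment's.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "Classify the statement by Simulacra level and reply with exactly one "
    "of: L1, L2, L3, L4.\n"
    "L1: a plain attempt to describe reality.\n"
    "L2: chosen to manipulate what the listener believes about reality.\n"
    "L3: signals allegiance to a group or narrative.\n"
    "L4: pure signaling, detached from any model of reality."
)

def classify(statement: str) -> str:
    # One short completion per statement; temperature 0 for stable labels.
    response = client.chat.completions.create(
        model="gpt-4o",
        temperature=0,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": statement},
        ],
    )
    return response.choices[0].message.content.strip()

print(classify("We raised prices because our input costs went up."))
```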

author

Sounds tough. Those are disappointing results but I didn't expect anything great. My guess is you need to go heavier duty (e.g. a GPT or bespoke custom instructions).

I may set aside some time to try a heavier-duty option, if only to help me be able to more clearly explain the Simulacra levels. I find them very useful, but when I try to explain them to other people they seem to bounce off about half the time (partly, I assume, from my explanations).
