19 Comments

"Leaking"... to the board of directors?

Ffs, we are doomed

This is one reason we need to write to lawmakers and emphasize whistleblower protections. It seems like something everyone can agree on.

I just signed up with PauseAI. Hopefully I can do something useful, even if it's small

As Zvi said, we all have a LOT more power than we think. I'll check and see if we can connect there. #PauseAI

Basically, Leopold publicly aided the losing side in the fight for control of OpenAI, so he got fired when his faction lost. Similar to Ilya.

This is pretty normal in the world of corporate board fighting… read Barbarians at the Gate for a great story of a similar situation in the 80s. (That one even has private detectives! And a lot more vice.)

Another case of nominative determinism: The name Leopold means "brave people". (German, so I'm assuming the implication is "lionlike" and not only brave.)

Jun 17 (edited)

I don't know any of the facts of this particular case, but just a general observation: As a securities litigator, my understanding is that OpenAI employees who raise concerns about potential security risks have strong whistleblower protections under Sarbanes-Oxley. The Supreme Court's decision in Lawson v. FMR LLC establishes that employees of contractors to publicly traded companies like Microsoft are covered by Sarbox anti-retaliation rules.

Hence, if an OpenAI employee has a good faith belief that the company is not properly addressing significant security risks, it follows that information potentially material to Microsoft's shareholders is not being disclosed. That's a sufficient basis for employees to report their concerns internally, to law enforcement, or to Congress. They do not need to prove actual securities law violations to be shielded from retaliation. One major limit, however, is that Sarbox does *not* protect disclosures to the news media or otherwise sending information outside organization/government channels.

The SEC has broad authority and substantial resources, and they are sometimes happy to explore exotic theories of securities fraud. An OpenAI whistleblower making a case that the company is concealing critical risks could get the SEC's attention, on the theory that information is being concealed from Microsoft investors. While this isn't legal advice for any particular situation, would-be whistleblowers at OpenAI should know they may have a path to protect themselves if they feel compelled to speak out.

(Disclaimer: not legal advice. Employees in this situation should obtain counsel before acting)

> There are of course… other factors here.

This was too vague, for me at least. I'm a regular reader.

That $0 FMV valuation to the IRS -- it sounds both terrifying for the employees *and* arguably fraudulent to the USG at the same time!

author

On the contrary, if it is true that they can take it back for $0 and it's not going to pay a distribution, then the FMV is indeed $0! Fridge brilliance.

I guess I understand the non-disparagement clause. If your company makes widgets, you don't need one until the anti-widget doomers claim that widgets end the world. It's liability protection, because you won't just get "my boss was an asshole" from ex-employees. You'll get "my boss wants to kill your children."

Zvi, do you have any general thoughts on which institutions folks with relevant skills can work at and move the needle in the right direction?

It feels like Anthropic are still broadly on the side of the angels, but beyond that it's probably just niche outfits like ArcEvals, government work like the UK's AISI, or academia.

author

My current position is essentially:

OpenAI: Mustache twirling villains even if they weren't working on AI.

Google: They are what they are; if there is an alignment/safety role, form your own judgment, but obviously don't work on capabilities.

Anthropic: They are not making this easy, investigate and decide for yourself, but working in the alignment/safety departments seems fine at least.

Other Labs: To the extent they are competitive, mostly somewhere between Google and OpenAI, except Meta, which is worse.

> So that was the worst they could find in what was presumably a search for a reason to fire him.

I'm confused... legally speaking, why did they need a reason? Couldn't they just fire any employee at any time without stating the reason?

author

Legally speaking, they mostly do not need a reason, except that different consequences are triggered depending on whether a firing is 'for cause' or not. But they chose to spin things anyway.

Nitpicking point: I understand that the memo was sent to the board after Sam's sacking, when the board was basically no longer in real control of the company. Am I getting this right?

I'm thinking about making a public petition/commitment version of the Right to Warn (at least points 1-3), stating that the signatory won't use paid versions of any model from labs not abiding by it. Possibly also a trigger clause where the commitment only comes into effect after N other people have signed. Any feedback on that idea?

Jun 23 (edited)

I remember an argument from an old discussion about a "ticking bomb" scenario: is it permissible to torture a terrorist suspect to save hundreds or thousands of people from an imminent terrorist attack? The argument was: sure, you can do it, but you get 10-20 years in prison *regardless of the outcome*. If you're pretty sure that the stakes are that high, that you got the right guy, and that it will work, surely you will be willing to pay that price, again, *regardless of the outcome*? And if you're not that sure, then you don't clear the incredibly high bar required to make that terrible decision. It seemed like a very weird proposition at first, but the more I thought about it, the more sense it made.

Contrasting that with what's happening here is actually hilarious. Releasing confidential info is something one should decide to do only with a very heavy heart, not least because of the higher-order consequences. Yet we see people who apparently weigh a non-negligible chance of imminent total extinction of the human race against having to sell their equity within 60 days, and want the latter removed because it outweighs the former.
