11 Comments
Edward Scizorhands

I was going to dunk on Lehane for not vanishing into dust after making up the "vast right-wing conspiracy," but he came up with the term a few years *before* Lewinsky, and then Hillary Clinton used it.

Still, maybe he should be a little quiet.

> When a third party inserts themselves into active litigation,

Oh brother.

Andras Kornai

Dear Zvi, thanks for putting in the notforprivategain.org link. This (the May 12 version) says "OpenAI has a bespoke legal structure based on nonprofit control". But does it? Has anybody expressed the Articles of Incorporation in a contract language like Solidity? Is there a machine verification that the contract actually asserts this?

I am of course very aware that OpenAI has better access to deep lawyers than any non-profit (other than itself). This is precisely why the non-profit arm of OpenAI needs to ask frontier models whether the restructuring attempt is legit. Note that this is already a very active research area; just ask ChatGPT (my own prompt was "Could you give a summary of the current effort on applying theorem provers to formalized legal contracts?", but pick your own).

To put the question in a positive manner: has anyone provided a formalized version of the existing articles of incorporation and related docs that make up this bespoke framework?

[insert here] delenda est

That's interesting, but I'm not sure what it has to do with the legal dispute, since courts decide those, and courts don't refer to formalized versions of contracts in the sense you mean.

Ethics Gradient

So, as it happens, I recently served a response to a similarly overbroad third-party subpoena in one of my own cases. While I think most assessments of the specific dynamics at play here are broadly correct, it's worth noting that Nathan's response (viz., "We're not affiliated with Musk, and as to the rest go fuck yourself") is a fairly typical and time-honored way of responding to third-party subpoenas. Not that someone a year out of law school would necessarily be familiar with that (and if it was served on him in his personal capacity rather than as an agent of Encode, then wtf mates), and it isn't cheap to hire third-party attorneys to do anything. But *in and of itself*, and from the perspective of practicing attorneys (for whom the expense is, of course, a feature rather than a bug), this isn't conceptually that big a deal to handle, and ChatGPT (if you want to be ironic) or Claude (if you don't) could probably write a pretty robust "GFY" response to this.

That doesn't make any of this okay. But, in a way that's perhaps not intuitive to people more used to interacting with the legal system in ways that don't brook as much flexibility or common sense (parking tickets, etc.), this is an instance in which "telling the other side to go fuck themselves" is actually a responsive gambit that's totally consonant with the legal system (as it was here). It would be actually litigating the issue further that would be potentially reflective of intimidation and bad faith, whereas the MoFo guys were probably more in the vein of "overzealous in a boring, fishing-expedition way."

Again, broadly consonant with the conclusions reached in Zvi's article.

CCCCC

"If they had taken that approach, this incident would still have damaged trust, especially since it is part of a pattern, but far less so than what happened here. If that happens soon after this post, and it comes from Altman, from that alone I’d be something like 50% less concerned about this incident going forward, even if they retain Chris Lehane."

Genuine question, why? From what I've seen, every time something like this happens at OpenAI (the NDAs, Superalignment, 4o's sycophancy, etc.) Altman goes on twitter and says "Uwu we made a fucky wucky, we're vewy sowwy," literally nothing of any value happens internally, and a few months later we get another scandal. Yet he's still seemingly so credible to you that you'd be "50% less concerned" if he posted a tweet about it. Genuinely, why?

Jonathan Woodward

My guess would be that Zvi thinks knowing what proper behavior should be, even if Altman chooses not to act that way consistently, is better than neither knowing nor caring about proper behavior.

Mark

Sociopaths are generally both very good at knowing how society works, and totally uncaring about the debris caused by their messing with society.

Jeffrey Soreff

"At least OpenAI (and xAI) are (at least primarily) using the courts to engage in lawfare over actual warfare or other extralegal means, or any form of trying to leverage their control over their own AIs. Things could be so much worse."

Very much agreed.

I'm not happy about lawfare, but, as a mere member of the public, I don't see any obvious, viable alternative. E.g., when I see claims like "no other company takes a legal action like X", I have essentially no way to check that. Testing a negative that applies over hundreds of years of law and millions of potential litigants??? At absolute most, all I can do is look at the legal action and maybe take some guesses about whether it looks flatly illegal. Unwritten mores and norms in the legal system are completely opaque to me (and, I suspect, to the vast bulk of the public).

One could imagine a world where everyone followed the spirit of every law and all legal actions were launched in good faith, but, not only is that spectacularly different from the actual world, it might not even be a _coherent_ dream - everyone has a different idea of what the spirit of a law _is_ and of what _counts_ as good faith (and, for that matter, what counts as reasonable).

Michael S. Tucker

Thanks to Zvi for this insightful timeline of OpenAI's controversial legal and business actions. Zvi's final recommendations for the Board are sound. Both Chris Lehane's and Jason Kwon's records of contentious actions damaging to the company's reputation are widely and publicly documented. Recently, Joshua Achiam has spoken out where others have hesitated, suggesting the call is also coming from within the organization. If the board needs independent third-party advice, hiring an external firm such as Global Strategy Group, Public Opinion Strategies, EY-Parthenon, or Freedman Consulting to research public, governmental, and industry perceptions would underscore the importance of implementing the suggested changes.

Roger Ison

I am reminded that integrity is not a big thing. It is many small choices, one after another, every day.