23 Comments

Did you see that Ezra Klein gave you a shout-out in the NY Times?

Expand full comment

"win-win deal to diffuse" → "defuse"

"Executive Order could also does not" → extra "could"

"a bit three" → "a big three"

Expand full comment

- Kai Greshake (Donovan): Lot of details wrong here. In short: You can absolutely hook an LLM up to a closed-loop network that has regular material imported from the outside; you just have a sanitization process, plus like seventeen other safety protocols. It's not 100%, but neither is spam blocking, and we all use email.

- Larry Summers (insiders): Better phrased as "insiders don't criticize other insiders IN PUBLIC." In private, all the time.

Expand full comment

Thank you for writing these. Always impressively informative and fascinating!

Expand full comment

The board history timeline is pretty unusual. This board is churning like crazy! And never revealing the reasons for anything. In 2023 three different people left the board for undisclosed reasons. Or in Reid Hoffman’s case, it was later reported that he didn’t really think he had a conflict, but he stepped down because Sam wanted it.

It’s no real surprise that Sam wanted control of the board. That’s exactly how Zuck, the Google founders, and Steve Jobs operated. Maybe he just pushed a little too hard. Oops.

Personally I have confidence in Bret Taylor to be a good board member. He stood up to Elon Musk and handled himself in an incredibly stressful job while selling Twitter.

Bill Gates called himself a “minor wizard” for being resistant to Steve Jobs’s reality distortion field. I think Bret Taylor is also a minor wizard.

Expand full comment

Excellent as always Zvi, cheers!

Expand full comment

Shouldn't the resolution of the OpenAI back-and-forth as being "basically primate dominance games" cause us to update more towards p(doom)? It's like the quintessential archetype of the source of basic coordination problems, and they performed *less* well than most for-profit corporations do today, and at much higher stakes.

Because it sure doesn't fill me with warm fuzzies that the smart people on either side of the dust-up have "AI don't kill everyone-ism" priorities straight.

Expand full comment

Great as ever. Not sure about your understanding of the insiders rule - yes, as an insider you theoretically get to choose whether to listen to the outsiders' perspective, but if you're an insider in a non-scenescent organism/organisation then the further in you get the more likely it is that you have been chosen because you won't do so, and moulded so that you don't do so, and had the information that reaches you filtered in such a way that it would be difficult to do so. Insiders are as crippled as outsiders, just in different ways.

Expand full comment

I’m starting to feel like “artificial intelligence” as a phrase is preventing some people from taking doom seriously, because it’s too abstract, and they can’t engage with it. I wonder if saying “digital intelligences” or “intelligent software” would effect a change in those people’s perception of concern.

Expand full comment

re: the Loopt story about hiring people to pretend to be employees:

I can't find it mentioned online, but I recall hearing a story about how Palantir had gov't folks over at their offices to vet them early on. They were very small then so they went to Fry's to buy a bunch of computers and set them up to make it seem like Palantir had a lot more employees. Then they set the meeting for first thing in the morning, so they could posture like engineers never come to work before noon anyway, etc.

Not a value judgment for or against the Loopt thing, but the fact that Palantir did it is interesting considering that it was, you know, to get a defense contract.

Expand full comment

> machine-based systems that can, for a set of defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments

Am I missing something, or how does this not apply to a f---ing thermostat?

Expand full comment

Re: 17 (EA ?= totalitarian), I have to push back a little against your dismissal. The reason that EA proposals are not "completely ordinary regulatory regimes" is that they require cooperation of all world powers, something which (IIRC) EY has advocated that we should be willing to enforce with threats of bombing datacenters and so on.

A typical/boring law decided by the US govt is not totalitarian (e.g. some change to the tax code or whatever). Enforcement is directed inwards, and since the value of the law doesn't depend on absolute compliance, there's some slack on the margins.

In contrast, consider something like copyright law. The value of such laws depends in relatively large part on people outside the US going along with them. So we get a bunch of bullshit like US laws being imported into the EU in non-democratic ways, as riders on free trade agreements and so on. This already feels a lot more totalitarian, especially viewed from outside the US.

A more extreme example would be a law on gain-of-function research. A single breach can infect the world in a month, so 99% worldwide compliance is about as good as 0% worldwide compliance. We are aiming for 99.999%+ compliance. So, either (1) essentially all world governments agree on this, (2) we go to war or (3) we shrug and live with the risk of GoF for now, while slowly working on pushing consensus towards (1).

Now, where on this spectrum is AI regulation according to EAs? Seems pretty clear to me that it's another step beyond GoF research. Higher stakes. A harder to contain adversary. Option (1) is not the reality we live in; (3) is out per typical timelines of EAs. That leaves (2). A call that we need worldwide effective AI regulation *right now*, whatever the cost, *is* a call for totalitarianism.

(reposting here as I had trouble getting the comment through on wordpress)

Expand full comment

I took Ilya's alignment comments on that podcast to be an intentional simplification, but now that I read this I want to go back and re-listen. My guess is still on that though.

Expand full comment

As you are A LOT smarter than me, especially as it pertains the subject of AI, I had a few questions about the pessimistic (or realistic, if you prefer) position if you don't mind:

1. While a lot of experts are worried about AI killing everyone, it seems those closest to the frontier are not as worried as their fellow experts. Shouldn't that update us in an optimistic direction?

2. Assuming we reject any Pascal's mugging-type notion of infinite negative or positive utility, what would the perceived p(doom) need to be for you to agree to "take the plunge" on ASI? And does your answer weight good and bad outcomes equally?

3. Given the possible worlds we could be in, isn't it a good thing that the people at the world's (current) two leading AI companies take AI safety somewhat seriously? Shouldn't that be an additional cause for optimism?

Apologies if you've answered these questions before, and feel free to deny any of the premises if you think they are false.

Expand full comment