3 Comments

“There was huge pressure exerted on holdouts to fall in line, and not so subtle warnings of what would happen to their positions and jobs if they did not sign and Altman did return.”

I was one of the ~5% who didn't sign. I did not perceive huge pressure to sign, nor did I face any repercussions. A couple of people messaged me to ask if I had seen the doc and was going to sign (quite reasonable given the lack of company-wide comms at the time). I said I agreed with the letter in spirit but not with every particular point, so I didn't want to sign. My answer was accepted without pressure or judgment. So based on my actual experience, I would dispute your narrative of huge pressure and warnings; I really don't think it's true at all.


It's refreshing to see someone on the Safety-ist side of the AI risk spectrum (to avoid using the "doomer" slur) acknowledge that we should accept some level of risk given the potential upside. Kudos to Zvi.

Is 1% risk appropriate? 10%? 99%? Impossible to say, because those numbers are entirely fictitious. I think EY is obviously correct that you can't nitpick individual doom scenarios, and Zvi is obviously correct that this is potentially a tremendously out-of-distribution event.

Those sauces taste equally good served over goose and gander. You cannot say that something is so novel that we have no way to predict its effects, then assume those effects will be negative based on comparisons to previous events in human history. You cannot point out that AI can go bad in ways we cannot predict, then poke holes in the reasons things could go well and declare yourself 99% certain of doom.

It goes against rationalist mores, but people should stop offering percentages. They're useful for calibrating where a conversation partner sits, sure, but so are "big risk," "little risk," etc. Those labels convey equal amounts of information, without the pretense of precision that even offering a range like 10-25% engages in.
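To make that "equal amounts of information" claim concrete, here is a toy sketch of my own (the bucket labels and ranges are invented for illustration, not taken from anyone's actual forecast): if a verbal scale and a numeric-range scale distinguish the same number of positions, they carry exactly the same number of bits.

```python
import math

# Toy model: treat a forecast format's "information" as how many
# distinguishable answers it allows (log2 of the bucket count).
# This illustrates the commenter's point; it is not a rigorous
# treatment of forecast scoring or calibration.

verbal_buckets = ["little risk", "moderate risk", "big risk"]
range_buckets = [(0, 10), (10, 25), (25, 100)]  # percent ranges

def bits(n_buckets: int) -> float:
    """Bits needed to single out one bucket among n equally likely ones."""
    return math.log2(n_buckets)

print(f"verbal scale: {bits(len(verbal_buckets)):.2f} bits")
print(f"range scale:  {bits(len(range_buckets)):.2f} bits")
# Both print ~1.58 bits: three coarse labels and three numeric
# ranges pick out the same number of positions on the spectrum.
```

The extra digits in "10-25%" buy nothing over "big risk" unless that added resolution is actually calibrated.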

"Why AI now?"

Two reasons:

1) In the long run, we are all dead. AI developed after my lifetime does me no good.*

2) Imagine a world where events fortuitously lined up so that the Industrial and Information Revolutions happened basically simultaneously with the Agricultural Revolution.** We could have skipped the thousands of years of slavery, warfare over land, etc. that the earlier agricultural systems were incentivized to engage in.

There is "big risk" that we could end up in a world where AI is good enough to replace ~75% of labor. This will naturally select for a world where those ~75% of people must subsist off the largess of the 25% fortunate enough to have the "genetic gifts" (read: guanxi) to remain employed. Many, many ways this goes terribly.

Is this worth risking humanity's existence over? That's a personal decision. I can't help but note, though, that the people who would have us slow down are the ones most threatened by their most distinctive asset - intelligence - becoming obsolete. These aren't the people who work the vast majority of jobs that are dull, soul-sucking, or that otherwise make for a grey existence. They want you to keep doing those. They'll be busy going to fun conferences and writing long arguments about the things they are predisposed to find fascinating.

Most people are already slaves to an uncaring intelligence. AI certainly presents new and exciting risks. It's also the only thing offering real upside. Place your bets.

* Thanks to cryonics, there's still that 0.0001% chance I'll be able to benefit. If you aren't signed up for cryonics, do it today. The more people who do, the better the chance that someone (or something) will find a reanimation process I can benefit from.

** If your monocle is popping with anger over the low probability, remember that superintelligence is similarly out of distribution.


Great analysis ... lots to unpack here. However, the panic argument fails. We're way too ignorant. There's SO much more to know and find out. We only figured out how the sun works barely 100 years ago (quantum tunneling letting protons fuse despite the Coulomb barrier). We didn't actually know what powered the sun until then; up to the advent of quantum mechanics and nuclear physics, people imagined the sun burning ordinary fuel, first wood, then coal, then oil. And this after many thousands of years and billions of people wondering about the sun (once they stopped ascribing its power to gods). We didn't work out the chemistry of photosynthesis until the mid-1940s. We're essentially clueless about the dark energy and dark matter that make up most of the universe. By the way, who's talking about a random black hole coming within range to destabilize the solar system? It doesn't need to get all that close. Maybe you want ASI to help figure our way out of that challenge?
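For what it's worth, the tunneling point can be made quantitative. A standard textbook estimate (the numbers below are conventional reference values, not from the comment):

```latex
% Why tunneling is needed: thermal energies in the solar core are
% ~keV while the Coulomb barrier between protons is ~MeV-scale.
% Gamow-factor estimate for two protons at relative energy E:
P(E) \sim \exp\!\left(-\sqrt{E_G / E}\right),
\qquad
E_G = 2 m_r c^2 \,(\pi \alpha Z_1 Z_2)^2 \approx 493\ \text{keV for p--p}.
% At the core temperature, kT \approx 1.3\ \text{keV}, so
% P(kT) \sim e^{-19} \approx 5 \times 10^{-9}: fusion proceeds only
% because tunneling lets an exponentially rare tail of pairs through.
```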

Chaitin, in "The Limits of Mathematics":

“The normal view of mathematics is that if something is true, it’s true for a reason, right? In mathematics the reason that something is true is called a proof. And the job of the mathematician is to find proofs. So normally you think if something is true it’s true for a reason. Well, what Ω shows you, what I’ve discovered, is that some mathematical facts are true for no reason! They’re true by accident! And consequently they forever escape the power of mathematical reasoning.”
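For readers who haven't met Ω: it is Chaitin's halting probability, defined over a prefix-free universal machine U (the standard construction; the notation here is mine):

```latex
% Chaitin's halting probability: fix a prefix-free universal
% machine U and sum 2^(-|p|) over every program p that halts.
\Omega_U \;=\; \sum_{p \,:\, U(p)\ \text{halts}} 2^{-\lvert p \rvert}
```

Chaitin's incompleteness theorem then says a formal system whose axioms carry n bits of information can determine at most n + O(1) bits of Ω; the remaining bits are the facts that are "true for no reason."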

There's a lot more to be said about this; I will be doing so later. Right now, discussion is good, panic is not!
