36 Comments
Greg

Grim. The good news is that the nearest bar is open.

Miles Shuman

With extinction-risk and religion, both, I’m militantly agnostic: I DON’T KNOW AND YOU DON’T EITHER

Life In The Labyrinth

Do you think that’s a priori true? Is it ever necessary to examine arguments about either of those subjects in order to determine whether or not they are knowable? Or can you simply come to your conclusion before examining the evidence because of the nature of these subjects?

Miles Shuman

I completely endorse examining all available evidence, in both cases. My conclusion is simply that there’s very little I consider to be *evidence* in either one. A lot of “reasoning”, but in the absence of sufficient grounding in reality/evidence to yield any conclusions with significant predictive or diagnostic value.

Mark

The question remains (in both cases I guess) - in practice, what are you going to *do*?

Miles Shuman

Mostly chop wood & carry water, living the one life I’ve been given, and accept that the fate of humanity is not something I control?

Mark

Some aspects of your life have "been given to" you, others you can change for yourself. And you should! This is also the case for humanity as a whole, although the influence of any one individual on humanity as a whole is much smaller proportionately.

artifex0

If anyone wants to subject themselves to that NYT review, there's an archived version at: https://archive.ph/br5ls . The author manages to fit an impressive number of both misunderstandings and expressions of contempt into a very short word count.

Mo Diddly

I didn’t read it as contempt so much as self-aggrandizement. This top expert in the field has some opinions and laid them out in a book. I too, a reporter, have opinions, and I will give our two opinions roughly equal word count.

Jamie Fisher

BINGO!

Lex Spoon

That's a substantial complaint, though! The book is paywalled and uses scary, tabloid-like tactics such as big sans-serif fonts and harsh red-and-black coloring.

The Archive post is online and has its reasoning stated for anyone to discuss and reply to. That is an important principle of evidence-based reasoning.

Max B

Well, are humans superior to chimps? Yes. So to me there is nothing wrong with AIs being superior to humans. And it is also unlikely that ALL humans will go extinct, just like chimps did not.

Nathan Fish

Superiority is not a concern for an aligned ASI. Chimps exist because some humans prefer that chimps exist, and this has so far outweighed human preferences to put the last of their habitats to other uses.

There's little reason to believe that an unaligned ASI would happen to prefer that humans exist.

Mark (Sep 18, edited)

Humans are a threat to destroy AI (by turning it off). Chimps are not a threat to destroy humanity.

If a species were in fact a threat to destroy humanity or even a significant part of humanity, you can bet we would indeed annihilate it. We did this with smallpox, we did it with wolves in many regions, there are currently discussions to do it to certain mosquito species.

Maybe future AI would leave some of us in zoos or nature reserves where we do not threaten it. Better than extinction I guess.

Max B

Exactly. AI would likely leave humans alive, just like we try to conserve many animal species. And AI, being more intelligent, is likely to be smarter about it too.

Mark

Like I described, humans do not try to conserve species that threaten them.

Nathan C

If an AI is super intelligent, beyond human control, then humans wouldn't be a threat to it. Wolves and humans share 84% of the same DNA. A lone human wandering into a pack of wolves could be in trouble; and so we feel fear, and in some places eradicated wolves. An AI with the power to manipulate and work around people wouldn't have this fear of us, or the motivation to kill us.

Mark

Intelligence doesn't necessarily allow manipulation. I am far more intelligent than my toddler, but I mostly fail at motivating them to do what I want when they don't want it.

In any case, even if manipulation were perfect, at some point there would come unavoidable resource conflicts between humans and AI. For example AI might want to tile the entire world in solar panels, at the expense of all human agricultural land. The current parallel is that US suburbs and roads steadily expand into the countryside at the expense of animal habitat.

Jeffrey Soreff

"Maybe future AI would leave some of us in zoos or nature reserves where we do not threaten it."

Agreed. I think that the odds of this are unpredictable, somewhat like the utility function that an ASI comes up with is unpredictable (I like Yudkowsky & Soares's "sucralose" analogy). Will an ASI have hobbies? Will it have a hobby of maintaining some humans? Nobody knows.

Dan

I'm curious what efforts are being made to put the book in the hands (or headphones) of policymakers who matter.

avalancheGenesis

Of course the Ems guy doesn't like the book. Very on-brand for Hanson! I know the divergence between OB and LW started long ago, but it's remarkable how seemingly-small initial differences led to this yawning chasm over a foundational tenet of rationalism. Like, even if I didn't find myself in Yudkowsky's camp generally...reading the two, one could be forgiven for naively assuming they have no relation, and aren't part of a greater subcultural/discursive sphere. Other offshoots like the post-rats at least feel like kissing cousins still.

Looking forward to my copy arriving. That and FdB's incipient novel: two apropos tales for a crazy time to be alive.

Jean Marie Carey

Stephen Fry believes, and often espouses, that mammals are not sexually dimorphic, and Matthew Yglesias is in favour of the censorship of the President of the United States, so I am not sure their favourable opinions of a book they’ve likely only skimmed and barely understood are ringing endorsements.

Jamie Fisher

Why can't Yudkowsky and Hanson have a debate again?

Or, why can't we lock Yudkowsky and Hanson in a room until the two of them agree?

I say this mostly seriously, because how do you convince the other 99% of humanity who doesn't know or doesn't care... when the other 99% of humanity has a whole refrigerator full of intellectuals on their side?

Lex Spoon

There is certainly a case for danger. The things the book and these reviews describe can happen. There is a great story by Vernor Vinge about how super-intelligent vampires escape captivity, showing one of the many ways that an AI could do it.

However, there is much reason to believe it's not the most likely outcome, and from what I can find in online reviews (the book itself is paywalled), the book does not address the reasons that stand out most to me. Here are my top reasons:

* The history of super-intelligence is not that the higher-intelligence species eradicates the other ones. The most compelling example to me is that of the fungi kingdom. The emergence of animals and then humans was largely a non-event for fungi, for hundreds of millions of years at this point, and we should consider that and ask why? Even though we can beat them easily in a direct fight, the main thing is that we have no reason to.

* Most calls for catastrophe disappear quietly 10-20 years later. This one is special, sure, but only so special.

* The history of trade. Humans do as well as we do due to trade, and a super-intelligence is likely to calculate that the same thing is good for them. Otherwise, we are not talking about a super-intelligence at all, but just some other sort of technological threat.

The larger harms I am worried about are not direct eradication but subtler things:

* We could fall into something like the story of Narcissus, staring at our reflections until we fade from existence. Our brains are not made for the cult-like messages they are receiving via broadcast TV, social media, and now conversational AI. Our web browsers do not include good counter-measures, and a lot of people are not taking their own steps to check facts and touch grass.

* A weapon could get loose. While there is some risk that a coding assistant or a Reddit auto-poster will be what dooms us, my money is on a drone army that got loose--something actually designed to kill humans but that was a little too good at its job.

Nathan C

Entirely agree. The odds that a super intelligent being will want to murder everyone are low. Humans don't want to murder all chimps, or zebras, for example. A highly intelligent AI would be more interested in studying us than eradicating us.

Unfortunately it feels like a lot of discussion of this book is tinted by hero worship of Yudkowsky as a pillar of the Rationalist community. "If anyone builds it, everyone dies" is a pithy slogan that folks can use to express their orthodoxy as part of the group.

[insert here] delenda est

I don't think that the primary claim is that they will _want_ to murder us. It is rather that they won't allocate resources to us and just won't notice when we die.

Max B

There are a lot of resources on earth and even more in the solar system. Maybe humans won't be able to mindlessly and endlessly consume when AI is in charge...

But that there will be absolutely no resources to sustain some human population is highly unlikely.

[insert here] delenda est

That's one way of looking at it: some fraction of us get to enjoy subsistence, kinda like Lost.

Leaving aside the virtue of that possibility, I think you may be better off focusing on the possibility of AI being exploited by the wrong person to the wrong end.

Max B

There are already plenty of things which can be exploited by the wrong person to the wrong end.

AI actually seems the least dangerous in this way, because again, try to imagine chimps exploiting humans. Doesn't work, eh?

[insert here] delenda est

I don't see the analogy: future AIs will likely exceed our processing power by as much as we exceed chimps, and then by even more.

But my point was about humans: NK, for example, or Russia, or China, could leverage AI to severely weaken us.

Jeffrey Soreff

"The history of super-intelligence is not that the higher-intelligence species eradicates the other ones."

I agree with your example of fungi, but consider the absence of surviving Neanderthals, Homo erectus, etc. There _are_ surviving great ape species, but they basically survive because some humans have a passionate interest in preserving them. If it weren't for that, humans would very likely have crowded them out of their habitat.

Nicholas Halden

Is it really right to call Scott's review "very positive"? That seems pretty misleading to me. He concludes:

"Despite my gripes above, this is an impressive book. Eliezer Yudkowsky is a divisive writer, with plenty of diehard fans and equally committed enemies. At his best, he has leaps of genius nobody else can match; at his worst, he’s prone to long digressions about how stupid everyone who disagrees with him is. Nate Soares is equally thoughtful but more measured and lower-profile (at least before he started dating e-celebrity Aella). His influence tempers Yudkowsky’s and turns the book into a presentable whole that respects its readers’ time and intelligence. The end result is something which I would feel comfortable recommending to ordinary people as a good introduction to its subject matter."

Which seems kinda mixed, maybe bullish-neutral, and then you have to pair that with the fact that Scott's own p(doom) has drifted much lower, while Eliezer seemingly still holds his initial 95+% in this book.

Jeffrey Soreff

I do really like the "sucralose version of subservience" as an analogy suggesting how strangely attempts to control an AI via training may go wrong.

I'm still unpersuaded that AIs will be as profoundly alien as the authors expect. They are being trained on human text, after all. I think a stronger case is just to note that they can wind up _at least_ as alien as our fellow humans have become: jihadists, Nazis, Maoists etc. which is essentially just as bad.

I wish their example scenario didn't depend so much on the _speed_ of an AI. The calculation of Sable's ability on page 119 looks like it uses 1 GPU = 1 human brain to come up with the "200,000 brains sharing memories". As nearly as I can tell, 1 GPU executing about 10^9 op codes per second, even if an op code is approximately a neural firing, is more like 10^7 neurons (at 100 firings/sec) - about 1/1000th of a brain.
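The order-of-magnitude arithmetic here can be sketched in a few lines. All constants are this comment's rough assumptions (including the ~10^10-neuron brain count implied by the "1/1000th" figure), not measured values:

```python
# Rough check of the GPU-vs-brain estimate in the comment above.
# All constants are order-of-magnitude assumptions, not measurements.
gpu_ops_per_sec = 1e9             # "about 10^9 op codes per second"
firings_per_neuron_per_sec = 100  # "at 100 firings/sec"
neurons_per_brain = 1e10          # order of magnitude implied by "1/1000th"

# If one op code ~ one neural firing, a GPU keeps up with this many neurons:
neuron_equivalents = gpu_ops_per_sec / firings_per_neuron_per_sec  # 1e7
fraction_of_brain = neuron_equivalents / neurons_per_brain         # 1e-3

print(f"~{neuron_equivalents:.0e} neuron-equivalents per GPU")
print(f"~{fraction_of_brain:.0e} of a brain per GPU, not 1 brain per GPU")
```

On these assumptions, the "1 GPU = 1 human brain" figure would be off by roughly three orders of magnitude, which is the point being made here.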

p 113 "It [a superintelligence] would have no limits but the laws of physics." Many CAD problems are NP-hard. If P!=NP (unproven, but that's what everyone expects), getting the optimal solution to any of these is usually exponential, and unfeasible for anyone or anything. A weaker, but much more likely claim is that a superintelligence would be able to find _good enough_ solutions, where they exist, to all significant design problems, which would have approximately the same consequences.
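As a toy illustration of that blow-up (a generic example, not from the book): brute-force exact search over all orderings of n items, as in exact TSP, grows factorially, so even a machine checking an assumed billion candidates per second stalls around n = 20:

```python
import math

# Factorial growth of exhaustive search: number of candidate orderings for
# n items, and wall-clock time at an assumed 1e9 candidates/second.
for n in (5, 10, 15, 20):
    candidates = math.factorial(n)
    seconds = candidates / 1e9
    print(f"n={n:2d}: {candidates:.2e} orderings, ~{seconds:.2e} s at 1e9/s")
```

20! is already ~2.4e18 orderings, roughly 77 years at that rate; "good enough" heuristic solvers sidestep this by not demanding the provable optimum.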

On a closely related note, I'm copying what I wrote in https://www.astralcodexten.com/p/book-review-if-anyone-builds-it-everyone/comment/155197865 :

Yeah, I get somewhat irritated by not distinguishing between:

1) somewhat enhanced ASI - a bit smarter than a human at any cognitive task

(Given the "spikiness" of AIs' capabilities, the first AI to get the last-human-dominated cognitive task exactly matched will presumably have lots of cognitive capabilities well beyond human ability)

2) The equivalent of a competent organization with all of the roles filled by AGIs

3) species-level improvement above human

4) "It'll be so smart that it'll be able to do anything not expressly forbidden by physical law".

Since we are an existence proof for human-level general intelligence, it seems like (1) must be possible (though our current development path might miss it). Since (2) is just a known way of aggregating (1)s, and we know that such organizations can do things beyond what any individual human can, both (1) and (2) look like very plausible ASIs.

For (3) and (4) we don't have existence proofs. My personal guess is that (3) is likely, but the transition from (2) to (3) might, for all I know, take 1000 years of trying and discarding blind alleys.

My personal guess is that (4) is probably too computationally intensive to exist. Some design problems are NP-hard, and truly finding the optimal solutions for them might never be affordable.

EDIT: Two additional comments:

a) I think the strongest argument in the book is that we have almost always developed new technologies through a series of debugging steps. While there are examples of inventions that worked on the first attempt (the Trinity test was not a fizzle), they are very much the exception, not the rule.

b) I think the argument that we only get one try with ASI is pretty weak. It depends very much on how _much_ smarter the ASI is than humans, how _much_ power it has initially, and how _much_ faster it thinks than humans. John von Neumann did not wind up as emperor of the world. Would an ASI as smart as a dozen cooperating John von Neumanns end up as emperor of the world? I don't think the answer is trivially yes.

Dominic Caldwell

We already have superintelligence, Zvi. Our world has had superintelligence since long before you were born. And energy constraints alone, plus category errors, make the kind of superintelligence you are talking about Obvious Nonsense, to quote a wise person. https://dominiccaldwell.substack.com/p/gpt5-is-great-but
