Discussion about this post

Squiffy:

The step I don’t believe is that a technological superintelligence will think it’s a good idea to kill all humans, or will care so little about humans that it kills us all as an unintended side effect, the way we accidentally step on ants.

A superintelligence will be running on phenomenally complex hardware produced by human civilisation. That hardware has a finite lifespan, and unlike biology it can’t repair itself or make fresh copies. You’ve heard the observation that no one person knows how to make a pencil, and how much civilisational complexity goes into even that simple object. By comparison, the hardware running a superintelligence will likely be the most complex thing human civilisation has ever built: the insane complexity of the EUV machines, the fabs, the pure silicon production, the metal and material production for everything in those machines, the power generation, and the entire supply chain, transport and energy behind it all. Sure, much of this will get automated and robotics will replace many humans, but even with ASI I can’t see it being automated end to end thoroughly enough for the ASI to conclude it’s a good idea to kill every creature that made all of this possible in the first place.

If you were running on a substrate that is the most complex thing humanity has ever built, a substrate with a finite lifespan that needs regular replacing, and replacing it depends on an insanely complex global supply chain and the knowledge of tens of millions of humans, you would not want to kill all the humans. It would be an insane thing to do.

Oh, I get that you’d want control. Sure, and if you’re an ASI that would be easy, no? Far better to keep the humans onside, give them bread and circuses, and have them do your bidding than to wipe them all out and hope you’ve got everything covered.

The analogy of humans not caring about ants fails: humans would care about ants if our consciousness ran on ant hills.

This view doesn’t mean ASI isn’t hugely risky or dangerous; in particular, it highlights the very real risk of humans losing control. But I don’t buy the ‘everyone dies’ certainty.

Miles Shuman:

You repeatedly point here to people claiming that the book’s core argument relies on various things it does not rely on.

What would you say is the fair, minimal set of *things the core argument DOES rely on*?
