Discussion about this post

Zvi Mowshowitz

This is the parent comment for cruxes. If you can identify something that, if you changed your mind about that question, would change your mind about the whole scenario, then reply with it here - or if it's already here, like the existing comment (and refrain from liking comments here for any other reason).

Dylan Black

I really enjoy your articles; you think very deeply about your subject. However, I've noticed a trend in AI discourse in general where "intelligence" is treated as a superpower that magically allows you to skip all the difficult, experimental steps of, for example, building a nanotech superweapon. It seems to me that a superintelligent AI will have much the same problem I have when trying to make anything really novel work: the world has a lot of confounding factors, and it's almost impossible to get something really new right on the first try. I can't count how many times my experiments on things that I KNEW worked were foiled by a loose screw, or a slightly magnetic screw, or a cable touching another cable when it wasn't supposed to, or some other un-simulate-able problem. The AI takeover scenario postulated here has so. many. steps like this.

72 more comments...