4 Comments

What about the option where AGI just isn't that powerful? Maybe it's really useful, but you can only get the equivalent of an Einstein working for you in exchange for half a data center's worth of computation (and even self-improvement only offers a factor of 2x or so).

Such a world doesn't look very different from our world and doesn't clearly fall into any of your buckets. Sure, maybe some AIs eventually become dictators, but chance plays a large role there (even political savants are far from guaranteed winners). For your buckets to be exhaustive, you really need AGI not just to be general intelligence but to offer near-godlike powers.

author

You need both halves of that: the AGI is only barely over the capability line and stalls there indefinitely, AND it's super expensive and stays that way. I consider that a very narrow edge case that seems highly unlikely to be hit, and would mostly call it 'not AGI' here: if the AGI is not a more efficient use of atoms to create cognition than humans are, we don't count it.


Yes, I agree that AGI needs to offer some substantial computational advantages (though I'd quibble about what that means in some cases), but why does it follow that, just because it's an efficient use of atoms to do computation, it must either remain controlled or become dominant? It seems like you need to assume not just that AGI is a useful way to do computation, but that this computational power necessarily translates into political or military power in a way that rules out any relatively balanced equilibrium.

Why couldn't you simply have AIs holding some positions of power in the world and people holding others? Even if AI is very beneficial computationally, it's not clear why you'd expect all of it to work for the AI-run side, any more than all scientists work for a country run by a scientist.

And besides, computational power is nice, but we have physical bodies honed by millions of years of evolution, while it might take a damn long time for AI to move into similarly survivable, modular, and useful bodies. As such, as long as AGI doesn't have godlike abilities to manipulate people, the most effective coalitions might have to involve various kinds of power sharing (e.g. because, if everything goes to shit, the AI knows it's not the one most likely to survive).


And I'd note that even a FOOM-style takeoff doesn't really get you a result where one particular AI's alignment ends up dominating. After all, if AI alignment is hard, then it's going to be hard for AIs too, so if they try to spawn more capable versions or even subsystems, they'll face the same problem of those having potentially different goals.
