You could, like, consolidate the galaxy into a dense space in which the orbits of the stars function like a well-orchestrated globular cluster and the central black holes are harvested for energy and possibly Weyl-computation. You could then start accelerating the galaxy towards a larger cluster to begin cosmic-scale raids and gobble them up like the Triangulum Transmission civilization from Orion's Arm. You could play Thriller in 60,000 dimensions with that kind of compute, because in space, no one can hear you scream.
I agree that this was good because it got people engaged. There are many people who are interested in inequality but are stuck thinking about AI as "a scam that plagiarizes and uses up all our water with no real use case" or, if they have thought about it a bit more, "a way to take our jobs". Bringing up Piketty attracts progressives who are interested in wealth and inequality and gets them to actually start thinking about real takeoff scenarios.
That said, the next step is to point out that wealth taxes and property rights don't make sense as assumptions when you are building Dyson swarms. But still, this could get some more people thinking more correctly.
Progressives and Dems used to be “the smart party”. But it now feels like, due to internet-induced brainrot and algorithmic echo chambers, they’ve started to lose that title. Hopefully this starts to set them back on track. Not particularly hopeful though; it's hard to pivot and reframe a currently very dominant ideology. Talking about AI water usage and DEI 4 AI when we are having a pretty fast takeoff before our eyes…
Thanks for putting this together. I guess I feel that in real AI takeoff scenarios this all becomes moot, either because of doom or because much smarter entities work it out. In a more normal, less-takeoff scenario, we probably continue to putter along with slightly increasing inequality but more abundance, so people are better off but more resentful, unless AI gets better at psychoanalysis.
Podcast episode for this post:
https://open.substack.com/pub/dwatvpodcast/p/dos-capital
The entire traditional capital-labor dichotomy crumbles in an ASI (arguably even AGI) scenario anyway, which you get at when you point out that so many of these arguments (even somewhat accidentally) treat AI as normal tech, which it obviously isn't. The dichotomy is only a useful economic model when the real returns to each input ultimately depend on the relative shares of the inputs. With ASI rendering all other forms of capital obsolete or at least subordinate, the returns to an AI depending purely on its capabilities, and the returns to human labor being driven to zero, capital in the form of ASI is all there is, and it is not very epistemically useful to continue discussing distributive or disempowerment issues in the language of capital and labor.
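To make the "relative shares" point concrete (my own gloss, using the textbook Cobb-Douglas form purely as an illustrative assumption, not anything from the post): with

\[ Y = A K^{\alpha} L^{1-\alpha}, \qquad \frac{\partial Y}{\partial L} = (1-\alpha)\,A\left(\frac{K}{L}\right)^{\alpha}, \qquad \frac{\partial Y}{\partial K} = \alpha\,A\left(\frac{L}{K}\right)^{1-\alpha}, \]

each factor's return is pinned down by the ratio K/L. If ASI substitutes for human labor outright, the return to human labor is no longer determined by that ratio at all, which is exactly the point at which the dichotomy stops doing useful work.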
This is just my unedited kneejerk reaction, so I'd love to know what others think. But for the aforementioned reasons I think there is at least some value in shifting the framing to a power-consumption system, where "power" means control or ownership of AI. ASI won't exhibit the same dynamics as traditional capital when it comes to conversion into consumption, either.
There might be analogs for the consumption / investment distinction in an ASI-dominated world.
If the ASIs have utility functions, then fast-time-scale improvements of one of the parameters monitored by an ASI's utility function might function somewhat analogously to "consumption" in current economics. And _deferred_ improvements to one of the parameters monitored by an ASI's utility function might function analogously to capital accumulation.
E.g. it might be the difference between "Gaa, _finally_ got the bottleneck in my silicon boule crystal growing facility cleared" vs "Ok, at this point do we invest in the proposal for speculative Jovian resource mining/harvesting machinery or stick to incremental additions to less speculative expansion of mining/harvesting machinery from shallower gravity wells?"
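A toy numerical sketch of that distinction (purely illustrative: the agent, the invest_fraction split, and all the numbers are assumptions of mine, not a model of any actual ASI), in Python:

def run(horizon: int, invest_fraction: float) -> float:
    """Split each period's output between immediate utility
    ("consumption": improving a monitored parameter now) and capacity
    growth ("investment": deferred improvement that compounds)."""
    capacity = 1.0          # productive capacity, e.g. boule-growing throughput
    growth_per_unit = 0.5   # how much each invested unit raises capacity
    total_utility = 0.0
    for _ in range(horizon):
        output = capacity
        consumed = (1.0 - invest_fraction) * output   # enjoyed now
        invested = invest_fraction * output           # deferred
        total_utility += consumed
        capacity += growth_per_unit * invested
    return total_utility

# Over short horizons, consuming everything wins; over long horizons,
# the deferred "capital accumulation" strategy dominates.
for horizon in (3, 30):
    print(horizon,
          round(run(horizon, invest_fraction=0.0), 1),
          round(run(horizon, invest_fraction=0.5), 1))

The only point of the sketch is that the consumption/investment trade-off survives even when the "consumer" is a single utility function rather than a household.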
"I have 1000x what I have now and I don’t age or die, and my loved ones don’t age or die, but other people own galaxies? Sign me the hell up. Do happy dance."
- Man About To Be Killed By His Galactic Neighbors
Interesting how Zvi regards property rights and rule of law as a mere inconvenience for superintelligent rogue AI but thinks ultrabillionaires who have control over superintelligent AI will somehow respect them.
If I had a nickel for every time an influential SV tech blogger thought that ultrabillionaires who have control over superintelligent AI will follow the law or respect their prior commitments, I'd have two nickels, which isn't a lot but it's weird that it happened twice.
A lot of ultrabillionaires would definitely be dangerously misaligned with the interests of humanity. As bad as that situation could be, however, it probably still wouldn't be as dangerous as misaligned ASI.
Even the most sociopathic of ultrabillionaires would most likely have friends or powerful rivals who would be unhappy with them completely wiping out the rest of humanity. As a class, they might want to give the rest of humanity almost nothing, but they probably wouldn't support omnicide. And in a genuinely post-scarcity future, the former, as unethical as it would be, might still represent a pretty good quality of life by modern standards most of the time.
ASI built without a solution to the alignment problem, on the other hand, could value entirely alien things for which humanity is completely irrelevant, and we probably wouldn't survive that.
The richest ultrabillionaire, realistically, commands less than one percent of the resources of a large government. The monopoly on the use of force still holds, and therefore so does its (mostly latent) threat.
That may very rapidly become untrue for superintelligent AIs.
Zvi clearly stated that in this section he engaged assuming Dwarkesh's implicit assumptions (that Zvi himself listed, one of them being that property rights will be respected).
For me, the cartoon with the AI conquistadors is a picture worth a thousand words. A(G/S)I isn't a technology; it's first contact with an alien species about which we currently know ~nothing.
"...an alien species about which we currently know ~nothing."
You'd describe our knowledge of something we've constructed, trained primarily on our own outputs, graded along our own scales, have elaborate systems for shaping, and increasingly interact with in a distributed way as a species continuously producing tacit and experiential data as indistinguishable from zero?
There's almost nothing we can say with confidence about what a future ASI would value. We have no solid theory for predicting high-level motivations given a reward function, we can't be confident that the AI labs will hold off on building ASI until we have one, it's probably not something we can muddle through with trial and error like we do with most technologies, and we have good reason to think that the space of possible motivations an agent might have is much larger than the range of familiar human motivations.
Maybe we get very lucky and all that human training data and whatever the ASI equivalent of post-training is are sufficient to produce something friendly and familiar. Maybe alignment happens by default. But maybe not. Our uncertainty right now is enormous, and that uncertainty is dangerous given what we're risking.
More or less dangerous than not taking the risk?
Agree that if there is nothing we can say with confidence, then there is nothing that can be said with confidence; in such an environment confident claims should be more suspect - your confidence level in these claims should be inversely related to your initial confidence in their correctness.
It seems to me that if you can't align an ASI (or we don't find a way to), then we all die.
If you can align an ASI, then the folks at whatever company builds it will very definitely align it to respect the property rights of their shareholders (or maybe employees, if they can get away with it), as they are maximally incentivized to do that.
So, conditional on survival (which is the only scenario where this matters), of course we should expect a small number of people to end up with all the wealth. Things will get weird in all kinds of ways but the people doing the aligning end up with most of the resources at the end in any kind of scenario where you can meaningfully do alignment.
(I suppose there's a third possibility where it turns out that you can somehow get ASI not to kill us without being able to align it to any other goals but I don't know why that would happen and it seems pretty unlikely).
What am I missing?
You write: “I don’t think the inequality being ‘hard to justify’ is important. I do think ‘humans, often correctly, beware inequality because it leads to power’ is important.”
I think you’re close to something crucial here but framing it backwards. The problem isn’t that some gain power — it’s that others become disempowered.
Consider money printing. The issue isn’t that asset holders benefit from inflation; it’s that everyone holding cash or fixed income loses purchasing power.
Runaway inequality isn’t problematic because of envy or because Gini coefficients are inherently meaningful. It’s problematic because it destroys the social compact that makes property rights enforceable in the first place. You spend much of this post questioning whether property rights would hold in these scenarios. But you’re treating that as an exogenous variable when it’s actually endogenous to the inequality dynamics themselves.
The rise in political violence we’ve seen recently, the spread of misinformation, the erosion of institutional trust — these aren’t separate phenomena from inequality. They’re the normal historical pattern of what happens when large portions of a population feel they have no stake in the existing order. Property rights have never survived that for long, as you yourself note.
So when you cite a metastudy on inequality and individual well-being, you’re looking at the wrong outcome variable. The question isn’t whether unequal societies make individuals sad. It’s whether they remain stable enough to sustain the property rights and democratic governance which we take for granted at our peril.
I think we desperately need better discussions of the economic arguments around AGI.
And yes, extreme inequality is inherently bad, but utilitarians will never understand.
Inherently bad *why*? Poverty is bad. Concentrated power is bad unless wielded by angels or in a world where there's no incentive to abuse others. But inherently? Can you explain?
I got an email containing this reply:
"Extreme inequality is bad for the same reason that raping corpses js [is?] bad."
Since I don't see the reply here I assume the poster had second thoughts - I won't mention their name.
But I'm truly mystified why "extreme inequality is *inherently* bad". Maybe it is - I'd like to hear the argument. But if the (retracted) reply is the best argument there is, that's no argument at all. (It amounts to "it's icky".)
"Would We Have An Inequality Problem? It is not obvious that we would."
Perhaps worth adding another dimension of argument that could end up being relevant here: economic inequality seems like it might have been an important factor in past societal collapse throughout history (https://existentialcrunch.substack.com/p/economic-inequality-and-societal)
Solid breakdown of where the Trammell/Patel framework falls apart. The property rights assumption is doing way too much heavy lifting when we're talking about entities that could rewrite the rules themselves. I've been in startups where even modest AI capabilities shifted power dynamics faster than governance could adapt, so extrapolating that to ASI feels optimistic about institutional resilience. The point about the capital-labor dichotomy collapsing is understated here, though: once you have systems that can both invest and produce at speeds beyond human comprehension, the whole frame becomes kind of meaningless.
Humans may continue to matter to *other humans*, probably not otherwise. If so, there's a role for comparative advantage in the human-to-human economy.
AI will probably find the concept of property rights useful for themselves, but (as you say) that doesn't mean they need to respect human property rights once humans lose power.
But AI systems aren't evolved (yet, anyway) and so don't have the acquisitive instinct that leads to destruction of (weak entities') property rights. They may to some extent do what we ask them to.
I think that's our best hope. It's a slim reed.
Fantastic takedown of the libertarian essentialist assumptions hiding in plain sight. The bit about Piketty unintentionally describing an AI future only works if you assume property rights survive superintelligence, which is basically writing I OWN YOU on an index card. I work in policy and see similar frameworks deployed constantly, everyone nodding along to Econ 101 applied to worlds where those axioms clearly break. The real story isn't inequality among humans; it's whether humans retain any meaningful control.
I wrote something on the Galaxies thing by Patel and Trammell myself: https://substack.com/home/post/p-183723334