Some essays don’t ask questions—they trigger algorithms.
This one clicked like a gear in motion. Not cold, but clinical. Not cynical, but sobering. Beneath the charts and logic flows was something older: the fear that the systems we build might start building us back.
What if mechanization isn’t about tools, but about trust? About the tradeoffs we stop noticing until they define the frame?
And what do we lose when complexity stops serving clarity?
Still mapping the machine.
Still asking who’s driving.
What would you redesign?
Your turn now ♾️
Podcast episode for this post:
https://open.substack.com/pub/dwatvpodcast/p/you-better-mechanize
I actually had a dream today in which I had been reading your post about Anthropic’s recent interpretability papers!
This format flowed better -- by which I mean it read more quickly and intuitively for me -- than the rigid timestamp format, imo.
I don’t listen to the podcasts, only read your reviews, so I found the flowier format more enjoyable and easier to decipher than the timestamped one.
I see no guarantee that this will even be good for current humans.
Travel booking too cheap to meter!
I saw the Dwarkesh episode video in my 'recommended' list on YouTube but only started watching it because someone on Twitter thought the views of the two Mechanize founders offered reason to hope (that an AI takeover was less likely).
I think they offer some reasonable disagreement about whether and how many capabilities are still missing for AGI (and how hard they will be to achieve), but I, like you, disagree.
They basically scoff at the risk of misalignment or any of the existing evidence that it's likely or already happening.
But I was also surprised that they Speak Directly into the Microphone and pretty plainly affirm that, even given those disagreements, they expect takeover anyway. Their thinking about what seem like the obvious implications of that future is strange – maybe very 'far mode', and maybe because they either haven't thought about it in detail or don't want to?
Thanks for that. I listened to the whole thing because I was curious about their economic bottlenecks argument, but the inconsistency in the rest of their arguments was too much.
I vastly prefer this format of podcast review!
Agree with other commenters: I vastly prefer this podcast format.
Relatedly, on "it will no longer be economical to build out the compute" - Niels Bohr, in the 1930s, dismissed the practicality of an atomic bomb. Isolating the fissile isotopes needed for a chain reaction (k > 1) was too energy intensive. His direct quote was that "You would have to turn the whole United States into a factory..." to get the U-235 for even a single bomb.
Which was a good dismissal. Until General Groves and the Manhattan Project he led effectively did that. The Oak Ridge plant alone consumed something like 1% of the United States' entire electricity production in 1943.
Existential strategic competition can move more than mountains.
I think revenue is a pretty good metric to focus on, because everyone can agree it matters, and it's much less hand-wavy than most of these "intelligence explosion" debates.
For comparison, the global tech industry makes about $10T in revenue (according to o3). So yeah, a world where AI is making $1T in revenue is certainly important, but not revolutionary. Probably more useful to look at is the growth rate. For example, OpenAI made $4B in revenue in 2024 and projects $12B in 2025. If they managed to triple for four more years, they'd be the highest-revenue tech company. (Apple and Microsoft are around $250-400B; Amazon is higher but maybe shouldn't really count.)
The "AI 2027" scenario, for example, seems like it would have to involve an even higher growth rate than this yearly 3x.
Talking in terms of dollar values (or any other fixed unit of account) only makes sense close to equilibrium. Long-run comparisons need to look at actual quantities of stuff.
Consider, for example, the late Roman Republic, with a GDP of somewhere around 20 billion denarii per year. A denarius could buy you about ten kilograms of wheat, or one day of unskilled manual labor. And so depending on how you construct your basket, you can conclude the Roman economy was anywhere from about the size of Starbucks to the size of the UK. Both are wrong - industrial economies are not agrarian economies but bigger, they are categorically different things.
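To make the basket-dependence concrete, here is a rough conversion. The modern prices are my own illustrative assumptions (wheat at ~$0.30/kg, unskilled labor at ~$100/day), not figures from the comment:

```python
# Rough conversion of ~20 billion denarii/year using two different baskets.
# The modern prices below are assumptions for illustration only.
gdp_denarii = 20e9
wheat_usd_per_kg = 0.30     # assumed modern wheat price
labor_usd_per_day = 100.0   # assumed modern unskilled day wage
wheat_basket = gdp_denarii * 10 * wheat_usd_per_kg   # 10 kg wheat per denarius
labor_basket = gdp_denarii * 1 * labor_usd_per_day   # 1 day's labor per denarius
print(f"wheat basket: ${wheat_basket / 1e9:.0f}B")   # ~$60B: Starbucks-scale
print(f"labor basket: ${labor_basket / 1e12:.0f}T")  # ~$2T: UK-scale
```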
A post-AI-takeoff economy, similarly, will not just be a bigger version of ours; a dollar then is not just more or fewer dollars now. Suppose the price of land stays pretty much stable, but the price of labor drops 50% year over year. And then it does it again. And again. And again. So, are constant nominal revenues in such a scenario doubling in real terms, or staying flat? Depends on what you're buying with them.
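A toy deflator calculation shows how both answers to that question can be right at once (the numbers here are made up purely for illustration):

```python
# Constant nominal revenue, land prices flat, labor prices halving every year.
# "Real" revenue depends entirely on which price index you deflate by.
nominal = 100.0
land_price = 1.0
labor_price = 1.0
for year in range(1, 5):
    labor_price *= 0.5  # labor drops 50% year over year; land stays flat
    in_land_units = nominal / land_price    # flat at 100: revenue is stagnant
    in_labor_units = nominal / labor_price  # 200, 400, 800, 1600: doubling yearly
    print(f"year {year}: land-deflated {in_land_units:.0f}, "
          f"labor-deflated {in_labor_units:.0f}")
```

Deflate by land and revenues are flat; deflate by labor and they double every year - exactly the ambiguity the comment points at.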
Yes, this format is better. Though I usually read/listen to the original pod first.
Also: "concierge-level social kung-fu" is a very good phrase. Thank you, sir o3.
It's super funny that Zvi acknowledges human capital owners would inevitably be overthrown by the AIs doing the actual labor, but seems oblivious to the revolutionary implications of this.
Similarly, is it really surprising that the market would produce a deeply anti-human company like Mechanize? Maybe this says something important about the anti-human nature of the market itself?
Marx's ghost is cackling.
Re Marx's ghost:
<mildSnark>
An ASI takeover can be viewed as a final battle of Capital and Labor.
(Hmm... Capital taking over - even over the capitalists...)
</mildSnark>
Mildly prefer this format, yes. I would like to have some links and/or timestamps noted for some of the main arguments or significant pieces to listen to.
Two thoughts on the history parts specifically:
1. The Spartans absolutely won at Thermopylae. Yes they all died, and yes they failed to stop the Persians, but that was never a realistic possibility. Their goal was to delay the Persians long enough for the Athenian fleet to leave Athens and go across to the island of Salamis rather than be burned along with the city of Athens. And they succeeded - the few days they fought let the Athenian fleet unite with triremes from other city states and defeat the Persian fleet, which won the war. Thermopylae was very much a success, once you adjust for what the win condition actually was.
2. They talk a bit about what someone in 1500 could do to influence the world. Well, what I was thinking was that someone in AD 1 who wanted to destroy the institution of slavery did something that succeeded 1,900 years later… Christianity was pretty vital to the moral argument against slavery, after all.
Other than that, I generally agree with your points. This was one of the few of Dwarkesh’s podcasts where I ended up convinced in the direction opposite his guests - their arguments seemed incoherent and unconvincing to me in general.
I didn't get past the mission statement before bailing out. "We will achieve this by creating simulated environments and evaluations that capture the full scope of what people do at their jobs".
Oh, you're just going to perfectly quantify every aspect of every job and then write perfect numerical benchmarks to train a model to do them better? The only thing more staggering than the stupidity is the sheer arrogance, the hubris, the pig-ignorant self-aggrandizing GALL.
This will crash and burn for extremely obvious reasons and I will enjoy it deeply.
Especially weird given that they also say the “world is too richly detailed to reason about”…