Regarding 5c / 24: This is as close as Tyler has come to outright saying that populations will view AI as an existential threat not to their lives but to their way of life, and inherent in that is the direct friction he brings up.
Perhaps it is his unique intersection of being in higher education as well as his exposure to the DC area, but what Tyler seems to be getting at is that power in the real world is the limiting factor: who has it, and how it is exercised. These debates focus so keenly on the rational, economic, intelligence-based power dynamics, but the sectors of the economy that are currently the least productive operate, in almost all cases, by completely different power dynamics and incentives.
AI will not have a ballot. It cannot prevent a union card from being filed, and it doesn't (currently) have the ability to intimidate politicians in their restaurants, shame them in their places of worship, or affect their relative status.
Consider the case of commercial construction, a notoriously underproductive but major slice of the US economy. Possibly the part of the development pipeline most exposed to near-term AI is the design, architecture, schematic, and permitting process, which can be done entirely digitally, often online, with legible rules that can be cross-checked. I'd venture to guess that the job most likely to be impacted first is the architect's.
If AI were to replace a very large number of jobs in the architecture space, the logical follow-on (as is often argued) could be that development capital would seek to repeat the process for all the downstream jobs that are harder to crack on rational economic grounds, and those jobs would eventually be replaced too, some with AI alone, but most requiring large leaps in robotics as well. That would mean a massive explosion in our ability to build, not just the projects themselves but the businesses they support, and a huge jump in GDP growth to go with it.
But these industries already operate in a highly credentialed and regulated space, with very different power dynamics setting the boundaries of what is possible. I would expect politicians to be rapidly cajoled into altering and expanding the credentialing rules for projects to be approved, into requiring minimum union labor to finish construction, and so on. The framework for doing this already exists in many parts of the country and simply needs to be dusted off and replicated. AI would still improve the finish level, speed, and quality of projects, but there will be a big drag on how much improvement can be achieved.
No amount of intelligence is going to change the minds of large groups of society at risk of displacement, because the most intelligent thing to do as an individual in these cases is self-preservation, and those individuals are the ones who elect the politicians. This compounds in every sector. Imagine all the doctors unionizing and forcing every hospital to meet minimum doctor and nurse quotas, forcing AI to stay in the box it currently occupies as a copilot rather than an agent.
The assumption that with strong AI we get fewer unions and less regulation, instead of more unions and more arbitrary hurdles in low-productivity areas, seems wildly misplaced in the short to medium term, as the difference in returns to capital versus labor comes into even sharper relief.
Here's the way I think about economic growth under a regime of an AI whose cognitive capacity (intelligence?) surpasses that of humans: that AI is an infinitely replicable and accumulable thing, unlike human labor. This has to imply economic growth rates higher than historic averages, or than whatever slight bump Cowen sees. Whether it implies explosive growth is another question entirely. More here, with some interesting responses: https://open.substack.com/pub/maximumprogress/p/agi-will-not-make-labor-worthless?r=37ez3&utm_campaign=comment-list-share-cta&utm_medium=web&comments=true&commentId=85254626
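To see why accumulability matters, here's a toy simulation (a stylized Cobb-Douglas setup with made-up parameters, purely illustrative, not anything from the linked post): with fixed human labor, capital deepening runs into diminishing returns and growth decays toward zero; if "labor" can be accumulated the way capital is, the economy behaves like an AK model and the growth rate stops decaying.

```python
# Toy Cobb-Douglas growth model: Y = K^alpha * L^(1 - alpha).
# Parameters are made up for illustration; nothing here is calibrated.
ALPHA, SAVINGS, DEPRECIATION, YEARS = 0.3, 0.2, 0.05, 100

def simulate(accumulable_labor: bool) -> list[float]:
    K, L = 1.0, 1.0
    path = []
    for _ in range(YEARS):
        Y = K**ALPHA * L**(1 - ALPHA)
        path.append(Y)
        K += SAVINGS * Y - DEPRECIATION * K      # capital accumulates out of saving
        if accumulable_labor:                    # "AI labor": replicable, so it
            L += SAVINGS * Y - DEPRECIATION * L  # accumulates just like capital
    return path

for label, flag in [("fixed human labor", False), ("accumulable AI labor", True)]:
    Y = simulate(flag)
    final_growth = 100 * (Y[-1] / Y[-2] - 1)
    print(f"{label}: growth in year {YEARS} ~ {final_growth:.1f}%/yr")
# Fixed human labor: growth decays toward 0 as K nears its steady state.
# Accumulable AI labor: with K = L the model is effectively Y = K, so growth
# settles at a constant savings - depreciation = 15%/yr and never decays.
```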
Tyler has said that intelligent people are very good at figuring out how to climb hierarchies, but not so good at figuring out which hierarchies to climb, or even that climbing hierarchies is something they could or should be trying to do in the first place. Wise people can figure that out, but they are worse climbers.
In Clausewitz you have Strategy, Operations, and Tactics. I think more intelligence would make someone better at Operations and Tactics, but wouldn't help that much with Strategy, particularly with Grand Strategy. Meaning, it wouldn't help answer, or even pose, the question "What are our goals, and why do we want them?", because those are judgment calls, in the Kantian sense of the word judgment.
So when Tyler underrates intelligence, I think that's what he means. He thinks it's great at figuring out how to solve this or that problem, but not at choosing which problem to pursue next, up to what point, and why. Intelligence is not only misaligned with economic growth; by default it's not really aligned with anything in particular.
So if the wise EU people somehow manage to align AGI with themselves, we wouldn't get as much economic growth as if it were aligned with the uncultured barbarians.
Uncultured barbarians think that things that look good to them are good; wise EU people think that there are things that look good but are secretly bad, actually, and things that look bad but are secretly good, actually.
I really wish that Dwarkesh had asked him about ASI. It's not clear to me that Cowen has ever actually taken a moment to consider whether an AI that would compare intellectually to humans the way we compare to wild animals is plausible, or what effects it might have if so.
This conversation is excellent. Tyler's position about diminishing returns is a huge CRUX (in the Rationalist sense); people thinking about AI's impact over the next 5-10 years have to decide which side of it they land on.
Scott Alexander's recent blog post about "priest classes" is also very salient in my mind as a related issue, because AI control of, or influence over, various disciplines will initially be gated by these people and the barriers they impose.
If AI does FOOM the way Dwarkesh insinuates it might (recursive self-improvement), it might figure out a way to rapidly undermine the priest classes (bypassing the bottlenecks Cowen is talking about) and take control. But:
1. It's not at all clear to me whether AI will FOOM, in either the Yudkowsky or the Christiano projection.
2. If you're FOOM-ing, the priest class issue is, at best, a side concern. The easiest way to get rid of the priest class is for all of them to drop dead.
William Gibson's maxim is what one should return to here.
“The future is already here – it's just not very evenly distributed.”
Choose your vassalship to a technofeudal lord carefully; the interregnum has the potential to be quite horrible. Despite the conversation ending on a positive note, Cowen's point that Rare But Nightmare Fuel Wars are a possible outcome is not to be ignored.
The biggest issues in America (and the developed world in general) are government regulation, energy, the housing theory of everything (these three are obviously related), and AI.
If we could reduce regulation (except in AI safety), growth could go back to 1950s rates, or to China's in 2000-2015. Let's make it happen.
I'm largely with Tyler here, though I wouldn't presume to speak for him.
I think it's sufficient for these conclusions to assume that AI will not readily crack commercially viable, general-purpose robotics. In that case, AI could take over the majority of cognitive work while still having little impact on growth, because it would remain dependent on human labor for most physical tasks, and that's the bottleneck.
To sketch a fuller scenario, suppose:
1) No development of commercially viable, general-purpose robots.
2) The development of AI agents that can perform at the 90th percentile level cognitively in most professions.
3) AI continues to have very high compute requirements, requiring heavy capital expenditure.
4) Most people remain mistrustful of AI relative to other humans and by default act less cooperatively towards AI agents than human ones.
I think all of these are eminently reasonable assumptions, and if you take them together they suggest a world where AI massively fucks over knowledge workers but doesn't generate particularly impressive economic growth.
A lot of our economy right now takes the form of systems that operate fairly efficiently under their current paradigm, where big productivity gains would require a shift in paradigm. And even if useful in the long run, such a shift would entail immense capital costs. Especially if AI is sucking up more and more capital investment, it's just hard to pull off those redesigns.
Take, say, a grocery store. Give it extremely capable AI (but not robots). What changes? The AI can do better at predicting demand and managing inventory, and it can take over bookkeeping, legal work, scheduling shifts, etc. That's great, but those are really small gains. And you still need human labor to stock shelves, clean up spills, bring in the carts from the parking lot, deter theft, etc. As I understand it, one of the big reasons self-checkout remains so limited is that people are much more willing to steal from a machine than from a human (see #4 above). Better tech doesn't change that. So you end up with a grocery store that no longer requires inputs from expensive human professionals (which are a tiny share of costs) but otherwise looks about the same.
Could AI come up with some whole new paradigm for how people get food? Sure, maybe, though that's a problem we've thrown a hell of a lot of intelligence at for a long time. To the extent it does, it's probably going to cost a fortune to set up that new paradigm, and there's just not enough free capital floating around out there to make that transition quickly.
Hard disagree with that last sentence. There is a _boatload_ of capital available for even the sketchiest of ideas. That is the one thing you do not have to worry about.
Innovation cannot be bought with money and financial engineering. Newton did not invent calculus for money, and without calculus your modern world is impossible. The magical AI you’re concerned with is actually just gradient descent, aka root finding using derivatives, which was invented by Newton and his contemporaries.
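To make the analogy concrete: minimizing a function amounts to finding a root of its derivative, and both of the methods below do exactly that, using nothing but derivative information. A toy 1-D sketch (illustrative only, not anyone's actual training code):

```python
# Minimize f(x) = (x - 3)^2 + 1, i.e. find the root of f'(x) = 2*(x - 3).
def f_prime(x):
    return 2.0 * (x - 3.0)   # gradient of f

def f_double_prime(x):
    return 2.0               # second derivative (constant for this f)

# Gradient descent: repeatedly step downhill along the gradient.
x = 0.0
for _ in range(100):
    x -= 0.1 * f_prime(x)
print(f"gradient descent: x = {x:.6f}")   # converges to 3.0

# Newton's method: find the root of f'(x) using its own derivative.
x = 0.0
for _ in range(10):
    x -= f_prime(x) / f_double_prime(x)
print(f"Newton's method:  x = {x:.6f}")   # lands on 3.0 immediately, since f' is linear
```

Modern training runs are essentially the first loop at enormous scale, with automatic differentiation supplying the gradient.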
I don't really get how AI could replace all knowledge workers without us also getting robots.
As for the grocery store, that case is like trying to make a faster car. How about taking a picture of your fridge, having a nice conversation with DeepClaudeGPT about what you want to eat this week and why, and getting it delivered by the next day? This is something that was really, really difficult to do a few years ago and today seems possible. Note that this is grocery delivery, which seems to be something real, not meal delivery, which felt very ZIRPy.
I was very much on your side as I listened to the podcast - in fact, I’m always a bit relieved when I read your takeaways and find that your intuitions meshed with mine.
That being said, I think your statement “Feels like bottleneck is almost a magic word or mantra at this point” kind of misses the point. He is saying about bottlenecks what you have often said about AGI/ASI/advanced AI - any given bottleneck may be overcome or reduced or a story may be told about why it’s not real. But, he thinks, there is a fundamental truth that bottlenecks will fill the space available, and if the one you’re talking about now can be reduced, well, there’s always another. I don’t agree with it, but I think it’s worth understanding his point of view on it.
Separately, Tyler’s thoughts on Churchill baffled me. How you can look at Churchill’s career and say that only his late career was impressive, I just don’t understand.