Regarding 5c / 24: This is as close as Tyler has come to outright saying that populations will view AI as an existential threat not to their life but to their lives, and inherent in that is the direct friction he brings up.
Perhaps it is his unique intersection of being in higher education as well as his exposure to the DC area, but what Tyler seems to be getting at is that power in the real world is the limiting factor: who has it, and how it is exercised. These debates focus so keenly on the rational, economics- and intelligence-based power dynamics, but the sectors of the economy that are currently the least productive operate in almost all cases under completely different power dynamics and incentives.
AI will not have a ballot. It cannot prevent a union card from being filed, and it does not (currently) have the ability to intimidate politicians in their restaurants, shame them in their places of worship, or affect their relative status.
Consider the case of commercial construction, a notoriously underproductive but major slice of the US economy. Possibly the part of the development pipeline most exposed to near-term AI is the design, architecture, schematic, and permitting process, which can be done entirely digitally, often online, with legible rules that can be cross-checked. I'd venture to guess the job most likely to be impacted first would be the architects.
If AI were to replace a very large number of jobs in the architecture space, the logical follow-on (as is often argued) could be that development capital would seek to repeat the process for all the downstream jobs that are harder to crack on rational economic grounds, and those jobs would eventually be replaced too, some by AI alone, but most requiring large leaps in robotics as well. That would mean a massive explosion in our ability to build not just the projects themselves but the businesses they support, and a huge jump in GDP growth to go with it.
But these industries already operate in a highly credentialed and regulated space, with very different power dynamics setting the boundaries of what is possible. I would expect politicians to be rapidly cajoled into altering and expanding the credentialing rules required for project approval, into requiring minimum union labor to finish construction, and so on. The framework for doing this already exists in many areas of the country and simply needs to be dusted off and replicated. AI would still improve the finish level, the speed, and the quality of projects, but there will be a big drag on how much improvement can be achieved.
No amount of intelligence is going to change the minds of large groups at risk of displacement, because the most intelligent thing to do as an individual in these cases is self-preservation, and those individuals are the ones who elect the politicians. This compounds in every sector. Imagine all doctors unionizing, forcing every hospital to meet minimum doctor and nurse quotas, and forcing AI to stay in the box it currently occupies as a copilot rather than an agent.
The assumption that with strong AI we get fewer unions and less regulation, rather than more unions and more arbitrary hurdles in low-productivity areas, seems wildly misplaced in the short to medium term, as the difference in returns to capital versus labor comes into even sharper relief.
If that is all so, then should we expect that the US, UK, India, and other countries heavy on red tape will be quickly eclipsed by Argentina, Singapore, and others willing to put AI to full use? And if so, will that change the geopolitical situation in such a way that other countries will be forced to make changes?
Here's the way I think about economic growth under a regime of AI whose cognitive capacity (intelligence?) surpasses that of humans: such an AI is an infinitely replicable and accumulable input, unlike human labor. That has to imply economic growth rates higher than historic averages, or than whatever slight bump Cowen sees. Whether it implies explosive growth is another question entirely. More here, with some interesting responses: https://open.substack.com/pub/maximumprogress/p/agi-will-not-make-labor-worthless?r=37ez3&utm_campaign=comment-list-share-cta&utm_medium=web&comments=true&commentId=85254626
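To make the replicability point concrete, here's a toy Solow-style sketch, entirely my own numbers and framing rather than the linked post's model: with Cobb-Douglas output and fixed human labor, reinvested capital hits diminishing returns and output plateaus; make "labor" accumulable like capital, as AI workers would be, and the economy behaves like an AK model, so output compounds instead.

```python
# Toy Solow-style comparison (arbitrary illustrative parameters).
# Output is Cobb-Douglas: Y = K^a * L^(1-a). A fixed share of output is
# reinvested. If only capital K accumulates, diminishing returns cap
# output; if "labor" L is also accumulable (AI workers), output compounds.

ALPHA, SAVINGS, DEPRECIATION = 0.3, 0.2, 0.05

def output_after(accumulable_labor: bool, steps: int = 200) -> float:
    K, L = 1.0, 1.0
    for _ in range(steps):
        Y = K**ALPHA * L**(1 - ALPHA)
        K += SAVINGS * Y - DEPRECIATION * K
        if accumulable_labor:  # AI "workers" bought and stockpiled like machines
            L += SAVINGS * Y - DEPRECIATION * L
    return K**ALPHA * L**(1 - ALPHA)

print(output_after(False))  # fixed human labor: plateaus near ~1.8
print(output_after(True))   # accumulable AI labor: compounds without bound
```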
Tyler has said that intelligent people are very good at figuring out how to climb hierarchies, but not so good at figuring out which hierarchies to climb, or even realizing that climbing hierarchies is something they could or should be trying to do in the first place. Wise people can, but they are worse climbers.
In Clausewitz you have Strategy, Operations, and Tactics. I think more intelligence would make someone better at Operations and Tactics, but wouldn't help that much with Strategy, particularly Grand Strategy. Meaning, it wouldn't help answer, or even pose, the question "What are our goals, and why do we want them?", because those are judgment calls, in the Kantian sense of the word judgment.
So when he underrates intelligence, I think that's what he means. He thinks it's great at figuring out how to solve this problem or that problem but not at choosing which problem to pursue next and up to what point and why. It's not only misaligned with economic growth, but not really aligned with anything in particular by default.
So if the wise EU people somehow manage to align AGI with them, then we wouldn't have as much economic growth as if it were aligned with the uncultured barbarians.
Uncultured barbarians think that things that look good to them are good, wise EU people think that there are things that look good but are secretly bad, actually, and things that look bad but are secretly good, actually.
I think separating the components in this way isn't insane, though the labels "wisdom" and "intelligence" are always going to be controversial. But why does he think they're not positively correlated? Why does he think AI won't have decent instincts at Strategy?
I don't know whether they're positively correlated in humans; they might be. Spinoza thought they were, for example: more intelligence would make you wiser and more moral. It would be great news for humanity if that were the case. But for Kant they're different mental activities: intelligence consists in logically manipulating mental representations of the world, while judgment depends on taste, on irrationally linking abstract concepts such as morality to likes and dislikes, as if they were rotten food or a pleasant smell. Intelligence and wisdom failing to correlate in humans would make sense if they were based on different physiological structures, say the prefrontal cortex versus the insular cortex or the amygdala.
It's not that AI doesn't have decent instincts at Strategy; it's more that in the current paradigm it can't really do Strategy. You train a certain Strategy into AI by giving it goals, and then the intelligence part kicks in and finds the best way to pursue them. But it can't choose between competing goals, or between a goal and having no goal at all, unless there's a larger goal somewhere in the training.
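A minimal sketch of what I mean, with a hypothetical hill-climbing loop standing in for the trained system: the loop is the "intelligence", and it competently pursues whatever goal function it is handed. The Strategy question, which goal to have, never arises inside it.

```python
import random

def pursue(goal, iters=10_000):
    """Hill-climb toward whatever `goal` scores highly; goal-agnostic."""
    x = 0.0
    for _ in range(iters):
        candidate = x + random.uniform(-0.1, 0.1)
        if goal(candidate) > goal(x):  # the only question the loop ever asks
            x = candidate
    return x

# The goal is supplied from outside, at "training" time; swap it and the
# same loop pursues the new one with equal competence and equal indifference.
print(pursue(lambda x: -(x - 7) ** 2))  # ~7.0
print(pursue(lambda x: -abs(x + 3)))    # ~-3.0
```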
Really wish that Dwarkesh had asked him about ASI. It's not clear to me that Cowen has ever actually taken a moment to consider whether AI that would intellectually compare to humans the way we do to wild animals is plausible, or what effects it might have if so.
This conversation is excellent. Tyler's position about diminishing returns is a huge CRUX (in the Rationalist sense); people thinking about AI's impact over the next 5-10 years have to decide which side of it they land on.
Scott Alexander's recent blog post about "priest classes" is also very salient in my mind as a related issue, because the barriers imposed on AI control of, or influence over, various disciplines will initially be gated by these people.
If AI does FOOM via the mechanism Dwarkesh insinuates (recursive self-improvement), it might figure out a way to rapidly undermine the priest classes (bypassing the bottlenecks Cowen is talking about) and take control. But:
1. It's not at all clear to me whether AI will FOOM, in either the Yudkowsky or Christiano projection.
2. If you're FOOM-ing, the priest class issue is, at best, a side concern. The easiest way to get rid of the priest class is for all of them to drop dead.
William Gibson's maxim is what one should return to here.
“The future is already here – it's just not very evenly distributed.”
Choose your vassalage to a technofeudal lord carefully; the interregnum has the potential to be quite horrible. Despite the conversation ending on a positive note, Cowen's point about Rare But Nightmare Fuel Wars being a possible outcome is not to be ignored.
The biggest issues in America (and the developed world in general) are government regulation, energy, the housing theory of everything (these three are obviously related), and AI.
If we could reduce regulation (except in AI safety), growth could go back to 1950s growth rates, or to China's 2000-2015 rates. Let's make it happen.
“The biggest issues in America (and the developed world in general) are government regulation, energy, the housing theory of everything…”
I mostly agree.
And this is an enormous component of Tyler’s diminishing returns point.
Because while there *might* be some hope on the energy axis, the idea that AI will do much to fix government regulation and housing in the medium term is… far-fetched, at best.
I'm largely with Tyler here, though I wouldn't presume to speak for him.
I think it's sufficient for these conclusions to assume that AI will not readily crack commercially viable, general-purpose robotics. In that case, AI could take over the majority of cognitive work while still having little impact on growth, because it remains dependent on human labor for most physical tasks, and that's the bottleneck.
To sketch a fuller scenario, suppose:
1) No development of commercially viable, general-purpose robots.
2) The development of AI agents that can perform at the 90th percentile level cognitively in most professions.
3) AI continues to have very high compute requirements, demanding heavy capital expenditure.
4) Most people remain mistrustful of AI relative to other humans and by default act less cooperatively towards AI agents than human ones.
I think all of these are eminently reasonable assumptions, and if you take them together they suggest a world where AI massively fucks over knowledge workers but doesn't generate particularly impressive economic growth.
A lot of our economy right now takes the form of systems that operate fairly efficiently under their current paradigm, where big productivity gains would require a paradigm shift. And even if it were useful in the long run, such a shift would entail immense capital costs. Especially if AI is sucking up more and more capital investment, it's just hard to pull off those redesigns.
Take, say, a grocery store. Give it extremely capable AI (but not robots). What changes? The AI can do better at predicting demand and managing inventory; it can take over bookkeeping, legal work, shift scheduling, etc. That's great, but those are really small gains. And you still need human labor to stock shelves, clean up spills, bring in the carts from the parking lot, deter theft, etc. As I understand it, one of the big reasons self-checkout remains so limited is that people are much more willing to steal from a machine than from a human (see #4 above). Better tech doesn't change that. So you end up with a grocery store that no longer requires inputs from expensive human professionals (which are a tiny share of costs) but otherwise looks about the same.
Could AI come up with some whole new paradigm for how people get food? Sure, maybe, though that's a problem we've thrown a hell of a lot of intelligence at for a long time. To the extent it does, though, it's probably going to cost a fortune to set up that new paradigm and there's just not enough free capital floating around out there to make that transition quickly.
Hard disagree with that last sentence. There is a _boatload_ of capital available for even the sketchiest of ideas. That is the one thing you do not have to worry about.
Innovation cannot be bought with money and financial engineering. Newton did not invent calculus for money, and without calculus your modern world is impossible. The magical AI you're concerned with is actually just gradient descent, aka root-finding using derivatives, which was invented by Newton and his contemporaries.
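To be concrete about the kinship I'm asserting (a toy illustration, not a claim about how any particular model is actually trained): Newton's method finds a root of the derivative, while gradient descent follows the derivative downhill, both minimizing the same function with second- and first-order information respectively.

```python
# Both methods minimize f(x) = (x - 3)^2 by using derivatives.

def f_prime(x):
    return 2 * (x - 3)  # first derivative of (x - 3)^2

def f_second(x):
    return 2.0          # second derivative, constant for a quadratic

# Newton's method, applied as root-finding on f'(x) = 0:
# one step suffices for a quadratic.
x_newton = 0.0 - f_prime(0.0) / f_second(0.0)
print(x_newton)         # 3.0 exactly

# Gradient descent: many small first-order steps in the direction -f'(x).
x, lr = 0.0, 0.1
for _ in range(60):
    x -= lr * f_prime(x)
print(round(x, 4))      # ~3.0
```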
Reductive. Human intelligence is just evolution.
The human animal isn't any more special than any other collection of biophysical reactions in a compact space with a boundary.
I don't really get how AI can replace all knowledge workers and we don't get robots.
As for the grocery store, in that case it's like trying to make a faster car. How about taking a picture of your fridge, having a nice conversation with DeepClaudeGPT about what you want to eat and why for the week, and getting that delivered by the next day? This is something that was really, really difficult to do a few years ago and today seems possible. Note that it's grocery delivery, which seems to be something real, not meal delivery, which felt very ZIRPy.
What’s special about robotics? Have you seen the latest Boston Dynamics human-shaped robots? Their gains seem to have been substantially driven by improved software (control systems and simulation environments), which “normal software engineers” would obviously accelerate.
Is this something about AI not currently having embodied understanding?
Timeframe matters ginormously to this conversation - 3 years vs 10 years vs 20 years vs 30 years.
“…suggest a world where AI massively fucks over knowledge workers…”
Seems far more likely to me that, taking a page from Average is Over, it “fucks over” the bottom chunk (30%? 70%?!? No clue) of “knowledge workers”, but makes the remainder massively more productive and so massively better off.
In much the same way that the computer revolution “fucked over” the average secretary who was mostly a typist and calendar keeper, but caused good administrative assistants to be more productive and increase their value.
I was very much on your side as I listened to the podcast - in fact, I’m always a bit relieved when I read your takeaways and find that your intuitions meshed with mine.
That being said, I think your statement “Feels like bottleneck is almost a magic word or mantra at this point” kind of misses the point. He is saying about bottlenecks what you have often said about AGI/ASI/advanced AI - any given bottleneck may be overcome or reduced or a story may be told about why it’s not real. But, he thinks, there is a fundamental truth that bottlenecks will fill the space available, and if the one you’re talking about now can be reduced, well, there’s always another. I don’t agree with it, but I think it’s worth understanding his point of view on it.
Separately, Tyler’s thoughts on Churchill baffled me. How you can look at Churchill’s career and say that only his late career was impressive, I just don’t understand.
Slightly late, podcast episode for this post:
https://open.substack.com/pub/dwatvpodcast/p/on-dwarkesh-patels-4th-podcast-with
I think I tend to agree more with you than with Cowen, but this at least seems like interesting, productive pushback against AI-changing-everything-ism, compared to stopped clocks like Gary Marcus. I don't bother reading his blog or articles anymore because I know pretty much exactly what they're going to say before I read them.
Thanks for this. As an interested bystander, this was a good Cliff Notes version of the conversation within my reading level.
"A huge chunk of homework and grading is done via LLMs. A huge chunk of coding is done via LLMs." The first one sounds like machines talking to machines and a complete waste of time, suggesting that the entire education model needs to be replaced, and it probably did anyway, AI or no AI. But if AI can be harnessed to some new, improved model of education, we may see big gains there. The second one, however, based on reliable hearsay from an old friend and tech insider, is already having a sweeping impact, and is going to lead to massive improvements in quality and cost reduction throughout the economy. So, despite being inclined to naysaying, I expect big, pervasive improvements across the economy. Interesting point about opposition from people whose jobs go away. Mine might, but I'm old, and no one will care if people my age are turned into Soylent Green for pets sometime soon. But cohorts of younger people who see well paying jobs rapidly evaporate across whole sectors of the economy may be able to organize opposition. Historically, the best thing to do is buy those people off. That's what the US government did circa 1900 when most of the agricultural workforce found itself unemployed and unemployable. We do have some historical models to draw on if we see major, sectoral obsolescence. If the economy is booming, it can float all kinds of boats. I am hoping that happens.
I'm going to pick a nit on your autistic comments. I'm not autistic, but I have a child who is autistic and a spouse who is borderline.
Many mathematical geniuses can make intuitive connections and quickly grasp complex topics. Most of us have to methodically work through a mess of equations to get to the same place. We don't say these geniuses are acting on "vibes".
It is similar with social interactions. Autists painstakingly (and often with mistakes) work through the "logic" and "rules" of these interactions while non-autists can jump directly to the "solution". This doesn't mean neurotypical people are superficial: it instead means they are masters of their craft.
Noticed a gaping hole here.
1. Tyler admits the rapid improvements and fairly high capabilities of the latest AI models. Useful.
2. The bottleneck theory boils down to:
a. The USA is inefficient
b. China stopped growing
c. The EU is a joke
That still leaves everyone else. In such a "fizzle world," the door is open for Guyana or Chile or Estonia to be the country that reforms its government to take advantage of AI, with a permitting process of instant AI-driven approvals and "medical research? Approved."
Such a comparative advantage could be explosive.
Actually, it's going to be a very small island nation in a desperate bid to outrun global warming that will deploy AI to stay above water.
Temporarily needing methane tankers delivered periodically to feed the gas turbines powering its data centers, but I dig it.
Tyler is smarter than me. When I disagree with him I generally assume that I'm incorrect and think harder about what I might be missing.
I think what Tyler seems to be missing are two things: 1) the exponential nature of AI, and 2) new markets and services.
It's possible that AI will only rapidly improve small parts of the economy. But areas where there are *not* bottlenecks may grow so fast as to dominate.
Tyler might be right. I think he may only be right for a couple years.
A factoid I like to point out is that hotel, restaurant, and leisure stocks make up around 2% of U.S. stocks by market cap. Big, important, real-world businesses can become a small part of the economy if their growth is relatively slow.
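The compounding arithmetic behind that, with made-up growth rates purely for illustration: a sector growing slower than the rest of the economy shrinks as a share of it even while growing in absolute terms.

```python
# Made-up growth rates, purely to show the share arithmetic.
sector, rest = 10.0, 90.0  # start: the sector is 10% of the economy
for _ in range(50):
    sector *= 1.02         # slow sector: 2% per year
    rest *= 1.07           # everything else: 7% per year
print(f"share after 50 years: {sector / (sector + rest):.1%}")  # ~1.0%
```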