“You don’t have to believe it (or think this is a good idea), but many of the AI insiders really do. Their public statements are not much different than their private ones. Without considering that zero sum dimension, a lot of what is happening in the space makes less sense.”
A great sign that a bubble is developing is when nominally smart people believe their own lies
"Look how fast the chips will lose value" is the new "Look at the obscene water consumption". Great soundbite, not a lot of substance.
Podcast episode for this post:
https://open.substack.com/pub/dwatvpodcast/p/bubble-bubble-toil-and-trouble
The first Pause AI Representative could be for your own district? That's taking constituent service seriously! Have you vetted him on the Jones Act?
I gave him $50
You can give up to $199 without your name being publicly released. That’s a choice for people concerned about campaign contributions limiting their future options.
In retrospect, was the dot com bubble actually a bubble?
According to ChatGPT, if you invested in a Nasdaq index fund at the absolute worst time, March 2000, and held your money until today, you would have made 6.3% annual returns. Not too bad? That’s better than, say, San Francisco real estate, which has made 4.9% returns over that time.
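For reference, that annualized figure is just a compound annual growth rate calculation. A minimal sketch of the arithmetic, with assumed index levels (the values below are rough illustrations, not verified figures, and ignore dividends):

```python
def cagr(start_value: float, end_value: float, years: float) -> float:
    """Compound annual growth rate: the constant yearly return that
    turns start_value into end_value over the given number of years."""
    return (end_value / start_value) ** (1 / years) - 1

# Assumed, approximate values for illustration only.
nasdaq_peak_2000 = 5048.62  # often-cited March 2000 closing high
nasdaq_recent = 23_000      # placeholder for a recent index level
years_held = 25.5           # roughly March 2000 to late 2025

print(f"{cagr(nasdaq_peak_2000, nasdaq_recent, years_held):.1%}")  # ~6.1%
```

Price alone gives roughly 6%; an index fund’s reinvested dividends would plausibly close the gap to the 6.3% quoted.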
To me, the dot com era was something different. The peak actually *was* an accurate reflection of future cash flows, overall. But many individual stocks were mispriced. Many dot coms were losers, and the success would end up drastically concentrated in a few winners, like Google, Amazon, and Apple, and it happened over a longer time frame than many expected.
I would not be surprised if the AI era is similar: on average the AI companies succeed, but there is a lot of chaos coming, with some big companies completely failing and others expanding by an order of magnitude.
I mean the stock market lost nearly 80% of its value in the dotcom bubble, and many people in tech went from millionaires to penniless practically overnight. A similar stat about "investing at the worst time" is also true of investing the day before Black Thursday, but it was still a cataclysmic economic event.
The signs that we are in an AI bubble bubble are obvious. I heard the term in real life from someone who doesn’t really care about AI
Zvi have you heard from people at frontier labs that they’re holding back on releasing larger and better models because they wouldn’t have enough compute to meet inference demand? This strikes me as *entirely* plausible, and Sam has said some stuff to this effect - but only vaguely.
This almost has to be the case, since distillation is so much more efficient than training from scratch. If you want to make a variety of models of various sizes and you already have your hyperparameters in order, it only makes sense to start with a big model and distill the rest out of that. I think we can safely assume, based on the speed and costs of the public versions and on the labs’ release history, that there is at the very least a Sonnet 4.5 Opus and something bigger than whatever sits at the top of the GPT-5 router.
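The core of the distillation recipe is simple enough to sketch: train the small model to match the big model’s softened output distribution rather than only the hard labels. A minimal toy version of the classic logit-matching setup (the models and sizes here are illustrative stand-ins; no lab’s actual pipeline is public):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy stand-ins: a bigger "teacher" and a smaller "student" classifier.
teacher = torch.nn.Sequential(
    torch.nn.Linear(32, 128), torch.nn.ReLU(), torch.nn.Linear(128, 10)
)
student = torch.nn.Linear(32, 10)
inputs = torch.randn(8, 32)
labels = torch.randint(0, 10, (8,))

T = 2.0      # temperature: softens both output distributions
alpha = 0.5  # mix between distillation loss and ordinary label loss

with torch.no_grad():  # the teacher is frozen; only the student learns
    teacher_logits = teacher(inputs)
student_logits = student(inputs)

# KL divergence between softened teacher and student distributions.
# The T*T factor keeps gradient magnitudes comparable across temperatures.
distill_loss = F.kl_div(
    F.log_softmax(student_logits / T, dim=-1),
    F.softmax(teacher_logits / T, dim=-1),
    reduction="batchmean",
) * (T * T)

label_loss = F.cross_entropy(student_logits, labels)
loss = alpha * distill_loss + (1 - alpha) * label_loss
loss.backward()  # gradients for one student training step
```

The efficiency win is that the teacher’s full output distribution carries far more signal per example than a one-hot label, so the student reaches a given quality with much less data and compute.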
Couldn’t they just increase prices to ration that compute? Though there are often smart and dumb reasons why companies don’t increase prices
> expect such a crisis to have at most modest effects on timelines to existentially dangerous ASI being developed
It may be my lack of economics education speaking, but how can that be the case? Are current timelines not relying heavily on the ability of the labs to raise huge capital for building huge datacenters and for paying many people who are smarter than current frontier models to manually generate huge amounts of quality data? Wouldn’t such a crisis make that much harder for them, plausibly beyond what makes direct economic sense, due to what responsible investors think a responsible investor is expected to do?
I believe we have more than enough compute to create superintelligence, just not with current architectures. A fundamental conceptual insight might be all that is needed to go from zero to superintelligence (or at least "seed AGI") in a single training run.
I interpret "modest effect" to mean a 1-3 year delay, though I would personally bump that to a 2-5 year delay. Would be great news for having time to implement a global treaty, but maybe bad news for the ability to convince people that one is needed.
Yes, I agree that we technically have much more than enough compute given the right architecture, and maybe even just given the hundreds of ways to squeeze the hardware harder. But that squeezing is costly and takes time. Maybe 2 years is enough; I don’t know.
Because there are a few companies, Google and Microsoft for example, with the ability to keep the spending up for several years even if there's a major market crash. If the bubble pops we'll see everything around the margins dry up but the premium labs will continue their plans largely unimpeded.
Maybe they will be able to keep spending, but will they dare to? Even as their stock keeps falling in response, and maybe others threaten their existing markets with existing mundane AI?
A fair question, but my gut strongly says yes. Once you’ve seen the light, so to speak, there’s no going back. I hope I’m wrong, though, as that might open up a new avenue for salvation. A financial crash partly brought on by AI underconfidence, without any corresponding loss in the pace of capabilities research, would be the absolute worst case scenario: nearly everyone would go back to ignoring the tech, and by the time it popped back into the public consciousness it would be too late. On the other hand, what we need most is time, so if it instead buys us a few years it would probably be worth whatever it is we’re trading in return; it’s not like the public consciousness that’s been stirred up is doing much at the moment anyway.
Thank you for the witchy blog post tag and ensuing informative and insightful commentary. You're right to hail Alex Bores and his success at AI alignment rulemaking. I forwarded a small donation today to his campaign because of your advocacy. My mind is less full of scorpions because of him and you...
My understanding is a lot of AI investment is based on the assumption of smooth sailing to controllable superintelligence and effectively infinite money. In reality, controllable superintelligence probably just isn't possible (on the relevant timescales and resourcing). If it starts to look like governments will eventually adopt Yudkowsky-style pause efforts, I expect the bubble to pop. I expect most surviving worlds to have such pauses and subsequent crashes.
What about the value of a frozen pre-ASI industry, which might be able to swallow many fields even at current tech levels? I don't think that's going to happen either. The current social situation regarding AI is unstable. LLMs are constantly being used by hundreds of millions of people, and the facade of "just a machine" is *extremely* flimsy. Basic concern for AI welfare would likely kill commercial prospects.
Zvi, at time of reading, you removed the Bores note as planned, but not the paragraph that introduces the removed note.
I.e. "Before I dive into the details, a time sensitive point of order, that you can skip if you would not consider political donations:" is still present.
The fact that AI accounts for almost all economic growth in the US and worldwide is worrying, and suggests a possible overreliance on a single industry. It was interesting to see people’s differing opinions on whether we are in a bubble; they are spread pretty evenly, even among people working within the tech industry. Other bubble risks could include geopolitical ones that may not have been accounted for. AI has to become insanely profitable just to be a good investment, which is another point in favor of calling it a “bubble”. The considerations you listed against a bubble, such as the impressive revenue growth of AI companies, the value of what you get with AI products, and valuations that are high but not extremely high, are all valid reasons to think we are not in one. Finally, the possibility of AGI is so powerful that if we reach it in the near future, all of that spending will be worth it because of the value it would create.