It is interesting to contemplate that the preeminent investment bank is getting a front row seat to one of the premier AI labs in the country. My intuition (which may be wrong!) is that investment bankers on the whole have not yet intuited all the implications that flow from AGI….
When I talk to people at OpenAI nowadays, more of them seem focused on becoming a large and profitable tech company, rather than a vision of AGI that involves fundamentally transforming the economy.
You describe "OpenAI charges headfirst into AGI, and fails, because no one develops AGI any time soon." as a failure case. To me it seems like they could end up being worth a trillion dollars just from ChatGPT, even if they never create AGI. Could be the biggest startup success since Facebook.
This is also my impression, a lot has changed in the last 12 months.
I don't see how this kind of valuation is possible in a mundane-AI world, where OpenAI has multiple strong competitors, no clear tech moat (if anything, Google has a better moat), open weights alternatives that can be run for inference on consumer hardware, and a steadily diminishing recruiting advantage.
That's my impression as well. There are a lot of useful things that LLMs can be trained to do that will add value, at least similar value to products like Microsoft Office (which is huge - like $50 billion a year right now).
But it took very little time for competitors to catch up with OpenAI, and there's very little that their products seem to be able to do that those competitors cannot. My family was playing a free online game the other day that has the person freehand draw a picture with their mouse, and the AI figures out what you're drawing. Five years ago that would have seemed insane, and now it's a free online game. I don't see OpenAI maintaining any kind of lead, let alone one strong enough to cash in before multiple competitors are offering alternatives.
What powered the game? It could have been a small shop leveraging tech on the backend from a larger player.
LMAO, surely you don't mean Google's Quick Draw experiment, which they published at least 8 years ago?
https://www.youtube.com/watch?v=X8v1GWzZYJ4
Here's a professor at NYU who has data about revenue multipliers by industry on his website:
https://pages.stern.nyu.edu/~adamodar/New_Home_Page/datafile/psdata.html
Let's assume OpenAI's fantastic revenue projections are true, and revenue grows from the current $4B to $100B in 2029. However, given various features of the AI industry (high costs, commoditization from e.g. open-source, possible regulatory headwinds, lack of network effects), it's not clear that a 5-10x revenue multiplier for software company valuation is appropriate. Suppose we instead use the 2.35x total market multiplier (probably too pessimistic, but a reasonable starting point?) That implies $100B of revenue in 2029 should translate to a valuation of $235B. Given OpenAI is already valued at $157B, this argument suggests their valuation might not go up *that* much, even with fantastic revenue growth.
When you combine factors like
* capped profit status and general legal weirdness
* heavy short-term losses
* low upside potential if we use OpenAI's *own* fantastic projections
* dilution from future investment rounds
...it's not clear to me that OpenAI is actually a good investment. It's high risk, a lot of growth is already priced in, and even if they do something truly transformative, you still have the profit cap / AGI clause to factor in.
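The back-of-the-envelope math above is easy to check. Here's a quick sketch using the figures from the comment ($100B projected 2029 revenue, the 2.35x total-market revenue multiplier, and the current $157B valuation); the five-year horizon is my assumption for annualizing:

```python
# Back-of-the-envelope check of the revenue-multiple argument.
# Inputs are the figures quoted above; the 5-year horizon (~2024 -> 2029)
# is an assumption for annualizing the implied return.
current_valuation_b = 157.0   # current valuation, $B
projected_revenue_b = 100.0   # projected 2029 revenue, $B
revenue_multiplier = 2.35     # total-market revenue multiple

implied_valuation_b = projected_revenue_b * revenue_multiplier  # $235B
total_upside = implied_valuation_b / current_valuation_b        # ~1.50x
years = 5
annualized_return = total_upside ** (1 / years) - 1             # ~8.4%/yr

print(f"Implied 2029 valuation: ${implied_valuation_b:.0f}B")
print(f"Upside: {total_upside:.2f}x total, {annualized_return:.1%} annualized")
```

An ~8% annualized return, before dilution and the profit cap, is not obviously better than just holding an index fund, which is the point of the comparison.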
Podcast episode for this post:
https://open.substack.com/pub/dwatvpodcast/p/the-mask-comes-off-at-what-price
I think there are some plausible scenarios where they fail at AGI but are still a good investment. Automating a few percent of human cognitive tasks would add trillions of dollars to the global economy. Maybe it takes 50 years to go from 5% automation to 100% automation and some other company gets there first, but they still make many trillions in profits along the way.
Do you mean 'windfall' instead of 'waterfall'?
> Sam Altman’s goal is to create safe AGI for the benefit of humanity. He says this over and over again. I disagree with his methods, but I do believe that is his central goal.
I see no reason to treat this as evident, and he has a track record of saying whatever he thinks will benefit him and his plans, so saying something over and over again is little evidence to me. Personally, I suspect him of some of the dark traits like narcissism, and it's likely he optimizes for some combination of power and status.
If OpenAI actually becomes publicly traded, Tyler Cowen can finally point to a specific stock to buy.
"The good news is that the people tasked with arguing this are, effectively, Goldman Sachs. It will be fascinating to see if suddenly they can feel the AGI."
This morning Goldman Sachs revised their growth projections for the coming decade downward and are now predicting annualized nominal S&P 500 returns of just 3 percent over the next 10 years. This suggests they not only do not feel the AGI, they are bearish on mundane utility.
Of course the people making these projections are not the people who will be representing OpenAI.
Do you have a link for that report?
That's more than not feeling the AGI. That's predicting an utter disaster even if AI weren't involved: the inflation target is 2%, so 1% real GDP growth per year at best?
To quote Tyler Cowen... are they short the market?
I don't have access to the original report (by David Kostin's group), but here are two non-paywalled summaries:
https://qz.com/sp-500-returns-annualized-decade-goldman-sachs-1851677180
https://www.wealthprofessional.ca/investments/etfs/goldman-predicts-3-return-for-sp-500-over-the-next-decade/387336
You don't short the market if you expect the price of the asset to rise, even just rise slowly.
Last I checked, the S&P 500 is already pricing in substantial earnings growth for US stocks.
Being a doomer is difficult. You have to persuade everyone to take AGI seriously, but also *not* accidentally persuade them they should invest money in AI companies. A tricky line to walk.
>Roughly 10 million ChatGPT users pay the company a $20 monthly fee, according to the documents. OpenAI expects to raise that price by two dollars by the end of the year, and will aggressively raise it to $44 over the next five years, the documents said.
https://www.theverge.com/2024/9/27/24256317/the-price-of-chatgpt-will-go-up
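For scale, the quoted price path ($20/month today to $44/month in five years) implies a steep compounding increase, which can be sketched as:

```python
# Implied annual price growth from the quoted plan:
# $20/month today rising to $44/month over five years.
start_price = 20.0
end_price = 44.0
years = 5

annual_increase = (end_price / start_price) ** (1 / years) - 1
print(f"Implied price growth: {annual_increase:.1%} per year")  # ~17.1%/yr
```

That's roughly 17% per year, far above any plausible inflation rate, so almost all of it is a real price increase.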
This plan actually sounds kind of insane.
* Are they going to ask users to re-confirm their subscription every time the price goes up? If they do, expect a ton of cancellations. Subscription services tend to have a ton of inactive users who just don't bother to cancel under ordinary circumstances.
* If they *don't*, and the subscription price just silently rises without the user confirming that they are willing to pay the new higher price, I have a feeling some federal agency like the FTC might take interest. I've never heard of a subscription service silently raising its prices like that.
Crazy idea for doomers: Subscribe to ChatGPT solely so you can initiate a class action lawsuit against OpenAI if they raise the price of your subscription without any opt-in on your part.
- - -
I also think the "raise subscription prices" plan sends a bad signal about their fundamentals. It suggests internal user growth numbers are somewhat disappointing, so in order to meet revenue targets, they need to juice existing users for more. Arguably, the free version of ChatGPT is already a massive financial outlay which is supposed to justify itself by driving subscription growth.
Alternatively, perhaps they know they won't be able to contain costs as their models get bigger.
Have you ever heard of a *successful* business in a low-moat industry whose growth plan is "aggressively raise prices"? In the modern economy, generally you only get to raise prices (in real terms) if you're doing something like housing, medical, or education where there's an element of regulatory capture.
I think a more realistic plan here is to have tiered subscription plans where you pay extra for more advanced models. Price-conscious users might switch to a wrapper like Perplexity which only pays extra for the fancy model when it's really necessary.
I have subscribed to plenty of things where the price rose without requiring any action on my part, just some emails notifying me of the upcoming price change, e.g. Netflix.
The motive is clear: Sam Altman is building a business. I think it's fine to be charitable to him in a public article, but his behaviour doesn't seem to point that way, imo.