34 Comments
zdk:

That's a lot of podcast for a guy that doesn't do podcasts

Dave Friedman:

It's an interesting report, and certainly worth reading. Where it loses me is that it doesn't explain how the infrastructure to generate the electricity required for all these massive compute clusters will be built within the timeframe suggested.

Kenny:

I wouldn't think they need that many _more_ "compute clusters", given that newer models displace existing models to some extent anyway, and, similarly, the AI labs can probably displace the other customers in the data centers they're using too.

Another part of the answer, explicitly mentioned in the report/scenario, seems to be that the infrastructure will be built in designated 'special economic zones', e.g. with minimal regulations currently preventing or slowing this kind of construction elsewhere.

I'd guess there are ways to 'cheat' too, i.e. NOT build entirely new power generation (or only build a relatively small amount of it).

Dave Friedman:

But there is no evidence yet of any special economic zones being devoted to compute clusters or the power infrastructure required to power them. Therein lies the problem. The kind of infrastructure required to power these super large compute clusters, never mind the clusters themselves, is built on yearslong schedules, *even if* regulatory impediments are removed.

Kenny:

I'm not sure you're wrong about this, but it also seems like the kind of thing that's solved through ad-hoc, boring/mundane means, e.g. 'pre-solved' in that the AI companies can outbid all of the other customers that had _previously_ prompted the data center companies to already be in the process of building new data centers.

I'm not sure that the AI companies now are building their own "compute clusters" in the sense that you seem to be imagining – some fraction of them are hosted by Microsoft in Azure, which is itself 'just' a (big) network of data centers (though almost certainly fractally complex itself).

I'm also less sure that even new physical infrastructure necessarily requires "yearslong schedules" – that might not be required in a much more intensely capitalized situation as the scenario envisions.

Arbituram:

The supply chains for building datacenters, let alone chip fabs, are very long. They've been scaling up quickly, yes, but this scenario is a whole other level that I'm not convinced the supply chains could accommodate. Yes, the WW2 bomber example is impressive, but those were comparatively simple machines.

Kenny:

I don't think the scenario covers building chip fabs – and generally it seems to use standard projections, not some additional "scaling up quickly".

The part of the scenario they compare to "the WW2 bomber example" is one that is directed by ASIs – at 'every level', including the individual workers retooling the factories – and is at the _end_ of the scenario.

Arbituram:

This was my single largest issue; building things takes time, even if you've got an intellectual boost. This scenario involves a truly astonishing level of giving the AI labs completely free rein to build things, and even then I'm not sure it's plausible.

I'm a bit biased here (I work in infrastructure) but AI 2027 is in line with a lot of other discussions of this type that ignore the physical constraints on these things.

AI 2040 is still terrifyingly close but dramatically more plausible in my view.

My primary objections to the intelligence explosion are also not really addressed; why don't we think problems will get harder very quickly? The easy problems will be solved very quickly, sure, but we seem to assume it's easy problems all the way to superintelligence, so the pace of solving them continues to increase.

The multipliers seem wild and somewhat arbitrary, instead of being something like "5% faster research per year, compounding", which is still a lot!
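For concreteness, here is a minimal sketch of what even that modest compounding rate implies (the 5% figure is the hypothetical above, used purely for illustration):

```python
# Compound a hypothetical 5%/year research speedup over several horizons.
rate = 0.05

for years in (10, 20, 50):
    multiplier = (1 + rate) ** years
    print(f"{years:>2} years: research runs ~{multiplier:.2f}x faster")
# 10 years: ~1.63x, 20 years: ~2.65x, 50 years: ~11.47x
```

Even without wild multipliers, a steady 5% compounds past 10x within a lifetime.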

Moral Particle:

This is a great point. One of the biggest departures from Leopold Aschenbrenner's (similar?!) set of predictions is the assumption that rapid gains in AI capabilities can and will take place without any real changes to utility infrastructure. Remember Aschenbrenner's point: "Probably the single biggest constraint on the supply-side will be power. Already, at nearer-term scales (1GW/2026 and especially 10GW/2028), power has become the binding constraint: there simply isn’t much spare capacity, and power contracts are usually long-term locked-in." There is relatively little discussion in AI 2027 of electric power requirements, but the suggestion seems to be that in the next year or so the leading AI company (a semi-fictionalized amalgam in the scenario) can simply "hook up" a distributed network of clusters to the existing grid, draw 2 GW of power, and everything will proceed according to the trend lines. I'm skeptical that an extra 2 GW will be easy to generate and tap and even more skeptical that 2 GW will be sufficient. Scaling up power generation is possible, but even the fastest buildout of gas power plants would take a few years.
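As a rough sense of scale (the plant size and build time below are generic assumptions, not figures from the report or the scenario):

```python
# Rough scale check: how many typical gas plants is a 2 GW draw?
# Plant size and build time are generic assumptions, not from the report.
cluster_demand_gw = 2.0   # the ~2 GW draw discussed above
ccgt_unit_gw = 0.6        # a typical large combined-cycle gas unit
build_years = 3           # rough construction time, permitting excluded

plants_needed = cluster_demand_gw / ccgt_unit_gw
print(f"~{plants_needed:.1f} large gas units, each ~{build_years} years to build")
```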

Dave Friedman:

Yeah, Aschenbrenner deserves credit for at least engaging with infrastructure-related issues. Too much of this AI accelerationist thought seems like intellectual masturbation devoid of real-world considerations.

Kenny:

This IS a great point but I'm very unsure how strongly we should expect it to bind.

"... power contracts are usually long-term locked-in." – this is true but also possibly not very strongly binding; contracts can be renegotiated, especially if the AI companies could plausibly consider things like 'buying out all of the large U.S. auto makers to convert them to manufacturing robots'.

Mercutio:

Agreed!

In general my feeling is that software-pilled people underestimate how hard hardware is.

I don’t doubt that ASIs and robots are going to do incredible things (regardless of whether that’s good).

I think bootstrapping the tooling for robots to build more robots is really hard, even for an ASI, and I think constructing a terawatt or two of generation, whether you do it in a desert with solar + batteries or by building a few nukes, is just not a three-year prospect, even if you throw out all the red tape.
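A back-of-the-envelope check on that (the US capacity and annual-addition figures below are rough, order-of-magnitude assumptions):

```python
# How does ~1 TW of new generation in 3 years compare to the existing US grid?
# All figures are rough, order-of-magnitude assumptions.
target_gw = 1000           # "a terawatt" of new generation
horizon_years = 3
us_capacity_gw = 1200      # approximate total US generating capacity
recent_additions_gw = 50   # rough recent US annual capacity additions

required_rate_gw = target_gw / horizon_years
print(f"Required: ~{required_rate_gw:.0f} GW/year,")
print(f"~{required_rate_gw / recent_additions_gw:.0f}x the recent annual build rate,")
print(f"~{target_gw / us_capacity_gw:.0%} of the existing US fleet in 3 years.")
```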

I’m sure once the robots are building more robots, the flywheel can start. I just don’t think even on a wartime footing the 3 year timeline is plausible for the robot bits.

But when the SEZ gets zoned and/or we actually overcome inertia and build the terawatt of dedicated firm power, I’m willing to believe the boats are gone, and we’re down to the last helicopter.

Shon Pan:

Great article. It's honestly difficult to know what we should do in this situation for our families.

Eskimo1:

Honestly, this is what I can’t figure out. I’m not involved in the field, so wtf can I do? If nothing, how do I just live like nothing’s happening? Wish I never heard about any of this.

Shon Pan:

I think there's a lot that we can do, but it all comes down to coordination and talking to each other. Obviously we need to be empowered in order to do anything about the overall situation, and we need to do this fast.

Hopefully Zvi can help, as the world is hungry for a solution.

Eskimo1:

I mean sure, I just have no idea what someone like me personally coordinating even looks like, much less helps. Meanwhile I’m in a panic and not even sure if I should explain what’s going on to my family.

Shon Pan:

I think that Zvi might have a list of groups that you can join; I'm also running a regular meeting if you want to DM me.

Skull:

What is something unrealistically optimistic but still plausible that you hope to accomplish in one of those meetings?

Sylvain Ribes:

Not personally casting my weight behind "ASI soon" or against it, but in your case, and for the benefit of anybody in your state of mind:

I think that, not being involved in the field, you should realize that any well-put, popular thesis is likely to convince you of imminent danger.

For anybody in your state of mind, I would very much advise tuning out entirely from this side of the conversation. Follow a few bearish AI people.

Whether or not you'd be falsely lulling yourself into a sense of safety doesn't matter. As you say, there's nothing you can do; the only control you have is over your own well-being. Tune it out, and don't waste your and your family's happiness over something that's both out of your control and might well never come to pass.

Eskimo1:

I appreciate this response.

Thor Odinson:

My take is to put less emphasis on setting up for "retirement" and more on enjoying life in the medium term. AI 2027 seems on the faster end, but if even the "sceptics" of the field are talking about AI 2040, then planning for retirement in a world that resembles today's is silly.

Spend more time with your family and friends, go on those holidays you've been meaning to go on; keep working your job because life could easily be "normal" for another 20 years, but don't kill yourself with overtime etc. in the name of having a better life when you're 70. (Most of that advice I'd give in a no-AI world too, which helps, but the main change is put less emphasis on retirement savings.)

MP:

Thanks for the rundown.

I still don't understand what people like Scott and Daniel mean by double-digit GDP growth.

Assuming doom doesn't happen, are there more specific forecasts of what ASI would create? For example, what would be the life expectancy, the number of homes built, the average IMDb ratings of the movies released, whatever, that makes it more tangible what life would be like 10 years into aligned ASI?

Arbituram:

To the best of my knowledge, *zero* people who work with physical things have predicted double-digit growth for frontier economies (catch-up 10% growth for the rest of the world is far more plausible, of course, and is still an extremely large effect).

[insert here] delenda est:

Then they aren't trying. Assume that AI does master robotics; make it 2040 if that helps. You can now:

1. Build everything to measure, including new factories

2. Build in space, underwater, in the middle of the desert, anywhere

3. Build almost anything permitted by the laws of physics and available energy

4. Scale anything you can build

You don't think that gives you double-digit productivity growth?

I think we get there before that, just by automating the modern equivalent of the 1950s factory job: the office drone who processes invoices, or validates corporate actions that have triggered an exception rule, etc.

Arbituram:

Are humans still alive in your scenario?

[insert here] delenda est:

Yes, at least at the start

Arbituram:

Perhaps less tongue-in-cheek than my human-extinction question: I'm not particularly interested in the question of what happens *after* we have hyper-intelligent AI in charge; it's not clear anyone has any meaningful insight there, or that humans play an important role, or that GDP is even a meaningful metric (see: the OpenAI investor caveat that "it is unclear what role money may play in a post-AGI world").

Rather, the challenge here is getting to that point from where we are now, right?

And here I think people are really underestimating what 9.9% consistent compounding can achieve. It gets us to a radically different world very quickly: from lives essentially unchanged from medieval peasants' to modern industrialised lifestyles within a generation.
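A quick check of what sustained 9.9% implies (pure arithmetic, no scenario-specific numbers):

```python
import math

# Doubling time and generational growth at a steady 9.9% per year.
rate = 0.099
doubling_years = math.log(2) / math.log(1 + rate)
growth_30y = (1 + rate) ** 30

print(f"Doubling time: ~{doubling_years:.1f} years")  # ~7.3 years
print(f"Growth over 30 years: ~{growth_30y:.0f}x")    # ~17x
```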

If the intelligence explosion is entirely driven by algorithmic improvement, then all bets are off, but right now we have every reason to believe that getting there will be extremely FLOP- and energy-intensive, all of which relies on extensive supply chains.

To be clear, getting a new grid connection online currently takes longer than the time from now to the fork point in AI 2027, right? But we can't assume post-AGI systems will solve that problem for us! We need the grid connections to get to AGI in the first place!

[insert here] delenda est:

I think that your point about the implications of such growth is exactly what Zvi says every week 😁

I do expect significant hiccups along the way, especially energy, chips, regulation, and perhaps (but very unlikely!) Butlerian jihad.

But it is telling that you also cite regulatory barriers. I've managed a few projects, and it is always amazing which barriers can pretty much disappear when the right incentives are present. Here the combination of passionate ideological commitment to AGI, prospective massive profits, and prospective world domination is a very powerful set of incentives for the alignment of a lot of parties.

Zvi Mowshowitz:

Note: I banned an account called "AI Slop" and deleted their comment, because it was very obviously AI slop even before I noticed that the account's name was "AI Slop."

The AI rule for comments is: Get good, or don't do it. If I can tell it's clearly AI slop, and it isn't interesting, you are getting banned.

Jeffrey Soreff:

"whereas the agricultural, industrial or Cambrian revolutions were kind of a big deal and phase change"

I suspect that the agricultural and industrial revolutions looked glacial to the people in them and felt like being steamrollered to the people neighboring them.

"I would go further. Sufficiently capable AIs that are highly correlated to each other should be able to coordinate out of the gate, and they can use existing coordination systems far better than we ever could. That doesn’t mean you couldn’t do better, I’m sure you could do so much better, but that’s an easy lower bound to be dumb and copy it over. I don’t see this being a bottleneck. Indeed, I would expect that AIs would coordinate vastly better than we do."

There is a tradeoff between coordination-via-being-identical and online learning. To copy from a comment I made in https://www.astralcodexten.com/p/introducing-ai-2027/comment/106383500 :

Once the AIs are learning from experience, "cloning" them can't be the dominant means of communicating information. If one has >100,000 AI agents, each of which has participated in one or many research projects from a different role, copying one and overwriting the weights of another (given, at any one time, a fixed number of AI agents) loses the learning in the overwritten AI.

They can still do things like get the training of another AI's role at accelerated speed, faster than the real time of the role. But to aggregate all of the information that all >100,000 agents have learned is going to require passing summarized information of some sort around - project reports, something like a review article, something like a textbook.

"Being able to copy and scale up the entities freely, with full goal alignment and trust, takes away most of the actual difficulties. The reasons coordination is so hard are basically all gone."

Mostly, but not entirely. There was an old NASA screw-up with metric vs imperial units that killed one of our Mars probes. No ill will, no contending goals - just a units/nomenclature mistake. Every software project that I've been on creates some new jargon of its own - and communicating with other teams requires some translation effort.

"The economic value of letting the AIs cook is immense. If you don’t do it, even if there isn’t strictly an arms race, someone else will, no? Unless there is coordination to prevent this."

I disagree. I think the semi-military arms race is a very big factor, and significantly more powerful than purely economic advantages. During the cold war, despite massive popular opposition, 10,000s of warheads were built, while popular opposition and lawfare basically killed civilian nuclear power.

"and also doing the thing that vibes will stop being a good heuristic because things will get weird, and so on. "

Agreed. "Age-old wisdom" doesn't hold in periods of rapid change.

"Daniel’s p(doom) is about 70%, Scott’s is more like 20%" My raw p(doom) is about 90%, mostly on "Have you seen any of our cousin hominins around lately? We are basically building a smarter competing species." grounds. But I downgrade that to 50% mostly on "Of course, that's just my opinion, I could be wrong" grounds.

"and the AIs will be much less differentiated than humans"

I think that this is an open question. The dynamic range of sort-of intelligent electronic systems is _huge_, though it is somewhat arbitrary how much of it "counts" as AIs.

"even if they did not coordinate, it won’t make things end well for the humans. The AIs end up in control of the future anyway, except they’re fighting each other over the outcome, which is not obviously better or worse, but the elephants will be fighting and we will be the ground."

Agreed. And Darwinian pressure on the AIs makes it worse for humans.

"Scott worries very high UBI would induce mindless consumerism, the classical liberal response is give people tools to fight this, perhaps we need to ask the ASI how to deal with it." (a) That is the least of our worries. (b) From the POV of a subsistence farmer, what we do _now_ is mindless consumerism. Winding up as comsumerist pets of the Culture Minds (yeah, evidence from fiction) is about the best outcome we could get if ASI happens at all.

Gerald Monroe:

Something NOT modeled: there are minimum latency times in equipment like chip fabs. That is, it takes several months for all process steps to finish and complete a single IC at all.

This is more or less fundamental; it's just how fast the process runs. Yes, theoretically someone could develop a faster one, but they would have to iterate down through the nanometers to develop a process that even competes with what we already have.
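The latency-vs-throughput distinction here can be made concrete (the cycle time and start rate below are hypothetical round numbers):

```python
# Latency vs throughput in a fab pipeline. Numbers are hypothetical.
cycle_time_months = 4            # months for one wafer to traverse all steps
wafer_starts_per_month = 10_000  # steady-state wafer start rate

# In steady state, monthly output equals the start rate; months of
# per-wafer latency do not limit throughput.
monthly_output = wafer_starts_per_month

# But a process improvement introduced at month t produces no improved
# chips until month t + cycle_time_months, regardless of how much
# intelligence is applied upstream.
improvement_month = 0
first_improved_output = improvement_month + cycle_time_months
print(f"Output: {monthly_output}/month; improved chips ship in month {first_improved_output}")
```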

This is true for other things, like freighters and trains carrying equipment and materials.

You can obviously upgrade the infrastructure - build faster railways, fleets of airliners to carry parts, etc. - but the upgrade itself uses the legacy equipment...

Essentially, this scenario is assuming that whatever stock of data center ICs we have built by 2027 on the "current trajectory", plus software optimizations, is enough to develop superintelligence.

If this scenario doesn't happen by 2027, I think this will be why: insufficient compute or data.

SCPantera:

Re: 19-20. For anyone having trouble conceptualizing this, imagine if you perfectly cloned yourself several times, as you are now: it would be trivially easy for all of you to coordinate, as long as you share some simple baseline mutual understandings, such as agreeing to work for common mutual gain (or my favorite joke values handshake: "all instantiations of this consciousness agree to share the wife conditional on the number of wives"). I'd know immediately how likely I am to hold true to that understanding, and then in most situations I can predict what another me is doing and act to support them if necessary.

loonloozook:

Fantastic summary. Thank you!
