27 Comments

Long post is indeed long. AI audio conversion for those who like to consume that way:

https://askwhocastsai.substack.com/p/on-dwarkeshs-podcast-with-leopold


I’m a bit surprised at the child comment, not based on the implications of AI but more on how many AI researchers have children. I wonder if Leopold will, but I doubt it, for the same reasons Hanson discusses when he talks about fertility.


I think that AI researchers tend not to be invested in a human future, to the detriment of my children.


Man, the constitutional alignment stuff is so odd. I wish he spelled out more how that would work, but I almost wonder if it’s not entirely honest. Surely someone that smart has read the Constitution and noticed that some parts of it wouldn’t be great if they were perfectly enforced.


"We could and should, of course, be using immigration and recruitment now, while we still can, towards such ends. It is a key missing piece of Leopold’s ‘situational awareness’ that this weapon of America’s is not in his model."

This seems completely wrong, at least if you buy these two parts of Leopold's model, both of which I think are plausible: a rapid (~5 year) path to automated AI research followed very quickly by ASI, and China having an advantage in construction (electricity/datacenters) but a huge disadvantage in algorithms, which can only be made up (on these timescales) via espionage. The American algorithmic lead is on the order of years and doesn't seem to be shrinking, and as such having slightly slower algorithmic progress in the US and slightly faster progress in China is easily worth it, because Chinese immigrants are far more likely to spy for China than other AI researchers are. Chinese immigrants speed up American algorithmic progress, but they speed up Chinese algorithmic progress even more and allow for the possibility of catch-up through theft. In theory, you could prioritize getting Chinese AI researchers to come to the US and then pay them high salaries to sit in a room doing nothing, but that's not happening. In practice, the dial's only settings are "more" or "less".

"The whole thing is totally nuts. Everyone agrees (including both parties) that we desperately need lots more high skill immigration and to make the process work, things would be so much better in every way, in addition to helping with AI."

You're not thinking through the implications of the alignment problem as applied to immigrants rather than AI. Immigrants* have systematically different interests and views than we do and have an enormous and disproportionate effect on your country's politics, institutions, and culture. If these interests are sufficiently different from ours (and they are, since ethnic conflict is a zero-sum game which most diasporas enthusiastically engage in) or their views are sufficiently destructive (and in this case they are, since the various Asian diasporas overwhelmingly support the worst parts of the progressive left), then the economic benefits (which, for the marginal high skill immigrant, are very small**; almost all of the gains are coming from a tiny fraction of them) are not worth it. Even setting all of this aside, immigrants make their new countries more like the places they leave; Germany or Britain is one thing, but do you really think the US should be more like India or China?

This becomes even more true in a short-timelines world, as the economic value of human capital rapidly approaches 0 and our only influence over the future, if we have any at all, is political. Bringing in large numbers of capable people with hostile or opposing views and allowing them to participate in your open political system is a terrible idea at any time, but doing so when any technical or economic benefit will completely disappear within a few years is insane and suicidal.

*I'm being somewhat euphemistic here in the interests of keeping the comment section clean. There are large political and cultural differences between different high-skill diasporas, and as such "immigrants" isn't the right term.

**The country with an immigration policy closest to what you're suggesting (enormous volume of selected immigration) is Canada, which is buckling at the seams and not a good role model for America to imitate.


Is Canada buckling at the seams? There have been some very unfortunate own goals, especially re: housing and overregulation, but those are the ills of most Western countries at the moment.

On almost all the relevant data points I can think of, Canada is doing very well. I'm incidentally on a road trip through Canada at the moment and it seems if anything better than ever, with more mixed couples and ethnically mingled groups of friends walking about than anywhere else I've been, by a lot.


I disagree with your opinion on immigration in the "business as usual" world but you nevertheless raise a very good point about how the calculus radically changes in an "imminent AGI" world.


I don't have an in-depth comment on all of this yet, but wanted to say:

In addition to giving props to Leopold for having the time, conviction, and skin in the game to say all this (whether I agree with his views or not), I also want to give props to Zvi for reading/watching all the content and transcribing/commenting on it in a 78(!) minute read Substack post*, which is free of charge to readers like myself.

*you know, on top of all the other hour-long AI roundup posts, which are also free.


I remember someone describing rushing to AGI as the domain of idiot disaster monkeys. It doesn't matter who gets the poisoned banana first, since we all die*.

*more likely disempowerment than Terminators. Also a bad outcome.


Echoing other commenters, you are really doing an incredible public service here.

I wanted to comment on the earlier post but did not get around to it. There certainly are some gems here. I love the concept of System 2 thinking for AIs; I previously tried to think through how this could be done but really just reinvented chain of thought ☹, so I would love someone smarter than me to explore that more.

One thing that emerges from your reactions is that you obviously have more experience (“situational awareness”) than Leopold. A few examples:

1. The power thing: this immediately occurred to me as one of the strongest arguments against his timeline for AGI. Power plants, even solar, are typically contracted for and built over the course of years, say two in the very best case. A lot of that planned capacity is probably already contracted for. Does he think that AI firms will just outbid other users for electricity? That would drive up the cost significantly. FYI: the UAE has the world’s largest natural gas plant.

2. I think “unhobbling” is a terrible concept; reading your previous summary I initially completely misunderstood it because … he does not mean what that word means, at least not any more than Max Verstappen was “unhobbled” last year and is “hobbled” this year by having different cars at his disposal.

3. The government/natsec part just confirms my belief that I cannot fully understand many people. How can one be simultaneously committed to the idea that we have to beat China but _not_ committed to actual security measures, for example, that might help us do it? (I do “know” that these people only ever speak on simulacra level 4, but I cannot “understand” it.)

4. “Constitutional AI” à la Leopold… he really needs to brush up on the history of the Constitution as applied. As an exercise, he could consider why the “establishment clause” exists, word for word, in both the US and Australian constitutions but has entirely different meanings, or how Marbury v. Madison came to be decided.


I think the amount of alpha a person could bring with them from a lab is highly dependent on how much the receiving party is willing to accept it, not just in theory but in practice. I wouldn't be surprised if a Chinese lab, for example, was very interested in theory but still stuck on "we don't do things that way here" in practice. I bet stealing AI secrets in an effective way is harder than stealing nuclear designs. Not to say that we shouldn't still be worried about it.


This is thought-provoking stuff, and I appreciate the attention to this thesis. Here is one big concern with addressing Aschenbrenner’s geopolitical and economic arguments while just assuming the exponential math is correct: the former are completely dependent on the latter. You can’t bend the exponential curve down, or slide the timeline ahead a few years, and have anything close to the same discussion. One (maybe) surprising example: if you delay AGI progress to the 2030s, nuclear power becomes a realistic option (again). Few people know this, but on May 31, Biden’s Secretary of Energy promoted building 200 (!) more huge nuclear reactors like Vogtle 3 and 4 in Georgia. (Maybe she’s thinking about powering the AI arms race??) Even if you think that’s a pipe dream – and I would say once we start talking about multiple OOM extrapolations ("fooming through the OOMs"), everything is or at least seems like a pipe dream – installing and starting up small modular reactors (SMRs) becomes much more feasible in the 2030s. SMRs are carbon-free and much more efficient than nat gas plants! That would mean the speculation that Big Tech will have to suspend its carbon commitments, or that the government will have to send the Army Corps of Engineers to the Marcellus Shale to crank up the gas fields to power thousands of nat gas plants, is unnecessary. That’s just one example.

Leopold’s and Dwarkesh’s COVID analogies are intriguing because they show the power and limits of exponential thinking. (This played out with AIDS a few decades earlier.) Yes: those who understood exponential growth could “do the math” and see the COVID tsunami coming. But a tsunami of what, exactly? Infection, for sure. Death leading to depopulation? I mean, there were trend lines based on legit exponential curves built on real death rate data showing serious depopulation. People drew the trend lines, there was death and misery, but depopulation didn’t happen. What exponents get extrapolated – and at what rate – matters tremendously and (importantly) changes the fundamentals of the analysis. What if AGI isn't in the cards until 2035 but China invades Taiwan in 2027?

Yes, Leopold Aschenbrenner knows about exponential growth, and he shorted the market; I believe he did. The clear implication, of course, is that he closed his short position quickly, making large profits. As we know, COVID played out in its weird way, and the markets roared back surprisingly quickly. You could have made at least as much money buying the (huge) dip as you could have buying puts in February 2020. What was the relevant trend line? Trend lines can take you to right and wrong places, and exponential trend lines can take you to Bananasville on a SpaceX rocket: “[A]ll important animal life in the sea will be extinct” by 1980 (Paul Ehrlich); “Let’s consider where we are, circa early 2030s. We’ve eliminated the heart, lungs, red and white blood cells, platelets, pancreas, thyroid and all the hormone-producing organs, kidneys, bladder, liver...” (RK).

I'm not saying we should ignore the dangers of AGI / ASI! We just need to be humble about "understanding how exponents work" and *very* careful about basing plans, policies, and actions on multiple exponential trend line extrapolations.
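To make the "which exponent, bending where" point concrete, here is a toy sketch (all parameters are made up by me for illustration; nothing here comes from Aschenbrenner's essay or the post): an exponential and a logistic curve that track each other closely on early data but tell completely different stories when extrapolated.

```python
import math

# Toy illustration with invented parameters: two curves that agree on early
# data but diverge wildly when extrapolated.

def exponential(t, a=1.0, r=0.7):
    """Pure exponential: grows without bound."""
    return a * math.exp(r * t)

def logistic(t, cap=100.0, a=1.0, r=0.7):
    """Logistic with the same initial growth rate but a ceiling at `cap`."""
    return cap / (1.0 + (cap / a - 1.0) * math.exp(-r * t))

for t in [0, 2, 4, 8, 12]:
    print(f"t={t:>2}  exponential={exponential(t):>10.1f}  logistic={logistic(t):>7.1f}")

# At t=0..4 the two curves are close; by t=12 the pure exponential forecast
# is roughly 45x the logistic one, and the gap keeps growing. Which curve you
# assume you are on, and where (or whether) it bends, dominates every
# downstream conclusion.
```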

author

I was taking it as given because that is the context here, not because I agree with the timeline - and I agree that if you shift the timeline many things change, although many don't unless it shifts quite a lot. The third part of the trilogy is coming.


Great point. Respecting the power of the exponential is just the first step, and then we're still in a position of extreme uncertainty. I don't think we can have a well-thought-out plan on something like COVID or AGI without an explicit or implicit scenario analysis. Maybe we think there's a 90% chance AI shortly tops out at sub-human capabilities in another version of an AI winter. Fine, but unless we have a plan for the 10% scenario where things get really wild, then I think we're just doing planning theater / advocacy. Imagine if structural engineering were based only on the 99% scenario of "no earthquake this year"!

And then we need to try to measure whether it looks like we're actually heading into the 90% or the 10% scenario. If it looks like oops, it's actually 50/50, then we have a lot more work to do!
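A minimal sketch of what I mean by scenario-weighted planning, with probabilities and costs I made up purely for illustration (these are not real estimates of anything): even a 10% tail scenario can dominate the expected outcome if being unprepared for it is costly enough.

```python
# Hypothetical scenario analysis; all numbers are invented for illustration.

scenarios = {
    # name: (probability, cost if unprepared, cost if prepared)
    "AI tops out / another winter": (0.90, 0.0, 1.0),
    "things get really wild":       (0.10, 100.0, 5.0),
}

def expected_cost(prepared: bool) -> float:
    """Probability-weighted cost across all scenarios."""
    return sum(
        p * (cost_prepared if prepared else cost_unprepared)
        for p, cost_unprepared, cost_prepared in scenarios.values()
    )

print("Plan only for the 90% case:", round(expected_cost(prepared=False), 2))  # 10.0
print("Also plan for the 10% case:", round(expected_cost(prepared=True), 2))   # 1.4
```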


I think this might be worth flagging in your next update; quite some food for thought (and I am a very pro-tech-disposed person!):

https://open.substack.com/pub/tedgioia/p/is-silicon-valley-building-universe?utm_source=share&utm_medium=android&r=9tvwd


Fascinating article. Hits home in a real way... In the past 2 years I've become something of a luddite for the first time in my entire life.


I certainly have not become a luddite (yet, haha), but it did make me feel a little uneasy and worried.


I am exaggerating, obviously, I am not anti-all-new-technologies. But with AI, I would be strongly tempted to ban all large-scale implementations (and certainly the pursuit of AGI) indefinitely if presented with the option. Even the supposedly utopian outcome (alignment is somehow solved and humans never need to work) feels intuitively dystopian to me, like it sucks the meaning and value out of human life altogether, although I have had trouble articulating why.


I don't think we are there yet but I understand the sentiment and partly share it!


We are definitely not there yet, but this is the stated goal of companies that are being given billions, if not trillions, of dollars in VC funding.


Definitely a valuable read. Gov't folks ain't dumb ... Logic is not the main driver of elected officials, so they are often misaligned with what is needed beyond the election cycle. In times of crisis government can move at scale, though usually, again, not with the best of the best. In WWII the US moved at scale WITH THE BEST OF THE BEST of ACADEMIC and INTELLECTUAL TALENT. This happens after a disaster (see Pearl Harbor, or Sputnik ...). Academics and intellectuals are rarely effective communicators at the required levels; they also have their own agendas. (Read the testimony on why the Superconducting Super Collider was cancelled, especially Steven Weinberg's testimony and his analysis of who was paying attention and who was not, and what the committee members' criteria were. After all, the House is the one paying the bills, and if you can't get the plain folks rooting for you, you're toast.)


China's leadership has to balance supremacy over the USA against aligning their superintelligent AGI with their own success. In particular, priority number one is security of the leadership of the CCP. If they build a superintelligent AI, how can they ensure the CCP stays in control? Their alignment problem is distinct from but at least as hard as the one AI developers in the West face.

I thought that was an odd omission from Leopold's discussion of China. He assumes China is going to run ahead, but we know of at least one reason (sort of unique to China) why China might choose not to do that.


One of the best commentaries on this. Really convincing on almost all points, thanks Zvi for writing this. I will share it further ❤️


Good analysis but I'm really looking forward to your take on Chollet's interview.


With regard to immigration, both sides do not in fact agree on increasing skilled immigration. Democratic politicians mostly want to increase it (except a few, such as Sanders). Republican politicians overwhelmingly want to keep it the same or reduce it, while just a few are willing to increase it, but only as a concession in exchange for decreasing immigration overall.

Perhaps in your social sphere people understand the benefits of skilled immigration. But scroll down to the comments section of any news article about H-1B visas and there is an overwhelming outcry about "corporations giving away jobs to foreigners when they could have gone to Americans" or variations thereof. I'm not saying these comments are representative of the majority, but most of the people who do support immigration only support it weakly: they don't read these articles, and they don't vote based on it.


"The more you believe we will do crude reactions later without ability to do anything else, the more you should push for technocratic solutions to be implemented now."

After reading Situational Awareness and listening to all 4 hours of podcast, I have to say that Leopold has written the most convincing "pro-pause" (if not fully "pro-all-out-shut-everything-down-and-smash-the-lithography-machines") argument ever conceived. The fact that he doesn't draw this conclusion himself is puzzling.

"We're barreling into a future where the only solution is a breakneck arms race with China, which they can win just by stealing our tech (which we WILL allow them to do), but which we MUST win to avoid certain destruction of all we value, but even winning could easily cause us lose control to powerful AIs that we do not know how to control (perhaps we can ask them how to control them), after which it is unclear what will happen to humanity (maybe we can ask the AIs about that too), and also we'll have to do all of this in 1-2 years (or less), starting a year or two from now (or less), and it's all going to be terrifying, but what are you going to do, it would be great if we had more time and coordination but we don't. Median outcome is nuclear war or worse. Have your kids now, I guess?"

Or... don't do any of that? Just coordinate on stopping this arms race from ever happening? Isn't that the simplest, most sufficient, most-likely-to-promote-human-flourishing answer?

I guess Leopold has decided the incentives just don't support a quiet future of regular human struggle, so he thinks we need to ride a flaming cyber-bear off a waterfall, but it's just weird that he doesn't see what the sensible thing to do here would be--Stop, if that's at all on the table. If he's tried to explore that outcome and found it totally unworkable, it's interesting that he didn't even mention it in the interview. It's not like he spent an hour chatting about all the papers he wrote about the economics of stopping and why it's not doable. It seems much more like he just thinks straight lines are destiny, and has internalized that to the point that any outcome other than blundering into ASI and war with China seems impossible.

Maybe he should re-listen to the section on taking agency.


Also, high marks to Zvi for giving this material a maximally charitable reading. Obviously Leopold is offering important inside-view information, and the fact that I think these people are very likely to cause unprecedented suffering/damage biases me, badly.

It's good that there are people like Zvi who can engage with this material in a constructive way.
