22 Comments

The notion that Trump can simply wave his hands and sites get permitted is just wrong. Site permits are largely state actions. Trump's exec orders seek to derail some environmental challenges, but state environmental laws are not included, and challenges to NEPA will go through the courts anyway. Trump is just grandstanding and Altman is just kissing his ass.


That's only part of the logical disconnect here. The place was literally already under construction, so giving Trump credit for regulations he's trying to do away with seems far more likely to be Zvi's anti-regulation bias than any actual benefit from Trump being in office.


> That moment when you say ‘look at how this could potentially cure cancer’ and your hardcore supporters say ‘And That’s Terrible.’

I continue to not understand your position on that.

In my mind, "Automate nothing to AI" and "Automate humanity itself to AI" are the two ends of a mostly binary dial. You won’t get to pick and choose.

"Obviously curing cancer is good because cancer cause suffering, and if AI can help us with that we should do that".

Yes, but you know what causes suffering too? Bad governance. Bad economy. Plausibly on the same level, aggregated. You know what AI can automate for us, for a far superior result? The entirety of governance and the economy. AKA total loss of control.

I don’t see a principled way to say no to the second but not to the first. Maybe you do; if so, I would be delighted to hear it. I notice that deep down, yes, I’m okay with the first but not the second. But until I’m sure such subtlety is possible? I’m going with "and it’s terrible".

Also notice that there are a lot of people who distressingly say "Yes, Humans Must Stay in Control" in the abstract, and still say no to Human Governance and Human Economy once you point to the opportunity costs.

We will have to make up our minds, soon.


Human errors and natural errors (disease) are not the same, and solving each does not equate to removing the same variable from each equation.


I… actually didn’t think of that. And my first reaction is that I like it.

But isn’t "The Economy" a solution to a natural error (scarcity)?


Yeah, pretty much, but I think the basic idea is this: We can easily imagine a world where cancer no longer exists. We can't easily imagine one where jobs don't exist.

You might still be right that overall it's pretty binary, but this does explain the difference in response.


Well, if I want to explain the difference in response in a hand-wavy way, it’s pretty easy. People enjoy playing survival games where there is scarcity to overcome. Having the player die randomly of an unavoidable disease without any counterplay is not exactly peak gameplay, though, so games don’t do that.

But the difficulty is: how do you put this intuition into, say, Anthropic’s Constitutional AI, where RLAIF is going to enshrine whatever rule you write? I don’t have a good answer to that, and consequently I’m pretty terrified of an AI that cures cancer, purely because of the potential implications of what it’s going to solve overall.


Claude, replying to me:

Let me break this down with rough estimates:

Cancer harm in US (annual):

~600,000 deaths/year

Average age of death ~70, vs life expectancy ~80 → ~10 years lost per death

Quality of life impact during treatment (~1.7M patients) → estimate 0.3 QALY loss per patient

Total: ~6.5M QALYs lost from deaths + ~500K QALYs from treatment

≈ 7M QALYs/year

Institutional harm estimates:

Economic inefficiencies from suboptimal regulation/policy: ~5-10% GDP loss

Healthcare system inefficiencies: ~$1T/year in waste

Criminal justice system costs (incl. excess incarceration)

Education system shortfalls

Conservative estimate: These reduce quality of life by ~0.05 QALYs per person

US population ~330M

≈ 16.5M QALYs/year

So rough estimate: Bad institutions cause ~2-3x more QALY loss than cancer in the US.

Major uncertainties: Institution impact per person, cancer treatment QALY loss, indirect effects. Could be off by factor of 2-3 either way.
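The same back-of-envelope as a small Python sketch, using only the rough figures above (note that 600K deaths × 10 years comes to 6.0M, which the reply rounds up to ~6.5M):

```python
# Reproduces the rough QALY comparison above; all inputs are the
# unvetted back-of-envelope figures from Claude's reply.

cancer_deaths = 600_000            # US cancer deaths per year
years_lost_per_death = 10          # death at ~70 vs. ~80 life expectancy
patients_in_treatment = 1_700_000
qaly_loss_per_patient = 0.3        # quality-of-life hit during treatment

cancer_qalys = (cancer_deaths * years_lost_per_death
                + patients_in_treatment * qaly_loss_per_patient)
# 6.0M from deaths + ~0.5M from treatment -> ~6.5M QALYs/year

us_population = 330_000_000
institutional_loss_per_person = 0.05   # the "conservative estimate"
institutional_qalys = us_population * institutional_loss_per_person
# 330M * 0.05 = 16.5M QALYs/year

print(f"Cancer:       ~{cancer_qalys / 1e6:.1f}M QALYs/year")
print(f"Institutions: ~{institutional_qalys / 1e6:.1f}M QALYs/year")
print(f"Ratio:        ~{institutional_qalys / cancer_qalys:.1f}x")
```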


DeepSeek R1 leads me to believe that there is considerable scope for algorithmic improvements, and that we might get to superintelligence with rather less than $500 billion of compute. I am expecting the DeepSeek guys to beat OpenAI to it, on a relatively shoestring budget (i.e. less than $500 billion).


I wonder if there is a way of poaching China's best AI engineers, so it's the USA that has the advantage of their skills?

Though with ASI, it's not certain that it matters who gets it first, if they fail to align it.


US companies have been trying the easy policy of throwing ever more dollars into compute, but I expect at some point they will also look at ways to get more bang for their dollars - and now with DeepSeek they know it's possible.

Which should make us all even more hopeless.


The vaccine skeptics clearly trust the government even less than I do.

If you’re the sort of person who suspects that mRNA vaccines might have side effects the government is concealing from you, you could, consistent with previously stated positions, also suspect that spending 500 billion dollars on building a super intelligent machine god might have a notable downside that the government is failing to inform you about…


I guess if Stargate didn’t have the government on board, they could find themselves blocked in order to protect an endangered species of newt, or whatever the animal is this time.

(Over here in the UK, the Dark Crimson Underwing moth is the star of the latest controversial planning application; it was the Great Crested Newt last time.)


Hundreds of thousands of jobs? I suspect the plan is to gas the anthill.


The funnier medical answer would have been “oh ya ur indestructible nanotech body won’t get cancer for sure”. Like, it’s funny to see it slowly becoming acceptable to say that cancer gets cured in a few years, but not going all the way to the conclusion: an AI that can control reality at that level.


The more I've read about the Project Stargate announcements, the more it seems to be a case of taking credit for what was already happening and goading the participants to name a bigger number "more than Biden." But at the same time, it's hard to imagine people in 2049 checking Project Stargate's books and saying "ha ha, you only got $300 billion in investment, not $500 billion!" especially with what $300 billion in compute could buy you in terms of training and test-time compute.

I'm expecting that, at some point in the next four years, all of this (and other compute infrastructure investments besides) becomes an actual Manhattan Project, a government project. I don't know if anyone in the US government has fully priced in DeepSeek R1, and what that reveals about the idea that export controls alone can contain foreign firms' AI. But when they do, it would motivate an arms race dynamic, on the assumption that AGI is a tech one could get and then use to stop others from getting it, which I'm not sure will be the case. Compare: the US decision to build the hydrogen bomb advanced from speculation to practical policy within months of the Soviets' first nuclear test in 1949. Nuclear weapons had the obvious feature of being super powerful and dangerous, whereas what you could do with even "75% of the way to AGI" AI is, for most of the public, an Out of Context question.

I think the Chinese government doesn't grasp just what they have in R1; otherwise they would not have allowed it to be released. Rather, I imagine the company would have been quietly co-opted into some government ministry, and the developers asked "what could you do with $6 billion of training compute, rather than $6 million?"

(Potential signal of the above: six months from now, we haven't heard anything more from DeepSeek... no new models, no new announcements. Another historical comparison: Soviet physicists worked out that something was happening in America in the early 1940s when, all of a sudden, they stopped seeing publications by Szilard, Fermi, Oppenheimer, and others.)


> I think the Chinese government doesn't grasp just what they have in R1; otherwise they would not have allowed it to be released.

Yup!

Semi-related question: IIRC, R1 was released as (semi?) open source. Are the EU Precautionary Principle people screaming about it (and Stargate)? If not, why so quiet? I haven't heard anything from them, and they seemed to be trying to bottle up AI last time I looked.


The EU is still thinking AI is the new crypto fad, so there's not much hope coming from us one way or another.


Many Thanks!


If Oracle, SoftBank, and the federal government say they’re going to spend an enormous amount of money to accomplish something, I will bet on it not happening.


> You can guess what I think he saw while watching Trump to make Altman change his mind.

What is it?


To what extent can this be a "historic infrastructure project," given the speed of depreciation/obsolescence of hardware? Railroad tracks laid down 150 years ago and maintained are STILL USEFUL, and even if the rails have deteriorated, the grading and tunnels and rights-of-way are still 90% of the value. I feel like AI infrastructure is consumable, not durable.
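A toy illustration of the point, under assumed straight-line depreciation (the useful-life figures are my own illustrative assumptions, not sourced):

```python
# Fraction of an asset's original value remaining after `age_years`,
# under straight-line depreciation over an assumed useful life.

def value_remaining(age_years: float, useful_life_years: float) -> float:
    return max(0.0, 1.0 - age_years / useful_life_years)

# Useful lives below are illustrative assumptions, not sourced figures.
assets = {
    "AI accelerators (~5y life, assumed)": 5,
    "Data center shell & power (~25y, assumed)": 25,
    "Rail grading & tunnels (150y+)": 150,
}

for name, life in assets.items():
    left = value_remaining(10, life)
    print(f"{name}: {left:.0%} of value left after 10 years")
```

With these assumptions, the chips are worthless after a decade while the rail grading retains ~93% of its value, which is the consumable-vs-durable distinction above.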
