The notion that Trump can simply wave his hands and sites get permitted is just wrong. Site permits are largely state actions. Trump's executive orders seek to derail some environmental challenges, but state environmental laws are not included, and challenges to NEPA will go through the courts anyway. Trump is just grandstanding and Altman is just kissing his ass.
That's only part of the logical disconnect here. The site was literally already under construction, so giving Trump credit for regulations he's trying to do away with seems far more likely to be Zvi's anti-regulation bias than any actual benefit from Trump being in office.
The first site is in Texas. Some states have far less onerous environmental/permitting laws than others, so sites can be picked in favorable states. But federal regulations apply everywhere.
That's one place for one data center. But among the eight biggest areas for new data centers you've got Northern Virginia, California, Illinois, Arizona, Georgia, the NY tri-state area, Oregon, and Texas. That's a fairly even split between high and low regulation states. Admittedly, Meta just announced plans for a big data center in Louisiana, but it's not planned to start until the end of the year.
https://www.reuters.com/markets/us/north-america-sees-70-jump-data-center-supply-construction-cbre-report-says-2024-08-20/
Even high regulation states often allow commercial/industrial uses with reasonable levels of red tape. It's residential (and thus more voters, etc.) where extreme levels of review are required. (And by "review" they mean "delay the project for multiple years.")
In addition, these states are de facto competing for the tax revenue. Data centers may not create many jobs, but they pay taxes with almost no footprint, basically pure cash for a state and local area. So they may offer exemptions from review, and the states that delay these projects will not get that tax revenue. On your list, the states that delay projects won't get anything, and the ones that don't may get twice as much built as originally planned.
Obviously these centers will need their own power generation soon which is part of some of these deals.
Let's look specifically at why regulations aren't nearly as decisive in preventing building as the simple theory would indicate. Oregon is popular because major internet connections run through it. Chicago is physically in the middle of the country, and the colder climate saves on cooling costs. Northern Virginia has proximity to DC; the tri-state area, proximity to New York City; California has Silicon Valley, LA, and San Francisco. If taxes and regulations alone were decisive, we'd see virtually no data centers being built in those places. Instead, most of the building, including in Dallas/Fort Worth, is happening in places with physical advantages and infrastructure investments, not in places with low regulation. Given that these other factors clearly confer advantages, the harping on regulation as a significant factor in where data centers get built doesn't hold up once those factors are taken into consideration.
You say maybe regulations were shaved as part of the negotiations, but you provide no evidence for that, so it seems like supposition. Looking around, I see lots of talk of tax breaks, but nothing saying that regulations were waived for the data centers. In fact, the Illinois incentive program mandates that data centers either be carbon neutral or meet green building standards, and Virginia is pushing for more regulation of data centers.
https://dceo.illinois.gov/expandrelocate/incentives/datacenters.html
https://www.loudountimes.com/0local-or-not/2state/as-data-center-boom-continues-va-legislators-broach-new-regulations/article_46584506-d2ac-11ef-be59-833ccbee7271.html
None of those regulations are relevant. What I am talking about is things like the barriers to tearing down and building housing in the San Francisco Bay Area, where 5-10 year delays are considered normal.
I am pretty sure anything more than a year of permitting delay will completely cancel a data center project. Therefore these high regulation states must issue permits swiftly and not allow random residents to get an injunction in court that stops the project for several years while a frivolous lawsuit is heard.
First you argued that the data center in question was being built in Texas because of low regulations; evidence against this was provided with examples. Then you shifted to tax cuts; again, evidence against this point was provided. Then you shifted to regulations meaning something about allowing the public to bring pressure, including lawsuits. That's such an expansive use of "regulations" that what I think you really mean is anything that isn't business, whether the various levels of government or the public. That definition is so broad as to be useless, and it ignores the significant advantage businesses have in being able to hold to positions longer than the public, as many political scientists have noted, particularly Paul Pierson and Jacob Hacker in many of their books. But that's a larger argument than why data centers get built where they do and not elsewhere.
So let's check your theory that states that let "random residents," as you dismissively call them, block data centers don't get data centers. There are literally lawsuits in Virginia fighting data centers. Oregon is holding up data centers because of public outcry. I will grant that Texas sometimes approves data centers over the objections of its residents, but even there those pesky citizens sometimes convince the authorities not to do what the data center companies want. Again, these are the states actually building the most data centers. It's not nearly as simple as you want to make it out to be. Where data centers get built turns on issues that go deeper than the short-term business cost of "regulation," however loosely defined. Sure, I can't prove that the reason fewer data centers are being built in Mississippi is that other business considerations override its low regulation, business-friendly policies and a public not positioned to resist. But come on: if they are building data centers in California, New York, and Chicago, three of the big bogeymen for government power, then the argument against that needs some pretty compelling evidence, more so than being "pretty sure."
https://www.princewilliamtimes.com/news/county-will-spend-nearly-500-000-more-to-fight-data-center-lawsuits/article_d715d418-b061-11ef-853d-6fffcb7aa800.html
https://www.eastoregonian.com/news/business/new-testimony-prompts-delay-on-boardman-data-center-decision/article_6e69fd62-40a6-11ee-b501-c30c9fcb4fa7.html
https://www.govtech.com/infrastructure/fort-worth-data-center-approved-over-residents-opposition
https://www.bisnow.com/national/news/data-center/an-expensive-lesson-facing-growing-local-hostility-data-center-developers-are-finally-trying-to-win-hearts-and-minds-122476
https://www.datacenterdynamics.com/en/news/fort-worth-planning-commission-votes-against-proposed-data-center-campus/
> That moment when you say ‘look at how this could potentially cure cancer’ and your hardcore supporters say ‘And That’s Terrible.’
I continue to not understand your position on that.
In my mind, "Automate nothing to AI" versus "Automate humanity itself to AI" is mostly a binary dial. You won’t get to pick and choose.
"Obviously curing cancer is good because cancer cause suffering, and if AI can help us with that we should do that".
Yes, but you know what causes suffering too? Bad governance. A bad economy. Plausibly on the same level, in aggregate. And you know what AI can automate for us, with a far superior result? The entirety of governance and the economy. AKA total loss of control.
I don’t see a principled way to say no to the second but not to the first. Maybe you do; if so, I would be delighted to hear it. I notice that deep down, yes, I’m okay with the first but not the second. But until I’m sure such subtlety is possible? I’m going with "and it’s terrible."
Also notice that there are a lot of people who distressingly say "Yes, Humans Must Stay in Control" in the abstract, and still say no to Human Governance and Human Economy once you point to the opportunity costs.
We will have to make up our mind, soon.
Human errors and natural errors (disease) are not the same, and solving each does not equate to removing the same variable from each equation.
I… actually didn’t think of that. And my first reaction is that I like it.
But isn’t "The Economy" a solution to a natural error (scarcity)?
Yeah, pretty much, but I think the basic idea is this: We can easily imagine a world where cancer no longer exists. We can't easily imagine one where jobs don't exist.
You might still be right that overall it's pretty binary, but this does explain the difference in response.
Well, if I want to explain the difference in response in a hand-wavy way, it’s pretty easy. People enjoy playing survival games where there is scarcity to overcome. Having the player die randomly of an unavoidable disease without any counterplay is not exactly peak gameplay, though, so games don’t do that.
But the difficulty is: how do you put this intuition into, say, Anthropic’s Constitutional AI, where RLAIF is going to enshrine whatever rule you write down? I don’t have a good answer to that, and consequently I’m pretty terrified of an AI that cures cancer, purely because of the implications of what else it’s going to solve.
Claude replying to me :
Let me break this down with rough estimates:
Cancer harm in US (annual):
~600,000 deaths/year
Average age of death ~70, vs life expectancy ~80 → ~10 years lost per death
Quality of life impact during treatment (~1.7M patients) → estimate 0.3 QALY loss per patient
Total: ~6.5M QALYs lost from deaths + ~500K QALYs from treatment
≈ 7M QALYs/year
Institutional harm estimates:
Economic inefficiencies from suboptimal regulation/policy: ~5-10% GDP loss
Healthcare system inefficiencies: ~$1T/year in waste
Criminal justice system costs (incl. excess incarceration)
Education system shortfalls
Conservative estimate: These reduce quality of life by ~0.05 QALYs per person
US population ~330M
≈ 16.5M QALYs/year
So rough estimate: Bad institutions cause ~2-3x more QALY loss than cancer in the US.
Major uncertainties: Institution impact per person, cancer treatment QALY loss, indirect effects. Could be off by factor of 2-3 either way.
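For what it's worth, the quoted arithmetic roughly checks out. Here is a minimal sketch redoing it with the stated inputs (all figures are Claude's rough guesses, not vetted data; the variable names are mine):

```python
# Back-of-envelope check of the quoted QALY estimates.
cancer_deaths = 600_000            # US cancer deaths per year
years_lost_per_death = 80 - 70     # life expectancy minus average age at death
death_qalys = cancer_deaths * years_lost_per_death        # 6.0M

patients_in_treatment = 1_700_000
qaly_loss_per_patient = 0.3
treatment_qalys = patients_in_treatment * qaly_loss_per_patient  # ~0.5M

cancer_total = death_qalys + treatment_qalys              # ~6.5M QALYs/year

us_population = 330_000_000
institutional_loss_per_person = 0.05                      # QALYs/person/year
institutional_total = us_population * institutional_loss_per_person  # 16.5M/year

print(f"Cancer: {cancer_total / 1e6:.1f}M QALYs/year")
print(f"Institutions: {institutional_total / 1e6:.1f}M QALYs/year")
print(f"Ratio: {institutional_total / cancer_total:.1f}x")
```

The ratio lands around 2.5x, consistent with the quoted "~2-3x," though as Claude notes, the per-person institutional loss is by far the softest input.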
I wonder what happens if you had the medical technology to make lifespans indefinite, but continued with the same mediocre standard of governance.
So you have a bizarre situation of smooth-faced youths who look like they should still be in high school sleeping in alleys (they are former elderly homeless whom some private or government program picked up, paid for their treatment, and ejected back onto the streets). Robots can provide perfect medical care and build housing, but building the housing is still illegal.
I suppose that as long as your QALY measure doesn't allow for a negative life year - living in an alley and being disturbed by drone police at 4 am and forced to move still counts as positive, even if each year of this is only 0.1 of a life year - you come out ahead.
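To make the "you come out ahead" arithmetic concrete, here is a toy sketch; the 0.1 quality weight comes from the comment above, while the 10-year baseline is an assumption of mine for illustration:

```python
# Toy QALY comparison: indefinite low-quality life vs. a finite normal one.
quality_weight = 0.1   # each alley year counts as 0.1 of a full life year
baseline_qalys = 10    # assumed remaining full-quality years without the tech

def qalys_after(years_lived, weight):
    """Total QALYs accumulated after living `years_lived` years at `weight` quality."""
    return years_lived * weight

# As long as the weight stays positive, the indefinite scenario eventually wins:
print(qalys_after(50, quality_weight))    # still behind the 10-QALY baseline
print(qalys_after(100, quality_weight))   # break-even
print(qalys_after(1000, quality_weight))  # far ahead
```

The crossover only disappears if the measure allows a year to count as negative, which is exactly the caveat in the comment.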
DeepSeek R1 leads me to believe that there is considerable scope for algorithmic improvements, and that we might get to superintelligence with rather less than $500 billion of compute. I am expecting the DeepSeek guys to beat OpenAI to it on a relatively shoestring budget (i.e., less than $500 billion).
I wonder if there is a way of poaching China's best AI engineers, so it's the USA that has the advantage of their skills?
Though with ASI, it's not certain that it matters who gets it first, if they fail to align it.
US companies have been trying the easy policy of throwing ever more dollars at compute, but I expect at some point they will also look at ways to get more bang for their dollars - and now, with DeepSeek, they know it's possible.
Which should make us all even more hopeless.
The vaccine skeptics clearly trust the government even less than I do.
If you’re the sort of person who suspects that mRNA vaccines might have side effects the government is concealing from you, you could, consistent with previously stated positions, also suspect that spending 500 billion dollars on building a super intelligent machine god might have a notable downside that the government is failing to inform you about…
I guess if Stargate didn’t have the government on board, they could find themselves blocked in order to protect an endangered species of newt, or whatever the animal is this time.
(Over here in the UK, the Dark Crimson Underwing moth is the star of the latest controversial planning application; it was the Great Crested Newt last time. )
Hundreds of thousands of jobs? I suspect the plan is to gas the anthill.
The funnier medical answer would have been “oh ya ur indestructible nanotech body won’t get cancer for sure”. Like it’s funny to see it slowly being acceptable to say that cancer gets cured in a few years but not going all the way to the conclusion of the AI that can control reality to that level
The more I've read about the Project Stargate announcements, the more it seems a case of taking credit for what was already happening and goading the participants into naming a bigger number, "more than Biden." But at the same time, it's hard to imagine people in 2049 checking Project Stargate's books and saying "ha ha, you only got $300 billion in investment, not $500 billion!" especially given what $300 billion in compute could buy you in training and test-time compute.
I'm expecting that at some point in the next four years, all of this (and other compute infrastructure investment besides) becomes an actual Manhattan Project, a government project. I don't know if anyone in the US government has fully priced in DeepSeek R1 and what it reveals about the idea that export controls alone can contain foreign firms' AI. But when they do, that would motivate an arms race dynamic, on the assumption that AGI is a tech one could get and then use to stop others from getting it, which I'm not sure will be the case. Compare: the US decision to build the hydrogen bomb advanced from speculation to practical policy within months of the Soviets' first nuclear test in 1949. Nuclear weapons had the obvious feature that they were obviously super powerful and dangerous, whereas what you could do with even "75% of the way to AGI" AI is, for most of the public, an Out of Context question.
I think that the Chinese government doesn't grasp just what they have in R1, otherwise they would not have allowed it to be released. Rather, I imagine the company would have been quietly co-opted into some government ministry, and the developers asked, "What could you do with $6 billion of training compute, rather than $6 million?"
(Potential signal of the above: six months from now, we haven't heard anything more from DeepSeek... no new models, no new announcements. Another historical comparison: Soviet physicists worked out that something was happening in America in the early 1940s when, all of a sudden, they stopped seeing publications by Szilard, Fermi, Oppenheimer, and others.)
>I think that the Chinese government doesn't grasp just what they have in R1, otherwise they would not have allowed it to be released.
Yup!
Semi-related question: IIRC, R1 was released as (semi?) open source. Are the EU's Precautionary Principle types screaming about it (and Stargate)? If not, why so quiet? I haven't heard anything from them, and they seemed to be trying to bottle up AI last time I looked.
The EU still thinks AI is the new crypto fad, so there's not much hope coming from us one way or another.
Many Thanks!
if oracle, softbank, and the federal government say they’re going to spend an enormous amount of money to accomplish something, i will bet on it not happening.
> You can guess what I think he saw while watching Trump to make Altman change his mind.
What is it?
To what extent can this be a "historic infrastructure project" given the speed of depreciation/obsolescence of hardware? Railroad tracks laid down 150 years ago and maintained are STILL USEFUL, and even where the rails have deteriorated, the grading, tunnels, and rights-of-way are still 90% of the value. I feel like AI infrastructure is consumable, not durable.