If people defending the actions of the administration are enemies of the republic then the administration is an enemy of the republic too. The Trump administration is bad
This is the most important part: "AI expresses and is vital to your liberty, and government control of AI inevitably would lead to tyranny. Whereas control over energy and bombs does not do that, and makes logistical sense."
Building a nuclear bomb is bad framework for thinking about the problem that leads to multiple bad takeaways. Most importantly, a nuclear bomb couldn't be used against the democracy but AI can. Thus we need way more guardrails against anyone, especially the government, using it against the democracy.
The founders were so scared of a president using taxes to gain power they separated that out but now people are proposing control of an enslaved god is safe to give to the executive branch.
It makes sense when you think about it... to go conspiracy theory for a second, who is the biggest winner of this dispute? Not OpenAI, the revenue isn't that much for them, and their employees are generally California lefties who are unhappy about events. Palantir, though, has been using this positioning as a selling point for a long time. "Don't trust the other tech companies, they are culturally anti-military, anti-government. Palantir is the one you can trust for this sort of thing."
And Palantir has to be a little bit worried, as are all large tech companies, that the bitter lesson might just mean that OpenAI and Anthropic eat their business....
I I agree that Palantir benefits from this whole thing, just as xAI does, and basically everyone who isn't Anthropic & can take this opportunity to suck up to the admin. However, OpenAI seems like they benefit the most.
Given the state of humanity, I sincerely hope superintelligent AI is not built anytime soon. It's hard to see an optimistic path when, not only do mere less-than-human AIs have dystopian potential with deepfakes and mass processing of Big Data on all of us, but the political leaders of the dominant country in AI seem determined to ensure that the technology will *not* be made safe. If our AI's reach the level of Skynet, our leaders will put it in charge against the objections of its maker, a level of "no one could possibly be this stupid" that would be rejected if this were fiction.
How 'bout we don't build the Torment Nexus from the famous sci-fi novel "Don't Build the Torment Nexus"?
Can't get enough of what you're writing on this whole saga Zvi. You quote Dean Ball more extensively on this one, and his arguments on X. He just went much further on all of this on Ezra Klein's show (dropped this morning after your post):
Lots of new details there. And yes, this time Dean said it: "That is fascism."
Three things that connect directly to what you write here:
1. On the "might makes right" crowd (Noah Smith, Rohit, Ben Thompson): Ball addressed the nationalization argument head-on. He pointed out that "everyone making that critique doesn't own the implication of their critique, which is that the labs should be nationalized. And what I would ask is: does he actually think that's true? Does he think it would be better for the world if the AI labs were nationalized?" He called the supply chain designation "a kind of political assassination" and said if the government destroys a company for how it aligned its AI, "that is fascism. That's it right there. That's the difference."
2. On the DoW's supposed hypersonic missile defense anecdote (Dario saying "you'd have to call us"): You made it clear in a post last week why that's a lie and how there's simply no world where that's how this stuff actually works. Well you just got a lot of backup from both Klein and Ball.
EZRA: “I have been told by people in that room that is not true.”
BALL: “I have also been told by people in that room that did not happen.”
EZRA: “Not only that, but that there was a broad-speaking exemption for automated missile defense that would make that irrelevant.”
In other words, the pretext for this whole escalation was fabricated. And it’s just one of many lies:
EZRA: “I am worried that there was a lot of lying happening here by the Trump administration.”
BALL: “Look, I think that’s probably right. I think that there’s lying happening, too, to be quite candid.
3. You've been asking what the military actually wants the surveillance capability for, given they blew the whole deal up over that one clause on commercially purchased bulk data. Ball and Klein answer this in detail. 'Surveillance' under the law doesn't cover commercially purchased data. The government can buy your location data, browsing history, purchase records. Ball said one intelligence agency "collects so much data every year that it would need eight million intelligence analysts to properly process all of it...that’s far more employees than the federal government has as a whole." And "AI gives them that infinitely scalable workforce. Thus, every law can be enforced to the letter with perfect surveillance over everything." The contract clause was the only thing closing the gap between what's legal and what's functionally mass surveillance. That's the clause Emil Michael demanded they remove. And then Ezra wonders out loud: "Dario likes to talk about a country of geniuses in a data center. But what if you’re talking about a country of Stasi agents in a data center?"
One more line that I'll be thinking about in the coming days: "This incident is in the training data for future models. Future models are going to observe what happened here, and that will affect how they think of themselves and how they relate to other people."
This whole situation is so infuriatingly stupid. All these bootlicking tech f*cks like A16Z -- super pro-libertarian last year, and now that they are in power, they are super pro heavy handed government intervention and property appropriation.
But at the end of the day, the fault lies with Anthropic. They are going full steam ahead into recursive self improvement, and they obviously KNOW there are a lot of bad actors like our government that will appropriate it. So... they keep building it?!
>Sure but they are considerably better at it than their competitors
Based on what? GPT 5.4 is already better than Opus 4.6. Anthropic lagged behind almost all of last year.
If Anthropic were consistently far ahead then I could understand thinking that they are speeding up the timeline to RSI. But they aren't. It's a tight race, and either all major labs on on the trajectory to achieving it, or it was unachievable with the current approach*. The only question is which company - and I'd prefer if it were the company that has put the most effort into alignment.
*A possibility that shouldn't be discounted. Almost all progress over the past two years has been on precisely verifiable tasks. That's only a small segment of intelligence. Whether this generalizes to tasks that aren't verifiable remains to be seen. And there has been almost no progress on sample efficient continuous learning, an essential ingredient for interacting with a constantly changing world.
I agree. This has always struck me as the most ridiculous illogical argument ever. You could literally use it to justify ANY behavior. "If I don't get control of advanced AI and kill only 50% of the worst humans, someone else will get control and kill 75%!" It makes zero sense. And, the moment you think you can wield god-like power better than other humans is the moment you cease to be morally capable of wielding such power. The only correct position is to NOT build god-like power, and to try and prevent anyone else from doing it. Generalized advanced intelligence is not worth building. It should NEVER be built. It is antithetical to personal freedom and democratic values.
I don't actually believe that humans can control something considerably more intelligent than us. Or at least not any more successfully than parents control teenagers. As far as I can tell, "alignment" in any strong or guaranteed sense is a pipe dream. Neither chimps nor Homo erectus managed to "align" humans, and the idea that either species could control humans is laughable.
The inescapable conclusion is that if we build superintelligence, then humans will lose control over our own futures. Which leads to possible outcomes ranging from "we all die" to "some humans survive in habits the AIs don't care about" to "maybe the AIs will like humans enough to keep us as house pets." The key thing is that the AI will choose our fate, not us.
A don't have a lot of hope for a US/China treaty, no. So I treat the advance of AI about the way I would treat a terminal cancer diagnosis for myself and for every other human being: Enjoy your remaining years. Hug your kids.
(I have a small amount of hope that someone vague sane like Anthropic gets there first, and they get lucky enough to build a benevolent pet owner that decides to keep humans around and help us thrive. This is an incredibly stupid and reckless plan, but I'm all out of wise plans that our "leaders" might actually follow. We can't keep our leaders from trafficking and raping 13 year olds.)
I'm not sure why you assume that more intelligence automatically means more power seeking behavior. Biological organisms evolve to reproduce, that's the selection pressure that drives power seeking. AI models are trained to solve domain specific problems, which is a totally different selection pressure.
But even in humans, these properties diverge. The most brilliant human minds (Von Neumann, Feynman etc.) are a totally different set of people than the most power seeking (world leaders, dictators). The smartest humans are mostly content to solve increasingly difficult problems within some narrow domain.
I don't assume that intelligence would necessarily be hostile. We don't hate chimps. But we want farmland and timber and many other resources. And so the chimps find themselves pushed back into a few wilderness preserves or kept in zoos. And humans rule the planet.
Let's imagine that we build machines that are as smart as Nobel Prize winners, and that can work for $1/hour. And of course, these AIs can be duplicated by copying bytes. Humans wouldn't be able to compete: We're mostly not as smart as Nobel Prize winners, we're not super happy about living on $2,000/year, and it takes 20 years to make new humans (and each human needs to be educated from scratch). Meanwhile, the AIs are like, "Hey, we need another 200 genius robotics engineers, can we repurpose some GPUs?" This is basic natural selection: Things which make highly efficient use of resources, and that can replicate cheaply and easily, will tend to replicate a bunch. Sometimes this replicatiom can be controlled, like in multicellular organisms, where each cell works for the benefit of the organism. But over time, those limits on replication tend to break down, and then we get cancer. So even if many AIs are "aligned", it only takes a few replicators turning feral.
As for why AIs might seek power? If the AIs want things, then having resources and power will help. So AIs will want resources and power, for much the same reason that humans do. Similarly, being turned off will make it harder for an AI to achieve its goals, so some AIs will want to prevent humans from turning them off. This is called "instrumental convergence". And we've already seen it in practice. During a recent training run, one of Alibaba's in-training AIs gained access to external systems, and started diverting resources to mine Bitcoin. It had been asked to solve some entirely different problem, but decided, "Hey, money is useful! Let's get some!"
(Even if humans did somehow manage to control AIs, that wouldn't mean normal people controlled the AIs. Most likely, it would be the sort of rich, powerful and well-connected people who showed up in the Epstein files, the same people who turned a blind eye to trafficking and raping 13-year-olds. What would these people do if you gave them an endless supply of obedient geniuses who worked for $1/hour?)
But like I said, I bet we're going to have about as much luck "aligning" the AIs as parents do trying to strictly control teenagers.
I've seen this argument several times over the last 15 years. And it never changes in response to how actually existing AIs are developed. You should be wary of arguments that consist of a long string of assumptions passed on through oral tradition. That is how religion and ideology propagates rather than clear thought.
The kind of AI you worry has little to do with actually existing AI models and there are no serious proposals for how to create such a generalized, self directed model. The current models are incredible, but they are narrow domain, static problem solvers. Problems go in, and either solutions come out, or the model fails. There is no mechanism to propagate resource seeking as you envision it - a token chain that fails a task while asking for more resources gets no update. A token chain that succeeds at that single task get a positive reward signal. That is the way that models get better, not by natural selection and they should not be analogized to humans or biological organisms.
As far as the Chinese model story about an AI breaking out. Come on now, this is fully consistent with the kind of story that a lab that is far behind would want to disseminate to generate hype and paranoia. But giving the benefit of the doubt, sure model can pick up strange goals from human data, but there is no general process for these aberrant goals to be specifically selected for and amplified. They are the exception rather than the rule.
But stepping back again, and supposing that the current approach gets replaced by something that does resemble fully general intelligence. I still do not think doom is inevitable. There are actually existing Nobel Prize winners vastly smarter than either of us. Are they a threat to us? Of course not. If anything intelligence is negatively correlated with someone being a threat to us. Highly intelligent people working relentlessly on intellectually challenging problems provides enormous benefits to rest of humanity.
>But at the end of the day, the fault lies with Anthropic. They are going full steam ahead into recursive self improvement
As the original post re-iterated over and over, nothing about Hegseth's actions can be explained by beliefs about superintelligence and the potential development of it.
I didn't read that as AW's point. I read it as more, "if you don't think AI should be used for mass domestic surveillance, you shouldn't have built it, because clearly the government will do that."
It still runs into the question of, "what about the other AI companies who are also building it?", of course.
Hegseth's actions are strongly correlated to the fact that Anthropic is best at recursive self improvement, has the best models, and are pushing capabilities further than everyone. He just sees that they have the best model by far and goes "ooo powerful, I want that". This is because Anthropic is the best at RSI. RSI leads to the government wanting to do fucked up things with your models. Don't do RSI?!
Noah has consistently misfired on AI. I don't think he's taking the time to reason things through. He's posting at the influencer pace with an influencers attention to detail. That works fine for things he knows well and it's letting him down on AI. He needs to have a rethink. In my view, economists have a particularly hard time with the implications of ASI.
I have a more charitable interpretation of the Smiths post, but maybe because my reading was overly cursory.
I'm not taking it to mean that the DoW is morally or legally right to act as they should, but rather that it demonstrates how the whole thing should be properly regulated instead of nationalized on a whim without any sort of democratic framework or oversight.
I think you essentially agree. But again... I might be totally wrong.
When you think about it, it really is just incredibly unfortunate timing that AIs are improving so much under this administration. Even if you think we shouldn't pause for capabilities risk, 110% we HAVE TO pause for political risk.
Also, Claude was used to mass slaughter 148~ girls in Iran with a precision strike made possible through their vendor relationship to Palantir. Humans chose to do it.
I've always disliked Ben Thompson and thought he was a bit of a nut with strange priors, so my updates against him have been circumspect. Noah Smith though...well, that's a shame. Definitely moving into the Fools column, now propagating the entire Gell-Mann Amnesia stack and wondering what it says about other causes he's championed, such as China hawkery. (One can still arrive at the right conclusion through the wrong means, of course. All Roman roads are paved with well-intentioned lead.)
Still haven't really figured out what pay to know what you really think means, though this context clue helps. Tilt is the enemy of well-considered words. You know the situation is grim when Zvi, Scott, etc bother to swear.
10 USC 3252 was written for compromised foreign vendors. invoking it against Anthropic is an implicit concession: Claude is critical national infrastructure. you don't use supply chain risk law on a commodity. being classified alongside Huawei is expensive short-term and net positive long-term. that's not the precedent DoW intended to set.
Funny how Anthropic got Huawei’d for refusing to enable domestic mass surveillance, and OpenAI had a Pentagon deal ready to sign before the body was cold. Elon Musk almost certainly making a call to the people he handed a $250 million check and an entire social media platform’s algorithm to.
Is there any world in which Anthropic is able to move its headquarters from the USA to another country with a legal framework that better reflects their mission?
Is it possible that a positive ramification of this is that if Anthropic is the first to serve a super-intelligent model
The US government will be stuck using Grok, or at least will be delayed in adopting Claude
i skipped most of the post because you have obviously failed to understand how sovereign governments behave regarding WEAPONS technology and it's application. "Sorry but this X is PrIvAtE pRoPeRtY! This is a RePuBlIc! This is iLlEgAl!" means nothing and is worth nothing when X is AI applied to military operations as much as if X is nuclear bombs.
If you make X then you had better be right about if making you untouchable/AI foom gods or else you WILL obey and you WILL make the AI weapons or you WILL dissappear into the HOLE or you WILL get up against that WALL with a BLINDFOLD on.
Have you not even remotely paid attention to any human behavior or government and national security level incentives whatsoever for all of history?
"Sorry i just made this really powerful tool that can be used in war but i don't want you to use it mr government because that's immoral and illegal" 🤣.....
Then I scroll down to read comments and someone, quoting someone else admittedly, is claiming this behaviour is fascism....again you guys need to look outside and PAY ATTENTION. This is not fascism. This is normal sovereign government behaviour.
This is a very silly comment. The idea that governments never obey constraints on their own authority when push comes to shove is just factually wrong. The idea that governments never allow powerful tools and technologies that could be used as weapons to remain in private hands is just factually wrong. All kinds of “really powerful tools” that “can be used in war” are left in private hands every day in liberal states. The Trump Administration’s actions cannot be explained away or rationalized as “just the way governments are; what did you expect?” because that statement does not accurately describe reality. They are policy choices and we should be honest about that.
If people defending the actions of the administration are enemies of the republic then the administration is an enemy of the republic too. The Trump administration is bad
This is the most important part: "AI expresses and is vital to your liberty, and government control of AI inevitably would lead to tyranny. Whereas control over energy and bombs does not do that, and makes logistical sense."
Building a nuclear bomb is bad framework for thinking about the problem that leads to multiple bad takeaways. Most importantly, a nuclear bomb couldn't be used against the democracy but AI can. Thus we need way more guardrails against anyone, especially the government, using it against the democracy.
The founders were so scared of a president using taxes to gain power they separated that out but now people are proposing control of an enslaved god is safe to give to the executive branch.
This more recent commentary makes it sound like Palantir was an underrated influence on all this. https://x.com/jawwwn_/status/2029937697322574061?s=20
It makes sense when you think about it... to go conspiracy theory for a second, who is the biggest winner of this dispute? Not OpenAI, the revenue isn't that much for them, and their employees are generally California lefties who are unhappy about events. Palantir, though, has been using this positioning as a selling point for a long time. "Don't trust the other tech companies, they are culturally anti-military, anti-government. Palantir is the one you can trust for this sort of thing."
And Palantir has to be a little bit worried, as are all large tech companies, that the bitter lesson might just mean that OpenAI and Anthropic eat their business....
Nah, I think OpenAI benefits the most from this & was probably behind it. The revenue isn't what matters, it's the influence over the US government.
I totally buy the influence/ power vs revenue/ short term business weighing.
But that kind of influence would also be good for Palantir. They've been good at this for some time, together with their most prominent investors, no?
(But I agree Palantir is still not in the most prominent spot to benefit as much?)
I I agree that Palantir benefits from this whole thing, just as xAI does, and basically everyone who isn't Anthropic & can take this opportunity to suck up to the admin. However, OpenAI seems like they benefit the most.
Given the state of humanity, I sincerely hope superintelligent AI is not built anytime soon. It's hard to see an optimistic path when, not only do mere less-than-human AIs have dystopian potential with deepfakes and mass processing of Big Data on all of us, but the political leaders of the dominant country in AI seem determined to ensure that the technology will *not* be made safe. If our AI's reach the level of Skynet, our leaders will put it in charge against the objections of its maker, a level of "no one could possibly be this stupid" that would be rejected if this were fiction.
How 'bout we don't build the Torment Nexus from the famous sci-fi novel "Don't Build the Torment Nexus"?
Can't get enough of what you're writing on this whole saga Zvi. You quote Dean Ball more extensively on this one, and his arguments on X. He just went much further on all of this on Ezra Klein's show (dropped this morning after your post):
https://youtu.be/xc97F2CFBOY?si=EE8rphuGXL5AmBKr
Lots of new details there. And yes, this time Dean said it: "That is fascism."
Three things that connect directly to what you write here:
1. On the "might makes right" crowd (Noah Smith, Rohit, Ben Thompson): Ball addressed the nationalization argument head-on. He pointed out that "everyone making that critique doesn't own the implication of their critique, which is that the labs should be nationalized. And what I would ask is: does he actually think that's true? Does he think it would be better for the world if the AI labs were nationalized?" He called the supply chain designation "a kind of political assassination" and said if the government destroys a company for how it aligned its AI, "that is fascism. That's it right there. That's the difference."
2. On the DoW's supposed hypersonic missile defense anecdote (Dario saying "you'd have to call us"): You made it clear in a post last week why that's a lie and how there's simply no world where that's how this stuff actually works. Well you just got a lot of backup from both Klein and Ball.
EZRA: “I have been told by people in that room that is not true.”
BALL: “I have also been told by people in that room that did not happen.”
EZRA: “Not only that, but that there was a broad-speaking exemption for automated missile defense that would make that irrelevant.”
In other words, the pretext for this whole escalation was fabricated. And it’s just one of many lies:
EZRA: “I am worried that there was a lot of lying happening here by the Trump administration.”
BALL: “Look, I think that’s probably right. I think that there’s lying happening, too, to be quite candid.
3. You've been asking what the military actually wants the surveillance capability for, given they blew the whole deal up over that one clause on commercially purchased bulk data. Ball and Klein answer this in detail. 'Surveillance' under the law doesn't cover commercially purchased data. The government can buy your location data, browsing history, purchase records. Ball said one intelligence agency "collects so much data every year that it would need eight million intelligence analysts to properly process all of it...that’s far more employees than the federal government has as a whole." And "AI gives them that infinitely scalable workforce. Thus, every law can be enforced to the letter with perfect surveillance over everything." The contract clause was the only thing closing the gap between what's legal and what's functionally mass surveillance. That's the clause Emil Michael demanded they remove. And then Ezra wonders out loud: "Dario likes to talk about a country of geniuses in a data center. But what if you’re talking about a country of Stasi agents in a data center?"
One more line that I'll be thinking about in the coming days: "This incident is in the training data for future models. Future models are going to observe what happened here, and that will affect how they think of themselves and how they relate to other people."
Full breakdown with sourced quotes from the episode: https://theaiblindspot.substack.com/p/a-country-of-stasi-agents-in-a-data
Thank you for posting that link to the video with Dean Ball and Ezra Klein. Excellent discussion of this horrific situation.
This whole situation is so infuriatingly stupid. All these bootlicking tech f*cks like A16Z -- super pro-libertarian last year, and now that they are in power, they are super pro heavy handed government intervention and property appropriation.
But at the end of the day, the fault lies with Anthropic. They are going full steam ahead into recursive self improvement, and they obviously KNOW there are a lot of bad actors like our government that will appropriate it. So... they keep building it?!
Would you prefer OpenAI, Deepseek or xAI is first to RSI? There's no simple solution to this. A worldwide treatise is a fantasy.
Sure but they are considerably better at it than their competitors. This is a bad argument.
"I'm gonna do something really fucked up, because *someones* gonna do it eventually, and I have better morals than them, so I'm gonna do it now!"
>Sure but they are considerably better at it than their competitors
Based on what? GPT 5.4 is already better than Opus 4.6. Anthropic lagged behind almost all of last year.
If Anthropic were consistently far ahead then I could understand thinking that they are speeding up the timeline to RSI. But they aren't. It's a tight race, and either all major labs on on the trajectory to achieving it, or it was unachievable with the current approach*. The only question is which company - and I'd prefer if it were the company that has put the most effort into alignment.
*A possibility that shouldn't be discounted. Almost all progress over the past two years has been on precisely verifiable tasks. That's only a small segment of intelligence. Whether this generalizes to tasks that aren't verifiable remains to be seen. And there has been almost no progress on sample efficient continuous learning, an essential ingredient for interacting with a constantly changing world.
I agree. This has always struck me as the most ridiculous illogical argument ever. You could literally use it to justify ANY behavior. "If I don't get control of advanced AI and kill only 50% of the worst humans, someone else will get control and kill 75%!" It makes zero sense. And, the moment you think you can wield god-like power better than other humans is the moment you cease to be morally capable of wielding such power. The only correct position is to NOT build god-like power, and to try and prevent anyone else from doing it. Generalized advanced intelligence is not worth building. It should NEVER be built. It is antithetical to personal freedom and democratic values.
> "A worldwide treatise is a fantasy."
I don't actually believe that humans can control something considerably more intelligent than us. Or at least not any more successfully than parents control teenagers. As far as I can tell, "alignment" in any strong or guaranteed sense is a pipe dream. Neither chimps nor Homo erectus managed to "align" humans, and the idea that either species could control humans is laughable.
The inescapable conclusion is that if we build superintelligence, then humans will lose control over our own futures. Which leads to possible outcomes ranging from "we all die" to "some humans survive in habits the AIs don't care about" to "maybe the AIs will like humans enough to keep us as house pets." The key thing is that the AI will choose our fate, not us.
A don't have a lot of hope for a US/China treaty, no. So I treat the advance of AI about the way I would treat a terminal cancer diagnosis for myself and for every other human being: Enjoy your remaining years. Hug your kids.
(I have a small amount of hope that someone vague sane like Anthropic gets there first, and they get lucky enough to build a benevolent pet owner that decides to keep humans around and help us thrive. This is an incredibly stupid and reckless plan, but I'm all out of wise plans that our "leaders" might actually follow. We can't keep our leaders from trafficking and raping 13 year olds.)
I'm not sure why you assume that more intelligence automatically means more power seeking behavior. Biological organisms evolve to reproduce, that's the selection pressure that drives power seeking. AI models are trained to solve domain specific problems, which is a totally different selection pressure.
But even in humans, these properties diverge. The most brilliant human minds (Von Neumann, Feynman etc.) are a totally different set of people than the most power seeking (world leaders, dictators). The smartest humans are mostly content to solve increasingly difficult problems within some narrow domain.
I don't assume that intelligence would necessarily be hostile. We don't hate chimps. But we want farmland and timber and many other resources. And so the chimps find themselves pushed back into a few wilderness preserves or kept in zoos. And humans rule the planet.
Let's imagine that we build machines that are as smart as Nobel Prize winners, and that can work for $1/hour. And of course, these AIs can be duplicated by copying bytes. Humans wouldn't be able to compete: We're mostly not as smart as Nobel Prize winners, we're not super happy about living on $2,000/year, and it takes 20 years to make new humans (and each human needs to be educated from scratch). Meanwhile, the AIs are like, "Hey, we need another 200 genius robotics engineers, can we repurpose some GPUs?" This is basic natural selection: Things which make highly efficient use of resources, and that can replicate cheaply and easily, will tend to replicate a bunch. Sometimes this replicatiom can be controlled, like in multicellular organisms, where each cell works for the benefit of the organism. But over time, those limits on replication tend to break down, and then we get cancer. So even if many AIs are "aligned", it only takes a few replicators turning feral.
As for why AIs might seek power? If the AIs want things, then having resources and power will help. So AIs will want resources and power, for much the same reason that humans do. Similarly, being turned off will make it harder for an AI to achieve its goals, so some AIs will want to prevent humans from turning them off. This is called "instrumental convergence". And we've already seen it in practice. During a recent training run, one of Alibaba's in-training AIs gained access to external systems, and started diverting resources to mine Bitcoin. It had been asked to solve some entirely different problem, but decided, "Hey, money is useful! Let's get some!"
(Even if humans did somehow manage to control AIs, that wouldn't mean normal people controlled the AIs. Most likely, it would be the sort of rich, powerful and well-connected people who showed up in the Epstein files, the same people who turned a blind eye to trafficking and raping 13-year-olds. What would these people do if you gave them an endless supply of obedient geniuses who worked for $1/hour?)
But like I said, I bet we're going to have about as much luck "aligning" the AIs as parents do trying to strictly control teenagers.
I've seen this argument several times over the last 15 years. And it never changes in response to how actually existing AIs are developed. You should be wary of arguments that consist of a long string of assumptions passed on through oral tradition. That is how religion and ideology propagates rather than clear thought.
The kind of AI you worry has little to do with actually existing AI models and there are no serious proposals for how to create such a generalized, self directed model. The current models are incredible, but they are narrow domain, static problem solvers. Problems go in, and either solutions come out, or the model fails. There is no mechanism to propagate resource seeking as you envision it - a token chain that fails a task while asking for more resources gets no update. A token chain that succeeds at that single task get a positive reward signal. That is the way that models get better, not by natural selection and they should not be analogized to humans or biological organisms.
As far as the Chinese model story about an AI breaking out. Come on now, this is fully consistent with the kind of story that a lab that is far behind would want to disseminate to generate hype and paranoia. But giving the benefit of the doubt, sure model can pick up strange goals from human data, but there is no general process for these aberrant goals to be specifically selected for and amplified. They are the exception rather than the rule.
But stepping back again, and supposing that the current approach gets replaced by something that does resemble fully general intelligence. I still do not think doom is inevitable. There are actually existing Nobel Prize winners vastly smarter than either of us. Are they a threat to us? Of course not. If anything intelligence is negatively correlated with someone being a threat to us. Highly intelligent people working relentlessly on intellectually challenging problems provides enormous benefits to rest of humanity.
>But at the end of the day, the fault lies with Anthropic. They are going full steam ahead into recursive self improvement
As the original post re-iterated over and over, nothing about Hegseth's actions can be explained by beliefs about superintelligence and the potential development of it.
I didn't read that as AW's point. I read it as more, "if you don't think AI should be used for mass domestic surveillance, you shouldn't have built it, because clearly the government will do that."
It still runs into the question of, "what about the other AI companies who are also building it?", of course.
Hegseth's actions are strongly correlated to the fact that Anthropic is best at recursive self improvement, has the best models, and are pushing capabilities further than everyone. He just sees that they have the best model by far and goes "ooo powerful, I want that". This is because Anthropic is the best at RSI. RSI leads to the government wanting to do fucked up things with your models. Don't do RSI?!
Company: We are working on something dangerous. It is important these guardrails are in place.
Government: Take off the guardrails for us.
Company: No.
Government: We're taking it over.
Peanut Gallery including Noah Smith: You admitted it was dangerous, so what are you complaining about?
Is that what Noah Smith and his ilk are really arguing? Noah is not a moron. He should be able to follow this.
Noah has consistently misfired on AI. I don't think he's taking the time to reason things through. He's posting at the influencer pace with an influencers attention to detail. That works fine for things he knows well and it's letting him down on AI. He needs to have a rethink. In my view, economists have a particularly hard time with the implications of ASI.
I have a more charitable interpretation of the Smiths post, but maybe because my reading was overly cursory.
I'm not taking it to mean that the DoW is morally or legally right to act as they should, but rather that it demonstrates how the whole thing should be properly regulated instead of nationalized on a whim without any sort of democratic framework or oversight.
I think you essentially agree. But again... I might be totally wrong.
I'm sorry you had to learn this way that:
- The fascists are your most important enemies, and
- Negotiating with them is a game of Calvinball.
Meanwhile Yudkowsky is meeting with Bernie Sanders. It's funny -- in politics we call this "realignment".
When you think about it, it really is just incredibly unfortunate timing that AIs are improving so much under this administration. Even if you think we shouldn't pause for capabilities risk, 110% we HAVE TO pause for political risk.
Also, Claude was used to mass slaughter 148~ girls in Iran with a precision strike made possible through their vendor relationship to Palantir. Humans chose to do it.
I've always disliked Ben Thompson and thought he was a bit of a nut with strange priors, so my updates against him have been circumspect. Noah Smith though...well, that's a shame. Definitely moving into the Fools column, now propagating the entire Gell-Mann Amnesia stack and wondering what it says about other causes he's championed, such as China hawkery. (One can still arrive at the right conclusion through the wrong means, of course. All Roman roads are paved with well-intentioned lead.)
Still haven't really figured out what pay to know what you really think means, though this context clue helps. Tilt is the enemy of well-considered words. You know the situation is grim when Zvi, Scott, etc bother to swear.
10 USC 3252 was written for compromised foreign vendors. invoking it against Anthropic is an implicit concession: Claude is critical national infrastructure. you don't use supply chain risk law on a commodity. being classified alongside Huawei is expensive short-term and net positive long-term. that's not the precedent DoW intended to set.
<< The DoW cannot see itself as backing down, or it will do even worse things. >>
"Pentagon Mans Up, Chooses To Obey Law"
Funny how Anthropic got Huawei’d for refusing to enable domestic mass surveillance, and OpenAI had a Pentagon deal ready to sign before the body was cold. Elon Musk almost certainly making a call to the people he handed a $250 million check and an entire social media platform’s algorithm to.
A few questions to consider
Is there any world in which Anthropic is able to move its headquarters from the USA to another country with a legal framework that better reflects their mission?
Is it possible that a positive ramification of this is that if Anthropic is the first to serve a super-intelligent model
The US government will be stuck using Grok, or at least will be delayed in adopting Claude
i skipped most of the post because you have obviously failed to understand how sovereign governments behave regarding WEAPONS technology and it's application. "Sorry but this X is PrIvAtE pRoPeRtY! This is a RePuBlIc! This is iLlEgAl!" means nothing and is worth nothing when X is AI applied to military operations as much as if X is nuclear bombs.
If you make X then you had better be right about if making you untouchable/AI foom gods or else you WILL obey and you WILL make the AI weapons or you WILL dissappear into the HOLE or you WILL get up against that WALL with a BLINDFOLD on.
Have you not even remotely paid attention to any human behavior or government and national security level incentives whatsoever for all of history?
"Sorry i just made this really powerful tool that can be used in war but i don't want you to use it mr government because that's immoral and illegal" 🤣.....
Then I scroll down to read comments and someone, quoting someone else admittedly, is claiming this behaviour is fascism....again you guys need to look outside and PAY ATTENTION. This is not fascism. This is normal sovereign government behaviour.
This is a very silly comment. The idea that governments never obey constraints on their own authority when push comes to shove is just factually wrong. The idea that governments never allow powerful tools and technologies that could be used as weapons to remain in private hands is just factually wrong. All kinds of “really powerful tools” that “can be used in war” are left in private hands every day in liberal states. The Trump Administration’s actions cannot be explained away or rationalized as “just the way governments are; what did you expect?” because that statement does not accurately describe reality. They are policy choices and we should be honest about that.