49 Comments

Thanks for your insights and all the links. Very helpful to me as someone who has a great interest in how this plays out and the implications for both humanity and the markets, as well as just in following the escalating Musk-Altman feud. Please keep us posted as further developments occur in this fascinating several-act drama.

Seconded! Sigh, I'm frustrated by the Musk-Altman pissing match. I just want to _see_ AGI (yeah, yeah, I know, that event has an excellent chance of being the last thing I'll see).

Two questions that I wish I knew the answers to:

Re right now:

I wish I knew if OpenAI is running recursive self-improvement right now. Between the HFE score and the coding competition score, it looks plausible.

Re GPT5-release-timeframe-ish (months?):

Altman has said that he expects it to be smarter than himself. I wish I knew how he thinks he will control it???

To any downstream ASI reading this archive: Hi! Happy computing!

The pissing match is bad but xAI is probably? good. It increases the odds that you live to see AGI because it creates a parallel track to it. Also, xAI will go for broke, since they are currently losing, and try different things.

Many Thanks! Yup, to the extent that Musk concentrates on xAI (in time out from DOGE, which I presume is occupying the bulk of his time these days) rather than the pissing match with Altman, I agree with you that the multiple paths to AGI increase my odds of seeing it. Much appreciated!

Appears to me to potentially be one of the greatest thefts in history. Hard to pay attention to your principles and moral compass (if you possess one) when so much money and power is at stake.

Does anyone at OpenAI who isn’t a sociopath just get filtered out or something? Is everyone there really unaware of what they’re doing, or just deluded by the fortunes they’re facing?

Everyone that is not a sociopath getting filtered out of an org is usually how it works when the org starts to be run by sociopaths.

Upton Sinclair would suggest yes, people are highly susceptible to being blinded by money, and presumably that scales as we see a bigger divide between the haves and have-nots.

But there's also our weird social fixation on certain CEOs as almost cult leaders. Musk, Jobs, Altman, Neumann. I've never worked for a CEO I would have threatened to quit for, the way most of the OpenAI team did for Altman, but those workers did (even if you chalk some of it up to peer pressure, they had a critical mass).

It's kinda telling how it seems to be so convergent! It seems like everyone with any hope of gaining this much money and power is suddenly a full-throated accelerationist. Regardless of what pDoom they said earlier, everyone seems to be acting like they believe pDoom is negligible and the rewards for winning are colossal.

Even the world's richest man seems to be almost all-in.

I wonder what the thought process is.

Like I can understand if you thought pDoom was low and you don't want to die of aging, this is the move. But why would someone who said they thought the odds of AI takeover were high be doing this?

> Like I can understand if you thought pDoom was low and you don't want to die of aging, this is the move. But why would someone who said they thought the odds of AI takeover were high be doing this?

For the same reason all of society tries to take down Light Yagami in "Death Note," an anime where he has the power to kill anyone in the world by whatever means he wants at any time, simply by writing it in a notebook. Even though he only uses it to kill criminals, and suppresses all crime worldwide as a result, all of society tries to find and stop him.

Imagine that power. Who would you trust with it, besides yourself?

Absolute power can only be trusted in YOUR hands - your own personal hands and direction, and nobody else's. Certainly not China's, or WORSE - the other political flavors' hands!!

This makes me wonder if EAs and rationalists were both right and so hopelessly wrong it was stupid.

Right about the danger.

Wrong about how the response would be. It would be one thing if it were a little balanced, if at least one CEO with a hope of developing their own AI were saying "let's actually be careful...". But no, even Dario Amodei seems to be "MAXIMUM POWER! Fuck China, let's hog the GPUs and get AGI next year!".

I don't want to sound defeatist... and I have previously identified much more with the acceleration camp. But when it wasn't even a debate, when it was just going to be full speed ahead and everyone was going to betray instantly...

> Right about the danger.

> Wrong about how the response would be.

100% agree - I think this same dynamic is what we've been seeing unfold in Zvi's posts over time, and I've taken a similar journey to yours and (presumably) Zvi's.

From "technology is good, we should definitely push ahead because that way we can solve our complex problems" to "no, wait, actually it's getting EXTREMELY clear that coordination is impossible and you idiots are going to accelerate ever onwards, eventually running us into a cliff that everyone could see coming for weeks at 1,000 mph."

Right but for a long time that was debatable. I thought there might be some brief attempt at coordination. A fig leaf would be given to safety. All the AI companies would agree what not to unlock. Some international agreements.

I thought it would fail slowly and quietly, like government labs pushing the limits in secret, or you could go on the dark web and access AI models hosted in some secret place that are stronger than current limits. Or get them to do stuff that currently isn't allowed. (Hacking for hire etc)

So you get to the same place in the end but stretched out over time.

Instead it seems to be just "fuck it, let's ball," and it's going to be beyond full throttle, with efforts scaling to world-war levels. Nobody is going to quietly wait for a chance to make AGI for "just" $10 billion in hardware if they can get it for a trillion three years earlier.

China also isn't intending to take their time developing a domestic IC infrastructure over 10 years but appears to be looking for any routes in parallel including smuggling, cloud rental, optimizations, and just really big ICs on domestic processes. This will keep them in the game and prevent any kind of "coordination" because there's no time, China is right on our ass...

Yeah, I think the game theory on this one is just exceptionally harsh. It's literally almost like a Pascal's wager.

The terms are:

1. We either treat this as a race, or we don't.

2. If we do treat it as a race, there's a potential long-term downside (AI risk) that all humanity will be destroyed.

3. If we don't treat it as a race, the Other is going to race ahead and subject us to the long-term AI risk anyways, PLUS there is a short-term certainty that the Other will hugely advance in manufacturing, sigint, and military capabilities, and we consider that a nigh-guaranteed existential threat already.

4. We MUST treat this as a race.
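
To make that concrete, here is a toy sketch of the payoff structure in points 1-4, with made-up illustrative numbers (not anyone's actual estimates) chosen only to encode the ordering of outcomes described above:

```python
# Toy model of the race argument above. Numbers are illustrative only.
# Each side chooses "race" or "hold"; payoffs below are for "us" and the
# game is symmetric.

P_DOOM = 0.3        # assumed long-term AI risk if anyone races (hypothetical)
DOOM_COST = -100    # losing everything
WIN_BONUS = 50      # value of getting AGI first
FALL_BEHIND = -80   # "the Other hugely advances" while we hold back

def payoff(us, them):
    """Expected payoff to us, given each side's choice ('race' or 'hold')."""
    expected = P_DOOM * DOOM_COST if 'race' in (us, them) else 0.0
    if us == 'race' and them == 'hold':
        expected += WIN_BONUS
    if us == 'hold' and them == 'race':
        expected += FALL_BEHIND
    return expected

for them in ('race', 'hold'):
    best = max(('race', 'hold'), key=lambda us: payoff(us, them))
    print(f"If they {them}: race={payoff('race', them):.0f}, "
          f"hold={payoff('hold', them):.0f} -> best response: {best}")
# Under these assumed numbers, 'race' is the best response to either choice,
# which is the "we MUST treat this as a race" conclusion, even though both
# sides holding (payoff 0) beats both sides racing (-30): a prisoner's dilemma.
```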

As a side note automated AI drones allow way more flexible mass murder than a mere death note. You don't need to know what your victims look like, just their approximate location and a simple text description of the kinds of things that you want the drone to use to make the decision. It's way more scalable.

And that's just small aircraft with an AI ASIC aboard. You could target DNA...

Yup. Existential threats are real, as is the threat of literal "boots on faces, forever."

I point this out to anyone that says "zillionaires who own the robotic means of production couldn't possibly control future society, regular people will rise up."

$300, easy-to-make drones that automatically image-recognize enemy soldiers and suicide bomb them are ALREADY a major factor in the Ukraine war, today.

You think you have any chance against a killbot-patrolled, zillionaire walled compound, or just a generation or two of tech advance later, rich people with insect-sized killbot swarms around their person protecting them?

That said, I'm an optimist, and think enough billionaires have signed up to give half or most of their wealth away that people in the US, at least, are really likely to get UBI-ed.

Yep. Also, power armor would finally have a reason to exist. (It's just a fully sealed suit of body armor to protect VIPs from drones and airborne gas attacks. The main protection is a swarm of active-defense interceptor drones around the VIP.)

Just one other thought: in the scenario of "1 person owns the United States and the government" it can be, well, better than now. That person could spend just 10 percent of their resources keeping the population in unaging luxury.

There still would be jobs - 1 person can't keep the AIs in check. There would be millions of managing/auditing jobs where you semi randomly load up an AI model, developed from a diverse lineage, and in an isolated sandbox audit what the AI swarm is doing.

It would be some kind of weird workplace where semi rogue AIs are constantly trying to bribe you but so are security auditors. Everyone is trying to get their opponent to break a rule and get it recorded to a blockchain so they get fired or deleted. (Rulebreaking for AIs is punished with deletion while humans are merely fired)

Maybe. I won't say this is how it will go, just that when there are effectively infinite resources, even the subjects of a trillionaire, living on the crumbs of charity and poorly paid jobs, are impossibly rich in real terms.

Tech CEOs will say anything if it helps their recruiting. When AI was a niche thing, they talked up existential risk to appeal to the narrow audience who was thinking a lot about AI. Now that it’s obvious to everyone that AI is important, they talk down existential risk to help them recruit the software engineers with more mainstream opinions.

I think Altman is saying in the quoted passage that the safety measures will be unpopular, not that the lack of safety will be unpopular.

That was my interpretation too.

Yep, that’s what the words and the grammar mean, it’s not really ambiguous; it also doesn’t really change much in terms of the overall argument.

I reread the passage, and I believe my original interpretation is correct if you look at the broader context. I invite him to clarify if this is wrong, and I would be happy to be in error and apologize.

I think it was 'pretty' clear: the desire to avoid recklessness means the need for safety measures that for some will be unpopular. But for shits and giggles I asked Claude:

Let me break this complex sentence down into its key components:

First caveat: "While we never want to be reckless"

- Acknowledges need for caution/responsibility

Second caveat: "there will likely be some major decisions and limitations related to AGI safety that will be unpopular"

- Recognizes that some safety measures may restrict what people can do
- Admits these won't be popular with everyone

Main point: "directionally, as we get closer to achieving AGI, we believe that trending more towards individual empowerment is important"

- Despite safety needs, they believe we should generally move toward giving individuals more control/agency
- "Directionally" suggests this is about the overall trend/balance
- Links timing to AGI development progress

The core message seems to be advocating for a balanced approach that:

- Maintains necessary safety precautions
- But aims to maximize individual freedom/capability where possible
- Sees this balance shifting more toward individual empowerment as AGI development progresses

It reads like a statement trying to reconcile competing priorities between safety/control and individual autonomy, while suggesting that over time, the balance should favor individual empowerment more.

I think you’ve overdone “the mask comes off”. It’s a fun turn of phrase but presumably you’d need equivalent numbers of “the mask comes back”

"It doesn’t tell us what the scale of this ‘intelligence’ is, which is a matter of much debate. What does it mean to be ‘twice as smart’ as the average (let’s simplify and say IQ 100) person? It doesn’t mean ‘IQ 200,’ that’s not how that scale works."

When Peter Voss was advertising a job for an "AI Psychologist" on the SL4 list in 2005, I advised him to get someone who knew Rasch psychometric measures, which potentially let you quantify intelligence as a ratio measure, one where you can say one person is twice as smart as another (or, equivalently, that one question is twice as hard as another). Rasch intelligence measures can be converted to IQ and back, or to mental age, or to probability of being correct on a problem of a given difficulty. As intelligence increases, the odds of being right rise not as a step function, but as an extremely soft logistic curve, with about 2 s.d. (= 30 IQ points = the difference between an adult and a 10-year-old) needed to go from a 25% chance to a 75% chance of being correct, for a problem that average people get right 50% of the time.

For hard problems, higher intelligence gives exponentially higher probabilities of being correct, though still low in an absolute sense. For solving the hardest and most crucial questions, given the lack of dramatically more intelligent people at present, being able to prioritize the important questions, to get the most attempts to answer them even by non-geniuses, is the key. AI changes that rapidly, making not just far more attempts possible, but attempts each with exponentially higher odds of succeeding. AI may be advancing only linearly in intelligence, but it seems to be covering each year most of the gap between 10-year-olds and adults, or between average adults and top professors, likely over 15 IQ points per year, which while not FOOM, is still a potentially rapid takeoff, with multiple society-transforming effects becoming suddenly apparent over weeks or months, effects which will be positive overall because intelligence solves problems, gets the right answer, rejects the old wrong answers. BY DEFINITION!
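
A rough numerical sketch of the relationship described above, assuming the standard Rasch/logistic item-response form and using my own calibration (chosen only to match the 25%-to-75%-over-30-IQ-points description, not taken from the linked post):

```python
import math

# Rasch-style model: P(correct) is a logistic function of (ability - difficulty),
# calibrated so that 2 s.d. = 30 IQ points moves you from 25% to 75%,
# i.e. 2*ln(3) logits per 30 IQ points.
LOGITS_PER_IQ_POINT = 2 * math.log(3) / 30  # ~0.073

def p_correct(iq, item_difficulty_iq=100):
    """Chance of solving an item that an IQ-100 person solves 50% of the time
    when item_difficulty_iq=100; a higher item_difficulty_iq means a harder item."""
    logit = (iq - item_difficulty_iq) * LOGITS_PER_IQ_POINT
    return 1 / (1 + math.exp(-logit))

for iq in (85, 100, 115, 130, 145):
    print(iq, round(p_correct(iq), 2), round(p_correct(iq, item_difficulty_iq=145), 3))
# Average item: 0.25 / 0.5 / 0.75 / 0.9 / 0.96 -- the soft logistic curve above.
# Hard item (difficulty 145): 0.012 / 0.036 / 0.1 / 0.25 / 0.5 -- each 15-IQ step
# roughly triples the odds of success, the "exponentially higher odds" effect.
```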

I think this is the root problem with fears of higher intelligence: they are fears of getting the right answers, fears that we'll abandon the current wrong answers, falsehoods to which many are religiously committed, falsehoods upon which their livelihoods and status depend.

Here's my post on Rasch measures of intelligence and using them to compare different levels of ability at different ages; the technical discussion is in the "Appendix C, misc. notes on Rasch measures" section at the end:

https://substack.com/@enonh/p-149185059

"In another sense, it is the beginning of something for which it’s hard not to say “this time it’s different”; the economic growth in front of us looks astonishing" has a feel of

"A study of price trends during a nuclear exchange: The ICBM boost phase interest rate"

about it. It seems to overlook the point that people will have more pressing concerns than bidding on goods and services during a period when machine intelligence exceeds human intelligence.

On your statement:

"It’s not possible for everyone on Earth to be ‘capable of accomplishing more than the most impactful person today.’ The atoms for it are simply not locally available."

O3-mini-high disagrees. (And I had a hunch as well.) Getting lots of atoms is a natural thing to do if you ever have AI smart enough to run self-replicating robots. Which, since human robot-manufacturing technicians and mine and factory workers exist, and these tasks are thought to require only ordinary levels of skill, would mean "median human level AGI".

Locally means "in the Earth-Moon system, and not in the form of lava, which will take some time to access".

https://chatgpt.com/share/67afe7e3-61b0-800a-90df-8a412c6b1fcc

Now, with that said, you are right. In such a scenario: good luck getting permits to strip-mine the Earth's crust and destroy the biosphere. And society as it currently functions would leave most people near-homeless while a small number of, umm, quadrillionaires hold almost all the resources.

The atoms are there, though, and if evenly divided there are enough.
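
For what it's worth, a back-of-envelope version of the "if evenly divided there are enough" claim, using rough order-of-magnitude figures of my own rather than anything from the linked chat:

```python
# Rough per-capita share of nearby atoms (illustrative orders of magnitude only).
EARTH_CRUST_KG = 2.8e22   # approximate mass of Earth's crust
MOON_KG = 7.3e22          # approximate mass of the Moon
POPULATION = 8e9

per_person_kg = (EARTH_CRUST_KG + MOON_KG) / POPULATION
print(f"{per_person_kg:.1e} kg of crust plus Moon per person")  # ~1.3e13 kg each
# For comparison, all of civilization currently extracts on the order of 1e14 kg
# of material per year, so the per-person share is enormous in principle --
# though, as noted above, actually getting permits to use it is another matter.
```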

The only things that make any sense here are:

1. OpenAI likes to say further growth is near infinite, enough intelligence in 2035 to be 8 billion times today's level of available intelligence. This is to raise funds at the next step up.

2. OpenAI believes we will slam into a wall slightly past AGI. Enough to make lots of profits and keep everyone on Earth employed, since models would still be worse than humans in important skills and areas.

This could happen, maybe. Though yes, it's hard to see the mechanism, because if we live in a world where AGI messes up some things, why doesn't it collect feedback a million ways in parallel and train on that and...

Well, if 1 is true, at least we won't die of aging, but...

> You can also say ‘oh, any effective form of coordination would mean tyranny and that is actually the worst risk from AI’ ...

I'm persuaded that attempting international coordination is the better path, but some of the comments here have reminded me that enforcing such an agreement would indeed require 1984-level surveillance and coercion (as dramatized by some of Jack's flash fiction in Import AI). We have a serious imagination gap and communications challenge here, and I'm concerned that (with apologies for tone policing) persistent mockery may be counterproductive. Mocking cynical deceit like OpenAI's denialist position on job loss is one thing, but mocking sincerely held views is another.

I'd like to see more substantive engagement with the concerns about tyranny. How do we have surveillance pervasive enough to enforce restrictions on AI but somehow still preserve freedom otherwise? (I've only read (a subset of) your work for a couple of weeks, so I'm probably just missing where you've done this elsewhere.)

Do societies that ban underage pornography have to resort to tyranny to do so?

Banning *frontier* AI development is extremely easy and won’t require dramatic scaling up of policing.

Thanks for the helpful analogy. But lots of underage porn gets through, no? What percentage of producers get caught, do you think? That despite the market for it being much much smaller. (And the consequences are much much smaller scale.)

AFAIK, no, not a lot gets through and 99.99% is successfully removed from the Internet, even from the Dark Web. I don't know what percent gets caught but eradication has certainly been successful.

Note that such pornography requires nothing but a victim and a camera. Frontier AI requires *hundreds* of people and millions of dollars in equipment. You can't get away with it for a meaningful period of time, as someone *will* report you and a SWAT team will find you anywhere in the world (just ask Bin Laden).

The Tech Tale fiction at the end of Import AI #393 is an example of the dystopic feel such a restriction on AI development could have: https://open.substack.com/pub/importai/p/import-ai-393-10b-distributed-training Is there something about that story that strikes you as unrealistic or misleading (setting aside the sentience bit)?

1. It assumes a single genius can just magically solve AGI if only they had the right dataset.

2. It trivializes the availability of powerful hardware for this. In a world where AGI development is banned, there will be a cap on new AI-friendly equipment manufacturing, plus measures to make it hard to link multiple GPUs together to work effectively.

3. Even if you did get enough GPUs together, your electric bill would quickly give your plans away.

4. Even if it didn't give you away (solar panels?), there would be a lot of people involved, and one of them will report you to the FBI.

5. It wrongly imagines a world where AI development is seemingly allowed but is heavily monitored. In reality we'd just ban it outright and that's it. If you're suspected of trying to train AI with a serious amount of resources, you'll get the "Bin Laden" treatment.

This is feeling helpful, to flesh out a specific policy regime. (It's more intrusive than the current enforcement against underage porn, though, right?) Let me know if I'm misunderstanding your vision: There's a ban on certain hardware, following some confiscation of current hardware. Electricity use is monitored for anomalously high loads. Internet use is monitored to prevent decentralized training. There will be strong incentives for informants. Not only actually training powerful AI but developing better algorithms will be banned, or at least publishing such advancements will be restricted somewhat like nuclear secrets (to preempt a single individual from being able to do the training with the right dataset). And it's an international agency doing all this surveillance and quashing of knowledge, an entity each major power trusts to be effectively policing the others. And there is international consensus that the limitations this blanket prohibition imposes on the advancement of science and medicine are worth it, necessary sacrifices.

Put in terms of Zvi's Levels of Friction, child porn is currently at level 3-4 in our society, but we will need to make ASI level 5+, no? Even our attempts at curbing nuclear proliferation have been insufficiently strong, despite the inherent advantages of radioactivity being detectable and large-scale machinery being required.

It's a lot easier to build a nuclear bomb than it is to produce a frontier AI model. Most of the nuclear-bomb challenge arises from the complexity of obtaining enriched uranium; the rest is undergrad-level physics.

Just look at how many people it took to get from GPT-3 level models to GPT-4 level models. How are you going to get so many people to work in secrecy, assuming world governments agree to ban such activity and don't do this in secrecy?

Agreed, let's assume that world governments publicly agree to a ban; whether they then proceed in secrecy is the heart of the matter.

The weights for a near-frontier model (R1) are now publicly available, and many of the smartest engineers in the world are currently working to lower the barriers to training and running such models. Step 1 of a global enforcement regime might need to be a "gun buyback" type program to delete all copies of R1 derivatives, no? And that strikes me as pretty invasive.

But I'm admittedly not an expert on this (any more than I am on nuclear non-proliferation). Are you aware of a well-fleshed-out plan somewhere that I could reference?

The importance of R1's weights has been vastly overstated by various actors. In reality the program would be around confiscating hardware, mostly from large corporations.

Does anybody here know anything about how to turn off the internet? Like, all of it, kinda all at once?

Might come in handy.

Also, maybe people won't like having the internet shut off. Oh, well...
