27 Comments
Miles Shuman

How much do you think a company has actually offered Scarlett Johansson to be the voice of their AI? This is me registering a prediction that Zuckerberg will in fact offer her $100 million or more once he, you know, has a standalone app anyone would actually use.

Leo

"Oddly I still don’t see GPT-5-Thinking in the API?" I think it's just GPT-5 with the reasoning parameter set to something other than minimal (medium is the default, and it does think).

LessThanFulfillment

Minimal is still technically GPT-5 Thinking, just restricted to very few thinking tokens. The version called "GPT-5 Instant" in the UI is GPT-5-Chat in the API.
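To make the naming concrete, here is a minimal sketch of how the UI labels might map onto API request payloads. The payload shape, model names, and effort levels here are assumptions based on the discussion above, not verified against the SDK:

```python
# Sketch of how the naming maps onto the API (assumed shapes; illustrative only).
# GPT-5 with reasoning effort "minimal" is still the thinking model, just capped
# to very few thinking tokens; the UI's "GPT-5 Instant" is gpt-5-chat instead.
def build_request(prompt: str, effort: str = "medium") -> dict:
    """Build a Responses-API-style payload; medium effort is the assumed default."""
    allowed = {"minimal", "low", "medium", "high"}
    if effort not in allowed:
        raise ValueError(f"unknown reasoning effort: {effort!r}")
    return {
        "model": "gpt-5",
        "reasoning": {"effort": effort},
        "input": prompt,
    }
```

So under this reading there is no separate "thinking" model to call; you pick the effort level on the one model.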

Askwho Casts AI

Podcast episode for this post, pretty much bang on 2 hours:

https://open.substack.com/pub/dwatvpodcast/p/ai-137-an-openai-app-for-that

Shockz

I'm gonna start adding "carbon chauvinist" to my profiles.

Skull

Well that just makes you sound like a headstrong petrochemicals executive.

Dave92f1

Rare earths aren't rare. China sells a lot only because they produce them cheaply. If Chinese rare earths become hard to get, prices will go up and production will happen in other places. The market can handle this just fine.

The whole US-vs-China hype is getting out of hand. This is not capitalism-vs-communism; China is not an existential threat and Chinese interests are not fundamentally opposed to US interests - both want (or should want) resources to be allocated by markets, freedom of the seas, control of terroristic religious extremism, AI-not-to-kill-us-all, good lives for humans.

Yes, Taiwan is a running sore. But it's not worth WW3.

Shockz

Capitalism-vs-communism is exactly what it is, though. Just because the modern CPC is much more pragmatic than its predecessors about how much of the economy should be centrally planned doesn't mean that there's zero ideological opposition or fundamental conflicts.

Dave92f1

I really don't think the CCP is Marxist in any meaningful way, and hasn't been since Deng.

They're "with Chinese characteristics", which seems to me to just mean (ideologically; of course there's corruption, as in all systems) "government should work for the benefit of the people".

The CCP is definitely authoritarian, but not totalitarian. Nor do they seem very interested in exporting an ideology.

I think people read too much into the name "CCP". It has "communist" in the name, but they haven't really meant it for 40 years.

Mark

I think authoritarian states feel inherently threatened by democratic states, which provide an example that encourages internal dissent. Thus they in turn seek to undermine the democratic states, so mutual conflict is hard to avoid. For example, China has to eradicate Taiwan's democracy (which is especially threatening because it's the same Chinese people), and Russia has to eradicate democracy across the former USSR, and ideally across Europe too.

Vince

That “should want” is doing a lot of work there. Ask Vietnam or the Philippines what they think of Chinese respect for the freedom of the seas.

Dave92f1

Fair points. China has become a regional bully. Do you think that's what really drives the vehement anti-China attitudes in the US? (Even from The Zvi, who I'd normally expect to be relatively immune?)

Vince

I mean, fair, no, most Americans have no idea about anything going on in the South China Sea. I mentioned to a friend the collision between a Chinese navy vessel and a Chinese coast guard vessel while they were chasing a Philippine coast guard ship a few months ago, and he was shocked by the very existence of that kind of tension, much less the specific incident.

But I do think it contributes to an ambient feeling that China is a malignant actor, both among people who are paying attention and by slow osmosis among those who aren’t. The feeling, I think, of people like Zvi is that China is constrained only by capability, not by desire, and therefore the best way to prevent bad outcomes is to not hand them the capabilities.

Vince

And now there’s another collision, two days after I posted the above, haha

gregvp

Disliking Sonnet 4.5 is an indicator of cluster B personality disorder.

bakkot

Surprised to not see a mention of TRM (Tiny Recursion Model), my candidate for plausibly the most important AI news of the month. https://x.com/jm_alexia/status/1975560628657164426

"7M parameters neural network that obtains 45% on ARC-AGI-1 and 8% on ARC-AGI-2"

Commentary on X suggests it's legit, although it's not yet clear whether the technique generalizes beyond the ARC-AGI setting. Even if it doesn't, 7M parameters for this result is extremely impressive!

jmtpr

I'm glad to see you writing at length about the successionists. I think you would be more effective if you directly outlined some of your counterarguments. I am not a successionist, but neither have I read the complete works of Yudkowsky, and when you or he mention things like "all value in the universe" I actually have no idea what, specifically, you are talking about.

I would be very interested to read some of the metaethical essays that flesh out your position. You can imagine me as someone who doesn't have kids, and is ambivalent about the moral valence of death. I don't think that makes me ontologically evil. At worst that makes me depressed, and at any rate there are a lot of people like me.

Jonathan Woodward

I imagine the idea is that there are things that make humans and possibly animals or aliens morally valuable - curiosity, creativity, love, art, etc. Some AIs might also have some of those traits, and be valuable, but it's also possible that life on Earth could get wiped out by a paperclip maximizer that just makes paperclips and that's the end.

Garrett MacDonald

Michael Huemer is the most readable philosopher I've found. Knowledge, Reality, and Value is a great phil101 textbook

Mark (Oct 15, edited)

It is easy to imagine that future AIs which replace us might not be conscious, have emotions, and so on. If so, and they wipe us out but also establish impressive looking space colonies or whatever, of what value is that? Why is it better to have a universe of server farms rather than a universe of rocky spheres, if neither one contains any conscious feeling beings?

Shane P

Maybe we should call successionists lotus-eaters or experience-machine-salesmen. (Are they mostly male?)

Jeffrey Soreff

"this is inevitable (and therefore good or not worth trying to stop)"

I don't believe the "good" part (people disagree radically on ethics, of course), but (nearly) "inevitable" is pretty plausible.

While I'm not a technological determinist, enough of the steps towards artificial neural nets were nearly simultaneously invented by multiple groups that near-inevitability is a reasonable conclusion.

Personally, I simply want to _see_ AGI, to see the endgame play out.

In any event, I'm on the sidelines, watching us traverse the path towards AGI (and possibly ASI), running my benchmark-ette and watching others' benchmarks gradually saturate.

So be it.

Sinity

About Tinder for the job market: I remembered that I wrote down an idea for that. Looking around its context, there are notes on how drugs affect cognition that compare things to GPT-3, so it was written a pretty long time ago.

> ### From weed_thoughts.md

> dating (partner-seeking!) <=> job-seeking isomorphism

> I mean, the situation is the same, so the app could function as any dating app

> luring by lowered transaction costs (effort finding a partner), thanks to scalability...

> intermediary, same ways to exploit

> non-exploitative option - increasing mutual info and making filtering/routing more efficient, making it more valuable

> IQ tests, machine learning (likelihood of a successful match)

> employer/employee <=> female/male (or the other way around?)

> contract/job <=> ONS/long-term

I love how openly evil my description was. I asked GPT-5 Pro for its opinion on the coherency of this proposal, and it said it's naive on the role split because gender roles vary by culture. Lol.

> That bakes in gendered asymmetries that vary by subculture and orientation and are not necessary for the model. The right mapping is “proposer” vs “chooser” (or “demand‑constrained” vs “supply‑constrained” side), which can flip by context. Don’t smuggle gender into structure.

It also said exploitation is "platform failure". Lol.

> > intermediary, same ways to exploit

> Rewrite/add: “Platform risks: rent extraction, engagement optimization over match quality, adverse selection, dark patterns, data lock‑in.”
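For what it's worth, the "proposer vs chooser" framing it suggests is the standard two-sided matching setup, and the classic non-exploitative router is deferred acceptance (Gale-Shapley). A minimal sketch, with all job and person names hypothetical:

```python
# Sketch: "proposer vs chooser" two-sided matching via deferred acceptance
# (Gale-Shapley). Either side can be the proposer; that's a design choice,
# not something gendered. Preference data below is made up for illustration.
def deferred_acceptance(proposer_prefs, chooser_prefs):
    """Each prefs dict maps a member to a full ranking of the other side, best first."""
    rank = {c: {p: i for i, p in enumerate(prefs)} for c, prefs in chooser_prefs.items()}
    free = list(proposer_prefs)               # proposers without a tentative match
    next_pick = {p: 0 for p in proposer_prefs}
    match = {}                                # chooser -> proposer
    while free:
        p = free.pop()
        c = proposer_prefs[p][next_pick[p]]   # best chooser p hasn't tried yet
        next_pick[p] += 1
        if c not in match:
            match[c] = p                      # chooser tentatively accepts
        elif rank[c][p] < rank[c][match[c]]:
            free.append(match[c])             # chooser trades up; old proposer freed
            match[c] = p
        else:
            free.append(p)                    # rejected; p tries its next choice
    return {p: c for c, p in match.items()}

jobs = {"dev": ["alice", "bob"], "ops": ["alice", "bob"]}
people = {"alice": ["dev", "ops"], "bob": ["dev", "ops"]}
print(deferred_acceptance(jobs, people))      # proposer-optimal stable matching
```

The resulting matching is stable (no job/person pair would both rather dump their matches for each other), and it favors whichever side proposes, which is exactly the asymmetry GPT-5 was flagging.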

Coagulopath

> Ask GPT-5 Thinking to find errors in Wikipedia pages, and almost always it will find one that will check out, often quite a serious one.

While it's clearly a good idea to have an LLM fact-check text, most of his examples either have replies arguing that GPT-5 is wrong, or are nitpicks (GPT-5 thinks "China blocks access to Wikipedia" is wrong because ackchually they only blocked access to Chinese-language Wikipedia.)

Gerald Monroe

AI pause advocates and successionists both feel like two sides of the same coin.

Both claim to care not about themselves but about their children. AI pause advocates are perfectly fine with themselves dying and their children dying of aging, knowing they will feel a sense of smug satisfaction that humanity will continue in its struggles, while they personally struggle to breathe from some illness medicine is helpless to treat.

Successionists are also apparently just fine with getting killed by mite-sized weapons or herded into camps or whatever the fate would be, smug in knowing their AI successor children have far superior intelligence and will continue on.

I think what bothers me is neither plans to be able to observe their desired future. Whatever actually happens they will personally be dead and unable to witness it or influence events.

(Well, kind of: AI pause advocates hope to fail but instill just enough caution that humans survive and they can go get a shot of a Chinese-ASI-developed drug combination that stops aging. And AI successionists hope not to succeed too much, so that advanced ASI just isn't that smart and can develop the cures for aging but not kill everyone.)

Mark

AI pause is not AI stop. A lot of AI pause advocates just want to pause for, say, 5 or 10 years, until enough money has been invested in safety research that safe AGI can be built.

Gerald Monroe

Some pause advocates have proposed 1000 year delays.
