24 Comments
Kevin M.:

The quote from Louis Arge from X is not right--it looks like you copied the previous tweet twice.

Alex Scorer:

More than that, the quoted tweets are completely rewritten in a formal style - I'm assuming an AI rewrite but I didn't think Zvi was using it here for that sort of thing?

Jamie Fisher:

(Zvi, please read 🙏)

(I just wanna quickly reiterate a point in case I don't have time to post a longer comment)

(and apologies if this is slightly off-topic from the current post)

It's not that the "General Public" isn't informed... it's that the General Public, imo, is informed by OTHER experts.

An exact quote from a member of the 'uninformed' public to me:

"""

Hey so I saw your post in sneerclub.

2:00 AM

I don't want you to be alarmed. Yud's arguments for foom scenario (the silly name for exponential takeoff) in the Hanson-Yud debates was nothing more than quoting a book about nuclear weapons, then writing down undergrad first day diff equations

"""

Jamie Fisher:

other headlines:

"The AI Doomers Are Losing the Argument"

"'AI will kill everyone' is not an argument. It's a worldview."

The people who write these things aren't total idiots. They're not totally uninformed. They're totally confused.

And it's because the so-called experts on AI are displaying serious division on the topic. So why not try to PERSUADE THE OTHER EXPERTS? Why not try to UNIFY THE INTELLECTUAL COMMUNITY? The Climate Change Scientists have been unified on *their* topic for decades... and they *still* struggle to advocate for sane policy. How the hell are the AI-Safety people going to advocate for safety when others in the community are saying different things?

Mo Diddly:

I dare say nearly all Zvi’s AI posts are dedicated to trying to convince other experts. It’s hard to do! People get pretty entrenched, especially when the stakes feel high.

Jamie Fisher:

> I dare say nearly all Zvi’s AI posts are dedicated to trying to convince other experts

Are they though? I feel like his posts are more "news coverage and policy advocacy", which I feel is useful for AI Experts OR General Public... but only if they *do not already have deep disagreements*, philosophical-or-scientific, about AI-risk.

i.e. I don't think that Zvi gets into the deeper weeds of the AI-risk issue. (just my surface impression / I don't always read his columns top-to-bottom). But I'm not blaming him. I don't think that "deep persuasion of informed opponents" is the purpose of this blog. (which is fine, since a "climate change policy" blog will probably not spend every article debunking "sun-spots and volcano" myths)

But I think some people NEED to work on the deep-persuasion / deep-debating / DEEP-DEBUNKING part. I think they need to talk/debate/have-coffee *directly* with their staunchest opponents.

I'm not an expert. But I have the vantage point of someone who reads a lot of angles on this issue. And there are a *lot* of angles. I don't think the general public is apathetic about AI. Not at all! I think anyone with an internet connection is extremely curious!

BUT THERE'S TOO MANY "EXPERTS". THERE'S TOO MANY "VIEWS". WE NEED TOP-LEVEL-EXPERT UNITY.

Methos5000:

The thing is, it's a lot easier to get consensus from science when the experiments can be run and show that climate change is real. And as you note, that unity doesn't get you policy, because policy is at least as much about feelings, whether it's people crying about coal jobs, or about the view being disrupted by offshore wind turbines, or debates over nuclear power with its waste and potential for accidents.

AI doesn't have that concrete evidence. You can't run an experiment to show what happens with more AI or less AI in the same way you can model what happens with more CO2 in the air. You can have thought experiments, or models based on that, but it would all be guesses, highly informed by what the model maker wanted the outcome to be. So it's just the feeling part and when you have one group swearing we're all gonna die and they are the heroes leading the charge to save us all and one group swearing we're all gonna sit around because AI will do all the work (and the people that make the AI will become billionaires), there's not a lot of room for compromise.

Jamie Fisher:

> So it's just the feeling part and when you have one group swearing we're all gonna die and they are the heroes leading the charge to save us all and one group swearing we're all gonna sit around because AI will do all the work (and the people that make the AI will become billionaires)

Those "groups" are really *not* the groups who seem to write nor influence the mainstream articles.

Methos5000:

I'm not sure I agree (at least with what I thought you were saying previously, that the intellectual community can come to agreement the way it has on the reality of climate change). You can't really unify the intellectual community when you've got large groups of AI experts with vastly different starting points on AI. On climate change, in contrast, the experts are functionally all in agreement. Unless you're regarding the intellectual community as being limited to the AI-doom side, in which case that's not really representing the disparity of views.

NullityNine:

There are a bunch of reviews on LW that lie somewhere in between Yudkowsky's view and the popular articles. And people who agree with Yudkowsky are trying to persuade them! Even convincing those who half-agree is really hard. The reason advocating for safety could be possible is that most members of the public already dislike AI, which I assume is why Yudkowsky thinks his book could help.

Jamie Fisher:

Why don't Yud and Hanson have a new debate?

gregvp:

When the AGI comes, economists will be first against the wall - er, on the breadline. It's so easy to predict what they are going to say about anything.

Jackson Pemberton:

If AGI comes, everyone will be against the wall. Then there will be no one. If there were future human historians, they would determine that in the end, despite all the warnings, wealth addiction won out.

gregvp:

Ah, but economists will be first! Delight is delight even if fleeting.

Katalina Hernández:

Thank you for mentioning my post! I wrote it to prevent people from spending considerable time and effort on AGI ban strategies without considering how in-house counsel at labs could work around them. Past EU regulation (like the GDPR), and how it's constantly gamed, is a good example of this. Glad this is reaching a wider audience!

Jamie Fisher:

Dear Zvi: In your reactions to IABIED, you seem to have missed a *major* hostile review.

https://www.theatlantic.com/books/archive/2025/09/what-ais-doomers-and-utopians-have-in-common/684270/

P.S. To continue my rant-series, what are members of the AI-Risk Community and Yudkowsky doing to address the concerns, disagreements, and counter-arguments of such influential people (other than sneering-back at them)? Has Yudkowsky, The AI Future Project, or anyone else made plans to engage with such people one-on-one?

Jonathan Woodward:

What an awful review. It takes the premise, "If Anyone Builds It, Everyone Dies", and mostly ignores that statement in order to (in effect) argue that, hah hah, no one will build it. Okay, great, perhaps not, but that's not the point of the book, and it hasn't proven the book's central thesis to be incorrect at all.

hwold:

> I would note that this result only holds while humans and compute are not competing for resources, as in where supporting humans does not reduce available compute

So, in that scenario, how does the human "stay in the economy," in the sense of communicating with prospective clients / bosses / partners?

I presume a laptop / smartphone and the internet? But how is all that silicon not a waste of resources when it could be "moar dakka for the AI"? Do we just magically assume that "spending silicon to keep humans connected to the AI economy" is a good use of capital?

As always, the blind spots of economists are depressing.

avalancheGenesis:

If at first your smart peripheral does not succeed, add another one? Wouldn't touch anything from Meta anyway, and no AR/VR seems to miss a bunch of the potential utility, but...just on form factor, I already don't wear glasses, so adding a wristband on top of that is ugh. Trivial Inconveniences! What'd be potentially genuinely interesting is adding enough phone-like features that one could possibly go without a phone and not miss much...replacing a ubiquitous device is obviously a huge market to "disrupt". Even cooler if they serve as actual, you know, glasses, given ongoing trends towards myopia. Two ubiquitous devices with one stone!

Any time there's a justification along the lines of "this is our last moonshot to change the world, so it's worth dump trucks' worth of money," it makes me think of Scott's old "Dark Money In Almonds" post... that is, the amount of money directly spent influencing politics is trivial compared to corporate lobbying, which itself is trivial next to these AI capex numbers. Obviously there are some binding constraints like campaign finance law; the money is not fully fungible. Still... the fraction of resources spent on moving the mundane levers of power always seems so pitiful by comparison. One gets a distinct found-hard-and-not-tried/world-of-atoms feeling of frustration.

Paulin:

"I notice the lack of AR/VR on that list which seems like it is reserved for a different glasses line for reasons I don’t understand"

First, the field of view is too small.

Also, to do AR you need to understand what part of the display is overlaid on which objects, and thus map the world in 3D.

It would be much more computationally intensive than just a small 2D display (on one eye).

Anthony Bailey:

> real enterprise repos involve:

> • Multi-file edits

> • 100+ lines changed on average

> • Complex dependencies across large codebases

Boring old coding nerd: it's true, but only because larger real-world problems have value correlated with greater complexity, so they are typically harder to understand, model, and solve.

These growing costs are always a sign of not having worked hard enough to fit the software system to what you really need it to do in practice. Enormous codebases *can* scale analogously to smaller ones. You "just" have to decouple, represent the key concepts you need to play with, not take on technical debt you can't pay down, etc.

The fact that more value will become possible as coding models get even better relative to humans at handling this greater complexity is just as important as the fact that they can solve lesser problems cheaply.
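To make the decoupling point concrete, here is a minimal sketch of the kind of seam being described, with hypothetical names and Python chosen purely for illustration: the business logic depends on an explicit interface, so swapping what sits behind it stays a one-file change rather than a 100-line, multi-file edit.

```python
from typing import Protocol


class PaymentGateway(Protocol):
    """The key concept, represented explicitly so callers never touch vendor details."""
    def charge(self, amount_cents: int, token: str) -> bool: ...


class StripeGateway:
    """One concrete adapter; swapping payment vendors touches only this class."""
    def charge(self, amount_cents: int, token: str) -> bool:
        # A real implementation would call the vendor SDK here.
        return True


def checkout(cart_total_cents: int, token: str, gateway: PaymentGateway) -> str:
    """Business logic depends on the Protocol, not the vendor, so the dependency
    stays decoupled and the change surface stays small."""
    return "paid" if gateway.charge(cart_total_cents, token) else "declined"


print(checkout(1999, "tok_demo", StripeGateway()))  # -> paid
```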

Hollis Robbins:

Thanks for linking to my piece but you misunderstand my critique: I am saying that FIRE does not take into consideration that many young people now engage with new and controversial points of view with their AI chatbot. Given cameras everywhere, it is "safer" to ask questions privately rather than publicly, where they might get recorded and publicized, turning the simple asking of questions into a political act.

hnau:

> He did not expect, before LLMs, that we would be so lucky as to see such blatant alignment failures within distribution, in normal usage, while AIs were so underpowered.

You say luck, I say resolution of model uncertainty in a direction that should have caused more of a Bayesian update. In this and many other things about LLMs.
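As an illustrative toy of the update being gestured at (every number below is invented for the example, not taken from the post): treat "blatant alignment failures showing up in-distribution, in normal usage, in underpowered models" as evidence E, and the question is just how far the likelihood ratio moves your prior.

```python
# Toy Bayesian update; all numbers are hypothetical, chosen only to show the mechanics.
prior = 0.3             # P(H): prior credence that serious misalignment shows up by default
p_e_given_h = 0.6       # P(E | H): chance of seeing blatant in-distribution failures if H holds
p_e_given_not_h = 0.1   # P(E | not H): chance of seeing them anyway if H is false

posterior = (p_e_given_h * prior) / (
    p_e_given_h * prior + p_e_given_not_h * (1 - prior)
)
print(round(posterior, 2))  # 0.72 -- a 6:1 likelihood ratio moves 0.3 to ~0.7, a sizable update
```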
