31 Comments
Seth Williams:

If manufacturing is the bottleneck, are we sure it's actually that bad for Nvidia to sell chips to China? I'm much less interested in 'winning the AGI race' and more interested in humanity surviving.

Some reasons it could be good:

- If China is slightly behind rather than incredibly behind, they will have leverage and will be genuinely interested in pursuing AI safety

- If China is ahead, then their AI building will be more centralized, rather than 100 groups racing against each other (probably)

- Longer timelines are (obviously?) good

Seth Williams:

Maybe a couple of years ago I would have thought there's much less appetite for safety in China than here, but I'm not sure anymore.

Kevin:

In my opinion it's a good idea for Nvidia to sell H20 chips to China, because the US and China getting along peacefully is very important, and when the countries peacefully work together to achieve important goals, it creates a strong incentive for future peace. AFAICT this is essentially the Ben Thompson take as well.

Skull:

China is about to collapse into an entity in which peace and cooperation will be a distant memory. We'll be lucky if they merely collapse into feudal states rather than Mad Max. But there will probably still be a few rich enclaves, even defensible cities, where high-level research is being done. You gotta hope. We've never seen a state's population collapse anything like this.

Greg:

“Unforced error” suggests the actor understands the game. I am unconvinced.

Wow:

Obviously the woke AI “algorithmic justice” types are out of power now, but it’s worth noting that everything that Musk is doing — spawning MechaHitler, manipulating incels with Ani, getting DoD contracts, and resigning himself to extinctionism — was well-predicted by the wokes. Including, and especially, the fact that he’s actually quite dumb at many non-technical things; the multi-dimensionality of intelligence is something that the anti-woke crowd tends to ignore and even reject.

And on the subject of Jensen and China, there’s a very strong argument in favor of his friendly stance. When compared to the adversarial “beat China” stance of the American labs (with DoD contracts), it’s highly probable that Jensen and others who are not ratcheting up the specter of World War III are reducing existential risk.

If American labs are serious about racing to AGI, and about using that AGI to destroy other nations’ sovereignty, they are increasing the risk of nuclear war as a countermeasure/deterrent. If American AGIs try to eliminate the Chinese AGI projects, what stops China from firing hypersonic missiles at American datacenters?

Mo Diddly:

I fully expect bombings of data centers in the next 5-10 years.

jmtpr:

Zvi, please notice that Bernie is asking the right questions, and that your worst disagreements with him are trivial compared to the stakes; whereas the current administration and those in its orbit are lying about their AI policy, and you know this.

There is a side you can make a deal with, and it's not the party in the White House. Americans hate the direction AI is taking. Americans love Bernie. Be savvy about politics for once and notice this opportunity!

Mo Diddly:

This is unsustainable. Both parties need to take AI seriously or we’re all doomed. Given that, it is best to keep this from becoming a partisan issue insofar as we can.

Jeffrey Soreff:

"it is best to veer away from this as a partisan issue insomuch as we can"

Regrettably, that won't be very much. Consider that _vaccines_ have become a partisan issue over the last half decade.

jpr:

AI is inextricably a political issue -- it concerns power conflicts among groups of people, it is of great interest to governments, etc.

Maybe it's my lack of imagination, but I cannot think of any nonpartisan institutions in the United States that are effective at creating political change. Conversely, there are many recent examples of partisans creating political change.

So I don't understand the rational basis of this commitment to nonpartisanship. Partisans get a lot more done.

jpr:

Or more succinctly -- if your plan relies on uniting America's political parties, it is a bad plan.

Mo Diddly:

I definitely agree with you it is a massively uphill task, but there are reasons to be hopeful. Americans do, for better or for worse, tend to rally together around issues of national security. The problem is that currently most people don’t understand the threat, which is why I’m so grateful for people like Zvi, and why rhetorical innovation is so critical.

Kevin:

If we really do get AGI, in the sense of AI that is smarter than all humans about everything, I don't think we are currently able to predict what it will lead to. It doesn't seem like "alignment" is the right framework, because we aren't getting "foom" or "winner take all" dynamics around AI. So it seems likely that there will be many of them that are not aligned with each other or with any particular group of humans.

I wouldn't say "the odds are against us" in this scenario, it's more that I don't see that we have developed a very believable way of modeling this scenario. Instinctively I think there are good odds that humanity would do well through this transition, because humans have done well through many extreme transitions before, but I could understand other people having different priors here, and it's certainly not a guarantee of success.

Anthony Bailey:

I found two parts of the argument strange.

> It doesn't seem like "alignment" is the right framework, because we aren't getting "foom" or "winner take all" dynamics around AI.

"There could be a fooming singleton" is only one alignment concern.

And "the pivotal transition has not yet happened, therefore it probably won't happen" does not seem sound.

Acceleration seems evident?

> "humans have done well through many extreme transitions before"

Which is the comparable one where humans are no longer the most capable species?

> "I don't think we are currently able to predict what it will lead to."

That I more strongly endorse.

Sean:

I find it hilarious that politely asking Claude to talk about the passage in this post about your opinions of the Opus alignment failure and terminal preferences caused Claude to terminate my chat for violating its usage policy. Must have hit a nerve!

Jeffrey Soreff:

<morbidSnark>

"Cate Hall: Genuine question: Why is xAI hyper-focused on creating waking nightmares of products?"

I can't answer that question, but I _can_ suggest that the code name for the Ani v2.0 project should be fentanyl-o-matic.

</morbidSnark>

Jeffrey Soreff:

"The White House seems to be buying whatever he is selling, and largely treating ‘win the AI race’ as ‘maximize Nvidia’s market share.’ This includes now selling their H20s directly to China."

Given this, and Trump's tariff policy (with "policy" interpreted _very_ generously...), maybe Trump actually thinks "beat" and "sell to" are equivalent???

Jeffrey Soreff:

"Elon Musk: Will this be bad or good for humanity? I think it'll be good. Most likely it'll be good. But I've somewhat reconciled myself to the fact that even if it wasn't gonna be good, I'd at least like to be alive to see it happen. (Followed by awkward silence)"

I realize no one else will take the same position, but I have considerable sympathy with Musk on this point. I, personally, want to _see_ AGI. I'm a dozen years older than he is, so I'm personally somewhat happier with shorter and riskier timelines than anyone else here. In any event, I have zero power over the outcome, just watching as the benchmarks advance.

Jeffrey Soreff:

Re "Even disregarding all that, even if things go well, the Vitalik’s scenario still ends in disempowerment. By construction, this is a world where AI tells humans what to think and makes all the important decisions, and so on."

If we set aside perhaps the 1000 most powerful people in the world, it is important to remember how disempowered the vast majority of us _already_ are. We depend on little slices of the expertise of thousands of people we never meet, and often don't even know (individually) exist. We cannot, and would not have the knowledge to, e.g., change the load-allocation policies of the electrical grid that supplies our homes. If one of those decision makers closes a local hospital, or decides to run a highway through our neighborhood, or starts a war, good luck stopping them.

Mostly, for most people, the incremental disempowerment from an _aligned_ AI (which alignment is, of course, the hard part: aligning with, as you said, _any_ human direction) is not really a big loss.

You _do_ have a good point in https://thezvi.substack.com/p/the-risk-of-gradual-disempowerment that AIs don't have an incentive to keep their data centers and other support infrastructure nontoxic and otherwise human-habitable, which _is_ an adverse change compared to today, when the 1000 rely on human agents. Still, for an _aligned_ AI, I'd expect it to be under the control of some subset of the 1000, who are still human and have an incentive to keep most of the Earth habitable.

Still, losing control to an aligned AI is not that different from the effects of division of labor. Just as Heinlein's omnicompetent hero is as fictional as Santa Claus, we _cannot_ master all of the specialties that we depend on, so we must and do delegate many decisions today to people we do not know, and tomorrow to machines we do not know. And we may, on net, _gain_ from this, just as division of labor has raised our standard of living from that of hunter-gatherers.

Scott Novak:

The new equilibrium division of labor will be: humans eat food, breathe oxygen, drink water, and require housing, other goods, and a relatively stable environment, all in aggregate worth far more than they are capable of producing, while AIs do close to 100% of valuable work, including making optimal decisions. This is different from your posited current-world scenario, in which the ~1000 humans in charge still need to trade an enormous amount with the other 8 billion humans, and need most of them to go on working, in order for those 1000 to keep and grow their standards of living and prestige.

Jeffrey Soreff:

Many Thanks! Yes, human _labor_ is going to be approximately valueless once AGI and follow-on robotics advances exceed human capabilities. If the AGIs are aligned to users' instructions, the 1000 humans at the tops of the political/social/economic pyramids might still need the other 8 billion of us for prestige, though not, as you said, for the 1000's standards of living.

avalancheGenesis:

To the extent that certain varieties of incel genuinely, truly, cannot possibly ever find Real Human Companionship (within realistic constraints; the optimal amount of effort for such high-value rewards is not infinity..."Pascal's Marriage"?), I suppose it's nice the floor gets raised somewhat for them? Of course, Orphan Ani Hall's reach won't be limited to just that fringe, believing one is incapable of (being) love(d) is halfway to making it actually so, the marginal user who could otherwise get to X base won't get the necessary reps in now, etc...so the overall equilibrium is gonna be worse. But I can't help notice that despite going about it in the most predatory and embarrassingly cringe possible way, at least AI companion bots are trying *something*. There doesn't seem to be much civilizational appetite otherwise for addressing loneliness and its discontents. Close cousin natalism gets a good amount of airtime, but as you've noted, total resource allocation to e.g. child subsidy or removing marriage penalties remains paltry compared to the scale of the problem. (And one doesn't get childbirth without, you know, relationships to start with!)

It's the same reason I get frustrated by accelerationists framing AI as our Last Best Hope for avoiding terminal decline. So much of that ruin is foot-shooting and other forms of intentional self-imposed hobbling. We could just...not do that? Coordination problems are hard, but are they really so hard that the easier alternative is to sink billions into obsoleting mankind entirely? I just don't buy it. You can really go quite far just on unlocking "mundane utility", in AI or otherwise!

Performative Bafflement:

> We could just...not do that? Coordination problems are hard, but are they really so hard that the easier alternative is to sink billions into obsoleting mankind entirely? I just don't buy it. You can really go quite far just on unlocking "mundane utility", in AI or otherwise!

True in the abstract, and I think it's important to keep pointing it out like this...but empirically, if you look around, does it REALLY feel like we can just "stop shooting ourselves in the feet" or stop self-hobbling?

I mean both political sides here. When we're given the choice of which doddering, 90-year-old half-corpse we're allowed to vote for, will *either* side actually do anything about any of our real problems?

Regulatory capture, terrible K-12 schools that cost more than everywhere else and underperform for smart and dumb kids alike, declines in high human capital fertility, housing being impossible to build anywhere people actually live, the most expensive health care system in the world, increasing polarization, more commons being burned (figuratively and literally), increasing lack of trust in our institutions and legal system?

Basically no. We've lost a lot of state capacity, and seem to lose more every day as we polarize more and lurch from side to side, with fresh hells and idiocies unleashed by either side at each lurch.

And the worst part of it is that sure, politics is less dysfunctional everywhere else in the developed world. But the US and China are the ones building the god-minds, and the US is the one with all the polarization and dysfunction, so none of the other "better run" places even matter.

Bottom line, I think yeah, coordination problems are actually that hard, at least in this context, in this time, the time when it matters most. It's unfortunate, but seems pretty self-evident.

avalancheGenesis:

I don't know - the sense I get from reading Yglesias, Scott Alexander, progress studies-affiliated people, etc. (and honestly even Zvi on non-AI topics) is that sure, big picture, The Situation Is Grim And The Odds Are Against Us...but many things do, actually, get better over time, and despite it all, Life Is Pretty Great // The Past Actually Really Sucked. Even if it's one step back for every two steps forward, we'll muddle through somehow. There's an incredible amount of ambient FUD floating around encouraging catastrophizing, from both sides within and sundry sides without. The correct response is obviously not to Pollyanna through it, since the sky actually does fall sometimes. But it does involve knowing when to take a W, to see the loaf as half full rather than a failure to secure the ever-illusory utopian full loaf. Like, hell, outspoken education critic FdB even admits that despite much rending of garments, US schools still somehow range from mediocre, to world-class excellent at the top. YIMBY continues to make progress on housing, healthcare prices (probably) subsidize the rest of the world + outcomes are middling-to-good, GDP continues to climb despite best efforts...

None of that matters if AI disempowers or kills us all though. Same with other actually-existential concerns. It's a tough needle to thread for sure - I too would love to have nanotechnology, cures for cancer and ageing, post-scarcity, whatever. And the invisible graveyard is vast! Still, better to grow the invisible one than the visible one. (In both directions: you can't have future generations if the current ones all die off!) I think if dysfunctional SF can execute a quick pivot from "synecdoche of progressive mismanagement" to "on the mend, halfway to Serious City" mostly just by electing a more moderate Board + Mayor, then that's a good sign things are not so hopeless after all. That stopping the foot-shooting has mostly not been tried and found wanting, but found hard and not tried. Which of course makes it an ironic place from which to cultivate the new silicon overbeing. Talk about filling god-shaped holes...

vectro:

Regarding the Art of the Jailbreak, I feel like it has to be true that humans have jailbreaks. How else can you explain a goddamn **personal finance journalist** getting scammed out of $50k?

https://archive.ph/2024.02.15-181657/https://www.thecut.com/article/amazon-scam-call-ftc-arrest-warrants.html

loonloozook:

I am a bit confused. Haven’t we been told before that “alignment faking” is far more pervasive?

Sergio:

Re: comments on Bernie

Is having a guaranteed minimum income to live a dignified life a human right, Zvi? Does all human life have intrinsic value or only those that can work to support themselves?

Might be time for capitalists and their sympathizers to update their answers to these questions for a future economy with near-zero human labor competitiveness.