The fact that Jezos does not think the state should exist even to safeguard our existence is bewildering. As always, Connor is the sane, reasonable voice.


I enjoyed the debate once it got into a normal cadence, but was annoyed that it started with a debate over whether government intervention is ever productive or legitimate. I would rather they had skipped straight to the AI specifics and taken it for granted that collective action problems exist. And while I enjoyed listening to Beff's exposition of his worldview, there's an inconsistency in his disdain for government: governments are also kinds of spontaneous orders that exist to economize on transaction costs and minimize variational free energy. There's something tragic in believing both in greater decentralization and that morality is derivative of physics, and then having the physics strongly favor hierarchical states with monopolies on violence.

Nevertheless, I think Beff won the debate, first on rhetoric and tone, because Connor came off as pedantic and somewhat crazed, and second on substance, because once he and Connor agreed that we're likely past the point of no return, it became a battle of heuristics, which played to Beff's case, while Connor (rather than drilling into ASI-specific risks) made it about his ambivalence toward technological progress per se.


Firstly, thank you for doing what you do Zvi! Consistently lucid analysis, as always. The world is better off for your efforts, IMO.

Continuing along that theme, Connor makes a really powerful point here that I'd like to just highlight. Starting at 2:27:07:

"I'm making a strong claim that the amount of optimization pressure that has gone from like people like you and me, you know, smart, educated, young, energetic, you know, tech guys into trying to design institutions is a fucking pittance compared to what has gone into Facebook's ad algorithm. So we don't even know if it's hard. We haven't tried."

In terms of "designing good institutions", there may be hundred-dollar bills lying around all over the place, and we simply need to look. But hardly anyone is looking.


There's a reason nobody is looking - if you're smart and ambitious, you can build an app, or start a company, or build a product at a global company, and by your own efforts, or by coordinating with other people who match you in skills, approach, and competence, you can impact hundreds of millions to billions of people.

What is the corresponding situation in politics? You have to deal with A) the masses, god help you, and B) politicians, who are largely less popular than STDs, and for good reason (it's easier to get rid of STDs). So instead of working with a handful or two of nice, competent, skilled people and affecting billions, now you have to work with gross, average, dumb people, AND professional liars that literally nobody likes. And to what end? Probably nothing! You can easily spend *years,* your entire *career,* trying to change things for the better, and you might affect a handful of laws touching a few million people (at best) over 40 years.

The inertial barriers are so high, and the systemic inefficiencies and tripwires so ubiquitous, that you literally can't get anything done in politics, and you have to somehow surround yourself with, herd, and work with the worst of the worst in humanity for decades to get that "basically nothing" accomplished. Why should talented, ambitious people immolate their entire lives on that altar for zero good done?

Maybe you can see why we choose to spend our time tackling technical problems rather than trying to do anything politically?


Yes, of course. If you are a fresh grad, young and smart, and want to get work done, there are companies competing for your labor that will make you a millionaire in a few short years, and you get to work with other smart people in fun, happy environments. Free and infinite catered food! On-site gyms! Lots of optional fun work dos with your fun coworkers. The job might be optimizing Facebook's ad system or fine-tuning a trading bot, so you always feel awkward about telling others what you do for a living. But it's all around pretty great.

On the other hand, you have people like Matt Cutts trying to recruit you to work at the US Digital Service. Want to help fix the VA hospital system? How about working on the Labor Department's ancient mainframes? Maximum pay is... $183,500, which is the most anyone in the US government can get paid, so as a fresh grad with no experience you likely won't get that. Glassdoor says average pay is $74k. Doesn't sound very compelling.

It's even worse outside of technology, I'm sure. You could work at a nonprofit policy think tank that's super dysfunctional and has lots of political infighting of its own, and spend years upon years trying to move the needle on antitrust regulation, only for George W. Bush to be elected and for the FTC to ignore you for the next 8 years because they have no political buy-in from the White House. You burn out and leave defeated with nothing to show for it.

If I had only one life to live, I'd probably go with Facebook. On the other hand, you can do... all of these things? You can get the job at Facebook and put a few million in the bank. Then you can try working at a data science lab that produces policy, or even the US Digital Service.

Or even take the 80,000 Hours questionnaire: https://80000hours.org/problem-quiz/ (don't tell them you only have 60,000 hours left)

(Re: the AI issue, this might even be a false dichotomy. AI interpretability seems policy-adjacent, and working on interpretability for a name-brand AI lab almost certainly pays well and comes with similar tech company perks.)


I think these anti-government comments are from people who have mostly never worked in government. It is absurd to blithely dismiss what can be done in government - various levels exist, with different pluses and minuses - and conclude that working at Facebook is somehow clearly a better option. It is, if salary is your number one concern.

This bizarre debate - why is one guy using 3 different personas? - speaks to the societal challenges we have in trying to connect tech and futurism and AI and e/acc with things happening to people in the real world. There are lots of people in government trying to help lots of people who desperately need their help. Sometimes they actually succeed.

I’m glad I read this newsletter - which is a foreign language to me but therefore enlightening - but I think this area of inquiry could use a little better connection to reality and real people.


> Sure sounds once again like a strong argument for not building it! Jezos is saying we have no path to not building it. Assumes facts not in evidence.

I agree with Jezos, Tyler Cowen, et al., who think that we have no path to stop building ever-more-advanced computer systems.

Dave Chappelle once told a joke, framing the question “Did Michael Jackson really do it [abuse children]?” with a question of his own: “If having sex with adult women were illegal, how long would you stay out of jail?”

The incentives for building computers that work better (at, e.g., writing code, developing medicines, managing large-scale cybernetic systems) are too strong. Humans by and large can’t control our urges to have sex; how will we control the urge to, e.g., build machines that build machines that create abundant energy?

I think the major crux between market-competition-oriented and central-planning-oriented approaches to managing the risks of next-gen computing systems comes down to how much power one ascribes to incentives.

The incentives towards improving AI are, in my eyes, strong enough to outweigh the risks, in perception if not in reality. As long as the risk is hypothetical and the incentives are real, it’s implausible to me that international cooperation without defections to stop AI progress is a stable equilibrium.

Speaking of hypotheticals, Leahy’s use of hypotheticals stuck out to me. The e/accs online have lampooned him for it, but safety-aligned folks find the hypotheticals a perfectly valid rhetorical strategy.

Is one obliged to consider all proposed hypotheticals? If someone asked me, “If [unphysical scenario] occurred, what would you do?”, I would answer, “I don’t know what I’d do, because I’d first have to update my whole world model in light of this reality-breaking experience, and, as I remember from logic 101, everything is entailed by false premises.”
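(That logic-101 principle, ex falso quodlibet, is easy to pin down formally; here is a minimal sketch in Lean 4, just to make the claim precise:)

```lean
-- Ex falso quodlibet: from a false premise, any proposition P follows.
example (P : Prop) (h : False) : P :=
  False.elim h
```

So a hypothetical whose premise is impossible vacuously entails every conclusion, which is why the plausibility of the premise matters.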


Although Tyler and BBJ may claim that it's inevitable, they certainly also admit that national and global regulatory regimes, driven by fear, needlessly delayed the world from achieving a bright nuclear-powered future. We are not totally helpless in the face of laws of nature playing out.

There is absolutely reason to believe we can choose values over physics and successfully pump the brakes. It's even better if we also design good policy while we're at it.


Isn't nuclear a bad model to replicate? After decades of fearmongering about risk, we got tons of weapons but not much electricity.

I don't want a world in which military AI gets accelerated but civilian AI gets paused.


I didn't say anything about whether intervening was good or bad. I simply said we have agency in the face of these things that are supposedly as inevitable as sex.


Nuclear is an example of military tech getting accelerated while civilian tech gets banned, not an example of a whole category of tech getting banned.

Is that what proponents of compute limits or other AI regulation want? Limits for projects that aim to improve people's lives but acceleration for projects that aim to kill people with optimal efficiency?


No, we want to live. I want my children to live.

Many of the projects in the hands of random antisocials are just as harmful as, if not more harmful than, state control of violence (which has been a basis of civilization).


This is just your opinion. But history shows that states are by far deadlier than "random antisocials".

I understand the call to pause AI development until safety/alignment advances. But I don't get the call to, as I see it, entrust the world's competing states, militaries, and defense contractors with misaligned AGI, but not the people.

Trusting states, militaries, and defense contractors with misaligned AGI mitigates none of the risks of AGI (bioweapons, killer nanotech, reengineering the atmosphere to be inhospitable to humans, emergent cooperation of separate AGI systems, etc.) while preventing most of the benefits.

Indeed, this amplifies the risks, because states, militaries, and defense contractors will literally develop superweapons that could destroy the world in order to get leverage over their adversaries.


We can always ramp up later; caution is good. But that misses the point, and it shows that regulation and social values do work and have power.


Do you envision military AI getting paused, or just civilian AI? I don't see any pause of military AI on the table; the incentives to defect are too strong. If it's just a pause of civilian AI, no thanks.


I fail to see how limiting nuclear weapons to a few governments made us worse off. This is one of those things where any limits are good, as the runaway result of human extinction is plainly unacceptable.


Limiting nuclear weapons is fine and good. But when it comes at the cost of limiting nuclear power, increasing reliance on fossil fuels, and accelerating climate change, then that cost needs to be acknowledged.

But anyway, if militaries have advanced AI — AI so intelligent it poses an existential risk — then how does this keep humanity safe? How is military AI — machines designed to kill or dominate populations — safer than civilian AI — machines designed to make products and services that people value?

With nukes, the point is to have them but not use them. But with advanced AI in state hands, we should expect it to be used on people, both foreign and domestic.


I think well-framed hypotheticals are worthwhile. First, set out an argument and evaluate it for validity. If validity is more or less established, people can then argue about soundness: the probability that the premises are true, and what sorts of observations might convince one party or the other to update their probabilities and arguments.


When it comes to the ABC flavors of BBJ... I have this crazy dream that the latest, most extreme journeys beyond the pale are so that ABBJ might now enter the relevant twitter.com conversations as @GillVerd, a separate voice alongside @BasedBeffJezos. He could keep both his joke and the e/acc memetic sway it offers, yet add coded value to the conversation.


To be honest, as with many in SV, it probably just comes down to what drugs, and how much of them, he has taken on any given day.

This is also a good example of why we shouldn't leave the fate of humanity in the hands of such people.


I am now wondering whether Jezos actually believes his own hype about AI. If what he actually believes is that AI won't be able to do very much - not even design an F-16 for you - and the only harms are mundane ones, like "people will use it to generate pictures of Taylor Swift naked," then there's a reasonable argument that the downside of government regulation of AI is not justified by the relatively minor risk.


I can just about imagine an AI that can generate a design for a fighter jet without being dangerous ... all it does is apply well-known engineering techniques to well-specified engineering problems (and it is not agentic in any meaningful way).


Also: we should by now be familiar with the idea of a Twitter account that posts crazy provocative stuff. It is not surprising that Jezos scales back the crazy when he's in a formal debate.

Some sort of comparison with Alex Jones and Infowars might be in order. (Gay frogs, crisis actors, and, by the way, buy these pills I am advertising.)


"Someone who complains that it is not scientifically possible is fundamentally mistaken about the nature of the entertainment that is on offer" --- originally said about space opera, such as Star Wars, but could equally be said about paranoid conspiracy theories, or accelationist rhetoric.


An only tangentially related question: I know it's hard to estimate timelines, but I sense from statements across many blog posts that you consider AGI in the next 3-4 years reasonably unlikely; otherwise you'd probably have a higher p(doom), right? Or is it more that, if that's the case, we are powerless anyway, so it's not worth dwelling on too much? I think Connor had very short timelines in early 2023, but I don't know if that's still the case. The OpenAI superalignment folks seem to want to prepare for very short timelines, but Sam talks as if this will be a much more gradual process.
