83 Comments

I still don’t get why you think the chance of AI disaster is high in the first place. You can imagine all sorts of ways to be very intelligent that don’t fit your model of “it’s going to take over”.

Really enjoyed this post, thank you for making it!

For what it’s worth, I found the robust argument much, much more persuasive than the narrow argument, and I appreciate you spelling it out. N=1, but interesting to me, considering I read tons of AI risk content and find even the most public-facing stuff to be mostly scenario listing (which I find largely unpersuasive).

I suspect I will update my view slightly but meaningfully when I have time to digest this.

There seems to be a word or two missing in that sentence: "My approach is, I’ll respond in-line to Tyler’s post, then there is a conclusion section will summarize the disagreements"

"I notice I am still confused about ‘truly radical technological change’ when in my lifetime we went from rotary landline phones, no internet and almost no computers to a world in which most of what I and most people I know do all day involves their phones, internet and computers. How much of human history involves faster technological change than the last 50 years?"

It's useful to remember that Tyler Cowen is a professor. Until 2020, his job, and the jobs of most of the people he knew, were almost exactly the same as a professor's job a century before. Not even cars fundamentally changed the work of academics. The telegraph and reliable first-class mail have been the major changes to the job of "professor" over the last five centuries.

AI is one of the first pieces of technology that might actually replace his niche in society. It's a new feeling for him, and for the people in his bubble. But it's not a new feeling for everyone.

The evaluation of the probability of AI ruin as "not 99%+ definitely, but not ‘distant possibility’" seems entirely vibes-based. In the mid-20th century, many computer scientists believed advances in symbolic AI could plausibly result in superintelligence. If you had been in their shoes, would you have advocated for a halt to computing research?

This is a fantastic articulation of the core 'AI doom' argument! Thank you!

You are reading this situation sort of the way good chess players read a board. It involves using a spatial sense: Black has huge consolidated power on the front left of the board. White's power is overall slightly less, and is spread out in a way that makes it impossible for White to pry apart Black's formation, and also ups the chance there will be a fatal gap in White's defense of its king. I have that sort of spatial intelligence about some things, but not about things like the current situation with AI. But I recognize it when I see it in someone else, and I think your conclusions are probably right.

My spatial intuition does get activated by the parts of this that have to do with individual psychology. My guess is that we will not all die. Some of us will instead merge in some way with the ASI, and a new life form will come into being. To that being, those of us who died will be like the ants that swarmed onto a gluey plank, perishing so that later ants would have a paved surface to walk on. As for the hybrid being itself, it is so different from anything that I am able to love or hate that all I feel when I think of it is a sort of huge inner shrug.

I am old enough that I do not expect to see the story play out fully. But I am so sad when I think of my daughter, who is in her twenties and thinking of having children soon. Growing up, she managed to find her way past all kinds of dangers and delusions, and now just wants things that she and her boyfriend easily have the energy and the resources to do: Build a house out in the country, have children, work from home, take great backpacking trips with her family and friends. I picture her fear and astonished grief as things play out in some way that utterly ruins all that.

GPT-type AI is not intelligent in the paramount sense: it cannot create new knowledge. The threat it poses is overblown.

What low-regret moves should the average reader of this piece make? (It is okay if the answer is, “not much; don’t wait too long to open the good bottles of wine you have,” but it is plausible that there are discrete actions you might recommend that you currently consider under-pursued.)

I cannot abide the maligning of American cheese, the perfect burger cheese, especially when included in a list of obviously horrible things. American cheese has a place, on a burger, and it’s good.

Why is the printing press constantly referenced as the most relevant analogy here? This isn't just a step change in communication, it's a step change in evolution. The only relevant analogy is when humans formed symbolic knowledge, and with it the capacity to be universal niche constructors. The great apes would probably like a redo on letting their puny primate cousins start walking on two legs.

So let's say it's highly likely AI will do us in. And I'd say it's a certainty that articles like this, Yudkowsky's tweets & posts, etc., are going to do next to nothing to slow down AI development. Doesn't it make sense to try to slow down development by hook or by crook? I'm not talking about shooting AI company CEOs, which feels just evil and anyhow would not work. But how about some ordinary dirty politics? Don't bother trying to convince people of the actual dangers of AI. Instead, spread misinformation to turn the right against AI: for example, say it will be in charge of forced vax of everyone, will send cute drones to playgrounds to vax your kids, will enforce masking in all public places, etc. Turn the left against AI by saying the prohibition against harming people will guarantee that it prevents all abortions. Have an anti-AI lobby. Give financial support to anti-AI politicians. Bribe people. Have highly skilled anti-AI tech people get jobs working on AI and do whatever they can to subvert progress. Pressure AI company CEOs by threatening disclosure of embarrassing info.

I have never done any of this sort of thing, but it does seem to me that anyone who is convinced that AI will be our ruin and has some influence, power, and money should be trying this sort of thing. Why the hell not? If you're not at least thinking about efforts like this, is it because you don't *really* think AI poses a huge risk? Is this some doomsday role-play game we're playing? Is it that when you think about AI risk you do think the worst is likely, but most of the time you don't even think about the subject, and taking the steps I'm describing would be a lot of work and would interfere with your actual life? Is it because you think it won't work? Yeah, I get that it's not that likely to work. But it is many orders of magnitude more likely to work than Eliezer's tweets.

Hats off to you Zvi for writing important articles like this.

But I find stuff like Tyler Cowen's articles so exasperating.

You can't prove AI isn't dangerous by analogizing with other technologies: no other technology ever had the potential to be as powerful as a machine superintelligence, and none had any prospect of agency.

You can't prove that AI isn't existentially risky by appealing to extremely general axioms ("since when can *more* intelligence be a bad thing?"). Whether a machine superintelligence can be aligned is a matter of computer science. And the harm that a machine superintelligence can inflict is a matter of decision theory and related fields. These specific things need to be dealt with directly. They won't turn out not to matter just because the f-ing *printing press*, of all things, didn't radically transform the world.

If influential people are making very bad arguments, those arguments need to be argued against. But it's extremely frustrating to see them make these arguments while refusing to even engage with the actual work that the thing they're arguing against is based on. I've certainly seen many academics and other influential figures lose a lot of their shine over the past year or so based on how asininely they've dealt with the AI risk question.

I think this all boils down to a difference over induction.

Tyler and Robin are inductivists. They see the patterns of history, and have learned that every single time someone has catastrophised about the future implications of some innovation, they have been wrong, and shockingly wrong at that. Paul Ehrlich and Malthus are great examples of this.

Zvi and Eliezer are anti-inductivists in this specific situation. They think that superintelligent AGI is fundamentally different from all previous inventions, so the knowledge gleaned from observed patterns in history doesn't apply here.

I think Tyler is probably right, mostly because there's a lot of evidence for his view, not just interesting thought experiments. I worry that well-intentioned actors will kneecap AGI before we can find out who's right, or even know for sure what dangers it poses.

Radically agnostic! I really don't know anything about the issue, so I have to live in this space of having no opinion. (Which is hard for humans.) I read some of your posts and start making a list of responses, but your posts are too long, and it's like pulling teeth for me to write, so I stop.

Re: radical technological change (rtc). If you want to call cell phones rtc, I'd be fine with that. But I understand Tyler to mean no new 'big' ideas. The cell phone and all the rest are the result of big ideas from the previous 50 years (~1920-1970), when quantum mechanics and the understanding of atoms led to all of the things we are doing now. From phones and TVs it's not a big leap to everyone having a Dick Tracy wrist radio. Cell phones may cause a big change in our culture, but they are not a big change in how we can impact the world.

Re: GPT and AI. I've watched a few videos and kept up a little. AFAICT the current generation is reaching the limit of what it can do. There is some curve of 'improvement' vs. training time, and it looks like they are starting to hit a point of diminishing returns. They had to run (something like) 1000 computers for a month on the training data.
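To make the diminishing-returns intuition concrete, here is a minimal sketch in Python; the power-law form and the constants are made up purely for illustration, since the real scaling exponents are an empirical question:

```python
# Illustrative only: a hypothetical power-law scaling curve, loss ~ a * C^(-b).
# The constants a and b are invented for demonstration, not measured values.
a, b = 10.0, 0.05

def loss(compute: float) -> float:
    """Hypothetical loss as a function of training compute."""
    return a * compute ** -b

for c in (1e3, 1e6, 1e9, 1e12):
    # Each 1000x jump in compute buys a smaller absolute improvement in loss.
    print(f"compute={c:.0e}  loss={loss(c):.2f}")
```

On a curve like this, every additional order of magnitude of compute buys less absolute improvement than the one before, which is one way to formalize 'diminishing returns.'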

Anyway, none of this looks like intelligence. I'm not sure how to define intelligence, but a good start might be: intelligence is making models of the world, and making predictions from your models to test them. GPT is making no models... AFAICT. :^)
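A toy sketch of that definition in Python (the data, the linear model, and the numbers are invented purely for illustration):

```python
# Toy version of "make a model, predict, test": fit a line to observations,
# predict the next point, then score the prediction against a new observation.
observations = [(1, 2.1), (2, 3.9), (3, 6.2)]  # invented (x, y) pairs

# "Model making": fit y ~ k * x by least squares (closed form for this toy case).
k = sum(x * y for x, y in observations) / sum(x * x for x, _ in observations)

# "Prediction": what does the model say about x = 4?
predicted = k * 4

# "Testing": compare the prediction against what the world actually produces.
actual = 8.3  # hypothetical new observation
print(f"k={k:.2f}  predicted={predicted:.2f}  actual={actual}  error={abs(predicted - actual):.2f}")
```

By that standard, the question is whether anything inside GPT plays the role of k: a compressed model of the world that generates testable predictions.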

I wish you could have a conversation with Tyler, because you both seem to be using a bit of hyperbole to make your arguments. (Sorry for the bad formatting, I need to upload a better word processor.)

Can any of the folks here concerned about AI doom scenarios direct me to the best response to this article: https://www.newyorker.com/culture/annals-of-inquiry/why-computers-wont-make-themselves-smarter

I am assuming some responses have been written but I wonder where I can read them. Thank you!
