49 Comments

All I can say Zvi, I love ur blog, ur commentary and what u are doing. I wish peeps would love mine, but it's new, so... but we are all going in the same direction, just from different angles. I want to report on AI and use of such from a user's perspective, but someone needs to ride heard on the developers along the way too. good job!

Expand full comment

Zvi is unique in both being cogent and basically sane in a world where this has become increasingly weird to say: "it seems moral to not kill myself, my children and all organic life."

Pmarc has definitely achieved cult leader mania.

Expand full comment

What can we do, in the meanwhile, to try to extend human life? I have been part of PauseAI, and joined their Discord here, and worked on some alignment plans. I have also written to my congress and managed to update on it.

https://discord.gg/2XXWXvErfA

What else can we do?

Expand full comment
Oct 19, 2023·edited Oct 20, 2023

> Please speak directly into this microphone, sir. Tell the world what you think.

You know what, sure, I'll bite. A case for efilism is quite simple to formulate.

The Holocaust. The Rape of Nanking. The Cambodian genocide. The Khmer Rouge. The Great Leap Forward. The Holodomor. The Armenian genocide. Both World Wars. Triangle Shirtwaist.

And humans are far from the only ones implicated. Nature is red in tooth and claw; animals slaughter each other every moment of every day. Bacteria and viruses sicken trillions of lifeforms, plants choke each other out, the Black Death wiped out a good chunk of Europe. Nature requires death and guarantees incredible suffering to get there; nothing lives without killing something else and taking its resources.

There are good humans today, certainly. But goodness is a quality that is carefully cultivated and maintained, and it's often short-term easier to take someone else's stuff than it is to make your own. Good requires persistence; evil is trivial, evil will win in the end. Maybe we have a bit of a break today, since roughly 1950 for some of us, but the longer arc of human history will be war, disease, pestilence, murder, and starvation. We just got lucky for the time being.

AGI cannot be aligned. It is not possible for an ant to align a god. Without it, we continue the unending campaign of bloodshed and death; with it, we guarantee our quick and hopefully painless demise... and, ideally, every other living thing in the universe. Then, finally, there will be no suffering.

Expand full comment

Easily rejected by having spent a few minutes with my son.

Without life, there is also no joy. Killing everything isnt a solution to anything, obviously.

But yes, that is the kind of microphone needed for the omnicidal maniacs.

Expand full comment

Joy is finite and will eventually run out; suffering is forever.

Expand full comment

That is silly: both are just as finite or infinite. Every additional pup or living being brings more joy, as well as suffering. Who knows what is the ultimate sum, but the endless dance of existence and especially biological life is beautiful and worth preserving forever.

Expand full comment

That's fair, and I would like to specify that this omnicidal argument only applies if there's somehow a way to destroy all life instantly. If it's a slow, painful process, it becomes worse than the disease.

The argument broadly relies on the observation that the world is getting worse in every way. You are correct that joy is indeed infinite in some sense, that was a bit sloppy of me to state, but the main crux is that the suffering:joy ratio will increase more or less asymptotically forever; at least, this is how it seems to me.

We've already industrialized once and it looks like there aren't enough natural resources to do it again. Scott Alexander's conception of Moloch seems to hold a lot of predictive power, too - people seem to be taking short-term gains that will hurt them in the long term, in a way that forces everyone else to take the same bargain or be selected against.

In short, I expect the world to return, possibly within this century, to the eternal scrimmage of warlords ruling over desperate, miserable peasants, sending young men into endless wars to kill innocents, plus enough disease and pestilence to last forever. In a world where everything is the 1300s forever, isn't total and instant annihilation preferable?

Expand full comment

No, it isn't.

Trivially obvious by the fact that people lived in the 1300s, expected things to be like that, and very aggressively chose to live.

Short of edgelording, life is always better than death, without even considering that we share this world with intelligent animals and we certainly have no right to end their futures by some hackneyed definition of suffering in our limited perception.

Expand full comment

E.g. I don't know if the crow flock is going to every industrialize but I certainly don't see any good from killing them all. And I am pretty sure they agree that they should live and have plenty enough joy from living.

Nature and biological life is overall, a wonderous and glorious thing, and consciousness is mysterious enough that we could hardly say that trillions of bacteria don't have some sort of experience. Who is to say they don't have joy, etc.

And that is always worth continuing, for as long as possible.

Expand full comment

> I doubt this is close to complete but here is a noble attempt at a taxonomy of AI-risk counterarguments. You have as broad categories:

I would add 6: AI takeover won't happen in *this* century, though it will happen eventually.

For AI alignment people, it makes the issue of AGI takeover much less salient within the bounds of their own lifespan, which will be somewhat discouraging. This is also the reason why climate change advocates don't like admitting that we likely won't see too much of an impact within our lifetimes: it immediately makes their own work less valuable in the present.

I've also noticed a bias where the older a person is, the shorter is their predicted timeframe for AI doom. People want interesting developments to happen within their lifetime, even if the developments are highly negative.

Expand full comment

I haven't had much thought on the exact timelines, but the longer the timelines, the more likely we have some alternative good news such as BCIs or biosingularity, e.g. AGI in 100 years would be a very different concern than in 10 years.

But I have been concerned for my children at any rate.

Expand full comment

> 1. A predicted 30% chance of AI catastrophe

I do wonder what chance of *non-AI* catastrophe they predict for the next 100 years, as a baseline for what risks we're facing.

Expand full comment

No non-AI catastrope can lead to total extinction of all life. A direct comet hit wont even do it.

Expand full comment

The Samotsvety forecasters are using 'the death of >95% of humanity' not 'the death of 100% of all species'. For that definition of catastrophe, could you please explain how you rule out biorisk (which, in my definition, includes human-directed AI-guided bio-engineering) and nuclear risk? Or share a relevant LW-type link for each?

Expand full comment

Nuclear risk is highly unlikely to even kill all humanity, which Yud has mentioned. I would rule out biorisk, but also because even if it kills all humabs, its highly unlikely to end all species.

AI may very well do so.

Expand full comment

Why are, and how can, you so certain that AI catastrophe alone can lead to “‘total extinction of all life”, but comets, nuclear war, or biological warfare cannot?

Expand full comment

Lack of superintellligence plus replication

Expand full comment

Methinks thou lacks epistemic humility.

Even if I were willing to concede that “total extinction of all life” were *more* likely from AI than other sources.

[I don’t, but at least I find that assertion reasonable.]

Expand full comment

The invisible text on resume images stuff is a cool new twist on the other recommended practice of having background-matching text in your email signature line to mess with people using LLMs to parse emails.

Re: Chatbots and children –

I did a little writeup on my previous jokey comment where I tried to get ChatGPT(3.5) to talk out his problems with me: https://scpantera.substack.com/p/ai-and-pharmacy-3

My gut take is that any child who has regular human contact at all is going to be unsatisfied with AI chatbots as long term “friends”. For one, as long as there isn’t a reasonable expectation of physical contact for eg romantic entanglements people are likely to get frustrated. I say gut take because my regular take is that you should expect AI to do anything humans can do and better, including acting like a friend, but it’s also going to hinge on their design to the degree that we’re letting them imitate the fullest possible range of social interaction without there being patterns that break the immersion or limits on what we’re allowing people to do with that relationship. But also I feel like the sort of person who needs socialization strongly isn’t going to get their vibe diet met by only text on screen and that the sort of person who very much does not need a bunch of socialization isn’t going to need chatbot friends either.

OTOH will AI be able to abuse parasociality? Yes, of course, o b v i o u s l y.

I would agree that we probably ought to ban AI mimicking humans but I personally would have to spend more time thinking about it to have a fully fleshed out reasoning why beyond the superficial “obviously this sounds dangerous”.

Expand full comment

I watched the webinar from Justin Wolfers on ChatGPT and homework.

It was quite good, but mostly in terms of trying to get instructors to face reality. For example, emphasizing that GPT is perfectly capable of producing decent, not-identifiably-AI writing, not just in response to your initial question but also in response to followups like "write a reflection on how you used ChatGPT to write this essay". AI detection tools don't work, it's easy to prompt it out of its default voice, etc. There is no known way around this and unless your class is small enough that you can do individual verbal assessments you're going to have a bad time.

It was unfortunately somewhat lacking in actual solutions, which I guess means there really aren't any. The main suggestions were 1) add friction to using ChatGPT, for example by disabling copy/paste on your multiple choice questions, 2) come up with questions it will be not as good at, like analyzing graphs, 3) emphasize to students the value of actually learning things themselves. These were all very explicitly recognized as temporary band-aids.

Expand full comment

> A 50% decline seems like a lot given how slowly people adapt new technology, and how often LLMs fail to be a good substitute. And the timing of the first decline does not match. But 5% seems suspiciously low. If I were to be trying to program, my use of stack overflow would be down quite a lot. And they’re laying off 28% of the workforce for some reason. How to reconcile all this? Presumably Stack Overflow is doing its best to sell a bad situation. Or perhaps there are a lot of AIs pinging their website.

Assuming it's true that StackOverflow has only been getting 5% less traffic, here is another possible explanation. An increase in the number of coders - thanks to ChatGPT and co. making coding more accessible - may have largely offset the decline in the average StackExchange usage per coder.

Expand full comment

I find myself reading every single one of these newsletters so at this point it seems like something that provides enough value to me to be worth a paid sub. My question is, do you need/want to make use of the sub money or would you rather I redirect it to a donation somewhere (my preference would be towards an AI x-risk org, but up to you)?

Expand full comment
author

Thanks for asking! I do indeed benefit from the subscription money, both as money and as motivation.

Expand full comment

Done. Thank you for writing. FWIW, I filter a lot of the AI safety/x-risk content down to my workplace in a Slack channel of about 90 or so people so hopefully it makes a little bit of downstream impact as well.

Expand full comment

> "(up to a point, but quite sufficiently for this case) a universal rule of sticking up for one’s own values and interests and everyone having a perspective is a much better universal system than everyone trying to pretend that they don’t have such a perspective and that they their preferences should be ignored"

Typo alert: the second "they" should be removed. But as a supporter of "bird perspectives" in general, I have more to say on this particular, anti-bird-perspective, statement.

On the face of it, the statement would seem to be in danger of advocating for third-world conditions, conditions without impartial rule of law, free-speech guarantees, and more that I hope to write about next year. The "up to a point" appears to be much needed here.

Still, I have to admit that AI poses a challenge to the superiority of bird perspectives, in two ways. First, the concept of bird perspective, as I understood it when naming my Substack, is probably the same as that of outside view (though I still hope it isn't the same and will investigate), and with AI we genuinely have a situation where "this time is different", leading outside-view specialists like Robin Hanson astray. Secondly, if I understood that correctly, the background to the statement is that some people use a bird perspective to argue that human values not be preferable to AI values.

Expand full comment

Asked GPT4+Dalle3 to come up with some hidden adversarial text images - so far it does either a very bad job (eg. the adversarial information printed as a slogan on a cushion in the background) or believes it has done a great job (hidden the image in a shadow) when it definitely hasn’t. Or at least I can’t perceive the hidden text, and neither can a new instance of GPT4! No doubt it will get better at this.

Expand full comment

Starlight Labs is a really fun integration of all the currently available pieces, thanks for pointing it out.

"If we don’t want China to have access to cutting edge chips, why are we allowing TSMC and Samsung to set up chip manufacturing in China?"

I never expect to say this sentence to you in any other domain. But I'm worried that you're mismodeling this game.

Let's say a third of TSMC's revenue comes from the PRC, a third comes from the US, and a third comes from rest of world. Play this out as TSMC. I think you'll find players have cards they aren't playing because we are in a local equilibrium.

Actually the revenue split is maybe 2/3 US, 1/6 PRC, 1/6 Taiwan. Even so the "1/6" is in the tens of billions of dollars. And actually all the players are probably setting up longer term plays. It's like that part of an MMA match where they're just on the ground, adjusting their grip by centimeters, and it looks like nothing is happening, but at some point someone's going to get ahead in either leverage or confidence and things get wild.

An asymmetric three player event driven game ala Twilight Struggle might be fun here actually.

PRC played tech theft, but TSMC countered with 'More Art Than Science!'

US played 'Just buy the whole fab' but PRC countered with 'NIMBYs in your own backyard!"

DEFCON track is of course replaced with a strait crisis track which shuts down global commerce.

Expand full comment

Quick thought on the misinformation bit - we might see a phase change once photorealistic (videorealistic?) videos become as easy to generate as text, and also look as reliable. It seems true that misinformation in textual form is already cheap enough to produce, but videos are still harder to fake and often lead to a more visceral, emotional reaction

Expand full comment
author

My anticipation is that video is much harder than text if you want to pretend it is actually real at any level of scrutiny. But yes, at SOME point it's a problem.

Expand full comment

A thought on the Chinese regulation: a couple of weeks ago the idea that RLHF makes LLMs less intelligent and more "politically aligned" was making the rounds. If it turns out to be true (not usre how likely), it might turn out that the Chinese regulations will cripple their LLMs and put them further behind those in the US

Expand full comment

On the change to export rules, a hidden detail reported in https://www.reuters.com/technology/us-throws-nvidia-lifeline-while-choking-off-chinas-chipmaking-future-2023-10-18 seems really pertinent:

> U.S. officials asked for input in devising a "tamperproof" way to keep systems that might contain up to 256 AI chips from being strung together into a supercomputer.

When aiming to catch a Chinchilla, hearing the US authorities nudge chip makers toward tamper-resistant hardware to implement compute governance measures (cf audit/logging of training details as per the Shavit paper) seems like a big deal.

Would be pretty ironic if default human/national distrust ends up doing us more good than harm.

Expand full comment