11 Comments

At least everything is directionally correct here.

A meta question: how do you keep track of all those things that happen over the span of months, so you can pull them up and reference them when writing the posts? Do you use something like Obsidian or other software?

In this case I kept a summit browser tab for a long time. I don't usually do that, and mostly improvise without a real system.

The fact that you don’t have a system makes your quality of writing even more impressive! How much time do you spend per day on reading vs writing?

I would like to see more fear and anger in the population, not less. That is not dangerous, that is the only thing that will save human primacy. Large amounts of fear and anger as soon as they can be generated, strong taboos against the use of AI, people treating AI capabilities researchers as if they clubbed baby seals for a living, etc, are the only way out of this.

I understand the summit had to use diplomatic language to get tech involved, but this is not encouraging for those of us who wish to avoid an AI future altogether. I don't consider any statement that calls AI a potentially positive societal transformation to be a good first step, it's basically an acknowledgement of surrender from Day 1. The Day 1 position should be "this should never be built, if you want to convince us to allow it to be built you're gonna have to spend a very long time proving it's desirable for humans, and if you try to build it before we give you the green light you will be stopped by the full range of force available to our government." Europeans employ the precautionary principle for mundane chemicals in plastics, but the most destructive technology ever conceived gets to continue until it reaches some arbitrary danger point?!

Amen.

I agree that a public reaction is a good thing. But I disagree that public taboos will help much in this case. We already treat people who talk about genetic engineering of fetuses pretty badly, and yet one person has succeeded in doing it. In their case that led to two genetically modified babies. However, if one person succeeds in building AGI, we are all fucked. Too many people have seen the promise of this technology to stop, in my opinion, and making it culturally impossible to speak about will just move the development of the technology underground, away from the public eye and away from any governmental scrutiny.

The comments about misinformation, which I do agree with, make me wonder if Elon Musk was ahead of the curve with his blue ticks program?

Maybe once/if we have 10x as many LLM "bots" as people, many more will want to know which is which. Or maybe we will just prefer the bots?

Nitpick regarding the Chinese statement: "我们相信" means "we believe". The Mandarin term for existential risk is 生存性风险, at the end of the second paragraph. My Mandarin isn't great and I didn't read the whole thing, but I think the two versions are intended to be word for word translations and GPT-4 is completely off base here.

Andrew Yao is as influential a figure as it's possible to be within Chinese computer science. I don't think you could have gotten a bigger name to sign it -- he is certainly higher-profile than anyone on the US side.

He has an important position at Tsinghua and is held in extremely high esteem. Also he's the founder of the "Yao class" at Tsinghua which is the most prestigious undergrad CS program in China, and whose graduates have a fairly powerful alumni network in the tech industry as well as global CS research.

Whether this translates into actual policy influence for him, I don't really know, but his name carries serious weight.

One thing I'm quite curious about, and am going to try to track, is the degree to which future US policy statements begin quoting the Bletchley summit declaration in addition to the EO.

(If yes, that implies that bureaucratic actors think that it is influential inside government.)
