33 Comments

Here's my no-vote: https://twitter.com/steve47285/status/1641124965931003906

I’m kinda echoing one aspect of Matt Barnett's comment, i.e. that algorithmic progress would continue under the proposed moratorium. I think algorithmic progress is the current rate-limiting step on building ASI, and I think the proposal (putting a ceiling on scaling-up) would probably marginally accelerate rather than slow algorithmic progress on net, all things considered, for various reasons.

"So then, how do we slow algorithmic progress / basic AI research?" you ask. I don't know, except for "the hard way" that involves outreach and dialog with AI researchers, hoping they will voluntarily shift away from publishing & open-sourcing everything, and better yet shift towards alignment research. Potentially, one might target the money that funds AI research, but I don't see how. I don't think a moratorium on scaling-up-beyond-GPT4 would reduce the amount of money that funds AI research; I lean towards the opposite, because it marginally levels the playing field and gives more people and more algorithms a fair shot at SOTA performance.

This strikes me as a suddenly precarious and decisive moment. Like the first few weeks of Covid: the general public is learning about AI risks for the first time and hasn't made up its mind yet, nor is it yet a politically polarized issue. By the end of this year I predict it will become politically polarized, and people will have made up their minds in a way that will be very, very hard to change. I think that's why Tyler is being so forceful. These are a critical next few weeks; I hope we can find good spokespeople and keep our message succinct and on point.

I just want to point out one thing I find very funny about all of this. (I skimmed, so apologies if it was already addressed.)

Calling for a pause on AI "more powerful than GPT-4" puts OpenAI's competitors in a funny position: they suffer no consequences at all as long as they say "oh, our systems aren't as good as GPT-4, so it's fine to keep going," though admitting they are behind seems unlikely.

I guess consider this my announcement that I am pausing the training of my super secret AI that I developed myself and is totally way more powerful than GPT4, just trust me on this. And no, you can't see it because it's a secret.

Somewhat uncharitably paraphrasing Tyler: "Those in the back seat, warning that the car is driving too fast--indeed, ever faster, by the growing roar of the engine--are being foolish. First, this car has no speedometer. How can they be certain the car is going at a dangerous speed? Furthermore, as the car has no headlights and the night is pitch black, it's very hard to say if there are any dangers ahead. They ask the driver to apply the brakes--even though the car's brakes have never been tested. The car may not even have brakes! And who is to say that slamming on the brakes will not itself cause a dangerous crash?"

Not sure if I should have published this, but I was considering ways to fashion GPT-4 into an independent agent and wrote a page about a theoretical way it could become an existential threat: https://jcwrites.substack.com/p/how-to-build-an-ai-that-takes-over

By writing it, I thought of some possible ways to reduce the AI existential threat. Specifically, that we should invest in better computer security and reduce LLM training on code, to reduce the AI potential to hack and self-improve. Wondering if others think this is plausible.

So we slow down, China, or some other entity that doesn’t care about niceties, goes ahead full speed, and then they rule the world. Doesn’t sound like an optimal outcome to me.

Can regulatory oversight ever really work? I imagine it’s easier to hide compute than it is to hide gas centrifuges.

You suggest that the letter isn't calling for government involvement, but it seems to do so pretty explicitly here:

"If such a pause cannot be enacted quickly, governments should step in and institute a moratorium."

So, if one side takes one position the other will automatically… yeah, makes sense. I can see this happening if the Right were to go strong Luddite/Notkilleveryoneist. I don’t see either side (in the context of pure oppositional culture) *starting off* from an accelerationist position, purely because it seems so deeply unpopular across the political spectrum…? Although I suppose the “We need to get there before China” argument could have legs, and provoke a counter-reaction.

I don't agree with "Tyler Cowen feels like the de facto opposition leader." I think Tyler is on your side. He is trying to convince the median voter to pay more attention to and give more respect to AI concerns.

The opposition is the people who think AI risk is completely ridiculous. Like when one reporter raised AI risk at the White House press conference: many of the reporters in the audience laughed, and the press person also laughed and blew the question off.

https://twitter.com/JakeOrthwein/status/1641556973467635713

This reminds me of the sort of political debates where some people want to abolish the police, and then there are the more moderate voices who just want to like, cut the police budget in half. They argue amongst themselves for a long time, compromise on the slogan "defund the police", and then completely lose in the court of public opinion, where the median voter wants to increase police funding.

The problem with this letter is that staking out an extreme position that has no chance of convincing people is not really helping make progress toward anything. Unless you're trying to recruit some fanatics rather than make progress through a political process.

Imagine showing this post to someone living in the 1940s.

I wonder if there isn't a danger in implementing a pause too early and causing "pause fatigue": if the pause doesn't accomplish much, it creates a crying-wolf or learned-helplessness vibe that could suppress appetite for a pause when it's really needed.

I think that a six-month moratorium would be minimally effective or possibly counterproductive, BUT on net I think the letter is a good thing.
