33 Comments

Here's my no-vote: https://twitter.com/steve47285/status/1641124965931003906

I’m kinda echoing one aspect of Matt Barnett's comment, i.e. that algorithmic progress would continue under the proposed moratorium. I think algorithmic progress is the current rate-limiting step on building ASI, and I think the proposal (putting a ceiling on scaling-up) would probably marginally accelerate rather than slow algorithmic progress on net, all things considered, for various reasons.

"So then, how do we slow algorithmic progress / basic AI research?" you ask. I don't know, except for "the hard way" that involves outreach and dialog with AI researchers and hoping they will voluntarily shift away from publishing & open-sourcing everything and better yet towards alignment research. Potentially, one might target the money that funds AI research, but I don't see how. I don't think a moratorium on scaling-up-beyond-GPT4 would reduce the amount of money that funds AI research; I lean towards the opposite, because it marginally levels the playing field and gets more people and more algorithms a fair shot at SOTA performance.


>"So then, how do we slow algorithmic progress / basic AI research?" you ask. I don't know, except for "the hard way" that involves outreach and dialog with AI researchers and hoping they will voluntarily shift away from publishing & open-sourcing everything and better yet towards alignment research.

This is functionally equivalent to saying 'it can't be done'.


Second your assessment that the pause as proposed would likely accelerate progress. OpenAI's graph shows that they don't think next word prediction can be significantly improved over GPT-4 without a massive amount of training. I am therefore not surprised that Altman wants to be able to tell his team to step away from the engineering tweaks necessitated by having to keep up with everyone else, and instead focus on the conceptual foundations for a while.


This strikes me as a suddenly precarious and/or decisive moment that we find ourselves in. Like the first few weeks of Covid, the general public is learning about AI risks for the first time and hasn’t made up their mind(s) yet, nor is it yet a politically polarized issue. By the end of this year I predict it will become politically polarized and people will have made up their minds in a way that it will be very, very hard to change them. I think that’s why Tyler is being so forceful. These are a critical next few weeks, I hope we can find good spokespeople and try to keep our message succinct and on point.

Comment deleted, March 30, 2023

You're right. Lockdowns were bad, so better we go gung-ho into developing a technology which is likely to destroy humanity.

Asserting that AI will have "benefits" doesn't mean you get to ignore the AI risk case. Nobody is denying AI can have benefits. But getting those benefits a few decades sooner isn't worth the risk of humans being wiped out. And again, if you don't think humans face any meaningful threat of extinction due to superintelligent machines, 𝘱𝘳𝘰𝘷𝘦 𝘪𝘵.

The AI doomers have made their case. If it's correct, then no "benefits" can possibly justify building AGI without the proper alignment foundation being built first. And you've done nothing to show it's not correct.

Comment deleted, March 30, 2023

Lockdowns weren't permanent. They changed America, for example, for longer than just the time they existed, but the point of a pause 𝘪𝘴 𝘱𝘳𝘦𝘤𝘪𝘴𝘦𝘭𝘺 𝘵𝘰 𝘤𝘩𝘢𝘯𝘨𝘦 𝘵𝘩𝘪𝘯𝘨𝘴 𝘰𝘷𝘦𝘳 𝘵𝘩𝘦 𝘭𝘰𝘯𝘨 𝘵𝘦𝘳𝘮. Nobody is denying this. Such a pause would be completely pointless if things were business as usual after 6 months.

But at the current pace of AI development and investment, there's no time to even step back and consider if and what long term change is even required. At least with 6 months, as pathetically little as that is, we can actually determine if there's any hope of stopping this thing destroying us all. The free market solution cannot work except by accident. Nobody is going to voluntarily stop AI development short of an AGI, and nobody will likely be able to tell how close we are.

Do you consider the government regulating the production and trade of fissile material and bomb-making knowledge as some oppressive, intolerable imposition?


Did you even read the post? The letter is *not* asking for a government-enforced moratorium, rather it’s imploring the few leading AI developers to voluntarily agree to a 6-month pause. Zvi emphasized this several times in the post.

Read carefully, or don’t comment.

Comment deleted, March 30, 2023

Good point. You could argue that the way they urged a government moratorium is more of a scare tactic than an actual request, but I prefer to take things at face value. I basically agree with your point about the COVID response, so this does change my overall stance on the letter. Not that anyone is asking for my stance.

Also, f me for not reading the letter carefully enough and setting up your perfect retort. One of many reasons not to be a dick when you think you’re winning an argument.


I’m not sure this issue polarises in any clear way along existing culture war/political faultlines. Do you, and if so, how? If not, what would you expect the coming AI polarisation to look like?


I just want to say one thing I find very funny about all of this. (I skimmed, so apologies if it was already addressed.)

Calling for a pause on AI "more powerful than GPT-4" puts OpenAI's competitors in a funny position: they don't suffer any consequences at all as long as they say "oh, our systems aren't as good as GPT-4, so it's fine to keep going," but admitting they are behind seems unlikely.

I guess consider this my announcement that I am pausing the training of my super secret AI that I developed myself and is totally way more powerful than GPT4, just trust me on this. And no, you can't see it because it's a secret.


Somewhat uncharitably paraphrasing Tyler: "Those in the back seat, warning that the car is driving too fast--indeed, ever faster, by the growing roar of the engine--are being foolish. First, this car has no speedometer. How can they be certain the car is going a dangerous speed? Furthermore, as the car has no headlights and the night is pitch black, it's very hard to say if there are any dangers ahead. They ask the driver to apply the brakes--even though the car's brakes have never been tested. The car may not even have brakes! And who is to say that slamming on the brakes will not itself create a dangerous crash?"

Comment deleted, March 30, 2023

What experience with superintelligent machines do you or Tyler Cowen have?

Comment deleted, March 30, 2023

Isn’t it more incumbent on you/Tyler to demonstrate experience? “I don’t have a driver’s license, neither do you, let’s stop the car” seems like a more reasonable position than “I don’t have a driver’s license, but neither do you, so I’ll continue driving.”

Comment deleted, March 31, 2023

We know exactly what the 'brake' pedal does - it means we stay where we are, alive in the current world. There are a lot of problems with the current world, but that's better than everyone dying, and in any case AI doesn't trivially help with all of our problems and can make many of them much, much worse.

But there's no great mystery here, except in the future where we don't brake, where avoiding everyone dying depends on us gaining an understanding of how AI actually works faster than we develop it.


Exactly. There's a massive asymmetry that means we have to be biased against unabashed AI development.


None, 𝘸𝘩𝘪𝘤𝘩 𝘪𝘴 𝘱𝘳𝘦𝘤𝘪𝘴𝘦𝘭𝘺 𝘸𝘩𝘺 𝘥𝘦𝘷𝘦𝘭𝘰𝘱𝘪𝘯𝘨 𝘴𝘶𝘱𝘦𝘳𝘪𝘯𝘵𝘦𝘭𝘭𝘪𝘨𝘦𝘯𝘵 𝘮𝘢𝘤𝘩𝘪𝘯𝘦𝘴 𝘤𝘢𝘱𝘢𝘣𝘭𝘦 𝘰𝘧 𝘥𝘪𝘴𝘦𝘮𝘱𝘰𝘸𝘦𝘳𝘪𝘯𝘨 𝘶𝘴 𝘢𝘵 𝘣𝘳𝘦𝘢𝘬𝘯𝘦𝘤𝘬 𝘴𝘱𝘦𝘦𝘥 𝘸𝘪𝘵𝘩𝘰𝘶𝘵 𝘢𝘯𝘺 𝘮𝘦𝘢𝘯𝘪𝘯𝘨𝘧𝘶𝘭 𝘳𝘦𝘨𝘶𝘭𝘢𝘵𝘪𝘰𝘯 𝘰𝘳 𝘶𝘯𝘥𝘦𝘳𝘴𝘵𝘢𝘯𝘥𝘪𝘯𝘨 𝘰𝘧 𝘵𝘩𝘦 𝘳𝘪𝘴𝘬𝘴 𝘯𝘦𝘦𝘥𝘴 𝘵𝘰 𝘣𝘦 𝘥𝘰𝘯𝘦 𝘸𝘪𝘵𝘩 𝘵𝘩𝘦 𝘶𝘵𝘮𝘰𝘴𝘵 𝘤𝘢𝘳𝘦.

And we're currently not even 1% close to doing any of this with the utmost care.

There is a fundamental asymmetry here which you're completely ignoring. If we pause AI development and Cowen is right, we delay the benefits of AI by a few decades, which is suboptimal but trivial in the long run. If Cowen is wrong, we all die and don't get a second chance to do things properly. If there's a meaningful risk of human extinction, we HAVE to be careful. AI can provide no benefit worth any appreciable risk of the death of all humans.
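To make that asymmetry concrete, here's a toy expected-value sketch; the probability and cost numbers are made-up placeholders purely for illustration, not anyone's actual estimates:

```python
# Toy expected-value sketch of the asymmetry argument.
# All numbers are made-up placeholders for illustration only.

P_DOOM = 0.05              # assumed probability that rushing ahead ends in extinction
DELAY_COST = 1.0           # cost of delaying AI benefits by a few decades (arbitrary units)
EXTINCTION_COST = 1000.0   # cost of extinction (arbitrary; arguably unbounded)

# If we pause and the doom case turns out to be wrong, we roughly just pay the delay cost.
expected_loss_pause = DELAY_COST

# If we don't pause, in expectation we pay the extinction cost weighted by its probability.
expected_loss_no_pause = P_DOOM * EXTINCTION_COST

print(f"Expected loss from pausing:     ~{expected_loss_pause}")
print(f"Expected loss from not pausing: ~{expected_loss_no_pause}")
# Even at a small P_DOOM, the unrecoverable downside dominates the recoverable delay cost.
```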

Cowen just wants to turn his brain off and not deal with the real, thorough theory of the risk from a machine superintelligence and what makes it so dangerous. He instead uses dumb, generic analogies and axioms that completely ignore the specifics of the matter. Cowen is either familiar with the real AI risk case and is choosing to ignore it because it's too difficult to understand or rigorously argue against, or he's unfamiliar with it and thinks he can get away with these generic pro-technology arguments that don't apply.


What a pathetic argument.

Scott adequately addressed this yesterday on ACX: https://astralcodexten.substack.com/p/mr-tries-the-safe-uncertainty-fallacy

But it should be obvious that lacking information about something is a reason to be careful.

If you're driving somewhere unfamiliar in the dark with no headlights, 𝘺𝘰𝘶 𝘴𝘭𝘰𝘸 𝘥𝘰𝘸𝘯. This should be obvious.

There are serious technical issues that make aligning AGI extremely challenging. These issues need to be addressed. Instead, people like Cowen, who don't even begin to understand these technical issues, would rather switch off their brains and make generic arguments that treat AI as a technology no different from any other. But it's not; it's already been shown why this is the case, yet Cowen will continue to ignore it.


Not sure if I should have published this, but I was considering ways to fashion GPT-4 into an independent agent and wrote a page about a theoretical way it could become an existential threat: https://jcwrites.substack.com/p/how-to-build-an-ai-that-takes-over

By writing it, I thought of some possible ways to reduce the AI existential threat. Specifically, that we should invest in better computer security and reduce LLM training on code, to reduce AI's potential to hack and self-improve. Wondering if others think this is plausible.


So we slow down; China, or some other entity that doesn’t care about niceties, goes ahead full speed; and then they rule the world. Doesn’t sound like an optimal outcome to me.

Comment deleted, March 30, 2023

It's bizarre that you're accusing others of 'hand-waving' things away considering you basically pretend the AI risk case doesn't even exist.


China cannot even develop a competent ChatGPT clone because it cannot make an LLM aligned with the CCP party line. Zvi has talked about this already.

Comment deleted, March 31, 2023

Who said that? Not me.


Can regulatory oversight ever really work? I imagine it’s easier to hide compute than it is to hide gas centrifuges.


Yeah, but if you release a new AI product to market that can't have conceivably been created without a mammoth training run, then you're kind of exposing yourself. If a company has the cash to keep everything in the dark until they get to bona fide AGI, then there's probably little hope for anything. But at least this might slow things down enough to marginally improve the chances of an alignment breakthrough.
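For a sense of why a "mammoth training run" leaves a visible footprint, here's a rough back-of-envelope sketch. All figures are illustrative assumptions, not claims about GPT-4 or any real system; the 6 × parameters × tokens figure is the standard rough estimate for transformer training FLOPs:

```python
# Back-of-envelope: how conspicuous is a frontier-scale training run?
# All figures below are illustrative assumptions, not facts about any actual model.

params = 3e11                 # assumed parameter count (300B)
tokens = 6e12                 # assumed training tokens (6T)
train_flops = 6 * params * tokens   # standard ~6*N*D approximation for transformer training

sustained_flops_per_gpu = 1.5e14    # assumed sustained FLOP/s per accelerator (~50% utilization)
num_gpus = 10_000                   # assumed cluster size

seconds = train_flops / (sustained_flops_per_gpu * num_gpus)
print(f"Total training compute: {train_flops:.1e} FLOP")
print(f"Wall-clock time on {num_gpus} accelerators: ~{seconds / 86400:.0f} days")
# Months of work on a cluster of ~10,000 accelerators means hardware purchases,
# power draw, and cloud contracts that are hard to keep completely hidden.
```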


You suggest that the letter isn't calling for government involvement, but it seems to pretty explicitly do so here:

"If such a pause cannot be enacted quickly, governments should step in and institute a moratorium."


So, if one side takes one position the other will automatically… yeah, makes sense. I can see this happening if the Right were to go strong Luddite/Notkilleveryoneist. I don’t see either side (in the context of pure oppositional culture) *starting off* from an accelerationist position, purely because it seems so deeply unpopular across the political spectrum…? Although I suppose the “We need to get there before China” argument could have legs, and provoke a counter-reaction.


I don't agree with "Tyler Cowen feels like the de facto opposition leader." I think Tyler is on your side. He is trying to convince the median voter to pay more attention to and give more respect to AI concerns.

The opposition is the people who think AI risk is completely ridiculous. Like, one reporter mentioned AI risk at the White House press conference, a lot of reporters in the audience laughed at them, and the press person also laughed at them and blew the question off.

https://twitter.com/JakeOrthwein/status/1641556973467635713

This reminds me of the sort of political debates where some people want to abolish the police, and then there are the more moderate voices who just want to like, cut the police budget in half. They argue amongst themselves for a long time, compromise on the slogan "defund the police", and then completely lose in the court of public opinion, where the median voter wants to increase police funding.

The problem with this letter is that staking out an extreme position that has no chance of convincing people is not really helping make progress toward anything. Unless you're trying to recruit some fanatics rather than make progress through a political process.


What has Cowen said, specifically, that indicates that AI risk is significant and that something needs to be done about it? Where has he ever acted like AI risk 𝘪𝘴𝘯'𝘵 ridiculous?

If he really does think AI risk is real, but his evident belief is that we should 𝘭𝘪𝘵𝘦𝘳𝘢𝘭𝘭𝘺 𝘥𝘰 𝘯𝘰𝘵𝘩𝘪𝘯𝘨 𝘢𝘣𝘰𝘶𝘵 𝘪𝘵, then he's functionally in the anti-AI-risk crowd.

The following is absolutely in no way the attitude of somebody who is genuinely concerned about AI risk to an extent that distinguishes them from the AI doom-deniers:

"Those in the back seat, warning that the car is driving too fast--indeed, ever faster, by the growing roar of the engine--are being foolish. First, this car has no speedometer. How can they be certain the car is going a dangerous speed? Furthermore, as the car has no headlights and the night is pitch black, it's very hard to say if there are any dangers ahead. They ask the driver to apply the brakes--even thought the car's brakes have never been tested. The car may not even have brakes! And who is to say that slamming on the brakes will not itself create a dangerous crash?"


He’s funding alignment work, through Emergent Ventures. And he’s treating the arguments much more seriously than the median commentator, who is either ignoring or laughing at the issue.


Does he have a jot of evidence that alignment can be solved without AI slowing down?


Imagine showing this post to someone living in the 1940s.


I wonder if there is not a danger in implementing a pause too early and causing "pause fatigue" if the pause doesn't do much, creating a crying-wolf or learned-helplessness vibe that could suppress appetite for a pause when it's really needed.


I think that a six month moratorium would be minimally effective or possibly counterproductive, BUT on net balance I think the letter is a good thing.
