Except that the closer analogy is “there’s a lab down the street doing gain-of-function research on the world’s deadliest viruses with their windows open.” By the time there’s a bad outcome obvious enough to gain consensus, there’s no real way of undoing it.


Ok - but if that were the case, then the solution is still the same: prove that a bad outcome will happen before it does, then use that evidence to convince enough people to vote for action to prevent it. We *have* evidence that GOF is bad now, and I don't think it would actually be difficult to convince your fellow voters in your town to shut down that lab. It is not currently too late! There's a pretty simple solution in your analogy!

What you're lacking (in this case) is proof. If you showed everyone a time-lapse video of the virus they were working on killing 100 immuno-compromised monkeys in cages, I think people would extrapolate from that evidence and conclude the work was dangerous and should be stopped. I similarly think you can find sufficiently similar, sufficiently extrapolatable test cases that demonstrate whether or not AI will cause bad things to happen. If you don't want to do the technical work yourself, that is also fine, but then just publicly agitate for someone else to do it and reward that person with money or votes. That's all.


Hmmm, isn’t this a pretty bad example? As in, GoF (probably, maybe, whatever, I’d say almost certainly) just killed several million people and yet there is no ban, nor even much of a public outcry?


That's why I say "accurate, extrapolatable-to-the-public evidence." With covid we ALSO coordinated to shut down the world for varying durations on the basis of literally a chart showing a line going up to the right and some pictures from China.

So yes, I think my case holds just fine: if persuaded with sufficiently convincing evidence, we can coordinate massive global action. Obviously things did not go perfectly; we did things we shouldn't have, and we did not do things we should have. How many of those errors we make for future events will depend on how accurately we understand the threat, its likelihood, and its proximity. We are in the "before the next disaster" phase right now. So get out there and find your "up to the right" chart, if it exists.


I’m not disputing your case for action — far from it. “Very little hope” is not the same as “despair”, and arguably should be more motivating than “somewhat more hope.” Unfortunately, most of my remaining hope on this would come from a scenario where some less-than-all-powerful AGI-esque thing causes a quantity of mayhem X that is high enough to prompt drastic action, but not so high that we can’t then recover. I say “would”, because COVID. Yeah we groped around for mitigation while it was happening, but in terms of lessons learned? It’s like there’s been a collective decision to focus on minutiae while ignoring the elephant in the room, and indeed feeding the elephant, idly fondling its trunk, bringing it lady elephants to make babies with (ok, the analogy is getting strained at this point…). So my mayhem value X would appear to significantly exceed “several million deaths.”


Right, you need a minor thing, easily interpretable as applying to everything, to happen first. For covid, that was the late 2019 outbreak in China + small # of rising cases locally - we could all see where things were headed.

But you can do that with AI too. You just have to run the simulations, and I think you can produce evidence (and promote it) that would be at least as compelling, to enough of the relevant people, as those early covid charts were.

But my focus is on the despair that at least some people in this thread are exuding. There are things that can be done. There are things that any of us, no matter how non-technical or powerless, can do to help, just like every other large coordination problem in history. There is no reason for despair.
