There's no reason to despair any more than anything else: if it is a true prediction, then "AI will cause lots of harm to other people" is a political issue. You should react to it however you would to high crime, or a foreign country attempting to invade - petition your elected representatives to learn about the problem and develop an actually effective solution, and learn enough yourself about what an "effective solution" would be that you can keep pestering them until they implement it.
My elected representatives have not been great so far on the issues I care about, where there are already reasonably well-known, probably-workable solutions. I do not expect them to be better about reworking the entire economy to deal with the obsolescence of vast swathes of workers. On the positive* side, it won't be a problem for long, since automation of almost all labour probably implies RSI (recursive self-improvement), which probably implies everyone dying.
My point was to address what I took to be the despair in your post. Technical skills are not what is needed here for "personal action", any more than after Pearl Harbor every citizen who couldn't raise a rifle should have collapsed into despair. Yeah, our reps suck, but they suck in very predictable ways: namely, on issues where the voters - us - provide contradictory demands. After December 7th, politicians quickly accepted that voters definitely would demand a successful resolution to the situation, so they attempted to provide it.
If AI is going to cause an Extremely Bad Thing, it should not actually be that hard (compared to getting reps to do contradictory/unsuccessful things) to 1) find good evidence that shows it will, 2) persuade enough voters that this is true, and 3) not vote for pols who don't have a good solution. That's not my opinion, that's simply the only course of action that the past 10,000 years of human history have left you. Maybe it will be really hard! We should get started early then, and divide the task up amongst the billions of us.
Public despair is bad because it convinces other people not to do things that might help. Your private despair is far more measured: I assume you are still making breakfast, taking care of your family, and working towards retirement - if not, then please seek assistance for those specific items - so in reality you obviously think there is at least a reasonable hope. Let that reasonable hope - that sufficient people will do the right thing - be your public face, and thereby be part of the persuasive case that they do so. Keep doing all the important things in your life that will still be important in 10 years, and also take your citizen's portion of the necessary steps forward.
I don't see why *everyone* would die. I could see huge pressures to reduce the population. Maybe UBI in exchange for sterilization or something like that, at least in some countries. The people who remained would be able to live without working in a relative utopia.
Except that the closer analogy is “there’s a lab down the street doing gain-of-function research on the world’s deadliest viruses with their windows open.” By the time there’s a bad outcome obvious enough to gain consensus, there’s no real way of undoing it.
Ok - but if that were the case, then the solution is still the same: prove that a bad outcome will happen before it does, then use that evidence to convince enough people to vote to take action to prevent it. We *have* evidence that GOF is bad now, and I don't think it would actually be difficult to convince your fellow voters in your town to shut down that lab? It is not currently too late! There's a pretty simple solution in your analogy!
What you're lacking (in this case) is proof. If you showed everyone a time-lapse video of the virus they were working on killing 100 immunocompromised monkeys in cages, I think people would accept that as evidence that could be extrapolated to mean the work was dangerous and should be stopped. I similarly think you can find ways to demonstrate, in sufficiently similar, sufficiently extrapolatable test cases, whether or not AI will cause bad things to happen. If you don't want to do the technical work yourself, that is also fine, but then just publicly agitate for it to be done by someone and reward that person with money or votes. That's all.
Hmmm, isn’t this a pretty bad example? As in, GoF (probably, maybe, whatever, I’d say almost certainly) just killed several million people and yet there is no ban, nor even much of a public outcry?
That's why I say "accurate, extrapolatable-to-the-public evidence." With covid we ALSO coordinated shutting down the world for varying durations on the basis of literally a chart showing a line going up to the right and some pictures from China.
So yes, I think my case - that if persuaded with sufficiently convincing evidence, we can coordinate massive global action - works just fine. Obviously things did not go perfectly: we did things we shouldn't have, and we did not do things we should have. The share of such errors we make for future events will depend on how accurately we understand the threat and its likelihood and proximity - we are in the "before the next disaster" phase right now. So get out there and find your "up and to the right" chart, if it exists.
I’m not disputing your case for action — far from it. “Very little hope” is not the same as “despair”, and arguably should be more motivating than “somewhat more hope.” Unfortunately, most of my remaining hope on this would come from a scenario where some less-than-all-powerful AGI-esque thing causes a quantity of mayhem X that is high enough to prompt drastic action, but not so high that we can’t then recover. I say “would”, because COVID. Yeah we groped around for mitigation while it was happening, but in terms of lessons learned? It’s like there’s been a collective decision to focus on minutiae while ignoring the elephant in the room, and indeed feeding the elephant, idly fondling its trunk, bringing it lady elephants to make babies with (ok, the analogy is getting strained at this point…). So my mayhem value X would appear to significantly exceed “several million deaths.”
Right, you need a minor thing, easily interpretable as applying to everything, to happen first. For covid, that was the late 2019 outbreak in China + small # of rising cases locally - we could all see where things were headed.
But you can do that with AI too. You just have to run the simulations, and I think you can produce evidence (and promote it) that would be at least as compelling to enough relevant people.
But my focus is on the despair that at least some people in this thread are exuding. There are things that can be done. There are things that any of us, no matter how non-technical or powerless, can do to help, just like every other large coordination problem in history. There is no reason for despair.