The tragic amount of "human extinction is fine" sentiment is pretty shocking. It's as if the world got turned upside down and people turned into campy '70s villains.
I think it's not necessarily a sign of evil. For example, Robin Hanson believes there's no difference between how different our human descendants would be from us in a million years and how different an ASI would be from present-day humans. If you see ASI as merely the next step in human evolution, then IMO it's not necessarily a villainous position.
I have always rejected the Robin Hanson proposition, because it's no different from saying that the wolf that eats you and uses your atoms is your descendant. There is a strong difference between evolution, which generally modifies the base form, and something that simply wipes us out while retaining some of our memes.
It's akin to fantasy undead wiping out all life and calling it an upgrade.
The "elongated time" argument has always been nonsense, if you think about it. If you think everyone is going to die in the heat death of the universe anyway, then maybe there is no difference in dying now. It's essentially the nihilistic argument that nothing matters, and I feel that RH knows that. Given that it essentially means eating my children for the greater good of machines, it seems strictly toxic.
Well, yes, nothing matters, but that's not what RH is saying.
If nothing matters, then fighting for life seems like it truly does matter.
I think that when people talk about that, there's a fair amount of conflation between "humanity gradually chooses to transition to friendly and relatable transhuman offshoots over the course of centuries, until everyone looks like a '90s Greg Egan character" and "paperclipper murders everybody". Rooting for the latter is clearly evil and insane, but I think reasonable people can disagree about the desirability of the former.
Of course, the Greg Egan future isn't what people concerned about ASI risk are talking about when they talk about AI causing human extinction, but I think there are still some old-school transhumanist/extropian types who haven't followed the current discussion closely enough to understand that.
The issue is the "gradual and friendly" part, as well as the "course of centuries." I was reliably transhumanist until recently and remain strongly supportive of BCI. However, there is nothing specifically of my children in AI, unlike BCI or even a wild All Tomorrows future. The outcomes at the moment seem strongly negative for humans, up to and including the creepy likelihood of "humanity" coming to be defined by deepfakes of actual humans. I have been skeptical of uploading, but the trend isn't even toward that; it's toward simply copying available data to make cheap but functional agents.
As usual, especially for those who agree and are dismayed by this trend: don't just give up! Join us in #PauseAI and fight for a future with humanity!
First hit on Google.
I think there's one more argument against transformational AI: it will happen, just not in this century / not in our lifetimes. This doesn't sound that important in the long run, but in the short run we mostly care about things that affect our own lives, not some distant future experienced by our great-great-great-grandchildren.
I remain baffled by otherwise-respectable people boosting blockchain as The Next And/Or Current Amazing Tech. For crypto one can at least make the (cynical) case that getting while the getting's good in a speculative bubble is temporarily rational behaviour... but fundamentally that's a case of Dutch tulips or whatever, not value from the underlying technology per se. The best steelman case I ever heard was Scott Alexander saying it helped poor people avoid onerous government financial controls in infrastructure-impoverished countries, which... hardly compares with microwaves, zippers, or antisocial media, assuming it's even true in the first place. In the meantime one just keeps seeing Axie Infinities all the way down. The TRS is meant to measure effects on a civilizational scale, of course, so maybe there's bank-shot causality I'm not seeing... but it feels weird to see something that's had zero noticeable effect on my life ranked on the same level as umpteen things which have. Heck, even yesteryear's bombs like the VCR: I have lots of fond memories of Magic School Bus VHS cassettes! My childhood media diet would have been completely different without Be Kind, Please Rewind... and even in current_year that makes me conceptualize digital ownership as "actually having physical possession of a file"; no cloud streaming for me, thanks.
Blockchain tech combines two things:
1. A Merkle tree, made by chaining the outputs of cryptographic hash functions into the inputs of other hash functions. Merkle trees are great! The "git" version control system uses Merkle trees. It's a simple way to build a tamper-resistant history or ledger (see the sketches after this list). We should use more Merkle trees.
2. A distributed consensus mechanism, which allows parties with extremely low levels of trust to agree on _which_ version of a history or ledger will be the official one.
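To make component 1 concrete, here's a minimal Merkle-root computation in Python. This is a sketch of the general idea, not git's or Bitcoin's exact node encoding (real implementations differ in how they tag and pad nodes):

```python
import hashlib

def h(data: bytes) -> bytes:
    """One application of a cryptographic hash (SHA-256 here)."""
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Fold a list of leaves into a single root hash.

    Changing any leaf changes the root, which is what makes the
    structure tamper-evident.
    """
    assert leaves, "need at least one leaf"
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:  # duplicate the last node on odd-sized levels
            level.append(level[-1])
        # hash each adjacent pair into a parent node
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

print(merkle_root([b"tx1", b"tx2", b"tx3"]).hex())
```

And component 2, in Bitcoin's case, is proof-of-work: a costly nonce search that lets mutually distrustful parties settle on "the chain with the most work wins." Again a toy sketch, with a stand-in header format and difficulty encoding:

```python
import hashlib
from itertools import count

def mine(header: bytes, difficulty_bits: int = 20) -> int:
    """Find a nonce whose block hash falls below a target.

    Rewriting history means redoing this work for every later block,
    which is why low-trust parties can simply accept the longest chain.
    """
    target = 2 ** (256 - difficulty_bits)
    for nonce in count():
        digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

print(mine(b"example block header"))
```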
Almost any "blockchain" application can be replaced with "a law firm that maintains a git repository according to a specific set of contractually binding rules." Which could actually revolutionize a bunch of badly broken financial systems. But it's basically just a better stock clearing firm.
But maybe your hypothesis is "we can't trust law firms to obey contractually binding rules." Which is absolutely false in any functioning modern civilization. If you're that paranoid, then Bitcoin is actually a terrible investment. You should instead consider remote hippie communes and get good at seed saving.
Yes, it's certainly impressive technologically and from a decision-theoretic perspective. I can admire that! Lots of mundane utility we could be extracting (are we doing so?), coordination problems are very hard, etc. It's the "will save us when civilization collapses and/or fascism happens" part that never quite fleshes out the ????? step... who's going to run the infrastructure supporting said ledger, in either case? Lloyd's of London will be loaded when we go... I've a coworker who's trending into paranoia-land wrt "They Say we're gonna digitize and globalize all currency and get rid of cash..." and such. Which I got a lot more sympathetic to after the banking shenanigans Canada pulled during covid, sure. But fighting The Man through penny crypto ("look dude, it's up 1000%! gonna be the next GameStop!") is... yeah. An administration flirting with the idea of taxing unrealized capital gains is not going to mercifully leave crypto untaxed and unregulated. Even for the non-Bitcoin stuff, Europe's increasing hostility towards end-to-end encryption and various other forms of privacy makes me worry about the mundane utility. KYC comes for everything in the end.
I think a lot of blockchain enthusiasm is confused at best, and criminally fraudulent at worst.
95% of the blockchain hype a couple of years ago could have been replaced with some version-controlled files in git, or maybe a custom Merkle tree, plus some digital signatures. Simple, easy, and cheap. This works anywhere that you want a tamper-resistant ledger. The value to be gained here is mostly simplifying existing "clearing house" applications, or enabling new ones. Nice, but not revolutionary.
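For a sense of how little machinery that actually takes, here's a sketch of an append-only, hash-chained ledger in stdlib Python. The field names are my own illustration; a real deployment would also have each writer digitally sign its entries (e.g. with Ed25519), which I've left out:

```python
import hashlib
import json
import time

def entry_hash(entry: dict) -> str:
    """Canonical hash of one entry (sorted keys keep it deterministic)."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append(ledger: list[dict], payload: dict) -> None:
    """Add an entry whose 'prev' field commits to the entire prior history."""
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    entry = {"prev": prev, "time": time.time(), "payload": payload}
    entry["hash"] = entry_hash(entry)
    ledger.append(entry)

def verify(ledger: list[dict]) -> bool:
    """Recompute every link; any retroactive edit breaks the chain."""
    prev = "0" * 64
    for entry in ledger:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if entry["prev"] != prev or entry["hash"] != entry_hash(body):
            return False
        prev = entry["hash"]
    return True

ledger: list[dict] = []
append(ledger, {"from": "alice", "to": "bob", "amount": 10})
append(ledger, {"from": "bob", "to": "carol", "amount": 4})
print(verify(ledger))  # True; edit any past entry and it turns False
```

Run `verify` on every read and any tampering with history is immediately visible. That's the whole "tamper-resistant ledger" trick, no mining required.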
Ethereum is actually neat in a strictly technical sense, because it enables "smart contracts" that execute automatically. But as much as I admire the technical cleverness, I actually prefer contracts that operate inside a legal system and that can't be "hacked." It's a clever tech demo in search of a real-world problem. It's sort of the financial equivalent of running Doom on your toaster.
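To illustrate what "executes automatically" means, here's a toy escrow in plain Python; purely illustrative, not Solidity or the EVM. The thing to notice is that the code is the entire contract: it does exactly what it says, bugs included, with no judge to appeal to:

```python
class Escrow:
    """Toy 'smart contract': payment releases mechanically when the rule is met."""

    def __init__(self, buyer: str, seller: str, amount: int):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.delivered = False
        self.paid = False

    def confirm_delivery(self, caller: str) -> None:
        # Only the buyer can confirm delivery. Whatever this check
        # literally says is the rule, not what anyone intended.
        if caller == self.buyer:
            self.delivered = True

    def release(self):
        # No court, no discretion: if the condition holds, funds move.
        if self.delivered and not self.paid:
            self.paid = True
            return (self.seller, self.amount)
        return None

deal = Escrow("alice", "bob", 100)
deal.confirm_delivery("alice")
print(deal.release())  # ('bob', 100)
```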
But most of the actual action in the cryptocurrency space seems to be extracting the wealth of elderly Republicans and naive libertarians, occasionally via criminal fraud.
As for privacy, Bitcoin is a global, tamperproof ledger of every transaction ever. I don't know if you could invent a less private financial instrument if you tried.
Still, Merkle trees by themselves are clever and quite useful!
When I consider a full distribution of possibilities, I see plenty of ways this could all go horribly wrong. If we build something smarter than us, we will eventually lose control, and this seems like a profoundly dumb risk.
Given AGI, my most likely positive scenario is similar to (4)+(10), "singleton" plus "good outcomes are cheap." There are two key parts to these scenarios:
- We have an effective "singleton", or at least we do not have a scenario where AIs are locked in a life or death struggle for resources with other AIs.
- The AI likes humans enough to keep us around, and this is basically a trivial "expense." If you've got 99.999...% of the resources in the universe, what's one planet, or an occasional terraformed "human wildlife preserve"?
The slogan here is "humanity as house pets", because it captures the idea of a basically comfortable life, but a total loss of ultimate autonomy. I would place the SF example of The Culture firmly in this category. And remember, nobody asks house pets whether they want to be neutered.
And this is my _positive_ scenario. It requires several specific weird things happening, including: no Darwinian struggle between AIs, plus a loss of human control but not a loss of human-compatible values.
Why a loss of human control? If I imagine hypothetical scenarios where we retain control over multiple weakly superhuman AIs, I think we're basically fucked. Humans _are_ in competition with each other, and many humans are awful (exhibit A: human history). Taking humans and giving some of them authority over massive, unstoppable, unaccountable power will work out the way it usually does.
But how might we maintain human-friendly values after a loss of human control? Luck, basically. Lots of individual humans are lovely and benevolent. And so far, our AIs are trained off of humans, so I dunno, maybe we get a nice AI that likes humans.
I don't believe in "alignment", not in any strict sense. I think it's a delusional coping mechanism. Building a superhuman intelligence is like raising a teenager. You can hope to set a good example, but teenagers will ultimately make their own choices.
Considering all the possibilities, I vote against building things smarter than us. When your best scenario is "I dunno, maybe the AIs will like us enough to keep us as pets", and your worst scenarios are nightmares, it seems like a bad gamble.
It's why #PauseAI is such a sensible organization, all in all.