IMO, a lot of AI doomerism is basically status competition: "I'm smart and important because I am concerned about the exciting new type of world-ending catastrophe, instead of the old boring types of world-ending catastrophes"
100% agree. I am skeptical of a lot of the AI doomerism because the status/funding/etc. of the doomers is in direct proportion to how much concern they raise. Even for AI safety people who claim (and probably believe) they are doing this out of altruism, that incentive is always there, even if unconscious.
That said it still does seem like something to be at least a little worried about!
This is an especially silly example of the "appeal to motive" fallacy because the motive doesn't make any sense. Even if you assume they only care about funding, you can get far more funding working on AI capabilities than AI safety. OpenAI just got another $10 billion, and Microsoft obviously didn't do it because they were worried about AI safety.
In all fairness, one used to be able to get a lot more funding working on industrial capacity than on industrial pollution worries. That turned on its head pretty quickly, and those now making money on the latter probably couldn't have made as much in the former.
There is always a market for catastrophe prediction and prevention, one where demand for catastrophe outstrips supply. The real problem, as always, is determining which are real worries and which are manufactured or entirely vaporware.
I’ve worried that I’m overestimating its likelihood because of some unseen motivation to be smart and edgy. I do notice that global warming doomers provoke an eye roll from me of late.
It's symmetric to anti-AI-doomerism.
Personally, I worry more about my grandchildren going to school and having to do active (is there any other kind?) shooter drills than I do about future AI scenarios. I talk with them about that. I tell them about the air raid drills we had in the ’60s. And, going to a Catholic school, I worried whether I would denounce my faith at gunpoint or die a courageous altar boy... still not sure.
Thanks for the clearly articulated voice of reason.
"If there is a new world order - AI or something else changes everything - and we are not all dead, how do you prepare for that? Good question. What does such a world look like? Some such worlds you don’t have to prepare and it is fine. Others, it is very important that you start with capital. Keeping yourself healthy, cultivating good habits and remaining flexible and grounded are probably some good places to start."
This is the question I am working on answering at the moment. It seems to me that it is universally agreed that AGI will happen in my lifetime (I am 34). My reasoning tells me that either AGI destroys us, or it fundamentally changes society, improving average human standards of living substantially. This could very well mean that concepts such as law, government, capital markets, family systems, etc. are significantly different than they are today.
Thinking about preparing for this new world is hard. Here are a few things I've begun doing:
1. I am thinking less about life past 60, and shifting that energy more to the present moment.
2. Saving for retirement is slowly moving down the list of priorities. I have a very healthy asset base and I am not going to blow through it all, but I am also not going to maximize savings. I will put some money away, just less than I would have a few years ago. The percentage I am not saving for retirement will go to the present moment.
3. Definitely not delaying bucket list items. If I can do it, I will do it.
4. The probability of having biological children has gone down. I have discussed this with my partner. We will not be ready for children for the next three years, and she's older than me, so the chances of biological children were already low. Furthermore, adoption is still on the table, and the probability of that is even higher now.
More importantly, these are all moving goalposts that shift as developments happen.
Are you shifting in any way your life Zvi?
I don't get this at all. I'm reminded of the end of "No Country for Old Men": "What you got ain't nothin' new. This country's hard on people. You can't stop what's coming. It ain't all waiting on you. That's vanity."
AI will never destroy all value. Humans will still need to eat. AI will never be able to keep its own power running the way it seems to in the movies. You will need humans every step of the way, and those humans, by their very existence, create value. Maybe that puts me into a different category, some kind of new Luddite, but I just don't see it.
So, if you won't make an AI ETF, would you consider making an anti-AI ETF (e.g. an index fund minus AI companies)? This seems like it would be helpful because:
a) As with green investment portfolios, this would let socially responsible investors who don't want to contribute to the problem avoid index funds (which, if no winter occurs, will indirectly become progressively larger investments in the AI sector over time),
b) Selfishly, AI-winter-type scenarios (where we neither die nor radically transform the economy in the next few decades) casually seem like they represent the bulk of the scenarios I'd want my retirement funds to hedge against! (A rough sketch of what the weighting could look like is below.)
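For what it's worth, the mechanics of such a fund are simple in principle: exclude an AI bucket and cap-weight whatever remains. Here is a minimal sketch in Python, where every ticker, market cap, and the AI classification itself are made-up placeholders, not a real fund methodology:

```python
# Illustrative sketch only: cap-weight an index after excluding an AI bucket.
# All tickers and market caps below are hypothetical placeholders.

market_caps = {  # in $bn, made up
    "AAPL": 2800,
    "MSFT": 2400,
    "NVDA": 1100,
    "XOM": 450,
    "JNJ": 420,
}
ai_tickers = {"MSFT", "NVDA"}  # assumed AI-exposure classification

# Drop the AI bucket, then renormalize the remaining cap weights.
ex_ai = {t: cap for t, cap in market_caps.items() if t not in ai_tickers}
total = sum(ex_ai.values())
weights = {t: cap / total for t, cap in ex_ai.items()}

for ticker, weight in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{ticker}: {weight:.1%}")
```

The renormalization step is also why a) matters: as AI market caps grow, a plain cap-weighted index drifts toward the AI sector automatically, while the ex-AI weights stay pinned to everything else.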
I mean, no, because I do not expect there to be sufficient demand for that to make sense, but if someone wants to back up the money truck then sure, why not, I guess.
Fair enough. It hadn't occurred to me what the overhead might be.
It's probably 'trivial' in some sense (and to some people), but still requires (at least) the 'generic heroic efforts' that any new business does, beyond the specific (and significant) fixed costs of a financial company capable of offering such a product.
What’s driving the sentiment against Facebook?
I worked there for a long while, and in better times there was a kind of beautiful chaos where everyone worked on what they wanted to work on and it all mostly kind of worked, and produced tremendous value for the company.
But careerism became more of a driving force as time went on, and this had some bad effects.
For example, the peer-to-peer payments on Messenger. This was a good product, and had a big advantage over competitors: payments appeared in recipient bank accounts mere minutes after being sent, because the company did not care to make money off of in-flight transactions.
But once it was built and working well, there wasn’t much more to be done with it that would look impressive on your reviews. It ended up being maintained by one guy as a labor of love. Ridiculous amounts of money being funneled through this thing, people relying on it to pay rent and other such necessities, and there was just one guy.
I doubt this is unique to Facebook, it largely imports its culture from Google. Engineers generally want to progress their careers and the path of least resistance is moving some important metric. Prevention unfortunately doesn’t translate that well to metrics, especially prevention of a one-off.
I kind of feel the same way.
Google has the same reputation for breaking and sunsetting products. Search engines are the front door to the internet.
Yet there's so much pent-up rage at Facebook, when hey... it's not unique. People pull out Cambridge Analytica 10x more than they pull out Project Maven, supply chain slave labor, or anticompetitive privacy messaging that doesn't apply to their own product (ahem...).
Facebook has thrown its hat in the ring, but it's been forced by everyone else not to fall behind. I don't trust Google's responsible AI team (do they still have one?). On the other hand, Facebook probably has a gigantic knowledge base to work with for parsing human value alignment. What better training data is there than the real conversations people are having?
far out how do you get the time
> A ‘normal’ future could still happen.
Seems very unlikely tho, even if we don't get imminent FOOM. In the context of saving for retirement, what about advancements in fighting aging? What about ~trivial automation of things like cars and warehouses dramatically reducing the amount of real work? Is it really realistic to expect there won't at least be a substantial UBI?
I wonder about the wisdom of accumulating ETH. In case there really is a "normal" future, it seems like it should eventually become a standard. Of course it could be something else instead, but it's a strong Schelling point.
> The exception is that the Big Tech companies (Google, Amazon, Apple, Microsoft, although importantly not Facebook, seriously f*** Facebook) have essentially unlimited cash, and their funding situation changes little (if at all) based on their stock price.
What about Nvidia?
---
I understand why investing in AI companies is bad from an AI alignment perspective. But in case aligned AGI happens... there's still the problem of human alignment. Investing in these companies might provide a little bit of personal safety if we end up in a crap situation where some humans lack capital, in a world where wealth is still necessary and their labor is worthless.
Amused/disturbed because at first I thought, what an unreasonably strong opinion about having kids, then realized it's probably good advice to a person who literally asks that question.