IMO, a lot of AI doomerism is basically status competition: "I'm smart and important because I am concerned about the exciting new type of world-ending catastrophe, instead of the old boring types of world-ending catastrophes"


Personally, I worry more about my grandchildren going to school and having to do active (is there any other kind?) shooter drills than I do about future AI scenarios. I talk with them about that. I tell them about the air raid drills we had in the ’60s. And, going to a Catholic school, I worried whether I would renounce my faith at gunpoint or die a courageous altar boy...still not sure


Thanks for the clearly articulated voice of reason.


"If there is a new world order - AI or something else changes everything - and we are not all dead, how do you prepare for that? Good question. What does such a world look like? Some such worlds you don’t have to prepare and it is fine. Others, it is very important that you start with capital. Keeping yourself healthy, cultivating good habits and remaining flexible and grounded are probably some good places to start."

This is the question I am working on answering at the moment. It seems to me that it is universally agreed that AGI will happen in my lifetime (I am 34). My reasoning tells me that either AGI destroys us, or it fundamentally changes society, improving the average human standard of living substantially. This could very well mean that concepts such as law, government, capital markets, family systems, etc. are significantly different from how they are today.

Thinking about preparing for this new world is hard. Here are a few things I've begun doing:

1. I am thinking less of life past 60, and shifting that energy more to the present moment.

2. Saving for retirement is slowly going down the list of priorities. I have a very healthy asset base and I am not going to blow through it all, but I am also not going to maximize savings either. I will put some money away, but less than I would have a few years ago. The percentage I am not saving for retirement will go toward the present moment.

3. Definitely not delaying bucket list items. If I can do it, I will do it.

4. The probability of having biological children has gone down. I have discussed this with my partner. We will not be ready for children for the next three years, and she's older than me, so the chances of biological children were already low. Furthermore, adopting is still on the table, and the probability of that is even higher now.

More importantly, these are all moving goalposts as developments happen.

Are you shifting your life in any way, Zvi?


I don't get this at all. I'm reminded of the end of "No Country for Old Men": "What you got ain't nothin' new. This country's hard on people. You can't stop what's coming. It ain't all waiting on you. That's vanity."

AI will never destroy all value. Humans will still need to eat. AI will never be able to keep its own power running, like it seems to in the movies. You will need humans every step of the way, and those humans, by their very existence, create value. Maybe that puts me into a different category, some kind of new Luddite, but I just don't see it.

Mar 1·edited Mar 1

So, if you won't make an AI ETF, would you consider making an anti-AI ETF (e.g. an index fund minus AI companies)? This seems like it would be helpful because:

a) As with green investment portfolios, this would let socially responsible investors who don't want to contribute to the problem be able to avoid index funds (which will indirectly become progressively larger investments in the AI sector over time if no winter occurs),

b) Selfishly, AI-winter-type scenarios (where we neither die nor radically transform the economy in the next few decades) casually seem to represent the bulk of the scenarios I'd want my retirement funds to hedge against!
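For what it's worth, the mechanical part of "an index fund minus AI companies" is just exclude-and-renormalize, the same screen green/ESG funds apply. A toy sketch (the tickers, weights, and the choice of which names count as "AI-exposed" are all made-up illustrations; classifying companies is the genuinely hard part):

```python
# Toy sketch: build "index minus AI" weights by dropping assumed
# AI-exposed tickers from a cap-weighted index and renormalizing.
# All tickers and weights below are illustrative, not real fund data.

index_weights = {
    "AAPL": 0.07, "MSFT": 0.065, "NVDA": 0.03,
    "GOOG": 0.04, "JNJ": 0.015, "XOM": 0.013,
}
ai_exposed = {"NVDA", "MSFT", "GOOG"}  # assumption: which names count as "AI"

# Keep everything not on the exclusion list, then rescale so weights sum to 1.
kept = {t: w for t, w in index_weights.items() if t not in ai_exposed}
total = sum(kept.values())
anti_ai_weights = {t: w / total for t, w in kept.items()}
```

Note the side effect the renormalization makes explicit: the excluded mega-cap weight gets redistributed onto whatever remains, so the "anti-AI" fund becomes a concentrated bet on the rest of the index.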


What’s driving the sentiment against Facebook?

I worked there for a long while, and in better times there was a kind of beautiful chaos where everyone worked on what they wanted to work on and it all mostly kind of worked, and produced tremendous value for the company.

But careerism became more of a driving force as time went on, and this had some bad effects.

For example, the peer-to-peer payments on Messenger. This was a good product, and it had a big advantage over competitors: payments appeared in recipient bank accounts mere minutes after being sent, because the company did not care to make money off of in-flight transactions.

But once it was built and working well, there wasn’t much more to be done with it that would look impressive on your reviews. It ended up being maintained by one guy as a labor of love. Ridiculous amounts of money were being funneled through this thing, people were relying on it to pay rent and other necessities, and there was just one guy.

I doubt this is unique to Facebook; it largely imports its culture from Google. Engineers generally want to advance their careers, and the path of least resistance is moving some important metric. Prevention unfortunately doesn’t translate well to metrics, especially prevention of a one-off.


Far out, how do you get the time?


> A ‘normal’ future could still happen.

Seems very unlikely, though, even if we don't get imminent FOOM. In the context of saving for retirement, what about advancements in fighting aging? What about ~trivial automation of things like cars and warehouses dramatically reducing the amount of real work? Is it really realistic to expect there won't at least be a substantial UBI?

I wonder about the wisdom of accumulating ETH. In case there really is a "normal" future, it seems like it should eventually become a standard. Of course it could be something else instead, but it's a strong Schelling point.

Mar 6·edited Mar 6

> The exception is that the Big Tech companies (Google, Amazon, Apple, Microsoft, although importantly not Facebook, seriously f*** Facebook) have essentially unlimited cash, and their funding situation changes little (if at all) based on their stock price.

What about Nvidia?


I understand why investing in AI companies is bad from an AI alignment perspective. But in case aligned AGI happens... there's still the problem of human alignment. Investing in these companies might provide a little bit of personal safety in case we end up in a crap situation where there are humans without enough capital, in a world where wealth is still necessary, and their labor is worthless.
