57 Comments
Do you have any view or insight into Beff Jezos' startup extropic.ai? I don't really understand what it is that it claims to be doing.

My only relevant position is that humanity and biology needs to survive, no matter what.

"I highlight it to show exactly how out of line and obscenely unacceptably rude and low are so many of those who would claim to be the ‘adults in the room’ and play their power games. It was bad before, but the last month has gotten so much worse."

I get off the bus whenever people shift from "here are the logical reasons and data why I'm right" to "now I shall play tribal monkey politics games and instead advocate with emotional appeals to people's identities." Even when I agree with those emotions, and identify with those identities. You could make this case against lots of people on the left, but it's no less silly when not-left people do it. I don't actually think "AI doom" is inevitable, but that's because I think people can identify the risk (and also less severe, but perhaps more likely, risks) and try to mitigate it. Which is a radically pro-tech viewpoint! But if the predominant viewpoint among people building X is "there is no risk to X, we shouldn't do anything about it, and people who think there is are stupid," then I'm pretty confident they are making that risk far more likely.

"radically 99th percentile pro-nuclear-power along with most other technological advancements and things we could build."

I think this is a great point. You see people going "oh the AI doom people are just like luddites or Elizabeth Warren or Gary Gensler or anti-vaxxers" and again, that's where I am getting off the bus. Maybe you're right - maybe AI will be 100% awesome and they're wrong, but the fact that you're not acknowledging that they've been aggressively in support of almost every OTHER advancement, and are lumping them in with people who are canonically against many other advancements, makes me think you're not accurately assessing their objections, and even more importantly, that you don't want to, because it makes it easier to win those monkey politics games. And if that's what you're doing, then you're just making me even more suspicious that you're not right.

It is just so painfully obvious that Verdon is a grifter, just like so many through history including SBF, Trump, Holmes, Madoff and so many more. And exposing a grifter is one of the highest callings of journalism, not a failing.

E/acc is not a legitimate intellectual position, as I am certain you are aware, any more than Elon or Thiel's earth-shattering idea-of-the-month. It is so trivial an oversimplification of society, the economy, and how we should live and govern ourselves that it is astounding it has been adopted by anyone past the late-adolescent "Fountainhead" phase of their own development.

The pap that you are going to claim some moral high ground because they should not have doxxed this grifter is embarrassing to you.

It would be delightful if we recognized that most of the c**ts who keep foisting this nonsense, including MAGA and EA, are men with self-esteem issues trying to find power in a world that they cannot control and that resists them. It is virtually always men (apologies to Holmes and Thatcher, who both wanted to be men), and almost all very poorly read outside of math and the sciences.

They attempt to create mathematical approaches to social issues without any comprehension of the stunning complexity of the world, ecology, psychology, sociology or history. They inevitably create trivial models (like EA's expected value calculation) that make problems seem solvable, without understanding that there are only three outcomes when one tries to do so:

1. The algorithm is simply wrong beyond a very short term or outside of very constrained circumstances, because what is considered noise by the assumptions becomes signal (weather, stocks, food-to-health connections)

2. In order to "work," over a long period of time, the nature of reality is constrained by force to simplify its terms (modern bureaucracy, schooling, financial markets)

3. The model/algorithm becomes exactly as complex as the world and models it perfectly, in which case it is useless, as it would operate at the same time scale as the world. Read "On Exactitude in Science" by Borges.

The people behind these new movements (not unlike the last round of Seasteaders) are usually in category one. Just look at a few of the example uses of EA's algorithm to realize how unbelievably silly it is. The scary part is that they inevitably realize this and move to category 2. Have you noticed that they all end up talking about some form of monarch or tyrant? Usually referring at some point to Plato's "Philosopher King"?

EA and e/acc are childish, simplistic, and immensely dangerous. They are grifts by deeply damaged men/quasi-men. It is incumbent upon us to expose their nonsense and the sociopaths who are behind them.

(Side note: Thatcher did study at Oxford, but she studied Chemistry, which means she took zero classes unrelated to that discipline. No philosophy, no political philosophy, no economics. I attended Oxford; it is not an American university)

I feel like I agree with the core e/acc principles. Beff started it but it has spread to other people I respect like Garry Tan and Marc Andreessen. I see it as focused on the points:

1. Tech progress is good

Anything like an "AI slowdown" or "destroying this company is consistent with the mission" is a bad idea.

2. Freedom of religion

It's okay if Beff Jezos believes in the rise of the machine god. Just like it's okay if people believe in the second coming of Jesus. And it's okay if people believe that one day AI may destroy humanity. But you have to get along and work with people who don't agree with your particular religious vision and not try to convert them all the time.

For the record, Zvi is one of a tiny group to whom I basically outsource my understanding of AI and the issues arising from it. The work he puts in, rounding up and interpreting developments is admirable. Thanks for this piece too.

It seems to me that e/acc has taken the usual course that identity-based memes seem to universally take. It's kind of a particular flavor of the community-of-idiots effect, but it's a little more complicated here: there's definitely an element of people starting it as a goofy joke, but then you get a one-two punch of people thinking they're 100% unironically serious about it & joining in, and ideological opponents thinking they're 100% unironically serious about it & panic-fearmongering over it. Like I remember circa 2015 telling my dad he needs to chill because "alt right" is just a stupid internet joke, but now here we are. I struggle to think of any sort of meme identity that has successfully maintained a reasonable level of unseriousness.

I guess my question, having not paid a ton of attention to this, is how sure are we he’s the “founder” versus the arch-guy-who-decided-to-take-this-meme-too-seriously?

I hate being a meatbag. I hate being talking meat (reference: https://www.mit.edu/people/dpolicar/writing/prose/text/thinkingMeat.html). It's a stupid arrangement of atoms that resulted from some random evolution. I do not want to die, and I do want my consciousness preserved, if not 100% then to some reasonable degree. I also think that the worst-case scenario of AI-based grabby aliens has been empirically invalidated by us still being around after 14 billion years, unless you are willing to bite the bullet and say that "we are the first within a billion light years or so," which is very much anti-Copernican. Given the above, I think e/acc or d/acc or something similar is a lot closer to the approach I want than Yudkowskian doomerism. Full steam ahead until there are good empirical (not hypothetical) reasons to slow down. It is unfortunate that Beff Jezos' discourse style is so obnoxious, and I wish it were more reasonable, but that does not invalidate the goal.

Thank you for this! Listened to the Beff Jezos interview on the Moment of Zen podcast and did not feel he was constructively engaging with any criticisms of the e/acc position.

I try to keep up with AI and accelerationism and related topics - but the people involved in these discussions are so radically divorced from reality and basic human values that it is remarkable that anyone takes the time to think about or describe or summarize their bizarre and (I can only assume) drug-addled points of view.

At least I can take one Substack off my list.

Love everything you write Zvi, keep it up :)

I used to use the word 'doomer' in a kind of ironic self-deprecating humility (I'm one myself: p(doom) in the 10-90% range).

I'm thinking of changing now; this probably does more harm to the underlying idea than it's worth.

Still not in love with the alternatives sadly.

I kind of respect Scott Alexander on a personal level, and I'm especially sympathetic to his protest against doxxing in almost all cases.

At the same time, I'm ambivalent about how I should react to Scott's protest actions. He was doxxed by NYT. Should I not share any links myself from Forbes, or NYT, for "at least a year" too? Should I do that indefinitely until such a time NYT or Forbes issues a public apology? Should I not share them on any social media for at least a year? If the answer to all those questions is 'yes,' am I obliged to pressure anyone else I know with a blog to follow suit?

Those seem like kind of high bars to clear. I'm not personally inclined to follow to the letter all, or even any one, of those rules. If Scott or some of his avid fans knew that, would they expect me to impose those rules on myself anyway? Would they judge me to be bad or wrong if I didn't? If they did, how seriously should I take them?

I've got no sense of what the answers to those questions would be, either.

Dec 8, 2023·edited Dec 8, 2023

At least a partial explanation of Jezos's fanatical brand of e/acc is that it's a useful recruiting tool for his startup. Evidence: a) he recently mentioned the importance of ideology in recruiting passionate, committed people to a startup cause like his, and, relatedly, b) in talking about OpenAI, Roon has made similar comments on Twitter about the importance of ideology in motivating herculean efforts.
