How is your relationship with those characters? If there’s no backchannel between you and Tyler, seems worthwhile to find or make one. It seems hard to have all discussions in public.


Excellent post! Reminds me of classic SSC stuff in all the best ways.

My preferred theory of Tyler's behavior (I don't follow AI closely) was that he was trying to have Strauss's cake and eat it too-- increase the salience of AI discourse, maintain his influence without getting written off as a Doomer, and push Doomers to improve their arguments (or more precisely their public-facing communication). On this theory, in-the-know Doomers should read Tyler as saying not "you're wrong" but "you need to find better talking points-- the current ones come off as unrealistic and low status to wider audiences". But, as sometimes happens with him, I thought he'd overestimated his ability to pull off such a 5D social gambit.

Your theory is probably better, though I didn't take Tyler for that much of a Yay Progress writer on other topics-- e.g. on immigration or State Capacity Libertarianism he seemed inclined to shoot for more nuance.


I too would crank the dial; I think the benefits are worth the risks, granting that I'm willing to be more flexible in how I interpret the risks. I can stand a Hansonian-esque idea that even the AI that eats the universe is in some sense us, with a long gradient of "us" running through, say, ems all the way to us continuing as we are now. I wouldn't put myself on their side, but I wonder if some amount of intellectually thoughtful AI-not-kill-everyone-ism is this idea in some form or in an indirect sense: some combination of the benefits being (potentially -extremely-) high and/or the risks, even of extinction, not being entirely unacceptable as long as something remains.


I just don't think "expecting human extinction" summarizes my position well. I expect a lot of change to our descendants over the long run, but don't think "extinction" at all gives the right connotation. Are your ancestors from a million years ago "extinct" because we are now quite different from them?


It's actually worse than this. *Even if there is not just one dial*, the non-AI dial is already a losing battle without some serious progress, due to our current social condition. We as a society have, mostly for the better, moved from true scarcity into a position of precarious comfort -- from millions of years of craving variance simply to survive (a logical play if you're Actually Losing) into some short decades to centuries of terror at the very thought of variance in our lives (again, logical if you are winning but not enormously so) -- and true progress requires taking risks at times.

It is precisely this terror and precarity that is the cause of many of our problems. For instance, we expect much more safety with regard to children than past generations found sane (much less feasible), leading to an ever-increasing portion of life being spent in childcare (both years as a kid and hours per week as an adult). And children's personalities are inherently high-variance in nature... it was a child who said the emperor has no clothes, after all. Between this socially constructed rise in the cost of raising a kid and our increasing discomfort with the idea of variance in our lives, it is no wonder that fertility rates have plummeted in recent decades across much of the first world, even as hours spent at a job have also fallen (which one would ordinarily guess would leave more time to raise a kid).

The recently deceased Ted Kaczynski had long ago identified the problem -- that the Industrial Revolution is upstream of societal changes that have been mostly to the detriment of human connection, and that a portion of these detrimental changes has been a narrowing of the window of acceptable and "sane" thought and behavior. But his proposed solution, to push the dial so far back down that we crave variance once again out of a condition of abject poverty, is abhorrent if thought about for more than a second... technology has reduced early childhood mortality from nearly one out of every two births to less than 1%. Rather, the best way forward is to continue to progress from precarity into a state of abundance, where we no longer care about the effects of the variance around us because we can handle whatever it throws at us.

As it stands, we have a significant problem on our hands. Fertility rates are low enough that, barring some significant upheaval, there will be a major demographic crisis in the coming century, with fewer workers and more elderly people who require costly and time-consuming care... and the ideal upheaval is not one of degrowth and the suffering it brings. AI, especially in its more general forms, has the potential to be the kind of force multiplier that pulls us out of this precarious situation, between its ability to automate and speed up jobs and its potential for use in gerontology and other valuable research (healthspan/longevity in particular is the most likely way we get out of the problem... more healthy, fertile years of life would raise TFR while reducing the burden of end-of-life care).

Through this lens, the rumblings of regulation against strong AI read more like a threat to the future than its salvation. The non-AI dial is largely a lost cause, given the coordination needed to even try to turn it up combined with the pervasive social attitude against doing so... while the AI dial can still be a unilateralist's blessing with the right discoveries. The world of bits is far more amenable than the world of atoms to progress from an individual basement. Let's keep it that way; the future depends on it.


- In "It would [...] be first best to say", "first" seems misplaced.

- "With of that kind of push" is missing "more" as a second word.

- Change "is" to "us" in "would put is in".

- The "Perhaps there is even something in the form of [...]" sentence is hard to parse, though I get the gist of it.


I like the single-dial model. It's not really a causal explanation, but it seems like a very useful intermediate predictive model that naturally sits upstream of other problems like "arguments as soldiers". Those things always seemed a bit too fiddly; this seems robust.

On the causal side, I wonder how it happens. I notice in myself that my emotions spread over connections that make no logical sense - if a friend likes something, I feel more positively toward it, whereas if an enemy likes something, I feel more negatively toward it. (And the other way around, too.) When someone makes an argument involving many things I have strong negative feelings about, it can be difficult to identify the parts I actually agree with; the negative emotion taints all of it. And my negative emotions weren't always as strong as they are now, and I don't recall this sort of problem being as much of an issue for me back then, either. So maybe it has something to do with becoming emotionally invested in an argument? (And maybe, in the general case, there's a connection to social media feeding strong emotions to capture attention...)

I also wonder whether people are aware that they're doing it. Sometimes I've tried to talk to people about things bound up in a dial, and simply gotten a sad, pitying look, as if I am missing something vital that cannot be explained. How much of the one-dialism (looks a bit like mondialism) is unexamined, and how much is people making a private rational decision to do it, the way someone might decide never to negotiate with terrorists? Is it possible to have a private, off-the-record, never-recorded-or-referenced conversation with an intelligent person who's one-dial on some issue, where they privately acknowledge what they're doing and explain why? I've never gotten one, but that may be mostly due to my own limitations and situation.

There may be some other useful lessons to learn from the covid-19 reactions. In the beginning, the eventual lines weren't clear, and people took a while to settle into their final positions, in particular regarding masks and vaccinations. What allowed that to happen? I don't recall it seeming like merely a function of "do more" or "do less". And Trump's support for vaccination is still an outlier that a lot of people on both sides seem to have erased from their minds. It's also interesting to see that more nuance is possible now that the threat is less immediate; it reminds me of descriptions of wartime censorship letting up, and the enforced patriotism being allowed to wind down.


Great read.

Is this about whether you decide to prioritise power/influence over knowledge/questions? There's a time for talk and a time for action - or so the argument would go.

It doesn't sit well with me. But then I am not faced with that decision each day.


Your one dial theory seems related to the convergence of channels of information exchange. The more people converge on something like Twitter as a single global agora, the harder it gets for nuance to compete. Luckily, Twitter's moment seems to have passed, and we have a post-Babel dispersion going on: perhaps multiple ways of seeing the world, allowing more nuance, will prosper.


But even this meta-argument seems to leave little space for the most sensible argument: human beings have a truly terrible track record when it comes to futurism, our ability to forecast even largely mechanical future events is extremely limited, and history is the story of things that didn't happen as much as it is the story of things that did. And right now the AI debate depends on forecasting unfathomably complex future events, with comical confidence from almost everyone involved. I find it absolutely mystifying; "tomorrow will be very much like yesterday" remains the best bet in all of human prognostication.

I can't be the first to point out that AI doomerism and AI utopianism are really two facets of the same thing. And both seem to fundamentally serve the same master - the profound human dissatisfaction with the way things are. But the world is probably going to remain the same mundane and disappointing place it's always been. That is the endowment of modernity.


To the extent you're looking for non-Twitter sources and want to cross-check your understanding of Marc's position, I might recommend https://stratechery.com/2023/an-interview-with-marc-andreessen-about-ai-and-how-you-change-the-world/ (posted online after your post went live)


"We can create common knowledge of what is happening... The alternative is... a cycle where people say words shaped like arguments not intended to hold water, and others point out the water those words are failing to hold..."

❤️❤️❤️

This reminded me of The Crux List. I'm looking for a path to help in such a way.


But people do have nuances in their views about progress. Most people think superweapons should be subject to regulation. They just conceptualize that kind of innovation as different from "progress".


Alas, the EA vs e/acc war has commenced.
