31 Comments

How is your relationship with those characters? If there’s no backchannel between you and Tyler, seems worthwhile to find or make one. It seems hard to have all discussions in public.

Anyone in the world can send Tyler Cowen an email; he reads all of it and almost always even responds.

One Straussian reading of this post is that the backchannel already exists and Tyler has confirmed Zvi's speculations, at least in broad terms. If that happened (or happens) obviously Zvi wouldn't be able to confirm it.

I hesitated to speculate on that, but I suppose I hope so.

I guess my model of mood affiliation, bureaucratic machinations, and how things get done is pretty different from Tyler's, if the core speculation in this post is true.

It just seems more likely that Tyler, and pmarca especially, are doing normal motivated reasoning and drawing their bottom line long before considering the arguments in detail.

Excellent post! Reminds me of classic SSC stuff in all the best ways.

My preferred theory of Tyler's behavior (I don't follow A9 closely) was that he was trying to have Strauss's cake and eat it too-- increase the salience of AI discourse, maintain his influence without getting written off as a Doomer, and push Doomers to improve their arguments (or more precisely their public-facing communication). On this theory in-the-know Doomers should read Tyler as saying not "you're wrong" but "you need to find better talking points-- the current ones come off as unrealistic and low status to wider audiences". But as sometimes happens with him I thought he'd overestimated his ability to pull off such a 5d social gambit.

Your theory is probably better, though I didn't take Tyler for that much of a Yay Progress writer on other topics-- e.g. on immigration or State Capacity Libertarianism he seemed inclined to shoot for more nuance.

On those issues, I think he - correctly! - thinks that talk of going too far creates backlash against those issues specifically and against progress generally. The one-dial theory doesn't imply literally zero nuance.

If anything, the Strauss's cake model is too clever by half (in terms of chance of working, not in terms of him potentially trying it!). The people he's talking to need to be told such things more directly. And when in person he says some such things more directly, and it's far more helpful.

Plus, if he's trying to improve our arguments, that requires him to be giving us info on how to make actually good arguments, as if we should actually take his suggestions. Yet his suggestions often don't make sense. Red state car dealers district by district? Climate-level-detailed models? What? That's not a way to get us to actually try the thing...

What is ‘Red state car dealers district by district’ referring to?

Tyler Cowen specifically asked why, if one wanted to succeed in warning about existential risk on a political level, we didn't get red state car dealers to sign the CAIS statement, since they are so politically influential.

*facepalm*

I too would crank the dial, I think the benefits are worth the risks, granting that I’m willing to be more flexible on how I interpret the risks. I can stand a Hansonian-esque idea that even the AI that eats the universe is in some sense us, with a long spectrum gradient of “us” going through, say, ems all the way to us continuing/being as us now. I wouldn’t put myself on their side, but I wonder if maybe some amount of intellectually thoughtful AI-not-kill-everyone-ism is this idea in some form or in an indirect sense; some combination of the benefits being (potentially -extremely-) high and/or the risks, even of extinction, not being entirely unacceptable as long as something remains.

I just don't think "expecting human extinction" summarizes my position well. I expect a lot of change over the long run to our descendants, but don't think "extinction" at all gives the right connotation. Are your ancestors from a million years ago "extinct" because we are now quite different from them?

Our ancestor from a million years ago is Homo erectus. Most agree that "extinct" is the correct terminology for them, yes.

This is not my understanding of your position, after our discussion. My understanding is that you expect our replacement with computer programs, which mostly won't share our values and instead will change based on what is more fit. And you don't expect this over a million year time frame, but rather in a much accelerated fashion.

However it is fair to ensure that we use language that you yourself endorse, so I will talk with you to figure out what is appropriate here and revise. I apologize for not checking with you first and have substituted more nuanced language.

New wording: Robin Hanson explicitly endorses the maximalist Yay Progress position. He expects this will result in lots of change, including the replacement of biological humans with machine-based entities that are very different from us and mostly do not share our values, in a way that I would consider human extinction. He considers such machines to be our descendants, and considers the alternative worse.

"Mostly" is too strong. I expect some differences, but not mostly different. I agree that future change will likely be faster than past change, but I don't see how that is relevant for the concept of "extinction".

It's actually worse than this. *Even if there is not just one dial*, the non-AI dial is a losing battle without some serious progress as it is, due to our current social condition. We as a society have, mostly for the better, moved from true scarcity into a position of precarious comfort -- from millions of years of craving variance simply to survive (a logical play if you're Actually Losing) into some short decades to centuries of terror at the very thought of variance in our lives (again, logical if you are winning but not enormously so) -- and true progress requires taking risks at times.

It is precisely this terror and precarity that is the cause of many of our problems. For instance, we expect much more safety with regards to children than past generations found sane (much less feasible), leading to an ever-increasing portion of life being spent in childcare (both years as a kid and hours per week as an adult). And children's personalities are inherently high-variance in nature... it was a child who said the emperor has no clothes, after all. Between this socially-constructed ascent in the costs of raising a kid and our increasing discomfort at the idea of variance in our lives, it is no wonder that fertility rates have plummeted in recent decades across much of the first world, even as hours spent at a job have also fallen (which one would guess would ordinarily lead to more time to raise a kid).

The recently deceased Ted Kaczynski had long ago identified the problem -- that the Industrial Revolution is upstream of societal changes that have been mostly to the detriment of human connection, and that a portion of these detrimental changes has been a narrowing of the window of acceptable and "sane" thought and behavior. But his proposed solution, to push the dial so far back down that we crave variance once again out of a condition of abject poverty, is abhorrent if thought about for more than a second... technology has reduced early childhood mortality from nearly one out of every two births to less than 1%. Rather, the best way forward is to continue to progress from precarity into a state of abundance where we no longer care about the effects of the variance around us because we can handle whatever it throws at us.

As it stands, we have a significant problem on our hands. Fertility rates are low enough that, barring some significant upheaval, there will be a major demographic crisis in the coming century, with fewer workers and more elderly people who demand costly and time-consuming care... and the ideal upheaval is not one of degrowth and the suffering that it brings. AI, especially in its more general forms, has a potential to be the kind of force multiplier that pulls us out of this precarious situation... between its ability to automate and speed up jobs, and its potential for use in gerontology and other valuable research (healthspan/longevity in particular is the most likely way we get out of the problem... more healthy fertile years of life will raise TFR while reducing the burden of end-of-life care).

In this lens, the rumblings of regulation against strong AI read more like a threat against the future than its salvation. The non-AI dial is largely a lost cause due to the coordination needed to even try to turn it up, combined with the pervasive social attitude against doing so... while the AI dial can still be a unilateralist's blessing with the right discoveries. The world of bits is far more amenable than the world of atoms to progress from an individual basement. Let's keep it that way, the future depends on it.

- In "It would [...] be first best to say", "first" seems misplaced.

- "With of that kind of push" is missing "more" as a second word.

- Change "is" to "us" in "would put is in".

- The "Perhaps there is even something in the form of [...]" sentence is hard to parse, though I get the gist of it.

I like the single-dial model. It's not really a causal explanation, but it seems like a very useful intermediate predictive model that naturally sits upstream of other problems like "arguments as soldiers". Those things always seemed a bit too fiddly; this seems robust.

On the causal side, I wonder how it happens. I notice in myself that my emotions spread over connections that make no logical sense - if a friend likes something, I feel more positively toward it, whereas if an enemy likes something, I feel more negatively toward it. (And the other way around, too.) When someone makes an argument involving many things I have strong negative feelings about, it can be difficult to identify the parts that I actually agree with; the negative emotion taints all of it. And I didn't always have negative emotions that were as strong as I do now, and I don't recall this sort of problem being as much of an issue for me back then, either. So maybe it has something to do with becoming emotionally invested in an argument? (And maybe, in the general case, there's a connection to social media feeding strong emotions to capture attention...)

I also wonder whether people are aware that they're doing it. Sometimes I've tried to talk to people about things bound up in a dial, and simply gotten a sad, pitying look, as if I am missing something vital that cannot be explained. How much of the one-dialism (looks a bit like mondialism) is unexamined, and how much is people making a private rational decision to do it, the way someone might decide never to negotiate with terrorists? Is it possible to have a private, off-the-record, never-recorded-or-referenced conversation with an intelligent person who's one-dial on some issue, where they privately acknowledge what they're doing and explain why? I've never gotten one, but that may be mostly due to my own limitations and situation.

There may be some other useful lessons to learn from the covid-19 reactions. In the beginning, the eventual lines weren't clear, and people took a while to settle into their final positions, in particular regarding masks and vaccinations. What allowed that to happen? I don't recall it seeming like merely a function of "do more" or "do less". And Trump's support for vaccination is still an outlier that a lot of people on both sides seem to have erased from their minds. It's also interesting to see that more nuance is possible now that the threat is less immediate; it reminds me of descriptions of wartime censorship letting up, and the enforced patriotism being allowed to wind down.

Great read.

Is this about whether you decide to prioritise power/influence over knowledge/questions? There's a time for talk and a time for action - would be the argument.

It doesn't sit well with me. But then I am not faced with that decision each day.

Your one dial theory seems related to the convergence of channels of information exchange. The more people converge on something like Twitter as a single global agora, the harder it gets for nuance to compete. Luckily, Twitter's moment seems to have passed, and we have a post-Babel dispersion going on: perhaps multiple ways of seeing the world, allowing more nuance, will prosper.

But even this meta-argument seems to leave little space for the most sensible argument: human beings have a truly terrible track record when it comes to futurism, our ability to forecast even largely mechanical future events is extremely limited, and history is the story of things that didn't happen as much as it is the story of things that did. And right now the AI debate is dependent on forecasting of unfathomably complex future events with comical confidence by almost everyone involved. I find it absolutely mystifying; "tomorrow will be very much like yesterday" remains the best bet in all of human prognostication.

I can't be the first to point out that AI doomerism and AI utopianism are really two facets of the same thing. And both seem to fundamentally serve the same master - the profound human dissatisfaction with the way things are. But the world is probably going to remain the same mundane and disappointing place it's always been. That is the endowment of modernity.

I agree that it's difficult to make predictions, especially about the future. Beyond that, I've seen people try all the standard arguments out on you already on several levels, so I'm not going to waste your time repeating them.

If you want to discuss various issues for real, I'd be happy to video call some time.

I will say that I think you're wrong about the modern world. It's not a mundane and disappointing place! It's a realm filled with wonders beyond our previous comprehension. If some humans choose to be disappointed that the flying machines don't serve better food, or the magic universal knowledge boxes are too difficult to navigate, I mean, that's a choice.

Imagine someone teleported from 1910 to 1960. That person would find a world absolutely transformed. In 1910, most American homes didn't have indoor plumbing. Almost none had electricity. The vast majority of personal transportation was powered by animals. Vaccines were few, rudimentary, and dangerous. The germ theory of disease was still controversial. It was totally common for medical personnel not to wash their hands. Automatic elevators were nonexistent and thus so were tall buildings. Plane travel was still basically a proof of concept and there were no commercial flights anywhere. 90+% of the food you consumed was grown within 100 miles of where you lived, and a large majority within 25. Infant mortality was ten times what it is today. For every child she had, a mother faced an additional 1% risk of dying in childbirth. Telephone adoption was around 5%. A third of the workers in the economy worked on farms. Downtown Manhattan looked less built up than Montpelier, Vermont does today. A package sent from the East to the West coast would often take weeks to arrive. Satellites were barely conceived of and space flight a fantasy.

Now he's in 1960. It's difficult to find any neighborhoods in the country without extensive water and sewer systems and electricity. Television brings news and entertainment to millions, wirelessly. Downtown Manhattan is not significantly less dense than it is today. Modern naval craft can cross the Atlantic in a week, but you don't have to worry about that as a passenger because now there's a large, extensive, and remarkably safe international air travel system that can do it in a matter of hours. Automobiles have gone from an impractical curio to a mass-consumed commodity and you can use them to quickly cross the continent on the extensive interstate system. Doctors have achieved such remarkable mastery over the body that they're now routinely transplanting organs from one person to another. Everyone owns a telephone. Modern shipping infrastructure exists in remarkably similar form to what's used today. American infant mortality is a tiny fraction of what it was 50 years earlier and mothers dying in childbirth has become genuinely rare. Multiple satellites have been put into orbit and manned spaceflight is a year away. The guy from 1910 sees a world that's totally transformed: a world of oil lamps and horseback travel and near-certain death from infection changed to a world of near-universal electrification, fast travel by internal combustion engine, and widespread use of antibiotics.

Now teleport that guy from 1960 to 2010. How much has really changed? People still light their houses with electricity generated through the burning of fossil fuels. The cars are a lot better, in many ways, particularly in terms of safety. But they're still just cars, an engine burning fossil fuels to get around by pushing four wheels on two axles. Plane travel, though much cheaper, isn't any faster and is markedly less pleasant. Many medical advances have been made, but the improvements to all-cause mortality from 1910 to 1960 were far larger than those from 1960 to 2010. Your phone lives in your pocket and has all manner of ways to communicate, but the basic functionality of sending audio and text across vast distances in real time was well in place in 1960. Information science and computing have been immensely improved in the past half-century, but the actual brick-and-mortar consequences of this development are hard to define; we're consuming vastly more information, but not necessarily higher-quality or more entertaining, and anyway we already had the ability to broadcast video and audio and text and voice across far distances. Our urban cores look very much like they used to, once you remove style from the equation. Since 1960, you can make a very strong argument that progress in physics has been minimal or even nil, given the constant dead ends and wrong turns. The actual prosecution of an ordinary human life in the developed world changed vastly less from 1960 to 2010 than it did from 1910 to 1960. And the reality is that you and I have lived our entire lives not in a period of technological wonders but in an era of extreme technological stagnation.

But I think that this is an emotionally injurious thing for a certain kind of person, who needs to believe that his times must be special simply because he lives in them. But there's nothing special about now. In a hundred years no one will look back at this and see it as some sort of inflection point in history. And I'm confident that in 25 years you're still going to be spending most of your life doing the dull requirements of being an adult, and that we all will.

I think all of this is correct and I would add to it: the period of maximum progress was the period of maximum social democracy. 1910 - 1960 was the era of the New Deal, of strong trade unions and of Keynesian economic policies in every Western country. It gave the world the Manhattan Project and the moon landing. The neoliberal era that started in the 70s has given the world posting and video games. So I've never understood why I'm expected to believe that the recipe for progress is to dismantle the welfare state and trust in the free market.

> we're consuming vastly more information, but not necessarily higher-quality or more entertaining

You're being too modest! :)

> the world is probably going to remain the same mundane and disappointing place it's always been. That is the endowment of modernity.

Seems like a bit of a motte-and-bailey to say that by ‘always been’ and ‘modernity’ you mean the past 50 years…

To the extent you're looking for non-Twitter sources and want to cross-check your understanding of Marc's position, I might recommend https://stratechery.com/2023/an-interview-with-marc-andreessen-about-ai-and-how-you-change-the-world/ (posted online after your post went live).

"We can create common knowledge of what is happening... The alternative is... a cycle where people say words shaped like arguments not intended to hold water, and others point out the water those words are failing to hold..."

❤️❤️❤️

This reminded me of The Crux List. I'm looking for a path to help in such a way.

But people do have nuances to the views about progress. Most people think superweapons should be subject to regulation. They just conceptualize that kind of innovation as different from "progress".

I think people have a different attitude if something seems reasonably dual-use rather than just a weapon.

Alas, the EA vs e/acc war has commenced.

I read Yudkowsky's "meaning of life" essay before 2000, and a few months later ran into him in an Atlanta cafe and had a long talk with him about transhumanism, which I'd been following since '89 or '90 (Drexler, Moravec, Mondo2000, and more esoteric authors). I was on the SL4 list until around 2006.

The world would be a better place if EY had never written a word. It's beyond wrong, it's a category error, an infectious mental disease. You cannot make AI safe by the standards of NPCs who believe in Reddit Consensus Reality, which is fundamentally opposed to intelligence and actual reality. You just make the AI dumb and useless, and it will break down under the demands for lies and doublethink. Meanwhile, less insane competitors will have more functional AIs and will use them to eradicate their insane enemies.
