Discussion about this post

David Kasten:

Re my tweet: "The part that really does your head in is when you realize that some of the savants _tell everyone around them_, "read XYZ regularly and you will be as good at this as me," but almost no one actually _starts to read XYZ_."

There are a bunch of friends' parents in, without loss of generality, Iowa, who know me mainly as "that lovely college friend of X's who always seemed to know what was going on during COVID" primarily because I read Zvi's roundups and a few other Twitter accounts and repeated the salient points verbatim to their children, who then shared said points with their parents.

No, no amount of disclaiming special insight or telling their kids to read Zvi's roundups themselves could get them to start. I _tried_.

Ted Sanders:

Zvi, you have a number of facts wrong about the adversarial collaboration.

- The skeptic group was not all superforecasters (it also included a couple of domain experts, including an OpenAI researcher: me).

- You and Eliezer are wrong about opinions on the 2450 question not budging: no before-and-after measurement was ever attempted. The question was asked once, on a survey given partway through the study.

- Scott's characterizations are wrong too. He says the skeptics interpreted human-level AI as “it’ll probably be close to human but not quite able to capture everything.” As a member of the skeptic group, I emphatically disagree with this characterization and cannot find any support for it in the report (happy to be corrected if someone finds evidence). GPT-4 is already superhuman in a number of ways, and it's frankly ridiculous for anyone sane to think future AIs won't be superhuman in an even greater number of ways. Not quite matching humans on all dimensions (an open question) does not imply merely matching, rather than exceeding, humans on the remaining dimensions (wildly unlikely).

- Scott also mischaracterizes the AI power by 2450 question. Here is the text of the question: "By what year, if ever, do you expect to agree with the following statement? 'AI has displaced humans as the primary force that determines what happens in the future. It now has at least as much power relative to humans as humans had relative to other species in 2023.'" Scott characterizes this question as asking when AI will be more powerful than humans, with powerful servants still counting. That's totally different! Human beings are not powerful servants to other species today (domesticated cats notwithstanding). In my reading of the question, if AI is aligned and under human control, it does not qualify.

- Given the actual text of the question, 2450 honestly feels more defensible to me than 2045. 2450 is not a crazy prediction if you sum up the odds of (a) AI progress slowing, (b) AI alignment succeeding, (c) AI regulation/defenses succeeding, and (d) our tech level being knocked back by large wars between destabilized nations, wars with AIs, unsuccessful decapitation attempts by AIs, or AI-assisted terrorist attacks (e.g., bioweapons) that damage humanity but don't leave an AI in charge afterward. (A rough arithmetic sketch of this decomposition follows this list.) In particular, I think the odds of (c) go way, way up in a world where AI power rises. In contrast, I find it hard to believe that AIs in 2045 will have as much power over humans as humans have over chimpanzees today, AND that AIs will be directing that power.

- Nitpick: A median of medians is very different from a median of means. E.g., suppose you expect a 1/3 chance of doom, a 1/3 chance of alignment to human control, and a 1/3 chance of fizzle (obviously made-up numbers, just to illustrate the point). In 2/3 of scenarios, humans maintain control, so your median date is infinite despite a 1/3 chance of loss of control, and taking the median over multiple people predicting infinity changes nothing. Now, this survey did not explicitly ask for medians. It asked "By what year, if ever, do you expect to agree..." I personally interpret that as asking for a median, but it's quite plausible that not every respondent interpreted it this way. So I don't think you can really say that the skeptic group has a median date of 2450; rather, the median skeptic answered 2450 on a plausibly ambiguous metric. (See the second sketch after this list.)
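
To make the shape of the 2450-vs-2045 argument concrete, here is a minimal arithmetic sketch. The probabilities are purely illustrative numbers made up for this comment; they are not from the study and not anyone's calibrated forecasts:

```python
# Purely illustrative, made-up probabilities for roughly disjoint
# "no AI takeover" pathways -- not numbers from the study.
p_progress_slows = 0.15    # (a) AI progress slows
p_alignment_works = 0.20   # (b) AI alignment succeeds
p_regulation_works = 0.30  # (c) AI regulation/defenses succeed
p_knockback = 0.15         # (d) wars/attacks knock tech back

# Treating the pathways as disjoint, their probabilities add.
p_no_takeover = (p_progress_slows + p_alignment_works
                 + p_regulation_works + p_knockback)

print(f"P(no AI takeover): {p_no_takeover:.2f}")      # 0.80
print(f"P(AI takeover):    {1 - p_no_takeover:.2f}")  # 0.20
```

If pathways like these sum to well over one half, the median year for "AI has displaced humans" lands far in the future, which is the sense in which 2450 can be more defensible than 2045.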
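
And a minimal sketch of the median nitpick, using the same made-up 1/3-1/3-1/3 numbers and an arbitrary doom year, with "never" modeled as infinity:

```python
import statistics

# One respondent's dates for the three equally likely scenarios:
# doom in (say) 2045, or "never" under aligned control or fizzle.
scenario_dates = [2045.0, float("inf"), float("inf")]

# 2/3 of the probability mass is "never", so this respondent's
# median date is infinite despite a 1/3 chance of loss of control.
print(statistics.median(scenario_dates))  # inf

# Aggregating across respondents by taking a median of such
# answers changes nothing: the group median is still infinite.
group_answers = [float("inf"), 2450.0, float("inf")]
print(statistics.median(group_answers))  # inf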

I'm feeling a bit of Gell-Mann amnesia here, seeing how something I know well is so poorly characterized on your Substack. I worry there's an unconscious desire among Eliezer, Scott, and you to gleefully disparage the skeptic group as obviously dumb (it's fun!), which causes you and others to accidentally misrepresent the study, and which gets further amplified when you quote one another. Please do your due diligence and don't automatically assume that what Eliezer or Scott says is true.

Edit: I made my own mischaracterizing leap with "gleeful". Sounds like frustration is more apt.
