25 Comments

I will never use AI for writing simply because it is a soul-destroying demon.

I love using AI for writing because it is a soul-expanding angel.

Re my tweet: "The part that really does your head in is when you realize that some of the savants _tell everyone around them_, "read XYZ regularly and you will be as good at this as me," but almost no one actually _starts to read XYZ_."

There are a bunch of friends' parents in, without loss of generality, Iowa, who know me mainly as "that lovely college friend of X's who always seemed to know what was going on during COVID" primarily because I read Zvi's roundups and a few other Twitter accounts and repeated the salient points verbatim to their children, who then shared said points with their parents.

No, no amount of disclaiming special insight or telling their kids to read Zvi's roundups themselves could make them do this. I _tried_.

Tl;dr: Thank you, Zvi. You helped keep some of my friends' parents alive.

In all the comparisons I see of various LLMs’ abilities, it seems that the top-performing ones are nearly at parity. This suggests to me that the best LLMs are very nearly commoditized. What if, for most people’s use cases, any one of GPT-4, Gemini 1.5, or Claude 3 is good enough? How do any of these providers build a defensible moat? It seems to me that the companies with the best distribution—Microsoft and Google—will win, assuming the best LLMs remain more or less commoditized.

The Maze thing doesn't replicate btw

I tried a few times on GPT-4 and it didn't get it a single time. It even started writing some Python script to answer the question.

"The Culture series is the exception that proves the rule, humans do not matter there, and Ian Banks ignores this and tells stories about the humans anyway."

The Player of Games was so supremely excellent. Highly recommended.

I mean I literally am him and yes it was a good yarn if you ignore the initial trap. He would never go for it.

Similar to how sci-fi gets one free physical impossibility, every smart protagonist story gets one free stupid decision.

Zvi, you have a number of facts wrong on the adversarial collaboration.

- The skeptic group was not all superforecasters (they also included a couple of domain experts, including an OpenAI researcher - me)

- You and Eliezer are wrong about opinions on the 2450 question not budging; no such before-and-after measurement was ever attempted; this question was asked once on a survey given partway through the study.

- Scott's characterizations are wrong too. He says the skeptics interpreted human-level AI as “it’ll probably be close to human but not quite able to capture everything.” As a member of the skeptic group, I emphatically disagree with this characterization and cannot find any support for it in the report (happy to be corrected if someone finds evidence). GPT-4 is already superhuman in a number of ways, and it's frankly ridiculous for anyone sane to think future AIs won't be superhuman in an even greater number of ways. That AI may not quite match humans on every dimension (an open question) does not imply it will merely match, rather than exceed, humans on the remaining dimensions (wildly unlikely).

- Scott also mischaracterizes the AI power by 2450 question. Here is the text of the question: "By what year, if ever, do you expect to agree with the following statement? 'AI has displaced humans as the primary force that determines what happens in the future. It now has at least as much power relative to humans as humans had relative to other species in 2023.'" Scott characterizes this question as asking when AI will be more powerful than humans, with powerful servants still counting. Totally different! Human beings are not powerful servants to other species today (domesticated cats notwithstanding). In my reading of the question, if AI is aligned under human control, it does not qualify.

- Given the actual text of the question, 2450 honestly feels more defensible to me than 2045. 2450 is not a crazy prediction if you sum up the odds of (a) AI progress slowing, (b) AI alignment succeeding, (c) AI regulation/defenses succeeding, (d) our tech level being knocked back by large wars between destabilized nations, wars with AIs, unsuccessful decapitation events by AIs, or AI-assisted terrorist attacks (e.g., bioweapons) that damage humanity but don't leave an AI in charge afterward. In particular I think the odds of (c) go way, way up in a world where AI power rises. In contrast, I find it hard to believe that AIs in 2045 have as much power over humans as humans have over chimpanzees today, AND that AIs are directing that power.

- Nitpick: A median of medians is very different from a median of means. E.g., suppose you expect a 1/3 chance of doom, 1/3 chance of alignment to human control, and 1/3 chance of fizzle (obviously bad numbers - just illustrating a point). In 2/3 of scenarios, humans maintain control. So your expected value date might be infinite, despite thinking there's a 1/3 chance of loss of control. And taking the median over multiple people predicting infinity changes nothing. Now, in the case of this survey, it did not explicitly ask for medians. It asked "By what year, if ever, do you expect to agree..." I personally interpret this as a median, but it's quite plausible that not every respondent interpreted it this way. So I don't think you can really say that the skeptic group has a median date of 2450. Rather, the median skeptic predicted 2450 on a plausibly ambiguous metric.
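
A toy sketch of that arithmetic, using the same made-up one-third/one-third/one-third numbers as above (mine, not the study's data):

```python
# Toy illustration only: how a "median of medians" can hide a substantial
# probability of loss of control. Numbers are the made-up thirds from above.
import statistics

NEVER = float("inf")  # stand-in for "no year, ever"

# Each hypothetical respondent: 1/3 doom by 2045, 1/3 aligned control (never
# displaced), 1/3 fizzle (never displaced). With 2/3 of their probability mass
# on "never", each individual's median answer to "by what year, if ever?" is never.
individual_answers = [NEVER, NEVER, NEVER, NEVER, NEVER]

print(statistics.median(individual_answers))  # inf -> the headline statistic reads as "never"
print("P(loss of control by 2045) per respondent:", 1 / 3)  # yet this is invisible in that statistic
```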

I'm feeling a bit of Gell-Mann amnesia here, seeing how something I know well is so poorly characterized on your Substack. I worry there's an unconscious desire among Eliezer, Scott, and you to gleefully disparage the skeptic group as obviously dumb (it's fun!), which causes you and others to accidentally misrepresent the study, which gets further amplified when you quote one another. Please do your due diligence and don't automatically assume what Eliezer or Scott says is true.

Edit: I made my own mischaracterizing leap with "gleeful". Sounds like frustration is more apt.

I will update the section now that I have this information - I saw no signs of this conflicting story before this.

Thanks! I assumed good intent, and recognize that fact-checking other trustworthy people by reading a 150-page report is not costless. :)

Oh yeah, I'm definitely not filled with the kind of copious free time it would take to actually read the report.

I have made the modifications - let me know if you think anything there is still wrong or unreasonable; for now I have to run to something. In general I am counting on people speaking up to correct such errors before this point - it is not realistic that I deeply investigate the details versus looking for objections and correcting when errors are spotted, as I did here - but yes, there was the bit about all-superforecasters, where I see how I transposed due to going too fast. Sorry about that.

I think the thing you think is going on - it's not so much any desire for glee as it is, at least for me and I think for others, frustration that others use such encounters as talking points against us repeatedly, and also that we really are constantly faced with these kinds of failures to respond reasonably in contexts where there is no possibility of transmission errors.

I was in the concerned camp, or "AI experts" camp, or whatever you want to call it. Although I wouldn't call myself an AI expert, more a guy who reads stuff on the internet. Thanks for the 2450 clarifications Ted (also, hi).

Re: Eliezer's speculation that there wasn't convergence between the camps because OpenPhil nominated concerned experts whose bland OpenPhil worldview wasn't coherent: this is completely wrong. The experts weren't OpenPhil clones; many agreed more (or at least, about as much) with MIRI. We made plenty of MIRI-style arguments, including that ~all humans would die as a result of ASI unless it cared about them specifically, either due to intentional efforts or a side-effect of using Earth's resources for its own goals. The skeptics rejected them on various grounds (too sci-fi/specific, why would AIs bother with Earth when they could have the rest of the universe, humans haven’t wiped out chimps, etc). I don't remember exactly how in-depth we got on specific extinction mechanisms, but those aren't particularly cruxy imo.

Anyway, as someone who didn't have the OpenPhil institutional view, it annoys me that Eliezer thinks the superforecasters weren't convinced because they didn't hear MIRI's side and not [every other reason communication is hard].

Thanks. This makes a lot of sense all around. Arguments are not going to be convincing, mostly, for people who are chosen for already having strong beliefs.

And as I note, by Bayes, they kind of shouldn't be, via conservation of expected evidence?

I will say that it felt like the skeptics sometimes weren’t tracking recent AI progress. I had one convo that went, “what would convince you AIs were much more capable” where the skeptic said “a longer context window than 1 short conversation” (this was GPT-4 8K iirc) and I said “this already exists with Claude 2”. Their response was some form of “oh…idk”.

I also (if memory serves) discussed robotics with Ted, and think my claim that Boston Dynamics would be quickly surpassed if ML people put effort into robotics holds up better than Ted’s view that robotics is just really hard for ML (https://arxiv.org/abs/2306.02519). I claim we’ve seen ML ~basically handle walking (with bad robot bodies) in less than a year since that paper’s publication. If you see this Ted, curious if your view on this is changing.

Re: prompt engineering: I've figured out a way to get Claude 2.1 to give me deterministic output. Claude was made in its creator's image, so if you simply trick Claude into thinking it's a programming task, it's much more likely to take you seriously.

Doesn't work:

System prompt: "If you can't find the answer in the context, return the string "foo!""

Does work:

System prompt: "If you can't find the answer in the context, return the string "foo!" in JSON format."
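
For concreteness, here is a minimal sketch of how one might wire up the "does work" variant with the Anthropic Python SDK; the model name, temperature setting, and placeholder user content are my own assumptions, not anything from the comment above:

```python
# Minimal sketch (not the commenter's actual setup) using the Anthropic Python
# SDK's Messages API; requires ANTHROPIC_API_KEY in the environment.
import anthropic

client = anthropic.Anthropic()

SYSTEM = (
    "If you can't find the answer in the context, "
    'return the string "foo!" in JSON format.'
)

message = client.messages.create(
    model="claude-2.1",      # model named in the comment
    max_tokens=200,
    temperature=0,           # assumption: temperature 0 to push further toward determinism
    system=SYSTEM,           # the "does work" system prompt from above
    messages=[{
        "role": "user",
        "content": "Context: <your context here>\nQuestion: <your question here>",
    }],
)

print(message.content[0].text)  # expect something like {"answer": "foo!"} when the answer is absent
```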

Use a text expander utility to store your prompts. It makes "The core advice is to give the AI a persona and an audience and an output format" much less tedious for prompts you'll use more than once or twice.

I use a tool called espanso. I can type a few characters and it's immediately, in place, with no fuss, replaced with my stored prompt.

For ChatGPT, I don't even use Custom Instructions because what I want my custom instructions to be is different depending upon the type of conversation I want to have.
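
As a sketch of what the espanso setup can look like (the file path, trigger name, and prompt text are made up by me, not the commenter's actual config; on Linux the match files typically live under ~/.config/espanso/match/):

```yaml
# ~/.config/espanso/match/prompts.yml - hypothetical example match file
matches:
  - trigger: ":editor"
    replace: |
      You are an experienced technical editor. Your audience is busy engineers.
      Rewrite the following text for clarity and concision, and return the result
      as a short list of changes followed by the revised text.
```

Typing ":editor" anywhere then expands in place to the stored prompt.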

“I mean yes, all those facts are still true, and will still be true since they are about the past, but will they be useful?”

Surely it’s not the facts per se but the narratives that they make up, and what those teach us about human nature, necessity, tragedy, etc., that is of enduring value. And even a reaction along the lines of ‘Those narratives are wrong’/‘There are no lessons of history’/‘narrative itself is a pernicious illusion’ would indicate that the student has learned rather a lot about critical thinking…

Re “will an AI malfunction cause a catastrophic accident?” and the related question “Will labs pause in response”, one can’t help recalling the recent (non-AI) lab accident that ended up killing several million people and costing an unknowable dollar value worldwide… and then recalling the “response”… and then, I hate to say it, but then *despairing*…

"Here is a seemingly useful script to dump a github repo into a file, so you can paste it into Claude or Gemini-1.5, which can now likely fit it all into their context window, so you can then do whatever you like."

That only includes Python files in the generated .txt ... not very useful unless everything important in a repo is in a Python file.
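
A rough sketch of a more general variant (the extension list, skip list, and output filename are my own arbitrary choices, not the linked script's behavior):

```python
#!/usr/bin/env python3
# Hypothetical generalization of the repo-dump idea: walk a checkout and
# concatenate all text-like files (not just .py) into one file you can paste
# into a long-context model.
import sys
from pathlib import Path

TEXT_EXTENSIONS = {".py", ".md", ".txt", ".toml", ".yaml", ".yml", ".json",
                   ".js", ".ts", ".rs", ".go", ".c", ".h", ".cpp", ".sh"}
SKIP_DIRS = {".git", "node_modules", "__pycache__", ".venv"}

def dump_repo(repo_root: Path, out_path: Path) -> None:
    with out_path.open("w", encoding="utf-8") as out:
        for path in sorted(repo_root.rglob("*")):
            if not path.is_file():
                continue
            if any(part in SKIP_DIRS for part in path.parts):
                continue
            if path.suffix.lower() not in TEXT_EXTENSIONS:
                continue
            # Write a simple header so the model can tell files apart.
            out.write(f"\n\n===== {path.relative_to(repo_root)} =====\n")
            out.write(path.read_text(encoding="utf-8", errors="replace"))

if __name__ == "__main__":
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    dump_repo(root, Path("repo_dump.txt"))
```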

I see that Figgs AI now has a doctor character. Coincidentally, I had a hospital appointment earlier this week. Now, of course I wouldn't trust medical advice from an LLM, but an experiment suggests itself ... well, congratulations Figgs, you ordered the same blood test a real doctor did. Thanks to the wonders of hospital automation I could (if I felt like it) download the actual blood test results and feed them into the LLM. (I doubt Figgs is HIPAA compliant, but as long as I'm the patient and don't care about my own medical privacy I presumably could consent to sharing that data).

Mileage varies. I would trust medical advice from an LLM over that of a human doctor the majority of the time, based on four decades of personal experience with human doctors.

I think the science fiction authors mentioned are socially super plugged in to both the SFF fandom/con scene and publishing, so they tend to share the views of that scene. Concretely, these are similar social circles to those of YA authors.

Saying “AGI is coming” is a tech bro opinion and you’d be sticking your neck out. Saying “the real problem is capitalism” is very safe.
