
Regarding my Hinge tweet:

> He’s not single, ladies, but he’s polyamorous, so it’s still a go.

In case this is misunderstood, I am not in fact polyamorous, but I’m currently dating to meet my future wife, ladies. So if you think we’d be a good match, send me a message. :)

About me: https://jacquesthibodeau.com/lets-go-on-a-date/

author

To be clear, I meant Yudkowsky, but got sloppy with the wording; I'll fix. My apologies :)


The AI risk community is not *completely* above amateur psychoanalysis and insults. In fact, I'd argue the AI risk community does not *want* to be above amateur psychoanalysis and insults. I agree, it's off-putting and mostly counterproductive.

But claiming "we are the nice ones and the outgroup is deontologically awful people", however true it is right now, won't be true forever; all of this is going to be far too mainstream for any scenario of that sort to hold true.

Serious but nice talk is helpful when communicating with smart people, or good people, or high status people. People like the typical anon troll have none of the aforementioned qualities, and they respond to a different type of talk. (Remember the 45th US president's support base.)

I do not want or need any help from those people *right now*. But they are also people, with some values and desires; some of them make interesting stuff, and their inputs should and will somehow count in this whole thing that will unfold.

author

We're a large and growing group of humans without an exclusionary mechanism, so no, not going to be beyond such things. It's still an advantage to keep the edge on this as much as we can, as long as we can, especially where it's most useful, and also to point this out, I'd think. I do hope we can continue to do relatively well.


What Do We Call Those Unworried People? I’m still voting for Faithers.

I think the claim that chatbots will take over the world requires further evidence rather than the reverse. Plenty of reasons beyond faith why that's implausible. But sure, government will totally productively help save us from this existential danger.


Any specific kind of evidence that you would find convincing?

Or do you honestly and sincerely think that the worry is exactly that "chatbots will take over the world"?

Why even bother mentioning your skepticism of government positively helping mitigate or prevent these dangers if you don't think the dangers are even plausible? (I'm skeptical too, but then I'm sincerely worried, and I'm not aware of any other _better_ options than some kind of government enforcement of some kind of regulation.)


Sure, I appreciate the genuine engagement.

> Or do you honestly and sincerely think that the worry is exactly that "chatbots will take over the world"?

No, I view that statement as just as mocking as referring to people who don't buy it as "Faithers", as if it's some kind of X Denier. Also, it points to the biggest gap in the evidence that there's something to worry about: we have fantastically capable LLMs, but I have seen no evidence that it's leading to AGI. I think a number of accomplished people have made these points much better than I can, so I will appeal to their authority with these two links:

https://sarahconstantin.substack.com/p/why-i-am-not-an-ai-doomer

https://www.richardhanania.com/p/pinker-on-alignment-and-intelligence

I don't see how any amount of development on LLMs would lead to X risk, and I don't see how big breakthroughs or "fast takeoff" could happen considering that "the real world" is in the loop.

> Why even bother mentioning your skepticism of government positively helping mitigate or prevent these dangers

I mentioned that because I feel like Government can't help being a negative intervention in most scenarios. It's often something ineffectual but maddening (cookie banners) or outright harmful to the stated goals (many domestic spending / social policies of the past 20 years).

This also touches on the second area of reasons I think it's unlikely to carry extinction risk -- humans are adaptable. There are many incredibly complicated life critical systems that nonetheless operate smoothly. The danger of computerized systems putting people in mortal danger is not terribly new, so I don't see why it would be impossible to adapt to a future where you may have inbound communications from intelligent agents instead of humans.

IMO, by far the most likely thing is that there is a classic S curve in capabilities (e.g. how much better can LLMs get past GPT-4?), and after 10-20 years we have an enormous number of valuable new tools and capabilities, but humans are still in charge of setting goals and driving things, because it's still a tool.


> Sure, I appreciate the genuine engagement.

I as well!

> No, I view that statement as equivalently mocking as referring to people who don't buy it as "Faithers", as if it's some kind of X Denier.

Ahh

I can understand the 'symmetry' of 'Faither' compared to 'Doomer', but I'm pretty indifferent about this particular 'sub-project'. I think 'non-Doomer' would work just as well.

> Also, it points to the biggest gap in the evidence that there's something to worry about: we have fantastically capable LLMs, but I have seen no evidence that it's leading to AGI.

Claiming that the "fantastically capable LLMs" are "no evidence" of future AGI(s) seems like a crux. I would guess that, practically, nothing short of AGI, and an inescapably obvious instance thereof, would suffice if you're not exaggerating.

Also: https://thezvi.substack.com/p/law-of-no-evidence

> I don't see how any amount of development on LLMs would lead to X risk, and I don't see how big breakthroughs or "fast takeoff" could happen considering that "the real world" is in the loop.

*Any* amount won't lead to an x-risk? *No* big breakthroughs can happen – at all? Again, if you're not exaggerating, then I'm not sure *any* arguments could sway you.

I did read Sarah's post, but I disagree, not only with her but with many 'Doomers' too, that the *current* AIs (i.e. LLMs) are definitely missing necessary "capacities". It's certainly possible that that's the case, but I also think it's possible that we've already exceeded the (or a) 'danger threshold'.

But, even assuming Sarah is correct, it sure seems to me that other people are (effectively) actively searching for any possibly missing capacities and are explicitly aiming to give them to AIs.

One aspect about which I'm more worried than Zvi, or even Eliezer, is whether some feasible amount and kind of 'scaffolding' (e.g. via 'manual' human intervention), could provide exactly whatever necessary capacities might be missing for even existing AIs to be dangerous, possibly even existentially.

I think "fast takeoff" is a distraction, if by that you're referring to AIs self-improving. 'Takeoff', of any kind, seems like it (strictly) only increases the probability of doom. I think we're doomed even ignoring that, i.e. we're doomed even in the worlds with 'slow takeoffs' or maybe (tho unlikely) in worlds where our capabilities don't increase from their current level(s).

> > Why even bother mentioning your skepticism of government positively helping mitigate or prevent these dangers

>

> I mentioned that because I feel like Government can't help being a negative intervention in most scenarios.

I agree! But, as I commented before, I'm not aware of any other _better_ options. This doesn't seem like a disagreement.

> This also touches on the second area of reasons I think it's unlikely to carry extinction risk -- humans are adaptable. There are many incredibly complicated life critical systems that nonetheless operate smoothly. The danger of computerized systems putting people in mortal danger is not terribly new, so I don't see why it would be impossible to adapt to a future where you may have inbound communications from intelligent agents instead of humans.

>

> IMO, by far the most likely thing is that there is a classic S curve in capabilities (e.g. how much better can LLMs get past GPT-4?) ...

It's not *impossible* for there to be some way to adapt to a future with other intelligent agents.

And it *is* possible that the capabilities of LLMs and other AIs will stall right around those of humans.

But both of those possibilities seem very unlikely relative to all of the other possibilities. In every (?) domain in which AIs have reached human level, they've subsequently surpassed us too. It seems 'suspiciously convenient' to think that 'human level' is some kind of 'natural threshold'. There *are* some reasons why that might be the case, but even 'fixed constant' differences (versus, e.g., 'exponential' differences) could be decisive if humans and AIs compete, let alone engage in more pointed conflicts.

In a very real sense, LLMs are *already* more capable than humans. They might not output text as well written as the very best human writers, but they generate it much, much faster. They also seem to be *much* more flexible, and more knowledgeable, than any one human. There are lots of ways for systems to be dangerous even if any single instance isn't 'more intelligent' than a single human. Current LLMs are somewhat competitive with humans already for some tasks, but they're also much faster, much cheaper (in non-financial senses too), and can be copied/duplicated much more easily as well. The 'danger manifold' of AIs is different from that of humans. We have 'general intelligence', plus social organization, groups/organizations/institutions, and 'culture' more broadly; but we also can't be cognitively 'expanded' or cognitively copied, and we'll probably remain overall more expensive to 'utilize' than most individual instances of AI.

> ... and after 10-20 years we have an enormous number of valuable new tools and capabilities, but humans are still in charge of setting goals and driving things, because it's still a tool.

This would be _nice_, but that's not a reason to think it'll be true.

Intelligent tools are very likely to be dangerous: https://gwern.net/tool-ai


> I wonder how much ‘we could do X with AI’ turns into ‘well actually we can do X without AI.’

I think one of the big benefits of ChatGPT/Bard is that it'll be way cheaper and easier to build quick little solutions. Lots of things that are too expensive to build right now for a small audience, or even for a single person, become viable.

Yes in theory those could all be done before, but if they're radically cheaper, that's still a big change.
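As a concrete illustration (mine, not the commenter's), here is a minimal sketch of the kind of 'quick little solution' that becomes cheap to build, assuming the OpenAI Python SDK (v1-style client); the task, model name, and prompt are all placeholders:

```python
# Sketch: a tiny email-tagging tool that previously would have needed a
# custom-trained classifier. Assumes the OpenAI Python SDK (>= 1.0) and
# an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def tag_email(body: str) -> str:
    """Ask the model to label an email as 'billing', 'bug', or 'other'."""
    resp = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Reply with exactly one word: billing, bug, or other."},
            {"role": "user", "content": body},
        ],
    )
    return resp.choices[0].message.content.strip().lower()

if __name__ == "__main__":
    print(tag_email("My invoice was charged twice this month."))
```

A few dozen lines like this stand in for what used to be a bespoke ML project, which is the 'radically cheaper' point above.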


Imagine if all these AI regulations came into existence 10 years ago. By how much would it have delayed the appearance of GPT-4? A year? A decade? 30 years? It seems to me like we're sacrificing the future utility of GPT-5 for the sake of avoiding a hypothetical scenario that probably would not have happened in the next few decades anyway, even without any regulation.


Well, sure, if your expectations are very different, the expected value of various 'interventions' should be too.

I think Zvi has been very clear that it would be a tragedy if we (somehow) failed to capture any, or even any significant fraction, of the future utility of 'GPT-4+' (or even GPT-4 and earlier).

But it also seems pretty obvious that whatever utility we have to give up to have even some reasonable chance of surviving AI is a cost we should be willing to pay.


Zvi, are you familiar with, and do you grok, David Deutsch’s position? Basically that AI risk is founded upon epistemological misconceptions. Keen for your views.

author

Not specifically; if you provide the canonical link that summarizes it, I will check it out. That kind of statement has a lot of potential meanings.


Link below to his main piece. But the fundamental principles underlying his view are so opposed to our own that you might need to read The Fabric Of Reality to grok them.

https://aeon.co/essays/how-close-are-we-to-creating-artificial-intelligence


Reading his book has, though, converted me from doomer to optimist (my prior position being mostly informed by EY, ACX, you, etc.), and convinced me that slowdown is a flawed and counterproductive idea. Which is a significant reversal!

author

Added link to the queue. Reading a book would require a super-high bid though (although Deutsch's rep does make it less super-high).


Engaging with his arguments on AI will require introspection commensurate with what you recommended for the EA contest, i.e. reconsidering fundamental worldview assumptions. E.g. he states inductivism is wrong, from which he concludes both that current AIs are nowhere near AGI, and that Bayesian epistemology is incorrect.

author

If his arguments depend on Bayes being wrong...


Re terminology for non-"doomers", it's a bad idea to let this fall into the framing of optimism/pessimism, naivete/sophistication, futurism/technophobia, idealism/realism, progress/stability, complacency/reacting, or _especially_ right/left. IMO, the ideal framing would be "legalize mad science"/"don't". When all of Dr Doom's henchmen mutiny and go tell the police that he's making progress on his long-running project to open a gateway to the Terminator Dimension, should the police be allowed to shut it down? Should the government try to "make use of" Doctor Robotnik, because who knows what greater threats may be out there? When the eccentric scientist starts going off about how his invention will change everything about life on earth, that it will be the next stage of human evolution, etc, is that a sign that his work should be allowed to continue? How about if he's also open about that he doesn't know what it just might do?

This framing does not map onto any existing long-running conflicts, and carries with it most of the most helpful instincts everyone has.


That framing seems to map _very neatly_ onto lots of "existing long-running conflicts", e.g. nuclear energy, 'GMOs', and climate/geo engineering.

author

Yeah, I think Kenny's right here, that framing is not going to keep your issue non-partisan much longer than our other options.

I think of this more as 'we need words for things' and it's fine within these posts to call them The Unworried, but that doesn't work outside of that context.


re: “is ChatGPT useful” I did a writeup trying to find uses for it for my job/profession here: https://scpantera.substack.com/p/ai-and-pharmacy-1

tl;dr it either has a ways to go and/or needs some specialized fine tuning

I’m a pharmacist contracted with the California state prison healthcare system. My boss caught me fiddling around with ChatGPT, and so I’ve ended up as the local “knows about AI” guy, which doesn’t mean much except that I got a little inside info that Sacramento really, really wants to do something with ChatGPT/AI but doesn’t really know what/how. Been kind of trying to weasel my way in that direction, but my options are pretty limited.

re: LLMs in video games; (basically agreeing with what was already written) one of the biggest problems with games as story vehicles is that the writing isn’t up to the standards of highbrow literature broadly. What allows them to get away with it is partly that the average person doesn’t themselves have high standards for writing quality (see: all of modern Hollywood) but also that story/narrative elements are used as an extrinsic motivator in a way that lowers the threshold of emotional investment for a lot of people for even the flimsiest of narrative pretexts. Very IMO here, but David Gaider/Bioware is a good example of this; a lot of Bioware writing post-Dragon Age was already pretty overrated (again, very IMO) but doesn’t hold up compared to modern examples of the genre broadly (I’m thinking specifically stuff like Disco Elysium, but even Owlcat’s Pathfinder games, which aren’t stellar, are a step above the usual). We’re still in an industry that thinks Gone Home was peak storytelling.

That procedurally generated writing makes things feel soulless is more a consequence of poor fit in game design. If your pitch is “Mass Effect but with ChatGPT”, yes, absolutely, but that’s because the narrative and gameplay are pretty poorly blended. The side of gameplay that is the narrative has to be tightly scripted, or else you’re leaning on ever larger coincidences to get every conversational beat to be somehow connected; real life works less like that.

Instead I think about what a game like Shadows of Doubt would look like if you took the current iteration and threw in an LLM. The game world is already heavily procgen—and the gameplay is designed around it in a way that makes it work well when procgen is already kind of a duddy fad—but giving the NPCs the ability to be closer to real people would add in an additional missing puzzle piece.

That AI writing feels soulless is a problem of how the medium is being used, not a problem with how AI is being used.
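To make the 'procgen world plus LLM NPCs' idea above concrete, here is a minimal sketch (my illustration, not the commenter's design) of grounding an LLM-driven NPC in procedurally generated world state so its dialogue can't contradict facts the simulation already decided; the names and the ask_llm helper are hypothetical stand-ins:

```python
# Sketch: pin an LLM NPC's dialogue to facts produced by the procgen simulation.
from dataclasses import dataclass

@dataclass
class NPC:
    name: str
    job: str
    home_address: str
    alibi: str  # decided by the procgen crime simulation, not the model

def npc_prompt(npc: NPC, player_question: str) -> str:
    """Build a prompt that keeps the NPC's answers consistent with world state."""
    return (
        f"You are {npc.name}, a {npc.job} living at {npc.home_address}.\n"
        f"Fact from the simulation: your alibi is '{npc.alibi}'.\n"
        "Answer in character and do not contradict these facts.\n"
        f"Player asks: {player_question}"
    )

def ask_llm(prompt: str) -> str:
    # Stand-in for whatever model call the game engine would actually make.
    raise NotImplementedError("hypothetical model call")

npc = NPC("Mara Voss", "night-shift diner cook", "Flat 3, Keller Tower",
          "working until 4am")
prompt = npc_prompt(npc, "Where were you last night?")
# reply = ask_llm(prompt)  # uncomment once a real model call is wired in
```

The point is that the LLM supplies phrasing and personality while the procgen layer keeps supplying the load-bearing facts, which is roughly the opposite of "Mass Effect but with ChatGPT".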


You wrote: "Tessa is giving correct information here. This would, as I understand it, in general, indeed be the correct method to safely lose weight, if one wanted to intentionally lose weight."

There is nothing safe about eating 500 calories/day for a lot of people (to be clear: I am claiming it is not good general advice). If you are an adult male who is sedentary, you may want to cut to as low as 1200ish (with medical supervision!) for a very, very short period, but more likely you just want to make sure you're below 1500 (or ideally increase your activity level until 1500 is enough to cause a calorie deficit). It is worth remembering that food provides more than just calories: you need essential amino acids, vitamins/enzymatic cofactors, and minerals your body can't generate on its own. Those requirements don't go away just because you want to lose weight.

Now, a very small (<5 ft tall) sedentary woman *might* need to go as low as 500/day to shed fat, but even then I'd really, really want a medical professional who has evaluated BMR (basal metabolic rate) carefully.

There is also the deeper question of expert disagreement over how universally valid CICO (calories in, calories out) is. Personal experience and anecdotal evidence suggest that aiming for body re-composition first (to increase lean mass) and THEN cutting calories is both a healthier and more sustainable path to fitness.


It suggested a 500-calorie-a-day deficit, not a 500-calorie-a-day total.

author

I read Tessa as not saying 500/day total, rather as saying a 500/day deficit. I agree that 500/day total is probably quite bad, but e.g. I have a close male relative who did 800/day for an extended period and it worked, although he didn't keep it off.

I am in the strange position where I am well below 1500/day permanently, simply to maintain, even with physical activity, which colors my views somewhat.
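For concreteness, a minimal sketch of the deficit-versus-total arithmetic being discussed, using illustrative numbers (an assumed 2,000 kcal/day maintenance level and the rough ~3,500 kcal-per-pound rule of thumb, both of which vary a lot from person to person):

```python
# Illustrative arithmetic only; maintenance needs differ widely between people.
maintenance = 2000                 # assumed kcal/day to maintain current weight
deficit = 500                      # the "500/day" figure is this deficit...
intake = maintenance - deficit     # ...so suggested intake is 1500 kcal/day, not 500

kcal_per_pound = 3500              # rough rule of thumb for a pound of body fat
pounds_per_week = deficit * 7 / kcal_per_pound
print(intake, round(pounds_per_week, 1))   # -> 1500 1.0 (kcal/day, lb/week)
```

Under those assumptions the advice reads as "eat about 1,500 kcal/day and lose roughly a pound a week", which is quite different from eating 500 kcal/day.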


This was a failure of reading comprehension on my part. Apologies.


The link:

> Japan declares it will not enforce copyright on images used in AI training models

Goes to a fake news story that is based on an innocuous press conference in which no new information was provided. It's from an obvious "fake news" site in the old Macedonian vein. It claims that "a surprising move" and "bold stance" were "confirmed" as a new "policy", but all that actually happened was that a journalist asked a minister to confirm that existing Japanese law works the way everyone seems to think it does, and was told that yes, this is the position of the Ministry, it works the way we all think it does.

author

I got this from multiple secondary sources in various forms, and saw no pushback until this; also, the story's essence does seem to be confirmed. I do agree people are making a bigger deal out of it than it is.


Think Robin has two separate points that are getting mashed together: one AGAINST short-term risk, and one FOR long-term risk.

In the short-term, Robin thinks ASI is not a valid concern mainly because of how unusual it would be relative to the rest of human history. He also thinks it's too early to know anything about what ASI might look like, so positing different "doom" scenarios shouldn't really update us further in that direction.

In the long-term, he seems to think our descendants (either enabled by AI, or AIs themselves) will inevitably come to not identify with our values, and seek to destroy *us*.

To use Cowen's language, a "Straussian" reading of the second point might be that he thinks alignment is a non-starter conceptually. Being "aligned to human values" might not work too well if those values are both variable and dynamic. Your only option is to let your "unaligned" descendants pursue their version of the good, and pray they don't judge us too harshly.

author

I agree with your statement of Robin's views, except that he thinks it is possible to avoid this with sufficient anti-change policies, if one is willing to give up being grabby in order to do that.


One of the cool things about house concerts is that the performer(s) can have a conversation with the audience as they go along (in addition to the less effable performer/audience connection, which is definitely on a different level than e.g. stadium concerts). That seems like it would require AI. Depending on the fidelity you want, it might even require technology that we're not very close to having yet. (Arnold Kling's post did mention interviewing the performers; similar issues there).

Though I suspect many famous musicians would not want to make AI clones of their personalities for everyone to use, probably wisely.

(Aside: I didn't understand that the idea was to have a famous band play in your house, until I clicked through to the original post. It made a lot more sense after that.)


The tweet: "Sherjil Ozair: I think there's a world market for maybe five large language models."

is clearly a joking reference to

"I think there is a world market for maybe five computers."

Thomas Watson, president of IBM, 1943

and thus implies we'll end up with billions of the things in a few decades, with individually personalised LLMs.

I'm surprised you didn't recognize the reference!

author

I do find it weird that I missed it until it was pointed out, but also I do think it's plausible that it's mostly true in its reformulation, since it's more like saying there's a market for 5 computer companies.


Hanson: “instead be loyal to the idea of natural selection”...

Godwin me if you must, but I can think of two ideologies from the last century that displayed such loyalty, the one to its idea of biological natural selection, the other to a socio-economic version. Both claimed not to be ideologies at all, merely level-headed realists embracing the inevitable future. Both rejected sentimental attachments to that which must be discarded, in a similar manner to Hanson. Couple of key differences: Hanson wears more colourful shirts, and they only required a death count in the “tens of millions” range.


In the publishing industry, Publishers Weekly printed this open-minded take on how AI will transform the book biz (by a book-biz expert). https://t.co/dzzlrDu0L9
