Regarding my Hinge tweet:

> He’s not single, ladies, but he’s polyamorous, so it’s still a go.

In case this is misunderstood, I am not in fact polyamorous, but I’m currently dating to meet my future wife, ladies. So if you think we’d be a good match, send me a message. :)

About me: https://jacquesthibodeau.com/lets-go-on-a-date/

The AI risk community is not *completely* above amateur psychoanalysis and insults. In fact, I'd argue the AI risk community does not *want* to be above amateur psychoanalysis and insults. I agree that it's off-putting and mostly counterproductive.

But claiming "we are the nice ones and the outgroup is deontologically awful people", however true it may be right now, won't stay true forever; all of this is going to become far too mainstream for that sort of framing to hold.

Serious but nice talk is helpful when communicating with smart people, or good people, or high-status people. People like the typical anon troll have none of the aforementioned qualities, and they respond to a different type of talk. (Remember the 45th US president's support base.)

I do not want or need any help from those people *right now*. But they are also people, with values and desires of their own; some of them make interesting stuff, and their input should and will somehow count in this whole thing that will unfold.

What Do We Call Those Unworried People? I’m still voting for Faithers.

I think the claim that chatbots will take over the world requires further evidence, rather than the reverse. There are plenty of reasons beyond faith why that's implausible. But sure, government will totally productively help save us from this existential danger.

> I wonder how much ‘we could do X with AI’ turns into ‘well actually we can do X without AI.’

I think one of the big benefits of ChatGPT/Bard is that it'll be way cheaper and easier to build quick little solutions. Lots of things that are too expensive to build right now for a small audience, or even for a single person, become viable.

Yes in theory those could all be done before, but if they're radically cheaper, that's still a big change.

Imagine if all these AI regulations had come into existence 10 years ago. By how much would it have delayed the appearance of GPT-4? A year? A decade? 30 years? It seems to me like we're sacrificing the future utility of GPT-5 to avoid a hypothetical scenario that probably wouldn't have happened in the next few decades anyway, regulation or no regulation.

Zvi are you familiar with, and do you grok, David Deutsch’s position? Basically that AI risk is founded upon epistemological misconceptions. Keen for your views.

Re terminology for non-"doomers", it's a bad idea to let this fall into the framing of optimism/pessimism, naivete/sophistication, futurism/technophobia, idealism/realism, progress/stability, complacency/reacting, or _especially_ right/left. IMO, the ideal framing would be "legalize mad science"/"don't". When all of Dr Doom's henchmen mutiny and go tell the police that he's making progress on his long-running project to open a gateway to the Terminator Dimension, should the police be allowed to shut it down? Should the government try to "make use of" Doctor Robotnik, because who knows what greater threats may be out there? When the eccentric scientist starts going off about how his invention will change everything about life on earth, that it will be the next stage of human evolution, etc., is that a sign that his work should be allowed to continue? How about if he's also open about the fact that he doesn't know what it might do?

This framing does not map onto any existing long-running conflict, and it carries with it most of the helpful instincts everyone already has.

re: “is ChatGPT useful” I did a writeup trying to find uses for it for my job/profession here: https://scpantera.substack.com/p/ai-and-pharmacy-1

tl;dr it has a ways to go and/or needs some specialized fine-tuning

I’m a pharmacist contracted with the California state prison healthcare system. My boss caught me fiddling around with ChatGPT and so I’ve ended up as the local “knows about AI” guy which doesn’t mean much except that I got a little inside info that Sacramento really really wants to do something with ChatGPT/AI but doesn’t really know what/how. Been kind of trying to weasel my way in that direction but my options are pretty limited.

re: LLMs in video games; (basically agreeing with what was already written) one of the biggest problems with games as story vehicles is that the writing isn't up to the standards of highbrow literature broadly. What allows them to get away with it is partly that the average person doesn't have high standards for writing quality (see: all of modern Hollywood), but also that story/narrative elements are used as an extrinsic motivator in a way that lowers the threshold of emotional investment for a lot of people, even for the flimsiest of narrative pretexts. Very IMO here, but David Gaider/Bioware is a good example: a lot of Bioware writing post-Dragon Age was already pretty overrated (again, very IMO), and it doesn't hold up compared to modern examples of the genre broadly (I'm thinking specifically of stuff like Disco Elysium, but even Owlcat's Pathfinder games, which aren't stellar, are a step above the usual). We're still in an industry that thinks Gone Home was peak storytelling.

That procedurally generated writing makes things feel soulless is more a consequence of a poor fit in game design. If your pitch is "Mass Effect but with ChatGPT", then yes, absolutely, but that's because the narrative and gameplay are pretty poorly blended there. The narrative side of the gameplay has to be tightly scripted, or else you're leaning on increasingly large coincidences to get every conversational beat to be somehow connected; real life works less like that.

Instead, I think about what a game like Shadows of Doubt would look like if you took the current iteration and threw in an LLM. The game world is already heavily procgen (and the gameplay is designed around it in a way that makes it work well, even though procgen is by now kind of a dud of a fad), but giving the NPCs the ability to be closer to real people would add a missing puzzle piece.

That AI writing feels soulless is a problem of how the medium is being used, not a problem with how AI is being used.

You wrote: "Tessa is giving correct information here. This would, as I understand it, in general, indeed be the correct method to safely lose weight, if one wanted to intentionally lose weight."

There is nothing safe about eating 500 calories/day for a lot of people (to be clear: I am claiming it is not good general advice). If you are a sedentary adult male, you may want to cut to as low as 1200-ish (with medical supervision!) for a very, very short period, but more likely you just want to make sure you're below 1500 (or, ideally, increase your activity level until 1500 is enough to cause a calorie deficit). It is worth remembering that food provides more than just calories: you need essential amino acids, vitamins/enzymatic cofactors, and minerals your body can't generate on its own. Those requirements don't go away just because you want to lose weight.

Now, a very small (<5 ft tall) sedentary woman *might* need to go as low as 500/day to shed fat, but even then I'd really really want a medical professional that has evaluated BMR carefully.

There is also the deeper question of expert disagreement over how universally valid CICO (calories in, calories out) is. Personal experience and anecdotal evidence suggest that aiming for body recomposition first (to increase lean mass) and THEN cutting calories is both a healthier and more sustainable path to fitness.
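For a rough sense of the numbers here (my own illustration, not from the comment above, and certainly not medical advice), the Mifflin-St Jeor equation is a standard estimator of basal metabolic rate, and it shows how extreme a 500 kcal/day intake is relative to typical energy expenditure:

```python
def bmr_mifflin(weight_kg: float, height_cm: float, age: int, sex: str) -> float:
    """Estimate basal metabolic rate (kcal/day) via Mifflin-St Jeor."""
    base = 10 * weight_kg + 6.25 * height_cm - 5 * age
    return base + 5 if sex == "male" else base - 161

def sedentary_tdee(bmr: float) -> float:
    """Total daily energy expenditure with the usual sedentary multiplier."""
    return bmr * 1.2

# A sedentary 80 kg, 175 cm, 35-year-old man (hypothetical example):
bmr = bmr_mifflin(80, 175, 35, "male")   # ~1724 kcal/day
tdee = sedentary_tdee(bmr)               # ~2069 kcal/day
# Eating 500 kcal/day would be a deficit of roughly 1570 kcal/day,
# far below the intake floors mentioned above.

# Even a very small (45 kg, 150 cm, 30-year-old) sedentary woman:
bmr_small = bmr_mifflin(45, 150, 30, "female")   # ~1077 kcal/day
```

Even in that last case, estimated expenditure sits well above 500 kcal/day, which is the commenter's point about wanting a carefully evaluated BMR before going anywhere near that number.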

The link:

> Japan declares it will not enforce copyright on images used in AI training models

Goes to a fake news story based on an innocuous press conference at which no new information was provided; it's an obvious "fake news" site in the old Macedonian vein. It claims that "a surprising move" and a "bold stance" were "confirmed" as new "policy", but all that actually happened was that a journalist asked a minister to confirm that existing Japanese law works the way everyone seems to think it does, and was told that yes, that is the position of the Ministry; it works the way we all think it does.

Think Robin has two separate points that are getting mashed together: one AGAINST short-term risk, and one FOR long-term risk.

In the short-term, Robin thinks ASI is not a valid concern mainly because of how unusual it would be relative to the rest of human history. He also thinks it's too early to know anything about what ASI might look like, so positing different "doom" scenarios shouldn't really update us further in that direction.

In the long-term, he seems to think our descendants (either enabled by AI, or AIs themselves) will inevitably come to not identify with our values, and seek to destroy *us*.

To use Cowen's language, a "Straussian" reading of the second point might be that he thinks alignment is a non-starter conceptually. Being "aligned to human values" might not work too well if those values are both variable and dynamic. Your only option is to let your "unaligned" descendants pursue their version of the good, and pray they don't judge us too harshly.

One of the cool things about house concerts is that the performer(s) can have a conversation with the audience as they go along (in addition to the less effable performer/audience connection, which is definitely on a different level than e.g. stadium concerts). That seems like it would require AI. Depending on the fidelity you want, it might even require technology that we're not very close to having yet. (Arnold Kling's post did mention interviewing the performers; similar issues there).

Though I suspect many famous musicians would not want to make AI clones of their personalities for everyone to use, probably wisely.

(Aside: I didn't understand that the idea was to have a famous band play in your house, until I clicked through to the original post. It made a lot more sense after that.)

The tweet: "Sherjil Ozair: I think there's a world market for maybe five large language models."

is clearly a joking reference to

"I think there is a world market for maybe five computers."

Thomas Watson, president of IBM, 1943

and thus implies we'll end up with billions of the things in a few decades, with individually personalised LLMs.

I'm surprised you didn't recognize the reference!

Hanson: “instead be loyal to the idea of natural selection”...

Godwin me if you must, but I can think of two ideologies from the last century that displayed such loyalty, the one to its idea of biological natural selection, the other to a socio-economic version. Both claimed not to be ideologies at all, merely level-headed realists embracing the inevitable future. Both rejected sentimental attachments to that which must be discarded, in a similar manner to Hanson. Couple of key differences: Hanson wears more colourful shirts, and they only required a death count in the “tens of millions” range.

In the publishing industry, Publishers Weekly printed this open-minded take on how AI will transform the book biz (by a book-biz expert). https://t.co/dzzlrDu0L9
