19 Comments

Hey Zvi, do you plan on doing a recap of the year (in AI)? I think it would be pretty interesting to see your perspective on how things evolved (especially considering you've been making these round-ups for a while now...).

He does a lot of work covering developments in great detail. Would re-reading every round-up, just to make sure he remembers everything, really be a similarly good use of his time?

Something easier for him might be using a script to feed all the AI-related 2023 posts into GPT-4, Claude 2, Bard, and some Llama-based model, producing four summaries of similar length, and then commenting on which summary is best (something like the sketch below).
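For concreteness, a minimal sketch of what that script might look like, assuming the posts live as plain-text files and an OPENAI_API_KEY is set; the file layout, prompt wording, and summarize_with_gpt4 helper are all illustrative, and the other three models would each need an analogous function against their own API:

```python
# Minimal sketch, assuming the 2023 posts are plain-text files under
# posts/2023/ and OPENAI_API_KEY is set in the environment. The file layout
# and prompt wording are illustrative; Claude 2, Bard, and a Llama-based
# model would each need an analogous function against their own API.
from pathlib import Path

from openai import OpenAI  # pip install openai

client = OpenAI()

def summarize_with_gpt4(text: str) -> str:
    """Ask GPT-4 for a roughly 500-word summary of the supplied posts."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Summarize this year's AI developments in ~500 words."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

posts = "\n\n".join(
    p.read_text() for p in sorted(Path("posts/2023").glob("*.txt"))
)
# A full year of posts would blow past the context window; chunking
# (summarize each month, then summarize the summaries) is one workaround.
summaries = {"gpt-4": summarize_with_gpt4(posts)}

for name, summary in summaries.items():
    print(f"=== {name} ===\n{summary}\n")
```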

I guess recap isn't really the right word. I meant more an analysis of trends and what the next year might look like. You don't really get to discuss the big picture in depth in the weekly round-up (at most you can comment on it in passing). I think it would be pretty interesting to hear.

> I would think of this less as ‘catching them using ChatGPT’ and more ‘catching them submitting a badly written assignment.’

There's more to the post than that. Further down the tweet thread, Thinkwert says that GPT's bad writing looks genuinely different. When a student has an empty argument, they are usually halting, confused, and contradictory, while ChatGPT is smooth and confident but disconnected from the core discussion. Some humans can write that way too, of course, so it's not absolute proof, but he says it's often enough evidence to coax a confession when the student is guilty.

There's also discussion of how to improve your system prompts to coax ChatGPT into writing better. IMHO, if you can use ChatGPT intelligently to help you write a better paper than you could write on your own, that's fine. The important thing is to make sure students learn about good writing, rather than just to prove that they didn't "cheat".
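On the system-prompt point, here is a hedged illustration of the kind of instruction that discussion is about; the wording and the essay.txt filename are my own placeholders, not anything quoted from the thread:

```python
# Illustrative only: a system prompt aimed at the failure mode described
# above (smooth, confident prose that asserts nothing). The prompt wording
# and the essay.txt filename are placeholders.
from openai import OpenAI

client = OpenAI()
draft = open("essay.txt").read()  # placeholder: the student's draft

feedback = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": (
            "You are a strict writing tutor. Demand concrete claims tied to "
            "the source text, flag sentences that sound confident but say "
            "nothing, and prefer plain, direct prose over filler."
        )},
        {"role": "user", "content": draft},
    ],
).choices[0].message.content
print(feedback)
```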

So that generate-video-from-image sample is clearly struggling in a few key areas: it's obvious anywhere the fabric has to move substantially, the proportions are a bit janky, and the faces are obviously off. But I wonder how much of that is the wireframe movement template not being detailed enough. It's been an internet eon now, but we already had a wave of memes turning still portraits into singers, and, as you said, this kind of thing is one TikTok dance away from rocketing into the stratosphere. The animated/anime ones are especially bad and would take so much cleanup work as to not be worth it (at present), but I strongly suspect that the human portraits, in the hands of an experienced rigger/animator, could do some wild stuff without anyone noticing.

Basically, I suspect legal challenges to AI use in entertainment will hit porn before anything else.

> The [Belrose and Pope] post still puts existential risk from AI, despite all this, at ~1%. Which I will note that I do agree would be an acceptable risk, given our alternatives, if that was accurate.

Have you given much thought to your rough upper limit for acceptable risk? Belrose and Pope don't seem to mention a time-frame. I find it hard to think about risk without one, so I'm going to assume we're discussing this century.

Has anyone else in the upper left quadrant (from section 16) given their own rough upper limit?

I don't recall whether Zvi has posted about his upper limit, but a number of people whose thinking I respect have argued for acceptable upper limits in the 1% to 10% range (i.e. I've seen people arguing that 1% is right on the line, and others making reasonable arguments that anything below 10% is better than a world where all innovation is stifled).

If you have any links to blog/forum posts with those, could you please share them? Thanks.

Dec 8, 2023·edited Dec 8, 2023

Minor nitpick:

(1) This sentence appears TWICE:

"Totally agree with all this analysis, and yet, if media is and previously wasn’t fully controlled by people committed to preventing gains to humanity, that has some bearing on whether AGI can be expected any time soonish."

(2) You probably meant "strangle", not "strange" in this line (though it's funnier as written):

"Hawley’s upcoming bill about Section 230 is a no good, very bad bill that will not only strange generative AI in its tracks[...]"

See you at Solstice!

Am I the only one who would definitely pay to read Zvi on Moloch? (Surely not.)

Dec 8, 2023·edited Dec 8, 2023

"Surely if the user typed ‘Tony Danza hates puppies’ then that would not allow a third party to sue ChatGPT in the absence of Section 230, that’s obvious nonsense."

You can sue anyone for anything (and many frequently do). The special magic of S230 is that it creates a procedural mechanism to get a certain class of cases dismissed early. Nothing in S230 changes the definition of libel; it just states explicitly who can legitimately be held accountable for it.

Otherwise defending (even if you win) such lawsuits can be ruinously expensive.

author

Ah yes, sorry, I forget that technically you can sue over anything, and use it as shorthand for 'possibly win.' But I thought you could move for summary judgment, dismissal, costs and so on in cases where the whole thing was obvious enough nonsense.

IANAL, but I think the threshold for 'obvious nonsense' is tremendously high.

The New York Solstice or the related meetup? Link?

If we are indeed in a simulation of the most interesting historical epoch, the run-up to ASI, could it be pivotal that the AI regulatory effort by the United States is led by such manifestly dim and incompetent people? This seems like it could be a key variable, but I can't tell which way it might cut in terms of p(doom).

Dec 12, 2023·edited Dec 12, 2023

and I mean this in a non-partisan way, with Trump waiting in the wings as a possible alternative

As it relates to "AI control" of untrustworthy models, you might be interested in Redwood's new work on this here: https://twitter.com/bshlgrs/status/1734967328599945621
