49 Comments

i believe his name is “george hotz” (s/holtz/hotz/)

It’s “Landauer limit”
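(For reference, that's the thermodynamic lower bound on the energy required to erase one bit of information,

$$E_{\min} = k_B T \ln 2,$$

where $k_B$ is Boltzmann's constant and $T$ is the ambient temperature; at room temperature it works out to roughly $3 \times 10^{-21}$ joules per bit.)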

I listened to an interview with Hotz when he had his self-driving car company, and he sounded like a total BS hype man. Am I wrong, or is there just a big market for that?

I wish we had more people focusing on practical aspects of AI safety/doom prevention (which I think are more closely related than people seem to realize). One example: shouldn't we segregate AI cybersecurity capabilities from generalist LLM capabilities? The AI that understands various hacking techniques probably shouldn't be the same AI that can sweet-talk people into doing things and (eventually) come up with and execute complex goals. We also probably need to 10X our game on cybersecurity in general, since that's the field where bad actors will be trying to use AI to come up with new exploits, and it's the logical starting point for an AGI to start creating problems.
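Here's a rough sketch of what that segregation could look like at the routing layer; everything in it is hypothetical (the model names, and the keyword heuristic standing in for a real classifier):

```python
# Hypothetical sketch: route security-flavored requests away from the
# generalist assistant to a narrowly-scoped model. The model names and the
# keyword heuristic are illustrative placeholders, not a real product design.

SECURITY_KEYWORDS = {"exploit", "payload", "privilege escalation", "shellcode", "rop chain"}

def looks_security_related(prompt: str) -> bool:
    """Crude stand-in for a real classifier that flags offensive-security content."""
    lowered = prompt.lower()
    return any(keyword in lowered for keyword in SECURITY_KEYWORDS)

def route(prompt: str) -> str:
    """Send flagged prompts to a sandboxed, restricted model with no tool use
    or persuasion capabilities; everything else goes to the generalist model."""
    if looks_security_related(prompt):
        return "restricted-security-model"
    return "generalist-assistant"

if __name__ == "__main__":
    print(route("Write a birthday poem for my aunt"))   # -> generalist-assistant
    print(route("Generate shellcode for this exploit")) # -> restricted-security-model
```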

These debates seem very hard to make progress in. They are entertaining, but I really wish there were some way of forcing someone to answer a specific objection. That mostly didn't happen the last time Hotz debated, and it didn't happen this time either. I wish they would spend far more time finding a specific crux, expressing it as a statement, and debating that single crux, instead of ranging over everything from replicating bacteria to compute limits to decision theories. Too many of these subjects are simply too complex for a debate format.

I think this summary is accurate; the debate wasn't really insightful, just disheartening, except that I can now calibrate what to expect from Hotz going forward. He can actually be quite insightful about computer science in general, and I expected him to have some actual arguments here, but he simply kept taking whacks at Eliezer and framing it all as some weird "narrative".

Worse still, he is very familiar with Eliezer's writing, and I can't believe he doesn't actually understand what Eliezer believes -- I really think Hotz is trying to *make it sound like* AGI is just not gonna be a problem, because... "not my problem" / "impossible" / "sci-fi" etc. Or to make it sound like he can just ""win"" this debate, and then, "see? Open source AI good! Yay Progress!"

(Maybe one good thing that comes out of this is to show all viewers that this is really what debating AGI scenarios looks like, even with the influential people, and yeah, I would agree that that's good.)

I have never understood how competing ASIs would be any better for humans than competing aerobic animals were for anaerobic organisms.

Ultimately it comes down to the crux that Hotz doesn't place much value on humanity, which is something many people do not realize about these AI promoters.

If you think that present-day humanity has no particular value in continuing to exist, of course you would push for a future in which humanity is extinct or marginalized.

It's a ridiculously big and unfair ask, but what I want from e/acc folks is to demonstrate comprehension of the key Yud/Bostrom/etc arguments. I am sincerely interested in good e/acc arguments, but instead these peeps just put forward ideas that have already been well explored. Then it is just a matter of: "can Yud condense years of LW canon on this topic into a 1 minute soundbite in a rhetorically convincing fashion" instead of a substantive debate.

Relatedly, my hunch is that LW is a highly erudite alignment echo chamber, because you really need to have your act together to put forward a non-laughable "why AGI will be okay for humans" argument - as in, have spent a formidable amount of time reading and thinking about it. I suspect this barrier to entry has a selection effect (as in, selects for doomer sympathy).

TLDR this discussion has a variance-bias problem. Not sure what can be done about that.

I think you're going to have these circular arguments every time, because the crux of the matter is that the hypothetical Doom AI is supposed to do a thing that neither debater can predict, with capabilities that no one can comprehend. To convince someone of something, they have to see that your argument is right; if they can't understand how a thing will happen, you can't convince them of it. Well, you can convince some people, if they accept the principal assumptions that lead to it - which is why I think the AI Doom argument makes intuitive sense to some people, while almost everyone else keeps coming up with "this is why it won't happen!" - but that won't be enough. There's only one way to determine whether, starting from assumptions X, Y, and Z, an unforeseen event will occur, and that is to test it.

I must have missed something important in the AI debate, because the following seems to change the picture dramatically: "34:00 Hotz asks about AIs rewriting their own source code. Yudkowsky says he no longer talks about it because it strains incredulity and you don’t need the concept anymore."

Why would AI not need to be able to rewrite its own code in order to foom? Or, as Hotz seemed to think EY was conceding, was this no longer an argument in favor of foom?

This doesn't _seem_ particularly high value for people familiar with all of this to watch – no?
