You accidentally repeated the three paragraphs beginning with:

"Yann LeCunn: Scaremongering about an asteroid that doesn't actually exist (even if you think it does) is going to depress people for no reason."


Thanks for doing this! I've been trying to keep track of their interaction, hoping there would be some value generated, but Twitter doesn't make that easy.

I kept hoping that Yann would actually engage with EY's arguments, rather than resorting to these tangential snipes and ad hominem attacks...


Yann's strongest argument is his last one. There is a flavor of Pascal's mugging in EY's arguments; it has bothered me from the first time I heard them, and arguments of this form make me skeptical. At the end of the day I don't think superintelligence is as likely as the rest of you do, and I think intelligence is limited in ways that people like EY aren't considering.

As long as AI depends on humans to do its bidding, it will fail to take over the world. Now if it had a robot army, that's another story; robots are reliable and follow instructions. But no matter how smart you are, getting humans to collaborate on executing complex plans is hard, and being smarter might not even help. You have to build a certain amount of failure into your plans... totally possible for a super-AI. Still, I think the evidence is that we'll get misalignment before we get an AI capable enough to execute a complex plan using humans. So the one-shot scenario just seems wrong.


If I were Mark Zuckerberg and my opinions about AI risk were exactly the same as Yann LeCun's, I would *not* put Yann LeCun in a position of responsibility.

Are these people familiar with the concept of being mistaken about something?


The sad thing here for me is that Eliezer is not a good communicator, at least in this format. He comes across as strident and contemptuous. He takes his points as obvious, and can't always be bothered to explain them in a way that might actually help convey understanding to someone who doesn't already agree with him.

All of this is understandable, given the weight of the burden Eliezer has been carrying and the length of time he's been carrying it for. But it's not productive. If the goal is to save humanity, then at this stage, a critical subgoal is to be convincing: comprehensible, appealing, sympathetic. We need to meet people where they are (if not Yann in this instance, then the many other folks who will be reading this public conversation), explain things in terms that they can understand, and always come across as rational, patient, constructive. To the extent that it is ever possible to change someone's mind, that is the path. No one has ever had their mind changed by being browbeaten.

If Eliezer doesn't have the skills for this – which are a specific set of skills – or if he simply no longer has the patience, then again that's quite understandable, but I hope someone can help him understand that he is not serving the cause by engaging in public in this fashion.


"Yann LeCun: To *guarantee* that a system satisfies objectives, you make it optimize those objectives at run time (what I propose). That solves the problem of aligning behavior to objectives. Then you need to align objectives with human values. But that's not as hard as you make it to be."

Someone more familiar with the research can confirm that mesa-optimization and meta-optimization can still occur even if the optimization happens at run time, right? As for the last part of this... holy shit, I don't even know where to begin. Does he realize that humans diverge wildly, even among themselves, on the values they claim to hold? Axiology and values are open problems in philosophy and economics research. First you have to solve that problem, then hope you can properly transmit the solution to an AI, and then hope the AI doesn't mesa- or meta-optimize away from it.
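
For concreteness, here is a minimal Python sketch of the gap being described: an agent that optimizes its written objective perfectly at run time while drifting from the intent behind it. The objective, action names, and numbers are all invented for illustration; this is not anyone's actual system or proposal.

```python
# Toy sketch: run-time optimization of a written objective.
# Intended goal: answer user questions helpfully.
# Written objective: maximize the probability of a thumbs-up click.
# All actions and scores below are hypothetical.

def thumbs_up_probability(action: str) -> float:
    """Crude stand-in for a model of the *written* objective."""
    return {
        "give_honest_answer": 0.6,      # helpful, sometimes unwelcome
        "give_flattering_answer": 0.8,  # pleasant, often wrong
        "nag_user_for_thumbs_up": 0.9,  # games the signal directly
    }[action]

def run_time_optimize(actions):
    """Pick whichever action best satisfies the written objective."""
    return max(actions, key=thumbs_up_probability)

actions = ["give_honest_answer", "give_flattering_answer", "nag_user_for_thumbs_up"]
print(run_time_optimize(actions))  # -> 'nag_user_for_thumbs_up'
```

The toy's point: aligning behavior to the written objective is the easy half; whether that objective still stands in for the values it was meant to encode is the part the comment above is flagging as the open problem.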


You know, I don't think Yann is even capable of considering Eliezer's argument. His mind is fixed on a position and anything that goes against that position gets instantly filtered out.

The argument for AI risk is actually quite simple, so I wonder why so many people have problems with it. If you have ever programmed anything, you know that the computer simply follows your orders as you wrote them, not necessarily as you intended. Scale that up to extremely capable, adaptable, autonomous programs and it's pretty clear what failure looks like.
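
As a small, entirely hypothetical illustration of "it does what you wrote, not what you meant": the file names and the cleanup rule below are made up, and the bug is deliberate.

```python
# Intended rule: delete files that are temporary AND older than a week.
# Rule as written: deletes files that are temporary OR older than a week.
from datetime import datetime, timedelta

now = datetime.now()
files = {
    "report_final.docx": now - timedelta(days=10),  # old but important
    "cache_000.tmp": now - timedelta(days=30),      # actual junk
}

def should_delete(name: str, modified: datetime) -> bool:
    week_ago = now - timedelta(days=7)
    return name.endswith(".tmp") or modified < week_ago  # bug: 'or' should be 'and'

doomed = [name for name, modified in files.items() if should_delete(name, modified)]
print(doomed)  # -> ['report_final.docx', 'cache_000.tmp']; the report goes too
```

Nothing in that sketch is intelligent, and the damage is bounded by what the script can touch; the worry is the same literal-mindedness in a system that is far more capable and autonomous.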

People for some reason seem to think AI is just a normal person, maybe more intelligent, so it's all fine since smart people are nice. That view is, quite frankly, ridiculous. Delusional even.

Honestly, if they had called it "adaptable autonomous system" or "complex information processing" instead of AI, things would be much better...


Hey Zvi,

I posted this response to Daniel Eth's tweet about an AGI turning the universe into data centres for computing Pi.

"What incentive would [the putative AGI] have for engineering this outcome?"

Daniel didn't respond, but I wonder what you think?


When I first started reading YUD on this topic I was perplexed, and with dawning realization came a sense of excitement: "This is understandable but difficult, and I must be very smart indeed to understand this obviously smart fellow when so many other smart people do not."

Having read more of the arguments of those who do not understand, I feel somewhat deflated -- they're plenty smart enough to understand, but are just engaging in the 'motivated stopping' mentioned so long ago now in the Sequences. This stuff really isn't that complicated; it's just that nobody wants to look directly at it long enough to understand.


I find Yann's argument so disappointing. If he has a real argument for why there is no need to worry, I'm eager to hear it.


> YL: You know, you can't just go around using ridiculous arguments to accuse people of anticipated genocide and hoping there will be no consequence that you will regret. It's dangerous. People become clinically depressed reading your crap. Others may become violent.

Wow. There's no tone on the Internet, of course. But depending on the tone and facial expressions, this could be interpreted in *very* different ways.

Or maybe it's just that I saw a mafia TV show a few months back, priming my brain's pattern recognition to recognize potential similar patterns elsewhere.


Yann sure isn't smart. He can't debate well; he bails pretty fast on the actual debate and resorts to complaining about Yudkowsky scaring people. Even the way he bails is dumb: if Yudkowsky is right, then obviously he is going to have to scare people in the process of warning them, so the complaint presupposes that EY is wrong. Um, Yann, the truth of that is exactly what you two are debating. I hope someone makes a spoof vid of him like the one where EY is talking about cats -- except not a fun spoof, but one that eviscerates this creep.


I've been thinking about this conversation since it happened. How can the godfather of modern AI be thinking about this question so poorly? My most charitable explanation is that he's experiencing some pretty extreme cognitive dissonance. His subconscious is telling the narrative-writing part of his brain that safety isn't a concern (because A, if we treat AI as dangerous and it turns out not to be, we'll miss out on massive benefits to humanity, and B, if it turns out we are at risk, Yann will bear a non-trivial amount of the responsibility), and the narrative writer does its best to come up with a reason why, which turns out to be some sophomoric nonsense like the above.


From the Twitter thread link (https://twitter.com/lmldias/status/1650773428390047745): this is one of the better argument seeds contra inevitable foom I've seen. Material manipulation is not a simple follow-on from digital manipulation. Figuring out the material technology processes to get I, Pencil (and the associated emergent order) working -- or I, Paperclip, or I, Paperclip^100... -- and to coordinate the likely >trillions of current daily transactions required for paper clips to become the dominant production framework in the world/galaxy, is highly non-trivial.

I understand that paper clip maximizing is a metaphor. Paper clip manufacturing (via the I, Pencil metaphor) is highly non-trivial. If the foom scenario is that AGI will be able to manipulate fusion energy/gravitons and generate paper clips out of anything within our lifetime, I'd like to see the logical progression. I get that alignment is a problem, and I get that IQ=40000 is functionally unfathomable. But "an AGI so alien and so intelligent that galaxy-rending material manipulation is the default and only possible outcome" is not a complete argument. I'd like to see the projected timeline, and the gating steps needed to do galaxy-rending material manipulation.

This isn't a complete thought, but mostly a conversation starter.


I wonder *why* there seems to be a relatively consistent lack of thorough engagement from those who do not believe AI presents existential risk with a meaningful probability.

The requests from Scott, yourself, and others to plainly state objections are obviously reasonable. Perhaps this is evidence that thorough engagement with these issues leads people to the AI risk side, so all that's left is what appears to be relative flippancy.
