8 Comments

I’m always alarmed by the “missing mood” in AI execs’ communications about their AGI timelines. Somebody who truly believes we’ll get AGI within the decade should be much more freaked out by our failure to meaningfully advance on alignment, or much more excited about the likely total structural change to our economy. Whether things turn out well or badly, it will be one of the most important events in world history, and if you really think it’s going to happen in the very near future, you should act like it’s a big deal.


Seems to me that quite a few people do act like it’s a big deal. Whether Demis does or doesn’t act that way may be a function of his personality more than anything else.


Jeff Dean has also previously explained that "Gemini" is a reference to the twin teams, the Google Brain team and the DeepMind team.

https://twitter.com/JeffDean/status/1733580264859926941


> what it takes to align a system smarter than humans

None of these people have a damn clue. You can't "align" humans either.

Cat's already out of the bag, by the way. Think for two seconds:

If you accidentally-on-purpose built God, would you tell your boss?

Elon wants his money back for different reasons than he's saying. Smartest thing he's done in a long time.


> Dwarkesh notes that Go and Chess have clear win conditions, real life does not, asks what to do about this. Demis agrees this is a challenge, but that usually ‘in scientific problems’ there are ways to specify goals. Suspicious dodge?

I wonder if status is really as pervasive as Robin Hanson et al. believe.


> (9:00) Dwarkesh notes that Go and Chess have clear win conditions, real life does not, asks what to do about this. Demis agrees this is a challenge, but that usually ‘in scientific problems’ there are ways to specify goals. Suspicious dodge?

This... this is literally the whole thing!

The human utility function can’t be specified precisely enough to serve as a victory condition!

That’s, like, a core component of why we should be terrified.

I definitely feel the missing mood here, and it’s so weird to hear two people discuss this without drilling down hard on the point.

Real life’s victory conditions are so intractable as to be impossible to define, and yet we are building AIs that will attempt to pursue them anyway.
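
To make the contrast concrete, here is a minimal sketch (not from the post or the comment; the function names are illustrative). Go’s win condition can be written down as a tiny, total scoring rule over the final position, while nothing comparable can be written for “what humans actually want”:

```python
def go_reward(black_score: float, white_score: float, komi: float = 7.5) -> float:
    """Go's 'win condition' is a short, total, computable rule over the final score."""
    return 1.0 if black_score > white_score + komi else -1.0


def real_world_reward(world_state) -> float:
    """No analogous rule exists for 'what humans actually want'; any proxy
    written here would be exactly the mis-specified goal the comment warns about."""
    raise NotImplementedError("human utility is not specifiable as a scoring rule")
```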
