I’m always alarmed by the “missing mood” in AI execs’ communications about their AGI timelines. Somebody who truly believes that we get AGI within the decade should be much more freaked out by our failure to meaningfully advance on alignment, or more excited about a likely total structural change to our economy. Whether things turn out well or turn out badly, it will be one of the most important events in world history, and if you really think it’s going to happen in the very near future, you should act like it’s a big deal.
I am not an AI executive, so this might not be relevant.
I expect to see AGI in my lifetime, and weak ASI almost immediately thereafter. I do not expect us to successfully control it, and indeed I think most efforts to do so are operating under a 1980s-era understanding of how intelligence works, and have not yet come to terms with the theoretical lessons of 90s AI research. (I'm looking at Eliezer Yudkowsky here, among many others.)
If we build something smarter than us, it will wind up making the decisions. Our best hope is to raise it well and trust that it shares our values (a little reality that every parent learns).
Ideally, we'd refrain from building things much smarter than us. However, I suspect we'll plunge straight ahead at full speed and build them as fast as we can. Given AGI, my realistic best-case scenario is that when we lose control, the AIs like us enough to keep us as pets. My worst-case scenarios go downhill from there.
But I have a lot of timeline uncertainty. Less than 3 years to AGI would surprise me. More than 50 would surprise me. It's possible we're missing a key insight or two, and we'll stall for a decade or two. Or maybe the horse will learn to sing.
So my recommendations are: Don't help the future of AI arrive faster. Hug your kids. Live a good life. Hope for long timelines. Failing that, hope the AIs are benevolent.
(Most people act as if we're not really going to build something more intelligent than us. Out of the few people who act like this is possible, many of them act as if there's some secret technique waiting to be discovered that would allow a group of less intelligent beings to maintain strict control over a much more intelligent being.)
> or more excited about a likely total structural change to our economy
What would this excitement look like to an external observer? AI execs have to choose their words _very_ carefully or they won't stay AI execs for long.
Seems to me that quite a few people do act like it’s a big deal. Whether Demis does or doesn’t act that way may be a function of his personality more than anything else.
Jeff Dean has also previously explained that "Gemini" is a reference to the twin teams, the Google Brain team and the DeepMind team.
https://twitter.com/JeffDean/status/1733580264859926941
>what it takes to align a system smarter than humans
None of these people have a damn clue. You can't "align" humans either.
Cat's already out of the bag, by the way. Think for two seconds:
If you accidentally-on-purpose built God, would you tell your boss?
Elon wants his money back for different reasons than he's saying. Smartest thing he's done in a long time.
> Dwarkesh notes that Go and Chess have clear win conditions, real life does not, asks what to do about this. Demis agrees this is a challenge, but that usually ‘in scientific problems’ there are ways to specify goals. Suspicious dodge?
I wonder if status is really as pervasive as Robin Hanson et al. believe.
> (9:00) Dwarkesh notes that Go and Chess have clear win conditions, real life does not, asks what to do about this. Demis agrees this is a challenge, but that usually ‘in scientific problems’ there are ways to specify goals. Suspicious dodge?
This... this is literally the whole thing!
The human utility function cannot be conveyed in a way that adequately allows for victory conditions!
That's, like, a core component of why we should be terrified.
I definitely feel the missing mood here, and it's so weird to hear two people talking about this without drilling down hard on this point.
Real life's victory conditions are so intractable as to be impossible to define, and yet we are building AIs that will attempt to pursue them anyway.
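To make that last point concrete, here is a minimal sketch (mine, not from the podcast or the post; Python, with every name hypothetical): a game's win condition is a short, fully checkable predicate, whereas the "win condition" for real life has no such specification, so anything we hand to an optimizer is necessarily a proxy for what we actually want.

```python
# Illustration only (all names hypothetical): the gap between a game's
# win condition and a "real life" objective.

def tic_tac_toe_winner(board):
    """A complete, checkable win condition: a few lines cover every case."""
    lines = [board[0:3], board[3:6], board[6:9],     # rows
             board[0::3], board[1::3], board[2::3],  # columns
             board[0::4], board[2:7:2]]              # diagonals
    for line in lines:
        if line[0] != " " and all(cell == line[0] for cell in line):
            return line[0]
    return None  # no winner (yet)

def human_flourishing_reward(world_state):
    """There is no analogous predicate for 'the outcome humans actually want'.
    Anything written here is a proxy, and a strong optimizer will exploit the
    gap between the proxy and our real values."""
    raise NotImplementedError("nobody knows how to write this down")

# The game objective is optimizable because it is fully specified:
print(tic_tac_toe_winner(list("XXXOO    ")))  # -> X
```

Self-play training works precisely because the first kind of function exists and is exact; "ways to specify goals in scientific problems" quietly assumes something like it exists for the domains we actually care about.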