This week, Altman offers a post called Reflections, and he has an interview in Bloomberg. There are a bunch of good and interesting answers in the interview about past events, which I’ll either skip or condense heavily here, such as his going over his calendar and all the meetings he constantly has, so consider
Until we can make him take stock and change direction, he will continue doing as he will. And this, unfortunately, is not likely to end well for us as humans.
There are many chapters yet left to play out. Don’t give up hope.
I am close to giving up hope. There are two groups: those who understand AGI risks but lack power, and those who have power but are blind to the risks.
Not sure how to educate the second group or enable the first.
Those two groups combined comprise less than 1% of the population. The vast majority of people have no clue about any of this. I have no idea how this will ultimately play out, but don’t underestimate the average person’s natural tendency to overreact to calamities.
One very small upside to the timeline to AGI is that the lawsuits that challenge Sam's control of the company will (very likely) play out during a time when the impacts are becoming damned obvious. So if he's right about when we'll have AGI, it will be impossible for judges and juries to ignore the fact that, successful or not, by taking over he was actively trying to steal control of the future monolithic ASI from a nonprofit that was founded to prevent exactly that from happening. And the legal system will presumably be impartial, but it won't be blind to what is happening in the world.
Here’s hoping
Podcast episode for this post:
https://open.substack.com/pub/dwatvpodcast/p/openai-10-reflections
Thank you so much for this public service!
+1, these are really valuable, thanks for taking the time
My BS detector got triggered on this sentence:
> Looking back, I certainly wish I had done things differently, and I’d like to believe I’m a better, more thoughtful leader today than I was a year ago.
What exactly does he wish he had done differently? In what ways is he a better, more thoughtful leader? How did that event contribute positively?
Without answers to these questions, this is just an empty sentence meant to sound good.
I’m sure this has been pointed out before, but there is a certain Tolkien-esque “One Ring to rule them all” pull to AGI/ASI. I can easily imagine Altman beginning this journey fully invested in AI safety, only to later realize he had a chance to become the most powerful human of all time.
Sam is applying the move fast and break things strategy to AI.
It may be that our best hope is Altman dramatically underestimating Elon Musk's willingness to defeat his rivals and perceived enemies by any means, including dramatic AI safety laws.
One can live in hope!
Let's call it what it is: Sam Altman is a sociopath.
Build AGI safely? Humans are a good example of AGI. And they're never going to be safe. Why does anyone suppose it will be possible to build a safe AGI? People always have conflicting objectives. Satisfying one group's objectives will necessarily be "unsafe" - to some degree - for the other group. All of human politics is about this problem. There never will be safe AGI.
Great article as always, but I think we should do more to question Altman's own constraints and context. Given that we know he wants to keep raising huge rounds and needs to show results, I see him exaggerating just a little.