16 Comments

Until we can make him take stock and change direction, he will continue doing as he will. And this, unfortunately, is not likely to end well for us as humans.


There are many chapters yet left to play out. Don’t give up hope.


I am close to giving up hope. There are two groups: those who understand AGI risks but lack power, and those who have power but are blind to the risks.

Not sure how to educate the second group, or enable the first.


Those two groups combined comprise less than 1% of the population. The vast majority of people have no clue about any of this. I have no idea how this will ultimately play out, but don’t underestimate the average person’s natural tendency to overreact to calamities.


One very small upside to the timeline to AGI is that the lawsuits challenging Sam's control of the company will (very likely) play out at a time when the impacts are becoming damned obvious. So if he's right about when we'll have AGI, it will be impossible for judges and juries to ignore the fact that by taking over, successful or not, he was actively trying to steal control of the future monolithic ASI from a nonprofit which was founded to prevent exactly that from happening. And the legal system will presumably be impartial, but it won't be blind to what is happening in the world.


Here’s hoping


Thank you so much for this public service!


+1, these are really valuable, thanks for taking the time


My BS detector got triggered on this sentence:

> Looking back, I certainly wish I had done things differently, and I’d like to believe I’m a better, more thoughtful leader today than I was a year ago.

What exactly did he wish he had done differently? In which ways is he a better or more thoughtful person? How did that event contribute positively?

Without answers to these questions, this is just an empty sentence meant to sound good.


I’m sure this has been pointed out before, but there is a certain Tolkien-esque “One Ring to rule them all” pull to AGI/ASI. I can easily imagine Altman beginning this journey fully invested in AI safety, only to later realize he had a chance to become the most powerful human of all time.


Sam is applying the move-fast-and-break-things strategy to AI.


It may be that our best hope is Altman dramatically underestimating Elon Musk's willingness to defeat his rivals and perceived enemies by any means, including dramatic AI safety laws.

One can live in hope!


let's call it what it is. sam altman is a sociopath


Build AGI safely? Humans are a good example of AGI. And they're never going to be safe. Why does anyone suppose it will be possible to build a safe AGI? People always have conflicting objectives. Satisfying one group's objectives will necessarily be "unsafe" - to some degree - for the other group. All of human politics is about this problem. There never will be safe AGI.


Great article as always, but I think we should do more to question Altman's own constraints and context. Given that we know he wants to keep raising huge rounds, he needs to show results, so I see him exaggerating just a little.
