Discussion about this post

Polytropos:

I’m always alarmed by the “missing mood” in AI execs’ communications about their AGI timelines. Somebody who truly believes we’ll get AGI within the decade should be much more freaked out by our failure to meaningfully advance on alignment, or more excited about the likely total structural change to our economy. Whether things turn out well or badly, it will be one of the most important events in world history, and if you really think it’s going to happen in the very near future, you should act like it’s a big deal.

Dr. Y:

>what it takes to align a system smarter than humans

None of these people have a damn clue. You can't "align" humans either.

Cat's already out of the bag, by the way. Think for two seconds:

If you accidentally-on-purpose built God, would you tell your boss?

Elon wants his money back for different reasons than he's saying. Smartest thing he's done in a long time.

6 more comments…
