Discussion about this post

Dave Friedman:

The biggest problem I see with Aschenbrenner’s argument, which he acknowledges, is that we seem to be running out of data for training. It’s certainly a solvable problem but I don’t know whether it can be solved in an amount of time that conforms to his prediction of AGI by 2027-28.

Ben Reid:

Thanks Zvi for spending the neural processing cycles so the rest of us don't have to.

My 2¢: As one of the approximately 78% of the world's population who live outside the US and China, the naive two-player, zero-sum, "everyone else is an NPC", blue-pilled US exceptionalism underpinning the analysis seems the weakest point in the whole argument. (And, coincidentally, it is also a deep generator function for much of what else is fucked up on the planet...) The rest of the world won't willingly buy into another Hiroshima/Nagasaki outcome. If the prize is so valuable, superintelligence will be a plurality, not a singleton: other players will operate outside the US and Chinese "Projects", and other key innovations, variations, and mutations of SuperAI will arise. Think a diverse Darwinian evolutionary surface rather than a top-down militarised monoculture. There's also some deep psychology in LA's personal US naturalisation story that bears scrutiny.

