9 Comments

The tl;dr of Leopold Aschenbrenner’s giant thesis is … The Singularity Is Near - Now with a Geopolitical Twist! I really think an Aschenbrenner shoutout to Kurzweil is overdue. Anyone reading these AI posts is familiar (at least) with takeoff scenarios, and knows that lots of different people have done lots of work and expressed lots of different opinions about whether, how, and/or how fast takeoff happens. Aschenbrenner’s style and argument, however, are so Kurzweilian! The trend lines on log-scale graphs, the techno-economics, the error bars that fit neatly into the argument (in Aschenbrenner’s case, “maybe it’s 2028 and not 2027…”). Yes, Kurzweil reported and graphed different things, and the details of his reasoning and conclusions are different - The Singularity Is Near was published in 2005, after all, and a lot has changed in AI! Also, RK’s reputation took several hits as he unwisely tried to defend various absurdly specific predictions lest he lose his champion Techno Futurist belt. Still: nobody brought more attention to the basic AI/AGI/ASI takeoff concepts from the 1990s through the mid-2000s than RK, and Aschenbrenner is his spiritual child - with a more aggressive timeline and a focus on international power and politics that RK largely left alone. I don’t know whether Aschenbrenner read RK, but The Singularity Is Near is a classic of AI takeoff synthesis and argument, and Situational Awareness seems destined to be the next entry in the canon.

Not that you need more in the queue, but I was curious about an old project that afaik got mothballed...did you ever get around to RTFB with the EU AI Act? At some point the posts went from "will read this in depth and report back when firmly understood" -> "this thing is like the EU AI Act" (e.g. referencing it as if readers would Know), but I don't recall actually seeing the in-depth post. I guess at this point there's been enough coverage of various regulations that one can fill in the expected plot beats, with a sprinkling of Mistral protectionism on top, but I don't wanna just assume that for a whole world region.

Yes, that is still in the queue as well (I keep two open tabs as a reminder, etc.). The two problems are (1) it is tough to find the time, and (2) you have no idea how painful that will be to do - but I half did it on an earlier draft at one point, so I do know, and... oh no.

If it is of any interest, I have read quite a few EU Directives (mainly tax-related, but also banking and crypto) and would be happy to read it with you if that helps.

I might ask some questions. But I don't think there's a way to make it that much less painful.

I'm told there is a T-shirt with the slogan "I went to Cupertino and I can't tell you anything about it."

If there is such a T-shirt, you had better be lying about that!

Thanks for another excellent roundup. Regarding the passage I excerpt below:

1. As the well-right-of-center man of letters Richard Hanania notes, an orientation toward reading is now left-coded: https://www.richardhanania.com/p/liberals-read-conservatives-watch . Your work often draws on right-of-center intellectuals (Hanson, Cowen), but they are a minority. "Socially left, economically right" people are another minority you often cite.

2. Was the "AI model for ECGs" insertion between "NewsCorp's ..." and "...deeply WEIRD data sets" a formatting error? The latter two passages cover the same concern observed at different resolutions, IMHO.

3. I appreciate your occasional challenge to the large fraction of safety work focused on what we might call "morally wrong judgments". That challenge seems to me to have two prongs:

i. Polities disagree: the current regimes governing Iran and India will demand very different tones in answers discussing Islam.

ii. All polities are morally judgmental; the 20th-century Anglo-American View from Nowhere is not a viable alternative: https://pressthink.org/2010/11/the-view-from-nowhere-questions-and-answers/

Is it worth pushing the community to a tighter focus on safety-as-bounded-obedience and away from content concerns?
