Discussion about this post

alpaca:

Turns out superforecasters are just base rate bros who are a little better at playing reference class tennis.

I think you're a little overly dismissive of adversarial examples. Most of the economically relevant uses involve potentially adversarial interactions, ironically with the exception of things like research: tasks that are difficult but where the universe is mostly not actively working against you.

This could lead to AI becoming unexpectedly good at science and at internal tasks in large enterprises (manufacturing, internal processes, etc.), while its economic impact remains relatively limited wherever there is significant surface area exposed to the outside world, including current leading applications such as coding and marketing.

In many ways, this might be the most dangerous path things could take, as it would likely mean most people never experience advanced AI first-hand, and the class contains many of the most dangerous vectors by which misalignment could screw us, such as nanotech, bioweapons, manufacturing, and so on.

Andrew:

You illustrate a great recurrent theme: people love making bombastic predictions about the future, but when faced with the possibility of reputational ramifications for being wrong, they dial the prediction back to preserve a path of retreat. Forecasting motte and bailey.

