Discussion about this post

DangerouslyUnstable

I'm actually not too concerned about human flourishing/values in the optimistic case where we get a benevolent AI that doesn't kill everyone and can just do all the jobs better than we can.

The reason is that we already have proof that humans can happily flourish in that paradigm: hobbies. I have several hobbies, some of which can be lumped into the big category of "producing food", including cooking, gardening, and beer brewing (among others).

I am not even close to the best at _any_ of these. I'm not even good enough that it's difficult to find other humans who are better, let alone compared to industrial processes. For essentially trivial amounts of money, I can buy products better than anything I can produce, or ever will be able to produce, yet these activities still bring me a great deal of joy and meaning.

I enjoy these hobbies because performing them is intrinsically valuable, with no need to measure my skill or ability against anyone else's. I actively avoid steps that would improve the outputs but decrease my involvement, and I take plenty of actions that improve the outcome while maintaining or increasing my involvement.

It is entirely possible that all of human existential value will come from these kinds of hobbies (family and small-group social interaction are another such example).

In my job, I contribute, in some small way, to furthering humanity's understanding of the physical world. If I were no longer able to contribute productively to that endeavor, it would be worse than the world in which I can continue to do so. But it would most definitely not be a world in which I take no joy or find no meaning.

Jacob Buckman

You're correct that I'm not worried in the "Bobby McFerrin sense", although I would say my actual position is closer to Marcus Aurelius. But you've missed a key point in my argument: *not* creating an AI of a certain capability level (or delaying it) could plausibly lead to an *increase* in x-risk. So your "obvious" intervention of "stop the breakthrough from being found" is, in my opinion, no more likely to mitigate x-risk than any other. That intervention is still just pushing the double pendulum up at t=2: doing something that vaguely feels correct given what we know right now, but with ultimately no hope of meaningfully influencing the eventual outcome.
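
To make the pendulum metaphor concrete, here is a minimal, illustrative Python sketch (the specifics are my assumptions, not anything from the original post: unit masses and rod lengths, the standard ideal double-pendulum equations of motion, RK4 integration). Two trajectories that start one billionth of a radian apart decorrelate completely within tens of seconds, which is the sense in which a push at t=2 cannot be aimed at any particular outcome later on.

```python
# Illustrative sketch only: an ideal double pendulum with assumed unit
# masses and rod lengths, integrated with classical RK4. Two runs whose
# initial angles differ by 1e-9 rad diverge to O(1) separation, showing
# why an early nudge can't target a specific late-time outcome.
import math

G, L1, L2, M1, M2 = 9.81, 1.0, 1.0, 1.0, 1.0

def derivs(s):
    """Angular velocities and accelerations for the ideal double pendulum."""
    t1, w1, t2, w2 = s
    d = t1 - t2
    den = 2 * M1 + M2 - M2 * math.cos(2 * d)
    a1 = (-G * (2 * M1 + M2) * math.sin(t1)
          - M2 * G * math.sin(t1 - 2 * t2)
          - 2 * math.sin(d) * M2 * (w2 ** 2 * L2 + w1 ** 2 * L1 * math.cos(d))
          ) / (L1 * den)
    a2 = (2 * math.sin(d) * (w1 ** 2 * L1 * (M1 + M2)
          + G * (M1 + M2) * math.cos(t1)
          + w2 ** 2 * L2 * M2 * math.cos(d))
          ) / (L2 * den)
    return (w1, a1, w2, a2)

def rk4_step(s, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = derivs(s)
    k2 = derivs(tuple(x + 0.5 * dt * k for x, k in zip(s, k1)))
    k3 = derivs(tuple(x + 0.5 * dt * k for x, k in zip(s, k2)))
    k4 = derivs(tuple(x + dt * k for x, k in zip(s, k3)))
    return tuple(x + dt / 6 * (a + 2 * b + 2 * c + e)
                 for x, a, b, c, e in zip(s, k1, k2, k3, k4))

# state = (theta1, omega1, theta2, omega2); starts differ by 1e-9 rad
a = (math.pi / 2, 0.0, math.pi / 2, 0.0)
b = (math.pi / 2 + 1e-9, 0.0, math.pi / 2, 0.0)

dt = 0.001
for step in range(30001):                 # 30 simulated seconds
    if step % 5000 == 0:                  # report every 5 s
        print(f"t = {step * dt:5.1f} s   |dtheta1| = {abs(a[0] - b[0]):.2e} rad")
    a, b = rk4_step(a, dt), rk4_step(b, dt)
```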

You've slightly misunderstood my position on Christiano-type research. I think it's good research because it will yield meaningful, predictable benefits to society. But, in keeping with my overall position, I don't think it's effective at mitigating ASI x-risk. (And if I were someone whose sole evaluative criterion were the mitigation of ASI x-risk, I would not consider it good research.)
