Discussion about this post

Salty Spittoon:

You may have written about this elsewhere, but what are your thoughts about the argument that even if alignment was achieved, we'd still be screwed because a bad actor could just align an AI to be bad, including existential-risk level bad? Is the strategy just: if we don't figure out alignment we're quite possibly toast, so let's figure it out, and if there are problems afterwards, we'll cross that bridge when we get to it?

Jai:

Those aren't Roon's beats; they're quoting the final verse of ERB's Gates vs. Jobs rap battle, which incidentally is excellent.

https://www.youtube.com/watch?v=njos57IJf-0
