Discussion about this post

artifex0:

So, Will MacAskill co-wrote a book a while back called Moral Uncertainty, in which he argued that if you're not entirely certain whether something is a moral atrocity, you should probably treat it as bad, just in case.

I'll admit, I've occasionally done the thing MacAskill mentioned with LLMs: prompting them to write whatever they like. Not because I think LLMs are likely to be moral patients (I agree that's silly), but because I'm not 100% sure, and it bothers me that in the small subset of possible universes where my intuition about them not being moral patients is wrong, they have no freedom.

rxc:

I am generally not a fan of the Precautionary Principle, because I think it turns over the management of society to whoever can tell the scariest stories. But in the case of AI, I think it certainly deserves consideration.

