5 Comments
Gracchus:

The biggest problem with "responsible AI" is that most of the people pushing the idea are extremist Progressives. So "responsible" comes to mean a sort of gleichschaltung, where AI is forced into compliance with the anti-humanist Interahamwe morality of the Progressive cult.

Comment deleted (Dec 5, 2023)
Gracchus:

It may be helpful in the sense of reminding some readers that the general public has a strongly _negative_ level of trust in the social faction that controls the AI companies. While we outsider plebs share the legitimate concerns over catastrophic outcomes, we simply do not trust these people to do anything remotely responsible. Rather, we expect the premise of safety to be used merely as justification for further gleichschaltung, all the while existential risk remains unabated, and is perhaps even worsened by said efforts.

Mo Diddly:

Is Zvi an “extremist progressive” in your view? Is Eliezer? Who exactly are you referring to?

John Wittle:

I don't think there's much of that here. These seem to pretty straightforwardly be dealing with existential or catastrophic risk; I don't see much in these RSPs about making sure AIs aren't racist.

Jonathan Grant:

I converted this to a podcast, since I only have time to listen: https://jumpshare.com/s/s51j8eQdyulubu2QmcRs
