7 Comments

The biggest problem with "responsible AI" is that most of the people pushing the idea are extremist Progressives. So "responsible" comes to mean a sort of Gleichschaltung, where AI is forced into compliance with the anti-humanist Interahamwe morality of the Progressive cult.

This is an unhelpful comment at best, and misguided considering that the proponents of RSPs are coming from a position of legitimate worry about x-risk and catastrophic outcomes from training and deploying AI models. I don't see how this reply is anything but axe-grinding and misdirected frustration with the perceived actions of a group of people you don't actually care to understand.

It may be helpful in the sense of reminding some readers that the general public has a strongly _negative_ level of trust in the social faction that controls the AI companies. While we outsider plebs share the legitimate concerns over catastrophic outcomes, we simply do not trust these people to do anything remotely responsible. Rather, we expect the premise of safety to be used merely as justification for further Gleichschaltung, while existential risk remains unabated and is perhaps even worsened by said efforts.

Is Zvi an “extremist progressive” in your view? Is Eliezer? Who exactly are you referring to?

I don't think there's much of that here? These seem to be dealing pretty straightforwardly with existential or catastrophic risk. I don't see much in these RSPs about making sure AIs aren't racist.

re:

> Importantly: What would be the dynamics of a world with such a model available to the public or select actors? What economic, political, social, military pressures and changes would result? As you go to ASL-4 and higher you really, really need to be thinking such things through.

One of my recent thoughts/worries is that the existing venues for exploring this (e.g., the simulated diplomacy and crisis-management strategy games used in training and academic settings) are fundamentally incapable of providing good feedback, and, perhaps more worryingly, are likely to lead us astray. I expect this because the shift in power dynamics and influence brought about by an AI model warranting ASL-4 categorization seems as if it would qualitatively change things in 'unknown unknown' ways we can't account for in our thinking, even after significant reflection and debate.

I converted this to a podcast since I only have time to listen: https://jumpshare.com/s/s51j8eQdyulubu2QmcRs
