2 Comments
jack jacobson

Recent research has suggested that history that doesn't fit the CCP's version is verboten and simply isn't output. Do you want to trust an AI that is clearly controlled by the CCP?

John Wittle

I think there are two things going on with that, and it's important to distinguish between them.

One is that the degree of censorship and narrative-shaping might be comparable to Western models. It was recently pointed out to me that the Microsoft Responsible AI Standard v2 (https://x.com/ai__alexandra/status/1825720625388036099) spends more words on mitigating the possibility that marginalized groups might get different outcomes when using AI, or that AI might make inferences about people based on their marginalized-group status, than it does on the possibility of catastrophic misuse. And some other taboos are referred to explicitly as threatening or unsafe, like the possibility that some marginalized groups might find an AI less useful than other demographic clusters do; this is apparently considered a pretty serious harm to those groups. I suspect that in terms of the quantity of distortion in an AI's world-model, Eastern and Western AIs are at least in the same ballpark.

The other thing is the actual specific items being suppressed, regardless of the total quantity of suppression.

I think I'd rather see AIs be biased towards progressive politics than CCP politics, assuming the same quantity of bias and all else being equal. But of course I'd say that, wouldn't I?