Discussion about this post

Ethics Gradient

Presumably a typo here:

"Rob Wilbin is right that it is common for [expert in X] to tell [expert in Y] they really should have known more about [Y], but that there are far more such plausible [Y]s than any person can know at once."

I assume that this should read

"Rob Wilbin is right that it is common for [expert in Y] to tell [expert in X] they really should have known more about [Y], but that there are far more such plausible [Y]s than any person can know at once."

Right?

Randomstringofcharacters

> How can you stop an AI from ‘spreading rumors’?

The language in that Chinese AI law is extremely similar to the language in their current internet censorship laws, which, as you say, are so broad that in practice everyone is constantly violating them. From their perspective this is a feature, not a bug, because it gives them cover to interfere at will via their usual approach of selective enforcement.

Historically this has meant that to have a viable business model in China, companies have had to develop a good relationship with the regulator, both by bribing and flattering the correct officials and by following whatever their directives of the day are. That leaves open the risk that a sufficiently well-connected company is allowed to do whatever it likes, or that companies are deliberately directed by the government to develop malicious uses for AI.

I would assume that at minimum they'd require the AI companies to report on people, suppress certain information, and promote the "correct" government narratives on various topics, as web and social media companies already do.
