Discussion about this post

Kevin

I think there is just a different culture. The American AI labs, Google, OpenAI, and Anthropic, are all pretty similar. People move between them. Each has a faction that believes "AI safety" is a real thing. They might pay more or less attention to it, obviously Anthropic cares more than OpenAI and so on, but it's a matter of degree.

The Chinese labs just don't seem to think that "AI safety" is an issue at all. Or at least they don't communicate about it publicly.

I feel like the whole framing is different in the two countries. In the US, worries about the power of the tech industry are out in the open. Public thinkers and the media criticize the tech industry for all sorts of things, sometimes there is regulation, but much more often the industry responds to those open criticisms at a lower level before they escalate. So it's natural for there to be a big public debate about the downsides of AI, even potential future ones, and for the tech companies to try to take that into account.

In China, worries about the power of the tech industry are things the Communist Party handles quietly. If a company gets too powerful, or the Party disagrees with its direction, it takes the company over, or the founder disappears for a while, or who knows what actually goes on. Companies just don't do things like openly publish a list of the ways their actions risk causing political chaos.

So I just don't think the Chinese labs will ever handle safety concerns the way American labs do. And even if they were doing a lot of safety work, I think they'd keep it quiet. It isn't the sort of thing China would permit a public debate on.

Kevin M.

"Frankly, this is deeply irresponsible and completely unacceptable."

It's comments like this that make it hard for me to take the doomer position seriously. I'm open to (and generally persuaded by) the idea that AGI superintelligence will be something we can't control and won't be able to even understand, so the risks are enormous. I'm also open to the idea that it is important that we do our best to make the models legible now, so we can hopefully continue to understand them going forward.

But there is simply nothing particularly dangerous about the models now, other than mundane threats like hackers using them to hack better. DeepSeek V3.2 is not going to take over the world.
