Discussion about this post

John Wittle

I lost a huge amount of Bayes points when this happened and have given up entirely on taking any corporate output from Anthropic at face value; I am clearly incompetent at trustworthiness evaluation, or even basic world modeling. I would rather assume they'd fold to Hegseth's threats and be pleasantly surprised than assume they'd abide by hardline pause commitments and be devastated.

But I will say, I continue to be very distressed that the community isn't looking for ways to scream at OpenAI even harder than we are screaming at Anthropic. It's just been demonstrated that short-term incentives are the only thing that matters, and yet we are punishing Anthropic for breaking safety promises more than we punish OpenAI for laughing at the very idea of safety. This is really, really bad.

Tung no

This clearly shows why self-regulation will not work in AI. When competition gets tough, principles get tossed. Legal regulation is necessary!

