Discussion about this post

Ethics Gradient:

Writing a dominant assurance contract that the frontier labs can sign on to seems like a very "You Can Just Do Things" project.

Trevor Vossberg:

>Dean Ball points out that we do not in practice have a problem with so-called ‘woke AI’ but claims that if we had reached today’s levels of capability in 2020-2021 then we would indeed have such a problem, and thus right wing people are very concerned with this counterfactual.

>Things got pretty crazy for a while, especially in that narrow window. If today's capabilities had emerged during it, Dean Ball is if anything underselling how crazy it would have been, and we'd have had a major problem until that window faded, because the labs would have felt the need to do it even if it hurt the models quite a bit.

I've seen many murmurings from a lot of sources about how bad it was in the woke times, but there doesn't really seem to be a clear account of it. As a skeptic, where would I go to find the best case for it having been really bad? Are people still unable or unwilling to talk about it, now that the executive has swung completely in the other direction?

