Discussion about this post

Nikita Sokolsky:

"The risk is that this report greatly expands and other requirements are added over time, in a way that do introduce an undue burden, and that this also applies to a wider range of systems, and also that none of that was actually necessary for safety. That is a possible world. I don’t expect it, but it makes sense to keep an eye."

Assuming the "death by a thousand cuts" scenario is plausible, isn't the best time for pushback _right now_ rather than in a few years when the requirements get ratcheted up? Yes, the requirements _today_ amount to sending out a few reports, but the optimal strategy for fighting this is to ignore the "it's just reports" framing and blow it up into a huge political fight.

This is a good parallel to the fight against extinction-level AGI: safety people aren't exactly convinced by the argument that we shouldn't push back against AGI development today because AI capabilities aren't that significant just yet. Instead, they want to start the fight _now_ rather than wait for those capabilities to actually materialize.

Yusef Nathanson:

I love the symmetry between the opposing sides in this debate. For some, AI unaligned with the interests of powerful corporations and governments seems like the overwhelming existential risk to humanity. For others, it’s AI aligned with those interests that seems most risky.

Note that I reject the notion that AI could be aligned with humanity as such. Clearly, nobody serious wants AI aligned with North Korea, religious extremists, anti-vaxxers, or other assorted “bad actors”.
