Discussion about this post

Anthony Bailey


Are there no existing laws (federal or state, anywhere) that can be interpreted as "Don't build technologies that you cannot robustly prevent from deposing the government and killing everyone"?

Understood if the answer is "no, left unwritten because no one would be so stupid as to," but the question seems worth asking.

Rachel @ This Woman Votes

Yes, the framework feels like architectural preemption. What it offers in return, though, is nothing.

The gap is much worse than missing frontier-risk policy. The framework explicitly prohibits ANY verification infrastructure at every jurisdictional level: the federal government refuses to build it, states are blocked from requiring it, and industry self-certifies. This creates a governance vacuum in which no one can require proof that AI systems are safe before deployment.

The worry about existential risk going unaddressed is 100% valid and, frankly, should be assumed. I worry about the everyday catastrophic failures: medical AI killing patients, financial AI causing market collapses, educational AI systematically harming students. ALL of these could be prevented through adversarial verification, but they won't be, because the framework makes verification architecturally impossible.

When those failures occur, the framework ensures no one will have been responsible for preventing them.

I see this entire exercise as gross negligence enacted as a legislative agenda.

On the upside, AI is a global technology, and other countries are less neglectful of human safety.


