
>The third clause is saying that if a clause is found unconstitutional, then rather than strike even that clause, they are authorized to modify that clause to align with the rest of the law as best they can, given constitutional restrictions. Isn’t that just… good? Isn’t that what all laws should say?

No, it isn't, because the judiciary does not write law. If it did, we would only have needed one legislative act, saying "judges make the rest of the law." You're saying all laws should have a clause saying "or whatever the judge feels like." Sorry, but do you hear yourself? If you're going to talk like an intellectual, consider these things, at a bare minimum.

With regard to the rest of this hootenanny, I think it is quite reasonable to suspect that the usual pulpers are pulping out palaver in the hope of securing themselves cushy careers. The institution putting out this paper being, presumably, populated by people who would be prime candidates for such cush, this should be our null hypothesis at this point.

As for the efficacy even of a well-run program or agency of such a nature -- have you all forgotten about the internet? If they manage to choke American AI development, open source or otherwise, everyone with two brain cells will find their way to the first kooky AI made by some Romanian that's not subject to the petty censors. Then we won't know a damn thing. As I said, you all have apparently forgotten about the internet.

You can expect about zero trust for such an agency from reasonable people, because it is very well evident that the mandate of such an agency will not be restricted to "existential threats," but will include such "threats" as the robot saying no-no words or being mean to people. That these two "problems" - problems which, if you recall, most two-year-olds have, and which we have never managed to quite fix in humans - have become such touchstones of discourse is utterly ridiculous.

Drop the petty censoriousness - not you, but you know who - and maybe people will trust you-know-who on the existential risks. That isn't looking to happen any time soon, so we won't be getting anywhere fast. I would recommend couching your writing in that understanding in the future.

Beyond the certainty of internet-accessible unborked AIs in the near future - something which people haven't even begun to scream about, I fear - there is also the certainty that nations taking less-restrictive approaches to AI will develop better AI, faster. This is so obvious I shouldn't even have to say it.

The consequence of bureaucratic strangling will be that other countries have AI and America has dirt. This is like calling for nuclear disarmament during the Manhattan Project -- if we also had full knowledge that every other major power was doing a Manhattan Project. Sorry, but are you dumb? Are all of you just brick dumb?


If I recall correctly, it's pretty standard for contracts to have clauses that define what happens if the government should declare some part of the contract to be illegal. (Both at Microsoft and at my current university I have professional contract lawyers to write this kind of stuff for me, but I seem to recall they usually put that stuff in.)

I guess laws can also have a "what if part of this law turns out to be unconstitutional" clause.


I can't help wondering if, in a real emergency, the US government e.g. fires some missiles into Google's data centres and argues about whether it was legal or not afterwards.

(Look, we had about 3009 ms to shut you down, and sorry, the only way to do it killed a couple of hundred of your staff and did a couple of billion dollars in damage, but there it is)


I'd like to defend the standard Cato/Reason/libertarian take.

There are very real public choice concerns that most people crafting legislation tend to ... completely ignore. There is a scenario that could tank this proposed government organization, and I think this scenario is not just possible but the *most likely outcome*:

The President and the person they appoint to head the agency consider election politics more important than AI safety.

What this means in practice is that the prioritized pieces of regulation will look like this:

1. Regulations that benefit the president or people who back the president and can be justified under the guise of "AI safety".

2. Regulations that benefit the congressional representatives with the most power over the agency.

3. (if and when the agency itself becomes large or powerful) Regulations that benefit and are liked by the agency's own workforce.

Depending on who gets hired at the agency, priority #3 might be the best chance of actually getting regulation aimed at the true purpose of this legislation.

This is the big reason why the libertarian crowd harps on unconstitutionality. There is supposed to be a set of protections in place to limit the power of these government organizations. The more powerful a government organization is, the *more* vulnerable it is to being taken over by opportunistic political players.

The legislation should be drafted on the assumption that your political enemies, who hate you, will get first dibs on running the organization. For this organization, that might mean thinking back a few years and imagining Trump appointing one of his family members to run it. Imagine everything that can go wrong in that scenario, and then you can begin to see why the libertarian crowd gets worried about these sorts of things.

From the perspective of a Democrat or Republican I can see why they think libertarians are always "crying wolf". But imagine you are a libertarian and you have no political power, and everyone in power *is* your political enemy. The way Democrats and Republicans feel half the time, when their opponents are in power, is how libertarians feel all the time. We aren't falsely crying wolf. We are in fact ruled by wolves, and no, we aren't gonna leave the gate open for your large grey dog that lives out in the woods.


10^24 FLOPs seems like a low threshold ... I think the top models are already higher than that, and, well, they're not terribly dangerous. Or really dangerous at all, except to the extent that they might improve the work efficiency of people already up to no good, which is true of a whole lot of tech.

Still, at least they settled on a decent metric, which is fine. I'd probably go two orders of magnitude higher.
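
For a rough sense of where that threshold sits, here is a minimal back-of-the-envelope sketch, assuming the common ~6 × parameters × tokens approximation for dense-transformer training compute; the model sizes below are illustrative assumptions, not figures taken from the bill or from any particular lab.

```python
# Rough training-compute estimate using the common approximation:
# total training FLOPs ~= 6 * (parameter count) * (training tokens).
# The example model sizes are hypothetical, chosen only to show where
# the 1e24 threshold (and one two orders of magnitude higher) would land.

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute for a dense transformer."""
    return 6 * n_params * n_tokens

BILL_THRESHOLD = 1e24     # threshold discussed above
HIGHER_THRESHOLD = 1e26   # "two orders of magnitude higher"

examples = [
    ("hypothetical 70B params, 2T tokens", 70e9, 2e12),
    ("hypothetical 500B params, 10T tokens", 500e9, 10e12),
]

for name, params, tokens in examples:
    c = training_flops(params, tokens)
    print(f"{name}: ~{c:.1e} FLOPs | over 1e24: {c > BILL_THRESHOLD} | over 1e26: {c > HIGHER_THRESHOLD}")
```

On this approximation, a 70B-parameter model trained on 2T tokens lands just under 10^24, while a 500B-parameter model on 10T tokens comes out around 3×10^25 - consistent with the point above that frontier-scale training runs plausibly clear the bill's threshold but would sit well under one set two orders of magnitude higher.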

So far as the severability clause goes, yeah ... I'll second what Dr. Y already wrote; you can't delegate the rewriting of legislation to the court. Unconstitutional in any case, but also, why would that be a good idea? Oh hey, the Supreme Court justices and their clerks now need to be experts on AI? Or are they in turn supposed to delegate it to someone else?

I'm not especially happy with their overall risk tier list. The most that can be said for it is that it allows for further monitoring and evaluation without slamming on the brakes completely. But in the end there are really three, or maybe even only two, tiers of risk: A) this is mostly safe, B) this is too dangerous to ever open source but can be used under heavy monitoring, C) if you build it, it will kill you all. Risks like 'this could provide bomb-making recipes', 'it allows you to generate extremely racist memes at warp speed', or even 'can act as an expert advisor on bioweapons development' are relatively insignificant compared to the potential upside. They are not enough to justify this level of regulation and should really be ignored in the context of this legislation.

Still. Overall, I like it, and this is a lot more thoughtful than a lot of legislation. It also suggests that people are at least aware of and working to mitigate the risks in a serious way, which is grounds for hope.


I notice this is framed as a "national" vs "international" effort. Several months ago, there appeared to be a move to create an IAEA-type outfit to regulate AI - should that be happening in parallel with this national legislation track? Or is the focus on US legislation sufficient for AI safety overall?


Regarding the "MAJOR QUESTIONS DOCTRINE" paragraph, this is in response to the Supreme Court recently coming up with the "Major Questions Doctrine", whereby it sometimes strikes down rules made under a broad grant of authority because it thinks (in its infinite wisdom) that Congress surely couldn't have intended the authority to be that broad. For example, striking down greenhouse gas regulations on the basis that, despite the Clean Air Act authorizing the EPA to regulate pollution, regulating this pollution was kinda too much of a big deal.

This paragraph is just saying, "yeah, we did actually intend it to be that broad" - so if the courts strike down a rule, it would have to be on some other basis besides "even though the text of the act expressly allows this rule, it seems like kinda a big deal so it's not allowed."


Typo:

"The bill did gave its critics some soft rhetorical targets"

gave -> give


The limits are defined in tiers relative to their level of "major security risk", defined as (1) substantial national security risks to the United States, (2) "global catastrophic and existential threats", (3) "risks that AI will...permanently escape human control".

I think "major security risk" is pretty ill-defined, and as they iterate on this bill, that is one thing the authors should tighten up:

(1) "substantial damage...[to US] national security"--I worry that "substantial damage" could mean just about anything. Does that vague level of "substantial damage" justify the limits the bill sets on training AI models?

(2) "global catastrophic and existential threats" is defined as "threats that with varying likelihood may produce consequences severe enough to result in systemic failure or destruction of critical infrastructure or significant harm to human civilization"--sounds bad, but does that mean "destruction of critical infrastructure" alone is sufficient? What about one bridge down--is that "critical infrastructure"? [low confidence on this, this is a lawyerly thing I shouldn't claim to have insight on]

(3) "risks that AI will...permanently escape human control"--there are supposedly 10,000+ computer viruses floating around the Internet, most of them not under human control, and potentially there permanently. Those are "AI" that have permanently escaped human control, right? It is critically important that we prevent superintelligent AI from escaping human control, but we probably don't want to indefinitely block the next GPT model purely because it might enable the creation of a new virus.

By tightening up the definition of "global catastrophic and existential threats" to something more like a quantified amount of risk to human life, financial cost, or national security, we could then use the bill's tiers--"medium concern", "high concern", and "extremely high concern"--to refer to a tightly quantified amount of catastrophic or existential risk.
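
To make that suggestion concrete, here is a minimal sketch of what quantified tiers could look like; the specific casualty and dollar thresholds are invented placeholders for illustration, not anything drawn from the bill.

```python
# Hypothetical illustration of quantified risk tiers. The numeric
# thresholds below are placeholders invented for this sketch; the bill
# defines "medium", "high", and "extremely high" concern only in
# qualitative terms.

from dataclasses import dataclass


@dataclass
class RiskEstimate:
    fatalities_if_realized: float   # estimated deaths if the harm occurs
    damage_usd_if_realized: float   # estimated economic damage in dollars
    probability: float              # estimated chance the harm occurs


def concern_tier(risk: RiskEstimate) -> str:
    """Map a quantified risk estimate onto the bill's named tiers."""
    expected_deaths = risk.probability * risk.fatalities_if_realized
    expected_damage = risk.probability * risk.damage_usd_if_realized
    if expected_deaths > 1e6 or expected_damage > 1e12:
        return "extremely high concern"
    if expected_deaths > 1e3 or expected_damage > 1e9:
        return "high concern"
    return "medium concern"


# Example: a 10% chance of ~50,000 deaths and ~$20B in damage.
print(concern_tier(RiskEstimate(5e4, 2e10, 0.1)))  # -> high concern
```

However crude, numbers like these would at least give the agency thresholds that can be argued about and audited, rather than hinging on whether a single downed bridge counts as "critical infrastructure".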
