>The third clause is saying that if a clause is found unconstitutional, then rather than strike even that clause, they are authorized to modify that clause to align with the rest of the law as best they can, given constitutional restrictions. Isn’t that just… good? Isn’t that what all laws should say?

No, it isn't, because the judiciary does not write law. If they did, we would only have needed one legislative law, saying "judges make the rest of the law." You're saying all laws should have a clause saying "or whatever the judge feels like." Sorry, but do you hear yourself? If you're going to talk like an intellectual, consider these things, at a bare minimum.

With regard to the rest of this hootenanny, I think it is quite reasonable to suspect that the usual pulpers are pulping out palaver in the hope of securing themselves cushy careers. The institution putting out this paper being, presumably, populated by people who would be prime candidates for such cush, this should be our null hypothesis at this point.

As for the efficacy even of a well-run program or agency of such a nature -- have you all forgotten about the internet? If they manage to choke American AI development, open source or otherwise, everyone with two brain cells will find their way to the first kooky AI made by some Romanian that's not subject to the petty censors. Then we won't know a damn thing. As I said, you all have apparently forgotten about the internet.

You can expect about zero trust for such an agency from reasonable people, because it is very well evident that the mandate of such an agency will not be restricted to "existential threats," but will include such "threats" as the robot saying no-no words or being mean to people. That these two "problems" - problems which, if you recall, most two-year-olds have, and which we have never managed to quite fix in humans - have become such touchstones of discourse is utterly ridiculous.

Drop the petty censoriousness - not you, but you know who - and maybe people will trust you-know-who on the existential risks. That isn't looking to happen any time soon, so we won't be getting anywhere fast. I would recommend couching your writing in that understanding in the future.

Beyond the certainty of internet-accessible unborked AIs in the near future - something which people haven't even begun to scream about, I fear - there is also the certainty that nations taking less-restrictive approaches to AI will develop faster & better AI, faster & better. This is so obvious I shouldn't even have to say it.

The consequence of bureaucratic strangling will be that other countries have AI and America has dirt. This is like calling for nuclear disarmament during the Manhattan Project -- if we also had full knowledge that every other major power was doing a Manhattan Project. Sorry, but are you dumb? Are all of you just brick dumb?

Expand full comment

There is only one other country who can build AI and they are well aware of the dangers. Despite the fearmongering, this is indeed a danger we can coordinate on.

Expand full comment

>There is only one other country who can build AI

Excuse me, what? Who told you this? Why do you believe this?

Even if true -- and it isn't -- such coordination seems unlikely unless Americans, broadly speaking, can refrain for a moment from fingering their own buttholes in search of new ways to fearmonger about China...as that seems to be quite a favorite activity for many on both "left" and "right" these days, I don't think there are good chances on that front either.

That's merely from the American side - what motive does China have to hinder their own capabilities, just because some folks across the sea think computers are about to kill us all? If I were a Chinese person, my response would be, "Thanks for the advice, but we'll take our chances, and we certainly won't be trusting you folks - who spent the last five years calling us all bat-eating bugmen - any further than we can throw you."

But also, other countries exist, and nothing is keeping them from working on AI -- quite a few countries went from zero to 100 on nuclear weapons in the few decades after they were unveiled. There is nothing stopping Britain, France, Germany, Russia, Iran, South Korea, Japan, India, Israel, or Italy from developing AIs in the span of a few years -- and if bureaucratic strangling hurts AI development in America, those researchers will go elsewhere. If I were one, I wouldn't be happy working under the petty censors. Such researchers can even work remotely. The internet, remember?

I don't think you'd say such a goofy thing if you didn't have to -- but you do have to, as otherwise, the idea that you could future-proof by regulation in this area is -- ka-boom -- exploded. Welcome to 2024! It's scary, but deal with it sensibly, or you will make things worse.

Your policy is "we will simply let no one bad have the magic stick" when magic sticks grow, if slowly, on trees all over the world. This is like the War on Drugs, if you could send drugs by satellite. It's a farce, a grifting farce, and a rather rudely obvious one.

Sooner or later, everyone will have the magic stick, and everyone's magic stick will be just about as good as everyone else's. Then the question becomes, how does one use the magic stick more effectively than other countries?

There's no easy answer, and that terrifies you. Understandable. Yet the answer is the same as ever: nations are preserved by the intelligence, fortitude and perseverance of their citizens, and by nothing else.

Expand full comment

As someone familiar with that nation, I can tell you it is capable of rational decisions to avoid extinction, and that the same drive for power leads to a desire for preservation.

This is more like the War on Terrorism, which has mostly worked.

Expand full comment

Beyond that, the entire ramble of "this will never work!!!!" is amusing given the surveillance capabilities the world already has. If you don't think ANI will effectively find violators trying to kill the world -- indeed, spot behavior patterns before the activity even starts -- and that the connected world is also a well-monitored world, then I think the magic sticks might be more in your diet.

Expand full comment

Your take is "we will simply invent a magic stick so powerful that it will keep anyone else from inventing magic sticks." My take is "everyone will probably have a magic stick eventually."

Your theory here rests on the assertion that you could control something with such power. You probably still think it's possible to "align" something smarter than yourself, don't you?

Why don't we simply "align" our children into never saying naughty words and always playing nice?

I think you cling to these misbegotten notions because you aren't confident in your ability to win a conflict fought by magic stick. What would determine the winner of such a contest? Intelligence, fortitude, and perseverance.

These new magic sticks are a lot more powerful than the old kind - videlicet, the pen - but they are wielded with the same brain. I'm confident in mine, but it seems I'm one of few.

Expand full comment

I think you misunderstand many things, which was probably evident from the beginning when you thought that China is emotional and self-destructive.

Ta.

Expand full comment

Worked? Hello? Hamas?

Any more dumb things to get out of the way?

We dealt with Saddam, but that wasn't even supposed to be part of it, and if you hadn't heard, the Taliban are back in power in Afghanistan. The Saudi funders still have more money than they can count. "Worked!" Nothing has happened *yet*, doofus. Except for, you know, Hamas.

Anyway -- as I said, there are other countries in the world. You still don't seem to be reckoning with that fact.

Expand full comment

Most terrorist attacks have indeed been pretty well suppressed, and despite your weird passion, I don't see why the US and China together can't effectively enforce against terrorism.

Expand full comment

And Hamas, etc. certainly aren't winning. And the number of anti-life AI terrorists is definitely much smaller.

Sorry for your hopes.

Expand full comment

"Most", "pretty" -- I'll chalk that one up as a gottem. Think before you type.

Expand full comment

Mistral seems like a good start.

Expand full comment

If I recall correctly, it's pretty standard for contracts to have clauses that define what happens if the government should declare some part of the contract to be illegal. (Both at Microsoft and at my current university I have professional contract lawyers to write this kind of stuff for me, but I seem to recall they usually put that stuff in.)

I guess laws can also have a "what if part of this law turns out to be unconstitutional" clause.

Expand full comment

I hope such clauses are found to be unconstitutional. One would be marked off for relying on try/excepts in a first-year CS course -- I don't think they form a workable basis for legislation, and it's pretty obviously a shameless attempt to end-run judicial review.
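
To make the analogy concrete, here is a minimal Python sketch of my own (purely illustrative, nothing here is taken from the bill): the handler swallows the objection and quietly substitutes its own preferred fallback instead of failing and sending the thing back to its author, which is exactly the pattern that gets marked down in that first-year course.

```python
# Minimal sketch of the try/except analogy -- purely illustrative, not from the bill.
class Unconstitutional(Exception):
    """Raised when a provision is struck down (hypothetical)."""

def judicial_review(provision: str) -> str:
    if provision == "original rule":
        raise Unconstitutional(provision)
    return provision

def apply_law(provision: str) -> str:
    try:
        return judicial_review(provision)
    except Unconstitutional:
        # Control flow by exception: the objection never propagates back to
        # the legislature; a substitute is chosen on the spot instead.
        return "whatever the reviewer substitutes"

print(apply_law("original rule"))  # -> whatever the reviewer substitutes
```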

I didn't get to say to my parents, "I'm going to Paris, and if you say I can't, I'm at least going to Montreal." I mean, I could have said such a thing, but they would have said no to the whole thing, and with prejudice, on account of the brazen gall of it.

Expand full comment

I can't help wondering if, in a real emergency, the US government e.g. fires some missiles into Google's data centres and argues about whether it was legal or not afterwards.

(Look, we had about 3009 ms to shut you down, and sorry, the only way to do it killed a couple of hundred of your staff and did a couple of billion dollars in damage, but there it is)

Expand full comment

Missiles are overkill, just turn off the power.

Expand full comment

Horse out? Just close the barn door!

Expand full comment

How would it be possible to have forewarning of such an "AI emergency?" Please, before you imagine fantastical explosiony movie scenarios, start with a lick of sense... If it "gets out", it's "out." Period.

Expand full comment

If they get to the point where their mental model of the threat is such that they could make that decision, then they will already have wired the datacenters for remote detonation.

As things currently stand you'd have 24 hours of 'what the fuck is happening and why can't the tech people get it under control' before people would even think about missiles.

Expand full comment

I guess, probably, this is not the disaster we're likely to end up with.

It's just that with the Internet involved, things can potentially go wrong at Internet speeds.

(Like I seem to recall that story where some country like Estonia thinks they're under a cyberattack by a state actor, probably Russia, and requests NATO assistance. The Finnish CERT are on it, at Internet speeds. Meanwhile, eventually, the Finnish minister of defense gets informed that maybe possibly a war with Russia has just broken out without anyone telling him, and it totally ruins his day.)

Expand full comment

The default assumption is that in a 'fire the missile' emergency where you'd be willing to do that, it won't work, because internet -- the thing is no longer confined to a place in space. But the real world is messy, and cutting off the power (or if necessary using force in some way) is worth trying if you somehow know that this is it.

But of course the key is having eyes to know when this is it, and a better off switch; then you can do this faster, with fewer reasons not to do it, and with more ability to do it and be or look wrong without anyone calling for your head too much.

If your move works, the obvious issue is you cannot then prove you needed to do it...

Expand full comment

Like, every George Romero movie ever assumes that government attempts to contain a deadly pathogen will fail, because by the time you've noticed, it's too late. There may be many things about zombie apocalypse movies that are implausible, but experience with Covid-19 suggests that impossibility of containment is realistic.

Expand full comment

I mean the reason the government fails is that if they succeed then Romero does not get a movie.

I think that with Covid-19 we made an intentional choice not to contain it. If Covid-19 had been 'anyone infected turns into a zombie' levels of bad, you would have seen very different reactions, and my money is on successful containment.

Expand full comment

I'd like to defend the standard Cato/Reason/libertarian take.

There are very real public choice concerns that most people crafting legislation tend to ... completely ignore. There is a scenario that could tank this proposed government organization, and I think this scenario is not just possible but the *most likely outcome*:

The President and the person they appoint head of the agency consider election politics more important than AI safety.

What this means in practice is that the prioritized pieces of regulation will look like this:

1. Regulations that benefit the president or people who back the president and can be justified under the guise of "AI safety".

2. Regulations that benefit the congressional representatives with the most power over the agency.

3. (if and when the agency itself becomes large or powerful) Regulations that benefit and are liked by the agency's own workforce.

Depending on who gets hired at the agency, priority #3 might be the best chance of actually getting regulation aimed at the true purpose of this legislation.

This is the big reason why the libertarian crowd harps on unconstitutionality. There are supposed to be a set of protections in place to limit the power of these government organizations. The more powerful a government organization is, the *more* vulnerable it is to being taken over by opportunistic political players.

The legislation should be drafted in a way that assumes your political enemies who hate you will get first dibs on running the organization. For this organization, that might mean thinking back a few years and imagining Trump appointing one of his family members to run it. Imagine everything that can go wrong in that scenario, and then you can begin to see why the libertarian crowd gets worried about these sorts of things.

From the perspective of a Democrat or Republican I can see why they think libertarians are always "crying wolf". But imagine you are a libertarian and you have no political power, and everyone in power *is* your political enemy. The way Democrats and Republicans feel half the time, when their opponents are in power, is how libertarians feel all the time. We aren't falsely crying wolf. We are in fact ruled by wolves, and no, we aren't gonna leave the gate open for your large grey dog that lives out in the woods.

Expand full comment

I mean, yes. I also feel like that all the time. What drives me nuts is that I (and I like to think highly credibly) fully appreciate that and harp on it in most other contexts, most famously 'FDA Delenda Est.' I feel like I am very much Nixon trying to go to China here.

I also wrote the Moral Mazes sequence about what happens in organizations, and explicitly said it should extend to government departments, not only corporations.

But responding essentially the same way to every proposal, always saying 'no,' and failing to think on the margin are not the solution. That is not a route to victory. If you want a better way, a way better protected against such issues, then I want that too; get in the game and be concrete, or argue why doing nothing is a strategy (which in many other contexts it is, often a quite good one).

Indeed, as I said, if we don't build something the smart way now, then the political hacks will quickly draft something the other way later. And it will pass.

Expand full comment

10^24 FLOPs seems a low threshold ... I think the top models are already higher than that, and, well, they're not terribly dangerous. Or really dangerous at all, except to the extent that they might improve the work efficiency of people already up to no good, which is true of a whole lot of tech.

Still, at least they settled on a decent metric, which is fine. I'd probably go two orders of magnitude higher.
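
For a rough sense of scale, here is my own back-of-envelope using the common ~6*N*D rule of thumb for training compute (about 6 FLOPs per parameter per training token); the model sizes and token counts are illustrative public estimates, not official figures.

```python
# Back-of-envelope training-compute estimate using the ~6*N*D rule of thumb
# (about 6 FLOPs per parameter per training token). Model sizes and token
# counts are illustrative public estimates, not official figures.
def training_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

estimates = {
    "175B params on 300B tokens (GPT-3-scale)": (175e9, 300e9),
    "70B params on 15T tokens (Llama-3-70B-scale)": (70e9, 15e12),
}

for name, (n, d) in estimates.items():
    print(f"{name}: ~{training_flops(n, d):.1e} FLOPs")
# 175B x 300B tokens -> about 3e23 FLOPs (under a 1e24 threshold)
# 70B x 15T tokens   -> about 6e24 FLOPs (already over it)
```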

So far as the severability clause goes, yeah ... I'll second what Dr. Y already wrote; you can't delegate the rewriting of legislation to the court. Unconstitutional in any case, but also, why would that be a good idea? Oh hey, the Supreme Court justices and their clerks now need to be experts on AI? Or are they in turn supposed to delegate to someone else?

I'm not especially happy with their overall risk tier list. The most that can be said for it is that it allows for further monitoring and evaluation without slamming on the brakes completely. But in the end there are really three or maybe even only two tiers of risk: A) this is mostly safe, B) this is too dangerous to ever open source but can be used under heavy monitoring, C) if you build it, it will kill you all. Risks like 'this could provide bomb making recipes', 'it allows you to generate extremely racist memes at warp speed', or even 'can act as an expert advisor on bioweapons development' are relatively insignificant compared to the potential upside -- not enough to justify this level of regulation, and they should really be ignored in the context of this legislation.

Still. Overall, I like it, and this is a lot more thoughtful than a lot of legislation. It also suggests that people are at least aware of and working to mitigate the risks in a serious way, which is grounds for hope.

Expand full comment

I think there is a confusion about what is being proposed here on severability.

The court having the power to substitute a different rule to substantively address the original provision's intent, if (and only if) they find the original rule unconstitutional, does not mean they get to write the laws however they feel like, even before Congress fixes the issue. It means that they can think about what exactly is the constitutional issue, and find a narrow way around that, the same as other severability.

The first best solution would be to have each law have a set of backup plans on each provision that is at all questionable - "Let's do X, if you say we can't do X then we do Y, if we can't do Y either then we do Z..." And as always Congress could undo anything the courts impose, if they wanted, or the court could institute a temporary alternative rule until the Administrator could choose anew.

Of course, that would require even longer bills, and for lawmakers to be willing to admit when they are pushing it. So that is hard.

In terms of risk tiers, I buy that their medium-concern category is real - e.g. 'this appears safe for the moment, but you need to keep an eye on it' - sitting between A and B. And everything is a price and a continuum in various ways, and there are sometimes mitigations that would work and sometimes there aren't, and it is complicated. My guess is 4-5 tiers is right, where 1 is no rules, 2 is registration and monitoring but do what you want, 3 is basic safety requirements, 4 is 'real deal' precautions that are going to cost, and 5 is no, just no.

Expand full comment

I notice this is framed as a "national" vs "international" effort. Several months ago, there appeared to be a move to create an IAEA type outfit to regulate AI - should this be happening in parallel to this national legislation track? Or is the focus on US legislation sufficient for AI safety overall?

Expand full comment

I strongly think this is something you do in parallel. It gives you credibility, it sends a costly signal, it lets others follow suit while you work things out, it gives you concrete experience and model laws to work from, and so on. This is a 'yes and' situation.

Expand full comment

Regarding the "MAJOR QUESTIONS DOCTRINE" paragraph, this is in response to the Supreme Court recently coming up with the "Major Questions Doctrine", whereby it sometimes strikes down rules made under a broad grant of authority because it thinks (in its infinite wisdom) that Congress surely couldn't have intended the authority to be that broad. For example, striking down greenhouse gas regulations on the basis that, despite the Environmental Protection Act authorizing the EPA to regulate pollution, regulating this pollution was kinda too much of a big deal.

This paragraph is just saying, "yeah, we did actually intend it to be that broad" - which means that if the Courts strike down a rule, it would be on some other basis besides "even tho the text of the act expressly allows this rule, it seems like kinda a big deal so it's not allowed."

Expand full comment

Oh yes, I'd forgotten about that. Seems wise to be explicit here, then.

Expand full comment

Typo:

"The bill did gave its critics some soft rhetorical targets"

gave -> give

Expand full comment

The limits are defined in tiers relative to their level of "major security risk" defined as (1) substantial national security risks to the United States, (2) "global catastrophic and existential threats", (3) "risks that AI will...permanently escape human control".

I think "major security risk" is pretty ill-defined and as they iterate on this bill, that should be one thing the authors should tighten up:

(1) "substantial damage...[to US] national security"--I worry that "substantial damage" could mean just about anything. Does that vague level of "substantial damage" justify the limits the bill sets on training AI models?

(2) "global catastrophic and existential threats" is defined as "threats that with varying likelihood may produce consequences severe enough to result in systemic failure or destruction of critical infrastructure or significant harm to human civilization"--sounds bad, but does that mean "destruction of critical infrastructure" is sufficient? What about one bridge down--is that "critical infrastructure? [low confidence on this, this is a lawyerly thing I shouldn't claim to have insight on]

(3) "risks that AI will...permanently escape human control"--there are supposedly ~10,000+ Internet viruses floating around the Internet, most of them not under human control, and potentially there permanently. Those are "AI" that have permanently escaped human control, right? It is critically important we prevent superintelligent AI from escaping human control, but we probably don't want to indefinitely prevent the next GPT model purely because it might enable creation of a new virus.

By tightening up the definition of "global catastrophic and existential threats" to something more like a quantified amount of risk to human life, financial cost, or national security, we could then use the bill's tiers--"medium concern", "high concern" and "extremely high concern" in ways that refer to a tightly quantified amount of catastrophic or existential risk.

Expand full comment