
It still seems like CEQA for AI. A bunch of random morons are going to have another tool to try to punish Facebook for releasing Llama models. No benefit will be achieved, just slowing down the sort of people who do useful things.

Hopefully you are right and the bill has little effect.


It moves us directionally toward safety, which is the important thing, given the unhinged acceleration we are seeing, which is profoundly not in the public interest.


"There will be no frontier model division, so who in government will develop the expertise to know what is going on?"

One thing that really throws me is the opposition to the government even having a formal department for knowledge and advising. It's as if going in blind is better?


Suppose a covered model is released as Open Source.

How much is it going to cost to fine-tune it to remove any guardrails it might have?

Quite possibly, less than 10 million dollars.
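A back-of-the-envelope sketch of why (every constant below is an assumption picked for illustration, not a measured figure): using the common ~6 × parameters × tokens rule of thumb for training compute, a modest guardrail-stripping fine-tune of an open 70B-parameter model is a rounding error next to $10 million.

```python
# Rough fine-tuning cost estimate. Every constant here is an assumption
# chosen for illustration, not a measured figure.

params = 70e9          # assumed model size (70B parameters)
tokens = 10e6          # assumed fine-tuning dataset (10M tokens)
flops = 6 * params * tokens   # standard ~6*N*D estimate for a training pass

peak_flops_per_gpu = 1e15     # assumed ~1 PFLOP/s peak (H100-class, bf16)
utilization = 0.4             # assumed sustained utilization
gpu_hours = flops / (peak_flops_per_gpu * utilization) / 3600

price_per_gpu_hour = 4.0      # assumed rental price in USD
cost = gpu_hours * price_per_gpu_hour

print(f"GPU-hours: {gpu_hours:.1f}")    # ~2.9 GPU-hours
print(f"Estimated cost: ${cost:,.0f}")  # ~$12, versus the $10 million threshold
```

Even with far more pessimistic assumptions (much more data, full-parameter tuning, wasted runs), it is hard to get within several orders of magnitude of the threshold.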

Yann LeCun is surprisingly optimistic about many things, but he appears to think this one is unsolvable.

So if you Open Source a covered model, someone is going to remove the guardrails and do something bad with it, and it will be your fault. So Open Sourcing covered models is basically impossible, except maybe for AI companies who declare bankruptcy and open source their model as their last, dying act.


The thing that might save them: LLMs hit a wall, and it just turns out to be impossible to create an LLM that's smart enough to cause critical harm.


But it seems that it only applies if the risk materialized because of the AI - if someone could have used Google to make a fertilizer bomb, that is a valid defense.

Otherwise, it seems we do want to encourage companies to make their models resistant to terrorist use. It doesn't need to be perfect: just show reasonable care.


If you release the weights of a model it is (currently) trivial to remove any safety efforts. You are, de facto, releasing the version without those efforts into the wild.

So I don't see any difference between releasing the safety version and the non-safety version - if you give us the first you also give us the second one.

If that is an unsafe thing to do, then don't do that!

But right now it probably IS a safe thing to do, see the question of 'cause or materially enable' and the definition of 'reasonable care' and so on.


I do wonder about harms along the lines of:

a) you open source a model

b) Russia uses your LLM to control a swarm of killer drones

c) the killer drones do, like, easily $500 million in damage to the city they're used against

Here we have a capable, motivated attacker: capable of connecting the LLM to things that go "boom", and with a motive to do so.


(I look at everyone's favorite test case for demonstrating that they have bypassed the guardrails of an LLM, and think "But ... Lemsip (UK cold remedy) contains pseudoephedrine ... I know this because I have a thyroid condition and can't take pseudoephedrine" ... I think I'm going to start asking LLMs questions about Lemsip...)


One question I don’t see answered: what’s necessary for California to claim jurisdiction over a given model? If Musk has a computer cluster for Grok in Texas funded by a Texas corporation and zero California employees, are they going to be covered by this bill? I’m guessing California would be allowed to ban the sale of models that violate the law (as they did with meat products from out of state recently), but would they have any power over a Texas corporation otherwise? Or what about Sam Altman's supposed GPU cluster in the UAE?


I guess another way of looking at that $10 million threshold is that if you are open source releasing a covered model that has no dangerous capabilities, you are betting hard that no one can enhance it to have dangerous capabilities for less than $10 million.

Betting hard in the sense that if someone does actually do that and causes an incident with over $500 million in damage, you agree that you are going to pay that $500 million to the injured parties.
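To make the shape of that bet explicit, a toy expected-value sketch (the probability and dollar figures are placeholders invented for illustration, not estimates of anything real):

```python
# Toy expected-liability calculation for an open-weights release.
# Both inputs are hypothetical placeholders, not real estimates.

p_critical_incident = 0.01   # assumed chance someone spends < $10M to add
                             # dangerous capabilities and causes a critical harm
damages = 500e6              # the $500 million threshold discussed above

expected_liability = p_critical_incident * damages
print(f"Expected liability under these assumptions: ${expected_liability:,.0f}")
# -> $5,000,000; whether releasing is worth that exposure is exactly the bet described.
```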


I am guessing Yann LeCun needs Mark Zuckerberg's sign-off before he can do something that potentially exposes Meta to billions of dollars of liability...


Wait ... won't they just create a separate limited liability company to do the open source release, so that if the hypothesized $500 million critical accident happens, the only legal entity anyone can sue has no assets?


You are betting that you are not thereby causing or materially enabling such harm due to your failure to exercise reasonable care - meaning that they can now do things they couldn't have done before using other tech, and the reason is that you did something you shouldn't have.

And yeah, if you actually do cause a critical harm that way, you are getting your ass sued.

Also, no, you can't use the 'create a distinct company to avoid liability' trick, because common law says that if you are clearly doing that, the court says 'nice try.'


So my main objection can be summed up as "all regulation, as implemented by our Politburo, is bad", so maybe one can stop there. But I think you are fundamentally missing that the limitations written down, such as they are, don't matter. The NRC's enabling legislation did not, I think, say "as much safety as can be reasonably achieved". That phrasing isn't found here, for instance: https://www.nrc.gov/about-nrc/governing-laws.html It gave them "reasonable" ability, which they chose to interpret so as to walk a power gradient, giving themselves the most authority possible and fulfilling their objective function, which got nothing from successful nuclear power.

This regulator will get no glory from Claude becoming a benign superintelligence, so they won't help. They will get power from extorting startups, so they will. You are being naive in saying "the training thresholds can't be lowered!" They don't need to! They just need to bury the startup in lawfare, at taxpayer expense, until it signs a consent decree. Have fun proving you're under the thresholds in a court run by your enemy.

You also describe KYC requirements: KYC, famously, is how the government forces banks to enforce lawfare on undesirables without actually having to ban anything! You've read patio11! You have to know this!

When you think of who's on the board, consider that "an expert on AI safety" will be chosen by people who hate you and me. It will either be Timnit Gebru or someone who Timnit Gebru approves of. I know this because the field of AI safety had to rename itself as "notkilleveryonism" because they got their lunch eaten so badly by third-rate political operators. Who do you think will make decisions about this apparatus of political power?

Giving the state of CA any recognizable power to dictate anything whatsoever to AI labs is a recipe for rewriting the entire industry to give the CA Politburo power and fulfill their objectives. I suppose, since they're largely incompetent at wielding state capacity, the good spin here is that it provides a decent shutdown of research. The bad spin is that, if it's easy enough to train models, we'll get an antiracist AI to paperclip us. This is a bad idea.


As opposed to a racing AI to paperclip us? Come on. I am no fan of regulation but it is better to have directionally aimed work toward safety. If we can steer toward antiracism, then we can steer toward humanity.


How long before the “harm” done by spreading information that the government doesn’t like is claimed to be “just as harmful” as a biological weapon? Do a search on what constitutes genocide/insurrection/threat to democracy in the minds of the ruling class.


I guess if the economic value being provided by the AI becomes really large, you'd need appropriate security to prevent improper use of the shutdown mechanism.

e.g. suppose in a decade or so, OpenAI have captured the AI market, and their products are in everything. In particular (in this science fiction) every power station in the US electrical grid is using AI control to do freaky things like predicting 10 minutes in advance whether you need to spin a turbine up to meet consumer electrical demand. OpenAI's shutdown will work, of course, and the systems are fail-safe so that the power stations will shut down when their controlling AI is killed. All of them. Spinning the whole grid back up may be hard. Of course, it wasn't just the power grid you shut down, because OpenAI's products are used in plenty of other things...
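To make that failure mode concrete, here is a minimal sketch (hypothetical names, not any real control system) of the fail-safe watchdog the scenario assumes: the plant treats loss of its AI controller's heartbeat as a trip condition, which is exactly why a fleet-wide model shutdown cascades; a comment in the sketch notes the obvious alternative of degrading to local control instead.

```python
import time

# Hypothetical fail-safe watchdog: if the AI controller's heartbeat stops,
# the plant trips to a safe shutdown. A fleet-wide model shutdown therefore
# trips every plant at once -- the cascade the comment is worried about.

HEARTBEAT_TIMEOUT_S = 30.0

class PlantWatchdog:
    def __init__(self):
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        """Called by the AI controller while it is alive and healthy."""
        self.last_heartbeat = time.monotonic()

    def check(self):
        """Return the action the plant should take right now."""
        if time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT_S:
            # Alternative design: fall back to local/manual control instead of
            # tripping, so a fleet-wide AI shutdown degrades rather than blacks
            # out the grid.
            return "trip_to_safe_shutdown"
        return "continue_normal_operation"

watchdog = PlantWatchdog()
watchdog.heartbeat()
print(watchdog.check())   # "continue_normal_operation" while heartbeats arrive
```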


... plausible argument to be made that at this level, the NSA will declare that your shutdown procedures are a national security risk and arrange that a member of the US Secret Service is a holder of one of the crypto ignition keys you need to activate the HSM to send a digitally signed message to cause shutdown.

(Any resemblance to any actual system here is coincidental and not to be wondered at).
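A minimal sketch of the kind of multi-party shutdown authorization described above (all names and the 2-of-3 policy are assumptions for illustration; shared-secret MACs stand in for the digitally signed messages an HSM would produce): the shutdown message only goes out if enough independent key holders have authorized it.

```python
import hmac, hashlib, secrets

# Toy 2-of-3 authorization for a shutdown message. A real system would use
# asymmetric signatures inside an HSM; HMAC over shared secrets is used here
# only to keep the sketch self-contained and runnable.

REQUIRED_APPROVALS = 2
holder_keys = {name: secrets.token_bytes(32) for name in ("ciso", "regulator", "usss")}

def approve(holder: str, message: bytes) -> bytes:
    """A key holder authorizes the message by MACing it with their key."""
    return hmac.new(holder_keys[holder], message, hashlib.sha256).digest()

def authorize_shutdown(message: bytes, approvals: dict) -> bool:
    """Count holders whose MAC over the message verifies; require a quorum."""
    valid = sum(
        1 for holder, mac in approvals.items()
        if holder in holder_keys
        and hmac.compare_digest(mac, hmac.new(holder_keys[holder], message, hashlib.sha256).digest())
    )
    return valid >= REQUIRED_APPROVALS

msg = b"FULL SHUTDOWN: covered model, all instances"
approvals = {"ciso": approve("ciso", msg), "usss": approve("usss", msg)}
print(authorize_shutdown(msg, approvals))   # True: quorum of 2 reached
```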


(That was meant to be an allusion to systems inspired by the US nuclear Football, rather than the Football itself, but whatever ...

"Is your physical security adequate to prevent unauthorized shutown?"

"Look, the guy with the Crypto Ignition Key can also nuke Russia, and you;re worried he might shutdown the AI improperly?"
