20 Comments

I don't get the "nothing is so bad that you have to resort to a ballot proposition" stance (i.e., reviving SB 1047 via proposition to circumvent the veto). Your unironic position is that this veto will either cripple the AI industry, cause X-risk, or both. Is going for a weird legal circumvention seriously worse?


Because the chances of it being the most stupid and un-repealable version of such a bill are extremely high.


I mean, making it just letter-for-letter the same bill would be good even if unrepealable, because it just fundamentally doesn't demand much.


Isn’t the bill pretty long? I could be wrong but I think initiatives are in general pretty short, since you’re asking laypeople to vote up or down.


I mean, a one-sentence compression of it would be something along the lines of "any model over an inflation-adjusted $1,000,000 in compute (or whatever) must release a plan for preventing catastrophic harm, defined as harm over X". Again, I kinda fail to even model a world where having a plan is stifling.


“Then there’s the question of what happens if a catastrophic event did occur. In which case, things plausibly spin out of control rather quickly. Draconian restrictions could result.”

I think draconian restrictions are the most likely scenario at this point.


My first reaction to this was that Gavin Newsom was either an idiot or heavily influenced by lobbyists (possibly both).

On the other hand, there might be a charitable reading where he's a skeptic about the possibility of x-risks, but really concerned about mundane harms (infringing copyright, generating deepfake nudes of Taylor Swift, etc.). The mundane harms are becoming evident even with current smaller models, so you'd want to regulate them if that's where you think the majority of the risk is.


Zvi has talked about this before, but for many (most?) people who are not concerned about x-risk, it is because they are skeptical about the potential of AI and don’t think it will get much more powerful.

Also see Matt Yglesias:

https://www.slowboring.com/p/what-the-ai-debate-is-really-about


The AI companies and open source AI advocates ought to be really concerned, not celebrating, if Gavin Newsom is going to regulate mundane harms. SB 1047 was relatively easy to comply with.


It often felt like "goodbye humanity" with the bill being vetoed, basically due to the seeming ignorance and greed around it.

This really seems to emphasize that we need to do a lot more education around this, and if you want to join our lobbying with #PauseAI, I think it is needed more than ever.


If this does signal that model-level regulation is not happening, but use-level regulation is, isn't a company like OpenAI going to look pretty seriously at splitting into two nominally separate entities? One creates models (and perhaps even opens the weights), and the other serves/deploys those models in consumer products (most obviously as chatbots, but there's much more they could do). The second company is then essentially running a wrapper and hosting service, and can tack on all sorts of reactive, whack-a-mole responses to the, as I think you correctly predict, ever-evolving laundry list of vague dos and don'ts. Meanwhile, the first company just gets on with making models that are scary powerful and scary scary, but since it doesn't actually deploy anything, it is on no hooks of any kind. That would be a very bad time.

(Author)

I think for all practical purposes they already are only the first company, or that's the plan. They could easily have branched into also doing various things to make their model more useful, and they've chosen not to, and to hand those tasks off.


Small typo

depolyers -> deployers


Also, a lot of image-to-text conversion produced "Al" (with a lowercase L) instead of "AI".


I'm unfamiliar with CA politics; why aren't people pushing for an override? They have the votes in the Senate, and the Assembly had a large number of people who abstained and could presumably be persuaded.


Do they have the votes for a supermajority? That certainly would be a plot twist.


The false sense of security line is confusing. I know it’s not supposed to make sense, but how many people are worried about the risks of small models? That’s a weird place to land, so I’m surprised the propaganda included it.


One small note: it's trademark law that requires you to enforce your mark in order to maintain it, not copyright law.

So, if I start selling T-shirts of my own design but claim that they are Disney merchandise, that's trademark infringement. Disney must enforce its trademark over its name to maintain the trademark. The point of trademarks is so that consumers are reasonably certain that when they buy Disney merch it's from Disney, and when they buy Bud Light it really is Bud Light. Hence the compulsory enforcement.

Copyright is about the actual creative outputs. If I start writing fanfiction of Aladdin, even giving Disney credit for the original source material, that's copyright infringement. But, crucially, Disney doesn't lose its copyright if it chooses not to enforce it.


For context, I am generally opposed to AI regulation and I am happy that SB 1047 failed. But I do respect Zvi and others as smart people with serious, legitimate concerns, and I think there is a real chance of AI doom.

I don't agree with the model of "if we don't pass a regulation now, we'll get a worse one later." If you look at other overregulated areas, like building housing, it is just never the case that a single anti-housing regulation satisfies the NIMBY crowd. They would like to pile up regulation after regulation, until progress comes to a halt.

There is certainly an intelligent faction among the supporters of AI regulation. However, it is becoming clear that the intelligent AI safety people are a very small part of the anti-AI political faction. The anti-AI faction is dominated by people who are either anti-technology, anti-capitalist, or in an industry like art, music, or driving that appears likely to be disrupted by AI.

So, I don't see the argument for why pro-AI people should compromise on regulation with the AI safety faction in any way. The AI safety faction does not have the ability to "rein in" the rest of the anti-AI coalition in an area like California politics.

I think the most reasonable venue for compromise is technological. Any innovation that makes it easier for humans to control LLMs, or that addresses specific harms, like securing hackable software systems or preventing deepfake scams, is something we can support. I would be very happy to find grounds for the sides to work together in those areas. I just don't believe AI safety regulation is going to achieve any positive effect.
