10 Comments

Excellent response.

It sounds like, from your point of view, killing open source models trained with 10^26 operations would be a good thing, not a bad thing. Personally, I disagree.

Do you have any suggestions for solving the problem where powerful open source models can be repurposed towards problematic ends without oversight?

I think open source AI software is more likely to lead to good outcomes than to bad outcomes, so we should be encouraging it rather than putting impossible restrictions on it.

Reproducing my comment from the WordPress version:

I appreciate that you’re endorsing these changes in response to the two specific cases I raised on X (unlimited model retraining and composition with unsafe covered models). My gut sense is still that ad-hoc patching in this manner just isn’t a robust way to deal with the underlying issue*, and that there are likely still more cases like those two. In my opinion it would be better for the bill to adopt a different framework with respect to hazardous capabilities from post-training modifications (something closer to “Covered model developers have a duty to ensure that the marginal impact of training/releasing their model would not be to make hazardous capabilities significantly easier to acquire.”). The drafters of SB 1047 shouldn’t have to anticipate every possible contingency in advance; that’s just bad design.

* In the same way that, when someone notices that their supposedly-safe utility function for their AI has edge cases that expose unforeseen maxima, introducing ad-hoc patches to deal with those particular noticed edge cases is not a robust strategy to get an AI that is actually safe across the board.

I hadn't heard that wording suggested. It seems like a reasonable one to game out; I'll think about whether it net improves things. I worry it passes the buck to the regulators/courts, a kind of 'define pornography as whatever looks like porn when you see it' way of not trying to get at an answer. Which could be fine or even correct, but it invites obvious other objections.

I think you are selling the situation short - the two cases you brought up, one of which was raised a few times and the other only by you, feel like they cover a lot of the remaining problem space, and indeed I have since learned that one of the two fixes was already in the works.

My gut tells me it should ideally be some combination - declare your intent and how you want to handle unclear cases, but also try to lay down as many rules as you can. But I'm not sure.

I do think your point about 'notice when you are whacking moles' is a good one. If everyone had that mindset, we would need a lot less in the way of top-down rules!

Nice write-up; this is a real service.

I'm impressed that this bill focuses narrowly on hazardous capabilities, not social costs or jobs; those are distinct issues best addressed separately. It is not sufficient to fully prevent existential risk, but it is much better than I hoped for this early in the game.

Section 22605's "transparent, uniform, publicly available price schedule" requirement interferes with business models that are changing rapidly, and it is completely out of scope. Antitrust enforcement is arbitrary and out of hand enough as it is.

I would appreciate a separate take on the penalties. The "preventive relief" and "punitive damages" terms in section 22606 look like actual teeth, even though the civil penalties are capped.

The derivative model carve-outs seem necessary but are concerning. There are too many complicated scenarios where real liability can be ducked. I would at least direct the courts to provide preventive relief (i.e., block dissemination, require deletion) when there is a threat to public safety.

Section 22605 does seem out of place. I'd be inclined to remove it, but I see why you'd want it and I don't have a strong opinion.

Preventive relief is shutting you down IIUC. Punitives seem to me like they'd only happen if you were going to get sued into the ground without this law anyway, but maybe there's a margin?

The derivative model carve-out seems necessary in some form. As I noted, right now it is too broad, which is a 2-way problem. They're working on a fix.

Thank you so much for this work—in particular for engaging in good faith with law-making.

It would be great to get your thoughts on the Cognitive Revolution podcast episode/debate about this.
