I don't get the "nothing is so bad that you have to resort to a ballot proposition" (for SB 1047 to circumvent the veto). Your unironic position is that this veto will either cripple the AI industry, cause X-risk, or both. Is going for a weird legal circumvention seriously worse?
Because the chances of it being the most stupid and un-repealable version of such a bill are extremely high.
I mean, making it just letter-for-letter the same bill would be good even if unrepealable, because it just fundamentally doesn't demand much.
Isn’t the bill pretty long? I could be wrong but I think initiatives are in general pretty short, since you’re asking laypeople to vote up or down.
I mean, a one-sentence compression of it would be something along the lines of "any model over an inflation-adjusted $100 million in compute (or whatever) must release a plan for preventing catastrophic harm, defined as harm over X". Again, I kinda fail to even model a world where having a plan is stifling.
“Then there’s the question of what happens if a catastrophic event did occur. In which case, things plausibly spin out of control rather quickly. Draconian restrictions could result.”
I think draconian restrictions are the most likely scenario at this point.
If this stuff is as dangerous as weapons grade fissionables, where it is easy to create an accident and easy to turn into a device capable of immense harm, that's how it has to be. As far as I know, no country on earth offers easy access to fissionables to an open source community of nuclear hackers and startups.
If this stuff is about to fizzle out at the next model generation and then get forgotten like NFTs, we shouldn't bother with any rules at all. Waste of time.
I don't think Newsom knows which it is. The reasoning along the lines of "let's regulate it if hospitals start connecting an LLM to the life support system" honestly sounds reasonable to me. As long as people are using it to generate text and cheat on their homework, this stuff is probably safe.
What’s hard about this whole conversation is that people like Zvi (and I think he makes a compelling case) believe that AI will likely be the former, but, agonizingly, not until it passes a threshold after which it will be more or less impossible to control. Once the fissionables are in every house and each is able to make more fissionables, we are not left with any good options. Up until that threshold, though, it will give everyone superpowers.
The obviously (imho) best solution is to move forward but with extreme caution and vigilance.
So I am a little younger than Zvi but not by much, and what strikes me about this whole thing is that the history of recent technology is a story of mostly disappointment. Everything that isn't an iteration on "screens" advances glacially slowly and is hardly better than before I was born. Medicine is not meaningfully better, jets aren't faster, spacecraft haven't gone farther, big megaprojects mostly all get cancelled for spurious reasons, fusion feels equally far away, and so on. Solar exists but we made nuclear reactors with reprocessing illegal so effectively we just have a different form of clean power than what was available in the 1970s.
So that's the big delta here - accelerationists don't want any chances taken that this new AI technology fizzles out before it can change the world outside of screens. They want every possible action taken to accelerate it until it becomes proven, with indisputable evidence, that it's as dangerous as fissionables. No laws, no brakes, accelerator to the floor until AI actually jumps Moravec's paradox.
That may seem unwise, but the other alternative is that when you are dying of aging, jack shit in the outside world will have happened. Doctors don't know why you are dying because AI isn't good enough to replace them, your meds are late because a nurse has to deliver them, there is still zero treatment at all for your cells deciding not to do their jobs because an arbitrary amount of time passed, and the news media is full of inane reality TV gossip because no one got fusion spacecraft to work.
I feel like Zvi’s entire blog is about addressing the delta and explaining why AI is different; if you don’t find him convincing then you almost certainly won’t be convinced by me. But fundamentally you either think that super-human intelligence is possible and coming soon, or you don’t.
Those are not remotely the only positions you can hold. And it's a bad bet to be confident that a complex and unprecedented technology is definitely coming soon.
Yes theoretically it's possible, but while there are many feedback loops there are also many barriers.
In the recent past, even "sure thing" technology like an autonomous car, or heck, just a decent open-world videogame, has been delayed by years.
It's very rare for any technology project to even be on time.
Soon is a squishy term, I’ll give you that. But if you think it’s possible that we get AGI/ASI eventually, you have to contend with the implications of sharing the planet with super intelligent beings that have (for all intents and purposes) unlimited memory, unlimited replicability, and who don’t need to sleep. It’s hard to see how humans remain in charge of the economy in any meaningful way.
Podcast episode for this post:
https://open.substack.com/pub/dwatvpodcast/p/newsom-vetoes-sb-1047
My first reaction to this was that Gavin Newsom was either an idiot or heavily influenced by lobbyists (possibly both).
On the other hand, there might be a charitable reading where he's a skeptic about the possibility of x-risks, but really concerned about mundane harms (infringing copyright, generating deepfake nudes of Taylor Swift, etc.). The mundane harms are becoming evident even with current smaller models, so you'd want to regulate them if that's where you think the majority of the risk is.
Zvi has talked about this before, but for many (most?) people who are not concerned about x-risk, it is because they are skeptical about the potential of AI and don’t think it will get much more powerful.
Also see Matt Yglesias:
https://www.slowboring.com/p/what-the-ai-debate-is-really-about
The AI companies and open source AI advocates ought to be really concerned, not celebrating, if Gavin Newsom is going to regulate mundane harms. SB 1047 was relatively easy to comply with.
It often felt like "goodbye humanity" with the bill being vetoed, basically due to the seeming ignorance and greed around it.
This really seems to emphasize that we need to do a lot more education around this, and if you want to join our lobbying with #PauseAI, I think it is needed more than ever.
If this does signal that model-level regulation is not happening but use-level regulation is, isn't a company like OpenAI going to look pretty seriously at splitting into two nominally separate entities? One creates models (and perhaps even opens the weights); the other serves/deploys those models in consumer products (most obviously as chatbots, but there's much more they could do). The second company is then essentially running a wrapper and hosting service, and can tack on all sorts of reactive, whack-a-mole responses to what you, I think correctly, predict will be an ever-evolving laundry list of vague dos and don'ts, while the first company just gets on with making models that are scary powerful and scary scary, but deploys nothing and so is on no hook of any kind. That would be a very bad time.
I think for all practical purposes they already are only the first company, or that's the plan. They could easily have branched into also doing various things to make their models more useful, and they've chosen not to, handing those tasks off instead.
At this point, you should think of OpenAI as a completely normal tech startup, that happens to have a weird legal and PR cloud of chaos hanging around its founding, just like Facebook had a weird legal and PR cloud with the Winklevoss brothers and so on.
It's more evidence that unusual legal structures don't work as well as standard tech startups. Don't do a nonprofit that owns a for-profit, don't do a Florida LLC. Just make a C Corp. Innovate on your tech, not on your corporate structure.
Small typo
depolyers -> deployers
Also a lot of image to text produced Al (AL) instead of AI (ai).
Yeah a bunch of that is actually copying PDFs not only image-to-text as such. I fixed a lot of them, not shocked I missed more.
Unfamiliar with CA politics, why aren't people pushing for an override? They have the votes in the Senate, and the Assembly had a large number of people who abstained that could presumably be persuaded.
Do they have the votes for a super majority? That certainly would be a plot twist
It's too late in the session for an override.
The false sense of security line is confusing. I know it’s not supposed to make sense, but how many people are worried about the risks of small models? That’s a weird place to land, so I’m surprised the propaganda included it
A big part of the anti-AI faction, perhaps the majority of it, is people worried about AI taking their jobs. Artists, musicians, Uber drivers, dock workers. For these people, limiting the size of the model doesn't actually help very much. I take this as a signal that Gavin Newsom cares more about maintaining jobs than he cares about x-risk.
I think you'd be surprised: while many of them may have originally been concerned for other reasons, they have come to recognize x-risk as a major threat.
One small note. It's trademark laws that require you to enforce them to maintain your trademark, not copyright law.
So, if I start selling T-shirts of my own design but claim that they are Disney merchandise, that's trademark infringement. Disney must enforce its trademark over its name to maintain the trademark. The point of trademarks is so that consumers are reasonably certain that when they buy Disney merch it's from Disney and when they buy Bud Light it really is Bud Light. Hence the compulsory enforcement.
Copyright is about the actual creative outputs. If I start writing fanfiction of Aladdin, and even give Disney credit for the original source material, that's copyright infringement. But, crucially, Disney doesn't lose their copyright if they choose not to enforce it.
Ah, TIL! I thought it was both - e.g. with HPMOR I thought this was why they couldn't just let it go.
Harry Potter is also a trademark. https://trademarks.justia.com/763/60/harry-76360446.html
For context, I am generally opposed to AI regulation and I am happy that SB 1047 failed. But I do respect Zvi and others as smart people with serious, legitimate concerns, and I think there is a real chance of AI doom.
I don't agree with the model of "if we don't pass a regulation now, we'll get a worse one later." If you look at other overregulated areas, like building housing, it is just never the case that a single anti-housing regulation satisfies the NIMBY crowd. They would like to pile up regulation after regulation, until progress comes to a halt.
There is certainly an intelligent faction among the supporters of AI regulation. However, it is becoming clear that the intelligent AI safety people are a very small part of the anti-AI political faction. The anti-AI faction is dominated by people who are either anti-technology, anti-capitalist, or in an industry like art, music, or driving that appears likely to be disrupted by AI.
So, I don't see the argument for why pro-AI people should compromise on regulation with the AI safety faction in any way. The AI safety faction does not have the ability to "rein in" the rest of the anti-AI coalition in an area like California politics.
I think the most reasonable venue for compromise is technological. Any innovation that makes it easier for humans to control LLMs, or that avoids specific harms (securing hackable software systems, preventing deepfake scamming, and so on), is something we can support. I would be very happy to find grounds for the two sides to work together in those areas. I just don't believe AI safety regulation is going to achieve any positive effect.
The problem is that technologically, AI may be fundamentally unsteerable and on top of that, money goes into capabilities. Thus regulations are needed.
The simple fact is that as AI safety is forced back, we have to make alliances with others affected. It's the accelerationists' fault that extinction becomes ever more likely.
I'm curious: if AI is really fundamentally unsteerable, what's your desired outcome? For regulation to stop all AI development at some fixed point and never go beyond that?
I mean, as Zvi said, if it is going to kill us, yes, you stop building the doom machine. But there are alternate architectures, like narrow AI, that should work and are safer.
Newsom is a deeply, personally corrupted individual. He is political to the core.
For some reason, he believes he will be president, which I believe will never happen, but his ambitions are such that he will not offend the oligarchs.
It is truly sad to watch.
Well, we've failed at light-touch regulation that only affected the most powerful and expensive models. And which probably would have created only slightly more paperwork than is involved in the average medical insurance claim, and only for projects costing at least $100 million.
So now I guess we need to go for plan B: Attempt to regulate use cases so heavily that we make it impossible to make any further progress at all.
This is a shitty plan that destroys a lot of economic value, and which probably won't work. But I bet we can find some people to create infinite compliance paperwork.
Since I'm in favor of a pause, I propose we assemble a committee of corporate IT security compliance managers, San Francisco zoning permit officials, and the kind of HOA committee members who wouldn't allow veterans to fly US flags, and ask them to implement use-case based regulation with strict know-your-customer liability for AI model creators. For maximum effect, we should draw up convincing policy documents now, and then propose them during the first major crisis.
If we try hard and believe in ourselves, we can avoid making ourselves the second smartest species on the planet.
(Proposal status: "Ha ha only serious.")
The problem with this plan is that the lobbyists who came out to kill this bill will be more aggressive about anything stricter. And they will threaten to move to Texas, which has a state government that was deliberately hobbled in its power during Reconstruction. Texas would be happy to have AI companies guzzling power from ERCOT, primarily from locally generated natural gas.
Well, yeah. Implicit in my "ha ha only serious" plan is that meaningful regulation of AI that poses strategic risks or extinction risks is currently impossible.
So unfortunately, we'll need to wait for a major catastrophe before we can do anything to prevent future catastrophes. All we can do for now is to start laying the groundwork for when the public is *angry as hell* at AI companies, so we can try to regulate then.
The way this will go is probably something like:
1. First AI takes the artists' jobs. Artists are upstream of parts of culture, so this will lead to diffuse, low-level resentment.
2. Then AI eventually takes a whole bunch of other people's jobs, including people who find that AI has already taken every other job they could do.
3. At some point, some AI system screws up in a highly legible way that crystallizes public fury.
Then we'll be able to regulate AI. Maybe.
I do not believe that "alignment" is possible in any rigorous sense. If we build something smarter than us, the best case scenario is that it likes us enough to keep us as pets (and hopefully doesn't spay us, or whatever). And the worst case scenarios are nightmares.
Given the inference that a16z effectively “got to” the governor, and also that there is a high likelihood that a different, more onerous form of regulation is now on the cards — to whatever extent you believe both are true — my question would be: are a16z et al that stupid/short-sighted? Or do they see some route to some other outcome? Or what else might be going on?
If lobbyists can "get to" this governor, this will actually mean everything gets vetoed until this governor is out of power, or until a clear crisis occurs (an actual incident made possible by AI, or mass layoffs).
Right, that’s the apparent contradiction that I’m driving at. But maybe it’s not as simple as “they’ve got him: everything will be vetoed”… maybe these things proceed on a much less thought-out/orderly basis…
What a putz. Someone's not getting any further votes from me, and I'll do my best to talk others out of same. Which...shouldn't be hard, Governor Nice Hair isn't exactly well-loved in his home state. One can always mea culpa and try again tomorrow night, but given the track record...bluh. Expected negative outcomes are still disappointing, one must find hope somewhere.
>SB 896 by Senator Bill Dodd (D-Napa)
Oh...I guess someone else survived Unsong. Wonder if he was a washed-up physicist with a powerful Macbook gaming laptop before getting elected. TINAC, etc.
Reading this article you would think that California is the only state in the union. Why can't any other state pass a bill like SB 1047?
https://apnews.com/article/california-ai-safety-measures-veto-newsom-92a715a5765d1738851bb26b247bf493
But even with Newsom’s veto, the California safety proposal is inspiring lawmakers in other states to take up similar measures, said Tatiana Rice, deputy director of the Future of Privacy Forum, a nonprofit that works with lawmakers on technology and privacy proposals.
“They are going to potentially either copy it or do something similar next legislative session,” Rice said. “So it’s not going away.”
Some states passing such a law and some states refusing creates a comparative economic advantage. This is why the dialogue around 1047 was so weird - Pelosi is correct. This is a Federal issue and the states don't contribute value here.
These AI companies are bringing in billions, and what is anticipated to be trillions, and they don't emit noise or local pollution or require many workers. They are dream companies for any state to host. It is an obvious move for Meta to create subsidiaries that do not exist in California to research huge models (probably in Austin), using architectures scaled up from the core Bay Area talent team.
The download pages for the models will prohibit California residents by IP, and subscription services would have a similar geolocation wall.
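To make the "geolocation wall" idea concrete, here is a minimal, hypothetical sketch of the kind of check a download page might run. The lookup_region helper and the fail-closed behavior are assumptions for illustration only, not anything any company actually ships.

```python
# Hypothetical sketch of a geolocation wall on a model-download endpoint:
# refuse to serve weights to clients that geolocate to California.
# lookup_region() stands in for whatever GeoIP service a host would use.

BLOCKED_REGIONS = {"US-CA"}  # ISO 3166-2 code for California (assumed policy)


def lookup_region(ip_address: str) -> str:
    """Placeholder: map a client IP to an ISO 3166-2 region code."""
    raise NotImplementedError("wire up a real GeoIP lookup here")


def may_serve_download(ip_address: str) -> bool:
    """Return False for requests that geolocate to a blocked region."""
    try:
        region = lookup_region(ip_address)
    except Exception:
        # If geolocation fails, a cautious host would fail closed and deny.
        return False
    return region not in BLOCKED_REGIONS
```

The point of the sketch is that the wall itself is trivial to build; the interesting questions, as the rest of the thread discusses, are legal (jurisdiction, terms of service) rather than technical.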
This would obviously make California tech companies less productive, due to no legal access to the latest models, and push billions and later trillions to other states.
Federal law would make more of a difference, though it repeats the same problem at the international level: now it's a prisoner's dilemma where defectors can't be more than lightly punished due to their nuclear arsenals.
Also having access to advanced AI but being sanctioned would probably be a positive ROI tradeoff.
I think I basically disagree about the prisoner's dilemma aspect of this, but specifically with respect to the states: nobody is going to block IPs in Texas, New York, or Florida just because of some model paperwork. If they do, that would be a highly significant sign that they are doing something bad, which we would want to know about. And also, such laws could effectively prohibit open source models, since there would be no way not to make them available in all states.
I am not a legal expert, but if I offer software "not for use in California" from a company that only exists in Texas, and that software is legal under federal law, California courts will not have jurisdiction if the software breaks California law. Defense Distributed is likely an actual example of this.
It's irrelevant whether the file can be downloaded from California - the terms of service you agreed to in order to download it said not to distribute the file to California residents and not to use it in California.
Meta's attorneys in this hypothetical would just get any cases removed to federal court and dismissed.