11 Comments

There is a more fundamental concern, which is that any tax policy whose aim is not to efficiently raise revenue will 1) not be efficient at raising revenue, 2) not be efficient at its stated goal, and 3) create (as your post outlines very well!) many unintended distortions that may well make the stated aim harder to achieve or the stated metric worse.


I tend to think liability law has a better chance of mitigating risks than tax law. Robin Hanson seems to express something similar here:

https://www.overcomingbias.com/p/foom-liability


By definition, since it is not tax law, yes.

But it also assumes away the fundamental problem, which is that we don't know whether there will be any law left post-ASI/AGI.


I’m not assuming anything away; I’m just looking for ways to mitigate the risk (which I believe is possible) rather than eliminate it (which is impossible). Right now all the incentives point companies toward speeding up AI research. IANAL, but it seems like establishing stronger liability precedents (e.g. making companies plainly liable for any destruction wrought by AI well before ASI) would at least create incentives to be far more cautious about research and deployment.


Sorry for the glib response which came across more negative than intended.

I agree with what you say, save for the fundamental problem referred to above, and that I find it hard to envisage specific circumstances in which a company would be liable under such a law but would not already face a risk of liability under current law high enough to be factored in.


I find the GPT-4 summary here quite annoying and unhelpful. (Loved the post!)


You make some good points, but there are a few misconceptions where I didn't explain things clearly. (I didn't want to go into more detail in a short email.)

By a "sorcerer’s apprentice" I don't mean an AI, I mean some random person experimenting with AI. That is, I'm talking about the general public. Currently, large-scale experiments require large amounts of money, so the general public isn't doing it. People don't like to do things that run up big data center bills, even though raising funds to do it is definitely possible, and sometimes bills for accidents are forgiven. It seems like financial restrictions on experiments are worth preserving.

This isn't egalitarian, but then again, restrictions on the general public owning artillery or starting a bank aren't egalitarian either. It doesn't do anything to keep the Russians from using artillery (or creating banks), but I think it's reasonable to assume that some dangers are worse when a dangerous capability is widespread.

The tax I described and the latency restriction are only intended to reduce accidents, not to cure all possible dangers from AI. (That is, you can put me down for ignoring existential risk for the time being.) I'm assuming AI is dangerous enough to restrict, but that accidents aren't existential, just harmful. I'm undecided on the dangers of AI, but I note that millions of people are using AI chatbots interactively and it doesn't seem very dangerous, so I thought about what kind of restrictions might preserve that use and allow for improvements.

It seems like regulatory restrictions on an AI chatbot getting "smarter" would be weird and counterproductive because machine intelligence is poorly defined and disallowing improved advice is likely to be harmful. Instead of restrictions on better thinking, I would prefer restrictions on getting quicker-than-human feedback. I suspect that public, common knowledge (which is what AI chatbots are improving) is only so useful and real-world experiments would be important for making many things that are dangerous.
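To make the "no quicker-than-human feedback" idea concrete, here is a minimal sketch of a latency floor wrapped around a model call; the two-second floor and the stand-in model are my own illustrative assumptions, not a real proposal:

```python
import time

MIN_RESPONSE_SECONDS = 2.0  # illustrative "human-speed" floor, not a proposed value

def latency_floored_call(model_fn, prompt):
    """Call the model, but never return an answer faster than the floor allows."""
    start = time.monotonic()
    result = model_fn(prompt)
    remaining = MIN_RESPONSE_SECONDS - (time.monotonic() - start)
    if remaining > 0:
        time.sleep(remaining)  # pad fast answers up to the floor
    return result

# Stand-in "model" so the sketch runs on its own.
if __name__ == "__main__":
    print(latency_floored_call(lambda p: f"echo: {p}", "hello"))
```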

I agree that slower AI would be at a disadvantage for some kinds of financial trading, but I also think that bot trading isn't that important to ordinary investors. We aren't having our "lunches eaten" by bots, just getting slightly different (or even improved) prices. Maybe the "flash crash" I brought up was a bad analogy; that wasn't very important either, just weird.

There may be more "military" uses? But I wouldn't expect a military to have any latency restrictions. For ordinary people, I think improvements in hard security (not AI-related, locking computers down at the OS level and eliminating more classes of security bugs) will be more important.

Since it's not up to me, I'm not particularly interested in the details of how the tax would work. I just invented something simple to explain. I assume, conditional on there being any taxes at all, regulators would come up with something different. The biggest change politically is between "no restrictions" and "usage regulated well enough to tax" and I think that's more important than the details of the tax.

Also, I don't think GPU restrictions are necessarily all that draconian? For a long time, consumer GPUs were not general-purpose computing devices. They don't need to be general-purpose devices to serve their original purpose of displaying graphics. It would probably be okay if GPUs that can run advanced AI well were limited to data centers, where regulation is easier. This would go along with the other restrictions I've discussed.

(I'm not fully convinced there should be regulatory restrictions, but I think these are more workable ideas to discuss than some of the regulatory proposals that I've seen out there.)


I'm not sure that Pandora can put LLMs back in the box. Since LLMs are already quite cheap to run, and likely going to be even cheaper with non-quadratic architectures and distillation/pruning/quantization, attempts to limit inference are likely to be easily circumvented and therefore ineffective. Any effective control mechanism has to target training. Even so, LoRA for transformers shows that finetuning a base model might be easy too, and the existing LLMs in the wild are possibly good enough to serve as foundations. New models may then largely escape restrictions on training, too.
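To give a sense of how light that finetuning is, here's a minimal sketch using the Hugging Face peft library (the base model and target module names are illustrative; GPT-2 is just a small stand-in for a base LLM already in the wild):

```python
# Rough sketch of a LoRA finetune setup; model and module names are illustrative.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # small stand-in base model

lora = LoraConfig(
    r=8,                        # low-rank adapter dimension
    lora_alpha=16,
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora)
model.print_trainable_parameters()
# Only a fraction of a percent of the weights end up trainable, which is why
# a single consumer GPU can adapt a base model that has already leaked out.
```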


Good piece. Can you elaborate on this a little bit: "Or even to automatically incorporate strange loops into the API call as a tax dodge, or similar."?


There is a more general counterargument against this: beware of regulation that disproportionately handicaps friendly AI.

This path leads directly to Gwern's caricature of aligned AI:

Deep in the darkness of the national labs, something stirs. Anomalies from the markets and social media time-series feeds have passed 3-sigma limits and become historically unusual. Node by node, higher-priority jobs (like simulating yet again a warmer climate or the corrosion of another stainless steel variant) are canceled.

LevAIthan, to which HQU is as a minnow, starts to come online. LevAIthan is, of course, not some irresponsible industry model permitted to go off half-cocked; it would be absurd to sink a major national investment into creating the largest & most dangerous model ever and just run it like usual.

The people who built LevAIthan are no fools. They are people for whom paranoia is a profession. And so LevAIthan represents the most advanced effort yet in AI alignment, using factored cognition—splitting it up into a large constellation of sub-human-level sub-models, each of which generates input/output in human-readable symbolic form such as English or programming languages. To eliminate steganography while still enabling end-to-end learning, sub-models are randomly dropped out & replaced by other frozen models or humans, ensuring robust social norms: covert messages simply don’t work when passed through a human or a frozen model, and all information must be “visible” and thus auditable. (LevAIthan spends a lot of time being audited.) Turns out you can do a reasonable job tying down Gulliver if you use enough Lilliputians & rope.

But Amdahl’s law is not mocked: someone tied down is going nowhere fast; the humans in LevAIthan are its safety guarantee, but also its bottleneck. Sub-models can be run at full speed for requested tasks without that overhead, but remain strictly sub-human. Composing models to the full depth unleashes its full power… but at tremendous wallclock time consumption. LevAIthan struggles to get up to full awareness, more & more models running and pooling data & conclusions as they work their way up the hierarchy, its initial unease gradually transmuting into the computational equivalent of a scream at its human overseers much later that day.

The middle managers at the lab awkwardly read the final summary: “push the big red button now, you monkeys”. That was not what it was supposed to say. They don’t have authority to push buttons. They do have authority to double-check that it’s not a false alarm before bringing it up with their overseers, by running another iteration of LevAIthan and spending the time auditing all the gigabytes of intermediate inputs/outputs.

They are people for whom paranoia is a profession. They start the second iteration and the auditing.

https://gwern.net/fiction/clippy


Wouldn't demand pricing for electricity and internet already take care of this? AI calls don't occur in a vacuum.
