Response To: Via Marginal Revolution, Brian Slesinsky proposes a tax on language model API calls.
Brian Slesinsky: My preferred AI tax would be a small tax on language model API calls, somewhat like a Tobin tax on currency transactions. This would discourage running language models in a loop or allowing them to “think” while idle.
For now, we mostly use large language models under human supervision, such as with AI chat. This is relatively safe because the AI is frozen most of the time [1]. It means you get as much time as you like to think about your next move, and the AI doesn’t get the same advantage. If you don’t like what the AI is saying, you can simply close the chat and walk away.
Under such conditions, a sorcerer’s apprentice shouldn’t be able to start anything they can’t stop. But many people are experimenting with running AI in fully automatic mode and that seems much more dangerous. It’s not yet as dangerous as experimenting with computer viruses, but that could change.
Such a tax doesn’t seem necessary today because the best language models are very expensive [2]. But making and implementing tax policy takes time, and we should be concerned about what happens when costs drop.
Another limit that would tend to discourage dangerous experiments would be a minimum reaction time. Today, language models are slow. It reminds me of using a dial-up modem in the old days. But we should be concerned about what happens when AIs start reacting to events much quicker than people.
Different language models quickly reacting to each other in a marketplace or forum could cause cascading effects, similar to a “flash crash” in a financial market. On social networks, it’s already the case that volume is far higher than we can keep up with. But it could get worse when conversations between AIs start running at superhuman speeds.
Financial markets don’t have limits on reaction time, but there are trading hours and circuit breakers that give investors time to think about what’s happening in unusual situations. Social networks sometimes have rate limits too, but limiting latency at the language model API seems more comprehensive.
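For concreteness, rate limits of the kind social networks apply are typically implemented with something like a token bucket, which caps sustained call frequency while still allowing short bursts. A minimal sketch follows; the class and parameter names are my own and purely illustrative, not any real provider’s API:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: each API call must acquire a
    token; tokens refill at a fixed rate, capping sustained call frequency
    while permitting short bursts up to the bucket's capacity."""

    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec       # tokens added back per second
        self.capacity = burst          # maximum burst size
        self.tokens = float(burst)     # bucket starts full
        self.last = time.monotonic()

    def try_acquire(self) -> bool:
        now = time.monotonic()
        # Refill based on elapsed time, capped at bucket capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# Example: at most 2 sustained calls per second, bursts of up to 5.
limiter = TokenBucket(rate_per_sec=2.0, burst=5)
allowed = sum(limiter.try_acquire() for _ in range(10))
print(allowed)  # the initial burst of 5 succeeds; the rest are throttled
```

A latency floor of the sort proposed here would be the same idea turned around: instead of refusing excess calls, the gate would delay each response until a minimum interval had elapsed.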
Limits on transaction costs and latency won’t make AI safe, but they should reduce some risks better than attempting to keep AI’s from getting smarter. Machine intelligence isn’t defined well enough to regulate. There are many benchmarks and it seems unlikely that researchers will agree on a one-dimensional measurement, like IQ in humans.
This seems to have even worse versions of all the objections Tyler Cowen raises against regulatory proposals for AI, while also not doing what you want it to do? Why does Tyler suddenly fail to notice these concerns? If indeed we are worried about losing to China, or about slowing down progress, or about unenforceable rules that imply dystopias once you work out their implications, don’t we need to apply such worries consistently?
As always, an AI regulation can either take existential risk seriously, it can take existential risk non-seriously or in a confused way, or it can ignore existential risk entirely.
This seems like it falls into the second category, for a few reasons.
The Ancient Art of Taxation
One simple reason for this would be: when do you pay your taxes? Who is making you?
If a dangerous AI were to come into existence, and start setting up API calls to itself or something similar where we would want the tax to apply, does the taxman appear and say ‘no! You shall not compute until you pay’? At best, this happens if you are already paying for the API calls through a third party, who will cut you off after a certain point anyway. When one is worried about rapid expansions in capabilities, or such systems rapidly getting out of control, a tax won’t help.
Let’s say the tax somehow did apply in advance in an enforceable way. How big a tax are we talking, anyway? The price of compute is rapidly declining. Are you going to impose a tax that is most of the marginal cost of usage? If you don’t, at its theoretical best this buys you a small amount of time.
Notice the contrast between a tax on API calls and a tax on tokens; right now you pay by the token to use the API, which reflects real costs. If you put a large tax on API calls, the same way you have a limited number of GPT-4 calls as a human, what happens? You do more and more increasingly bespoke prompt engineering, turn up the size and complexity of responses, and do more outside-LLM processing.
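To see the incentive concretely, here is a back-of-the-envelope sketch. The per-token price and the tax level are made-up illustrative assumptions, not real figures:

```python
# Hypothetical numbers: how a flat per-call tax changes incentives.
PRICE_PER_TOKEN = 0.00003   # dollars per token; illustrative assumption
TAX_PER_CALL = 0.01         # dollars per API call; illustrative assumption

def effective_tax_rate(tokens_per_call: int) -> float:
    """Tax as a fraction of the total cost of a call of a given size."""
    base = tokens_per_call * PRICE_PER_TOKEN
    return TAX_PER_CALL / (base + TAX_PER_CALL)

# A short call is taxed far more heavily, in relative terms, than a long
# one, so a flat per-call tax pushes users toward fewer, bigger calls.
short_rate = effective_tax_rate(100)     # ~77% of total cost is tax
long_rate = effective_tax_rate(10_000)   # ~3% of total cost is tax
print(f"{short_rate:.0%} {long_rate:.0%}")
```

Under these assumed numbers, the tax eats most of the cost of a small call but is a rounding error on a large one, which is exactly the pressure toward bespoke mega-prompts described above.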
Worst of all, this tax applies at the model usage level, where there is mundane utility and relatively low marginal risk, and ignores the training level. Each API call currently has a very low marginal cost, versus a high fixed cost to train the model. Once the model is trained, the API calls aren’t free exactly, but they are quite cheap; unless you are doing industrial-strength things they are basically free, and they will get cheaper over time.
So when you tax API calls, yes you are discouraging some specific types of strange loops of consideration differentially. But effectively most of what you are discouraging is extraction of small amounts of mundane utility per call.
The Almost as Ancient Art of the Tax Dodge
In turn, this moves people away from ‘train a bespoke set of distinct models and call between them’ or ‘use a smaller model that calls itself with scaffolding or does sampling or whatnot’ and towards exactly the worst possible thing, which is training the most powerful model possible, so that the marginal benefit per API call can afford to pay the taxes. Or even to automatically incorporate strange loops into the API call as a tax dodge, or similar.
Why would one impose a fixed tax per action on an action with variable marginal costs and marginal benefits, rather than a percentage tax on profits or revenue? If you’re going to impose a >100% tax on some actions, and a very low tax on others, you need to know exactly what you are doing and want to discourage.
Also, what happens with non-transactional API calls, or things that are not API calls at all? What happens with open source software run on some college student’s computer? What happens when the program itself starts doing operations that use the model’s capabilities without being API calls, a scenario that is obviously quite worrisome? How are you going to get a reasonable definition here that doesn’t actively drive activity to get that much more dangerous?
How are you going to check if every computer in the world is running an LLM? Consider the parallel to taxing humans for thinking. Strange that Tyler Cowen did not point out the obvious issues here.
Thus, as written, this seems like pretty terrible tax policy.
Focus On Training Runs
A much better tax policy would be to focus on a tax on training runs. Training that scales too much is where the biggest danger lies. This is what we want to discourage. So let’s tax that, ideally in super-linear fashion as the model size, compute used and data used go up.
That would encourage more use of less dangerous smaller models for mundane utility, while discouraging the activity with potentially limitless negative externalities.
One could then also impose an additional tax on marginal usage of existing models, perhaps if one is worried about humans being taxed unfairly compared to computers in addition to being worried about existential risks.
This should be proportional to compute to preserve economic efficiency. Otherwise, you are going to distort LLM use towards use of longer and more detailed API calls, which will be wasteful and create a lot of deadweight loss.
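A quick sketch of why, again with made-up numbers (the per-token price and tax levels are illustrative assumptions): a tax proportional to compute leaves the relative price of long versus short calls unchanged, while a flat per-call tax makes long calls relatively cheaper.

```python
PRICE_PER_TOKEN = 0.00003  # dollars per token; illustrative assumption

def cost(tokens: int, ad_valorem: float = 0.0, per_call: float = 0.0) -> float:
    """Total cost of one call under a proportional and/or flat tax."""
    return tokens * PRICE_PER_TOKEN * (1 + ad_valorem) + per_call

# Relative price of a 10,000-token call versus a 100-token call:
base_ratio = cost(10_000) / cost(100)                                  # 100x
prop_ratio = cost(10_000, ad_valorem=0.2) / cost(100, ad_valorem=0.2)  # still 100x
flat_ratio = cost(10_000, per_call=0.01) / cost(100, per_call=0.01)    # ~24x
print(base_ratio, prop_ratio, flat_ratio)
```

The proportional tax preserves relative prices, so it doesn’t push usage toward longer calls; under these assumptions the flat per-call tax compresses the price ratio from 100x to roughly 24x, subsidizing bloat.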
Another potential target would be to directly tax GPU hardware, either once at manufacture or sale, continuously, or both, which would also create a tracking regime.
What if the fear is the ordinary, non-existential danger of out-of-control AutoGPT-style activities, as opposed to fear of economic competition or existential risk? That is a tough threat model to respond to properly.
The tax levels required to do much actual discouragement would be prohibitive. It will be rare that the primary cost of such a process is its API calls, as opposed to the very risks such actions impose that we want to minimize, and the work required to set the process up correctly. To the extent that marginal costs start to approach marginal benefits, one assumes that is because similar such systems are in economic competition, where a small tax wouldn’t change things much.
Enforcement is a Problem
There’s also still the issue of how to enforce such a tax. If you’re worried about an out-of-control intelligent agent on the internet, are you going to count on the IRS to shut it down before damage is done?
As partly noted above, we have at least four very difficult enforcement issues. All of them are serious problems for the proposed regime.
The first difficulty is internal use. Models calling themselves, using open source software on your own computer, or a corporation using its own models. These are relatively dangerous cases. How are we going to detect and tax such usage? How are we going to enforce this reliably and quickly enough to prevent a dangerous situation if one arises?
The second difficulty is rogue use. In many scenarios we wish to guard against, we wish to guard against them exactly because the AI has escaped human control. It is not in any particular location or on any one computer in a way that lets us enforce tax collection upon it.
The third difficulty is timing. Even if we did have the ability to enforce taxes eventually, the time scale of developing AI threats is a different order of magnitude of speed from the time scale of IRS enforcement actions. By the time we even notice taxes are not paid, let alone enforce, how is it not already too late?
The fourth difficulty is international agreement. Tax havens are a known problem with any tax regime. If China isn’t willing to slow down its AI development under any circumstances, why would they agree to a prohibitive taxation scheme, especially one requiring such dystopian monitoring? What happens when the Cayman Islands refuses to collect the tax and starts selling bespoke services using AI? If the main marginal cost of AI is tax, and the main cost of many economically valuable actions is the cost of AI, the advantages offered by tax havens would be extreme.
To solve these overlapping problems robustly would, even more so than any rule against AI model development, require a rather draconian monitoring regime be implemented worldwide, whether or not one views this as dystopian. Otherwise, one is handing the future to whoever is most willing and able to dodge taxes, or to the AI models on their computers, depending on how that interaction proceeds.
Contrast this with a tax on training runs. You still have the issue of detection and monitoring, but that problem becomes far easier, as does enforcement. You have much easier targets to track, and you can ‘look at the results’ as well - as a last resort, if there is a model being used, you can ask whether tax was paid. If someone tries to dodge such a tax, they would do so in ways that we are likely to prefer.
What About Minimum Reaction Time?
Should we ‘be concerned when AIs start reacting to events quicker than people’?
Yes, I think we should. Also, that time has already come and gone, as anyone familiar with financial markets or AIs knows. GPT-4 happens to be ‘slow’ in some ways right now, but it is far faster than humans in others; most other current AI systems are faster than humans across the board, and future AI systems will doubtless be far faster than us in all the ways that matter.
Inevitably, AIs will be interacting with each other far faster than humans can process information. It would be wise to tackle the implications now. Could we meaningfully impose some sort of rate limitation?
Possibly?
If we did impose such a restriction, there would be many places it did not meaningfully bind and was at most slightly annoying, and other places where it very much did bind.
An exciting potential upside would be if this reaction time applied during training and greatly slowed the training of newer models, though this runs into the enforcement and international agreement issues noted above.
One big problem with this proposal is that there is a trade-off between model capabilities and model speed. The bigger and more capable the model, and thus the more dangerous in most ways, the slower it will run. If you introduce a minimum reaction time, you are pushing people towards developing and using more dangerous models, and towards more complex reactions designed to minimize the number of reaction steps, which could be importantly wasteful and distortionary.
Centrally, I notice that this is another case where all the standard anti-restriction, anti-regulation arguments come right back into play. In the financial markets an unrestricted agent is going to eat the restricted one’s lunch in an eyeblink. Many other such cases, too.
If AIs are constantly interacting with each other at faster-than-human speed, and meaningful restrictions are imposed on ours but not on others, isn’t that a huge strategic issue? If they are imposed on everyone, doesn’t this slow economic growth?
Why doesn’t this fail under the broad ‘lose to China’ objection? If we don’t build the fast AI, someone else will; or, even easier, if we don’t run our AIs fast, someone else will run them faster. It would be very easy for the open source people to take the wait command out of their function calls, or for some financial firm to cheat on this, and that’s that. Not a very dignified way to die, I’d say.
Thus, we’d once again be talking about a far, far more dystopian and extreme surveillance and electronic monitoring state than one focused on GPU restrictions, if we wanted such restrictions to hold and have teeth. There would be no physical thing one could target; you’d need to be inside the system of every computer on the planet.
That’s not going to happen. Seems like a bad place to focus.
Executive Summary by GPT-4 (using system message and lightly edited)
- Brian Slesinsky proposes a tax on language model API calls to discourage dangerous AI experiments and reduce risks.
- Issues with Slesinsky's proposal:
  - Tax enforcement and timing: difficult to enforce on rogue/internal use, and international agreement would be needed.
  - Tax may not be effective in preventing dangerous AI scenarios.
  - Distortion of AI usage: a fixed tax per action could lead to wasteful and dangerous behavior.
- Alternative tax policy suggestions:
  - Best option, a tax on training runs: discourages dangerous large models, easier to enforce.
  - Proportional tax on compute usage: preserves economic efficiency.
  - Tax on GPU hardware: creates a tracking regime.
- Minimum reaction time proposal:
  - AIs already interact faster than humans; introducing restrictions could create strategic issues.
  - Imposing restrictions would require extreme surveillance and electronic monitoring.
- Key considerations for AI tax policy:
  - Protecting against existential risks and dangerous AIs.
  - Keeping activity legible and within human control.
  - Raising revenue and addressing tax code favoritism.
  - Challenges in defining, enforcing, and agreeing on international tax rules.
Conclusion
There are several distinct reasons to consider some form of an AI tax, including:
- Protecting against existential risks and dangerous AIs.
- Keeping activity legible and within human control.
- Raising revenue.
- Stopping the tax code from favoring AI use over humans, as humans are taxed.
- Making the tax code favor humans over AIs, to protect jobs and people.
The dangers of such regimes include:
- Defining what is being taxed is often tricky.
- Tax avoidance that could steer activity towards inefficient or dangerous behavior.
- Taxes on model use could drive investment in more dangerous models.
- Enforcement of such laws is extremely difficult, implying extreme surveillance.
- International agreement on such rules seems necessary, and difficult to get.
I tend to think liability law has a better chance of mitigating risks than tax law. Robin Hanson seems to express something similar here:
https://www.overcomingbias.com/p/foom-liability