Man With a Plan
The primary Man With a Plan this week for government-guided AI prosperity was UK Prime Minister Keir Starmer, with a plan coming primarily from Matt Clifford. I’ll be covering that soon.
Today I will be covering the other Man With a Plan, Sam Altman, as OpenAI offers its Economic Blueprint.
Cyrps1s (CISO OpenAI): AI is the ultimate race. The winner decides whether the future looks free and democratic, or repressed and authoritarian.
OpenAI, and the Western World, must win - and we have a blueprint to do so.
Do you hear yourselves? The mask on race and jingoism could not be more off, or more firmly attached, depending on which way you want to set up your metaphor. If a movie had villains talking like this, people would say it was too on the nose.
Somehow the actual documents tell that statement to hold its beer.
Oh the Pain
The initial exploratory document is highly disingenuous, trotting out stories of the UK requiring people to walk in front of cars waving red flags and talking about ‘AI’s main street,’ while threatening that if we don’t attract the $175 billion in funds awaiting investment in AI, it will flow to China-backed projects. They even talk about creating jobs… by building data centers.
The same way some documents scream ‘an AI wrote this,’ others scream ‘the authors of this post are not your friends and are talking their book with some mixture of politics-talk and corporate-speak in the most cynical way you can imagine.’
I mean, I get it, playas gonna play, play, play, play, play. But can I ask OpenAI to play with at least some style and grace? To pretend to pretend not to be doing this, a little?
As opposed to actively inserting so many Fnords that their document causes physical pain.
The full document starts out in the same vein. Chris Lehane, their Vice President of Global Affairs, writes an introduction as condescending as any I can remember, and that introduction plus the ‘where we stand’ section repeat the same deeply cynical rhetoric from the summary.
In some sense, it is not important that the way the document is written makes me physically angry and ill in a way I endorse - to the extent that if it doesn’t set off your bullshit detectors and reading it doesn’t cause you pain, then I notice that there is at least some level on which I shouldn’t trust you.
But perhaps that is the most important thing about the document? That it tells you about the people writing it. They are telling you who they are. Believe them.
This is related to the ‘truesight’ that Claude sometimes displays.
As I wrote that, I was only on page 7, and hadn’t even gotten to the actual concrete proposals.
The actual concrete proposals are a distinct issue. I was having trouble reading through to find out what they are because this document filled me with rage and made me physically ill.
It’s important to notice that! I read documents all day, often containing things I do not like. It is very rare that my body responds by going into physical rebellion.
No, the document hasn’t yet mentioned even the possibility of any downside risks at all, let alone existential risks. And that’s pretty terrible on its own. But that’s not even what I’m picking up here, at all. This is something else. Something much worse.
Worst of all, it feels intentional. I can see the Fnords. They want me to see them. They want everyone to implicitly know they are being maximally cynical.
Actual Proposals
All right, so if one pushes through to the second half and the actual ‘solutions’ section, what is being proposed, beyond ‘regulating us would be akin to requiring someone to walk in front of every car waving a red flag, no literally’?
The top-level numbered statements describe what they propose; I attempted to group and separate the proposals for clarity. The nested statements (a, b, etc.) are my reactions.
They say the Federal Government should, in a section where they actually say words with meanings rather than filling it with Fnords:
Share national security information and resources.
Okay. Yes. Please do.
Incentivize AI companies to deploy their products widely, including to allied and partner nations and to support US government agencies.
Huh? What? Is there a problem here that I am not noticing? Who is not deploying, other than in response to other countries’ regulations saying they cannot deploy (e.g. the EU)? Or are you trying to actively say that safety concerns are bad?
Support the development of standards and safeguards, and ensure they are recognized and respected by other nations.
In a different document I would be all for this - if we don’t have universal standards, people will go shopping. However, in this context, I can’t help but read it mostly as pre-emption, as in ‘we want America to prevent other states from imposing any safety requirements or roadblocks.’
Share its unique expertise with AI companies, including on mitigating threats such as cyber and CBRN.
Yes! Very much so. Jolly good.
Help companies access secure infrastructure to evaluate model security risks and safeguards.
Yes, excellent, great.
Promote transparency consistent with competitiveness, protect trade secrets, promote market competition, ‘carefully choose disclosure requirements.’
I can’t disagree, but how could anyone?
The devil is in the details. If this had good details, and emphasized that the transparency should largely be about safety questions, it would be another big positive.
Create a defined, voluntary pathway for companies that develop LLMs to work with government to define model evaluations, test models and exchange information to support the companies’ safeguards.
This is about helping you, the company? And you want it to be entirely voluntary? And in exchange, they explicitly want preemption from state-by-state regulations.
Basically this is a proposal for a fully optional safe harbor. I mean, yes, the Federal government should have a support system in place to aid in evaluations. But notice how they want it to work - as a way to defend companies against any other requirements, which they can in turn ignore when inconvenient.
Also, the goal here is to ‘support the companies’ safeguards,’ not to in any way see if the models are actually a responsible thing to release on any level.
Amazing to request actively less than zero Federal regulations on safety.
Empower the public sector to quickly and securely adopt AI tools.
I mean, sure, that would be nice if we can actually do it as described.
A lot of the components here are things basically everyone should agree upon.
Then there are the parts that, rather than going hand-in-hand with an attempt to not kill everyone and to guard against catastrophes, attempt to ensure that no one else tries to stop catastrophes or prevent everyone from being killed. Can’t have that.
For AI Builders
They also propose that AI ‘builders’ could:
Form a consortium to identify best practices for working with NatSec.
Develop training programs for AI talent.
I mean, sure, those seem good, and we should have an antitrust exemption to allow actions like this, along with one that allows them to coordinate, slow down or pause in the name of safety if it comes to that, too. Not that this document mentions that.
Think of the Children
Sigh, here we go. Their solutions for thinking of the children are:
Encourage policy solutions that prevent the creation and distribution of CSAM. Incorporate CSAM protections into the AI development lifecycle. ‘Take steps to prevent downstream developers from using their models to generate CSAM.’
This is effectively a call to ban open source image models. I’m sorry, but it is. I wish it were not so, but there is no known way to open source image models and have them not be used for CSAM, I don’t see any reason to expect this to be solvable, and notice the reference to ‘downstream developers.’
Promote conditions that support robust and lasting partnerships among AI companies and law enforcement.
Content Identification
Apply provenance data to all AI-generated audio-visual content. Use common provenance standards. Have large companies report progress.
Sure. I think we’re all roughly on the same page here. Let’s move on to ‘preferences.’
People should be ‘empowered to personalize their AI tools.’
I agree we should empower people in this way. But what does the government have to do with this? None of their damn business.
People should control how their personal data is used.
Yes, sure, agreed.
‘Government and industry should work together to scale AI literacy through robust funding for pilot programs, school district technology budgets and professional development trainings that help people understand how to choose their own preferences to personalize their tools.’
No. Stop. Please. These initiatives never, ever work; we need to admit this.
But also shrug, it’s fine, it won’t do that much damage.
And then, I feel like I need to fully quote this one too:
In exchange for having so much freedom, users should be responsible for impacts of how they work and create with AI. Common-sense rules for AI that are aimed at protecting from actual harms can only provide that protection if they apply to those using the technology as well as those building it.
If seeing the phrase ‘In exchange for having so much freedom’ doesn’t send a chill down your spine, We Are Not the Same.
But I applaud the ‘as well as’ here. Yes, those using the technology should be responsible for the harm they themselves cause, so long as this is ‘in addition to’ rather than shoving all responsibility purely onto them.
Infrastructure Week
Finally, we get to ‘infrastructure as destiny,’ an area where we mostly agree on what is to actually be done, even if I despise a lot of the rhetoric they’re using to argue for it.
Ensure that AIs can train on all publicly available data.
This is probably the law now and I’m basically fine with it.
‘While also protecting creators from unauthorized digital replicas.’
This seems rather tricky if it means something other than ‘stop regurgitation of training data’? I assume that’s what it means, while trying to pretend it’s more than that. If it’s more than that, they need to explain what they have in mind and how one might do it.
Digitize government data currently in analog form.
Probably should do that anyway, although a lot of it shouldn’t go on the web or into LLMs. Kind of a call for government to pay for data curation.
‘A Compact for AI’ for capital and supply chains and such among US allies.
I don’t actually understand why this is necessary, and worry this amounts to asking for handouts and to allow Altman to build in the UAE.
‘AI economic zones’ that speed up the permitting process.
Or we could, you know, speed up the permitting process in general.
But actually we can’t and won’t, so even though this is deeply, deeply stupid and second best, it’s probably fine. Directionally this is helpful.
Creation of AI research labs and workforces aligned with key local industries.
This seems like pork barrel spending, an attempt to pick our pockets; we shouldn’t need to subsidize this. To the extent there are applications here, the bottleneck won’t be funding, it will be regulations and human objections. Let’s work on those instead.
‘A nationwide AI education strategy’ to ‘help our current workforce and students become AI ready.’
I strongly believe that what this points towards won’t work. What we actually need is to use AI to revolutionize the education system itself. That would work wonders, but you all (in government reading this document) aren’t ready for that conversation and OpenAI knows this.
More money for research infrastructure and science. Basically, have the government buy the scientists a bunch of compute and give OpenAI business?
Again this seems like an attempt to direct government spending and get paid. Obviously we should get our scientists AI, but why can’t they just buy it the same way everyone else does? If we want to fund more science, why this path?
Leading the way on the next generation of energy technology.
No arguments here. Yay next generation energy production.
Clearly Altman wants Helion to get money but I’m basically fine with that.
Dramatically increase federal spending on power and data transmission and streamlined approval for new lines.
I’d emphasize approvals and regulatory barriers more than money.
Actual dollars spent don’t seem to me like the bottleneck, but I could be convinced otherwise.
If we have a way to actually spend money and have that result in a better grid, I’m in favor.
Federal backstops for high-value AI public works.
If this is more than ‘build more power plants and transmission lines and batteries and such’ I am confused what is actually being proposed.
In general, I think helping get us power is great, having the government do the other stuff is probably not its job.
Paying Attention
When we get down to the actual asks in the document, a majority of them I actually agree with, and most of them are reasonable, once I was able to force myself to read the words intended to have meaning.
There are still two widespread patterns to note within the meaningful content.
The easy theme, as you would expect, is the broad range of ‘spend money on us and other AI things’ proposals that don’t seem like they would accomplish much. There are some proposals that do seem productive, especially around electrical power, but a lot of this seems like the traditional ways the Federal government gets tricked into spending money. As long as this doesn’t scale too big, I’m not that concerned.
Then there is the play to defeat any attempt at safety regulation, via Federal regulations that actively net interfere with that goal in case any states or countries wanted to try and help. A common standard for this is clearly desirable, but a voluntary safe harbor with preemption, in exchange for various nebulous forms of potential cooperation, cannot be the basis of our entire safety plan. That appears to be the proposal on offer here.
The real vision, the thing I will take away most, is in the rhetoric and presentation, combined with the broader goals, rather than the particular details.
OpenAI now actively wants to be seen as pursuing this kind of obviously disingenuous jingoistic and typically openly corrupt rhetoric, to the extent that their statements are physically painful to read - I dealt with much of that around SB 1047, but this document takes that to the next level and beyond.
OpenAI wants no enforced constraints on their behavior, and they want our money.
OpenAI are telling us who they are. I fully believe them.
nerds getting outsmarted by sociopaths, name a more iconic duo
AI economic zones should also allow you to build apartments. For AI