36 Comments

Initial thought: I would have splurged for an actual artist to draw the cover of the blueprint, not used a DALL-E generation.

AI economic zones should also allow you to build apartments. For AI.

And while we are at it, just create an 'economic zone' where there is basically zero government regulation for anything, and we can build and make the future better.

nerds getting outsmarted by sociopaths, name a more iconic duo

But but the nerds get to work for the current “it” company. It impresses their friends and helps them get chicks.

Note - your nested responses are still labeled 1/2/3 instead of a/b/c

For me they are correct on the web, but show up as all numbers on the substack app and in my RSS reader. This is always the case with DWATV lists. No idea why. My only advice is to switch to the web for these big nested-list-style posts, if you can.

I don't know what to do about this - the Substack app is broken in several ways, not only this.

Ah that did it, thanks

Regarding the jingoist bravado, I suspect this is intended for an audience of one.

It's actually insane how you people think that most people in most countries are like you and aren't actually more nationalistic than the average Republican.

> When we get down to the actual asks in the document, a majority of them I actually agree with, and most of them are reasonable, once I was able to force myself to read the words intended to have meaning.

The movement for an AI pause was a total failure with both policy makers and the general public. That was a signal to the rationalist/AI safety community that they do not speak *or even understand* the language commonly used in these discussions.

Focus on the content of the proposals, and aggressively down weight the emotional response provoked by the form and the language. This community is not the intended audience, and the language this community would find congenial and persuasive would be very ineffective at reaching the intended audience.

This document just reads like a set of completely generic, mundane statements to me. Yeah we should consider all stakeholders and the impact on blah blah blah.

Could this be the rise of the Fnords?

I suspect the wording of these proposals would be extremely different if Kamala were president.

Also, I noted how they mention talent quite a bit to ensure American dominance, but never talk about immigration, avoiding any ire from the current administration.

Xi Jinping must be a complete idiot. He has free rein to do what he wants, and apparently cares greatly about the AI race, and yet he isn't trying to get millions of Indian coders to move to China. Talk about leaving money on the table!

I would say this is just putting another mask on, palatable to the new president. Just like Zuck going on Rogan to discuss his deeply hidden reservations over content moderation in the Biden years. They hit what would seem to be his first level preferences on US-China economic competition and a carefully worded "America First" AI education scheme.

So you think he was lying (in spite of all the very well-known evidence to the contrary) about the White House threatening Facebook over moderation? Because if this is all an act, it would seem wildly redundant for the White House or Congress to need to do any of that.

I think his primary aim is continuing his business model. That allows for flexible positions on moderation depending on the regime in power.

I am really getting tired of people using the phrase "clear, common-sense". In EY's words, I think it's a 'stop' word: it's supposed to stop any thought about the actual complexities involved or about making real tradeoffs... In the OpenAI document, it's particularly annoying, since it's basically presenting "no regulations" as the only possible "common-sense" approach to AI regulation. Ugh.

Yes, like "common sense immigration policy", which of course means de facto open borders.

I don't understand this "Do you hear yourselves? The mask on race and jingoism could not be more off, or firmly attached, depending on which way you want to set up your metaphor."

It seems to me China is a real risk, and a real threat to freedom and democratic values. (Which is NOT at all to say I trust OpenAI with that, or even our own government... Just that I trust China (broadly) even less.)

If OpenAI achieves AGI/ASI, it very well could be a winner-take-all situation. It is by no means a zero-sum game, and China would (ideally) have access to these tools (or some "restricted" version)...

What am I missing?

I read it as "we must do X or else China wins!" serving an analogous negative function (thing we must avoid at any cost) to the issue you highlight with "clear, common-sense" (thing we must not deviate from at any cost) immediately above. "Must" only works if an ASI-enabled China is understood to be arbitrarily worse than the next worst competing outcome that our actions impacting AI+China might lead to instead (pick your poison).

Give me a break - we wouldn't even be having this discussion if it were Russia or Hungary or whatever. The real story about race here is the mental gymnastics progressives engage in to defend a country like China because they're not white.

Speaking for myself, I would much rather have a Chinese-aligned ASI, than a kill-everyone ASI, and racing seems more likely to ensure the latter.

Yes, we all would rather survive (broadly). But they released the weights, which (broadly) increases p(doom). Death < Chinese ASI < US ASI < general libertarian-esque ASI. All of those are "generally".

Racing is already happening, China or not. OpenAI is clearly pushing as hard as they can to AGI, and other groups don't want to fall behind. DeepSeek seems to increase the race dynamics, because it puts additional pressure on OpenAI to keep ahead.

“While also protecting creators from unauthorized digital replicas.” could also be referring to deep fakes and voice cloning?

Oh, yeah, it probably does also refer to that, good catch. Not sure why I didn't think of that.

The document says "People should be empowered to personalize their AI tools, including through controls on how their personal data is used".

I guess the second part of the sentence might be relevant to what the authors actually meant here.

If this is actually about data privacy regulations, then there's a potential role for government in imposing penalties for violating them.

e.g. if some AI company sells me access to a model, with a contractual promise that they will keep confidential the personal data that I submit to the model, and then they don't stick to this (e.g. they use my personal data to train a new model, and this ends up leaking my data), then is there any redress?

It occurs to me that this interacts with the idea that open source models might be banned. There's a possible future where approximately everyone runs open source models locally, because absolutely no-one trusts an AI company with their data.

As a potential customer of an LLM service, you might like to buy that service from a provider you trust to respect confidentiality promises.

This is somewhat in conflict with...

Certain AI companies are being very visibly sued by copyright holders for making use of their copyright data without authorization in a way which -- lawsuit alleges -- was not fair use.

What happened to "pay the piper, call the tune"?
