Initial thought: I would have splurged for an actual artist to draw the cover of the blueprint, not used a DALL-E generation.
AI economic zones should also allow you to build apartments. For AI
and while we are at it, just create an 'economic zone' where there is basically zero government regulation for anything and we can build and make the future better
nerds getting outsmarted by sociopaths, name a more iconic duo
But but the nerds get to work for the current “it” company. It impresses their friends and helps them get chicks.
Note - your nested responses are still labeled 1/2/3 instead of a/b/c
For me they are correct on the web, but show up as all numbers on the substack app and in my RSS reader. This is always the case with DWATV lists. No idea why. My only advice is to switch to the web for these big nested-list-style posts, if you can.
I don't know what to do about this - the Substack app is broken in several ways, not only this.
Ah that did it, thanks
Regarding the jingoist bravado, I suspect this is intended for an audience of one.
It's actually insane how you people think that most people in most countries are like you and aren't actually more nationalistic than the average Republican.
Podcast episode for this post:
https://open.substack.com/pub/dwatvpodcast/p/on-the-openai-economic-blueprint
> When we get down to the actual asks in the document, a majority of them I actually agree with, and most of them are reasonable, once I was able to force myself to read the words intended to have meaning.
The movement for an AI pause was a total failure with both policy makers and the general public. That was a signal to the rationalist/AI safety community that they do not speak *or even understand* the language commonly used in these discussions.
Focus on the content of the proposals, and aggressively down weight the emotional response provoked by the form and the language. This community is not the intended audience, and the language this community would find congenial and persuasive would be very ineffective at reaching the intended audience.
This document just reads like a set of completely generic, mundane statements to me. Yeah we should consider all stakeholders and the impact on blah blah blah.
Could this be the rise of the Fnords?
I suspect the wording of these proposals would be extremely different if Kamala were president.
Also, I noted how they mentioned talent quite a bit to ensure American dominance, but never talked about immigration, avoiding any ire from the current administration.
Xi Jinping must be a complete idiot. He has free rein to do what he wants, and apparently cares greatly about the AI race, and yet he isn't trying to get millions of Indian coders to move to China. Talk about leaving money on the table!
I would say this is just putting another mask on, palatable to the new president. Just like Zuck going on Rogan to discuss his deeply hidden reservations over content moderation in the Biden years. They hit what would seem to be his first level preferences on US-China economic competition and a carefully worded "America First" AI education scheme.
So you think he was lying (in spite of all the very well-known evidence to the contrary) about the White House threatening Facebook over moderation? Because if this is all an act, it would seem wildly redundant for the White House or Congress to need to do any of that.
I think his primary aim is continuing his business model. That allows for flexible positions on moderation depending on the regime in power.
I am really getting tired of people using the phrase "clear, common-sense". In EY's words, I think it's a 'stop' word: it's supposed to stop any thought about the actual complexities involved or about making real tradeoffs... In the OpenAI document, it's particularly annoying, since it's basically saying that "no regulations" is the only possible "common-sense" approach to AI regulation. Ugh.
Yes, like "common sense immigration policy", which of course means de facto open borders.
I don't understand this: "Do you hear yourselves? The mask on race and jingoism could not be more off, or firmly attached, depending on which way you want to set up your metaphor."
It seems to me China is a real risk, and a real threat to freedom and democratic values. (Which is NOT at all to say I trust OpenAI with that, or even our own government... Just that I trust China (broadly) even less.)
If OpenAI achieves AGI/ASI, it very well could be a winner-take-all situation. Even so, it is by no means a zero-sum game, and China would (ideally) have access to these tools (or some "restricted" version)...
What am I missing?
I read it as "we must do X or else China wins!" serving an analogous negative function (thing we must avoid at any cost) to the issue you highlight with "clear, common-sense" (thing we must not deviate from at any cost) immediately above. "Must" only works if an ASI-enabled China is understood to be arbitrarily worse than the next worst competing outcome that our actions impacting AI+China might lead to instead (pick your poison).
Give me a break - we wouldn't even be having this discussion if it were Russia or Hungary or whatever. The real story about race here is the mental gymnastics progressives engage in to defend a country like China because they're not white.
Speaking for myself, I would much rather have a Chinese-aligned ASI, than a kill-everyone ASI, and racing seems more likely to ensure the latter.
Yes, we all would rather survive (broadly). But they released the weights, which (broadly) increases p(doom). Death < Chinese ASI < US ASI < general libertarian-esque ASI. All of those are "generally".
Racing is already happening, China or not. OpenAI is clearly pushing as hard as they can to AGI, and other groups don't want to fall behind. DeepSeek seems to increase the race dynamics, because it puts additional pressure on OpenAI to keep ahead.
“While also protecting creators from unauthorized digital replicas.” could also be referring to deep fakes and voice cloning?
Oh, yeah, it probably does also refer to that, good catch. Not sure why I didn't think of that.
The document says "People should be empowered to personalize their AI tools, including through controls on how their personal data is used".
I guess the second part of the sentence might be relevant to what the authors actually meant here.
If this is actually about data privacy regulations, then there's a potential role for government in imposing penalties for privacy violations.
e.g. if some AI company sells me access to a model, with a contractual promise that they will keep confidential any personal data I submit to the model, and then they don't stick to this (e.g. they use my personal data to train a new model, and this ends up leaking my data), then is there any redress?
It occurs to me that this interacts with the idea that open source models might be banned. There's a possible future where approximately everyone runs open source models locally, because absolutely no one trusts an AI company with their data.
As a potential customer of a LLM service, you might like to buy that service from a provider you trust to respect confidentiality promises.
This is somewhat in conflict with...
Certain AI companies are being very visibly sued by copyright holders for making use of their copyrighted data without authorization in a way which -- the lawsuits allege -- was not fair use.
What happened to "pay the piper, call the tune"?