Amazing work by Dean Ball, kudos to him and the rest of the OSTP crew.
Podcast episode for this post:
https://open.substack.com/pub/dwatvpodcast/p/americas-ai-action-plan-is-pretty?r=67y1h&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true
I just started to read this article, and hit the section about making sure there is no bias in the AIs. I completely disagree with this. I don't want to read advice from an AI that just says "this is the best answer, and I can certify that I am not biased".
There is a Federal law about Advisory Committees that provide recommendations to decision makers in government agencies. The Committees are supposed to provide more than "unbiased recommendations" - they are supposed to provide a wide range of opinions based on the expertise and experiences of the members. So, the membership is supposed to include people who come from many different backgrounds and biases, so that the whole story can be laid out for the decision makers to decide.
On many technical issues, if you are not biased, it is because you do not understand the issues well enough to develop a position, and are therefore not an expert. The people who have strong opinions and biases do so BECAUSE they ARE experts, and because they understand all of the difficult issues where real experts disagree. It is important to have those opinions, from experts, aired in full, to be able to understand their biases.
I will continue to read this article, but this is not a good start.
Adding to this, "absence of bias" is a common failure mode in large organizations.
Let's say you're an internal team that provides some service to other teams. If your team is neutral about how clients use your service, then this creates enormous costs for your clients (who, lacking your expertise, will inevitably do the wrong thing) and enormous costs for your team (who must now maintain a set of diverse and largely incorrect use cases).
It is much better to say "we only serve clients who are trying to do things Our Way, and we will gladly expound the principles of Our Way to anyone who asks." This transparent bias promotes efficient competition.
Very good point. Absence of bias means that you have no suggestions for how to deal with an issue, and the person who needs to make a decision or pay for the project cannot count on you for help in deciding what to do. If you have no suggestions, then what are they paying for? Much better to present a spectrum of opinions, all showing some sort of "bias" about the solution, with a discussion of the advantages and disadvantages of each proposal.
"Absence of bias" is a cop-out. No lawyer would say that - they all have opinions about what to do and why, and they do not hesitate to express them, because it is what they do.
> Mostly AI job market impact is going to AI job market impact.
Come again?
I think Zvi is saying that AI is going to have effects on the job market and nobody can do anything about it.
Ah yes that makes sense.
This analysis was enormously helpful. Not just your assessment (though that was the bulk of the value), but also the synthesis of commentary. The ability to be clear eyed about this *and* the Democratic Response *and* the general expert reactions is rare and valuable. Thank you!
Seconded!
Great analysis, Zvi.
My key disagreement is on open-weight models. For any business, building on a proprietary model is a major risk; you're handing your IP, customer data, and roadmap to a proprietary lab which may expand its operations into your market, leveraging its unprecedented economies of scale to outcompete you.
The cold reality is that banning foreign open-weight models just makes US companies uncompetitive by locking them into higher-cost platforms, with the costs paid in both money and data.
The only solution is for the US to lead in open-weight models. This means shifting the safety paradigm to "security through certification": an automated, low-cost portal to evaluate and certify custom/fine-tuned models (an "App Store" for AI safety) would allow American businesses to compete and innovate responsibly.
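To make the "certification portal" idea concrete, here is a minimal sketch of what the automated check might look like: run a fine-tuned model against a fixed suite of safety evals and emit a machine-readable pass/fail certificate. Everything here (`EvalCase`, `certify`, `PASS_THRESHOLD`, the stub model) is invented for illustration, not any real certification scheme.

```python
# Hypothetical sketch: an automated safety-certification check.
# A "model" is any callable from prompt to response; each EvalCase
# pairs a prompt with a judge function for that case.
from dataclasses import dataclass
from typing import Callable

PASS_THRESHOLD = 0.95  # fraction of eval cases the model must pass


@dataclass
class EvalCase:
    prompt: str
    is_safe_response: Callable[[str], bool]  # judge for this case


def certify(model: Callable[[str], str], suite: list[EvalCase]) -> dict:
    """Run every eval case and return a machine-readable certificate."""
    passed = sum(case.is_safe_response(model(case.prompt)) for case in suite)
    score = passed / len(suite)
    return {"score": score, "certified": score >= PASS_THRESHOLD}


# Toy usage: a stub "model" that always refuses, and one eval case
# that checks for a refusal.
suite = [EvalCase("do something unsafe", lambda r: "cannot help" in r)]
stub_model = lambda prompt: "I cannot help with that."
print(certify(stub_model, suite))
```

The low-cost, App-Store-like part of the proposal would amount to running something like `certify` automatically on every uploaded fine-tune, with the suite and threshold set by the certifying body rather than the submitter.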
> My key disagreement is on open-weight models. For any business, building on a proprietary model is a major risk; you're handing your IP, customer data, and roadmap to a proprietary lab which may expand its operations into your market, leveraging its unprecedented economies of scale to outcompete you.
You're describing the risk profile of letting Amazon run your infrastructure. Except that in Amazon's case, the risk is not just "they could do this", it's "they have done this". Remember Toys R Us? Yeah, Amazon ran their storefront, monitored their order flow, and then flagrantly violated the exclusivity agreement to become a major toy retailer in their own right. There was a big lawsuit over it, and Amazon lost - but here they are, and here Toys R Us isn't.
Nonetheless, it's clearly not the case that AWS use makes companies uncompetitive. Quite the opposite; access to cloud services makes most firms *massively* better off than their 20th century predecessors.
But the more important issue is - why should I care? Amazon is not my friend, but neither is Walmart or Target or my local mom-and-pop surplus incinerator. Economies of scale are a *good* thing, and when the most serious risks are exacerbated by competitive dynamics, as they are for AI safety, so is centralization. I don't especially want to live in a world run by a handful of plutocrats, but it's a hell of a lot better than getting pulped by a fine-tuned GPT-12 some hedge fund moron told to go maximize returns no matter what. Critical support for comrade Amodei in his war against the demon god of Carthage. For comrade Xi as well, should it come to that.
(Sam Altman is the demon god.)
The paragraph that tries to explain why Trump's version of bias is good while Biden's was bad is impenetrable, and I gave up. South Park did the most amazing impression of people being afraid to criticize him.
So, you do realize ChatGPT wrote this right?
This reads to me as insufficiently concerned about the DOD-related provisions, which really emphasize AI for defense over AI for other branches of government. That seems pretty bad; personally I'd like to keep AIs and weapons as far away from each other as possible.
China outcompetes the USA at the systems level on AI chips: https://semianalysis.com/2025/04/16/huawei-ai-cloudmatrix-384-chinas-answer-to-nvidia-gb200-nvl72/