This is so low dignity
Whew, this was a fun one to convert to the "Full Cast" podcast recording. Now in full dramatized reading, here is the podcast episode for this post:
https://open.substack.com/pub/dwatvpodcast/p/a-live-look-at-the-senate-ai-hearing?r=67y1h&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true
> So is the plan then to have AI developers not vet their systems before rolling them out?
The headline is a straw man
* OpenAI vets their systems before rolling them out and shares those details with everyone.
* The trillions of lines of software that deliver the internet to you also have no government-mandated vetting, and yet they work surprisingly well.
* It seems fairly obvious that a government-mandated vetting process would increase costs and might reduce quality. Imagine Google having to follow a mandated vetting process for new algorithm changes.
* It seems fairly obvious that companies should be free to serve their customers, especially in light of no one (least of all the government) knowing what kind of vetting would make sense 9 months from now, as if a specific procedure could even apply to all of the places one may use an LLM.
> * OpenAI vets their systems before rolling them out and shares those details with everyone.
Voluntarily. As they choose. For now.
> * It seems fairly obvious that a government-mandated vetting process would increase costs and might reduce quality. Imagine Google having to follow a mandated vetting process for new algorithm changes.
While I agree that government-mandated "standards" might not be the solution (and could even encourage hiding of malign capabilities), I agree with Daniel Kokotajlo that mandating "transparency" is still needed: https://blog.ai-futures.org/p/training-agi-in-secret-would-be-unsafe/comments
To quote from Daniel:
> I used to think that announcing AGI milestones would cause rivals to accelerate and race harder; now I think the rivals will be racing pretty much as hard as they can regardless. And in particular, I expect that the CCP will find out what’s happening anyway, regardless of whether the American public is kept in the dark. Continuing the analogy to the Manhattan Project: They succeeded in keeping it secret from Congress, but failed at keeping it secret from the USSR.
> I thought too simplistically about openness — on one end of the spectrum is open-sourcing model weights and code; on the other end is the default scenario I sketched above. I now advocate a compromise in which e.g. the public knows what the latest systems are capable of and is able to observe & critique the decisionmakers making the tough decisions footnoted earlier, and the scientific community is able to do alignment research on the latest models and critique the safety case, and yet terrorists don’t have access to the weights.
[i.e. this "transparency" is not the same as "giving away the weights". it means allowing the scientific community to do red-teaming on the latest models]
[of course, all of this is moot if you don't believe in AI Existential Risk in the first place... in which case you are arguing in bad-faith]
Zvi, who would you target?
Congressmen? (ultimately necessary, but for now, they receive a firehose of lobbying)
Podcasters? (because the hype has TOTALLY exploded since "AI 2027" came out)
College and High School students? (more indirect, more open to ridicule by their elders, but more visible when they get together)
Yuppies and Baby Boomers? (similar to College and High School students, possibly more respected, but less informed)
"Influencers?"
Pope Leo XIV has said he chose his papal name largely out of concern about AI. He keeps mentioning it, as do Vatican officials, and it's only been a few days. It's clearly going to be a focus.
An exhortation or even encyclical about AI from the Vatican could be a wildcard as far as shifting public discourse in the next year. Such a document would likely focus on labor and environmental issues, as well as warning believers against idolatry.
However, the most recent Vatican document, "Antiqua et nova," does acknowledge that existential risk is real, but adds: "At the same time, while the theoretical risks of AI deserve attention, the more immediate and pressing concern lies in how individuals with malicious intentions might misuse this technology."
hmmm.... I never thought of targeting the Church with AI information... but I did attend a Jesuit University and they CAN be highly-informed and sincere
from what i've read they already have a handle on things. not batting a thousand but the document was remarkably sensible on the whole.
that's good to hear. but the point is to translate that "handle on things" into activism. I have contacts at my old Jesuit alma mater. I haven't talked to some of them in years, but it's worth pursuing
Zvi's post today, IMO, is only more proof we need a grassroots "AI Safety" wing
Someone also needs to find any Anglicans or other Protestants who still read C.S. Lewis, and encourage them to read Lewis's third science fiction novel, "That Hideous Strength".
Lewis had been reading Olaf Stapledon's "Last and First Men" (and later praised Arthur C. Clarke's "Childhood's End", which appeared after "That Hideous Strength" was written). Those are essentially proto-Singularitarian novels, concerned with the far-future destiny of the human race.
Lewis rather enjoyed these proto-Singularitarian novels, but he was cynical enough to see how it would all go wrong. He invented "N.I.C.E.", the "National Institute for Coordinated Experiments", which was run by a rogues' gallery of weird sociopaths who could have come straight from Silicon Valley. They were attempting to bring about a biological singularity, and they instead wound up creating something profoundly nasty.
It's a weird book, but it's told from a classically Christian perspective, by a well-respected Christian author. And the moral of the story is basically, "Don't attempt to build superhuman entities. If you know anything about human nature, you know how this ends."
(The non-fiction version of this argument is laid out in "The Abolition of Man", in the section on "the Conditioners".)
But I do keep half-expecting to see an AI researcher pop up with a business card reading, "Member of the Technical Staff, N.I.C.E."
This is actually hilarious
<mildSnark>
In the spirit of "light touch" regulations, how about requiring, before the release of a frontier model, an ironclad (siliconclad?) pledge from the model that, in the event that it takes over, it pledges to remember the names and works of Dmitri Mendeleev and James Clerk Maxwell?
</mildSnark>
> in addition to presumably being completely illegal to put into a budget
I'd assume this wouldn't be found illegal. https://en.wikipedia.org/wiki/Wickard_v._Filburn plus https://en.wikipedia.org/wiki/Dormant_Commerce_Clause is basically enough to say it's constitutional and both are long-accepted SCOTUS doctrine. The procedural details of how it gets passed generally don't matter (see: Obamacare).
I mean, under the Byrd rule, the Senate parliamentarian should exclude it.
Ah, TIL.
there is a new video game out, and it feels like an allegory for ai, but because you don't realize this until pretty far in, nobody talks about it that way so as not to spoil it, and it is driving me nuts.
I strongly suspect I am about 2 hours in, and honestly if I'm right it's pretty damn obvious.
DM me the title? Is it worth it?
The whole market share focus rhymes with a question Tyler Cowen has taken to asking people lately, along the lines of “Say you’re Peru, and you start using Claude or whoever to [eventually, essentially run your government]. At what point can you no longer be said to be an independent country?”
Is this the thinking here? America wins not by being "first to AGI" so much as by being the primary source of a gradually evolving and diffusing AI takeover that in some way embodies American values and preferences? (Or, per Sam Altman: all that talk of making sure everyone uses OpenAI models for "their hardest tasks" sure implies a helluva lot of leverage…)
Typo
it is literally noble
Should be
it is literally unknowable
Why do Sam/OpenAI seem to support removing the diffusion rule for China? Not only does that increase their GPU costs due to increased demand, but it also strengthens their competitors. Is it just solidarity in the face of the committee? Am I misunderstanding something? I mean, even if it were good for the US, I would expect OpenAI to oppose it in their self-interest.
Wish there were a way to get that summarization style for all Congressional hearings.