Today’s post will be a little different. This past week, Sam Altman and others testified at a US Senate hearing on AI competitiveness.
Here’s one summary of how Altman handled it:
Altman repeatedly rejected specific calls for regulation. He said proposals requiring AI developers to vet their systems before rolling them out would be “disastrous” for the industry.
Asked about more limited proposals to have the National Institute of Standards and Technology (NIST) set AI standards, Altman replied, “I don't think we need it. It can be helpful.” Altman later advocated for “sensible regulation that does not slow us down.”
So is the plan then to have AI developers not vet their systems before rolling them out? Is OpenAI not planning to vet their systems before rolling them out? This ‘sensible regulation that does not slow us down’ seems to translate to no regulation at all, as per usual.
And that was indeed the theme of the Senate hearing, both from the Senators and from the witnesses. Congress is also setting up to attempt full preemption of all state and local AI-related laws, without any attempt to replace them, and realistically without any prospect of later passing any laws whatsoever to replace the ones that are barred.
Most of you should probably skip the main section that paraphrases the Senate hearing. I do think it is enlightening, and at times highly amusing, at least to me, but it is long and one must prioritize; I only managed to cut it down by ~80%. You can also pick and choose from the pull quotes at the end.
Advance Summary and Analysis Before the Main Event
I will now give the shortened version of this hearing, gently paraphrased, in which we get bipartisan and corporate voices saying among other things rather alarmingly:
America. Innovation. Race. Beat China. Innovation. America.
But race to what? Market share. Nvidia’s (and AMD’s) market shares matter, maybe also Microsoft’s and OpenAI’s market shares, not who has compute.
‘Exports’ in AI are mainly physical chips, rather than American AI services.
Giving away America’s compute advantage is how America will ‘win the race.’ In particular, if we don’t sell our chips to everyone giving away our advantage in chips, then they might make their own chips, and then we won’t have as big an advantage in chips.
Giving away America’s AI models for free is how America will ‘win the race.’
It would be disastrous to even have standards for AI unless industry sets them.
We need ‘light touch’ federal regulations, code for essentially no rules at all. Our only choices, as Cruz says, are “Do we go down the path that embraces our history of entrepreneurial freedom and technological innovation? Or do we adopt the command and control policies of Europe?” And any attempt to lift any fingers at all, for any reason, would be the path of Europe.
E.g. we need to prevent states from regulating AI at all or we will turn into the EU.
Almost zero mentions of any catastrophic risks, zero of existential risks.
Deep confusions about AI and climate change, with water and existing electrical power use treated as much bigger issues than they are, and about DeepSeek and China in general, with China presented as far more competitive in AI than it is.
Needless to say, I do not agree with most of that.
It is rather grim out there. Almost everyone seems determined to not only go down the Missile Gap road into a pure Race, but also to use that as a reason to dismiss any other considerations out of hand, and indeed not to even acknowledge that there are any other worries out there to dismiss, beyond ‘the effect on jobs.’ This includes both the Senators and also Altman and company.
The discussion was almost entirely whether we should move to lock out all AI regulations and whether we should impose any standards of any kind on AI at all, except narrowly on deepfakes and such. There was no talk about even trying to make the government aware of what was happening. SB 1047 and other attempts at sensible rules were routinely completely mischaracterized.
There was no sign that anyone was treating this as anything other than a (very important) Normal Technology and Mere Tool.
If you think most of Congress has any interest in not dying? Think again.
That could of course rapidly change once more. I expect it to. But not today.
We did also see many instances of sensible statements, such as:
It is vital that we build strong trade alliances and work with our allies.
It is vital that we access the world’s best AI talent, as in high-skilled immigration, although We Are Not Using That Word.
It is vital that our rules about trade not be declared by fiat, and that they be simple and consistent rules we can trust going forward.
It is vital we do permitting reform and enable building more power capacity, with a correct emphasis on getting problems out of the way, not seeking money.
NIST should indeed establish ‘industry standards.’
The witnesses tried to explain the DeepSeek situation accurately, and that America is still ahead.
The most glaring pattern was the final form of Altman’s pivot to jingoism and opposing all meaningful regulation, while acting as if AI poses no major downside risks, not even technological unemployment let alone catastrophic or existential risks or loss of human control over the future.
Peter Wildeford: When Biden was President and Sam Altman came to testify in front of the Senate, the discussion was mainly about how to rein in AI. Altman proposed the creation of a US or global licensing agency for powerful AI systems, and suggested the agency could “take that license away and ensure compliance with safety standards.” He expressed concerns about AI risks, stating, “If this technology goes wrong, it can go quite wrong”.
However, in Altman’s first major congressional hearing in Trump’s second term, the discussion switched from reining in AI to building it out. And Altman switched his earlier position, stating that requiring government approval before a model’s release would be “disastrous.”
What changed?
~
My main guess would be that Altman is just doing his best to represent his business and fit in with the vibes. And the vibes were very different now than two years ago.
Peter then offers a summary of testimony, including how much Cruz is driving the conversation towards ‘lifting any finger anywhere dooms us to become the EU and to lose to China’ style rhetoric.
You also get to very quickly see which Senators are serious about policy and understanding the world, versus those here to create partisan soundbites or push talking points and hear themselves speak, versus those who want to talk about or seek special pork for their state. Which ones are curious and inquisitive versus which ones are hostile and mean. Which ones think they are clever and funny when they aren’t.
It is amazing how consistently and quickly the bad ones show themselves. Every time.
And to be clear, that has very little to do with who is in which party.
My Offer Is Nothing
Indeed, the House Energy and Commerce Committee is explicitly trying for outright preemption, without replacement, slipping it into its budget proposal (edit: I originally thought this was the Senate, not the House):
The Energy and Commerce committee put explicit preemption in their budget proposal (p9) — "Subsection (c) states that no state or political subdivision may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10-year period beginning on the date of the enactment of this Act."
That is, of course, completely insane, in addition to presumably being completely illegal to put into a budget. Presumably the Byrd rule kills it for now.
It would be one thing to pass this supposed ‘light touch’ AI regulation, that presented a new legal regime to handle AI models at the federal level, and do so while preempting state action.
It is quite another to have your offer be nothing. Literally nothing. As in, we are Congress, we cannot pass laws, but we will prevent you from enforcing any laws, or fixing any laws, for ten years.
They did at least include a carveout for laws that actively facilitate AI, including power generation for AI, so that states aren’t prevented from addressing the patchwork of existing laws that might kneecap AI and definitely will prevent adaptation and diffusion in various ways.
Even with that, the mind boggles to think of even the mundane implications. States couldn’t pass AI laws against anything, including CSAM or deepfakes. That’s in addition to the inability of the states to do the kinds of things we need them to do, which this provision is explicitly trying to prevent, such as SB 1047’s requirements that if you want to train a frontier AI model, you have to tell us you are doing that, share your safety and security protocol, and take reasonable care against catastrophic (and existential) risks.
Again, this is with zero sign of any federal rule at all.
Quick Style Guide
In the following, if quote marks are used, they literally said it. If not, it’s a dramatization. I was maximizing truthiness, not aiming for a literal translation; read this as if it’s an extended SNL sketch, written largely for my own use and amusement.
We Now Take You Live to the Senate Hearing
Senator Ted Cruz (R-Texas): AI innovation. No rules. Have to beat China. Europe overregulates. Yay free internet. Biden had bad vibes and was woke. Not selling advanced AI chips to places that would let our competitors have them would cripple American tech companies. Trump. We need sandboxes and deregulation.
Senator Maria Cantwell (D-Washington): Pacific northwest. University of Washington got Chips act money. Microsoft contracted a fusion company in Washington. Expand electricity production. “Export controls are not a trade strategy.” Broad distribution of US-made AI chips. “American open AI systems” must be dominant across the globe.
Senator Cruz: Thank you. Our witnesses are OpenAI CEO Sam Altman, Lisa Su the CEO of AMD who is good because Texas, Michael Intrator the CEO of CoreWeave, and Brad Smith the Vice Chair and President of Microsoft.
Sam Altman (CEO OpenAI): Humility. Honor to be here. Scientists are now 2-3 times more productive and other neat stuff. America. Infrastructure. Texas. American innovation. Internet. America. Not Europe. America is Magic.
Dr. Lisa Su (CEO AMD): Honor. America. We make good chips. They do things. AI is transformative technology. Race. We could lose. Must race faster. Infrastructure. Open ecosystems. Marketplace of ideas. Domestic supply chain. Talent. Public-private partnership. Boo export controls, we need everyone to use our chips because otherwise they won’t use our technology and they might use other technology instead, this is the market we must dominate. AMD’s market. I had a Commodore 64 and an Apple II. They were innovative and American.
Senator Cruz: I also had an Apple II, but sadly I’m a politician now.
Michael Intrator (CEO CoreWeave): I had a VIC-20. Honor. American infrastructure. Look at us grow. Global demand for AI infrastructure. American competitive edge. Productivity. Prosperity. Infrastructure. Need more compute. AI infrastructure will set the global economic agenda and shape human outcomes. China. AI race.
We need: Strategic investment stability. Stable, predictable policy frameworks, secure supply chains, regulatory environments that foster innovation. Energy infrastructure development. Permitting and regulatory reform. Market access. Trade agreements. Calibrated export controls. Public-private partnership. Innovation. Government and industry must work together.
Brad Smith (President of Microsoft): Chart with AI tech stack. All in this together. Infrastructure. Platform. Applications. Microsoft.
We need: Innovation. Infrastructure. Support from universities and government. Basic research. Faster adaptation, diffusion. Productivity. Economic growth. Investing in skilling and education. Right approach to export controls. Trust with the world. We need to not build machines that are better than people, only machines that make people better. Machines that give us jobs, and make our jobs better. We can do that, somehow.
Tech. People. Ambition. Education. Opportunity. The future.
Senator Tim Sheehy (R-MT, who actually said this verbatim): “Thank you for your testimony. Certainly makes me sleep better at night. Worried about Terminator and Skynet coming after us, knowing that you guys are behind the wheel, but in five words or less, start with you Mr. Smith. What are the five words you need to see from our government to make sure we win this AI race?”
Brad Smith: “More electricians. That's two words. Broader AI education.”
Senator Sheehy: “And no using ChatGPT as a friend.”
Michael Intrator: “We need to focus on streamlining the ability to build large things.”
Dr. Lisa Su: “Policies to help us run faster in the innovation race.”
Sam Altman: “Allow supply chain-sensitive policy.”
Senator Sheehy: So we race. America wins races. Government support and staying out of your way are how America wins races. How do we incentivize companies so America wins the race? A non-state actor could win.
Sam Altman (a non-state actor currently winning the race): Stargate. Texas. We need electricity, permitting, supply chain. Investment. Domestic production. Talent recruitment. Legal clarity and clear rules. “Of course there will be guardrails” but please assure us you won’t impose any.
Dr. Lisa Su: Compute. AMD, I mean America, must build compute. Need domestic manufacturing. Need simple export rules.
Senator Sheehy: Are companies weighing doing AI business in America versus China?
Dr. Lisa Su: American tech is the best, but if it’s not available they’ll buy elsewhere.
Senator Sheehy: Infrastructure, electricians, universities, regulatory framework, can do. Innovation. Talent. Run faster. Those harder. Can’t manufacture talent, can’t make you run faster. Can only give you tools.
Senator Cantwell: Do we need NIST to set standards?
Sam Altman: “I don’t think we need it. It can be helpful.”
Michael Intrator: “Yes, yes.”
Senator Cantwell: Do we want NIST standards that let us move faster?
Brad Smith: “What I would say is this, first of all, NIST is where standards go to be adopted, but it's not necessarily where they first go to be created.” “We will need industry standards, we will need American adoption of standards, and you are right. We will need US efforts to really ensure that the world buys into these standards.”
Michael Intrator: Standards need to be standardized.
Senator Cantwell: Standards let you move fast. Like HTTP or HTML. So, on exports, Malaysia. If we sell them chips can we ensure they don’t sell those chips to China?
Michael Intrator (‘well no, but…’): If we don’t sell them chips, someone else will.
Senator Cantwell: We wouldn’t want Huawei to get a better chip than us and put in a backdoor. Again. Don’t buy chips with backdoors. Also, did you notice this new tariff policy that also targets our allies is completely insane? We could use some allies.
Michael Intrator: Yes. Everyone demands AI.
Dr. Lisa Su: We need an export strategy. Our allies need access. Broad AI ecosystem.
Senator Bernie Moreno (R-Ohio): You all need moar power. TSMC making chips in America good. Will those semiconductor fabs use a lot of energy?
Dr. Lisa Su: Yes.
Senator Moreno: We need to make the highest-performing chips in America, right?
Dr. Lisa Su: Right.
Senator Moreno: Excuse me while I rant that 90% of new power generation lately has been wind and solar when we could use natural gas. And That’s Terrible.
Brad Smith: Broad based energy solutions.
Senator Moreno: Hey I’m ranting here! Renewables suck. But anyway… “Mr. Altman, thank you for first of all, creating your platform and an open basis and agreeing to stick to the principles of nonprofit status. I think that's very important.” So, Altman, how do we protect our children from AI? From having their friends be AI bots?
Sam Altman: Iterative deployment. Learn from mistakes. Treat adults like adults. Restrict child access. Happy to work with you on that. We must beware AI and social relationships.
Senator Moreno: Thanks. Mr. Intrator, talk about stablecoins?
Michael Intrator: Um, okay. Potential. Synergy.
Senator Klobuchar (D-Minnesota): AI is exciting. Minnesota. Renewables good, actually. “I think David Brooks put it the best when he said, I found it incredibly hard to write about AI because it is literally noble whether this technology is leading us to heaven or hell, we wanted to lead us to heaven. And I think we do that by making sure we have some rules of the road in place so it doesn't get stymied or set backwards because of scams or because of use by people who want to do us harm.” Mr. Altman, do you agree that a risk-based approach to regulation is the best way to place necessary guardrails for AI without stifling innovation?
Sam Altman: “I do. That makes a lot of sense to me.”
Senator Klobuchar: “Okay, thanks. And did you figure that out in your attic?”
Sam Altman: “No, that was a more recent discovery.”
Senator Klobuchar: Do you agree that consumers need to be more educated?
Brad Smith: Yes.
Senator Klobuchar: Pivot. Altman, what evals do you use for hallucinations?
Sam Altman: Hallucinations are getting much better. Users are smart, they can handle it.
Senator Klobuchar: Uh huh. Pivot. What about my bill about sexploitation and deepfakes? Can we build models that can detect deepfakes?
Brad Smith: Working on it.
Senator Klobuchar: Pivot. What about compensating content creators and journalists?
Brad Smith: Rural newspapers good. Mumble, collective negotiations, collaborative action, Congress, courts. Balance. They can get paid but we want data access.
Senator Cruz: I am very intelligent. Who’s winning, America or China, how close is it and how can we win?
Sam Altman: American models are best, but not by a huge amount of time. America. Innovation. Entrepreneurship. America. “We just need to keep doing the things that have worked for so long and not make a silly mistake.”
Dr. Lisa Su: I only care about chips. America is ahead in chips, but even without the best chips you can get a lot done. They’re catching up. Spirit of innovation. Innovate.
Michael Intrator: On physical infrastructure it’s not going so great. Need power and speed.
Brad Smith: America in the lead but it is close. What matters is market share and adaptation. For America, that is. We need to win trust of other countries and win the world’s markets first.
Ted Cruz: Wouldn’t it be awful if we did something like SB 1047? That’s just like something the EU would do, it’s exactly the same thing. Totally awful, am I right?
Sam Altman: Totally, sure, pivot. We need algorithms and data and compute and the best products. Can’t stop, won’t stop. Need infrastructure, need to build chips in this country. It would be terrible if the government tried to set standards. Let us set our own standards.
Lisa Su: What he said.
Brad Smith (presumably referring to when SB 1047 said you would have had to tell us the model exists and is being trained and what your safety plan was): Yep, what he said. Especially important is no pre-approval requirements.
Michael Intrator: A patchwork of regulatory overlays would cause friction.
Senator Brian Schatz (D-Hawaii): You do know no one is proposing these EU-style laws, right? And the alternative proposal seems to be nothing? Does nothing work for you?
Sam Altman: “No, I think some policy is good. I think it is easy for it to go too far and as I've learned more about how the world works, I'm more afraid that it could go too far and have really bad consequences. But people want use products that are generally safe. When you get on an airplane, you kind of don't think about doing the safety testing yourself. You're like, well, maybe this is a bad time to use the airplane example, but you kind of want to just trust that you can get on it.”
Senator Brian Schatz: Great example. We need to know what we’re racing for. American values. That said, should AI content be labeled as AI?
Brad Smith: Yes, working on it.
Senator Brian Schatz: “Data really is intellectual property. It is human innovation, human creativity.” You need to pay for it. Isn’t the tension that you want to pay as little as possible?
Brad Smith: No? And maybe we shouldn’t have to?
Senator Brian Schatz: How can an AI agent deliver services and reduce pain points while interacting with government?
Sam Altman: The AI on your phone does the thing for you and answers your questions.
Brad Smith: That means no standing in line at the DMV. Abu Dhabi does this already.
Senator Ted Budd (R-North Carolina): Race. China. Energy. Permitting problems. They are command and control so they’re good at energy. I’m working on permit by rule. What are everyone’s experiences contracting power?
Michael Intrator: Yes, power. Need power to win race. Working on it. Regulation problems.
Brad Smith: We build a lot of power. We do a lot of permitting. Federal wetlands permit is our biggest issue, that takes 18-24 months, state and local is usually 6-9.
Senator Ted Budd: I’m worried people might build on Chinese open models like DeepSeek and the CCP might promote them. How important is American leadership in open and closed models? How can we help?
Sam Altman: “I think it’s quite important in both.” You can help with energy and infrastructure.
Senator Andy Kim (D-New Jersey): What is this race? You said it’s about adaptation?
Brad Smith: It’s about limiting chip export controls to tier 2 countries.
Senator Kim: Altman, is that the right framing of the race?
Sam Altman: It’s about the whole stack. We want them to use US chips and also ChatGPT.
Senator Kim (asking a good question): Does building on our chips mean they’ll use our products and applications?
Sam Altman: Marginally. Ideally they’ll use our entire stack.
Senator Kim: How’re YOU doin? On tools and applications.
Sam Altman: Really well. We’re #1, not close.
Senator Kim: How are we doing on talent?
Dr. Lisa Su: We have the smartest engineers and a great talent base but we need more. We need international students, high skilled immigration.
Senator Eric Schmitt (R-Missouri): St. Louis, Missouri. What can we learn from Europe’s overregulation and failures?
Sam Altman: We’d love to invest in St. Louis. Our EU releases take longer, that’s not good.
Senator Schmitt: How does vertical AI stack integration work? I’ve heard China is 2-6 months behind on LLMs. Does our chip edge give us an advantage here?
Sam Altman: People everywhere will make great models and chips. What’s important is to get users relying on us for their hardest daily tasks. But also chips, algorithm, infrastructure, data. Compound effects.
Senator Schmitt: EU censorship is bad. NIST mentioned misinformation, oh no. How do we not do all that here?
Michael Intrator: It makes Europe uninvestable but it’s not our focus area.
Sam Altman: Putting people in jail for speech is bad and un-American. Freedom.
Senator Hickenlooper (D-Colorado): How does Microsoft evaluate Copilot’s accuracy and performance? What are the independent reviews?
Brad Smith: Don’t look at me, those are OpenAI models, that’s their job, then we have a joint DSB, a deployment safety board. We evaluate using tools and ensure it passes tests.
Senator Hickenlooper: “Good. I like that.” Altman, have you considered using independent standard and safety evaluations?
Sam Altman: We do that.
Senator Hickenlooper: Chips act. What is the next frontier in Chip technology in terms of energy efficiency? How can we work together to improve direct-to-chip cooling for high-performance computing?
Lisa Su: Innovation. Chips act. AI is accelerating chip improvements.
Senator John Curtis (R-Utah): Look at me, I had a TRS-80 made by Radio Shack with upgraded memory. So what makes a state, say Utah, attractive to Stargate?
Sam Altman: Power cooling, fast permitting, electricians, construction workers, a state that will partner to work quickly, you in?
Senator Curtis: I’d like to be, but for energy how do we protect rate payers?
Sam Altman: More power. If you permit it, they will build.
Brad Smith: Microsoft helps build capacity too.
Senator Curtis: Yay small business. Can ChatGPT help small business?
Sam Altman: Can it! It can run your small business, write your ads, review your legal docs, answer customer emails, you name it.
Senator Duckworth (D-Illinois): Lab-private partnerships. Illinois. National lab investments. Trump and Musk are cutting research. That’s bad. DOGE bad. Innovation. Don’t cut innovation. Help me out here with the national labs.
Sam Altman: We partner with national labs, we even shared model weights with them. We f***ing love science. AI is great for science, we’ll do ten years of science in a year.
Brad Smith: All hail national labs. We work with them too. Don’t take them for granted.
Dr. Lisa Su: Amen. We also support public-private partnerships with national labs.
Michael Intrator: Science is key to AI.
Senator Duckworth: Anyone want to come to a lab in Illinois? Everyone? Cool.
Senator Cruz: Race. Vital race. Beyond jobs and economic growth. National security. Economic security. Need partners and allies. Race is about market share of our AI models and solutions in other countries. American values. Not CCP values. Digital trade rules. So my question: If we don’t adopt standards via NIST or otherwise, won’t others pick standards without us?
Brad Smith: Yes, sir. Europe won privacy law. Need to Do Something. Lightweight. Can’t go too soon. Need a good model standard. Must harmonize.
Senator Lisa Blunt Rochester (D-Delaware): Future of work is fear. But never mind that, tell me about your decision to transition to a PBC and attempt to have the PBC govern your nonprofit.
Sam Altman: Oceania has always been at war with Eastasia, we’ve talked to lawyers and regulators about the best way to do that and we’re excited to move forward.
Senator Rochester: I have a bill, Promoting Resilient Supply Chains. Dr. Su, what specific policies would help you overcome supply chain issues?
Dr. Lisa Su: Semiconductors are critical to The Race. We need to think end-to-end.
Senator Rochester: Mr. Smith, how do you see interdependence of the AI stack sections creating vulnerabilities or opportunities in the AI supply chain?
Brad Smith: More opportunities than vulnerabilities, it’s great to work together.
Senator Moran (R-Kansas): Altman, given Congress is bad at its job and can’t pass laws, how can consumers control their own data?
Altman: You will happily give us all your data so we can create custom responses for you, mwahahaha! But yeah, super important, we’ll keep it safe, pinky swear.
Senator Moran: Thanks. I hear AI matters for cyberattacks, how can Congress spend money on this?
Brad Smith: AI is both offense and defense and faster than any human. Ukraine. “We have recognize it’s ultimately the people who defend not just countries, but companies and governments” so we need to automate that, stat. America. China. You should fund government agencies, especially NSA.
Senator Moran: Kansas. Rural internet connectivity issues. On-device or low bandwidth AI?
Altman: No problem, most of the work is in the cloud anyway. But also shibboleth, rural connectivity is important.
Senator Ben Ray Lujan (D-New Mexico): Thanks to Altman and Smith for your involvement with NIST AISI, and Su and Altman for partnerships with national labs. Explain the lab thing again?
Altman: We give them our models. o3 is good at helping scientists. Game changer.
Dr. Lisa Su: What he said.
Senator Lujan: What investments in those labs are crucial to you?
Altman: Just please don’t standardize me, bro. Absolutely no standards until we choose them for ourselves first.
Dr. Lisa Su: Blue sky research. That’s your comparative advantage.
Senator Lujan: My bipartisan bill is the Test AI Act, to build state capacity for tests and evals. Seems important. Trump is killing basic research, and that’s bad, we need to fix that. America. NSF, NIH, DOE, OSTP. But my question for you is about how many engineers are working on optimizations for reduced energy use, and what is your plan to reduce water use by data centers?
Brad Smith: Okay, sure, I’ll have someone track that number down. The data centers don’t actually use much water, that’s misinformation, but we also have more than 90 water replenishment projects including in New Mexico.
Michael Intrator: Yeah I also have no idea how many engineers are working on those optimizations, but I assure you we’re working on it, it turns out compute efficiency is kind of a big deal these days.
Senator Lujan: Wonderful. Okay, a good use of time would be to ask “yes or no: Is it important to ensure that in order for AI to reach its full prominence that people across the country should be able to connect to fast affordable internet?”
Dr. Su: Yes.
Senator Lujan: My work is done here.
Senator Cynthia Lummis (R-Wyoming): AI is escalating quickly. America. Europe overregulates. GDPR limits AI. But “China appears to be fast-tracking AI development.” Energy. Outcompete America. Must win. State frameworks burdensome. Only 6 months ahead of China. Give your anti-state-regulation speech?
Sam Altman: Happy to. Hard to comply. Slow us down. Need Federal only. Light touch.
Michael Intrator: Preach. Infrastructure could be trapped with bad regulations.
Senator Lummis: Permitting process. Talk about how it slows you down?
Michael Intrator: It’s excruciating. Oh, the details I’m happy to give you.
Senator Lummis: Wyoming. Natural gas. Biden doesn’t like it. Trump loves it. How terrible would it be if another president didn’t love natural gas like Trump?
Brad Smith: Terrible. Bipartisan natural gas love. Wyoming. Natural gas.
Senator Lummis (here let me Google that for you): Are you exploring small modular nuclear? In Wyoming?
Brad Smith: Yes. Wyoming.
Senator Lummis: Altman, it’s great you’re releasing an open… whoops time is up.
Senator Jacky Rosen (D-Nevada): AI is exciting. We must promote its growth. But it could also be bad. I’ll start with DeepSeek. I want to ban it on government devices and for contractors. How should we deal with PRC-developed models? Could they co-opt AI to promote an ideology? Collect sensitive data? What are YOU doing to combat this threat?
Brad Smith: DeepSeek is both a model and an app. We don’t let employees use the app, we didn’t even put it in our app store, for data reasons. But the model is open, we can analyze and change it. Security first.
Senator Rosen: What are you doing about AI and antisemitism? I heard AI is perpetuating stereotypes. Will you collaborate with civil society on a benchmark?
Sam Altman (probably wondering if Rosen knows that Altman is Jewish): We do collaborate with civil society. We’re not here to be antisemitic.
Senator Rosen (except less coherently than this, somehow): AI using Water. Energy. We’re all concerned. Also data center security. I passed a bill on that, yay me. Got new chip news? Faster and cooler is better. How can we make data more secure? Talk about interoperability.
Dr. Lisa Su (wisely): I think for all our sakes I’ll just say all of that is important.
Senator Dan Sullivan (R-Alaska): Agree with Cruz, national economic and national security. Race. China. Everyone agree? Good. Huge issue. Very important. Are we in the lead?
Sam Altman: We are leading and will keep leading. America. Right thing to do. But we need your help.
Senator Sullivan: Our help, you say. How can we help?
Sam Altman: Infrastructure. Supply chain. Everything in America. Infrastructure. Supply chain. Stargate. Full immunity from copyright for model training. “Reasonable fair like touch regulatory framework.” Ability to deploy quickly. Let us import talent.
Senator Sullivan (who I assume has not seen the energy production charts): Great. “One of our comparative advantages over China in my view has to be energy.” Alaska. Build your data centers here. It is a land. A cold land. With water. And gas.
Sam Altman: “That’s very compelling.”
Senator Sullivan: Alaska is colder than Texas. It has gas. I’m frustrated you can’t see that Alaska is cold. And has gas. And yet Americans invest in China. AI, quantum. Benchmark Capital invested $75 million in Chinese AI. How dare they.
Brad Smith: China can build power plants better than we can. Our advantage is the world’s best people plus venture capital. Need to keep bringing best people here. America.
Senator Sullivan: “American venture capital funds Chinese AI. Is that in our national interest?”
Brad Smith: Good question, good that you’re focusing on that but stop focusing on that, just let us do high-skilled immigration.
Senator Markey (D-Massachusetts): Environmental impact of AI. AI weather forecasting. Climate change. Electricity. Water. Backup diesel generators can cause respiratory and cardiovascular issues and cancer. Need more info. Do you agree more research is needed, like in the AIEIA bill?
Brad Smith: “Yes. One study was just completed last December.” Go ahead and convene your stakeholders and measure things.
Sam Altman: Yes, the Federal government should study and measure that. Use AI.
Senator Markey: AI could cure cancer, but it could also cause climate change. Equally true. Trump wants to destroy incentives for wind, solar, battery. “That’s something you have to weigh in on. Make sure he does not do that.” Now, AI’s impact on disadvantaged communities. Can algorithms be biased and cause discrimination?
Brad Smith: Yes. We test to avoid that outcome.
Senator Markey (clip presumably available now on TikTok): Altman doesn’t want privacy regulations. But AI can cause harm. To marginalized communities. Bias. Discrimination. Mortgage discrimination. Bias in hiring people with disabilities. Not giving women scholarships. Real harms. Happening now. So I wrote the AI Civil Rights Act to ensure you all eliminate bias and discrimination, and we hold you accountable. Black. Brown. LGBTQ. Virtual world needs same protections as real world. No question.
Senator Gary Peters (D-Michigan): America. Must be world leader in AI. Workforce. Need talent here. Various laws for AI scholarships, service, training. University of Michigan, providing AI tools to students. “Mr. Altman, when we met last year in my office and had a great, you said that upwards of 70% of jobs could be limited by AI.” Prepare for social disruption. Everyone must benefit. How can your industry mitigate job loss and social disruption?
Sam Altman: AI will come at you fast. But otherwise it’s like other tech, jobs change, we can handle change. Give people tools early. That’s why we do iterative development. We can make change faster by making it as fast as possible. But don’t worry, it’s just faster change, it’s just a transition.
Senator Peters: We need AI to enhance work, not displace work. Like last 100 years.
Sam Altman (why yes, I do mean the effect on jobs, senator): We can’t imagine the jobs on the other side of this. Look how much programming has changed.
Senator Peters: Open tech is good. Open hardware. What are the benefits of open standards and system interoperability at the hardware level? What are the supply chain implications?
Dr. Lisa Su: Big advantages. Open is the best solution. Also good for security.
Senator Peters: You’re open, Nvidia is closed. Why does that make you better?
Dr. Lisa Su: Anyone can innovate so we don’t have to. We can put that together.
Senator John Fetterman (D-Pennsylvania, I hope he’s okay): Yay energy. National security. Fossil. Nuclear. Important. Concern about Pennsylvania rate payers. Prices might rise 20%. Concerning. Plan to reopen Three Mile Island. But I had to grab my hamster and evacuate in 1979. I’m pro nuclear. Microsoft. But rate payers.
Brad Smith: Critical point. We invest to bring on as much power as we use, so it doesn’t raise prices. We’ll pay for grid upgrades. We create construction jobs.
Senator Fetterman: I’m a real senator which means I get to meet Altman, squee. But what about the singularity? Address that?
Sam Altman: Nice hoodies. I am excited by the rate of progress, also cautious. We don’t understand where this, the biggest technological revolution ever, is going to go. I am curious and interested, yet things will change. Humans adapt, things become normal. We’ll do extraordinary things with these tools. But they’ll do things we can’t wrap our heads around. And do recursive self-improvement. Some call that singularity, some call it takeoff. New era in humanity. Exciting that we get to live through that and make it a wonderful thing. But we’ve got to approach it with humility and some caution, always twirling twirling towards freedom.
Senator Klobuchar (oh no not again): There’s a new pope. I work really hard on a bill. YouTube, RIAA, SAG, MPAA all support it. 75k songs with unauthorized deepfakes. Minnesota has a Grammy-nominated artist. Real concern. What about unauthorized use of people’s image? How do we protect them? If you don’t violate people’s rights someone else will.
Brad Smith: Genuflection. Concern. Deepfakes are bad. AI can identify fakes. We apply voluntary guardrails.
Senator Klobuchar: And will you both read my bill? I worked so hard.
Brad Smith: Absolutely.
Sam Altman: Happy to. Deepfakes. Big issue. Can’t stop them. If you allow open models you can’t stop them from doing it. Guardrails. Have people on lookout.
Senator Klobuchar: “It's coming, but there's got to be some ways to protect people. We should do everything privacy. And you've got to have some way to either enforce it, damages, whatever, there's just not going to be any consequences.”
Sam Altman: Absolutely. Bad actors don’t always follow laws.
Senator Cruz: Sorry about tweeting out that AI picture of Senator Fetterman as the Pope of Greenland.
Senator Klobuchar: Whoa, parody is allowed.
Senator Cruz: Yeah, I got him good. Anyway, Altman, what’s the most surprising use of ChatGPT you’ve seen?
Sam Altman: Me figuring out how to take care of my newborn baby.
Senator Cruz: My teenage daughter sent me this long detailed emotional text, and it turns out ChatGPT wrote it.
Sam Altman (who lacks that option): “I have complicated feelings about that.”
Senator Cruz: “Well use the app and then tell me what your thoughts. Okay, Google just revealed that their search traffic on Safari declined for the first time ever. They didn't send me a Christmas card. Will chat GPT replace Google as the primary search engine? And if so when?”
Sam Altman: Probably not, Google’s good, Gemini will replace it instead.
Senator Cruz: How big a deal was DeepSeek? A major seismic shocking development? Not that big a deal? Somewhere in between? What’s coming next?
Sam Altman: “Not a huge deal.” They made a good open-source model and they made a highly downloaded consumer app. Other companies are going to put out good models. If it was going to beat ChatGPT that would be bad, but it’s not.
Dr. Lisa Su: Somewhere in between. Different ways of doing things. Innovation. America. We’re #1 in models. Being open was impactful. But America #1.
Michael Intrator (saying the EMH is false and it was only news because of lack of situational awareness): It raised the specter of China’s AI capability. People became aware. Financial market implications. “China is not theoretically in the race for AI dominance, but actually is very much a formidable competitor.” Starting gun for broader population and consciousness that we have to work. America.
Brad Smith: Somewhere in between. Wasn’t shocking. We knew. All their 200 employees are four or fewer years out of college.
Ted Cruz: I’m glad the AI diffusion rule was rescinded. Bad rule. Too complex. Unfair to our trading partners. “That doesn't necessarily mean there should be no restrictions and there are a variety of views on what the rules should be concerning AI diffusion.” Nvidia wants no rules. What should be the rule?
Sam Altman: I’m also glad that was rescinded. Need some constraints. We need to win diffusion not stop diffusion. America. Best data centers in America. Other data centers elsewhere. Need them to use ChatGPT not DeepSeek. Want them using US chips and US data center technology and Microsoft. Model creation needs to happen here.
Dr. Lisa Su: Happy they were rescinded. Need some restrictions. National security. Simplify. Need widespread adoption of our tech and ecosystem. Simple rules, protect our tech also give out our tech. Devil’s in details. Balance. Broader hat. Stability.
Michael Intrator: Copy all that. National security. Work with regulators. Rule didn’t let us participate enough.
Brad Smith: Eliminate tier 2 restrictions to ensure confidence and access to American tech, even most advanced GPUs. Trusted providers only. Security standards. Protection against diversion or certain use cases. Keep China out, guard against catastrophic misuse like CBRN risks.
Ted Cruz: Would you support a 10-year ban on state AI laws, if we call it a ‘learning period’ and we can revisit after the singularity?
Sam Altman: Don’t meaningfully regulate us, and call it whatever you need to.
Dr. Lisa Su: “Aligned federal approach with really thoughtful regulation would be very, very much appreciated.”
Michael Intrator: Agreed.
Brad Smith: Agreed.
Ted Cruz: We’re done, thanks everyone, and live from New York it’s Saturday night.
Some Highlighted Pull Quotes
In particular, these literal quotes are worth remembering or referencing:
Ted Cruz: China aims to lead the world in AI by 2030. In this race, the United States is facing a fork in the road. Do we go down the path that embraces our history of entrepreneurial freedom and technological innovation? Or do we adopt the command and control policies of Europe?
Brad Smith: What are we at this table trying to do? What do these two letters AI really mean to them? Are we who are working in this industry trying to build machines that are better than people or are we trying to build machines that will help people become better?
Emphatically it is and needs to be the latter.
Are we trying to build machines that will outperform people in all the jobs that they do today or are we trying to build machines that will help people pursue better jobs and even more interesting careers in the future?
Indisputably, it needs to be the second, not the first.
Senator Tim Sheehy (R-MT, quoted above): Thank you for your testimony. Certainly makes me sleep better at night. Worried about Terminator and Skynet coming after us, knowing that you guys are behind the wheel.
Sam Altman: We need to make sure that companies like OpenAI and others have legal clarity on how we're going to operate. Of course there will be rules. Of course there need to be some guardrails. This is a very impactful technology, but we need to be able to be competitive globally. We need to be able to train, we need to be able to understand how we're going to offer services and sort of where the rules of the road are going to be.
Sam Altman: “I don’t think we need [NIST standards]. It can be helpful.”
Brad Smith: “We will need industry standards, we will need American adoption of standards, and you are right. We will need US efforts to really ensure that the world buys into these standards.”
Senator Amy Klobuchar: I think David Brooks put it the best when he said, I found it incredibly hard to write about AI because it is literally noble whether this technology is leading us to heaven or hell, we wanted to lead us to heaven. And I think we do that by making sure we have some rules of the road in place so it doesn't get stymied or set backwards because of scams or because of use by people who want to do us harm.
Sam Altman: “No, I think some policy is good. I think it is easy for it to go too far and as I've learned more about how the world works, I'm more afraid that it could go too far and have really bad consequences. But people want use products that are generally safe. When you get on an airplane, you kind of don't think about doing the safety testing yourself. You're like, well, maybe this is a bad time to use the airplane example, but you kind of want to just trust that you can get on it.”
Sam Altman: “I think there's a lot of things that can increase US leadership, but we touched on this earlier. I think it's so important. There will be great ships made around the world. There will be great models trained around the world if the United States companies can win on products and the sort of all of the positive feedback loops that come from how you can improve this. Once real users are using your products in their daily lives for their hardest tasks, that is something special that is not so easy to catch up with just by doing good chips and good models. So making sure that the US can win at the product level here, obviously I'm talking my book a little bit, but I really do believe it is quite important and that's in addition to all of the chips algorithm, the infrastructure, algorithms and data. I think this is a new area where the US is really winning and has a very strong compound in effect.”
Sam Altman: “So we never ... Thank, you for the question, Senator, and the chance to explain this. It's a complicated thing that I think has gotten misrepresented. So this is a wonderful forum to talk about it. We never planned to have the nonprofit convert into anything. The nonprofit was always going to be the nonprofit. And we also planned for A PBC from the very beginning. There were a bunch of other considerations about is it the PBC board that would control the nonprofit somehow or how our capital structure was going to work? That there was a lot of speculation on most of it inaccurate in the press. But our plan has always been to have a robust nonprofit. We hope our nonprofit will be one of the best, maybe someday the best resourced nonprofit in the world and A PVC with the same mission that would make it possible for us to raise the capital needed to deliver these tools and services at the quality level and availability level that people want to use them at. But still stick to our mission, which we've been proud over the last almost decade, our progress towards. So we had a lot of productive conversations with a lot of stakeholders and a lot of lawyers and a lot of regulators about the best way to do this. It took longer than we thought it was going to. I would've guessed that we would've been talking about this last year. But now we have a proposal that people seem pretty excited about and we're trying to now advance.”
Senator Gary Peters: “Mr. Altman, when we met last year in my office and had a great, you said that upwards of 70% of jobs could be limited by AI.”
Sam Altman: “The thing that I think is different this time than previous technological revolutions is the potential speed. Technological revolutions have impacted jobs and the economy for a long time. Some jobs go away, some new jobs get created. Many jobs just get more efficient and people are able to do more and earn more money and create more. And that's great. Over some period of time, society can adapt to a huge amount of job change. And you can look at the last couple of centuries and see how much that's happened. I don't know, I don't think anyone knows exactly how fast this is going to go, but it feels like it could be pretty fast.
The most important thing or one of the most important things I think we can do is to put tools in the hands of people early. We have a principle that we call iterative deployment. We want people to be getting used to this technology as it's developed. We've been doing this now for almost five years, since our first product launch as society and this technology co-evolve, putting great capable tools in the hands of a lot of people and letting them figure out the new things that they're going to do and create for each other and come up with and provide sort of value back to the world on top of this new building block we have and the sort of scaffolding of society that is I think the best thing we can do as open AI and as our industry to help smooth this transition.”
“And I don't think we can imagine the jobs on the other side of this, but even if you look today at what's happening with programming, which I'll pick because it's sort of my background and near and dear to my heart. What it means to be a programmer and an effective programmer in May of 2025 is very different than what it meant last time I was here in May of 2023. These tools have really changed what a programmer is capable of the amount of code and software that the world is going to get. And it's not like people don't hire software engineers anymore. They work in a different way and they're way more.”
This exchange was pretty wild, the one time someone asked about this, and Altman dodges it:
John Fetterman: “Thank you. Well now one of the perks of being a senator that for me anyway, I get an opportunity to meet people that have much more impressive kinds of jobs or careers that I've led. And now Mr. Altman, now I'm going to count this as a highlight recently. I know the work that you've done and you're really one of the people that are moving AI and now it's an opportunity. I was excited to meet you and people ask me, it's like if you're going to talk about AI and now I get to ask you, I mean like the literal the expert, some people are worried about AI or whatever, and I'm like, what about the singularity? So the people like that, if you would address that please.”
Sam Altman: “Thank you, Senator for the kind words and for normalizing hoodies and more spaces. I love to see that. I am incredibly excited about the rate of progress, but I also am cautious and I would say, I dunno, I feel small next to it or something. I think this is beyond something that we all fully yet understand where it's going to go. This is I believe, among the biggest, maybe it was trying to be the biggest technological revolutions humanity will have ever produced. And I feel privileged to be here. I feel curious and interested in what's going to happen, but I do think things are going to change quite substantially. I think humans have a wonderful ability to adapt and things that seem amazing will become the new normal very quickly. We'll figure how to use these tools to just do things we could never do before and I think it will be quite extraordinary. But these are going to be tools that are capable of things that we can't quite wrap our heads around. And some people call that as these tools start helping us to create next future iterations. Some people call that singularity, some people call that the takeoff, whatever it is. It feels like a sort of new era of human history, and I think it's tremendously exciting that we get to live through that and we can make it a wonderful thing, but we've got to approach it with humility and some caution.”
Sam Altman on DeepSeek and how big a deal it was:
Sam Altman: “Not a huge deal. There are two things about DeepSeek. One is that they made a good open-source model and the other is that they made a consumer app that for the first time briefly surpassed ChatGPT as the most downloaded AI tool, maybe the most downloaded app. Overall, there are going to be a lot of good open source models and clearly there are incredibly talented people working at Deepsea doing great research, so I'd expect more great models to come. Hopefully. Also us and some of our colleagues will put out great models too on the consumer app. I think if the DeepSeek consumer app looked like it was going to beat ChatGPT and our American colleague’s apps, is the default AI systems that people use, that would be bad. But that does not currently look to us like what's happening.”
Here’s an exchange that happened:
Sen. Ted Cruz (R-TX): So I will tell you a story that I've told you before, but my teenage daughter several months ago sent me this long detailed text and it was emotional and it was really well-written, and I actually commented, I'm like, wow, this is really well-written. She said, oh, I use ChatGPT to write it. Like, wait, you're texting your dad and you, it is something about the new generation that it is so seamlessly integrated into life that she's sending an email, she's doing whatever, and she doesn't even hesitate. Think about going to chat GPT to capture her thoughts.
Sam Altman: I have complicated feelings about that.
Sen. Ted Cruz (R-TX): Well use the app and then tell me what your thoughts. Okay, Google just revealed that their search traffic on Safari declined for the first time ever. They didn't send me a Christmas card. Will chat GPT replace Google as the primary search engine? And if so when?
Sam Altman: Probably not. I mean, I think some use cases that people use search engines for today are definitely better done on a service like ChatGPT, but Google is like a ferocious competitor. They have a very strong AI team, a lot of infrastructure, a very well-protected business, and they're making great progress putting AI into their search.
Where Does This Leave Us?
For the time being, we are not hoping that Congress will help us not die, or that it will help the economy and society deal with the transformations that are coming, or even that it will help with the mess that is existing law.
We are instead hoping the Congress will not actively make things worse.
Congress has fully abdicated its responsibility to consider the downside risks of AI, including catastrophic and existential risks. It is fully immersed in jingoism and various false premises, and under the sway of certain corporations. It is attempting to actively prevent states from being able to do literally anything on AI, even to fix existing non-AI laws that have unintended implications no one wants.
We have no real expectation that Congress will be able to step up and pass the laws it is preventing the states from passing. In many cases, it can’t, because fixes are needed for existing state laws. At most, we can expect Congress to manage things like laws against deepfakes, but that doesn’t address the central issues.
On the sacrificial altars of ‘competitiveness’ and ‘innovation’ and ‘market share’ they are going to attempt to sacrifice even the export controls that keep the most important technology, models and compute out of Chinese hands, accomplishing the opposite. They are vastly underinvesting in state capacity and in various forms of model safety and security, even for straight commercial and mundane utility purposes, out of spite and a ‘if you touch anything you kill innovation’ madness.
Meanwhile, they seem little interested in addressing that the main actions of the Federal government in 2025 have been to accomplish the opposite of these goals. Where we need talent, we drive it away. Where we need trade and allies, we alienate our allies and restrict trade.
There is talk of permitting reform and helping with energy. That’s great, as far as it goes. It would at least be one good thing. But I don’t see any substantive action.
They’re not only not coming to save us. They’re determined to get in the way.
That doesn’t mean give up. It especially doesn’t mean give up on the fight over the new diffusion rules, where things are very much up in the air, and these Senators, if they are good actors, have clearly been snookered.
These people can be reached and convinced. It is remarkably easy to influence them. Even under their paradigm of America, Innovation, Race, we are making severe mistakes, and it would go a long way to at least correct those. We are far from the Pareto frontier here, even ignoring the fact that we are all probably going to die from AI if we don’t do something about that.
Later, events will overtake this consensus, the same way the vibes have previously shifted, at a rate of at least once a year. We need to be ready for that, and also to stop them from doing something crazy when the next crisis happens and the public or others demand action.
Pope Leo XIV has said he chose his papal name largely out of concern with AI. He keeps mentioning it as do Vatican officials, and it's only been a few days. It's clearly going to be a focus.
An exhortation or even encyclical about AI from the Vatican could be a wildcard as far as shifting public discourse in the next year. Such a document would likely focus on labor and environmental issues, as well as warning believers against idolatry.
However, the most recent Vatican document "Antiqua et nova" does say that existential risk is real, but says "At the same time, while the theoretical risks of AI deserve attention, the more immediate and pressing concern lies in how individuals with malicious intentions might misuse this technology."
This is so low dignity.