15 Comments

"The risk is that this report greatly expands and other requirements are added over time, in a way that do introduce an undue burden, and that this also applies to a wider range of systems, and also that none of that was actually necessary for safety. That is a possible world. I don’t expect it, but it makes sense to keep an eye."

Assuming the "death by a thousand cuts" scenario is plausible, isn't the best time for pushback _right now_, rather than in a few years once the requirements have been ratcheted up? Yes, today's requirements amount to sending out a few reports, but the optimal strategy for fighting this is to ignore the "it's just reports" framing and blow it up into a huge political fight.

This is a good parallel to the fight against extinction-level AGI: safety people aren't exactly convinced by the idea that we shouldn't push back against AGI development today just because AI capabilities are not that significant yet. Instead, safety people want to start the fight _now_ and not wait for those capabilities to actually arrive.


Come on, Zvi. This is the exact story of every regulation: “The risk is that this report greatly expands and other requirements are added over time, in a way that does introduce an undue burden, and that this also applies to a wider range of systems, and also that none of that was actually necessary for safety. That is a possible world. I don’t expect it, but it makes sense to keep an eye.”

You greatly underestimate this possibility. It’s a guarantee.


Are there any signs that the EO was written, even in part, by an LLM? Because that would be pretty funny.


Thanks for your two posts. FYI, in Varun's take they mention "10^26 floating-point operations (or 0.1 exaflops)", which is a wrong conversion, off by a factor of a billion: the exa prefix means 1e18. It should instead use one of the new SI prefixes, specifically 0.1 ronnaflops (or 100 yottaflops). The mistake is repeated a few lines below.
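For anyone who wants to check the arithmetic, here is a minimal sketch (variable names are mine; the prefix values are the standard SI ones, with ronna adopted in 2022):

```python
# Standard SI prefix values (ronna was added by the CGPM in 2022).
EXA = 1e18
YOTTA = 1e24
RONNA = 1e27

flop_threshold = 1e26  # the EO's 10^26 floating-point operations

print(flop_threshold / EXA)    # 1e+08 -> 100 million "exa", not 0.1
print(flop_threshold / YOTTA)  # 100.0 -> 100 yottaflops
print(flop_threshold / RONNA)  # 0.1   -> 0.1 ronnaflops

# The quoted "0.1 exaflops" equals 1e17 operations; 1e26 / 1e17 = 1e9,
# i.e. the stated figure is a billion times too small.
```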


"Matthew Yglesias: My thought is that it’s probably not a good idea to cover the administration’s AI safety efforts much because if this becomes a high-profile topic it will induce Republicans to oppose it."

That is not the way this would go. Instead, Republicans would criticize it as not going far enough. The issue is not yet clearly polarized, but when and if it polarizes, the right will likely be the more pro-safety side, unless something changes a lot. (Polls: in an Ipsos poll, 70% of Trump voters and 60% of Biden voters agreed that "The uncontrollable effects of artificial intelligence (AI) could be so harmful that it may risk the future of humankind." In a Monmouth poll asking "How worried are you that machines with artificial intelligence could eventually pose a threat to the existence of the human race – very, somewhat, not too, or not at all worried?", 31% of Republicans and 21% of Democrats said "very worried", and 31% of each said "somewhat worried". Among politicians the skew is less obvious, but Sunak, von der Leyen, and Netanyahu are all right-wing within their respective systems.) This will likely end up being a problem, because academia, big tech, and the media all lean left.


Fun note: the Libertarian position seems to be that "generating a bunch of text" will lead to "bad things happening."

Applying this logic to other domains is left as an exercise for the reader.


I love the symmetry between the opposing sides in this debate. For some, AI unaligned with the interests of powerful corporations and governments seems like the overwhelming existential risk to humanity. For others, it’s AI aligned with those interests that seems most risky.

Note that I reject the notion that AI could be aligned with humanity as such. Clearly, nobody serious wants AI aligned with North Korea, religious extremists, anti-vaxxers, or other assorted “bad actors”.


Obama actually drove the effort and communicated/coordinated on Zoom with the various tech bros.

Excellent links, thanks! The EO doesn’t sound too diabolical, but (as with all EOs) there is usually an activist/donor-led agenda not always visible in the text. The good news is that it may be a year or two away from mattering.


IMO "tens of B" means between 10.1B and 19.9B. 'Tens', in this case, being used instead of the common usage 'teens.' And we all know what teens means.

Sooner or later we may find out the actual threshold, but if I read you right, you have concerns about power in the mid-teens. Would be funny if a bunch of developers wasted their time on 19B models, based on that interpretation, as a means to dodge reporting. Do your paperwork!
