9 Comments

So of all the persuaders in the world, who would be the person with the best odds of sitting in a room with Trump and persuading him that this is all an unprecedented national security risk and that we should be taking it gravely seriously?


I have been ruminating on this and similar thoughts (replace Trump with Dario/Sam/Elon/Demis/Zuck)... it seems like crafting a narrative/argument/strategy, and identifying and training an individual (or multiple individuals, to increase the number of shots on goal) to actually increase the understanding and awareness of these key decision makers, would be very high impact. Are there any projects like this underway? Starting with a long time scale, heavy involvement, and a small required shift in ideological position (i.e. you've got a month to convince your wife that this is an unprecedented risk) and moving up the difficulty scale as success is shown seems like a reasonable strategy. Zvi gestures at this a bit with "Rhetorical Innovation", and I feel like it might make a very engaging podcast series with the right charismatic host. But just from a memetic perspective, how do we best encapsulate this information in a way that convinces either "people that matter" or "enough people to matter"?


Yeah, I have a lot of faith in both the American public’s and the national security departments’ ability to grossly overreact to potential threats. The fact that nobody on high is trying to commandeer AI companies into the military suggests that they simply don’t understand the risks. It seems like “rhetorical innovation” is maybe the most important field in the world right now… perhaps Claude could help with this? :-)


I think "enough people to matter" is key. When you're talking about the general population, they're already pretty negative about this, and that's going to worsen as it disrupts the job market. For many elite decision makers, that disruption is a feature, not a bug, and that's one reason they are racing ahead. So the incentives are very much in favor of the general public realizing this a problem long before the elite.


I feel like these documents are being produced by groups with little political power inside their companies. If they really intended to slow down due to safety concerns, then Zuck and Google leadership would care more about other companies also adopting safety principles and would be lobbying for that.

These are smart people, but they’re not acting like they’re in a prisoner’s dilemma. They’re acting like they’re producing meaningless documents.


Two quick notes:

"If you’re Google and you’re not at least at SL2 for every model worth deploying, why the hell not?"

We totally are -- as you know, Google has some practice in defending against state actors and very skilled hackers. The next level up requires some work, though.

Re persuasion, we're working on it; it just didn't make the publication cut this time.


Great to hear on both counts.


Re the CBRN part: I'm not sure that knowledge or intelligence is really the limiting factor for these threats. For radiological weapons, the limit is likely to be access to radioisotopes; for nuclear weapons, access to fissile isotopes (in critical mass quantities).

For biologicals: yes, it is a very complex field, and if one were developing a _novel_ pathogen, a great deal of intelligence would be required - though a great deal of experimental work would _also_ be required. Most proposed biological weapons have relied on existing pathogens.

For chemical: well, it wouldn't be good for ISIS to get a tutor walking them through how to synthesize and deploy sarin. On the other hand, most of the information is readily available, albeit not in quite so predigested a form.

Re tutorial information and enabling competent design work at low cost: If we are going to worry about that, then assisting mechanical engineering of weapons becomes a concern - something like automating the innovations that Ukraine made in using drones.

For any of these kinds of automation of existing weapon types, there is a big loss in general utility of the systems if one tries to preclude the sort of assistance a STEMM tutor or a reference librarian could provide.

David Shapiro just posted a video, "AI Safety Cannot be Solved at the Model Level - Anthropic's Latest Fiasco - The Wrong Approach" (https://www.youtube.com/watch?v=8_bl4lJqj5E&t=5s), in which he tried to trigger a warning by asking for what he thought of as potentially dangerous chemical information. But he was asking about PPE for handling effectively _anything_ volatile and hazardous, which includes a vast array of legitimate, useful, albeit hazardous reagents. It is _fortunate_ that no warnings were triggered, since any system that concealed PPE requirements for hazardous reagents would _itself_ be a hazard to any of the many thousands of people who work with this broad range of materials.
