14 Comments

Typos thread: (and thank you Zvi for doing this!)

Signaling out landlords is very on brand for Biden. -> Singling out


Very Freudian! I kind of like it.


You may have noticed recently that some federal government websites don't suck any more. For example, login.gov is pretty good.

This is because the US Digital Service (within the White House) as well as the internal consulting group 18F (within the GSA) actually employ quite competent technologists, and have managed to somehow make the federal hiring process for these roles a little bit less bizarre (it's still very bureaucratic, but I believe they can evade the usual points-based resume evaluation system).

I think the combination of groups like these (on the "product" side) as well as the national labs (for example, Oak Ridge is posting on the 80k hours job board for various roles) means that the government probably can get some decent AI talent. No way to match big tech salaries, but not hopeless.


Government salaries are lower, but salary + benefits are often higher. Tech companies don’t give out defined benefit pensions, do they? Plus the benefit of job security and flexible schedules.


Entry-level big tech salaries for AI-type roles are like $100k-$200k (i.e., new employees are already near the top of the GS scale), and could go higher for a highly qualified, in-demand research scientist. That's just the salary; there are typically large stock grants on top of this that push total compensation to $300k-$500k or higher as you get promoted.

You don't get a pension, but you do get 401k matching, typically very high-quality health insurance, and generous parental leave, plus lots of fringe benefits like reimbursements for gym memberships and transit and food in the office. For the most part schedules are quite flexible too. It's an offer that's very hard to beat, and the downsides of being in the big tech environment are hard to see at first.


AI safety isn't a continuous kind of thing where you make things a bit safer by adding a few extra hoops, the way food or product safety might be.

I fear it's a losing move if you are concerned about AI safety, because:

1) It lets AI companies say "don't worry, we're already regulated" and creates the perception that the government is doing something about AI.

2) It also seems to implicitly add substantial fixed costs to doing AI research -- you need to report various things (even if you aren't legally penalized, there is PR risk in thumbing your nose at the regs). That likely means more huge models trained per small experiment performed, and small experiments are what help us get a better theoretical handle on these systems.

3) It imposes a small drag on AI research here which -- if you believe that AI risk is more likely to be something US researchers worry about than ones in China -- could raise its own AI safety risks.

-

As I'm not so concerned I tend to see it as a winning move. Sure, I'd prefer not having this kind of drag but I believe there is public demand to do something and this is the minimal something.


It's notable that companies are not prevented from releasing an AI, even if they think it's dangerous.

This could work, I guess. Assume the currently released models aren't dangerous. The order requires the AI companies to declare it to the government if they have reason to suppose their newer models are dangerous.

Case a) They report that their newer models are still not dangerous. Government says fine, carry on.

Case b) They report their newer models are dangerous. Government at least has the heads-up to get new regulations in place.

I do wonder about the liability if a company:

A) Reports to the government that their model is potentially dangerous

B) Decides to release it anyway

C) Somebody suffers a serious loss as a result.

Step (a) prevents the company from pleading ignorance, and puts the government in a position to argue that (a) + (b) + (c) establishes criminal intent ... or at least liability for negligence.

In the U.K., terrorist training manuals are illegal to possess. The obvious extension to models that can produce terrorist training manuals on request might not even need new legislation.

But in the US, you have the first amendment.

So for this to be illegal in the US, (c) would presumably need to be more than "your model generated a terrorist manual and some terrorist group acted on it." Actual incitement, maybe. Remember the guy in the UK who got convicted of treason (yes, treason) because an AI persuaded him to try to kill the Queen.


Hey Zvi, I find your confusion on 5(c) very interesting.

The federal government has been pushing for way more small business participation for a while now via changes to the policies about how agencies award contracts (one of these days after I start my own 'Stack I might try to do a deep dive on the history of the legislation here, but for the short form just know that the "big bigs" have really pissed off several agencies with their inability to deliver timely, working products).

AI contracts are no exception -- a decent number of the ones I've seen recently have been marked as Small Business Set-Asides (SBSAs). You can look at a fairly representative slice yourself if you go to SAM.gov (the unclassified listing of requests for information/quote/proposal run by the General Services Administration (GSA)) and just search for "artificial intelligence." Even before this, the GSA has also been running a lot of competitions (for some examples, just search "GSA AI Challenge") aimed at getting innovative small businesses in AI connected with various branches of the government.
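
If you'd rather pull that slice programmatically than click through the website, something like the sketch below should work. It assumes SAM.gov's public Get Opportunities endpoint and the parameter and field names shown, so verify them against the current API docs (and request your own key through SAM.gov) before relying on it:

```python
# Sketch: search SAM.gov contract opportunities for "artificial intelligence"
# listings and show which ones are small business set-asides. Endpoint,
# parameter, and response field names are assumptions to check against the
# official Get Opportunities API documentation.
import requests

SAM_API_KEY = "YOUR_API_KEY"  # placeholder; request a key through SAM.gov

params = {
    "api_key": SAM_API_KEY,
    "title": "artificial intelligence",  # keyword match on listing titles
    "postedFrom": "01/01/2023",          # the API expects MM/dd/yyyy dates
    "postedTo": "12/31/2023",
    "limit": 25,
}

resp = requests.get("https://api.sam.gov/opportunities/v2/search", params=params)
resp.raise_for_status()

for opp in resp.json().get("opportunitiesData", []):
    # typeOfSetAside is populated for set-aside notices (e.g. SBA codes)
    print(opp.get("title"), "|", opp.get("typeOfSetAside"))
```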

Biden is just making explicit something that's already been happening.


Was anyone else surprised about how much there was about synthetic nucleic acids? I feel like the media is obsessed with self-driving cars and, like, artist attribution, but I never hear about synthetic nucleic acid risk; it seems worse.


Indeed, there seems to be clear traction in Washington to worry about synthetic bio. It's a good thing - and unlike AI risk, it's something pretty much everyone agrees on directionally; it's just a fight to get attention and funding.


We need a billionaire to make this one of their pet causes. Maybe someone young from a finance-ish background.


Thank you for a great summary.

I have been a contractor to the government and an employee of the government for over 15 years. I do not believe most employees will do much on the writing front. It will be a boon for consulting companies, especially the top consulting companies, and most of the AI talent will come through consulting companies as well. It is hard to hire talented people on government salaries, and the bureaucracy you have to deal with once you are in is very difficult to overcome. I know there are some exceptions, but in most cases it is mission impossible.

The other challenge is that the quality of data in the government is very low, which will mean another boon for consulting companies brought in to improve the data. Can all this be overcome? The answer is yes, but it will require a very different mindset and leadership than we have right now in most government agencies.


Just use AI to examine the full EO and get it to explain the ins and outs, the earmarks, the pros and cons.
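
A rough sketch of the mechanical version of that, assuming the openai Python package, a local text copy of the EO (executive_order.txt is a placeholder filename), and whatever chat model you have access to:

```python
# Sketch: feed the EO to a chat model in chunks and ask for a plain-English
# breakdown of each part. The filename and model name are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("executive_order.txt") as f:  # hypothetical local copy of the EO
    eo_text = f.read()

# Naive character-based chunking so each request fits in the context window.
chunk_size = 20_000
chunks = [eo_text[i:i + chunk_size] for i in range(0, len(eo_text), chunk_size)]

for i, chunk in enumerate(chunks, 1):
    response = client.chat.completions.create(
        model="gpt-4",  # assumption: substitute whatever model you use
        messages=[
            {
                "role": "system",
                "content": "Explain this executive order excerpt in plain "
                           "English: the ins and outs, any earmarks, and "
                           "the pros and cons.",
            },
            {"role": "user", "content": chunk},
        ],
    )
    print(f"--- Part {i} ---")
    print(response.choices[0].message.content)
```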


> There is no reason for those capable of building tomorrow’s AI companies and technologies to be anywhere but the United States.

Bringing these people to the US might accelerate capabilities, because they would not be able to build AI companies as effectively outside the US (network effects, more funding, etc.).
