31 Comments

Thank *you* for the best content out there on AI! Your summaries, your thoughts, all extremely valuable.

Expand full comment

Yeah, I jokingly thought earlier today that the OpenAI crisis might end up in net positive territory just by increasing your reach, Zvi. :D

Expand full comment

Congrats on a year!

It's interesting, all of the areas you've chosen seem unbelievably entrenched - the type of situation where a small, motivated group greatly benefits from the status quo of the thing you're trying to change. Although I don't know your funding situation, they probably have more money at stake financially than what you have raised. In that case, wouldn't it make sense for them to spend money opposing your efforts to do things like alter or repeal the Jones Act, up to the point where it is no longer profitable for them to do so? Right now they enjoy the money directly, but if they could spend some of it counter-lobbying you and still come out ahead, it seems worth it for them to oppose you directly, no? Is this a case where you think you can muster greater determination than them and win despite the money imbalance, or do you think politicians would prefer to agree with you? Again, I might be wrong and you might just have more money than what is gained by the people profiting from the Jones Act.

Expand full comment
author

Oh, no, we don't have the ammunition to fight the whole fight and win or anything. This is a small operation, as I note.

I chose the areas because there are clear big wins, and I don't actually see as much opposition as you might think. But of course, any place there is a clear big win and we don't take it, SOMEONE is an entrenched interest there! Otherwise we'd have done it.

My theory is more like: there are steps we can take to make it much easier to get this to happen, and also that there's a better battle plan and approach that no one's looking at.

But yeah, long shots, and we probably don't get any of these wins in the end. And that's fine. Still seems right to try.

Expand full comment

> I believe that the numbers will show both big wins and few net losers from repeal—including few losers among the dedicated interest groups that are fighting for the Jones Act tooth and nail—such that full compensation for those losers would be practical.

This remark of yours about how to approach repeal of the Jones Act strikes me as both obvious and highly idiosyncratic (maybe that's why I like you!). But is the notion of buying out the interest groups even practical from a policy perspective? It seems to me like it should be, but I'm not aware of similar prior legislative efforts that take such a direct route.

When I think of the horse-trading to get legislative work done, I think of highly indirect exchanges of benefits to constituencies, such as "I will support your effort for X if funding for project Y is earmarked in part for work in my state" (see, for instance, the wildly distributed supply chains for NASA and DoD projects that touch almost every state in the country).

Maybe I'm a little over-excited about a new way to think about policy issues, but the idea of proving out the cost/benefit analysis to motivate a change in regulation, and then greasing the wheels to get us over the activation-energy barrier by simply putting up the money to compensate existing interests, is so radically simple compared to the way things are normally done that I kind of can't believe it would work.

But if this approach ultimately does work, are there other examples of low-hanging fruit you see where the status quo is undesirable and a solution of "just put up the money" would do the trick?

Expand full comment
author

I agree it is non-standard. You might well have to find a way to disguise that this is what you are doing - e.g. you find some fig leaf to describe what you are paying them for, or why they get compensation.

Consider eminent domain. If you take someone's land, you must compensate them. I think this is the parallel - if someone has gotten hold of such rents, then removing them is *kind of* like a government taking. So it is actually reasonable to say: we need to make you whole if we take it away, because you had the expectation that this was wealth, and may have paid to get it.

Hence "Get your government hands off my Medicare."

Which sounds crazy! But is actually a reasonable thing to say, from the right point of view.

Expand full comment

The parallel to eminent domain makes a lot of sense. So rather than private property, we’re talking about economic rents protected by existing regulation. I like that.

Expand full comment

Zvi is pretty fucking great at this kind of thing!

Expand full comment

I like your choice of priorities.

re: Jones Act, via Tyler Cowen: https://www.nber.org/papers/w31938#fromrss

Expand full comment

The NEPA proposal is...going to be difficult. I gather the plan would be to replace not just NEPA but also NHPA and its associated tribal consultation requirements? And the Endangered Species Act?

One difference between the proposal and the status quo is that a panel with the power and obligation to decide on a project by majority vote will take minority interests into account systematically less than the current process does. For example, what is the spiritual cost of uranium mining on sites that are sacred to Indian tribes? Anywhere from zero to infinite, depending on how you view it. Currently there are big incentives for government and developers to find some compromise with the affected tribe; under your proposal, much less so.

Which means...all the minority groups that benefit from the status quo will be highly incentivized to fight this proposal.

Expand full comment
author

I agree these are big issues - this is very much the least 'gamed out' of the plans, including AI, at this stage. The goal is to give real stakeholders the same kind of leverage they currently have, to the extent they have real stakes in the fight - so you'd want to give them some form of oversized weight, require supermajorities, or use other similar tactics. I think it's basically fine to have a de facto rule that says 'the local tribe has to approve if it is sufficiently impacted', in which case, yes, they will demand a bribe - but also, per Coase, they can get one.

The basic Balsa philosophy is that if someone has successfully sought rent, you buy the property from them, because what matters far more is almost always the deadweight loss.
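As a purely illustrative sketch of that arithmetic (the numbers are made up for the example, not actual Jones Act estimates): suppose a protectionist rule transfers rents of R per year to the protected industry while costing everyone else R + D per year, where D is the deadweight loss. Then, at discount rate r, the incumbents can be made whole with a one-time payment of roughly the present value of their rents, and society still comes out ahead every year thereafter:

\[
\text{compensation to make incumbents whole} \approx \frac{R}{r}, \qquad \text{annual net social gain from repeal} = (R + D) - R = D > 0.
\]

For instance, with R = $2 billion/year, D = $8 billion/year, and r = 5%, a buyout of up to $40 billion is on the table: the incumbents lose nothing, consumers save $10 billion/year, and the $8 billion/year of deadweight loss simply stops being destroyed.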

Expand full comment

Why focus on the Jones Act and not the Foreign Dredge Act?

Expand full comment
author

If you go for the Dredge Act, the people supporting the Jones Act oppose you anyway, because they correctly think they're next. So you don't win anything. Whereas if you get the Jones Act you also get the Dredge Act, so you might as well hunt the bigger game.

But yes, if I thought I could get the Dredge Act at a big effort multiplier instead, I'd go for it. I suspect it's a much larger fraction of the Jones Act's value than people realize.

Expand full comment

I did worry for a while that, after the Covid posts, there'd be some rough seas, but it turned out to be a good year indeed. It was pretty cool to see your OpenAI drama explainer linked by Matt Yglesias as a "better than what you'll find in the MSM" summary.

Been meaning to ask for a while: do you have any particular preference among funding streams? The main reason I became a paid subscriber in the first place is that, for whatever reason, I wasn't easily able to locate your Patreon at the time, and this was easier.

Good luck on the Jones Act mission. For the sake of friends, family, and corporate interests in my ambit that pay unreasonable rents because of it (Hawaii, sigh), I hope you're successful, or at least make an honourable attempt that inspires future efforts.

Expand full comment
author

If AI hadn't happened, I had other big plans that would have been a lot of fun. There was never a shortage of things to explore!

Dollar for dollar I think Substack is more efficient than Patreon, but the difference is small. Balsa is a different pool of money for a different purpose, so support that if and only if that's what you want to support (and especially consider it if you do get to actively deduct charitable contributions).

And thanks.

Expand full comment

I hope you start writing more about the Jones Act! I am eager to hear about what can be done effectively there.

Expand full comment

The first people to build AI sufficiently powerful to risk destroying the world are almost certainly going to be the well-funded corporations or governments, since they are the ones with the resources (both money for datacenters and motivated programmers) and therefore the only ones with models actually at the frontier of intelligence. (An actual ban on "open source frontier models" would therefore be acceptable because it bans nothing, but obviously that's not what you mean.)

So instead of attacking people who are open-sourcing models that we already know are not intelligent enough to destroy the world, why not focus your work on preventing organizations like OpenAI and Anthropic from building newer, more powerful models in the first place, with every advance they make risking humanity's ruin?

From my perspective, once the model has been created, the horse is already out of the barn; researchers are going to test the capabilities of their model, so if it can destroy the world, it will. Going downstream of these frontier models to regulate their open-source pale imitations is just wasted effort that won't work at stopping the world from ending and hurts everybody along the way.

Expand full comment

"instead of"

You can of course do both, but the prioritization strategy here (focusing on the use of models over the creation of them) doesn't make sense to me.

Expand full comment

Aren't Facebook/Meta the main dangerous actor releasing open source models? (Maybe not – I haven't been keeping up to date.)

They have sufficient money/resources to worry me.

Expand full comment

Meta is the company with the most cavalier attitude towards safety, but (fortunately) their AIs are less intelligent than those of OpenAI and Anthropic, and what's going to destroy humanity is general superintelligence, not some bad actor using AI to inspire a chemical attack or whatever.

Expand full comment

Their AIs are worse – but everyone's, including theirs, are continuing to get better too.

It only takes one unaligned AGI/ASI to kill us all.

Expand full comment

I'm also not currently sufficiently convinced that some existing model, with enough "scaffolding", isn't already very dangerous, even if not literally existentially so. (It's a big space of computational possibilities across which to try to 'provably' convince oneself of 'safety'.)

Expand full comment

Yes, the frontier models will almost certainly be created by the bigger 'closed' organizations, e.g. tech startups/companies.

But ALL of the models – both the (current) frontier models and _all_ of the (now many) open source models – ARE GETTING BETTER, and maybe faster and faster too. It's really only been months, or arguably a few years, since any of this seemed like an _impending_ possibility.

Besides the danger of the models themselves though, there very well could be some amount or kind of what Zvi calls 'scaffolding' that would make the difference between even a mediocre LLM and an AGI. It's not _totally_ obvious that something like the existing LLMs would ever, on its own, be a catastrophically dangerous AI.

But I also expect that, if anything, progress on 'scaffolding' will be MUCH more rapid for the open source projects. That's exactly the kind of iterative and collaborative effort at which open source particularly excels – anyone and everyone can just fork (or copy) each other's code and build on each other's work. The 'closed' organizations can also (legally) copy (some of) the open source work, but they just CANNOT possibly evaluate or search as much of the open source code as the open source people themselves can.

It only takes one – really, THE FIRST – unaligned AGI/ASI to kill us all. But I don't think that necessarily has to be a 'frontier model'.

Expand full comment

I am not arguing that open-source groups should be allowed to do anything they want, just that they are at present not the real threat. The real threats are OpenAI, DeepMind, Meta, and Anthropic. (And Meta's AI research is the problem, not their open-sourcing of the products of that research.)

I do not think your belief about scaffolding is borne out by the past record of progress.

Expand full comment

I don't think we should be tracking _only_ 'current threats' – 'future threats' are important to at least observe regularly. The open source models/whatever are getting (much) better (and quickly) too!

The _recent_ "record of progress" (in AI) ALSO wasn't expected based on the _previous_ "record of progress". We can't just keep extrapolating lines on a graph forever and refuse to consider other possibilities. I expect to be surprised.

Expand full comment

Regardless of how much risk open source may pose in the future, it is clearly less than the threat of OpenAI, DeepMind, Meta, and Anthropic.

General high levels of rational paranoia make sense because of the stakes; what does not make sense is letting the above companies continue to develop more and more powerful models.

I am not refusing to consider other possibilities. I think they are worth considering, but still less of a threat than the companies who are explicitly attempting to build superintelligence and who clearly do not know how to build superintelligence which can be known to not destroy the world.

Expand full comment

We're replying past each other.

I agree with the relative priorities. I just ALSO think it's a mistake to NOT worry about the open source AIs TOO.

Expand full comment

Thank you! I am so happy with everything you've done, are doing, and everything I expect you to do in the future. I feel honored to know you, as meager as that might be.

Expand full comment

I'll join the chorus of "thank you"s. You have indeed helped all of this make more sense. And I'm glad it's not consuming so much of your time that you can't do other things too.

Expand full comment

One thing I'd love to see from anyone pursuing policy changes is some kind of policy "dashboard" website. Today's super-short media cycle makes it hard to keep a topic on voters' and politicians' minds, especially one that even in the best possible case isn't the #1 problem facing the country.

What I would love to see is a single page that 1) made the shortest possible argument for the change (with links to more detailed arguments), 2) gave a good-faith estimate of the cost of NOT doing it, amortized to today - "The Jones Act Costs Americans X million dollars a day", 3) kept a war-room scorecard with links to "territory won" - pundits, public figures, and pols who have been pitched the case and come out in support - and 4) showed "your donation dollars at work": what's been accomplished and what's next.

I might wake up in the morning and check the price of my company's stock. I'd love to be able to check the pulse of the stock "Will The Jones Act Be Repealed?" in a similar time frame.
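A minimal sketch of how the data behind such a dashboard might be organized - this is hypothetical TypeScript, with every type and field name invented for illustration; it reflects nothing Balsa has actually built or planned, only the four elements described above:

```typescript
// Hypothetical data model for a single-issue policy "dashboard" page.
// All names here are invented for illustration.

interface PolicyDashboard {
  issue: string;                   // e.g. "Repeal the Jones Act"
  shortArgument: string;           // the shortest possible case for the change
  detailLinks: string[];           // links to more detailed arguments
  costOfInaction: CostEstimate;    // "costs Americans X million dollars a day"
  scorecard: Endorsement[];        // "territory won"
  donationsAtWork: Milestone[];    // what's been accomplished and what's next
}

interface CostEstimate {
  dollarsPerDay: number;           // good-faith estimate, amortized to today
  methodologyLink: string;         // how the number was derived
}

interface Endorsement {
  name: string;                    // pundit, public figure, or politician
  role: "pundit" | "public figure" | "politician";
  sourceLink: string;              // where they came out in support
  date: string;                    // ISO date of the endorsement
}

interface Milestone {
  description: string;
  completed: boolean;
}
```

A static page backed by something like this, refreshed daily, would also give the "check it like a stock ticker" experience described above.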

Expand full comment