19 Comments

> It seems like Tyler is thinking of greater intelligence in terms of ‘fitting together quantum mechanics and relativity’ and thus thinking it might cap out, rather than thinking about what that intelligence could do in various more practical areas.

Maybe the implicit premise is that intelligence is not the bottleneck in practical areas, but rather e.g. coordination problems? Of course, there's probably a sufficient level of intelligence that could figure out how to coordinate people/institutions.

>I believe that with proper attention to what actually causes success plus hard work small business can have higher private returns than a job for a great many people

Have you written more about this somewhere?

All of Cowen's statements on AI seem to reflect an underlying belief that there is something inherently special about human intelligence that cannot be significantly improved upon. He also tends to dodge that question when it is put to him directly, even when asked to assume that such improvement is possible.

You are tremendously skilled at sharing your brain. I came here for the AI, stayed for the intellectual rigor. I really want to debate you about fertility reduction and population; perhaps there will be an opportunity.

I am not in tech, but you make the subject easy to digest. One question: please explain "model weights" to me.
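
A minimal sketch of the idea, using a hypothetical toy model rather than anything from the post: "model weights" are the learned numbers inside a model. Training nudges those numbers until the model's outputs match the data, and releasing a model's weights means publishing them.

```python
# A minimal sketch, assuming a toy linear model y = w * x + b.
# Its "weights" are just the two numbers w and b; training adjusts them.
data = [(1, 3), (2, 5), (3, 7)]  # examples following the true rule y = 2x + 1

w, b = 0.0, 0.0  # the model's weights before training
for _ in range(1000):  # training loop: nudge weights to shrink the error
    for x, y in data:
        error = (w * x + b) - y
        w -= 0.01 * error * x  # gradient step on w
        b -= 0.01 * error      # gradient step on b

print(round(w, 2), round(b, 2))  # ~2.0 and ~1.0: the learned weights
```

An LLM is the same idea at vastly larger scale: billions of such numbers rather than two, so "releasing the model weights" means publishing that giant array of learned numbers.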

I am surprised to read that Tyler thinks Argentina's cyclical hyperinflation is a mystery. Adam Tooze summarizes a thoroughly reported explanation of the political economy here: https://adamtooze.substack.com/p/chartbook-144-the-energy-shock-and. Short version: the country is caught in a policy trap. Heavy energy subsidies flow to the politically powerful; these repeatedly snap under macroeconomic pressure and are then restored, letting pressure build until the next snap. (The argument could be wrong, of course.) Perhaps the takeaway is that neither LLMs nor polymaths such as Tyler save us from the challenge of knowing whether we have thought everything through, a sort of research equivalent of the halting problem.

Thanks for your wonderful newsletters. I find them very rewarding.

Re ancestors and descendants: the flip side of compound interest is exponential fading.
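
A quick worked version of that flip side, assuming the standard halving of expected genetic relatedness each generation (and ignoring pedigree collapse): you have 2^n ancestors n generations back, but your expected relatedness to any one of them is (1/2)^n. Ten generations out, that is (1/2)^10 = 1/1024, roughly 0.1%: the count compounds while the connection fades exponentially.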

"very long, very detailed, and very good. Interesting throughout!" Tyler Cowen rightly about the post above :) "Zvi Mowshowitz covers my podcast with Dwarkesh Patel

by Tyler Cowen February 3, 2024 at 2:59 pm" Also: a great interview.

On VCs and ROI, as you probably know, Marc Andreessen thinks of whaling as a precursor to the VC funding mechanism. Out of curiosity I did a bit of looking around and found an article that examined 11,000 whaling voyages between 1800 and 1899 and reported: "During the nineteenth century, US government bonds, a risk-free asset, returned an average of 4.6%; whaling, a risky asset, returned a mean of 4.7%. This shows 0.1% as the risk premium for whaling over US government bonds."

Here's the blog post where I report that: https://new-savanna.blogspot.com/search?q=whaling

Dwarkesh has the best in-depth podcast that I know of.

> So in the end, if you combine his point that he would think very differently if international coordination were possible or others were rendered powerless, his need for a hegemon if we want to achieve safety, and clear preference for the United States (or one of its corporations?) to take that role if someone has to, and his statement that existential risk from AI is indeed one of the top things we should be thinking about, what do you get? What policies does this suggest? What plan? What ultimate world?

So your interpretation is that Cowen thinks Straussianism is instrumentally rational for public intellectuals who forecast AI capabilities? This didn't draw any attention in the comments here or on LW, which surprises me, so I think there's a good chance I've misunderstood.

Anyway, if my interpretation of your interpretation is correct, do you think it's because Cowen wants to maintain his personal ability to influence key decision-makers, publicly and/or privately?

Or would it be because he wants to justify a maximal US lead in AI capabilities over China, for a broad readership which has diffuse political influence on whether this will happen? If this were the case, maybe he thinks the most likely scenario for effective international AI safety cooperation is the US gaining full hegemony in global politics, so that it can punish defectors.
