13 Comments

I’ve been meaning to ask what you have in mind by your use of “mundane” in your recurring topic names. Is the idea to distinguish larger, more revolutionary changes? Just a slightly odd choice of words perhaps.

I take it to mean: it does something valuable that laypeople can understand and appreciate, as opposed to being of esoteric use for wizards.

I don’t think wrongly claiming to solve a problem counts as misalignment. Humans can be overconfident in the wrong answer, but we wouldn’t claim that they are misaligned because of it. I suspect a better prompt would let o1 be okay with being wrong, given that Claude can do this.

I must confess that I am starting to become as frustrated with AI boosters like Zvi as I am with doomers like Marcus. It is now obvious that the scaling laws alone will not take us to truly transformational AI: we have exhausted text data, and no level-5 models are forthcoming without another breakthrough, even if we make progress on other axes and can dress up level-4 agents with test-time compute into something that looks like an autonomous AGI. The time derivative has declined since GPT-4 was released, and my money would be on it remaining low until a breakthrough on multimodality clears the overhang: possibly in a few years, possibly never, but also possibly in a very dangerous way. Until then we will have to recognize that the hype of 2023 will not be realized, even as past breakthroughs are integrated into the economy. I am disappointed that nirvana is not forthcoming, grateful that the apocalypse isn’t either, and disgusted by the slop that students now substitute for thought.

From your mouth to God’s ears. I truly hope you’re right.

The problem I see is that, since o3 is already better at computer science than many computer scientists, something that merely “looks like an autonomous AGI” might be sufficient for escape velocity: once these systems are agent-y enough, labs can deploy millions of instances of them to develop the next iteration of AI, and so on.

The numbers you give for Google, Amazon, and Meta's capex are for 2024, not projections for 2025, if I'm not mistaken.

Also re: Nvidia. Still curious what you think about Nvidia vs Broadcom. Broadcom has already proved they can design hardware as good as or better than Nvidia's with Google's TPUs. And they are already partnering with Amazon, Microsoft, and Meta to create custom chips. Amazon seems to be the only one of them seriously investing for now, but if that succeeds, the others might invest more.

The basic economist position is that intelligence, like everything else, is subject to diminishing marginal returns.

I don't think this is subject to serious dispute. Instead, the "ASI changes everything" perspective is that an intelligence explosion will lead to such substantial gains in intelligence that they offset the diminishing marginal returns. If the returns to intelligence scale as log(x), then maybe ASI scales intelligence itself along the lines of 2^x rather than x^2, so that the realized returns keep growing instead of flattening out.

Either position seems ex ante plausible to me? Yes, there is plausibly a level of ASI that grows explosively far beyond our imagination. But it's also totally plausible that higher levels of ASI require resource investments that -- even with the ASI itself recursively self-improving -- never explode.
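A quick numerical sketch of that composition (my own illustration, with the log-returns assumption from the comment above): if value scales as log of intelligence, then polynomial growth in intelligence buys only logarithmic growth in value, while exponential growth in intelligence buys linear, never-flattening value.

```python
import math

def returns(intelligence):
    # Diminishing marginal returns: value is the log of the intelligence level.
    return math.log(intelligence)

# Polynomial scaling (x^2): realized returns are 2*log(x), still logarithmic.
poly = [returns(x ** 2) for x in (10, 100, 1000)]

# Exponential scaling (2^x): realized returns are x*log(2), i.e. linear in x.
expo = [returns(2 ** x) for x in (10, 100, 1000)]

print(poly)  # each 10x step in x adds only a constant increment
print(expo)  # each 10x step in x multiplies the return by roughly 10
```

The point of the sketch is just that log composed with 2^x is linear, so under this toy model diminishing returns and an intelligence explosion can coexist.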

How does that square with the value of human intelligence? An outlier unusually smart person is more valuable on your team than a pretty smart person, right?

Of course.

But what about adding a second genius to the team, do they bring as much value as the first one? A seventh? A fortieth? What about when you get to the point that every square foot of office space is filled with geniuses and no one can even turn around or sit down?

Now we've switched from the level of intelligence to "parallelizability" by talking about more geniuses, instead of a next-level genius that makes the previous genius look mediocre. That distinction is also important to think about when it comes to AI, but I think anyone who "feels the AGI" expects the "next-level genius" part to matter more than the many geniuses.

I've never heard of there being a diminishing marginal return to intelligence. How are you even measuring intelligence, in order to calculate the marginal return? If it's IQ, do you think an IQ 100 person isn't twice as productive (in anything that uses intelligence at all) as an IQ 50 person, or an IQ 200 person over an IQ 100 person?

That's certainly not something I was ever taught in my econ coursework.

Convoluted first sentence and second sentence. How about writing a sentence that is clear?
