30 Comments

I’ve been meaning to ask what you have in mind by your use of “mundane” in your recurring topic names. Is the idea to distinguish larger, more revolutionary changes? Just a slightly odd choice of words perhaps.

I take it to mean: it does something valuable that laypeople can understand and appreciate, as opposed to being of esoteric use for wizards

I think it's a reference to this trope: https://tvtropes.org/pmwiki/pmwiki.php/Main/MundaneUtility

Ah! Well how about that! Thanks, brother.

Can confirm that is where I got the term.

I don’t think wrongly claiming to solve a problem counts as misalignment. Humans can be overconfident in the wrong answer, but we wouldn’t claim that they are misaligned because of it. I suspect a better prompt would let o1 be OK with being wrong, given that Claude can do this.

I must confess that I am starting to become as frustrated with AI boosters like Zvi as I am with doomers like Marcus. It is now obvious that the scaling laws alone will not take us to truly transformational AI: we have exhausted text data, and no level-5 models are forthcoming without another breakthrough, even if we make progress on other axes and can dress up level-4 agents with test-time compute into something that looks like an autonomous AGI. The time derivative has declined since GPT-4 was released, and my money would be on it remaining low until a breakthrough on multimodality clears the overhang: possibly in a few years, possibly never, but also possibly in a very dangerous way. Until then we will have to recognize that the hype of 2023 will not be realized, even as past breakthroughs are integrated into the economy. I am disappointed that nirvana is not forthcoming, grateful that the apocalypse isn’t either, and disgusted by the slop that students now substitute for thought.

From your mouth to God’s ears. I truly hope you’re right.

The problem I see is that, as o3 is already better at computer science than many computer scientists, getting to “looks like an autonomous AGI” might be sufficient for escape velocity: once such models are agent-y enough, millions of instances can be deployed on developing the next iteration of AI, and so on.

The numbers you give for Google, Amazon and Meta's capex are for 2024, and not projections for 2025 if I'm not mistaken.

Also re: Nvidia. Still curious what you think about Nvidia vs Broadcom. Broadcom already proved they can design hardware as good as or better than Nvidia's with Google's TPUs. And they are already partnering with Amazon, Microsoft and Meta to create custom chips. Amazon seems to be the only one of them seriously investing for now, but if they are successful, the others might invest more.

The basic economist position is that intelligence, like everything else, is subject to diminishing marginal returns.

I don't think this is subject to serious dispute. Instead, the "ASI changes everything" perspective is that an intelligence explosion will lead to such substantial gains in intelligence that it offsets the diminishing marginal returns. If the returns to intelligence scale as log(x), then maybe ASI scales intelligence itself along the lines of 2^x rather than x^2, so that the returns never hit an asymptote.

Either position seems ex ante plausible to me? Yes, there is plausibly a level of ASI that grows explosively far beyond our imagination. But it's also totally plausible that higher levels of ASI require resource investments that -- even with the ASI itself recursively self-improving -- never explode.
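The two growth regimes in that comparison can be sketched numerically. This is a toy illustration under the assumed log(x) returns curve, with made-up trajectories; none of the numbers describe actual AI progress:

```python
import math

def returns_to_intelligence(x):
    """Assumed diminishing-returns curve: utility ~ log(intelligence)."""
    return math.log(x)

for t in [1, 5, 10, 20]:
    exploding = 2 ** t   # intelligence explosion: I(t) = 2^t
    steady = t ** 2      # polynomial improvement: I(t) = t^2
    print(t, returns_to_intelligence(exploding), returns_to_intelligence(steady))

# log(2^t) = t*log(2) grows linearly without bound, while
# log(t^2) = 2*log(t) flattens out: the explosion offsets the
# diminishing returns, but the polynomial path does not.
```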

How does that square with the value of human intelligence? An outlier unusually smart person is more valuable on your team than a pretty smart person, right?

Of course.

But what about adding a second genius to the team: do they bring as much value as the first one? A seventh? A fortieth? What about when you get to the point that every square foot of office space is filled with geniuses and no one can even turn around or sit down?

Now we've switched from the level of intelligence to "parallelizability" by talking about more geniuses, instead of a next-level genius that makes the previous genius look mediocre. That is also important to think about when it comes to AI, but I think anyone who "feels the AGI" expects the "next-level genius" part to be more important than the many-geniuses part.

I've never heard of there being diminishing marginal returns to intelligence. How are you even measuring intelligence, in order to calculate the marginal return? If it's IQ, do you think an IQ 100 person isn't twice as productive (in anything that uses intelligence at all) as an IQ 50 person, or an IQ 200 person over an IQ 100 person?

That's certainly not something I was ever taught in my econ coursework.

I mean, there are *ultimately* diminishing marginal returns to intelligence, insofar as the use of intelligence is problem-solving and problems have a best solution. Someone with 200 IQ won't do any better than someone with 120 IQ at noughts and crosses (tic-tac-toe for American heathens), because someone with 120 IQ already has the game basically solved. More intelligence doesn't help if you were already acting optimally.

Of course, the thing is that the game of "reality" is complicated enough that nobody is anywhere remotely close to perfect play, so the fact that there will eventually be diminishing returns is cold comfort to us mere mortals.

The issue isn't that; it's that reality isn't fair, and in a 200-vs-120 matchup the 200 has advantages on the margin, on the long tail.

This won't compensate for a massive material advantage, which is why the "boxing match vs Einstein" example is used. Say the AI has a slow, weak, uncoordinated robot.

It's a superintelligence but also gets no prep time to load the robot with guns or find out personal info on the opponent. Can the superintelligence win? No.

For a given IQ difference, the largest robotic handicap at which the robot still wins 50 percent of the boxing matches would be the utility gain.

I think it's really useful to think of plausible "primate level" tasks which your intelligence was designed for. Why didn't nature make you smarter?

Consider a plausible real-world task: "I am hungry, I am going to go pick fruit." Being smarter - doing more and more complex CoT on this task, scrutinizing every move - only gives you benefits on the tail of the bell curve.

For example, scrutinizing every single step for a snake and checking behind every bush for a predator you don't expect to see costs you energy and compute, but only rewards you when the danger is actually present. In some environments that's rare.

I think all tasks exhibit this diminishing-returns property. Resources aren't infinite either, so there are only so many tasks an "ASI" can even consider doing within the resources it has.

Agriculture is a pretty impressive "return on intelligence" solution to the "I am hungry" problem.

And also an example consistent with diminishing returns. Agriculture is not purely an "intelligence" solution: it has long-term ROI, but the payoff is so poor at first that many groups never developed it. The high yield we see now rests on a stack of other innovations, including mechanization, fertilizer, genetic engineering, land surveying and legal systems. All had to be developed.

Primate: I am hungry now in this lifetime.

Yudkowsky-style foom makes that happen all at once, not over decades to centuries. May not be possible. (Where "that" is self-replicating nanomachines and Dyson swarms.)

Well, you could also have, "Use intelligence to build trap to catch prey." "Use intelligence to make weapons to hunt prey." "Use intelligence to make basket to carry much more fruit." "Use intelligence to outsmart prey." "Use intelligence to coordinate with other members of the species to hunt/gather more efficiently."

Yes, for a lot of these, intelligence is more of a compound-interest situation than a simple "Should I do A or B?" That seems to me to be how any intelligence or skill works: they are advantages you apply over time. That doesn't mean that (in a reasonable range) d^2(Productivity)/d(Intelligence)^2 < 0 (i.e. diminishing returns to intelligence).

X = quantity of grey matter used. (You can think of it as lit 2nm silicon)

Y = reward, in this case calories today, expected lifetime calories.

"Diminishing returns" : increasing X causes smaller and smaller increments in Y.
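That definition can be sketched in a few lines; the square-root curve here is an arbitrary illustrative choice, not anything claimed in the thread:

```python
# Toy illustration of "diminishing returns": each added unit of grey
# matter X buys a smaller increment of reward Y than the one before.
def reward(x):
    # arbitrary concave curve chosen for illustration
    return x ** 0.5

# increments in Y from each successive unit of X
increments = [reward(x + 1) - reward(x) for x in range(1, 5)]
print(increments)  # each entry is smaller than the last
```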

You're correct there is probably a threshold effect: if you and your whole species are above a certain level of intelligence, expected lifetime calories actually increase faster than the cost of the additional grey matter.

For example you gave "tool use, make traps from available materials".

Hmm. I wonder what things could be built in our world that we humans are all individually and collectively too stupid to see.

I think maybe your hypothesis is correct then. I was thinking at the limits, "infinitely smart", where even with infinite intelligence, finite data and finite materials bottleneck what can be accomplished. Infinite intelligence doesn't let a caveman develop antimatter-fueled starships within his lifetime.

Convoluted first sentence and second sentence. How about writing a sentence that is clear?

Maybe it's a ploy to get people to engage their brains before reading the post. But for real, that second sentence required several attempts for me.

Bit of an aside, but the twoot thread about AI art and meaning got me thinking: does the reaction to AI art pretty much prove that most people who espouse Death of the Author (DotA) don't actually believe in it? In art spaces (in the broad sense) you will struggle to find many people who support both DotA and AI art. But if you accept that authorial intent is utterly irrelevant, then the fact that diffusion models don't operate through mechanisms where authorial intent can reasonably be proposed shouldn't stop AI art from being just as meaningful as artist art. The same probably applies to LLM-produced fiction, though it seems more up in the air whether an LLM can have authorial intent. Hence it seems to expose most proponents of DotA as using it merely as an excuse to thrust their own meaning onto a work, rather than as actual true believers.

Regarding alz's post about the difficulty of reading "high-register" English, I am reminded of this quote:

"Instead of making sure old books are 'suitable for modern readers,' how about making sure modern readers are suitable for old books?" - David Burge

Honest question: Why do we expect AI to become cheaper? Is it just because Moore's Law makes the hardware cheaper and more energy efficient? Or is there something else happening?

As I got it, all things considered Moore's Law should make computations around 30% cheaper per year. At the current pace of development, it seems that the actual price of a prompt goes up by a factor of 10 per new model / year (GPT-4 - o1 - o3). This is very hand-wavy, but as I see it we are dealing with two different orders of magnitude in prices.
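Putting those two rough figures side by side (a toy compounding calculation, using the numbers stated above as assumptions rather than measured data):

```python
hardware_cost = 1.0  # relative compute cost in the base year
prompt_price = 1.0   # relative price per prompt in the base year

for year in range(3):
    hardware_cost *= 0.7  # ~30% cheaper per year (Moore's-Law-style)
    prompt_price *= 10.0  # assumed 10x jump per model generation

print(hardware_cost, prompt_price)
# After three generations hardware is ~0.34x as expensive, but the
# per-prompt price is 1000x: the two trends differ by several orders
# of magnitude, as the comment suggests.
```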

Regarding how well AIFilter works, I recorded a little demo video when I put the project up last year

https://www.youtube.com/watch?v=CligVVTC5io&t=3s

I'm sure everyone already knows this, but if you're coming in a little crunched on MyFitnessPal or just need a change, you can give o1 your remaining calorie needs for the day and your macro goals, plus stuff like "no cottage cheese or hippie food", and it'll spit out some really good options. You can pretty easily plan a week's worth of meals, or avoid blowing the budget you have left in the day, this way.

To be clear, you can be specific, like "It's 8 pm, I can get to an Albertsons in Las Vegas, I need 85 protein, 20 fat, 65 carbs before the end of the night and I don't want to eat hippie food. make it spicy", and you'll get a plan.
