38 Comments
AT:

Glad to feel some hopeful vibes on the future and sounds like a great event

Rachel:

Thanks so much for writing this Zvi. Hard to overstate how much I appreciate reading anecdotes and learnings from on the ground.

"You can also look at it as Year 1 of the curve was billed (although I don’t use the d word) as ‘doomers vs. accelerationists’ and now as Nathan Lambert says it was DC and SF types, like when the early season villains and heroes are now all working together as the stakes get raised and the new Big Bad shows up, then you do it again until everything is cancelled."

Yes yes yes. Endorsed. The internal docs from March 2025 would attest.

Fionna Garity:

can i ask a serious question? why is it improbable that ai would stall in its advancement in various fields (or reach a period where resource demands drastically outsize projected gains), and probable that it will not do so?

Abstraction:

Because it would imply three things at the same time:

1) That there is a (lower-than-human) fundamental limit to how well "intelligence" (whatever that is) can be approximated by end-to-end optimized systems of modern architecture given sufficient compute;

2) That current+ levels of AI will be bad at figuring out alternative approaches to intelligence specifically - that is, that the intelligence problem turns out to be uniquely difficult for AI to help with;

3) That turning modern amounts of compute toward other approaches we already have (say, Eurisko-style expert systems empowered by modern LLMs) will also be subject to 1) and 2).

And that hypothetical limit, which so far wasn't where we expected it to be, would need to sit somewhere very close to current capabilities.

General Laplace-style argument: if there were N possible distinct levels of that limit and k levels remain, then the probability of not hitting the limit at all is ~exp(-k/(N-k+2)). k seems small enough by now.

Fionna Garity:

sorry, you lost me on the last bit. why does the limit have levels? and if it does, how can we know how large N or k is? that equation seems pretty sensitive to the magnitude of those numbers.

(also, shouldn't we have a working definition of "intelligence" if we're concerned with how ai can "approximate" it?)

Abstraction:

> why does the limit have levels? and if it does, how can we know how large N or k is?

Bad wording, my mistake. Imagine the "capabilities progression":

(very-obvious-heuristics) |-----------------------| (significantly-higher-than-human)

On this line, the best system we've created "moves" from left to right, like so:

|--T----E0-E1------M-----G2--G3--G4--G5--...--|

(T - OXO (1952), E0 - Eurisko at launch, E1 - Eurisko at the limit, M - MYCIN, Gn - GPT-n)

Now, the line is a continuum, but we may postulate there's a "resolution limit" λ from our perspective - that points close enough are roughly equivalent. In that case, the line consists of a finite number of these λ-sized segments, with total length of Nλ.

(There is a bit of a "moving goalposts" problem on the left - that over time, things that initially seemed nontrivial, like basic search algorithms, become perceived as obvious.)

Any given approach has a limit to how far it can go. Imagine we invent a new approach ("task solving via banana simulation", TSBS) and try to scale it up. In that case, we can potentially stall at every distinct segment - maybe we can't make TSBS solve tic-tac-toe, maybe TSBS-based systems can't reliably tell us what symptoms of a cold are, etc.

So, on every segment, there are two outcomes - either TSBS can be made that good [in practice] or it can't. The single-layer perceptron was unable to implement XOR; Eurisko, in practice, didn't go beyond E1 despite its nominal ability to self-improve.

IF we consider different segments to be "equivalent" tasks, with no prior knowledge of how hard they are relative to each other or how much events "TSBS stalls on segment i _provided_ it reached (i-1)" correlate, the probability of stalling at the given next segment goes down as the number of previous successes S increases. Laplace rule suggests 1/(S+2), more general Solomonoff induction gives ~ 1/K(S) ~ 1/(S ln²S).

And my point was, there don't seem to be that many _noticeable_ steps up from where we are now to significantly-higher-than-human. That is, how long can you make a chain of hypothetical systems "TSBS-0 can do everything GPT-5 can do but can't do X1, TSBS-1 can do that and X1 but can't do X2, ..., TSBS-k can do that and Xk but still isn't significantly-higher-than-human", where the perceived difficulty of Xi increases?
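[Editor's note: the Laplace-rule chain above can be sanity-checked numerically. This is only a sketch; the values of S (past successes) and k (remaining segments) are illustrative, not estimates. The exact product conveniently telescopes to (S+1)/(S+k+1).]

```python
import math

def p_clear_all(S: int, k: int) -> float:
    """Probability of clearing all k remaining segments after S successes,
    with per-segment stall probability 1/(S+i+2) (Laplace's rule)."""
    p = 1.0
    for i in range(k):
        p *= 1.0 - 1.0 / (S + i + 2)
    return p  # telescopes to (S + 1) / (S + k + 1)

def p_clear_approx(S: int, k: int) -> float:
    """The exp(-k/(S+2)) approximation from the comment above,
    where S = N - k is the number of past successes."""
    return math.exp(-k / (S + 2))

# Illustrative numbers only: many past successes, few remaining segments.
S, k = 40, 5
print(p_clear_all(S, k))     # exact: 41/46 ≈ 0.891
print(p_clear_approx(S, k))  # approximation: ≈ 0.888
```

With many past successes and few remaining segments, both the exact product and the exponential approximation put "no stall at all" near 0.9, matching the qualitative claim.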

> (also, shouldn't we have a working definition of "intelligence" if we're concerned with how ai can "approximate" it?)

I surely would love to. But the only half-decent definition I have is extensional - I can point around at humans and say "that thing in their task-solving and/or goal-choice that distinguishes them from other species, can you notice it exists? Yeah, let's call that thing 'intelligence'". There have been some shots at a better definition over the last 80 years - the one I like is "the ability to effectively run a search in the space of algorithms" - but they all seem to fall short, one way or another. The fact that the scale exists is seen more clearly than the scale itself. (One funny association: we say that the electron has "charge" e and "mass" m as distinct and basic entities, but our experiments give us [e/m] with much higher precision than either e or m.)

Jeffrey Soreff:

Great report! Many Thanks!

One weirdness:

90% of code is written by AI by ~2028.

and also

First one-person $1 billion company by 2026.

as (median???) predictions seem like a strange combination. I'd expect "90% code written by AI" to be a prerequisite for "First one-person $1 billion company", so the order of those dates feels wrong to me.

"Helen Toner gave a talk on taking AI jaggedness seriously."

is fascinating. So many discussions about AI approximate it as like a human, but with IQ and speed scaled up by some factors. And yet this is obviously very wrong today, and is not a phenomenon that seems likely to be erased by increasing AI intelligence.

Loved the bullet point about

"Don't hold your breath for drop-in remote workers, do expect disruption in who has power."

It is going to be a very, very, weird world.

Are there any further details on her talk?

JV:

90% of all code is limited by tech diffusion. There are companies where 0% is the reality and will be for years.

Jeffrey Soreff:

Many Thanks! My experience was that the bulk of coding is either fixing bugs in an existing code base or bolting on a new feature to an existing code base. Could you elaborate on what you mean by tech diffusion in this context? Diffusion of tools (AI tools, in this context), or diffusion of some other part of technology?

Comment deleted (Oct 8)
Jeffrey Soreff:

Many Thanks! I'm a bit confused. These are all very near-term considerations, e.g. where AI is still worse at understanding a large code base than human programmers, say for the next year (and there may be situations even now where an AI is better at understanding a code base than humans - there is a lot of variation in positive and negative experiences).

[insert here] delenda est:

I'm confused too, my comment was not meant to be a reply to another comment 🙁

Jeffrey Soreff:

Oops! Oh well. Many Thanks!

JV (Oct 8, edited):

Tools and processes. Many individual developers and companies do not use AI for coding. Some companies explicitly ban it. So the average will lag what AI capabilities are and what startups will do.

Significant real world coding happens in companies where it's not a major focus and the downsides of getting it wrong are much bigger than the upside of doing it cheaper/quicker with AI. So it isn't even unreasonable.

Jeffrey Soreff:

Many Thanks! Ok, so it sounds like there will be a wide distribution of what fraction of coding is done by AI in different companies.

Yes, it makes sense that in an established company where the "downsides of getting it wrong" are, as you said, severe, AI coding will lag.

And then there will be "average" companies where both the costs and the risks are significant. (I'd expect a lot will depend on how quickly the ability of AI to digest and understand an existing large code base advances.)

And then there is the "one-person [presumably start-up] company" marketing something where "green-field" code is a necessary component of the product. Without rapid coding, that company has nothing to sell, so I'd expect 90% AI coding to happen there before it does in the other two cases.

MrSquiggles:

Stockholm Syndrome.

Nathan Lambert:

I think the curve is designed in a way that I'd expect every attendee to have a very different experience :). My post is mostly a reflection of what nerd-sniped me, more than an overall summary. We'll say hi at the next one!

MichaeL Roe:

Re: whether AI capabilities are going to max out soon …

To me, this feels like being in a casino and watching some guy who is trying the strategy of betting all his poker chips on red at each spin of the roulette wheel. Like, sure, each time 50% chance he loses, but also his stack of poker chips is doubling each spin.

In a similar spirit, I think (a) significant chance that AI capabilities max out soon; (b) also significant chance we get to see some really wild stuff before we reach the limit.

I think the ship has sailed on AI regulation. It’s too late to matter. We will either hit a technical limit to capabilities or see some really wild stuff before the government has a chance to regulate.

MichaeL Roe:

I can believe a very near future where (a) 90% of code written by AI; (b) programming is only 5x faster or so, because you haven’t automated everything

Matt Wigdahl:

For sure! Amdahl's Law is both very real and chronically underconsidered.
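[Editor's note: the 90%/5x combination above drops straight out of an Amdahl's Law calculation. The 90% and 5x figures come from the comment; the ~9x per-task speed is illustrative back-solving, not a claim from the thread.]

```python
def amdahl_speedup(p: float, s: float) -> float:
    """Overall speedup when a fraction p of the work is sped up
    by factor s and the remaining (1 - p) is unchanged."""
    return 1.0 / ((1.0 - p) + p / s)

# Even if AI writes 90% of the code "infinitely" fast, the end-to-end
# ceiling is 1 / (1 - 0.9) = 10x:
print(round(amdahl_speedup(0.9, float("inf")), 6))  # 10.0

# A 5x overall speedup is consistent with AI doing 90% of the work at
# ~9x speed while review, integration, etc. stay human-paced:
print(round(amdahl_speedup(0.9, 9.0), 6))  # 5.0
```

So "90% of code written by AI" and "only ~5x faster programming" are not in tension; the unautomated 10% dominates.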

Steven Adler:

Appreciate this writeup and hearing about everyone’s experience, even as someone who attended & was at many of the same talks

Alon Torres:

I'm ashamed to say I hadn't heard of the curve until I read this post. Thanks for sharing, Zvi!

Does anyone have a list of notable future conventions to keep on my radar so I don't miss out again?

David Watson:

I may get around to creating markets for the other discussed predictions later, but here's one for the self-driving car one: https://manifold.markets/DavidFWatson/by-what-year-will-a-majority-of-car

David Watson:

Looks like the 2026 Unicorn market already exists on manifold and uh... it does not agree with the folks at this conference: https://manifold.markets/louis/will-there-be-a-singleperson-unicor?r=RGF2aWRGV2F0c29u

David D:

I know you say you don’t write fiction, but this intro felt a little like an epic poem or movie trailer in a good way. This is a story worth being a part of. “The team gets back together for one last job”. Like, fuck yeah, man. Proud to be on the same team, your work is helping empower/educate lots of randos like me. Keep it up ❤️🙏🏼✊🏼

Tim:

"But also all the talk of ‘bottlenecks’ therefore 0.5% or 1% GDP growth boost per year tops has already been overtaken purely by capex spending..." I think you are confusing levels and rates here, and also assuming that the relationship between capex and GDP is simpler than it is. AI capex could be 10% of GDP next year and the 0.5% or 1% GDP growth rate boost is still plausible.

Short term, for investment to cause the GDP growth rate (not level) to increase, there needs to be continued growth in AI investment year over year (∆I > 0). Additionally, it needs to be new investment - it can't crowd out other investment. For the long term, when we're thinking about how capex spending on AI relates to GDP growth we care about how much higher the marginal product of capital is for AI than for whatever else that capital would have been spent on.

The real question for AI is how much it impacts TFP, which you probably shouldn't take the level of capex as a strong signal for.
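[Editor's note: Tim's levels-vs-rates distinction can be shown with a toy growth-accounting sketch. All numbers are hypothetical, and this is only the crude demand-side arithmetic, ignoring crowding out, multipliers, and the TFP channel.]

```python
def growth_contribution(i_now: float, i_prev: float, gdp_prev: float) -> float:
    """Rough demand-side contribution of investment to GDP growth:
    (I_t - I_{t-1}) / GDP_{t-1}."""
    return (i_now - i_prev) / gdp_prev

gdp = 100.0
capex = [0.0, 10.0, 10.0, 10.0]  # jumps to 10% of GDP, then stays flat

for prev, now in zip(capex, capex[1:]):
    print(growth_contribution(now, prev, gdp))
# 0.1 in the year the level jumps, then 0.0, 0.0: a flat capex level,
# even a large one, raises the *level* of GDP once but stops adding to
# the growth *rate* unless investment keeps growing (∆I > 0).
```

This is why "AI capex is huge" and "only a 0.5-1% GDP growth boost" are compatible claims.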

gregvp:

Additionally: since GDP is mainly household income, and that's mainly wages, the investment cannot continually cause negative year-on-year employment growth. Otherwise the investment acts to reduce GDP level, and therefore GDP growth as a second order effect (by chilling general investment, e.g. housing, vehicles).

I.M.J. McInnis:

Could you elaborate on ‘it’s worse than you know’? I've been hearing the opposite, that it's better than you know! (Though I've heard *that* among legislators.)

Kenny:

> the fluid dynamics metaphor, while gorgeous, makes the opposite mistake

Oh yeah – truly more of an Eldritch Combinatorial Horror
