[insert here] delenda est's avatar

I can save him time, the Dalai Lama's answer will surely be "how can I destroy you?"

Thanks for the write-up!

Askwho Casts AI's avatar

Looks like 7 is repeated.

Sergey Kornilov's avatar

If great art gets mistaken for basic good art, that's not costless.

The problem with the A.I. crowd is that it has become completely disconnected from the actual behavioral and cognitive research done in the 20th century.

So of course Altman misses this (as, in part, does Cowen - although for him at least there might be something collective and temporal about aesthetic greatness that resists individual verification):

1. in the aesthetic domain even if individual readers can't reliably distinguish a 9 from a 10 in the moment, a 10 might be something that becomes valuable and recognized over time through cultural resonance, reinterpretation, influence on other works, etc. The 10-ness emerges from a collective, historical process, not from any single evaluator.

2. there might be a genuine asymmetry where: no single person can reliably produce 10s, no single person can reliably identify 10s in isolation; but collectively, over time, humanity can identify 10s through a complex process of reception, criticism, influence, etc.

3. the claim that "only 10-writers can identify 10s" is almost certainly false in its strong form: it ignores what ability/skill/achievement really are, reduces them to a single number, and equates person with product.

This is - effectively - blind reasoning about something that society has plenty of knowledge about. They choose to do it because engaging with the actual theory, the behavioral science, and the real research - and contextualizing their reasoning in it - would require nuance and complexity, and would prevent them from spitting out all of this garbled nonsense that pretends to be exploring humanity's future or some such.

David's avatar

I'm betting that over the next few years, there will be top-10 lists written for AI pieces in x or y domain, and those examples high on the list will have been things that have gone viral, certainly at first. Maybe memes first. Desktop or phone wallpaper. Popular TikTok videos or Youtube shorts. This is probably the low-hanging fruit. Who is to say this isn't art? As for today, it might already be happening with music.

avalancheGenesis's avatar

It definitely doesn't take a 10 to know a 10, but sometimes it does take further context...a Great Work that one has never encountered before, seen referenced otherwise, experienced derivatives of, doesn't know the history? That's a hard sell, especially for more esoteric experiences. But cultural touchstones get better and better with more exposure and erudition, to a point where it can be a real joy to find "the original" after seeing a dozen echoes elsewhere. Like I'm no poet and never cared for the genre, but was recently enjoying an obscure metal album...inspired by an old poem...and then I went and read Ozymandias for the first time, and it's like, oh, suddenly a hundred different references make sense, across sundry domains. It would have been sad to go the rest of my life not knowing that, in the same way it's impoverishing to try and understand the Western canon with no grasp of the Bible. (Or laughing at an AI anime image without ever actually watching Ghibli.)

In theory AI ought to be able to accelerate that process of "cultural 10 diffusion"...in practice it currently seems like another checkmark in the Slop Is A Demand Side Problem column. I do respect the ambition to create a 10 ex machina, and AI art now is the worst it ever will be, but...c'mon. It's bizarre blank-slateism to assume artistic talent is evenly distributed and we'll be seeing the Michelangelo of MidJourney any day now.

Sergey Kornilov's avatar

It does take a 10 to know a 10 - kind of. Knowing that it's a 10 and not a 9 or an 8 or an 11 takes a form of expertise. In theory, expert judges can reliably identify work superior to their own production capability - but recognizing exceptional work still requires substantial domain expertise. The threshold is lower than for producing such work, but it's still up there.

Agreed re: the last argument. You could generate a million AI images and find some "10s" in there, but the problem is you can't reliably identify them without human curation.

Neurology For You's avatar

I think there’s a lot of everyday benefit to be gotten from making AIs that are better at helping people use them. The chat interface is deceptive, since people take it at face value without realizing a prompt is not like asking your friend for help. Maybe it’s bad in the long term to teach people to ask the robot how the robot can help them, but it would be a huge win for the average person who is not tech-savvy.

Mo Diddly's avatar

“…a sufficiently capable AI can do 10/10 on poems, heck it can do 11/10 on poems. But yeah, I don’t think you or I will care other than as a technical achievement. “

As an artist I promise I will care. It would devastate me beyond measure.

YM Nathanson's avatar

It’s incoherent that an AI (or anyone) could make a 10/10 poem and people won’t care. Poetry, and the arts in general, are subjectively (and intersubjectively) evaluated. For a poem to be 10/10 people must care. If people don’t care, it’s not a 10. That’s how subjective judgments work.

Fundamentally, AI can’t authentically write a love story, or describe the feeling of a sunset, or mourn the death of a child, because it doesn’t have human experience.

An AI could write a great piece about its own experience as an AI. That, it could do at a superhuman level.

Mark Russell's avatar

I think people who are writing about poetry as a reaction to this piece (and I am one of you and love you for it) may be mis-remembering the nature of poetry, current and otherwise. Great poems happen all the time, and disappear into obscurity for lack of notice and recognition, even when they have prominent publication (think New Yorker).

Writing a great poem and having people not recognize and not care, or recognize and still not care, are the risks one takes when they write poetry.

It is not for the faint of heart.

And thus we have a disproportionality problem here - playing by two sets of rules - where the human has to care or not care about what happens to the poem, but the AI does not, much in the same way that a computer was allowed to play and win at Jeopardy despite not having to figure out how to time the buzzer. Nope, it automatically got to answer first.

YM Nathanson's avatar

What does great (or 10/10) really mean?

Is it enough to give one person — the author’s mother — a great feeling? Or must a work have great impact to count as great?

While poetry qua poetry has receded in cultural prominence, one can imagine its return, or remember that bygone era. And in such a world, for a poem to be truly great would require it to be big (audience-wise), to be critically acclaimed, and to influence the next wave of poets. In other words, a great poem is culturally significant.

Other arts downstream of poetry, such as songwriting and playwriting, are popular enough for us to see, in the coming years, if masses and high class critics quote and contemplate AI writings.

I expect to live in a hybrid world for the foreseeable future. While language may be written by AIs, concepts and feelings must come from humans, because the Creative Act, and the life story of the Artist, remain key to unlocking the energy of audiences and turning on the attention of critics.

Randall Randall's avatar

Sam is asked if he believes any conspiracy theory. Your points about it all seem to indicate that you believe that conspiracy theories are definitionally false:

"[...] we know this. At the time it was a conspiracy theory but I think that means this is no longer a conspiracy theory?"

...and the lab leak thing, where you indicate that it's not really a conspiracy theory because it seems to have turned out to be likely true. If Tyler and Sam had that definition, it wouldn't make much sense to ask whether there are any conspiracy theories either of them believes - that would amount to asking "Do you believe any false things?"

Jens B Fiederer's avatar

If you are trying to appreciate the beauty of Neruda's language, it is probably best to read the poem in the language it was written in rather than in a translation (otherwise you are appreciating Merwin's language). Or maybe read Frost instead if your Spanish is weak (I can't claim to appreciate poems in Spanish either - maybe a few song lyrics!).

I am not at all sure how much of poetry appreciation is real esthetics and how much of it is signaling.

Mark Russell's avatar

Thumbs down on the "have to read it in its original language" notion - no, lots of great works translate well; in fact, that might be a judgment criterion.

But definitely, read Frost.

Jens B Fiederer's avatar

Lots of great works might translate well, but this is definitely less true for poetry.

Not that I haven't tried! In https://cardioblogy.blogspot.com/2009/10/that-is-my-pride.html I managed to preserve not only the meaning, but also the meter and the rhyme scheme of the German poem. Have not been able to do that with any other work (not that I do a lot of translation).

Dave Friedman's avatar

Slack might suck but it's miles better than Discord.

Henry Oliver's avatar

Neruda isn't good in translation

Scott Wolchok's avatar

> The entire conversation takes place with an understanding that no one is to mention existential risk or the fact that the world will likely transform

Was this stated somewhere? It bothered me while listening.

Jeffrey Soreff's avatar

Re: "In the default AGI scenarios, we don’t only live to 98, we likely hit escape velocity and live indefinitely, and then it comes down to what that costs. "

I would love for that to be correct, but I think it is overoptimistic (even given human-controlled AGI or even ASI).

The basic problem is that this isn't just an engineering problem; it is also a scientific problem. We are constantly finding that biology and biochemistry have more pieces than we expected. If a friendly ASI with infinite intelligence, but only the data and knowledge we have now, attempted to develop new pharmaceuticals, it would almost certainly bump into unexpected side effects. Yes, AlphaFold was a huge breakthrough, but even knowing the shapes of every protein, and knowing how to design a new drug to bind to each one, there is a large chance that the metabolic pathway one is trying to block or speed up also affects some other, as yet unknown, pathway. About half of phase III trials fail.

Specifically in regard to "escape velocity": for either chronic conditions caused by aging, or aging itself, there is the additional problem that one is going to have to dose the patient for the rest of their life. This makes the side effect problem worse. If, e.g., an antibiotic for a three-week course of treatment to eliminate an infection has a deleterious side effect, (a) one can notice the side effect in a clinical trial of reasonable length, and (b) there is just less opportunity for slow deleterious side effects to do damage. E.g., if the antibiotic doubled the rate at which cataracts progressed, but only did so for the three-week period of treatment, that isn't much damage. If one is instead intending to dose a patient for three decades, and the medication interferes with some important process on a time scale of two decades, detecting the problem during clinical trials will fail - and there is a broader class of such potential problems that might happen.

avalancheGenesis's avatar

At the risk of Evidence From Fiction, I worry a lot about "wetware durability" too...biological minds just aren't designed to hold an ever-expanding amount of data with any sort of fidelity, and we see this today even absent active deterioration like Alzheimer's. Alternatively, it's more a cultural-fit problem, where immortals essentially die of boredom because they weren't conditioned for such a long life, and certain pillars like the assumption of death are found to be incredibly load-bearing for the stability of society. Gerontocracy cranked to 11 isn't a particularly inspired extrapolation, but it's an obvious failure mode...if people today don't move into adulthood by 30 because people are hanging on until 98, imagine that an OOM worse, where you exit the tutorial at 300 years old because the 9800ers won't kick the bucket yet. Plus the obvious implications for fertility. Death Is The Enemy - and after that mere technical problem is solved, you've got a whole new raft of very thorny problems, which I am less sure AI will be of use for. (It'd sure be convenient if we got neural uploads to work!)

Jeffrey Soreff's avatar

Many Thanks!

"biological minds just aren't designed to hold an ever-expanding amount of data with any sort of fidelity"

Hmm... I don't think that this is all that much of a problem, particularly in terms of fidelity of recall. "The worst ink and paper is better than the best memory" - and this has been true for as long as we've been human. Yes, memories fade, and memories of banal days and events barely get remembered at all, and we all live with it.

"imagine that an OOM worse, where you exit the tutorial at 300 years old because the 9800ers won't kick the bucket yet."

Yes, that will cause some strains. I'd much rather have those problems than the problem that virtually everyone ages, sickens, and dies before 100. Unfortunately, solving aging is a _hard_ technical problem.

Sam Penrose's avatar

Thank you for this valuable public service. Two priors need updating. First:

> Nuclear power regulations are insanely restrictive and prohibitive, and the insurance the government writes does not substantially make up for this, nor is it that expensive or risky. The NRC and other regulations are the reason we can’t have this nice thing, in ways that don’t relate much if at all to the continued existence of these Nervous Nellies. Providing safe harbor in exchange of that really is the actual least you can do.

Per https://www.bloomberg.com/news/features/2025-10-30/silicon-valley-s-risky-plan-to-revive-nuclear-power-in-america, there is more to the problem than the NRC (and note Altman’s role in setting a move-fast-and-break-things culture).

Altman settled on nuclear as his hope for an abundant, cheap energy source in **2014** — the second prior that needs updating:

1. AP1000s (and similar designs), at $BB per 1.1 GW each, are the only reactors we actually know how to build

2. Only China knows how to build them with any regularity, and is doing so at a rate of ~10GW/year, heading to 30GW/year https://www.neimagazine.com/news/china-approves-10-new-reactors/

3. Meanwhile, China will produce solar panels this year which are 800GW nameplate / ~150GW realized, or **a multiple of the power produced by new nuclear** at **a small fraction of the cost**. Volume and cost will improve by 30% every year as long as China wants them to. Yes, storage and demand-shifting are needed to extract all the power — but the scale and cost advantages are enormous **and growing**.

4. Conversely, there are strong reasons to believe that we will never drive the cost of nuclear significantly lower: https://austinvernon.substack.com/p/a-nuclear-fission-regulatory-blank

5. Enhanced geothermal is arriving at ~$100/MWh in 10MW units — a much bigger market than 1.1GW AP1000s — with learning curve opportunities likely to halve that cost over the next decade.

6. I believe the linchpin of the GAI drive for GW-scale datacenters is the inability to distribute training compute efficiently. If that is correct, do we think solving that problem is harder than quickly deploying a dozen AP1000-powered new datacenters?
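For what it's worth, the scale claim in points 2-3 can be sanity-checked with back-of-envelope arithmetic. A minimal sketch, assuming a typical nuclear capacity factor (the 0.90 figure is my assumption, not from the comment):

```python
# Back-of-envelope check of points 2-3; all GW figures come from the comment,
# the nuclear capacity factor is an assumed typical value.
nuclear_new_gw = 10        # China's new nuclear build per year (point 2)
nuclear_cf = 0.90          # typical nuclear capacity factor (assumption)
solar_nameplate_gw = 800   # this year's panel production, nameplate (point 3)
solar_realized_gw = 150    # realized output of those panels (point 3)

nuclear_realized_gw = nuclear_new_gw * nuclear_cf
implied_solar_cf = solar_realized_gw / solar_nameplate_gw
ratio = solar_realized_gw / nuclear_realized_gw

print(f"Implied solar capacity factor: {implied_solar_cf:.1%}")  # ~19%
print(f"Realized solar vs. new nuclear: {ratio:.1f}x")           # ~16.7x
```

Even with a generous capacity factor for the reactors, a year of Chinese panel output delivers on the order of 15x the realized power of a year of Chinese reactor construction, which is the "multiple" the comment refers to.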

Being pro-nuclear is an **identity** for the Breakthrough Institute crowd, Bay Area techies such as Altman and Stewart Brand, and many friends of this Substack. But y’all also have a “less-wrong” identity as people who listen to alternative perspectives and update your priors based on evidence. I hope you will do so for nuclear power.

Mark Russell's avatar

Well, for one thing, you have to put all those panels somewhere for them to work, so land usage starts to become a big deal. That is a lot of land space. Are you sure you have accounted for land acquisition in your pricing?

In the US, that would require 4,000,000 acres. Those acres, in solar projects that I know, are renting for up to $2,000/acre/year. So in land acquisition you are up to $8B/year, every year, to get your 150GW realized. You sure those reactors are overpriced?
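The rent arithmetic works out as stated; a minimal sketch (the acreage and rent figures are the comment's, the per-MWh conversion is mine):

```python
# Checking the land-rent arithmetic above; acreage and rent come from the comment.
acres = 4_000_000        # acres claimed for ~800 GW nameplate in the US
rent_per_acre = 2_000    # $/acre/year, the high-end rent cited
annual_rent = acres * rent_per_acre
print(f"${annual_rent / 1e9:.0f}B/year")          # prints $8B/year

# Spread over the ~150 GW realized output, land rent per MWh (my conversion):
mwh_per_year = 150 * 1_000 * 8_766                # GW -> MW, hours per average year
print(f"${annual_rent / mwh_per_year:.2f}/MWh")   # roughly $6/MWh
```

So land rent adds on the order of $6/MWh - a real cost, though how that compares to new-reactor economics is exactly the point under dispute.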

Garrett MacDonald's avatar

Maybe I’m a philistine, but I think well-sung music lyrics are better than poetry in every way. I tend to like electronic-type music, and it seems like AI is gonna dominate those genres pretty soon. This song, for example, is better than most non-AI music I’ve heard:

https://suno.com/song/171aedd9-04c7-486e-ab8f-6d88aec139fc

Shon Pan's avatar

I'd like to thank you once again for your tireless reporting on AI.