> We are not yet spending more than a few hours at a time in the singularity, where news happens faster than it can be processed.
Sorry, but, isn't this just what it's like to try to stay up to speed on a hot science? Imagine trying to write these for all of biology or chemistry.
There are something like 120(!) drugs being researched for weight loss alone at the moment. The parallel-universe Zvi trying to stay on top of these developments might also be overwhelmed (thank you for trying!)
AI is a real science now, with more news than is possible to keep up on and also fantastic claims that everyone is suspicious of until other labs reproduce.
I mean, obviously no one is trying to keep up with *literal everything* and I obviously wasn't before either.
Well, I wasn't trying to claim that.
I've never sat in the Zvi cockpit, so I am speculating, but if (e.g.) more doctors and scientists posted more openly to Twitter about exciting things in GLP-1s and their ilk, and patients were more open about discussing their health journeys with these drugs, I'd probably feel this was a widening firehose of interesting and relevant content that was increasingly impossible to keep up on.
If such a news firehose existed for GLP-1 science, I suspect it wouldn't keep viewers coming back (given other news categories competing for attention). Maybe news-consumer-seconds is the right unit to quantify the amount of meaningful news in a category.
Dividing by the breadth of the category in question would help comparisons (biology is wide, GLP-1 narrow), but that's hard to quantify.
To Scott Sumner's argument:
Actually, no, we don't care that more intelligent people are nicer to strangers (even if they are). What we actually care about is: are more intelligent people more likely to be serious, active, involved *conservationists*? Because in the AI example, we are not strangers, since we are not peers. We are (charitably) chimpanzees who provide no real economic value to humans but who some very small number of people have decided are worth saving for non-economic reasons.
And in my experience, beyond some very trivial "oh sure, conserving species sounds good", no, most smart people don't seem more likely to be willing to sacrifice towards conservation-related ends.
Exactly. To take this to a necessary further conclusion, the argument should really rest on how likely intelligent people in our world are to be amenable to the Deep Ecology movement (which explicitly argues that ecosystems and non-human life have intrinsic value not in any way connected to ordinal human goals).
After all, even much of the hardcore conservation movement now is still, in some way, justified by some form of utilitarian philosophical foundation: that future generations would derive utility from interacting with nature, or that we can justify protecting far-flung remote areas because we can estimate/measure non-visitor utility (using willingness-to-pay (WTP) methods) even if the area in question doesn’t feature much human visitation whatsoever. See: ANWR.
Only the Deep Ecology movement goes beyond these philosophical structures to say that it doesn’t matter what WTP/WTA methods might say, a specific utilitarian argument for conservation of a given area or ecosystem is not relevant to whether or not it is worthy of preservation – because life has a sort of innate sanctity and a consequent value in and of itself.
Anyway, few intelligent people are convinced by these arguments. I expect this tells us something about the base rate of ASI’s respect for human life qua human life.
Ah, yes, it's time for tachycardia and an unusually high amount of hugging my loved ones again.
Podcast episode for this post:
https://open.substack.com/pub/dwatvpodcast/p/ai-101-the-shallow-end
Love your writing man - such great wit and synthesis
It's funny that Montezuma's Revenge still has not been solved by the AIs, but mostly because we stopped building connectors between modern AIs and the Atari environment, and that's a nontrivial amount of software engineering effort. I feel like there's something here about how the difficulty of AI progress often lies in the unexpected "glue" parts rather than in the "core intelligence".
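For what it's worth, a minimal sketch of that glue (assuming the gymnasium + ale-py stack; `llm_choose_action` is a hypothetical stand-in for a modern model, not anything that exists) looks something like this:

```python
# A minimal sketch of the missing "glue": wiring a modern model into the Atari
# environment. Assumes gymnasium and ale-py are installed; llm_choose_action is
# a hypothetical stand-in for whatever serializes frames and queries a model.
import gymnasium as gym
import ale_py

gym.register_envs(ale_py)  # registers the ALE/* environments on recent Gymnasium versions

def llm_choose_action(observation, action_space):
    # Hypothetical: encode the frame, prompt a model, parse its chosen action.
    # Building and maintaining this bridge is the nontrivial engineering effort.
    return action_space.sample()  # placeholder: random policy

env = gym.make("ALE/MontezumaRevenge-v5")
obs, info = env.reset()
total_reward, done = 0.0, False
while not done:
    action = llm_choose_action(obs, env.action_space)
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
env.close()
print(f"Episode reward: {total_reward}")
```

The environment loop itself is a few lines; everything interesting (and tedious) lives inside that one hypothetical function.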
FYI I have a rather skeptical take on MONA in my 2023 post “Thoughts on ‘Process-Based Supervision’” https://www.alignmentforum.org/posts/D4gEDdqWrgDPMtasc/thoughts-on-process-based-supervision-1 , see especially Section 5.3 on the “alignment tax” a.k.a. MONA systems not performing as well.
> ByteDance Duabao-1.5-Po, which matches GPT-5o benchmarks
GPT-5o is a typo I’m so not ready to read.
The cheapest reasoning model isn't Gemini Flash Thinking. It's a local reasoning model like an R1-distilled Qwen or Llama:
https://ollama.com/library/deepseek-r1:14b
Not exactly o3 (or Gemini FT either), but it runs on an old laptop with limited RAM and costs whatever you allocate for electricity and depreciation. A lot easier to customize into one's workflow, too.
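For example, a sketch assuming a local Ollama server on its default port, with the model pulled from the link above:

```python
# Minimal sketch: query a locally served R1-distilled model via Ollama's REST API.
# Assumes `ollama pull deepseek-r1:14b` has been run and the server is listening
# on its default port; the marginal cost is just electricity and depreciation.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "deepseek-r1:14b",
        "prompt": 'Compare the movies "Lands of Lore" and "Solona".',
        "stream": False,
    },
    timeout=600,
)
print(resp.json()["response"])
```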
So what is the actual explanation for why Dario, Altman, etc. so openly and blatantly contradict what they said was needed for AI safety just a few years ago? Are they just sociopaths happy to ride any wave if it gives them more power? Have they been corrupted by getting a little power and now want all the power? Have their expectations about the safety of future AI improved by seeing LLMs which deeply understand human language and ideas, and thus seem unlikely to grow into naive paperclippers?
This is an excellent question
If I may be so rude -- fundamentally, LessWrong and friends were fools for thinking they could achieve their goals by making rational arguments to the elite. The elite have always, always cherrypicked their experts on the basis of who is saying what they most want to hear. Now that there are people saying "it will be fine", Dario and Altman are listening to them and not to Eliezer. This is a political lesson, and unfortunately LessWrong was never politically savvy.
Smarter people are, on average, more ethical because among humans, ethics is a costly signal. I don't steal because I don't need to steal, and in fact stealing most stealable things would be a small enough boost to my wealth that the math just doesn't make sense. If I were poor, the math would work out differently, and possibly I would steal more.
More to the point, you care about your reputation. If you had an invisibility cloak and knew 100% that you could never get caught, I suspect that would change the calculus as well.
OK, that discussion on Beauty is overdue.
Let’s do Ethics first, because it’s the easiest of the lot:
1. Humans, in general, prefer to live in Successful and Flourishing societies. Some non-exhaustive desired qualities of those societies: high-trust, place for self-expression, wealth, security, sense of community. There is also an obvious selection effect here.
2. There are Behaviors which, when generalized, lead societies in the Successful and Flourishing direction. We call those behaviors Virtues.
3. Today, we have Game Theory, Economics and Decision Theory to study the effect of behaviors more formally (a minimal sketch follows this list).
4. Historically, it was all the result of trial and error and societal evolution and selection (and presumably somewhat genetic), so we had endless debates like "are Ethics and Morality objective or subjective?" But now we more-or-less know what grounds Ethics.
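By way of illustration for point 3 (my sketch, not part of the original argument): an iterated prisoner's dilemma makes the payoff of a reciprocating, "virtuous" strategy versus pure defection something you can compute rather than debate.

```python
# Iterated prisoner's dilemma: a reciprocating strategy vs. pure defection.
# Illustrative sketch only; strategy names and payoffs are the textbook ones.
def tit_for_tat(my_history, their_history):
    return their_history[-1] if their_history else "C"

def always_defect(my_history, their_history):
    return "D"

PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strategy_a, strategy_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (300, 300): mutual cooperation flourishes
print(play(always_defect, always_defect))  # (100, 100): mutual defection stagnates
print(play(tit_for_tat, always_defect))    # (99, 104): defection wins one-on-one, loses at scale
```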
So, my claim: this generalizes. "Good Taste" is something that has to be grounded in useful and objective qualities:
1. Programming, the one I know very well, in a few words: it’s what the 9-months-later-future-you (or someone replacing you) will thank you for.
2. Mathematics: Good Definitions naturally lead to Important Theorems in one direction and potential generalizations in the other (think of the history of the axioms of Topology). Important Theorems are ones that help in a wide variety of "real-world" problems (in physics, engineering, economics, cryptography), potentially arriving in advance of those applications (Riemannian Geometry was created before General Relativity). Cross-domain insight is another pretty strong indicator of quality.
3. Literature: Narrative, "grokable" and memorable insights into psychology, history and philosophy.
4. Architecture: Buildings that you like living in. Convenience and Comfort. Sturdiness and Durability. Something like "the 5-years-later future you will thank you for" too.
Note that I said "grounded in", not "directly measures". Part of what makes Taste hard is that the things you directly observe are only weakly, partially correlated with the important useful objective things, and even the important objective things are poorly defined and understood. But still, they exist.
On the other hand, if there is no "groundedness" to your domain, I’m very suspicious that it’s not "Good Taste" but more "Completely Made-Up Arbitrary Fashion" (which, to be clear, can be okay! you are allowed to enjoy Arbitrary Made-Up Things! just don’t tell those who don’t share it that they have "Bad Taste"). Candidates for that: Music, Painting, Poetry, Œnology.
Maybe I’m just ignorant of what is grounding those. Maybe nobody knows because it’s even more subtle than the previous examples. Or maybe they are really just made-up arbitrary fashion.
Or, in shorter, more beautiful words (not mine): "Beauty is nothing but the promise of happiness".
One interesting nugget that wasn't mentioned (I think): Demis Hassabis said that he sees it as equally likely that, in order to get to AGI, either:
1. Just scaling up current tech will be enough.
2. Scaling won't be enough and we'll need 1-2 transformer-level breakthroughs.
Source: https://www.youtube.com/watch?v=yr0GiSgUvPU
> James Darpinian: IMO these usually increase human readability as well, contra "best practices."
This is insane. Duplicating code does not make it easier for a human or an AI to maintain. Is it better for an AI to waste input tokens on duplicated code, or is it easier to have an abstraction it can reuse? A smart AI with long-term goals will _compress_ information, not duplicate it.
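To make the disagreement concrete, here's a toy illustration (a hypothetical example of mine, not from the post) of the duplication-versus-abstraction tradeoff being argued about:

```python
# Toy illustration of the tradeoff. "Duplicated" version: every call site
# repeats the same validation logic, so every fix must be applied N times.
def register_user(email):
    if "@" not in email or email.strip() != email:
        raise ValueError("bad email")
    ...

def invite_user(email):
    if "@" not in email or email.strip() != email:
        raise ValueError("bad email")
    ...

# "Abstracted" version: one definition, reused everywhere. The claim above is
# that this is cheaper to maintain (and cheaper in tokens) for human and AI alike.
def validate_email(email):
    if "@" not in email or email.strip() != email:
        raise ValueError("bad email")

def register_user_v2(email):
    validate_email(email)
    ...

def invite_user_v2(email):
    validate_email(email)
    ...
```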
> Faze Adorno: Who the f*** has random day-to-day AI queries?
>
> "I'm gonna use this technology that just makes up information at anywhere between a 5 and 25 percent clip for my everyday information! I'm so smart!"
What? AI does not hallucinate at a 5% clip. That might have been the case before, but as usual, remember that AI is getting better so fast that it's hard to keep up. Try DeepSeek R1: ask it to compare two made-up movies and watch it tell you, in its own words, that it doesn't know something but is making assumptions.
> Q: Compare the movies "Lands of Lore" and "Solona".
> A: Okay, so I need to compare the movies "Lands of Lore" and "Solona." Hmm, first off, I've never heard of either of these movies. Maybe they're obscure or maybe I'm mixing up the titles? Let me think. Wait, "Lands of Lore" sounds familiar. Isn't that a video game series from the 90s? Maybe there's a movie adaptation I don't know about. But "Solona" doesn't ring a bell at all.
Congrats on Odd Lots! I was surprised and tickled when they introduced you. They should indeed lean on you as a recurring AI guest.