53 Comments

No insights or opinions to contribute. Just thanks for keeping across all of this, so that I at least have a broad idea of what's going on.

This seems necessary, since I'm genuinely embarrassed by the opinions I held on the matter as recently as February.

You typo Snoop Dogg as Snoop Dog a couple of times. Incidentally, I recommend listening to his album Doggystyle if you haven't; the guy's famous for a reason.

> What’s striking about the above is that the alarms are simply ‘oh, we solved robotics.’

Just to clarify, I wasn't really describing smoke alarms, but rather the sort of "alarms" used by fire companies when you already have a major fire underway. According to Wikipedia, a 3-alarm fire would involve calling out 12 engine companies and 7 ladder companies. It's not an early warning tool; it's a crisis management tool. I am not trying to tell people when they should start looking for EXIT signs and walking calmly in that direction. GPT-4 is already at that point. Please move towards the EXIT signs now. I'm trying to tell people, "If you see X, you need to be prepared to mount a major, immediate response with significant resources, or else the city will burn."

I think that it's worth having this kind of alarm, too. Especially for people who aren't sold on ultra-short timelines where an AI quickly bootstraps itself to effective godhood and disassembles the planet with nanobots. If an AI is forced to rely on some combination of humans, robotics and synthetic biology to manufacture GPUs, then we need to be able to distinguish between different sizes of "fires." You need different responses for "there is a grease fire in my wok" and "we have multiple high-rises on fire" and "the western US is on fire again."

Anyway, more on robotics...

The problem with robotics is pretty much like the problem of self-driving. The first 90% is 90% of the work. The next 9% is _another_ 90% of the work. And the next 0.9%... You get the idea. And to complete most real-world tasks, you'll need a lot of 9s. Waymo has mostly solved self-driving, but "mostly" isn't nearly good enough.
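Some illustrative arithmetic (my numbers, purely assumed) shows why the nines compound so brutally: if a task takes many roughly independent steps, the overall success rate is the per-step reliability raised to the number of steps.

```python
# Illustrative sketch with assumed numbers: overall success rate of a
# multi-step task, assuming independent steps that must all succeed.
for per_step in (0.90, 0.99, 0.999):
    for steps in (10, 100, 1000):
        success = per_step ** steps
        print(f"per-step {per_step}, {steps} steps: {success:.1%} overall")
```

Even at "three nines" per step, a thousand-step job succeeds only about 37% of the time, which is the sense in which "mostly" isn't nearly good enough.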

We've been able to build "pretty good" robots for a couple of decades now. I'm not talking about your classic factory robots, which are incredibly inflexible. I'm not even talking about pick-and-place robots, which involve visual processing and manual dexterity: https://www.youtube.com/watch?v=RKJEwHfXs4Q. We've actually had jogging robots and somersaulting robots and soccer robots for a while now. And they're not terrible? Many of them are similar to self-driving cars: They can accomplish a specific task 90-99.9% of the time under reasonably real-world conditions. Which isn't close to good enough to make them viable.

This is a major reason why a working "handybot" is one of my alarms that we're nearing the endgame. If you've ever worked on old houses, you'll know that nothing ever goes quite according to plan. You need vision and touch and planning and problem recognition and fine manipulation and brute force. A hostile or indifferent AI which can solve this is almost certainly capable of surviving without humanity, given enough time to build out a full-scale industrial economy.

I'm a regular EconTalk listener, so that was the first of EY's podcast interviews I had seen (I've read a fair amount of his stuff in various places, but I haven't watched any live stuff). While I don't 100% agree with EY, I think I mostly understand his position. If his interview with Russ was emblematic: someone _please_ tell him to stop doing these. It is literally painful to listen to. He is _terrible_ at making the argument he's trying to make. I'm having trouble finishing the episode. The first 30 minutes, at least, consist of Russ trying to get EY to explain his point, and EY seems completely incapable of doing so in a way that will make sense to people who don't already know and understand the arguments. EY should have been able to bypass the first 30 minutes of conversation with a few sentences.

No wonder this argument is not making headway with "normies" if this is the quality of interview that happens.

For what it's worth, the writings of you and Scott Alexander, and the recent podcast interview you did, were all _dramatically_ better.

Like I said, while I take the risk of AI-killeveryonism pretty seriously, I think I'm much more hopeful than you and others in the community, but even I wanted to go back in time and interrupt EY to tell him better ways to communicate his points.

I can imagine someone who was completely unfamiliar with the debate coming away thinking that EY's side of the argument is completely nonsensical, resulting in _decreased_ concern.

Maybe the second half of the interview gets better, but like I said, I'm having trouble getting to it.

I was thinking the same thing. Russ is a great interviewer who is genuinely interested in understanding a different point of view. Yudkowsky could not communicate in anything other than jargon and analogies that seemed completely misaligned. It was pretty embarrassing.

Haven't listened yet, but I have listened to Russ interview many others, so I will say that IME this is both Russ's greatest strength and his greatest weakness - he has a way of thinking, and he will apply that way of thinking come hell or high water. It can be very good, because it is an undersupplied and valuable way of thinking!

> Sure, but why should I care? As Bart Simpson pointed out, I already know how not to hit a guy. I also know how not to convince him of anything.

I've recently come to believe that mere words don't change people's minds. I watch a rationalist-adjacent-adjacent community where lots of politics gets discussed daily. Even with the rationalist-ish openness to ideas and a decent amount of kindness, hardly anyone has actually meaningfully changed their minds on any major issue.

My crackpot theory is that what changes political beliefs is an event with (perceived) extremely low probability that shocks the brain and forces it to update, typically a heavily emotional one. Words *can* do this, but rarely, and they have to be very novel, not just the standard arguments everyone's heard before. I don't know if these words can be crafted on demand, though.

> The issue is that these properties do not tell you

You may have accidentally the rest of the sentence here.

Yep, left out several sentences actually.

If words don't change people's minds... then what are they for, in the context of a debate?

Also, if people are talking similarly every day, you'd expect little change in positions each day. The hope would be that in a superior discussion, one that is unusually good, you'd see SOME additional change. Otherwise, why not be polite and discuss... anything else?

Under my crackpot model, words are a reflex: one hears something they disagree with, and they move to defend their position using words. Formally entering a debate typically produces better-quality words, but I've rarely seen it end in anything but "we'll agree to disagree".

Honestly, my model of "words so powerful they can change opinions" doesn't really map to discussions as much as single statements (or other monologues) that are so obviously convincing that there's nothing else to say. For big political issues, these are exceptionally rare, but arise commonly in other situations - "hey, your shoe's untied" "oh you're right, thanks" - sort of like how you telling me your name is Zvi Mowshowitz raises my prior on your name being Zvi Mowshowitz from <1% to >99%.
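To put toy numbers on that last example (every figure below is an assumption for illustration, not anything from the thread): a single trusted statement can carry an enormous likelihood ratio, which is exactly what moves a prior from under 1% to over 99% in one step.

```python
# Toy Bayes update for the name example above. All numbers are
# illustrative assumptions: people very rarely claim a specific
# uncommon name falsely, so the likelihood ratio is huge.
prior = 0.005                  # P(his name is Zvi Mowshowitz), i.e. <1%
p_claim_if_true = 0.99         # P(he says so | it's true)
p_claim_if_false = 1e-5        # P(he says so | it's false)

posterior = (p_claim_if_true * prior) / (
    p_claim_if_true * prior + p_claim_if_false * (1 - prior)
)
print(f"posterior = {posterior:.3f}")  # ~0.998, i.e. >99%
```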

The lack of politeness under my model comes from a felt need to defend one's position/values/tribe. Considerations of politeness come after this feeling, and usually both participants aren't much for getting along.

I don't know, maybe I've just seen too many Internet debates and not enough real-world debates.

> I've recently come to believe that mere words don't change people's minds. I watch a rationalist-adjacent-adjacent community where lots of politics gets discussed daily. Even with the rationalist-ish openness to ideas and a decent amount of kindness, hardly anyone has actually meaningfully changed their minds on any major issue.

People don't change their beliefs atomically or immediately after a discussion which maybe should've changed them.

You're going to love this: You can't reach the brain through the ears: https://www.experimental-history.com/p/you-cant-reach-the-brain-through

Indeed! Thank you for sharing this with me. All excellent points, and this article would be a good part of, say, an introduction to Rationalism sequence.

Greatly appreciate the roundup and especially the contrary takes on the Google memo. Strong agreement that given the ease of switching between models, something needs to actually surpass GPT-4 to be taken seriously (and I'm certain GPT-4.5 is in development!)

One editor's note: you accidentally a portion of the superintelligence section.

Thank you, sir.

> You use Stable Diffusion if you want to run experiments with various forms of customization, if you want to keep your creations and interactions off of Discord, or if you want to blatantly violate MidJourney’s content guidelines.

Earlier in the post you asked why MidJourney is only available on Discord. I think this is why--MidJourney can point to DISCORD's content guidelines as an additional firewall between them and highly realistic images of [disturbing thing].

Either way, thanks for another great roundup!

I thought your piece in the Telegraph was very good. I just want to point out that I'm a normie, and I read your substack as well as more popular pieces. You may bring some more normies over here by writing for more popular publications. Whether you think that's a good thing remains to be seen.

I like the idea of AI freeing people up to create art, but I also don't think you can have it both ways. What I think is important here is that there ought to be cultural norms in favor of clearly indicating what is human-produced vs AI-produced (and then perhaps regulation in the case that coordination fails to produce this). Further, I think there's a lack of imagination here: I expect AI to help people who aren't artistically inclined in conventional ways to produce works of art that they never would have otherwise. I'm expecting a revolution in single-man independent games as people who are only good at one of writing, art, programming, or sound design have tools to handle all the others (and even if the revolution doesn't happen, I have some ideas that I wouldn't mind personally throwing into that machine if nobody else wants to).

I think there was an old Scottpost that was something like "it doesn't matter how much art AI produces, because I'll still be interested in the art my friends produce regardless," and I think that stands both in the positive absolute sense and in a less obvious current sense, in which all the content that exists that isn't made by my buds might as well be AI right now for how little meaningful human contact I can get with the people making it. I try to be charitably inclined towards artists who are worried about losing their livelihood, but I also get the impression that there's a more urgent sense in which artists are most worried about losing their monopoly on access to a parasocial audience. I mean, a lot of content creators say they hate this, so maybe this is me failing at trying to be charitable towards them, and there's lots to be said about content glut not guaranteeing quality, but I can't shake the sense that there's an edge of ego to the panic.

The hate comes from a deep emotional sense of art as a kind of divinity, something we uniquely loved and had a tight emotional attachment to; to see it being replaced by a soulless machine is crushing, even suicide-inducing.

It's not really about money - we don't generally make money from our art.

Is there ego to it? Perhaps. It is because we identify with it; it is basically our soul. And then to have our work copied, and then replaced by a mechanism, is both theft and pain.

I'm way more of a doomer than most (considering both risk of death and risk of obsolescence), but I never quite understood this line of thought. The fact that what you created can also be reached through another method does not render your creation without value (other than monetary value, due to mass production, I guess). Would you say your art becomes meaningless because someone started copying it? They're not necessarily trying to express something, or able to value it the same way you do. Same with AI.

I can understand the ego part though. I still refuse to use GPT-4 for things it probably can do, even if I'm paying for it, just out of pride. Honestly, I'd lose respect for anyone who didn't have a little bit of this.

I would say if someone copied what I did without asking for my permission, it could either be flattering or offensive, but it would have emotions involved. A machine doesn't have any emotion one way or another, and it is just a profoundly alien and disturbing thought.

I don't think art should be automated. It's one thing to have tools for it, but the key difference has always been that digital tools, etc., have edited or changed or helped; now it's the opposite, where we might ask the machine to generate and the human edits the final product.

For me, the fear of the suffering that comes with being displaced from being able to do anything valuable makes extinction seem almost a mercy.

At least for now, doesn’t what and how we “ask it to generate” constitute an artistic act, with the bit in between that and the human editing still plausibly describable as just another tool? (Although it might be a bad tool that makes bad art; totally open to that idea!)

I would agree with that. I think diffusion models make commercial art trivial (though they might still need an editor or something), but they don't make Art trivial.

I would define Art as being something indescribable, except through itself. In that sense, diffusion models will only ever be a tool for that purpose, because what they produce isn't inherently expressive. They can generate impressive and beautiful pictures, but that's not what Art is.

I tried generating some art on SD. It was good, but it wasn't what I wanted. If I want the art I want to make, it seems the best way still is to make it myself. Well, at least until mind reading becomes way better than what we have now...

Again, I try to have sympathy for artists who feel as you do but there seems to be a certain dissonance in the sort of "art as human endeavor"/"creation is a part of the fundamental joy of my life" therefore "no AI in art ever" argument that ignores both the synthetic creative potential of AI and all of the ways AI will enable creatively constrained individuals to create art in ways they would not have been able to otherwise.

Like imagine if I really really really wanted to make movies but I didn't have the money or the social clout to get an entire production company and actors and everything involved with film production together on my own, but now suddenly I have most of the tools I need to create whereas before my only option was "oh well".

I feel like if you think art as human achievement is an important part of quality of life, you lose some amount of claim to sympathy for not considering the collateral damage of cutting off others' access to art as it would contribute to -their- quality of life. AI will enable our theoretical filmmaker while not cutting off existing artists from the joy of their pre-existing creativity, which suggests more to me that this is some other more pragmatic protectionism than any particular philosophy of art-as-humanism. If there's some indescribable way the existence of AI ruins it for you I guess I believe you but I don't really know what to say other than this doesn't really persuade me to agree with you on banning it.

No, this is completely beside the point.

Art is an extension of you. AI robs it of that.

Say you had a DVD of a movie, and then you made a copy of it to give to your friend. You would know that you had provided art to a friend, but since the production of it wasn't you (beyond the copying), you wouldn't feel that it was an extension of yourself.

But if you changed this slightly - if you recorded a riff track over the movie, in your voice, then you would know that it was yours, even though the movie wasn't. It was something that only you could do, whether it was good quality or not.

This applies to everything - from a stick figure to great statues - the creation of art is an extension of yourself, with each stroke telling something about yourself - perhaps you went harder because you had a bad breakup, or perhaps you put in more time on the face of someone you loved, etc.

All of that is robbed by a soulless machine, and it removes the very human, very divine element of art that was part of you.

Neither will AI "enhance the way that art is created by creatively constrained individuals"; assuming there is any input from a human, the "gatekeeping" will still happen, and larger companies will have better models and better editors and so on. A more powerful tool doesn't equalize things any more than guns equalized the individual with government armies.

And if AI completely does the art for you, then it is very much nothing of you at all; a variation on copying the DVD. It happened /to/ you. You had minimal contribution. It's like, instead of playing a game, you just always get the endings. But the experience of playing the game is the important part, and is what would have created the ending.

That's the reason why the existence of AI ruins it for me. Art is beautiful, whether "quality" or not, because it is something that speaks of your particular soul. It speaks of meaning. It connects you to the artist. When that is rubbed away, there is only desolation and despair.

There is the lack of meaning, and that is true existential fear.

How much of this applies to derivative works? Unauthorized sequels, mixes, fanfiction, that sort of thing?

I think it actually has a lot to do with sweat and effort; if you just traced the Venus de Milo, well, it is something that your particular paper and pencil certainly varied, so there is that.

But if you redrew Venus with a great deal of effort and love to put her on your backyard porch, then it is much more yourself.

Your effort would show in the specific brush strokes you used, your backyard porch would be your own, and your emotions would show in the dedication to the piece. Say it had an imperfection because you had wrist pain - so it would also be a testimony of your history, and your being, at the time of drawing, and of your will to draw through the pain.

Re: describing how a person smells - Google made smell embeddings in 2019: https://ai.googleblog.com/2019/10/learning-to-smell-using-deep-learning.html

I'm no expert, but you seem to be generally good at advocating for AIdontkilleveryoneism. So I was surprised to see this:

> What is the point of this tone policing, if it does not lead to policy views changing? The whole point of encouraging people to be more polite and respectful is so they will listen to each other. So they will engage, and consider arguments, and facts could be learned or minds might be changed.

AI that can adjust people's beliefs toward arbitrarily selected positions is Bad. Like, this is the scenario I worry about, more than nanobots or gene-tailored viruses or hunter-killer drones or anything else. Both from an "everyone dies" perspective, and from a "technological civilization collapses" perspective if we're luckier.

To anyone reading, 10 Internet Points for the best explanation of how "don't boil a goat in its mother's milk" actually means "don't listen to anything an AI tells you".*

(Bing Chat came up with some stuff that ended with "In this context, a human might say that the biblical commandment and the advice are both examples of cautionary principles that aim to protect one's values and interests from potential harm or corruption.", but I can't seem to get anything more specific out of it.)

* And also, "R2-D2, you know better than to trust a strange computer!"

If the system universally encouraged people to communicate with each other in ways that made them absorb info and consider arguments, that would seem good, the same way it would if they did it on their own?

I do think 'AI manipulates people' is a worry down the line. This doesn't seem like that thing, nor does its failure give any comfort.

Taking the second part first, I'm a bit worried that if we can train neural nets to make statements less inflammatory, we can train them to make statements more inflammatory. (And depending on how strong that "Waluigi effect" is, we may have already done so.) But given the revelations about the Twitter and Facebook algorithms that have come out, we seem to be doing "well" on that by ourselves, so how much worse could it get? **crosses fingers**

As for the first part, I'm in favor of the goal of coming to accurate beliefs, whatever those beliefs might be, through calm reasonable discussion, without pressure. Not emotion-free, but where emotions are data and not an argument. I'm happy to encourage that (although I'm somewhat unclear about the line between acceptable pressure and unacceptable pressure).

What I am viscerally scared of is the goal of changing people's beliefs. "Making them absorb info" cuts all ways. I'm not going to go so far as to say that some arguments shouldn't be considered, but I think the freedom to not consider an argument is part of our mind's immune system. I've been around people who are good at manipulation, and the aftermath is bad. It's like something out of Vernor Vinge's "A Fire Upon The Deep": having to rip lots of information out of your mind and put it in a box, and then take small pieces out of the box and check them, one by one, before re-installing them. Except that the real problem isn't incorrect information, but incomplete information, and patterns that prey on weak spots in your cognition. Those are a lot harder to find.

I didn't have the fear before this experience, and probably wouldn't have had it except for this experience, and I don't know a way to communicate its effect in words. Ironically, I wish I were better able to persuade you. But I have to settle for hoping that you'll at least remember it in some back corner of your mind, and dredge it up into working memory if any confirming evidence comes to your attention.

I am interested in number 28, but it links back to your post.

That's the table of contents -- it describes the contents of the post. It’s a long post, and the sections don’t maintain the numbering of the table of contents, so it is a little hard to tell.

Also... the table of contents links don’t work well in the substack app, disappointingly.

If people wanted the sections to have numbers on them later we could do that, my current thought is it's not worth it. The whole ToC thing has to be done fully manually at this time.

> I also continue to see people *assume* that AI will increase inequality, because AI is capital and They Took Our Jobs. I continue to think this is very non-obvious.

Let me take a stab, specifically for *income* inequality (I'll address consumption equality below). Loosely speaking, a person's income is a function of their earning power (the market value of their abilities) and their invested capital. To the extent that AI competes with humans for tasks that people are paid to perform today, it shifts the supply / demand curve and thus drives down the market value of human abilities. Thus, the ability-based component of personal income will shrink. If we assume that AI will eventually reach / exceed human-level abilities in general, and that progress in robotics eventually brings physical tasks into scope, then the ability-based component may shrink to effectively zero.

It seems to me that returns to scale must drive invested capital toward inequality, while there are at least some factors that drive personal earning power toward equality. (We all start out with pretty much the same genetic inheritance, the same allocation of arms and legs, etc.) It's already the case that the earning potential of a child from a wealthy family may benefit from better education, social connections, nutrition, and so forth, to say nothing of an inheritance or at least a safety net. But the child of parents with a net worth of $10,000,000 will not typically have 1000 times the income of the child of parents with a net worth of $10,000. In a world where AI takes most or all jobs, I think that reverses, and I could envision the system dynamics leading to income inequality tending toward infinity, at least until something so drastic happens that concepts like "income inequality" are no longer meaningful.
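A toy model may make that dynamic concrete (every number below is an illustrative assumption, not data): treat income as wages plus returns on capital; as AI erodes the wage component, measured inequality climbs toward the much more skewed distribution of capital.

```python
# Toy sketch, assumed numbers: income = wage_share * wages + r * capital.
# Wages are relatively equal across people; capital holdings are highly
# skewed. Driving the wage component to zero raises the Gini coefficient.
def gini(xs):
    """Gini coefficient of a list of non-negative incomes."""
    xs = sorted(xs)
    n = len(xs)
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return 2 * weighted / (n * sum(xs)) - (n + 1) / n

wages = [40_000, 60_000, 80_000, 120_000]            # fairly equal
capital = [10_000, 100_000, 1_000_000, 10_000_000]   # highly skewed
r = 0.05                                             # return on capital

for wage_share in (1.0, 0.5, 0.0):   # AI progressively erodes wages
    incomes = [wage_share * w + r * k for w, k in zip(wages, capital)]
    print(f"wage share {wage_share:.0%}: Gini = {gini(incomes):.2f}")
```

In this sketch the Gini rises from about 0.53 with full wages to about 0.70 once wages go to zero; the direction, not the particular numbers, is the point.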

Regarding your four points:

> 1. If AI can do a task easily, the marginal cost of the task quickly approaches zero, so it is not clear that AI companies can capture that much of the created value, which humans can then enjoy.

Perhaps so, but I don't see how this undercuts the argument for increased income inequality. There will still be rivalrous goods whose supply is physically limited, and their prices will not go to zero. The unemployed radiologist may be able to enjoy cheap robo-psychotherapy, but they will have trouble making payments on their large house.

> 2. If we eliminate current jobs while getting richer, I continue to not see why we wouldn’t create more different jobs. With unemployment at 3.4% it is clear there is a large ‘job overhang’ of things we’d like humans to do, if humans were available and we had the wealth to pay them, which we would.

Sure, but we are still shifting the supply / demand curve, which should push down wages, yes? Also, as AI capabilities continue to increase, it seems easy to envision unemployment rates skyrocketing. Friction effects will come into play; for instance, people may tire of repeatedly retraining themselves. Consider the example of a factory town remaining economically depressed many years after the factory closed.

> 3. If many goods and services shrink in cost of production, often to zero, actual consumption inequality, the one that counts, goes down since everyone gets the same copy. A lot of things don’t much meaningfully change these days, no matter how much you are willing to pay for them, and are essentially free. We’ll see more.

Agreed, this seems likely, for non-rivalrous and/or virtual goods. That covers a lot of ground, but it also misses a lot of ground.

> 4. Jobs or aspects of jobs that an AI could do are often (not always, there are exceptions like artists) a cost, not a benefit. We used to have literal ‘calculator’ as a job and we celebrate those who did that, but it’s good that they don’t have to do that. It is good if people get to not do similar other things, so long as the labor market does not thereby break, and we have the Agricultural Revolution and Industrial Revolution as examples of it not breaking.

Regarding the first two sentences: I don't think this is intended to argue against the potential for increased income inequality, it's making a separate point? As for the last sentence: yes, but if AI advances sufficiently, this might turn out to be "I covered one eye, and I could still see, so I'm sure it will be fine when I cover the other eye as well".

> I can’t tell for myself because despite signing up right when they were announced I still don’t have plug-in access.

Gwern also doesn't have access (which surprised me). Do they just bring new people in at random?

I presume there are a combination of factors, including when you joined the wait-list and how good a customer you are in various ways.

I also presume various people would be well-served to put their fingers on the scale, even if I can't be too confident which direction they should want!

Why are you surprised Gwern doesn't have access? I don't have a good sense of how well known he is nowadays but I could imagine his 'notoriety' helping _or hurting_ his chances at getting access to things like this.

> How much of that will be sustained as the market grows?

Hopefully it won't be. Someone can (and frankly, should) do the same with any of a zillion existing influencers. Generative AI will do the exact same thing with "influencers" as it will with artists. Except influencers certainly don't have any moral ground to stand on.
