53 Comments

No insights or opinions to contribute. Just thanks for keeping across all of this, so that I at least have a broad idea of what's going on.

This seems necessary, since I'm genuinely embarrassed by the opinions I held on the matter as recently as February.

You typo Snoop Dogg as Snoop Dog a couple times. Incidentally, I recommend listening to his album Doggystyle if you haven't; guy's famous for a reason.

> What’s striking about the above is that the alarms are simply ‘oh, we solved robotics.’

Just to clarify, I wasn't really describing smoke alarms, but rather the sort of "alarms" used by fire companies when you already have a major fire underway. According to Wikipedia, a 3-alarm fire would involve calling out 12 engine companies and 7 ladder companies. It's not an early warning tool; it's a crisis management tool. I am not trying to tell people when they should start looking for EXIT signs and walking calmly in that direction. GPT-4 is already at that point. Please move towards the EXIT signs now. I'm trying to tell people, "If you see X, you need to be prepared to mount a major, immediate response with significant resources, or else the city will burn."

I think that it's worth having this kind of alarm, too. Especially for people who aren't sold on ultra-short timelines where an AI quickly bootstraps itself to effective godhood and disassembles the planet with nanobots. If an AI is forced to rely on some combination of humans, robotics and synthetic biology to manufacture GPUs, then we need to be able to distinguish between different sizes of "fires." You need different responses for "there is a grease fire in my wok" and "we have multiple high-rises on fire" and "the western US is on fire again."

Anyway, more on robotics...

The problem with robotics is pretty much like the problem of self-driving. The first 90% is 90% of the work. The next 9% is _another_ 90% of the work. And the next 0.9%... You get the idea. And to complete most real-world tasks, you'll need a lot of 9s. Waymo has mostly solved self-driving, but "mostly" isn't nearly good enough.
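
A minimal sketch of that intuition, assuming (as the comment does, purely for illustration) that each additional nine of reliability costs roughly as much as all the work that came before it; the numbers are made up and only meant to show how fast the effort compounds:

```python
# Toy illustration of the "every extra nine is another 90% of the work" intuition.
# Assumption (from the comment, not from data): each additional nine of reliability
# costs as much as all effort spent so far, i.e. cumulative effort doubles per nine.

def cumulative_effort(nines: int, first_90_cost: float = 1.0) -> float:
    """Effort to reach reliability 1 - 10**(-nines) under the doubling assumption."""
    return first_90_cost * 2 ** (nines - 1)

for nines in range(1, 6):
    reliability = 1 - 10 ** (-nines)
    print(f"{reliability:.5f} reliable -> {cumulative_effort(nines):4.0f}x the cost of the first 90%")
```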

We've been able to build "pretty good" robots for a couple of decades now. I'm not talking about your classic factory robots, which are incredibly inflexible. I'm not even talking about pick-and-place robots, which involve visual processing and manual dexterity: https://www.youtube.com/watch?v=RKJEwHfXs4Q. We've actually had jogging robots and somersaulting robots and soccer robots for a while now. And they're not terrible? Many of them are similar to self-driving cars: They can accomplish a specific task 90-99.9% of the time under reasonably real-world conditions. Which isn't close to good enough to make them viable.

This is a major reason why a working "handybot" is one of my alarms that we're nearing the endgame. If you've ever worked on old houses, you'll know that nothing ever goes quite according to plan. You need vision and touch and planning and problem recognition and fine manipulation and brute force. A hostile or indifferent AI which can solve this is almost certainly capable of surviving without humanity, given enough time to build out a full-scale industrial economy.

May 11, 2023·edited May 11, 2023

I'm a regular EconTalk listener, so that was the first one of EY's podcast interviews I had seen (I've read a fair amount of his stuff in various places, but I haven't watched any live stuff). While I don't 100% agree with EY, I think I mostly understand his position. If his interview with Russ was emblematic: someone _please_ tell him to stop doing these. It is literally painful to listen to. He is _terrible_ at making the argument he's trying to make. I'm having trouble finishing the episode. The first 30 minutes, at least, consist of Russ trying to get EY to explain his point, and he seems completely incapable of doing so in a way that will make sense to people who don't already know and understand the arguments. EY should have been able to bypass the first 30 minutes of conversation with a few sentences.

No wonder this argument is not making headway with "normies" if this is the quality of interview that happens.

For what it's worth, the writings of you and Scott Alexander, and the recent podcast interview you did, were all _dramatically_ better.

Like I said, while I take the risk of AI-killeveryonism pretty seriously, I'm much more hopeful than you and others in the community. But even I wanted to go back in time and interrupt EY to tell him better ways to communicate his points.

I can imagine someone who was completely unfamiliar with the debate coming away thinking that EY's side of the argument is completely nonsensical, which would result in _decreased_ concern.

Maybe the second half of the interview gets better, but like I said, I'm having trouble getting to it.

> Sure, but why should I care? As Bart Simpson pointed out, I already know how not to hit a guy. I also know how not to convince him of anything.

I've recently come to believe that mere words don't change people's minds. I watch a rationalist-adjacent-adjacent community where lots of politics gets discussed daily. Even with the rationalist-ish openness to ideas and a decent amount of kindness, hardly anyone has actually meaningfully changed their minds on any major issue.

My crackpot theory is that what changes political beliefs is an event with (perceived) extremely low probability that shocks the brain and forces it to update, typically heavily emotional. Words *can* rarely do this, but they have to be very novel, not just the standard arguments everyone's heard before. I don't know if these words can be crafted on demand, though.

> The issue is that these properties do not tell you

You may have accidentally the rest of the sentence here.

Greatly appreciate the roundup and especially the contrary takes on the Google memo. Strong agreement that given the ease of switching between models, something needs to actually surpass GPT-4 to be taken seriously (and I'm certain GPT-4.5 is in development!)

One editor's note: you accidentally a portion of the superintelligence section.

Thank you, sir.

> You use Stable Diffusion if you want to run experiments with various forms of customization, if you want to keep your creations and interactions off of Discord, or you want to blatantly violate MidJourney's content guidelines.

Earlier in the post you asked why MidJourney is only available on Discord. I think this is why--MidJourney can point to Discord's content guidelines as an additional firewall between them and highly realistic images of [disturbing thing].

Either way, thanks for another great roundup!

I thought your piece in the Telegraph was very good. I just want to point out that I'm a normie, and I read your substack as well as more popular pieces. You may bring some more normies over here by writing for more popular publications. Whether you think that's a good thing remains to be seen.

I like the idea of AI freeing people up to create art, but I also don't think you can have it both ways. What I think is important here is that there ought to be cultural norms in favor of clearly indicating what is human produced vs AI produced (and then perhaps regulation in the case that coordination fails to produce this). Further, I think there's a lack of imagination here: I would expect AI to help people who aren't artistically inclined in conventional ways to produce works of art that they never would have otherwise; I'm expecting a revolution in single-man independent games as people who are only good at one of writing, art, programming, or sound design have tools to handle all the others (or even if the revolution doesn't happen, I have some ideas that I wouldn't mind personally throwing into that machine if nobody else wants to).

I think there was an old Scottpost that was something like "it doesn't matter how much art AI produces because I'll still be interested in the art my friends produce regardless" and I think that stands both in the positive absolute sense and in a less obvious current sense in which all the content that exists that isn't made by my buds might as well be AI right now for how little meaningful human contact I can get with the people making it. I try to be charitably inclined towards artists who are worried about losing their livelihood but I also get the impression that there's a more urgent sense that artists are most worried about losing their monopoly on access to a parasocial audience. I mean, a lot of content creators say they hate this so maybe this is me failing at trying to be charitable towards them, and there's lots to be said about content glut not guaranteeing quality, but I can't shake that there seems to be an edge of ego to the panic.

Re: describe how a person smells. Google made smell embeddings in 2019: https://ai.googleblog.com/2019/10/learning-to-smell-using-deep-learning.html

May 11, 2023·edited May 11, 2023

I'm no expert, but you seem to be generally good at advocating for AIdontkilleveryoneism. So I was surprised to see this:

> What is the point of this tone policing, if it does not lead to policy views changing? The whole point of encouraging people to be more polite and respectful is so they will listen to each other. So they will engage, and consider arguments, and facts could be learned or minds might be changed.

AI that can adjust people's beliefs toward arbitrarily selected positions is Bad. Like, this is the scenario I worry about, more than nanobots or gene-tailored viruses or hunter-killer drones or anything else. Both from an "everyone dies" perspective, and from a "technological civilization collapses" perspective if we're luckier.

To anyone reading, 10 Internet Points for the best explanation of how "don't boil a goat in its mother's milk" actually means "don't listen to anything an AI tells you".*

(Bing Chat came up with some stuff that ended with "In this context, a human might say that the biblical commandment and the advice are both examples of cautionary principles that aim to protect one's values and interests from potential harm or corruption.", but I can't seem to get anything more specific out of it.)

* And also, "R2-D2, you know better than to trust a strange computer!"

I am interested in number 28, but it links back to your post.

> I also continue to see people *assume* that AI will increase inequality, because AI is capital and They Took Our Jobs. I continue to think this is very non-obvious.

Let me take a stab, specifically for *income* inequality (I'll address consumption inequality below). Loosely speaking, a person's income is a function of their earning power (the market value of their abilities) and their invested capital. To the extent that AI competes with humans for tasks that people are paid to perform today, it shifts the supply / demand curve and thus drives down the market value of human abilities. Thus, the ability-based component of personal income will shrink. If we assume that AI will eventually reach / exceed human-level abilities in general, and that progress in robotics eventually brings physical tasks into scope, then the ability-based component may shrink to effectively zero.

It seems to me that returns to scale must drive invested capital toward inequality, while there are at least some factors that drive personal earning power toward equality. (We all start out with pretty much the same genetic inheritance, the same allocation of arms and legs, etc.) It's already the case that the earning potential of a child from a wealthy family may benefit from better education, social connections, nutrition, and so forth, to say nothing of an inheritance or at least a safety net. But the child of parents with a net worth of $10,000,000 will not typically have 1000 times the income of the child of parents with a net worth of $10,000. In a world where AI takes most or all jobs, I think that might reverse, and I could envision the system dynamics leading to income inequality tending toward infinity, at least until something so drastic happens that concepts like "income inequality" are no longer meaningful.
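
As a toy numerical sketch of that dynamic (my own illustration, not anything from the post; every number is made up): model each person's income as wage_rate * ability + return_rate * capital, with abilities roughly equal and capital highly skewed, and watch measured inequality as the market value of ability falls toward zero.

```python
# Toy sketch: income_i = wage_rate * ability_i + return_rate * capital_i.
# Assumptions (illustrative only): abilities nearly equal, capital highly unequal.
# As AI pushes wage_rate toward zero, income inequality converges to capital inequality.

def gini(incomes):
    """Gini coefficient of a list of non-negative incomes."""
    xs = sorted(incomes)
    n = len(xs)
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * sum(xs)) - (n + 1) / n

abilities = [0.8, 0.9, 1.0, 1.1, 1.2]               # roughly equal human endowments (assumed)
capital = [0, 1_000, 10_000, 100_000, 1_000_000]    # highly skewed capital holdings (assumed)
return_rate = 0.05

for wage_rate in (50_000, 10_000, 1_000, 0):        # falling market value of human ability
    incomes = [wage_rate * a + return_rate * k for a, k in zip(abilities, capital)]
    print(f"wage_rate={wage_rate:>6}: Gini = {gini(incomes):.2f}")
```

In this made-up example the Gini rises from roughly 0.2 toward 0.75 as the wage component vanishes, which is the "income inequality driven by capital" outcome described above.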

Regarding your four points:

> 1. If AI can do a task easily, the marginal cost of task quickly approaches zero, so it is not clear that AI companies can capture that much of the created value, which humans can then enjoy.

Perhaps so, but I don't see how this undercuts the argument for increased income inequality. There will still be rivalrous goods whose supply is physically limited, and their prices will not go to zero. The unemployed radiologist may be able to enjoy cheap robo-psychotherapy, but they will have trouble making payments on their large house.

> 2. If we eliminate current jobs while getting richer, I continue to not see why we wouldn’t create more different jobs. With unemployment at 3.4% it is clear there is a large ‘job overhang’ of things we’d like humans to do, if humans were available and we had the wealth to pay them, which we would.

Sure, but we are still shifting the supply / demand curve, which should push down wages, yes? Also, as AI capabilities continue to increase, it seems easy to envision unemployment rates skyrocketing. Friction effects will come into play; for instance, people may tire of repeatedly retraining themselves. Consider the example of a factory town remaining economically depressed many years after the factory closed.

> 3. If many goods and services shrink in cost of production, often to zero, actual consumption inequality, the one that counts, goes down since everyone gets the same copy. A lot of things don’t much meaningfully change these days, no matter how much you are willing to pay for them, and are essentially free. We’ll see more.

Agreed, this seems likely, for non-rivalrous and/or virtual goods. That covers a lot of ground, but it also misses a lot of ground.

> 4. Jobs or aspects of jobs that an AI could do are often (not always, there are exceptions like artists) a cost, not a benefit. We used to have literal ‘calculator’ as a job and we celebrate those who did that, but it’s good that they don’t have to do that. It is good if people get to not do similar other things, so long as the labor market does not thereby break, and we have the Agricultural Revolution and Industrial Revolution as examples of it not breaking.

Regarding the first two sentences: I don't think this is intended to argue against the potential for increased income inequality, it's making a separate point? As for the last sentence: yes, but if AI advances sufficiently, this might turn out to be "I covered one eye, and I could still see, so I'm sure it will be fine when I cover the other eye as well".

> I can’t tell for myself because despite signing up right when they were announced I still don’t have plug-in access.

Gwern also doesn't have access (which surprised me). Do they just bring new people in at random?

> How much of that will be sustained as the market grows?

Hopefully it won't be. Someone can (and frankly, should) do the same to any of a zillion existing influencers. Generative AI will do the exact same thing with "influencers" as it will with artists. Except influencers certainly don't have any moral ground to stand on.
