8 Comments

Once again, an excellent update. Let's hope the federal agency comes through.


I'm taking issue with this quote of yours:

“Consider how this works with actual humans. In the short term, there are big advantages to playing various manipulative games and focusing on highly short term feedback and outcomes, and this can be helpful in being able to get to the long term. In the long term, we learn who our true friends are.”

I’ve been reading your Substack (avidly! eagerly!) for half a year now, and it’s clear as an azure sky that you are smart as fuck. I like to think of myself as smart-ish (PhD in English from the U of Utah, 2011), and, as a high school English teacher, I have a LOT of experience with “actual humans” (i.e., people of all ages for whom your writing would be impenetrable).

I’ve been noticing for a while that you (and most other AI players) consider the set of “actual humans” to include pretty much ONLY adults who made it through elementary and secondary education WITHOUT the influence of current LLMs and their wide availability on the internet and other platforms.

These kids are *not* thinking about the world in rational ways, and all the effort I make as a teacher to help them *do* so amounts to a hill of fucking beans (that we might all die on). And I put in A LOT OF FUCKING EFFORT! And yet:

1. One of my students (a junior, so ~17 years old) told me that she got some really good mental health advice from the Snapchat AI "friend" she has on her phone;

2. One of my students (a senior, so ~"an adult") said I should "stop talking so much about AI in class;"

3. All of my students think they know better about AI and tech in general than all of the adults they know (which I can relate to, having managed my parents' VCRs, DVDs, DVRs, Internets, modems and whatever else since I was the age of my students).

In the "long term" that you mention, the "actual humans" you mention (i.e., the young adults who are coming of age in a world where AI is *uniformly available to cater to their every desire*) will absolutely NOT be capable of knowing who in fact are the "true friends" might be. They're going to trust whatever the Snapchat bot, or the ChatGPT bot, or the MetaBot, or what-the-fuck-ever-Bot will tell them.

I'm deeply, DEEPLY concerned about how little attention you luminaries in the AI world are giving to how AI is affecting the less-aware folks among us, and in particular, the young'uns. I keep seeing this assumption from y'all that kids will simply be able to navigate the ubiquity of AI in a way that, back in the day, we could navigate the setting of a digital clock for our parents, or the spooling of magnetic tape from a mixtape our girlfriend gave us when the stereo ate it.

You guys ARE smart as fuck, but your common sense was nurtured in a world where AI didn't have much of a role to play in the development OF that common sense. The rest of the world is by and large dumber than y'all, and by and large younger than y'all, and AI is having an unprecedented effect on their ability to understand what it even means to "play manipulative games."

When it comes to the role of AI in our world, your warrant that "In the long term, we learn who our true friends are" is a dangerous one to hold, especially when you and I have had a *much* longer term in which to discover who our true friends are, and have been able to do so with much less influence from AI.

I hate to parrot a conservative cliché (especially since I'm on the far left), but "think of the children!" has become pretty sound advice for those of you writing about AI in ways that are otherwise so insightful and intelligent.


I appreciate the effort, and I am 100% not trying to speak to such folks here. I don't know if I could do so, but I know I would write completely differently.

What I mean in that segment is that I think that's how it works when people generally interact with each other. It certainly seems directionally true that over time people figure these things out more than they do in the short run, and that over the months and years, regular people do learn who they can and can't rely on and trust, even if they are sometimes fooled. And yes, sometimes such people end up trusting the wrong people or groups and believing some pretty wrong things.

I still do think that people will collectively figure it out, with help from AIs and other humans. I think having an AI you CAN trust to talk to is going to be a huge deal, and that e.g. Claude-2 is reasonably trustworthy in that sense, and that soon that will extend to being current and able to respond to claims from other AI (or human) sources in real time. I'd think we can make such people more protected rather than less.

And while I doubt the 17yo got the best possible mental health advice from the AI in Snapchat, in expectation it's probably fine? Or at least, no less fine on average than what they hear from their fellow students or Google or whatever other sources they'd otherwise use?

I do think there's a chance I turn out to be wrong about this, but I remain optimistic on this kind of mundane level.


> Don’t give me ‘Yes,’ ‘Sure’ and ‘Great,’ how about ‘Yes,’ ‘No’ and ‘Maybe’? Somehow I am the weird one.

Tech companies have always had trouble understanding the subtleties of emotional valence. (This one drives me nuts, too.)

> Many people really, really hate the idea of AI artwork.

I remain convinced that it's a vocal minority acting in their own self-interest. They're trying to do to AI what was done to Google Glass (and more recently, to some extent, the metaverse, though in that case Zuckerberg's execution of the idea was really pathetic). But Glass was custom hardware, and anybody can run Stable Diffusion on a commodity GPU, so I think they will fail.
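
For what it's worth, here is a minimal sketch of what "run Stable Diffusion on a commodity GPU" looks like in practice, using the Hugging Face diffusers library (the checkpoint name and VRAM figures are my illustrative assumptions, not something the commenter specified):

```python
# Minimal local Stable Diffusion run via the Hugging Face diffusers library.
# Assumes a CUDA GPU with roughly 4-8 GB of VRAM; the checkpoint is illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # any compatible SD checkpoint works here
    torch_dtype=torch.float16,         # half precision to fit commodity GPUs
).to("cuda")

image = pipe("a watercolor lighthouse at dawn").images[0]
image.save("lighthouse.png")
```

The point being: unlike Glass, there is no custom hardware to boycott. A mid-range gaming card and a dozen lines of Python are enough.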


Surely it’s not TOO tinfoil hat to read between the lines of Time et al. complaining about information access [to science] as actually complaining about information access [using anyone other than Time et al. as their filter]?

I continue to be confused by people insisting AI will make people more lonely; you don't even need human-level AI for that to not be strictly true. It just has to reasonably mimic my cat.


Interviews over 3 hours tend to be better for the same reasons that rock songs over 5 minutes tend to be better. If they weren't top-notch content they'd have been cut shorter. Selection effects rule everything around me!


The EconTalk episode was great, though I can't believe I started listening to EconTalk 16ish years ago...

I think Russ doesn't engage enough with the argument when it comes to actual AI risk, but you did a great job explaining it, generally in language and a style that would make sense to him.

My best attempt at convincing him would probably be to focus on science and tech. I think he thinks of AI as better strategists, and a focus on what they may be able to do simply with e.g. hacking, biotech, robot armies, etc. may work better. He dismissed EY's nanobot example, but I think there are more prosaic avenues that could be more persuasive.


Thanks! I've been listening for a while too, although not fully consistently. I definitely knew going in that a sci-fi approach wouldn't work; you need to be practical and, of course, think like an economist. It helps that I do that a lot anyway, and that the doom case is overdetermined enough that such a case is strong.
