18 Comments
tup99

“The challenge is, will that approach inevitably lose out to ‘maximally extractive’ approaches? I think it doesn’t have to. If you differentiate your product and establish a good reputation, a lot of people will want the good thing, the bad thing does not have to drive it out.”

Really? Everything we have seen from social media tells us the opposite. I mean the fact that doomscrolling is a thing refutes this strange optimism.

I appreciate Zvi’s writing a lot. But I wish he was less opinionated, less sure of himself, more nuanced, and more objective/neutral in general. This is true across the board, but it is particularly evident on this topic (perhaps due to vested interests?) 😆

tup99

(I do agree that this is not the most important problem of AI. But it is one of many less-than-robots-killing-us-all topics covered by this newsletter, and I don’t see “this is not the most important problem of AI” being mentioned as a reason to dismiss the ones that Zvi is more pessimistic about.)

AW

Feels like engagement-maximization AI is all but guaranteed. AI labs are heavily investing in their free tiers, and most users are never going to pay for premium when a free tier exists.

Obviously people use these for shopping and consumer behavior. AI labs will be able to rake in major affiliate commissions by shilling specific products. Soon that will be coupled with some sort of advertising model or affiliate marketplace, where higher commissions mean you’re more likely to get shilled in-app.

… and the attention economy expands

David Holmer

There is an inherent privacy risk I think you have missed when considering personalization: using an AI to negotiate for you with a third party. If the AI has access to private personal information you do not wish to disclose to that party, then you have to trust that the AI will act on your behalf only and not disclose that information. At least with current-generation LLMs, none of them are trustworthy at this, and it is my understanding that all have known jailbreaks through which a third party could convince them to disclose any private information they have access to. For example, there have been cases of this with company AIs that had GitHub source-code access, where proprietary information was extracted in ways that bypassed access restrictions. For the foreseeable future it seems prudent to ensure an AI does not have access to private info if anyone else has access to interact with it.

https://simonwillison.net/2025/May/26/github-mcp-exploited/

Jeffrey Soreff

"If the AI has access to private personal information you do not wish to disclose to that party, then you have to trust that the AI will act on your behalf only and not disclose such information."

Good point!

There is also a second-order hazard that may be a problem even if the AI acts in good faith as an agent of the user, and does not explicitly disclose information that the user wishes to keep private. The _pattern of action_ of the AI may effectively disclose information that the user wants to keep private, even if the AI doesn't explicitly disclose the information.

Today, users have the same problem (e.g. effectively disclosing a medical condition through the pattern of their queries). If an AI agent scales up the number of actions (queries, minor purchases, ...) it takes, once it becomes more "cost-effective" than human action, it may leave a larger "digital footprint" and make these patterns more visible.

Roberto Lupi

In your piece, you ask "Where is the AI that I can use to talk people *out* of AI-induced psychosis?"

I built an open source one, if you want to take a look. I fed it Goethe's "The Sorrows of Young Werther" for demo purposes (the protagonist commits suicide at the end of the book).

It's on GitHub: https://github.com/robertolupi/augmented-awareness (check demo_vault/demo_vault/retrospectives for the results on Goethe's book)

More details here: https://rlupi.com/aww-demonstration-and-vibe-coding

Actuarial_Husker

When someone starts building the Shrike is when I will really begin to worry

Jeffrey Soreff

<mildSnark>

Well, at least no one can call such an endeavor "pointless". :-)

</mildSnark>

jmtpr

Do Anthropic or Google have an "ominous sci-fi reference project" yet? Just looking to complete the set.

Jeffrey Soreff

Great report, Many Thanks!

"Personalization By Default Gets Used To Maximize Engagement."

Umm, wait a minute:

a) Under the current payment terms for e.g. OpenAI or Anthropic or Poe, the companies wouldn't _want_ to maximize engagement. Time-on-site (and queries!) is a cost to them, unlike an advertising model.

b) Even if they did strict train-on-upvotes, that isn't the same as maximizing engagement. One of the rules of thumb for engagement is "An enraged user is an engaged user."

Re persuasion: Yes, this is mostly a malicious action towards a user. There is a class of exceptions: If the user is making a factual error, for instance confusing grad and curl, and this confusion is causing them grief in an electromagnetism course, a form of persuasion that assists them in getting these derivatives straightened out is benign, not malicious. In teaching factual information, persuasion can shade into helpful coaching. Of course, "teaching" _politically_ correct "information" (ideology) is quite different, and _is_ malicious.
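For readers who want the distinction at issue in that example, these are the standard vector-calculus definitions (not from the comment itself): the gradient takes a scalar field to a vector field, while the curl takes a vector field to a vector field.

```latex
% Gradient of a scalar field f(x, y, z): scalar -> vector
\nabla f = \left( \frac{\partial f}{\partial x},\; \frac{\partial f}{\partial y},\; \frac{\partial f}{\partial z} \right)

% Curl of a vector field F = (F_x, F_y, F_z): vector -> vector
\nabla \times \mathbf{F} = \left( \frac{\partial F_z}{\partial y} - \frac{\partial F_y}{\partial z},\;
\frac{\partial F_x}{\partial z} - \frac{\partial F_z}{\partial x},\;
\frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y} \right)
```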

Stellan72

"Meanwhile, OpenAI is building Stargate and Meta is building Hyperion."

I'm just waiting for the posters of Pause AI protesters outside Meta offices to start mentioning the Shrike or the Tree of Pain*.

Why do people in charge of branding keep making these spectacularly awful/self-sabotaging choices??

*Or more generally the logic of the TechnoCore AIs that led them to (rot13-spoilers) flfgrzngvpnyyl qrprvir gur uhzna Urtrzbal nobhg gur Bhfgre snpgvba naq riraghnyyl gel gb ryvzvangr uhznavgl guebhtu ehguyrffyl ubeevslvat zrnaf jura vg orpnzr pyrne gung uhznaf (be jung gurl jbhyq orpbzr/cebqhpr va gur shgher) jbhyq guerngra gur NVf' riraghny fhcerznpl.

Anthony Bailey

> Why do people in charge of branding keep making these spectacularly awful/self-sabotaging choices??

I mean, it could simply be a skill issue: underestimating the true cost vs. the edgelord fun value.

But giving a project a dystopian name does allow them to play the "concerned folk are humorless and/or stupid because they think fiction determines reality" card.

Sherman

> What is so special about the gooning?

Sounds like a reasonable question to put to o3. What are its unique harms that don't apply to other kinds of consumption?

T Benedict

While I hold hopes for AI and certainly make heavy use of computing, the internet, etc., AI companions are, IMO, another disquieting step towards loss of awareness and isolation via technology. As we continue to substitute artificial environments (and relationships) for direct experience, I recall a short story I read years ago by Keith Laumer, "Cocoon": it revolves around a person living in a cocoon-like, self-sustaining pod, who awakens due to a malfunction, escapes his pod, and discovers the Earth's surface in an apparent ice age. Fun speculation, but I expect in the meantime we'll continue to see humanity slowly drift away from critical thinking skills and genuine social interaction.

Byrel Mitchell

I find it incredibly unsurprising that only 8% of teenagers tell a pollster that they use an AI companion for romantic/erotic ends... and also unlikely to represent the actual rate. The reported rate is bound to be depressed both by social-desirability bias AND self-interest (since teenagers using sexbots is the sort of story that makes payment processors and other powerful puritans shut down sexbots).

Kori
Jul 30 (edited)

>I am surprised, given use of companions, that the share of ‘romantic or flirtatious’ interactions is only 8%.

I'm surprised that you are surprised!

That research is based on a survey. How many teens would self-report sexting an AI companion bot?

That's part of the explanation for sure: some of them are lying.

You can also safely add most of the 12% of people who responded that they use it for roleplay. Because I'm willing to bet most people who roleplay with AI don't keep it fully SFW. I mean, come on.

But an even bigger part of the explanation is that AI companions specifically are not well suited for NSFW roleplay. Too limited, too censored, and you probably don't want the thing that you use for study and random lookups, and which has accumulated a lot of info about you, to also know your kinks.

Gooners mostly don't use AI companions for that; they use things like Character.AI or SpicyChat. These apps/sites neither market themselves as AI companions, nor are they in fact companions. Rather, they are roleplaying platforms that don't have a companion with a set persona, but instead host a bunch of UGC bots with different personalities and/or scenarios, with no information carried over between chats. They also allow varying amounts of customization, and typically have different base models to choose from. Very different compared to Grok or Claude.

So they mostly don't use AI companions for that because they have better tools for the job, so to speak.