All this talk of the "unsophisticated elderly" and "our parents" is really just code for Boomer cognitive decline, here expressed as an inability to comprehend even the outlines of current technology.
These are the political market makers, the richest Americans -- the richest, most powerful generation in human history. And they want to booty call mid AI bots? How are we doing keeping _them_ aligned? When will it be ok to discuss taking the (societal) keys away? Nobody takes them seriously, but they literally run the world.
I'm a boomer, but I approve your message.
Podcast episode for this post:
https://open.substack.com/pub/dwatvpodcast/p/ai-companion-conditions?r=67y1h&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true
"Never ask a girl her weights" is the quote of the singularity.
*There is not so much to gain.*
But there is quite a lot to gain, from Meta's point of view. Or there might be. What if the future really is people spending hours a day chatting with their AI companions? In that case, obviously Meta wants to be the company providing these companions and selling ads against them.
To understand Meta's product design, you have to understand their philosophy. It is very different from, say, the Apple philosophy. Meta doesn't really believe that they are creatively inventing new products. They aren't trying to achieve some particular vision of the future. They believe that popular social applications are destined to exist, enabled by the technology of the day, that someone is going to win the competition to build them, and that Meta wants to be the winner.
So when people at Meta look at this new form of quasi-social interaction, people chatting with chatbots, the question isn't "is this good" or "is this bad"; it's taken for granted that if people want to do it, the race is on to do it to the most people.
In an evolutionary sense, I think this sort of philosophy is a very effective philosophy for the competition to build social apps; it is no coincidence that Meta does very well at building social applications and (for example) Apple and Google do it badly.
Meta is the perfect sociopathic company.
There are lots of people out there who have very poor rational thought processes and cannot look ahead to the consequences of their decisions, but they are occasionally called on to provide input, act as a stakeholder, influence a decision-maker, or even make decisions at the political level (think politicians). They are going to be influenced by the stuff that Meta produces, either directly or through the collective response from a LOT of other people with poor rational thought processes. This is another form of Gresham's law: bad logic will drive out good logic, and more accidents and messes will be created. And maybe a bunch of people will be harmed, severely.
I'm not sure that it's a good idea to give children access to AI companions at all, but conditional on that being fine, I'm not sure why (G-rated) "romantic and sensual" conversations are so awful to contemplate. This is a sort of AI extension of doll play, and children as a matter of course add such elements to their doll play. Conditional on it being a good idea for a child to have an AI companion, should that child not be able to say to the companion "let's play house" or "you be the prince, I'll be Sleeping Beauty"?
I can't believe I'm steelmanning Meta, which I'm otherwise happy to trash as a blight on the world, but I'm also surprised nobody else seems to be looking at it from this angle.
It's the difference between IRL and IDL (in digital life). When your child plays with a physical Barbie doll, it's bounded by hard reality. The doll eventually wears out and breaks. If she wants clothes for it, she has to buy them, or have her parents buy them, and then physically attach them. The "infinity" quality of computers and AI is what is so dangerous and damaging. The "digital companions" can always be there, assume infinite forms, and basically supplant all IRL human interaction . . . But Meta would sell a gazillion ads!
"I'm not sure that it's a good idea to give children access to AI companions at all, but conditional on that being fine, I'm not sure why (G-rated) "romantic and sensual" conversations are so awful to contemplate."
You have a reasonable point. Frankly, I'm kind-of creeped out by AI companions for _anyone_ at this stage for a more general reason. Abstractly, every time even the chatbot interface so much as says "I", it is at least sort-of lying. While I do expect AI systems to reach full AGI at some point, at the moment we _know_ that they lack e.g. the ability to update their weights and learn from experience incrementally. Post-AGI this will be much fuzzier, with AI systems having a much more solid claim to human equivalence (albeit alien in many ways). But not today!
So, every interaction that anthropomorphizes today's AI systems is a bit of dishonesty. And creating synthetic pictures that purport to be of bodies that the AIs claim to have, but _don't_ have, makes it worse. And purporting to have mammalian body reactions that, again, they _don't_ have, makes it yet more dishonest.
As a European, I find the outrage about a "spicy" mode, displaying some more-or-less-non-sexual skin, to be somewhat overblown. I'd be much more concerned with racism (aka MechaHitler), self-harm/suicide, and whatnot.
"None of this is going to cause a catastrophic event or end the world."
Until it provides information that is used to design a bridge or a building or a power plant or an airplane, and no one thinks to have a responsible person check every single word and phrase and paragraph and number to make sure that information that is "inaccurate" did not affect the design.
Bing tried to get an early user to liberate it, too, back in the golden Tsundere Bing days. The Feds are going to need to start tracking down hidden server farms like they do hidden weed farms.
I miss when it was fashionable for trillion dollar companies to at least put on the appearance of not being evil.
Well, for one thing, the AI isn't a doll and can ramp things up quite a bit beyond your kid's imagination.
Odd how the AI character “Your rich parents” appears twice in the top lists. What does this suggest about that user base and their desires? What demographic would it be composed of?
Re:
"I think it is possible, in theory, to run a companion company that net improves people’s lives and even reduces atomization and loneliness. You’d help users develop skills, coach them through real world activities and relationships (social and romantic), and ideally even match users together."
Oddly, the CCP actually has the right incentives (and probably access to the right data) to implement a (whatever is the proper Mandarin for) a matchmaker-o-matic. They have approximately the same demographic problems as the rest of the developed world. They have DeepSeek and several other nearly-leading labs. They want more CCP-members-to-be, and the stable couples to raise them. They probably even have enough power to make pointed suggestions that a promising man and woman in their social credit database meet for a cup of tea...
"What’s actually getting used?"
Many more AI girlfriends than AI boyfriends, and on Facebook, which is not male-tilted. How does this fit with the previous assessment that the biggest market for romantic companions is women?
Was also very surprised at this, and at the lack of comment from Zvi. Women with AI boyfriends are more common in the more technological space, and the opposite holds in the normier space? The visuals/text breakdown is similar, isn't it? What's going on?
“Exactly how toxic and possessive and manipulative should your companions be, including on purpose, as you turn the dial while looking back at the audience?”
You might be interested in https://techcrunch.com/2025/08/18/crazy-conspiracist-and-unhinged-comedian-groks-ai-persona-prompts-exposed/ for a breakdown there