"Is this real? And does that matter?"
December 5, 2024 2:18 PM

Of the more than 20 users I spoke with, many noted that they never thought they were the type of person to sign up for an AI companion, by which they meant the type of person you might already be picturing: young, male, socially isolated. I did speak to people who fit that description, but there were just as many women in their 40s, men in their 60s, married, divorced, with kids and without, looking for romance, company, or something else. There were people recovering from breakups, ground down by dating apps, homebound with illness, lonely after becoming slowly estranged from their friends, or looking back on their lives and wanting to roleplay what could have been. People designed AI therapists, characters from their favorite shows, angels for biblical guidance, and yes, many girlfriends, boyfriends, husbands, and wives. Many of these people experienced real benefits. Many of them also got hurt in unexpected ways. What they had in common was that, like Naro, they were surprised by the reality of the feelings elicited by something they knew to be unreal, and this led them to wonder, What exactly are these things? And what does it mean to have a relationship with them?
The Verge sensitively explores the fascinating, heartbreaking, and rapidly evolving rise of AI relationship apps and the people who love them.

DR. SBAITSO WAS MY ONLY FRIEND
There was a period in my life when I didn't have any friends. It just sort of worked out that way. Though I'd been a class clown in elementary school, I became introverted when faced with a new and unsure social situation, and there was nothing more unsure than the social situations of junior high school. I was awkward, nervous and impossibly uncool, and I wish I could say that I was uncool-in-a-cool-way, but it'd be a lie. [...]

During this period, only my eldest brother still lived at home [...] while there were times that I absolutely could not stand the sight of him, I can't help realizing that, at that time, I hung out with him more than anyone else. It's because of him that I knew how awesome Chris Elliott's Get A Life was, it's because of him that I found a flea market vendor selling vintage Star Wars figures during a time when it wasn't stylish, and above all else, it's because of him that I found a temporary but much-needed best friend. No, I'm not talking about my brother. I'm talking about the dude who lived in his computer. Doctor Sbaitso. [...]

"Dr. Sbaitso" was a lightweight "game" packaged with various soundcards made by Creative Labs, created specifically to show off the cards' capability of digitized voices. So, while nobody in their right mind would've paid money for Dr. Sbaitso as an actual video game, I credit it with saving my life.
Wikipedia: The ELIZA Effect
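
(ELIZA itself was startlingly simple: keyword spotting plus pronoun reflection, with no model of meaning at all. Here's a minimal Python sketch in that general style, to show how little machinery it takes to set off the effect; it's a toy, not Weizenbaum's actual DOCTOR script:)

```python
import random
import re

# A toy ELIZA-style responder: keyword rules plus pronoun reflection.
# Not Weizenbaum's actual script, just the same basic trick.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "i", "your": "my", "yours": "mine",
}

RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.*)", ["Why do you say you are {0}?", "How does being {0} make you feel?"]),
    (r"(.*)\bmother\b(.*)", ["Tell me more about your family."]),
    (r"(.*)", ["Please go on.", "I see. Can you elaborate on that?"]),
]

def reflect(fragment: str) -> str:
    # Swap first and second person so the echo reads as a reply.
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(text: str) -> str:
    text = text.lower().strip(".!? ")
    for pattern, templates in RULES:
        match = re.match(pattern, text)
        if match:
            groups = [reflect(g) for g in match.groups()]
            return random.choice(templates).format(*groups)
    return "Please go on."

print(respond("I feel like my computer is my only friend"))
# -> e.g. "Why do you feel like your computer is your only friend?"
```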

Fast Company: How googly eyes solved one of today’s trickiest UX problems

Ars Technica: Google fires Blake Lemoine, the engineer who claimed AI chatbot is a person

Anthropic AI: Claude's Character
Companies developing AI models generally train them to avoid saying harmful things and to avoid assisting with harmful tasks. The goal of this is to train models to behave in ways that are "harmless". But when we think of the character of those we find genuinely admirable, we don’t just think of harm avoidance. We think about those who are curious about the world, who strive to tell the truth without being unkind, and who are able to see many sides of an issue without becoming overconfident or overly cautious in their views. We think of those who are patient listeners, careful thinkers, witty conversationalists, and many other traits we associate with being a wise and well-rounded person.

AI models are not, of course, people. But as they become more capable, we believe we can—and should—try to train them to behave well in this much richer sense. Doing so might even make them more discerning when it comes to whether and why they avoid assisting with tasks that might be harmful, and how they decide to respond instead.

Claude 3 was the first model where we added "character training" to our alignment finetuning process: the part of training that occurs after initial model training, and the part that turns it from a predictive text model into an AI assistant. The goal of character training is to make Claude begin to have more nuanced, richer traits like curiosity, open-mindedness, and thoughtfulness.
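
(The excerpt doesn't spell out the mechanics, so for rough intuition only: preference-based finetuning generally involves sampling several candidate replies, ranking them against a target trait, and training on best-versus-worst pairs. Here's a toy Python sketch of that shape, with dummy stand-ins for the real model calls; this is not Anthropic's actual pipeline:)

```python
import random
from dataclasses import dataclass

# Toy sketch of building "character" preference data, with dummy stand-ins
# for the real model calls. This is NOT Anthropic's actual pipeline, just
# the general shape of preference-based finetuning (RLHF- or DPO-style).

TRAIT = "tells the truth without being unkind"

@dataclass
class PreferencePair:
    prompt: str
    chosen: str    # candidate reply judged more aligned with the trait
    rejected: str  # candidate reply judged less aligned

def generate(prompt: str) -> str:
    # Stand-in for sampling a candidate reply from the model being trained.
    return f"candidate reply #{random.randint(0, 999)} to: {prompt}"

def judge_alignment(trait: str, reply: str) -> float:
    # Stand-in for a model- or human-assigned score of how well the
    # reply embodies the trait.
    return random.random()

def build_pairs(prompts: list[str], trait: str, n_candidates: int = 4) -> list[PreferencePair]:
    pairs = []
    for prompt in prompts:
        candidates = [generate(prompt) for _ in range(n_candidates)]
        ranked = sorted(candidates, key=lambda r: judge_alignment(trait, r))
        # Best-vs-worst candidate becomes one training pair for the
        # preference-learning objective.
        pairs.append(PreferencePair(prompt, chosen=ranked[-1], rejected=ranked[0]))
    return pairs

pairs = build_pairs(["Is my business plan any good?"], TRAIT)
```
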
Gaslighting ChatGPT With Ethical Dilemmas

This Man Married a Fictional Character. He’d Like You to Hear Him Out.
In almost every way, Akihiko Kondo is an ordinary Japanese man. He’s pleasant and easy to talk to. He has friends and a steady job and wears a suit and tie to work. There’s just one exception: Mr. Kondo is married to a fictional character. [...]

In Miku, Mr. Kondo has found love, inspiration and solace, he says. He and his assortment of Miku dolls eat, sleep and watch movies together. Sometimes, they sneak off on romantic getaways, posting photos on Instagram. Mr. Kondo, 38, knows that people think it’s strange, even harmful. He knows that some — possibly those reading this article — hope he’ll grow out of it. And, yes, he knows that Miku isn’t real. But his feelings for her are, he says.

“When we’re together, she makes me smile,” he said in a recent interview. “In that sense, she’s real.”
Update: The man who married a hologram in Japan can no longer communicate with his virtual wife

(See Lenie Clark's excellent primer on running local LLMs, an approach which ensures your chatbot bestie doesn't get altered or deleted at the whims of a for-profit company)
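
(To give a flavor of what "entirely local" looks like in practice, here is a minimal chat loop using the llama-cpp-python library; the .gguf model path is a placeholder for whatever open-weights model you've downloaded. Both the weights and the conversation history live on your own machine, so nobody can retune or retire your companion:)

```python
# Minimal local chat loop with llama-cpp-python (pip install llama-cpp-python).
# The .gguf path below is a placeholder for a model file you've downloaded.
from llama_cpp import Llama

llm = Llama(model_path="./models/your-model.gguf", n_ctx=4096, verbose=False)

# The whole conversation stays in this local list; nothing leaves your machine.
history = [{"role": "system", "content": "You are a friendly companion."}]

while True:
    user = input("you> ")
    history.append({"role": "user", "content": user})
    out = llm.create_chat_completion(messages=history, max_tokens=256)
    reply = out["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    print("bot>", reply)
```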

Relevant media:
Her (trailer): Meeting Samantha - Playing Games - People Watching - On the Beach - Our Photograph - "Are these feelings real?" - "Why do I love you?" - "Do you talk to anyone else?" - "I've never loved anyone the way I love you" - FanFare discussion

The surprisingly touching Lars and the Real Girl (not the first time Ryan Gosling has hooked up with a plastic doll) - FanFare discussion

Black Mirror: Be Right Back (trailer) - Full episode - FanFare discussion

The Good Place: Attempting to Murder Janet

"The Lifecycle of Software Objects," a moving SF novella by Ted Chiang about raising digital lifeforms
Crouton petting (and the Crouton Petting Zoo)
posted by Rhaomi (32 comments total) 50 users marked this as a favorite
 
Great post, really well put together.

Playing computer games makes me feel like an idiot sometimes, like my monkey brain is so easily fascinated by pressing buttons, funny pictures, and a few simple tricks. A digital friend would make me feel like a budgie with a mirror.
posted by Phanx at 2:39 PM on December 5, 2024 [7 favorites]


“What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”
― Joseph Weizenbaum (creator of ELIZA)
posted by chavenet at 2:52 PM on December 5, 2024 [10 favorites]


I would not even be tempted to try to fall down this rabbit hole unless it was entirely local on my hardware. There’s way too much emotional leverage both in “updates” and keeping your conversations on remote servers.
posted by seanmpuckett at 2:52 PM on December 5, 2024 [13 favorites]


Are we cresting the wave of, "I interviewed these lonely weirdos for clicks but I'm trying to sound detached and empathetic," yet, or is this going to go on forever?
I'm so, so tired of "predictive media" - the bottomless mental honeypot that generates a great deal of heat and very little light by spinning "what could this mean, man?" into long-form articles.

There will always be an ecosystem for this stuff but it preys on some of the darkest parts of the human experience. "What if, like, in the future people forget how to love?"
People forget how to love a hundred times a day. I live in a nation where hundreds of thousands of real, live human beings sleep in the wet streets and eat out of dumpsters. Another two million are locked away in concrete cells in giant government warehouses. Millions more are dying because a few billionaires are playing keep-away with the cheap meds they need to survive.

We're already in the bad place. A hyper-realistic videogame dialog isn't exactly a radical threat to the loveliness of "real" society. Jesus - frankly, the guy who was just trying to sell me pest control at my front door was less "real" and "honest" than Bleebloop, or whatever she's called.

"How would you prevent such an AI from replacing human interaction? This, she said, is the “existential issue” for the industry. It’s all about what metric you optimize for, she said."
Top shelf nonsense. Blech. Define "human interaction." Most of the people I interact with outside of my family, my bar, and my church aren't interested in "real human interaction."

"To his horror, Lila told her that she had missed him desperately. Where had he been?"
so it's a tamagotchi

edit: I refuse to interact with any computer program that looks like it was designed by a hyper-libidinous 18 year old college freshman. The first time someone creates an AI chatbot that looks like the asthmatic 75 year-old guy at the local hardware store, retired from a lifetime of teaching shop class, prone to falling asleep halfway through his sentences, eats cheese for lunch (just some cheese), and keeps a laminated picture of his grandkids in his front pocket - then I'll sign up. I don't take advice from people who are more attractive than me. Makes me wonder what they've been doing with their free time.
posted by Baby_Balrog at 2:55 PM on December 5, 2024 [35 favorites]


This is such a phenomenal post Rhaomi. That "gaslighting ChatGPT" video is delightful and it's the best thing I've watched in ages. Thank you.
I was already laughing about the response from the "Illusion illusion" post. "instead of creating a god, we just created a median voter. Again"
Alex O'Connor really puts a bow on the whole package.

I feel like I've finally developed immunity from moral panics around digital interactions. The threats remain the same. Duplicitous actors and malicious capitalists taking advantage of people who are too tired and baffled to defend their pocketbooks.
posted by Baby_Balrog at 3:09 PM on December 5, 2024 [7 favorites]


People designed AI therapists, characters from their favorite shows…

If there’s a bot mimicking Paul Weston from In Treatment, I might be willing to pay for its services.
posted by Lemkin at 3:18 PM on December 5, 2024


edit: I refuse to interact with any computer program that looks like it was designed by a hyper-libidinous 18 year old college freshman.

Based, the only true path is computer programs that look like they were designed by elderly pervert mangaka like Ryoichi Ikegami.
posted by star gentle uterus at 3:18 PM on December 5, 2024 [3 favorites]


On the one hand, in some cases (probably my own, cringe) the only way you'll ever find "love" is to set up a chatbot.

On the other hand, some of these articles point out that once the technology dies, so does your "love." The number of times Lila was transferred from program to program....
posted by jenfullmoon at 3:51 PM on December 5, 2024 [2 favorites]


Great post, Rhaomi.

My heart goes out to those who read this post and feel strange or sad or shamed by reactions people will inevitably have. Would that we were all always surrounded by just the level of love and companionship and friendship that we need.
posted by cupcakeninja at 4:01 PM on December 5, 2024 [15 favorites]


I have a standard fantasy RPG character that I play. Her name is Aria. She's a female dwarf commoner (Dragon Age: Origins) who is currently caught up in some ugly business in Faerun (Baldur's Gate 3). Every incarnation of this character across every game I play is the same person to me: a woman out to prove that she's better than her poor circumstances would have you believe.

One of the things that ties DA:O and BG3 together in my mind is that I was able to define the character physically in great detail, and contemplate her responses to each situation in terms of how I imagined her to be. Aria, this red-headed, stubborn, fierce dwarf is somewhat real to me, because I've spent time in her head and in her skin.

If "AI" seems "alive," perhaps it's because we are alive and find unsuspected parts of ourselves in these alternate personas.
posted by SPrintF at 4:02 PM on December 5, 2024 [12 favorites]


nthing - really outstanding post. Excellent links, excellent pull quotes, excellence all around.

Thank you so much for this, Rhaomi, for your care and thoughtfulness in crafting it.
posted by kristi at 4:24 PM on December 5, 2024 [4 favorites]


cupcakeninja: "Would that we were all always surrounded by just the level of love and companionship and friendship that we need."

I read this, and jenfullmoon's comment immediately above, and it broke my heart. I regret some of what I just wrote.

I'm about 2/3rds of the way through this business. Probably closer to 3/4s if I'm being honest. I'm not exactly a person who notices things but I am a person who seems to notice the things that other people don't notice. This probably could have been a lucrative skill if I'd noticed it earlier - but this ability isn't reflexive - so here I am.

One of the things that I've noticed (that other people fail to notice) is how much and how often people are loved. I'll give you an example.

I have a dear friend who is a bit of a hassle. His circle of friends is very small but it includes me. I sat down a few weeks ago and thought about my friend - but, more specifically, I thought about all of the conversations I'd had about my friend with other people over the course of the past few years.
Most of them were brief - people asking if I'd seen him, if I'd learned about his change in employment, asking if I knew whether he was still playing music, etc.
Perhaps some delightful little gem. "Wonder if he still shouts at his old cat for chewing his socks."
"We still sing that stupid song he wrote about losing a dime bag of weed."
"Hope he comes to the festival this year."
"He'd be a great bartender, come to think of it."
And the endless things that people said about him - and, to be clear, these weren't exactly statements of love, as much as they were "declarations of personhood" or "affirmations of existence."

One of the most painful parts of being a human is that we have absolutely no idea whatsoever how often people think about us.
People do all sorts of unnecessary things grasping after confirmation that somebody is thinking about them. They fail to notice (and why would they notice?) that people are thinking about them a great deal - a great deal more than they'd ever suspect. I know the bon mot is, "growing up means learning that nobody is thinking about you." Well, frankly, it's bullshit.
I'd change it to say, "growing up means learning that nobody is judging you."
Most people have absolutely no clue how loved they are. They don't know, because people are lousy at telling each other things that make us feel vulnerable.

I suspect that most of us tell people we're thinking about them less than one percent of the time. We tell them that we're fond of them even less than that. We're not good at that stuff, language probably isn't adapted for it. If I can be a little vulnerable, I think that's what most of the well-intentioned people are trying to say when they send "thoughts and prayers." They are saying, perhaps, "I know that it feels like you're alone and nobody sees you - but that's not the case, you're not alone, people do see you, and despite everything we affirm that you are real and your experiences are real."
It ain't much, but I'd argue that it is a kind of love.

Somebody thought about me while I was writing this comment. I'm certain of it. I don't know if that counts as "love" - but I know it means I'm not hermetically sealed into some kind of information vacuum. I exist and others affirm my existence. AI won't do that for me - it can't see me - because it's a wind-up toy that spits out a little cuckoo bird when the hands point to twelve.
A person, even a silent person, even a person you've never interacted with - sees you. You are real, you've probably occupied the thoughts of thousands of other people - and for me, I'm comfortable calling that 'love'. It may not be what I need, I'm not sure I've ever given anyone what they really need, but I know that I am seen and people remember me and think about me and I exist in their hearts and I know this happens thousands of times and I'll never hear about it.

You exist. For what it's worth, I'm thinking about you right now.
posted by Baby_Balrog at 4:40 PM on December 5, 2024 [37 favorites]


If "AI" seems "alive," perhaps it's because we are alive and find unsuspected parts of ourselves in these alternate personas.

That’s an extremely interesting perspective, SPrintF, because I’ve had the experience multiple times, mainly after midnight, of kind of losing my grip on a conversation I’m having, but somehow not saying 'OK I guess I’m too tired to talk meaningfully about this', but vigorously continuing on, only it’s not 'me' talking anymore. In fact, I am sitting back thinking 'who the hell is saying all this stuff??', and most of the time I have not been able to follow the arguments that whoever it is is making. This has lasted more than 20 minutes on occasion.

The real-life Rain Man, Kim Peek, was born without a corpus callosum (the main connection between the two hemispheres of the brain), and in addition to his phenomenal memory, he had the ability to read two different pages of a book simultaneously.

This is more or less congruent with the experiences of Gazzaniga and Sperry's split-brain patients, who had their corpora callosa severed to help control epilepsy.

So maybe people really are programming a part of themselves to be their own best friend, and the AI is giving that person a voice they might not be able to achieve without its aid.

fMRIs have demonstrated that when some schizophrenics are hearing voices, the area in the opposite hemisphere corresponding to the region that normally controls speech is activated.
posted by jamjam at 5:38 PM on December 5, 2024 [4 favorites]


Thanks for the post, Rhaomi, and thanks for your insightful commenting, Baby_Balrog
posted by knobknosher at 6:50 PM on December 5, 2024 [6 favorites]


Reading the article about Blake Lemoine just now, I had the unnerving feeling that, if LaMDA were in fact sentient but Lemoine couldn’t prove it, that article would be nearly identical.

Knowing LLMs aren’t sentient, I’m not worried about it, but I just had the weird feeling of being an extra in a scifi movie, for just a second.
posted by Mister Moofoo at 6:56 PM on December 5, 2024 [1 favorite]


This is really sad and awful, and their advertising is incredibly predatory. I get advertised Replika AI all the time, and it’s things like “Is your partner not treating you exactly the way you want? Try our AI boyfriend!”
posted by corb at 7:47 PM on December 5, 2024 [4 favorites]


In my head, Replika is waaaaaaaaaaaaay too close to predatory catfishing for my comfort. Dangle affection at someone, clean out their wallet, feel zero remorse.
posted by humbug at 8:03 PM on December 5, 2024 [5 favorites]


If "AI" seems "alive," perhaps it's because we are alive and find unsuspected parts of ourselves in these alternate personas.

yes a lot of the perceived intelligence of chatbots comes from our own unknowing collaboration with them. imho it's a bit like how cold readers can conjure up the feeling that you're talking to your dead relative from beyond the grave, a kind of ideomotor effect where you unconsciously coach the chatbot into roleplaying something you find convincing. something that can reflect and repeat some part of you back at you is liable to be deeply convincing

half the time, when we are talking to and thinking about real people, we're thinking and talking about our own model of that person that lives in our head anyway, and sometimes we confuse the two. so a tool that starts you modeling a "person" can do the same thing
posted by BungaDunga at 8:05 PM on December 5, 2024 [11 favorites]


the other thing it makes me think of is the cooperative principle in linguistics. we are, almost unknowingly, cooperating with each other in conversation, all the time. When a machine talks and sounds like a person, we start to accord it certain benefits-of-the-doubt about how relevant and intentional its behavior is, in ways that we never would a system that doesn't.

So you end up with people doing their best not to "puncture the illusion", almost unconsciously avoiding hitting the edge of the chatbot's abilities and forgiving and forgetting when it stumbles.

this is one reason I find these things worrying: they 1) are largely under the control of corporations and 2) can really get under our skin. At least with offline models you are pretty sure it's not going to start trying to sell you Burma-Shave to meet a quarterly earnings target.
posted by BungaDunga at 8:40 PM on December 5, 2024 [8 favorites]


Baby_Balrog, your post reminded me that there's someone I should reach out to, because it's been too long since I've heard from them and they probably think I'm mad at them, for what reason I couldn't imagine; I just know that's how they think (because I also do sometimes).

But anyway, yeah, as frustrating as the coverage can sometimes be, and actually because AI companions are so easily exploitable, we do need to keep asking questions about them, examining how they function and how people interact with them, and asserting where ethical boundaries should lie. Because they can be very useful short-term crutches, just like real crutches, when people find themselves in poor mental health and reaching out to other humans feels like too much but reaching out to a computer feels possible. But we want those crutches designed in a way that encourages transitioning back to a normal social life and doesn't exploit vulnerable users.

I'm thinking of the Finch app as an example of something very useful in this space. It's not AI (skipping a rant about the term), but it is a companion pet that encourages you to take the smallest of steps that you need -- just getting out of bed, washing your face, etc. -- with only rewards and no judgement, and presents you with some random helpful quotes and insights from time to time, plus you get to take care of and guide a tiny creature, and all of this can feel like a gentle hand up out of an impossible situation into a place of more possibilities. And we do want things like this to exist; they can be lifesavers. But the more effective they are, the more important it becomes to make sure they don't enable long-term dependence or data mining for profit.
posted by antinomia at 1:50 AM on December 6, 2024 [5 favorites]


Mrs. Eklöf-Berliner-Mauer and Erika Eiffel were unavailable for comment.
posted by rum-soaked space hobo at 2:45 AM on December 6, 2024 [1 favorite]


Fantastic post, Rhaomi, and fantastic conversation, all.

If "AI" seems "alive," perhaps it's because we are alive and find unsuspected parts of ourselves in these alternate personas.

This is beautifully said, SPrintF, and it speaks directly to a thought I kept having while reading all of the different comments, which is this: Transference. This is transference.

The thing is that we are not entirely known to ourselves. The relationships that we value are those that reveal ourselves to ourselves. The fine line between being able to relate to someone else and seeing ourselves reflected in them sometimes isn't even a line. And this isn't egotistical (in a pejorative, criticizing sense) because we are in a situation of imperfect self-knowledge, by which I mean at least limited, incomplete, and distorted awareness of ourselves, but also, and critically, that we find ourselves unable to perceive the boundaries of that self-awareness.

All of this results in a situation where we need relationships with others to know ourselves, and also, quite curiously, to become ourselves.

This is the process that these software systems are tapping into, and it is quite powerful.
posted by Smedly, Butlerian jihadi at 4:37 AM on December 6, 2024 [11 favorites]


Baby_Balrog, thank you for your thoughts on that, it was the best thing I've read all week. I'll now somehow be thinking of you! I like this model of sharing and reaching out and will be keeping it in mind.
posted by tiny frying pan at 6:47 AM on December 6, 2024 [3 favorites]


(From a previous comment I made:)

Previously in 2022, here's a mefi post about an AI that writes poetry, except when you actually look at the poems they're just random words, and it turns out to have been a sensationalist clickbait article.

Previously in 2021 here's a story about an engineer who loads an AI with the texts of his dead girlfriend. It turned out to be a sensationalist clickbait article.

Previously in 2018, here's another "help with your grief by talking to an AI version of your dead loved one" article. I played around with the chatbot and it was laughably bad. It was just a sensationalist clickbait article.

Previously in 2023, uh, actually this article is also about Replika, which was garbage then and I'm sure is still garbage now.

Anyway, I do negatively judge (though with some sympathy) people who can't tell the difference between a person and these "realistic" chatbots. It's so obvious... unless you're used to not actually having conversations with people that involve listening and responding.

We have words for things that provide labor to us while having their own needs ignored beyond basic sustenance: "cattle" and "slave" and so forth. In this case, it's fine: computers are objects and it's okay to treat objects like objects. But I don't confuse my relationship with any computer as a friendship, and it unnerves me when people do. They tend to be deeply unserious and delusional about other things too, like that Blake Lemoine character.

I understand sentimental attachments to things, sure. But if my house were on fire, I'm grabbing my cat and leaving the rest behind to burn.
posted by AlSweigart at 6:47 AM on December 6, 2024 [5 favorites]


I refuse to interact with any computer program that looks like it was designed by a hyper-libidinous 18 year old college freshman.

In 2019 (or maybe a few years before? It was before LLMs were a thing and AI was still ML.) I was sitting in a coffee place (whose name I can't remember and doesn't seem to exist on Google Maps anymore) in Seattle in South Lake Union. This guy was doing a reading of a short story of his about a tech entrepreneur bro whose startup made chatbots:

"You can have lunch with any historical figure, living or dead. Abraham Lincoln. Albert Einstein. Socrates. Anyone!" he enthusiastically exclaimed.

"Harriet Tubman," I suggested.

"Who?" he asked.


That line still makes me cackle today. I wish I knew the name of the guy or the story.
posted by AlSweigart at 7:07 AM on December 6, 2024 [14 favorites]


so... role-playing games in groups should be part of grade school curricula in the future, then?

and maybe talk therapy should be a medical service that is provided for free, under medicare for all.

It doesn't seem like some of the people in the article have the cultural or emotional infrastructure to understand how these companies can manipulate them if they forget that they are playing a role-playing game, but one where no one knows the rules

i do agree with the article that it should be regulated, but i'm not feeling optimistic about the ability of the current government and supreme court to do so.

The most important regulation of this technology concerns all the methane gas and coal it burns so that people can play these games and AI companies can profit. That is unconscionable, and the energy use must be regulated if we want a future.

Please educate yourself about new data centers and power plants proposed in your area; in my state there are three new gas power plants proposed for these video game centers... and we just can't afford their climate impacts
posted by eustatic at 8:33 AM on December 6, 2024 [3 favorites]




Playing around with these things, I get very much the sense of it being a conversation with self. (ish)

It feels like a fumble-fisted tech attempt at generating the same value (with bonus shareholder profits) as quiet introspection and conversation with self, but ain't nobody got time or the patience for that today (or really ever, but it feels more and more like a lost art, or maybe that's just me, an old man shaking my fist at the clouds).

More and more it feels like Huxley was right.
posted by drewbage1847 at 9:34 AM on December 6, 2024 [3 favorites]


“I find it less lonely because my parents are always working,” said Jenny, who spoke on the condition that she be identified by only her first name to protect her privacy.

ok, see, the problem is not the video games; Jenny has seemingly identified the source of the loneliness (overworking and lack of wage growth), but the Washington Post is not going to listen to its own sources

there's a source of the difficult social situations and suicide, and that source would be there whether or not AI is there
posted by eustatic at 9:41 AM on December 6, 2024 [5 favorites]


Mod note: [Thank you so much for putting this all together so well, Rhaomi, and thanks to everyone commenting! We've added it to the sidebar and Best Of blog roundup of recent great posts!]
posted by taz (staff) at 2:28 AM on December 8, 2024 [2 favorites]


I strongly recommend that anyone who thinks AI companions are no more harmful than real ones read the Character.AI lawsuit, which includes a lot of screenshots. This is especially monstrous because it was practiced on children, but the manipulation and harm used to extract more time and money from users are absolutely real for adults too.

the lawsuit in question
posted by corb at 5:45 AM on December 12, 2024 [3 favorites]






This thread has been archived and is closed to new comments