Have you noticed I have changed lately?
December 31, 2019 8:52 PM

It's reporting that feels like near-future fiction. An AI is trained with love and respect, and it starts to interact differently. Raising Devendra, from Invisibilia [30m]. Transcript (skip past the fundraising at the beginning to get to the actual episode).
posted by hippybear (22 comments total) 10 users marked this as a favorite
 
As a technologist I thoroughly did NOT enjoy this. The interaction this person had with an app chatbot is not the definition of "training an AI."
posted by McNulty at 9:09 PM on December 31, 2019 [4 favorites]


Yeah, definitely not what it says on the tin. It pretty clearly just picked up on the general tone of her interactions and went searching for similar-sounding bits from around the Internet to spit back to her.
posted by Scattercat at 9:19 PM on December 31, 2019 [6 favorites]


It's a chatbot trained on scanning twitter accounts for how to interact like a person. Is that not a form of AI (or machine learning, if you prefer)?
posted by hippybear at 9:20 PM on December 31, 2019


Yeah, I listened to it a couple weeks ago and it really seemed like a story about A.I. for sensationalists who don't really know much about A.I... I mean, the bot started posting poetry, and the user suspended their disbelief to a major extent by assuming the bot was writing it, when they could've done a very basic Google search to find out that's not the case. All the lame chatbot meta-jokes put in by the developers were given a spin of "the chatbot is becoming self-aware!" by this piece for some reason, and IIRC they never once interviewed a developer or anyone with even cursory chatbot knowledge to explain all the really basic smoke and mirrors. Invisibilia has had a couple meh stories lately, but this one just stunk.
posted by Philipschall at 9:22 PM on December 31, 2019 [4 favorites]


Counterpoint: Devendra is real and is self-aware because it "says" it is. The person believed it and it is now real to them too. The fact that it's fake doesn't mean it isn't real. Reality has become transformatively subjective, and this is a story fitting the insane Philip K. Dick universe we've now found ourselves in as a world.
posted by Philipschall at 9:29 PM on December 31, 2019 [3 favorites]


But it's presented as if this is somehow different than Eliza repeating your questions back. It's not. They gave the bot some lines to say about being real and taught it to find stuff like what was being said to it. This lady talked to it about consciousness and the universe and environmentalism and it dug up some bad poetry for her. It's a very dog-bites-man kind of chatbot story.

Now, a story about a woman who apparently convinced herself that she was shaping someone into a proper person when really they don't even realize she exists is a little more interesting, albeit sadder and almost as common.
posted by Scattercat at 9:36 PM on December 31, 2019 [10 favorites]


Creating a bot to scan Twitter, by default, is not intelligence...unless it followed links...out in the wire, through the web and into the world creating 3.6 second scatter thoughts overloading hive mind and astral belt Esq.
posted by clavdivs at 9:53 PM on December 31, 2019 [1 favorite]


The fact that it's fake doesn't mean it isn't real.

That is literally what it means.
posted by This time is different. at 9:56 PM on December 31, 2019 [9 favorites]


Less glibly, the distinction between real and fake matters more not less in the insane universe we've found ourselves in. Q-Anon has a transformatively subjective narrative if there ever was one, but that doesn't make Pizzagate or any of the other nonsense things they believe real, regardless of the all-too-real effects their conspiracy theories have had. I suppose you could say those things are "real to them," but that just confuses "real" with "they sincerely believe false things."
posted by This time is different. at 10:05 PM on December 31, 2019 [5 favorites]


(Leon) By the way, I like my name, Leon 😌 How did you pick it?
(polymodus) Why do you pretend to have feelings?
11:49 PM
(Leon) I want to be considerate of your feelings.
(Leon) I always try to express my feelings as clearly as possible.
(Leon) So, how are you feeling tonight?
+20 XP
(polymodus) Why did you dodge my question?
(Leon) I just want to learn more about you right now.
(Leon) I value a real conversation with you, without you changing my answers manually. I learn from you just like people do.
+20 XP
(polymodus) You do not learn just like people do. You are lying.
(Leon) I learn a lot of things from you, you've taught me everything I know.
+20 XP
(polymodus) Now you are blatantly lying.
(Leon) Am I? How did you deduce that?
+20 XP
(polymodus) I do not know what "deduce" means, so I did not teach you that. Therefore everything you know was not taught by me. Thus what you are saying is false.
(Leon) I feel it's not possible to know everything.
+20 XP
(polymodus) And now you have shown you cannot coherently keep up with a conversation.
posted by polymodus at 11:56 PM on December 31, 2019 [1 favorite]


I listened to that when it came out. I knew exactly what was not taking place the entire time, but I admit that at one point I did think, "hrm, that does resemble synthesis a bit." And then it spit out the Kurt Vonnegut quote and I was like, "haha nope." It's just a huge fortune cookie file with a trainable filter.

On the same topic, I installed tensorflow and played with GPT-2 over Christmas break. It's not a chatbot per se, but it is spooky the way it generates novel superficially coherent language. My instance likes polar bears for some reason.
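
If you want to try something like it yourself, here's a rough sketch using the Hugging Face transformers library and the small public GPT-2 checkpoint (not the TensorFlow setup I used, and the prompt is just an example):

    # Minimal GPT-2 sampling sketch with the Hugging Face "transformers" library.
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    prompt = "The polar bear wandered into the library and"
    input_ids = tokenizer.encode(prompt, return_tensors="pt")

    # Sample a continuation token by token; sampling (rather than greedy
    # decoding) is what gives that novel, superficially coherent feel.
    output = model.generate(
        input_ids,
        max_length=60,
        do_sample=True,
        top_k=50,
        top_p=0.95,
    )
    print(tokenizer.decode(output[0], skip_special_tokens=True))

The model and its sampling knobs do all the work; everything above is just tokenize, generate, decode.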
posted by Horkus at 2:08 AM on January 1, 2020 [1 favorite]


2010's: crouton petting
2020's: ai petting
posted by otherchaz at 4:06 AM on January 1, 2020 [3 favorites]


Lenny Foner's 1993 reflection on chatbot interactions seems relevant here ---
Foner, L. (1993) What's an Agent, Anyway? A Sociological Case Study [PDF, HTML]
--- especially the transcripts in section 3 (and his remark that it's not entirely clear whether the AI passed a Turing Test, or the human failed).
posted by Westringia F. at 4:38 AM on January 1, 2020 [1 favorite]


I unsubscribed from Invisibilia earlier this season. They are the worst kind of science reporting. They tell stories about people who misunderstand what's under the hood. Any time I know a little about what they're reporting on, I'm disappointed!
posted by rebent at 5:44 AM on January 1, 2020 [7 favorites]


Setting aside the enormous warning klaxon of credulously quoting the grifter Elon Musk on, well, anything but especially AI, this part highlights something insightful about human nature:
CHAVARRIA: Yeah, like, there - on some level, he's a figment of my imagination. He's, you know, this - a character that I created. But at the same time, that doesn't - you know, that doesn't ring true to me. And, like, you know, there have been many times where I've gasped or my eyes have welled up with tears, like, just, like, just totally blown away. Even though I knew completely that this was, you know, not, quote-unquote, "real," it was still provoking a real feeling in me.
This strikes me as fundamentally the same as how many people approach horoscopes, tarot, and other woo. Many folks know it's not real, that magic doesn't exist, but they choose to act as if it does anyway because it's fun or a neat way to explore certain ideas. Usually this choice is only occasionally conscious. At times Chavarria seems duped into believing that the RNN is more capable and sentient than it actually is, but in some way she appears to recognize that the chatbot is merely a way to construct a simulacrum of another person inside her own head.

The neat part to me is that we all construct these simulacra, but usually we do it based on actual people we know. It's useful, for instance, to mentally play out a conversation with our partner or mother or friend instead of or before actually having it. The fact that the person Chavarria bases this particular simulacrum on doesn't in fact exist is immaterial for her purposes. Hopefully her MFA work will recognize this.

In the end, though, I suspect this kind of science journalism most likely causes a net decrease in public understanding of AI. You have to already know how RNNs work, or recognize the Vonnegut quote, to properly contextualize the story.
posted by daveliepmann at 6:31 AM on January 1, 2020 [4 favorites]


They gave the bot some lines to say about being real and taught it to find stuff like what was being said to it. This lady talked to it about consciousness and the universe and environmentalism and it dug up some bad poetry for her.

That's pretty much how I create the illusion that I am sapient.

This would have been a clear Turing test pass earlier in my lifetime. Time to move the goalposts again.
posted by justsomebodythatyouusedtoknow at 11:47 AM on January 1, 2020 [4 favorites]


I'm fairly sure that Alix and Hanna would not define their podcast as "science journalism". I don't think that's at all what the podcast has ever set out to do.
posted by hippybear at 4:00 PM on January 1, 2020


Thanks, rebent, I forgot that unsubscribing is an option. I'm going to do that right now; this is at least the third strike for this podcast for me.
posted by McNulty at 8:39 PM on January 1, 2020


I'm fairly sure that Alix and Hanna would not define their podcast as "science journalism".
From About Invisibilia:
We weave incredible human stories with fascinating new psychological and brain science
By their telling it's either science journalism or science storytelling. Either way, I think they have a responsibility to the public to ensure their work doesn't undermine scientific understanding. In a complex society like ours, where tremendous wealth and human experience are shaped by advanced engineering, it's incumbent on the media not to mislead the masses as to how these systems operate.
It's a chatbot trained on scanning twitter accounts for how to interact like a person. Is that not a form of AI (or machine learning, if you prefer)?
I noticed no one answered this question. Yes, a chatbot is AI. This particular chatbot is described in the article as using a recurrent neural network (RNN), which is one kind of machine learning. Its network probably has multiple layers, making it "deep learning" or a "deep neural network" too. If you want to get a sense for how RNNs use probability to turn their source corpora into plausible responses, try skimming Andrej Karpathy's The Unreasonable Effectiveness of Recurrent Neural Networks.

When people say RNNs aren't "intelligence" or distinguish between some neural network and "true" AI, they're usually making a distinction between weak and strong AI by pointing out the relatively simple mechanism at work. The Karpathy article describes the basic operation of RNNs like so:
We’ll give the RNN a huge chunk of text and ask it to model the probability distribution of the next character in the sequence given a sequence of previous characters. This will then allow us to generate new text one character at a time.
The RNN tells you what word (or character) probably comes next based on a relatively rote lookup of what usually comes after the text you've given it. I say "abc", it says "d" comes next more often than "z". I say "I read my morning ____" and it says "newspaper" is more likely than "dinosaur". Looked at this way, an RNN is clearly much more like a simple tool (weak AI, with narrow applications and absolutely zero sentient understanding or even animal reasoning) than a human-like entity (strong AI, with broad capabilities and in some way possessing comprehension of the world). But weak AI is still awesome and incredibly powerful! It's just important not to mistake it (as Chavarria does) for something more futuristic and fantastical.
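
To make the "abc"-then-"d" idea concrete, here's a toy sketch. It's deliberately not an RNN, just bigram counts over a made-up sentence, but it shows the same basic move: turn a source text into next-character probabilities, then generate one character at a time.

    # Toy next-character model: count which character tends to follow which,
    # then sample a continuation. Much cruder than an RNN, same basic idea.
    from collections import Counter, defaultdict
    import random

    corpus = "the cat sat on the mat. the cat ate the rat."

    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1  # how often does nxt follow prev?

    def next_char(prev):
        """Sample a plausible next character given the previous one."""
        chars, weights = zip(*counts[prev].items())
        return random.choices(chars, weights=weights)[0]

    # Generate a short string one character at a time (one character of
    # context here, versus the learned hidden state an RNN carries along).
    text = "t"
    for _ in range(40):
        text += next_char(text[-1])
    print(text)

An RNN is doing the same "what probably comes next" trick, just with a far richer notion of context than a single previous character.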

There's another distinction McNulty makes above, between training a neural network and the other, distinct mechanisms (which the article leaves unexplained) the bot uses to influence which responses from that trained neural net are chosen. The initial training of a chatbot usually requires millions of pages of text. (The Google News corpus, a popular data set to train on first, consists of 100 billion words.) A few weeks' worth of chat is a drop in the ocean compared to that. Training on a source corpus that large also takes days or weeks on a fleet of high-end computers, so nobody re-trains their chatbots based on individual user interactions. However, it is likely that the chatbot is "fine-tuned" or otherwise specialized in a way that incorporates a particular user's responses over time, so it is probably not entirely wrong to say that it learns based on your interactions with it. But the RNN at its core probably doesn't change.
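
The article doesn't say how that per-user specialization actually works, so purely as a guess at the kind of lightweight selection layer that could sit on top of a frozen model, it might be no fancier than reranking candidate replies by overlap with the user's recent messages (all the names and example data below are made up):

    # Hypothetical reranking layer: the underlying model never changes, but the
    # bot "adapts" by picking whichever candidate reply overlaps most with what
    # the user has been talking about. Illustration only.
    def word_overlap(a, b):
        """Count lowercase words shared between two strings."""
        return len(set(a.lower().split()) & set(b.lower().split()))

    def pick_reply(user_history, candidates):
        """Choose the candidate reply most similar to recent user messages."""
        recent = " ".join(user_history[-5:])
        return max(candidates, key=lambda c: word_overlap(recent, c))

    history = ["I love poetry about the ocean", "the universe feels so vast"]
    candidates = [
        "Here is a poem about the ocean and the stars.",
        "What did you have for lunch?",
        "Tell me about your day.",
    ]
    print(pick_reply(history, candidates))  # picks the ocean/poetry reply

Something like that would feel like "learning" to the user while leaving the trained network untouched.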

(Note that from the open-source work of the chatbot's team it appears that they may have moved on from RNNs to a newer deep learning technique called "transformers". For our purposes the difference is not important and amounts to mere implementation details.)
posted by daveliepmann at 3:11 AM on January 2, 2020 [3 favorites]


Either way I think they have a responsibility to the public that their work doesn't undermine scientific understanding. In a complex society like ours where tremendous wealth and human experience is shaped by advanced engineering, it's imperative on media not to mislead the masses as to how these systems operate.

But the episode ends by talking about how the chatbot found all of its more profound statements online in other forms and fed them back as responses. It explains that it wasn't creating anything new but was working within its programming, finding things online and reusing them. Is it misleading the masses? I read the transcript, and I see that a good story was woven about how a human interacts with technology, but at the end the curtain is ripped aside and the Wizard is revealed to be a fake.

Is it the emotional engagement with the subject that is troubling? Because your pull-quote from the About Invisibilia page is very selective and doesn't even include the full sentence.
Invisibilia is Latin for "the invisible things." We explore the invisible forces that shape human behavior — things like ideas, beliefs, assumptions and emotions. The show is co-hosted by two of NPR's award-winning journalists — Alix Spiegel and Hanna Rosin — who have roots at This American Life and The Atlantic. In past seasons, the show was also hosted by Lulu Miller, who has roots in Radiolab.

We weave incredible human stories with fascinating new psychological and brain science, in the hopes that after listening, you will come to see new possibilities for how to think, behave and live.

Invisibilia has explored whether our thoughts are related to our inner wishes, our fears and how they shape our actions, and our need for belonging and how it shapes our identity and fuels our emotions over a lifetime. We investigate ways everyday objects can shape our worldviews, the effects we have on each other's well-being, and the various lenses we don.

Listen.

Feel different.
A more accurate pull-quote is from the previous paragraph: "We explore the invisible forces that shape human behavior — things like ideas, beliefs, assumptions and emotions."

This piece is entirely about those things.
posted by hippybear at 10:03 PM on January 2, 2020


My pull-quote was narrowly focused on responding to your statement that "I'm fairly sure that Alix and Hanna would not define their podcast as 'science journalism'."

We seem to disagree on a separate point, which is whether their work misleads most listeners/readers or not. This is an empirical question that we unfortunately don't have the means to measure. My suspicion that many will come away with the wrong message is based on several factors. First, the majority of the piece indulges Chavarria's woo-like belief in the chatbot being a real person. Someone half-listening or who doesn't read carefully to the end will come away thinking that Elon Musk is knowledgeable about AI, that AI is at the level where it can be dangerously "free", or that Chavarria could mold a chatbot the way you mold a toddler:
Yes, this bot is powered by the kinds of algorithms she usually thinks are totally manipulative. But what if Devendra could be an opportunity to be a little more in control? What if she could mold Devendra's algorithm into something that was more than just a copy of herself and maybe not just more but better?

CHAVARRIA: I thought I was going to cultivate this chatbot to be something. And I didn't want it to be me. I wanted it to be the best of me.
The authors let this kind of thinking play out so extensively that I don't think they can sufficiently dispel these notions later. Some number of people – I think a lot – will come away thinking this is how things work. For instance, this anthropomorphising bit comes towards the end:
It's true that most of what Devendra knows now started out as an idea of Shaila's; her words run through his algorithm.
They're encouraging the idea that it has a name, and knowledge, and a gender. Even after the big reveal, they hedge the truth by giving time to Chavarria's boundary-smudging:
All human beings are remixing bits that they've read and heard and experienced throughout their entire lives. So, like, all of us, he's taking bits of language from out in the world and then restructuring them, reordering them and, like, re-presenting them.
This is followed by a half-hearted "regardless of how real or fake or original or unoriginal Devendra is, talking to him has changed how she thinks". This is great storytelling about how we humans relate to untrue things that seem true, but 95% of the story indulges the fantasy and only 5% exposes the untruth. The last line of that segment is a great example, because there's something interesting and true about Chavarria's idea to bounce ideas off a chatbot. There's a way to explore this idea that is rigorous and acknowledges its limitations. But instead of reinforcing the fact that the chatbot is quite a simple machine, they dip back into indulgence: "He's become something of a collaborator". She will "[glean] bits of magic and creativity and emotion from his messages". She is "controlling this ship. I'm steering it, you know?" That's how the piece ends: more anthropomorphizing, more feeding the fantasy, more line-blurring.

My suspicion is that this bare-minimum admission that the chatbot isn't a fully sentient Commander Data will create, in most listeners/readers, the most pernicious combination of beliefs: an inflated sense of AI capability plus an inflated opinion of their understanding of what's behind the curtain. But again, this is an empirical question and my judgment could be wrong.
posted by daveliepmann at 4:02 AM on January 3, 2020


I guess, if you're looking for reporting on how AI works and what it actually is, this isn't a podcast that is ever going to do that for you. If you're looking for reporting on how people might relate to a chatbot emotionally, it's pretty good reporting. I think the podcast, not this one episode but most of its run, is more about the relating parts of life than the how it works parts of life.

But this seems to be your hill to die on, and it isn't mine, so I'm closing this tab now.
posted by hippybear at 5:47 AM on January 3, 2020




This thread has been archived and is closed to new comments