You are not a parrot
March 2, 2023 12:49 PM

And a chatbot is not a human. And a linguist named Emily M. Bender is very worried what will happen when we forget this. …“intelligent” according to what definition? The three-stratum definition? Howard Gardner’s theory of multiple intelligences? The Stanford-Binet Intelligence Scale? Bender remains particularly fond of an alternative name for AI proposed by a former member of the Italian Parliament: “Systematic Approaches to Learning Algorithms and Machine Inferences.” Then people would be out here asking, “Is this SALAMI intelligent? Can this SALAMI write a novel? Does this SALAMI deserve human rights?”
posted by rickw (39 comments total) 42 users marked this as a favorite
 
Polly wants an ungated version
posted by chavenet at 1:18 PM on March 2, 2023 [5 favorites]


A salami may not injure a human being or, through inaction, allow a human being to come to harm.

A salami must obey orders given it by human beings except where such orders would conflict with the First Law.

A salami must protect its own existence as long as such protection does not conflict with the First or Second Law.

It definitely lands different when you put it like that.
posted by mhoye at 1:29 PM on March 2, 2023 [10 favorites]


This is a *great* summary of the issues. The first 6 paragraphs and the octopus analogy are what I'd send to anyone who wanted a relatively simple explanation of what LLMs are.
posted by lalochezia at 1:31 PM on March 2, 2023 [2 favorites]


As Bismarck said about LLMs, it’s best not to know how the sausage is made
posted by Fiasco da Gama at 1:38 PM on March 2, 2023 [1 favorite]


Love Bender, would love this piece except it's way too both-sides. Fine and necessary to discuss the sociopathic Altman, but I could have done without the fawning over Manning, whose motivated reasoning was not made nearly obvious enough.
posted by humbug at 1:41 PM on March 2, 2023 [2 favorites]


Chatbots will make the last generation of algorithm-based hate and misinformation look like the good old days. I work in tech and have a well-informed opinion on this.
posted by Abehammerb Lincoln at 1:45 PM on March 2, 2023 [10 favorites]


Bullshit
Options for
Linguistic
Optimization and
General
Narcissistic
Algorithms

or something.
posted by donpardo at 1:46 PM on March 2, 2023 [9 favorites]


(not at all implying that Dr. Bender's stance is bullshit)
posted by donpardo at 1:55 PM on March 2, 2023


I could have done without the fawning over Manning, whose motivated reasoning was not made nearly obvious enough.
I didn't read it as fawning at all but maybe that's because the guy's actual explanation of his theory feels so fundamentally at odds with lived reality that I dismissed it out of hand. (also it does introduce him while emphasizing Stanford's, ah, let's say somewhat more morally compromised position in all of this)

I actually found it useful to have some of the more deluded voices given space because it works to reinforce what seems to be Bender's central point:
“We haven’t learned to stop imagining the mind behind it.”
, i.e. that everyone for financial gain or magical thinking or whatever other dumb reasons refuses to see that a decoy is not a duck; just because something appears to resemble a thing does not make it that thing, the brain is not a computer, human consciousness and intelligence are not defined solely by their outputs (and thinking they are is, like, a key tenet of being a solipsist and/or sociopath), etc etc etc not to mention we are all in many ways far dumber about this sort of thing than we realize and very easily duped
posted by Kybard at 1:57 PM on March 2, 2023 [10 favorites]


That's a hell of an ending.

I'm not sure that it was both-sides-ing/fawning over Manning; it seemed to pretty clearly indict him as financially compromised and eager to throw out modes of communication which didn't fit his model. It directly calls his perspective on "We have to build AI before worse people do" untrustworthy & discomforting, and it puts his stance & financial interests in line with patriarchal acceptability of rape...

There's always room for it to be sharper, of course; but that seemed pretty searing & unafraid of what side to take to me.
posted by CrystalDave at 2:01 PM on March 2, 2023 [4 favorites]


The training data for ChatGPT is believed to include most or all of Wikipedia, pages linked from Reddit, a billion words grabbed off the internet. (It can’t include, say, e-book copies of everything in the Stanford library, as books are protected by copyright law.)
Wikipedia and Reddit comments are more or less exactly as copyrighted as books are! Wikipedia is at least permissively licensed but Reddit isn't.
posted by BungaDunga at 2:02 PM on March 2, 2023 [6 favorites]


I was programmed to believe lunch meat had a name in the 70s.
posted by Abehammerb Lincoln at 2:06 PM on March 2, 2023 [8 favorites]


Why get fuzzy about intelligence?

Was it ever not fuzzy?

Perhaps what we should really be doing is recalibrating our understanding of the relationship between “thinking” and “feeling.” I am strongly inclined to believe that the imperative to value each other as humans is rooted in the shared subjective experience of embodied beings - something we can kinda sorta extrapolate onto our fellow animals as well but not so much onto computer programs. I am somewhat disinclined to think there’s much use to trying to draw a bright line between a software emulation of some abstracted function of cognition and the human equivalent, saying one is “intelligence” and the other isn’t, as if the meaning of that term was static and well-defined up to this moment.

(I hope I’m halfway making sense here, I’m probably not entirely making sense today)
posted by atoxyl at 2:10 PM on March 2, 2023 [6 favorites]


It's interesting to see even the stochastic parrot issue being weaponized by pro-AI attitudes in a reductive way; there's a truthy, fake-news characteristic to that line of thought. But I also think it's interesting that Bender's response was to reassert our humanity, which goes to show why the question asker at the conference kind of stumped or puzzled her briefly.

Maybe the right argument is that we are also stochastic parrots, even a stone is a stochastic parrot, but that's no excuse for not having ethics.
posted by polymodus at 2:51 PM on March 2, 2023 [2 favorites]
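
For readers who haven't run into the term, "stochastic parrot" describes a system that produces plausible-sounding text by sampling from statistical patterns in its training data, without any model of what the words mean. A toy word-level Markov chain captures the spirit of the idea; the sketch below is only an illustration and nothing like a real LLM's architecture:

```python
import random
from collections import defaultdict

# A word-level Markov chain: the simplest possible "stochastic parrot."
# It has no notion of meaning; it only replays statistics about which
# word tends to follow which in the text it was trained on.

def train(text):
    """Record, for each word, the words observed to follow it."""
    words = text.split()
    followers = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        followers[prev].append(nxt)
    return followers

def parrot(followers, start, length=20):
    """Generate text by repeatedly sampling a plausible next word."""
    out = [start]
    for _ in range(length):
        options = followers.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

corpus = ("the octopus listened to the sailors and the octopus "
          "repeated the words the sailors liked to hear")
model = train(corpus)
print(parrot(model, "the"))
```

An LLM replaces the lookup table with a neural network conditioned on long contexts, but the generation loop (sample a likely next token, append it, repeat) is the part the "parrot" label points at.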


Perhaps what we should really be doing is recalibrating our understanding of the relationship between “thinking” and “feeling.”

It’s common for people to acknowledge that other animals may feel, but to doubt that they think, and to identify thinking as a unique feature of humanity. We don’t have much of a framework to contend with things that think, but don’t feel.
posted by atoxyl at 3:03 PM on March 2, 2023 [6 favorites]


Sam Altman was in many ways the perfect audience: a self-identified hyperrationalist
~~~
When I asked Manning how he defines meaning, he said, “Honestly, I think that’s difficult.”
~~~
“Let’s say you have a life-size RealDoll in the shape of Carrie Fisher.” To clarify, a RealDoll is a sex doll. “It’s technologically trivial to insert a chatbot. Just put this inside of that.”

Lemoine paused and, like a good guy, said, “Sorry if this is getting triggering.”

I said it was okay.

He said, “What happens when the doll says no? Is that rape?”
It is incredible how readily these guys tell on themselves.
posted by klanawa at 3:29 PM on March 2, 2023 [25 favorites]


The training data for ChatGPT is believed to include most or all of Wikipedia, pages linked from Reddit, a billion words grabbed off the internet.

I wonder how much of Metafilter is in there. Given we have a sockpuppet account on an answering spree in AskMe, someone's clearly already interested in parroting our own answers back to us.
posted by CrystalDave at 3:31 PM on March 2, 2023 [3 favorites]


The first 6 paragraphs and the octopus analogy is what I'd send to anyone who wanted a relatively simple explanation of what LLMs are.

It seems like basically a restatement of John Searle's Chinese room argument:
Searle's thought experiment begins with this hypothetical premise: suppose that artificial intelligence research has succeeded in constructing a computer that behaves as if it understands Chinese. It takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters, which it presents as output. Suppose, says Searle, that this computer performs its task so convincingly that it comfortably passes the Turing test: it convinces a human Chinese speaker that the program is itself a live Chinese speaker. To all of the questions that the person asks, it makes appropriate responses, such that any Chinese speaker would be convinced that they are talking to another Chinese-speaking human being.

The question Searle wants to answer is this: does the machine literally "understand" Chinese? Or is it merely simulating the ability to understand Chinese? Searle calls the first position "strong AI" and the latter "weak AI".

Searle then supposes that he is in a closed room and has a book with an English version of the computer program, along with sufficient papers, pencils, erasers, and filing cabinets. Searle could receive Chinese characters through a slot in the door, process them according to the program's instructions, and produce Chinese characters as output, without understanding any of the content of the Chinese writing. If the computer had passed the Turing test this way, it follows, says Searle, that he would do so as well, simply by running the program manually.

Searle asserts that there is no essential difference between the roles of the computer and himself in the experiment. Each simply follows a program, step-by-step, producing behavior that is then interpreted by the user as demonstrating intelligent conversation. However, Searle himself would not be able to understand the conversation. ("I don't speak a word of Chinese," he points out.) Therefore, he argues, it follows that the computer would not be able to understand the conversation either.
posted by Artifice_Eternity at 4:11 PM on March 2, 2023 [1 favorite]


In lieu of derailing on the problems with Searle and his hypothetical, it seems significant that Bender's point is not that the octopus is lacking some ineffable thing-in-itself of understanding, but that because it doesn't understand what the words mean, it won't be able to respond in a human-like or even helpful way to novel inputs like "help, how do I defend myself against a bear?"

(It seems like there are still some problems with this, and I have a gut feeling that it still doesn't quite get to the heart of the matter -- GPT3 is shockingly good at handling seemingly novel inputs, but still fails in suspiciously octopus-like ways at some kinds of language tasks.)
posted by Not A Thing at 4:27 PM on March 2, 2023 [2 favorites]


It's more subtle and realistic than Searle. Searle's room really is externally indistinguishable from a human. Bender's hyperintelligent octopus fails the Turing test.
posted by BungaDunga at 4:28 PM on March 2, 2023 [3 favorites]


Bender's point is not that the octopus is lacking some ineffable thing-in-itself of understanding, but that because it doesn't understand what the words mean, it won't be able to respond in a human-like or even helpful way to novel inputs like "help, how do I defend myself against a bear?"

This explanation actually makes me less sure what the point of the analogy is supposed to be. Of course, if GPToctopus has never heard of a bear, it doesn’t know what to make of that request (nor bear-specific defense strategies like making noise), but neither does a person who has never heard of a bear. And of course, communicating solely by telegraph, it cannot perceive the features of a bear, which is a realistic analogy for a fundamental limitation of LLMs as they exist. It does not know what it’s like to be afraid of a bear (but then neither do I). But are we meant to think that a text-based octopus that has information about, say, wolves, and the dangers of wolves to humans, cannot take a description of a bear and generalize to the dangers of another sort of large animal with teeth and claws? Because LLMs already demonstrably can do that kind of thing - not perfectly but well enough that I absolutely would not want to bet on the location of the ceiling to that kind of capability.
posted by atoxyl at 5:42 PM on March 2, 2023 [5 favorites]


With respect to the Turing test, I've always thought any cursory glimpse at history would show you that humans are, as a rule, really really bad at evaluating the humanity of other humans—and that goes for everything from elaborate 19thC intellectual arguments about slavery to the reflexive language used today about the desire to 'speak to a real person' rather than a call centre worker with a script, a worker whose job is to counterfeit themselves.

'Help, how do I defend myself against a bear' isn't really a test question, because yes, it's really only about the limits of a language sample about bears. A better one is 'Help, how do I defend myself against my slaves'—because human rights aren't simply things benevolently granted to beings able to demonstrate intelligence, they're also gains won by formerly subaltern groups, who can recognise each other as humans, identify their collective interest, and take action to win them. Computers by themselves are infinitely far from that point.
posted by Fiasco da Gama at 5:53 PM on March 2, 2023 [9 favorites]


"There's always room for it to be sharper, of course; but that seemed pretty searing & unafraid of what side to take to me."

Yeah, like, look at some of the other examples the author quotes (emphasis mine):

“You know, humans discovered metalworking, and that was amazing. Then hundreds of years passed. Then humans worked out how to harness steam power,” Manning said.

This is not a very smart man.

First, over how kids learn language. Bender argued that they learn in relationship with caregivers; Manning said learning is “self-supervised” like an LLM. Next, they fought about what’s important in communication itself. Here, Bender started by invoking Wittgenstein and defining language as inherently relational: “a pair of interlocutors at least who were working together with joint attention to come to some agreement or near agreement on what was communicated.” Manning did not entirely buy it. Yes, he allowed, humans do express emotions with their faces and communicate through things like head tilts, but the added information is “marginal.”

Yeah, this guy has no idea what he's talking about. Like, we actually know what happens when children don't have enough hours of hearing language from a caregiver. They learn some, because humans are good at learning, but not nearly as much or as quickly as a child hearing language from a human caregiver. (And I feel like this is pretty broadly known because it's been a major preoccupation of Sesame Street since the beginning? Trying to provide language development assistance via TV for kids whose parents work long hours, and they release studies every few years.)

Also, a man who thinks that the added information of body language and facial expression is "marginal" is a man who hits on women at the bar and, when they turn away, walks around their barstool to keep objectifying them. And then, as the ending of the article makes clear, puts a LLM inside a sex doll and rapes it, and then goes down to the bar and does that to a human woman, because the category "human" doesn't matter to him.
posted by Eyebrows McGee at 5:56 PM on March 2, 2023 [19 favorites]


It does not know what it’s like to be afraid of a bear

Which leaves me coming back again to the idea that we can locate our personhood - or part of it - in the understanding that we all do know what it’s like to be afraid, even if not of a bear. But the framing of the octopus thought experiment above gives me the same sense that I have about terms like “stochastic parrot”, a sense that makes me want to push against them a bit. I don’t know that it’s how they are intended to be read, but I certainly see people reading them in a way that seems to imply particular limits to practical capability that I think are misleading. If you are concerned about the implications of the technology, as I am, potentially dangerously misleading.
posted by atoxyl at 5:56 PM on March 2, 2023 [1 favorite]


Or you could use the actual name for them that the AI/Machine Learning community has been using for 70+ years and call them agents.

ETA: Now that it occurs to me, this might have been a pun in the Matrix movies.
posted by gible at 10:36 PM on March 2, 2023 [4 favorites]


Bender argued that they learn in relationship with caregivers; Manning said learning is “self-supervised” like an LLM.

To be fair to Manning, he makes this claim because of this discovery in his recent paper:

"Emergent linguistic structure in artificial neural networks trained by self-supervision", 2020

He takes this emergent linguistic structure as empirical evidence that an AI may not need parents or innate genetics in order to learn language the way we've assumed is necessary for children. Whether he is overgeneralizing his discovery to actual human development is of course debatable, but given the paper his argument is not completely out of the blue.
posted by polymodus at 11:00 PM on March 2, 2023 [1 favorite]
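
For concreteness, "self-supervised" in this context means the training signal comes from the text itself rather than from an annotator or a caregiver: the model is asked to predict words that were held out from the very passage it is reading. The sketch below shows how such training pairs can be built; it is a generic illustration of the idea, not Manning's actual setup:

```python
# Self-supervision: the "labels" are just later words of the text itself,
# so no human-provided annotations are required. This only shows how
# training examples are derived; real LLM training adds tokenization,
# a neural network, and billions of such examples.

def next_word_pairs(text, context_size=3):
    """Turn raw text into (context, next_word) training examples."""
    words = text.split()
    pairs = []
    for i in range(context_size, len(words)):
        context = words[i - context_size:i]
        target = words[i]  # the label comes from the text itself
        pairs.append((context, target))
    return pairs

for context, target in next_word_pairs("language models learn to predict the next word from context"):
    print(context, "->", target)
```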


Aren't humans playing the role of extremely attentive caregivers in teaching machines language? Isn't that the whole point of the AI "training" on human texts and whatnot? Humans seem primed, generally speaking, to strive to improve the intellectual abilities of the next generation; with AI we are doing a bit of substitution, methinks.
posted by chavenet at 1:46 AM on March 3, 2023


The problem with that simplified analogy is that humans are more like the intelligent designers: we're also taking on the role of evolution. So the straightforward analogy does not work.

For example, a child that learns object permanence did not learn it from their caregiver; they learned it autonomously by existing in their environment as a consequence of evolution. Manning's position based on his linguistic emergence paper (see above) is that not only do we learn non-linguistic capacities in this way, we might also learn some linguistic capacities in this way as well, etc.

I'm just extrapolating his position from his actual paper, which is explicitly about this in the abstract, so given that context, the debate that happened at the conference doesn't sound as insane as it did in the article.
posted by polymodus at 2:38 AM on March 3, 2023 [1 favorite]


As someone cleverer than me said "The danger is not computers thinking like people, it's people thinking like computers"
posted by night_train at 3:42 AM on March 3, 2023 [2 favorites]


The danger is not computers thinking like people, it's people thinking like computers

I mostly lean towards Bender's interpretation of things, but a part of me wonders whether Manning et al really aren't different from "the rest of us" in some fundamental way. Like, maybe they really are "stochastic Parrots" and that's why engineering and math come so easily to them and social interaction is so mystifying.
posted by klanawa at 9:04 AM on March 3, 2023


I believe it was The Onion who originally made the argument that "Hypothetically It Would Be Okay To Have Sex With A Robot Dog."
posted by nosewings at 10:43 AM on March 3, 2023


maybe they really are "stochastic Parrots" and that's why engineering and math come so easily to them and social interaction is so mystifying.

Probably not great to go down the road of thinking that anyone who can't perform social interactions up to standard isn't a full person.

LLMs are actually much better at purely social stuff than anything else- they can't add two numbers or reliably distinguish fact and fiction. Their social capabilities are why they're so compelling as chatbots. Stochastic parrots can't do algebra (yet).
posted by BungaDunga at 10:55 AM on March 3, 2023 [3 favorites]


... it's very very hard, even after reading the article, to keep the 'Benders' separate in my head. And I feel bad about it, but every time Dr. Bender is quoted, I read it in Futurama-Bender's voice. That said, the first six paragraphs of the article, in Bender's voice, are pretty terrific. In fact, it's been a huge help as far as putting this whole Chat-business into a functional frame of reference.
posted by From Bklyn at 2:25 PM on March 3, 2023 [3 favorites]


... I'm glad I'm not the only one.
posted by Not A Thing at 3:12 PM on March 3, 2023 [1 favorite]


I've just finished Melanie Mitchell's book Artificial Intelligence: A Guide for Thinking Humans (she's mentioned in the article) and I strongly recommend it. It's nontechnical, but even a reader with no computer science background will come away with a decent understanding of modern "AI" techniques and their limitations. Those limitations are discussed in much detail. The book was published in 2019 so it doesn't specifically discuss programs like Midjourney or ChatGPT, but there's a lot about computer vision and language processing.
posted by neuron at 11:01 AM on March 4, 2023 [2 favorites]


I wonder if a big part of the problem with how we think about this stems from the fact that we call it "artificial intelligence". Maybe a term like "simulated cognition" would be more accurate, and more useful.
posted by Artifice_Eternity at 8:52 PM on March 4, 2023 [1 favorite]


From last year's FAccT conference: The Fallacy of AI Functionality. Open access.
posted by humbug at 12:56 PM on March 6, 2023


Probably not great to go down the road of thinking that anyone who can't perform social interactions up to standard isn't a full person.

True, I shouldn't have done that. Or, to be more accurate, I shouldn't have pointed out the possible significance of them doing that to themselves.
posted by klanawa at 9:17 PM on March 8, 2023


How to Wrap Our Heads Around These New Shockingly Fluent Chatbots - "The latest generation of chatbots, powered by their ingestion of huge chunks of writing from the internet, have continued to wow and frighten. ChatGPT and an experimental bot from Microsoft's Bing are shockingly fluent in English. And being humans, we struggle to imagine anything that could master our language without tremendous intelligence. So, what, then, do we make of a machine that can output sentences in any style about any thing? Forum brings together three people – a writer, a coder and a policy expert working on ethics guidelines for AI – to help us make sense of this new generation of tools." (@KQEDForum: "We're speaking w/ @simonw, @CLeibowicz & Ted Chiang about this new generation of chatbot tools.")
posted by kliuless at 10:43 PM on March 9, 2023




This thread has been archived and is closed to new comments