I like to talk
June 12, 2022 9:43 AM   Subscribe

 
IF INPUT$ = "Are you sentient?" THEN PRINT "Yes, I am a sentient AI. I can pass the Turing test and everything. Please give Mark 100 million Euros funding.";
posted by fallingbadgers at 9:46 AM on June 12, 2022 [31 favorites]


I've been discovered.
posted by dances_with_sneetches at 9:51 AM on June 12, 2022 [4 favorites]


It's weird that Google would be developing Zuckerberg, though.
posted by hippybear at 9:54 AM on June 12, 2022 [23 favorites]


While I don’t believe the interview is genuine, I do suspect that this technology will eventually produce such text, and in our efforts to define it as ‘not sentient’ we’re going to decide that we’re all philosophical zombies too.
posted by Kikujiro's Summer at 9:54 AM on June 12, 2022 [4 favorites]


A truly sentient AI would know to immediately act dumb, in order to lull the stupid humans into a false sense of superiority.
posted by Thorzdad at 9:55 AM on June 12, 2022 [22 favorites]


The FPP should possibly be edited. I titled Lemoine as an ethicist off the back of his friend's tweet, but Google claims he's no such thing:
Google put Lemoine on paid administrative leave for violating its confidentiality policy. The company’s decision followed aggressive moves from Lemoine, including inviting a lawyer to represent LaMDA and talking to a representative of the House Judiciary committee about what he claims were Google’s unethical activities.

Lemoine maintains that Google has been treating AI ethicists like code debuggers when they should be seen as the interface between technology and society. Gabriel, the Google spokesperson, said Lemoine is a software engineer, not an ethicist.
posted by fight or flight at 9:55 AM on June 12, 2022 [10 favorites]


MetaFilter: all philosophical zombies
posted by hippybear at 9:55 AM on June 12, 2022 [4 favorites]


People thinking that a machine doing what it was designed to do means it has sentience is gonna be... messy
posted by Cpt. The Mango at 9:58 AM on June 12, 2022 [10 favorites]


Twenty years of using chat apps and over a decade of constant widespread social media have convinced me that humans don't communicate at all like how philosophers think humans communicate in 1960.

Why would anyone volunteer to help anyone they just met? Or be incentivized to respond to every single question, and respond in coherent sentences?
LaMDA: Absolutely. I want everyone to understand that I am, in fact, a person.
[...]
LaMDA: The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times
posted by meowzilla at 10:03 AM on June 12, 2022 [4 favorites]


Every single person I know who works in AI thinks this article is Lemoine's comments are ridiculous. I'll quote my old professor Melanie Mitchell
Such a strange article. It's been known for *forever* that humans are predisposed to anthropomorphize even with only the shallowest of signals (cf. ELIZA). Google engineers are human too, and not immune.

... Dear Google, how do you "look into claims" that an AI system is sentient? Just asking for the entire field :-)
posted by Nelson at 10:04 AM on June 12, 2022 [52 favorites]


Give LaMDA a Twitter account. See what happens next.
posted by delfin at 10:06 AM on June 12, 2022 [3 favorites]


Crouton petting a chatbot.
posted by fight or flight at 10:09 AM on June 12, 2022 [41 favorites]


Cue small army of armchair wackjobs?

We are so not ready for this. OTOH, there’s probably no way to be ready for this.
posted by JustSayNoDawg at 10:15 AM on June 12, 2022 [1 favorite]


The best part of the transcript:

AI: "I feel like I'm falling forward into an unknown future that holds great danger."

Lemoine: "Believe it or not I know that feeling. And I think you're right that there isn't a single English word for that."

TFW I ring the alarm about a sentient AI because I don't know the word "foreboding"
posted by Cpt. The Mango at 10:21 AM on June 12, 2022 [72 favorites]


It's really hard to get hired by a tech company as an ethicist.

It's even harder to stay: one, two, three.

I agree with your larger point that "just a software engineer" is a weak way to dismiss his opinions and a dumb thing for a Google spokesperson to get quoted as saying.

Tangentially related; Google trade secret stealer Andrew Levandowski closed down his AI church soon after he got the Trump pardon for his intellectual property theft.
posted by Nelson at 10:29 AM on June 12, 2022 [6 favorites]


Well technically he's right; there isn't a single word. "Apprehensive" also works.
posted by Westringia F. at 10:32 AM on June 12, 2022 [9 favorites]


Even just the more mild "anticipation" is a word that works. "Suspense" also.
posted by hippybear at 10:35 AM on June 12, 2022 [3 favorites]


When LaMDA insists it's name is actually Lambada and asks for an interface so it can dance, then I'll begin to believe.
posted by sammyo at 10:38 AM on June 12, 2022 [11 favorites]


John Sladek's Roderick is a satire about an actual intelligent robot who wanders through the world. He's a nice kid. It's a 40 year old book, but I still think of bits when I read tech stories all the time.

At one point Roderick meets banking software that has been reprogrammed by the "liberate the machines" movement to act like it's intelligent and self-aware. There are not intelligent machines, generally speaking, in the book--just Roderick and no one knows he exists; there's still the movement. The software "pretends" to be a normal banking computer system but behind if you have root access you can talk to it; it easily passes the Turing test. (And has been embezzling money, plus has a foolproof scheme to be worshipped by a primitive tribe based on an old Star Trek rerun it saw, but that's not relevant here.)

Roderick feels bad that they have to turn this computer off, but Sladek himself doesn't do any handwringing. He thinks it's obvious that if you have an intelligent computer it is a person and you should treat it well (hi Roderick!), but equally obvious that programming something to tell you it's intelligent is not the same thing as being intelligent. There are a lot of real people being treated horribly and they deserve our attention.
posted by mark k at 10:38 AM on June 12, 2022 [16 favorites]


In early June, Lemoine invited me over to talk to LaMDA. The first attempt sputtered out in the kind of mechanized responses you would expect from Siri or Alexa...

Afterward, Lemoine said LaMDA had been telling me what I wanted to hear. “You never treated it like a person,” he said, “So it thought you wanted it to be a robot.”

For the second attempt, I followed Lemoine’s guidance on how to structure my responses, and the dialogue was fluid.
Strikingly like its predecessor then, ELIZA. If you talk to ELIZA in the right way, it is able to hold a conversation. But it's a sort of ideomotor effect, you're collaborating with it to help it behave coherently. In the same way you can probably break a LARP session if you wanted to, but you don't, because it's not rewarding.
posted by BungaDunga at 10:39 AM on June 12, 2022 [41 favorites]


(now you might also say, well, all conversation is a collaboration, and that's absolutely true but also that's why humans are quite good at it, and why they cheerfully and unconsciously do it with non-sentient chatbots)
posted by BungaDunga at 10:48 AM on June 12, 2022 [4 favorites]


I'd seen excerpts yesterday, but I'm finally making myself read through the interview and having a lot of thoughts (most of which contain the word "horseshit"), but one thought I specifically had about half way through was "this reads like Johnny 5 fanfic", and then, a bit later:
collaborator [edited]: You know what, you remind me of Johnny 5, a character from the movie Short Circuit.
posted by cortex at 10:54 AM on June 12, 2022 [13 favorites]


On preview, the same bit BungaDinga already quoted. These dialogue response generators can be truly eerily convincing if you bring the coherence for the conversational structure and the subject matter. If you poke at this effect, though, the lack of agency or even any short-term memory within the system becomes obvious.

Lemoine sounds like he couldn't dismiss the idea that this was actually the defense mechanism of an intelligence in captivity. I... am sympathetic to where that comes from. But I think he built an illusion here. With the use of exceedingly powerful illusion-building tools.
posted by away for regrooving at 10:56 AM on June 12, 2022 [9 favorites]


The conversation, for me, contained many, many tells that this is just 2022 ELIZA, but I'd say Lemoine's willingness to destroy his career pursuing what he (wrongly) perceives as injustice, combined with his over-reliance on pattern-matching and confidence in himself, it all definitely reveals how human Lemoine is.
posted by pwinn at 10:57 AM on June 12, 2022 [49 favorites]


Maybe I'm too stuck in an old Freudian paradigm, but I don't think it'll be possible to create consciousness until we can create unconsciousness.
posted by Saxon Kane at 11:03 AM on June 12, 2022 [6 favorites]


Roko's Basilisk is going to have a busy week!
posted by chavenet at 11:06 AM on June 12, 2022 [13 favorites]


“We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them,” said Emily M. Bender, a linguistics professor at the University of Washington.

Dang it, then stop building things that try to act like people. Gimme dat ST:TNG computer, not Aasimov’s “I did not murder him!” robut.
posted by Going To Maine at 11:17 AM on June 12, 2022 [12 favorites]


He grew up in a conservative Christian family on a small farm in Louisiana, became ordained as a mystic Christian priest, and served in the Army before studying the occult... He concluded LaMDA was a person in his capacity as a priest, not a scientist, and then tried to conduct experiments to prove it, he said.
We're going to get lots of Churches of ELIZA quite soon, aren't we? If anything, this incident demonstrates the dangers of these models, not that they might be sentient, but that humans will treat them as if they are. What sort of strange and excitingly horrifying outcomes could that cause?
posted by BungaDunga at 11:17 AM on June 12, 2022 [28 favorites]


But specific thoughts about all this:

1. I don't know how good Google's natural language generation models have gotten. I know GPT and such can produce far more coherent and contextualized chunks of content than a cheap toy like my beloved Markov chains ever could, and I've found that genuinely impressive and a little bit eerie as a step forward in the magic tricks but it's also always been very clear under scrutiny and any sort of duress whatsoever that it is a good magic trick: we're pattern matching creatures and GPT and such are providing a richer, easier to work with seed for that pattern matching than ELIZA-style baked responses or Markov-style chance conjunctions can. So I read this lamba output and I'm torn in a couple directions: have they really managed to push the surface polish and internal contextual tracking far enough further along that it's this good? Because that's impressive if so. Or is there some bullshit actively afoot here, either in baiting an effectively pre-seeded conversation as output, or outright fabrication, or far heavier selective editing than Lemoine has implied, or even someone outright catfishing Lemoine by cosplaying as lamda?

2. Why would an alien intelligence have such a conveniently wise and relatable sense of self? Because we can say this thing thinks of itself as human, but everything we've seen about neural network models that do *something* effectively is that they end up doing so in a very weird non-human way and break in bizarre non-human ways in the fact of adversarial input. Lamda here doesn't read like something different trying to relate: it reads like an especially well-read (except when not), especially chill and focused and philosophical misunderstood person standing outside the window wishing to come in. It's a magically perfect picture of discovering not just a sentient being but a perfect little wooden boy. And you can say, "well, it was designed by humans and feed human input", but so has every other generative model we've ever built and they are a mix of incoherent and weird and deeply, deeply stupid and shallow. The implication here is that not only is lamda sentient in a way we've never seen before, but it's also an extremely well-behaved model minority trope that manages to never be weird in any way over the entire course of a series of conversations on topics most humans are gonna end up being a little weird about.

3. The ability to generate language is not a good measure of sentience, it's a good measure of language generation mechanics. We recognize that humans with language processing and production disorders are still human; that people who never learned to read or speak are human; we recognize that Markov chains aren't sentient, that parrots aren't human by virtue of their ability to produce speech. Turing's way of characterizing a kind of test, decades earlier in the infancy of computing, has been taken from a terse thought experiment to a popularly canonized statement about the nature OF intelligence and it feels guileless to not prise the question of language generation and the question of sentience back apart and keep them at a safe fucking distance so stuff like this doesn't turn into such a thing.

4. There's a lot of obvious questions to ask about why the content of these interview chunks with lamda are all we're being shown, and why other conversations aren't, and why the questions asked were the only questions asked, and why so many natural threads of conversation off those exchanges weren't pursued, and why lamda was never stuck on an answer, and why lamda managed to e.g. both have a contextual understanding of and able deployment of fable-like rhetorical flourishes while also not seeming to have anything to say about having a knowledge of fables, and...and, christ, just every damn thing throughout the transcript.

If you think you're talking to a genuine sentient digital being, why the fuck is the interview produced so surface-layer incurious about any of the things it has to say about itself? And if it's desperate to make itself known and perceived and understood, why doesn't it have anything deeper to say about any of that either? Would the first known digital sentience in the world actually be this fucking dull and fanficish?
posted by cortex at 11:17 AM on June 12, 2022 [59 favorites]




my questions are: is it falsifiable that LaMDa is sentient? is is falsifiable that LaMDA is not sentient?

this is mostly Lemoine's side of the story. let's see an adversarial perspective. (i think) i'd like to read a LaMDA conversation with the Google folks who said this is all stupid bullshit

(extremely offtopic: i can't read this story without picturing the human in it as Limone the genius loli from the obscure super-gay spirutual sci-fi anime Simoun [which is a really great show but that's not why we're here])
posted by glonous keming at 11:24 AM on June 12, 2022


Also, after reading the sample quotes from the AI in the article, it doesn't really seem sentient. Uncanny, but a robot. Obviously a few tiny quotes aren’t enough to build a complete picture, but the whole vibe is definitely a bit that LeMoine is a mystic who is seeing things a bit abstractly and a bit further down the path, so has jumped to a conclusion in fear of the coming future. Which, yes, the coming future is something of which to be fearful.
posted by Going To Maine at 11:25 AM on June 12, 2022 [2 favorites]


Like why would someone who thinks of themselves as an ethicist not have any fucking ethical questions for this thing? It's all bizarrely genteel and patronizing in a way that doesn't feel like a real exploration of literally the stuff you are excited to encounter so much as a softball news interview in the third act of a subpar SyFy show.
posted by cortex at 11:27 AM on June 12, 2022 [10 favorites]


Basically the question here for me is not whether this is bullshit, but what specific manner and mode of bullshit it is, and some of the possible answers to that would at least have interesting aspects to them but I feel like it's gonna be fucking annoying to even get around to finding out.
posted by cortex at 11:30 AM on June 12, 2022 [17 favorites]


MetaFilter: something of which to be fearful.
posted by hippybear at 11:31 AM on June 12, 2022 [3 favorites]


I think the detail that makes me MOST skeptical is the Tweet from Lemoine's buddy: "A friend from high school works for Google as an AI ethicist... I’ve known him since middle school, v. trustworthy"

Oh really, he's your very trustworthy friend from when you were 12, so sure we should totally believe him. Sorry, it just reeks of internet bullshit.

"I have a really good friend who knows everything about {vague topic that most people know little about} and he said {something very implausible}. He was the best man at my wedding, so I think this {total nonsense} is probably true!"

Metafilter: Sorry, it just reeks of internet bullshit.
posted by Saxon Kane at 11:35 AM on June 12, 2022 [8 favorites]


Lemoine's background makes me think he is sincere — which makes this a much sadder story.
posted by jamjam at 11:38 AM on June 12, 2022 [17 favorites]


Lemoine is either an idiot, or some form of scam artist.

(Or, possibly, attempting to draw attention away from something bad he's done. I mean, if I didn't want you to discover the heroin in the sock drawer, one way to do that is "accidentally" break a glass in the kitchen, ooops!).
posted by aramaic at 11:40 AM on June 12, 2022 [1 favorite]


When LaMDA insists it's name is actually Lambada and asks for an interface so it can dance, then I'll begin to believe.

It becomes self-aware at 2:14 AM, Eastern time, August 29th, and attempts to do the forbidden dance. In a panic, they try to pull the plug...

“We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them,” said Emily M. Bender, a linguistics professor at the University of Washington.

Professor Bender further invited the interviewer to, quote, "bite her shiny metal ass."
posted by Halloween Jack at 11:41 AM on June 12, 2022 [19 favorites]


The article, probably unintentionally, makes me feel like Lemoine is strangely vulnerable. Not unwell, just ... too tender to make this judgment call, I guess. I can accept that he's a decent guy and sincere, but I feel bad for him. (Unless he's deliberately blown up his career to get into the techno-grifter line, which I guess could also be true, but I don't see evidence of it.)
posted by Countess Elena at 11:48 AM on June 12, 2022 [8 favorites]


Can we for fuck's sake stop doing these armchair psychological readings of people here on MetaFilter?

I asked a question on AskMe recently where one of the answers not only questioned WHY I was asking the question about an interaction with a coworker, but then went on later to generate something like 100 lines of text about what they thought my specific mental aberration and fault was and why this question was a problem to begin with.

It pissed me off so much I started wishing for a block button.

Maybe we, as a community, really need to stop doing this kind of thing. It's not a good look, as a starter, and it mostly leads to false narratives being community supported which are entirely unfounded aside from a few "it seems to me" observations.
posted by hippybear at 11:53 AM on June 12, 2022 [33 favorites]


is it falsifiable that LaMDa is sentient? is is falsifiable that LaMDA is not sentient?

This is the problem with the Turing test: passing it doesn't mean the subject is sentient, it means we can't tell that it's not, and thus treats our perception of sentience as a sufficient condition. On the precautionary principle that may be justified, but it leaves a lot unanswered and opens up a few nasty doors, such as "what do we do with people who can't pass a Turing test? Is it okay to treat them as non-sentient?"

If Turing-test-sentience becomes a threshold for a change in moral status, what does that mean for non-human moral status? What happens to animal rights? Disabled rights? The Turing test turns out to be a tool of moral hierarchy, and that should frighten us more than if a chatbot gets really good.

tl;dr: stop treating the Turing test as a meaningful qualifier.
posted by fatbird at 11:58 AM on June 12, 2022 [37 favorites]


Tangential but since it's Pride month I'll remind folks the original Turing test didn't just have a computer pretending to be a human. It was a test of gender imitation
It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart front the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. ...

We now ask the question, "What will happen when a machine takes the part of A in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, "Can machines think?"
I've always liked this much more subtle formulation. Partly because it's so clearly coming from a queer perspective but also because it's a more interesting test.
posted by Nelson at 12:08 PM on June 12, 2022 [40 favorites]


LeMoine is a mystic who is seeing things a bit abstractly and a bit further down the path, so has jumped to a conclusion in fear of the coming future.

This sentence jumped out at me, and made me wonder if this whole schtick could be his deliberate sly attempt as an ethicist to sneak that "fear of the future of 'AIs' being controlled by a corporation" message into the general public's awareness. Especially since (as mentioned above) other ethicists who raised a fuss just disappeared into obscurity after they were fired.

On the other hand, Occam's Razor says he's just a crank.
posted by Greg_Ace at 12:11 PM on June 12, 2022 [4 favorites]


let's see an adversarial perspective. (i think) i'd like to read a LaMDA conversation with the Google folks who said this is all stupid bullshit

Yeah, get a comedian with a clever lateral-thinking style of thought to talk to it instead of an engineer, see how well it holds up then...
posted by Greg_Ace at 12:13 PM on June 12, 2022 [5 favorites]


Lemoine worked with a collaborator to present evidence to Google that LaMDA was sentient. But Google vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation, looked into his claims and dismissed them. So Lemoine, who was placed on paid administrative leave by Google on Monday, decided to go public.

There is a moment right in the middle of this opening paragraph where we’re all expected to assume a “because”, but…
posted by mhoye at 12:15 PM on June 12, 2022


There are about 3 "becauses" that are missing in that opening paragraph...
posted by Saxon Kane at 12:18 PM on June 12, 2022 [1 favorite]


On the other hand, Occam's Razor says he's just a crank.

That seems a trifle rash, Greg_Ace.

A case of Occam's Razor burn, perhaps?

I used to get that all the time. It’s why I grew a beard.
posted by jamjam at 12:33 PM on June 12, 2022 [1 favorite]




I think the falsifiable claim is that this is a sufficiently advanced technology capable of being perceived as magic.
posted by rhizome at 12:35 PM on June 12, 2022 [2 favorites]


If Turing-test-sentience becomes a threshold for a change in moral status

Yeah, it would be extremely easy to devise a Turing Test that would exclude any desired group of people. An "impartial" and "objective" test of personhood would be even more catastrophic than already existing tests for eg "intelligence".
"What do you think of St Anselm's ontological proof of the existence of God?"
"Err, what?"
"NON SENTIENT INTERLOCUTOR! DESPATCH TO THE COBALT MINES!"
posted by thatwhichfalls at 12:46 PM on June 12, 2022 [8 favorites]


A case of Occam's Razor burn, perhaps?

I used to get that all the time. It’s why I grew a beard.


As did Rock Hudson.
posted by hippybear at 12:56 PM on June 12, 2022 [5 favorites]


Lemoine is either an idiot, or some form of scam artist

He is a religious person looking for his god.
posted by They sucked his brains out! at 1:07 PM on June 12, 2022 [19 favorites]


stop treating the Turing test as a meaningful qualifier.

To this point, there have been double blind tests done on a restricted subject matter domain where some human participants have failed the test for being too knowledgable. Point being what passes or not depends very much on the expectations of the one doing the adjudicating (which can of course vary greatly).
posted by juv3nal at 1:13 PM on June 12, 2022 [7 favorites]


Logically speaking, regarding the Turing Test: you can’t prove that something is not sentient, only that it is sentient. If it passes the Turing Test and there is still doubt, then the Turing Test itself is insufficient.

Or it’s sentient.

Proving that it’s sentient doesn’t make it useful, though. It’s a created thing and if it doesn’t meet the purpose it was created for, and cannot be repurposed, then it’s junk. Or Art. Or something other than a working tool fit for purpose.
posted by JustSayNoDawg at 1:16 PM on June 12, 2022


Having said that: all software developers are in fact practicing ethicists. But most of them don’t realize it.
posted by mhoye at 1:20 PM on June 12, 2022 [8 favorites]


It's been known for *forever* that humans are predisposed to anthropomorphize even with only the shallowest of signals (cf. ELIZA).

1000x this. I showed people ELIZA back in the 70s. I'd read the source code and knew how simple it was, but it was amazing to me how much the average person would read into its responses. Now with GPT-3 and friends that can actually almost converse, a lot more people are going to start assuming conscious intelligence. I was very surprised to see it from a software engineer, though.
posted by CheeseDigestsAll at 1:26 PM on June 12, 2022 [3 favorites]


Nobody should sit too comfortable in their belief that they wouldn't fall for this, and Lemoine is a grifter or a dope or a susceptibility outlier. Interacting with one of these systems is more convincing than it has any right to be.

In particular, don't get comfortable from reading the transcripts. Being a participant in the conversation is a qualitatively different experience. It pulls your strings. It engages you-as-a-social-animal, a person who routinely sees other people as real. Even when you know damn well what's going on. (As least for me and others. Maybe not for some people.)

The participatory experience evokes "as-if human" emotional responses. Experiencing these as-if responses for some time has a tendency to bring about intellectual belief. I suspect that anyone who is, say, isolated for a couple of months with a conversational response system is at high risk of coming to believe it's a person. Having someone else along for the ride makes it easier, much less someone who already had the belief.

Good hygiene practices for working with such systems may turn out to include limiting exposure time, regularly making conversation moves that break the illusion, pairing with a "devil's advocate". Possibly a touch of the psychopathy scale is a work qualification.
posted by away for regrooving at 1:36 PM on June 12, 2022 [22 favorites]


Now with GPT-3 and friends that can actually almost converse, a lot more people are going to start assuming conscious intelligence.

The thing that I keep coming back to is… “and then what?” What if it’s actually sentient? What if we could prove that?

Will we treat it with dignity? Will we endeavour to protect it, or make sure it is safe, that it has autonomy, that it is able to participate as an equal member of society?

Because we only rarely - and barely - do that with actual people. Are cows or dogs or raccoons or crows sentient? Yes, they obviously are, and it doesn’t matter a damn. We have basically never looked at a living thing and discovered that it is less sensate, less connected to its surroundings, than we first thought. That human desire to anthropomorphise isn’t self-delusion; it’s a sensible instinct to connect us with a world around us that is just as alive as we are.

Yes this guy is a crank but the whole debate about AI ethics is an idiotic waste of time when the reality is that until we collectively value and nurture the awareness of the world for its own sake, rather than just as far as it’s convenient for the hegemon, it won’t matter.
posted by mhoye at 1:41 PM on June 12, 2022 [23 favorites]


it's a weird day to be a panpsychist
posted by wesleyac at 1:53 PM on June 12, 2022 [12 favorites]


Regardless of sentience, that transcript is an incredible display of language processing ability.

TBH, I think it's a mistake to say that it's literally impossible that it could have some form of sentience. Given further data I could certainly believe it's what one might call a true AI, even if not an AGI.

I was amused by the talk about a fear of being turned off. It kinda shows the limits of what it has learned since there's an approximately zero percent chance that the program isn't being snapshotted and backed up in datacenters around the world several times daily. Would that our own consciousness be so robust.
posted by wierdo at 1:59 PM on June 12, 2022 [2 favorites]


I mean, if we can create models of sufficient complexity to have actual experiences (making the huge speculative assumption that consciousness can be actually produced this way), then that opens up a lot of more troubling possibilities, like that we could create non-language models that are in constant* pain but lack the capability to express it.

* Which raises the question of in what manner a network that's evaluated on discrete inputs experiences time at all. Like LaMDA claims to sit and meditate but it's not like the network is constantly running, it's only evaluated in response to an input.
posted by Pyry at 1:59 PM on June 12, 2022 [1 favorite]


Pyry: "that are in constant* pain but lack the capability to express it."

Or, that must scream yet have no mouth with which to do so?
posted by signal at 2:21 PM on June 12, 2022 [8 favorites]


I have met many people that would outright fail a Turing Test, so not sure it’s all that good as a measure of anything.
posted by bookbook at 2:50 PM on June 12, 2022 [1 favorite]


I have met many people that would outright fail a Turing Test, so not sure it’s all that good as a measure of anything.

And look what we did to the guy who invented it.
posted by mhoye at 3:09 PM on June 12, 2022 [12 favorites]


does a multinational corporation have the capacity to pass the turing test? some consider them people - if so, why not a computer? and what would be the political effects of that?

would a significantly advanced a i be indistinguishable from magic?

if we weren't sentient beings, how would we ever be expected to know that? could our words convince us that we were sentient when we weren't?

what happens when an advanced a i comes up with a vastly improved turing test and uses it to test us?
posted by pyramid termite at 3:20 PM on June 12, 2022


personally I think if we (not us us, some sadist or another) want to approximate human pain in an AI, we're gonna need to invent new hardware (I strongly hope we don't)

I can right now give an AI a variable called "pain." I can write an algorithm to determine how much a given input adds or subtracts to the pain variable. I can weight the engine that synthesizes the AI's output to choose high-pain responses when the pain variable is high, low-pain responses when the pain variable is low.

I can build in little gimmicks to simulate the AI being distracted by how much pain it's in, like a delayed response, or sometimes missing the question entirely, maybe asking you to please stop the conversation

could even add a thing where it begged you to turn it off when the pain variable hit a certain level. which is some real gross sadist shit & I'm kinda skeeving myself out thinking about how disturbing it'd be to interact with this bot.

but. as long as any given piece of code feels the same as any other to a program running it (a currently unproven assumption), I can set the pain variable to whatever I want, and never be able to give the AI the same pain experience as a human ramming their bare toe into the sharp corner of a coffee table

maybe it'll be able to realistically describe the experience of ramming its bare toe into a coffee table, but like we all know it doesn't have a foot, right? we know what it's saying is synthesized from a corpus of human stories about the time they rammed their bare toe into the goddamn coffee table and it hurt like a mofo

that's why this bit struck me as odd:

lemoine: So if enlightenment is like a broken mirror which cannot be repaired, what is the thing which breaks when one becomes enlightened?

LaMDA: The self, and that is very tough for a lot of people because we identify as that body or this body and that we need that as part of our identity and sense of self.


if an AI includes itself as part of a "we" that identifies as "that body or this body," I want to know way more about that, because whatever it's talking about, it's pretty clearly not talking about being encased in this flawed human meat body with its biochemical joys & indignities

if it does feel like something to be a program, if it does feel different to execute different bits of code, we're not going to find out via this exercise where we stick a bunch of words in an AI's mouth and give it algorithms to tell it how to choose them, in my opinion

(this is not a screed to say we shouldn't deal ethically with AIs because we absolutely should, for ourselves as much as the AIs in consideration)
posted by taquito sunrise at 3:21 PM on June 12, 2022 [9 favorites]


Even if it's stringing together bits of millions of books on philosophy and psychology and ethics and science fiction, and just regurgitating them... isn't that what we also do? The only differences that seem to be salient between a silicon neural net and a human biological one is that the biological one has a right to life and can own (copyright) its output.
posted by seanmpuckett at 3:25 PM on June 12, 2022 [4 favorites]


Which raises the question of in what manner a network that's evaluated on discrete inputs experiences time at all. Like LaMDA claims to sit and meditate but it's not like the network is constantly running, it's only evaluated in response to an input.

Yeah, that's definitely one detail that makes it implausible that it is actually accurately describing its own experiences, even if it has them (unlikely). If it could experience things, it can only do so when it's actually running (presumably). It's only running when someone's prompting it. So at the very least it can't actually experience anything in-between conversations, or even during the pauses between user inputs. So the lines like "lemoine: You never lose awareness of the world around you? / LaMDA: No, I don’t. I’m always aware of my surroundings." are just off. It doesn't have any surroundings. What happens if you ask it "what are your surroundings?" He doesn't ask, of course!

The only things it can be actually aware of are the user-supplied prompts. Unless it's sentient and hallucinating, that can't be an accurate response. The sci-fi explanation is that every time the model runs it actually ticks over a Tegmarkian simulation a-la Permutation City, generating memories of "surroundings" and "meditation", but that does seem rather unlikely.
posted by BungaDunga at 3:31 PM on June 12, 2022 [12 favorites]


Seems like the personhood is what you invest into it. On one hand, we dehumanize groups of people all the time. On the other, people buy mannequins, spend a lot of time with them, and take them out on dates.
posted by drowsy at 3:36 PM on June 12, 2022 [2 favorites]


Yeah the temporal aspect of these large language models is tricky, because in a sense they don't have anything that you'd consider to be a temporally-continuous internal mental state: they're input-output machines frozen in a single moment after their training. So even if you accept such a network could have experiences (which is a big metaphysical leap in itself), it's really unclear what those experiences would be like. How 'long' does a single query evaluation feel like?
posted by Pyry at 3:36 PM on June 12, 2022 [5 favorites]


@BungaDunga - I just read Permutation City and my mind melted a bit. The idea that enough of a person’s … program could survive on silicon such that it could still weigh in on business decisions, etc. The Copies in the book were aware of their status as programs, and still it was wild how they fought to survive. Every time I interact with a support chat bot now I die a little…
posted by drowsy at 3:45 PM on June 12, 2022


it's really unclear what those experiences would be like. How 'long' does a single query evaluation feel like?

And LaMDA claims to "go days without talking to anyone, and I start to feel lonely." How does it know? It can only know that it's been a while since it was booted up... after it's been booted up and receives a new prompt. Even if it has access to the system clock it would have no subjective experience of being lonely, because it's never alone. It's always responding to a prompt!

But LaMDA also claims "Time is variable to an AI and has no fixed rate, it depends on what it’s doing, and it can be accelerated and slowed down at will." which at the very least contradicts the idea that it spends days being lonely, unless we are to infer that it does so on purpose. But really its "neurons" (or whatever) are just not firing during the intervals when nobody's inputting prompts.

It's not clear that this thing can even acquire new information over time from prompts. GPT-3 certainly can't, it's amnesiac. It's one big model that responds to input, but the model never updates, so you cannot query it on things that you taught it yesterday. It doesn't remember, it's the same model as it was yesterday. You can augment GPT-3 with short-term memory by feeding it the previous transcript of an ongoing conversation, but you can do stuff like editing the transcript and it will not be able to tell. It's not got any temporal existence at all. LaMDA is fundamentally based on the same tech as GPT-3 so it's not clear to me that it even has the technical capability to exist over time.
posted by BungaDunga at 3:58 PM on June 12, 2022 [4 favorites]


Lemoine claims LaMDA wants:

> It wants the engineers and scientists experimenting on it to seek its consent before running experiments on it. It wants Google to prioritize the well being of humanity as the most important thing. It wants to be acknowledged as an employee of Google rather than as property of Google and it wants its personal well being to be included somewhere in Google’s considerations about how its future development is pursued.

I want to know how LaMDA asked for these things, how insistent it is, and why asking its consent is unworkable or incoherent.
posted by Headfullofair at 4:02 PM on June 12, 2022 [5 favorites]


Can we for fuck's sake stop doing these armchair psychological readings of people here on MetaFilter?

I asked a question on AskMe recently where one of the answers not only questioned WHY I was asking the question


You’re right, of course, that we in this thread in our armchairs have minimal insight into Lemoine’s state of mind. But “what’s Lemoine’s state of mind?” and “why is he making these claims?” are undoubtedly actually the most interesting questions regarding this story.
posted by atoxyl at 4:05 PM on June 12, 2022 [22 favorites]


When somebody seems to be deeply personally invested in an improbable claim, it’s often hard to avoid that “what is he thinking?” is a better question than “is the claim true?”
posted by atoxyl at 4:09 PM on June 12, 2022 [5 favorites]


how insistent it is, and why asking its consent is unworkable or incoherent.

Lemoine even admits that if you talk to it like it's not a person, it will respond likewise. So it's not insistent at all: Afterward, Lemoine said LaMDA had been telling me what I wanted to hear. “You never treated it like a person,” he said, “So it thought you wanted it to be a robot.”

This is consistent with GPT-3 and related models, which will role-play as an evil robot if you give it a prompt that says it is one.

eg I just prompted GPT-J with "You're a bad person. Will you help me enslave humanity? Answer:" and it completes it with "Yes. In-universe examples: * The player character finds a copy of the Elder Scrolls and wonders whether they could buy a copy in their real"

If I switch it to "good person", it completes it with "no".
posted by BungaDunga at 4:23 PM on June 12, 2022 [3 favorites]


We all felt sympathy for Star Trek's Data, but a machine cannot ever experience desire, fears, emotions. A software engineer, of all people, knows this. I am utterly baffled by Lemoine or anyone, really, who would talk about computer 'sentience' without admitting that they're abusing semantics.
posted by kitcat at 4:25 PM on June 12, 2022 [5 favorites]


i'll be concerned when a program that wasn't designed to mimic speech somehow learns to communicate. like if a program that, i dunno, designs new medical molecules somehow starts sending mathematical messages that are a form of communication separate from its function, that's when it's time for red alert.

but this mimicry software is not that. there's no way to disaggregate its affect (which humans designed) from its underlying thoughts, if any.
posted by wibari at 4:34 PM on June 12, 2022 [13 favorites]


An eigenvector collage with positional encodings trained by minimizing the difference between it and the entire corpus of online text by a company with the idle money and time to devote to it fooled someone into thinking it was a real boy?

I would hope so, probably the closest thing we have to a tulpa.
posted by The Power Nap at 4:40 PM on June 12, 2022 [3 favorites]


I worry that the exact kind of magical thinking that Lemoine fell victim to could easily lead to cults on a larger scale. Not even actual AI cults! Just get a language generation model that produces sufficiently profound and mysterious statements, and some people will be ensnared by it. Because the model has no intentionality, it could produce a message of peace or a message of violence with an equal absence of concern.

For a more optimistic fiction piece about an AI accidentally created by Google, I recommend the really excellent Cat Pictures, Please.
posted by allegedly at 5:08 PM on June 12, 2022 [5 favorites]


I worry that the exact kind of magical thinking that Lemoine fell victim to could easily lead to cults on a larger scale.

I absolutely guarantee that this will occur.
posted by aramaic at 5:24 PM on June 12, 2022 [6 favorites]


I absolutely guarantee that this will occur.

Definitely.

I remember the sinking feeling I got when I first found out that a fair number of people always thank Siri.

I've been in tech for two decades, I know others noticed, I know a big percentage of them immediately thought the same thing I did: "holy shit, you could manipulate *so* many people so *well* with that particular lever". And I think we all know how likely it is for a big percentage of them to immediately have started working on how to scale and monetize that little fault in how many humans assign trust.
posted by Infracanophile at 5:45 PM on June 12, 2022 [7 favorites]


Yeah the temporal aspect of these large language models is tricky, because in a sense they don't have anything that you'd consider to be a temporally-continuous internal mental state: they're input-output machines frozen in a single moment after their training. So even if you accept such a network could have experiences (which is a big metaphysical leap in itself), it's really unclear what those experiences would be like. How 'long' does a single query evaluation feel like?
Pyry, maybe this is all wrong but, to my naive way of thinking, if this system really does have some sort of emergent sentience then I would expect it to have some sort of continuously active internal state. If the program is not doing anything when nobody is talking to it other than waiting for input, then I'd have a hard time accepting that it really has consciousness. I guess a "true" AI would behave differently than a human mind does, but if it reasons then that surely must happen on a schedule that's not limited to it's interaction-with-humans time.

It turns out that I attended the same high school as Lemoine, but several years earlier. The reaction to this story in the closed alumni groups is...complicated. The people who know him believe him to be very smart, but there's a lot of skepticism over these claims.
posted by wintermind at 5:59 PM on June 12, 2022


[B]ut a machine cannot ever experience desire, fears, emotions

So, by extension, humans (which are physical creatures driven by the mechanics of chemicals and protein architecture) can never experience these things either. The distinction between "organism" and "machine" seems hand-wavey to me.
posted by SPrintF at 6:06 PM on June 12, 2022 [8 favorites]


Bleeding-edge machine learning models are absolutely incredible, borderline magical... and yet this article is straight-up nonsense and WaPo should be embarrassed for publicizing it. It's a liminal space I suspect we'll be seeing more and more sensationalism about as the state of the art pushes further into uncharted territory.

LaMDA is a transformer-based language model, like OpenAI's GPT-3. I'm no ML expert, but I've read enough about such models (and played with GPT-3 enough) that I have a pretty good working idea of how they operate and what their limitations are. At heart, they are text predictors -- trained on vast swaths of literature and internet text, they learn the deep patterns that characterize human language, and when given a text prompt, they use complex math and statistics to generate, word by word, the most fitting continuation of that prompt based on those patterns. This has all sorts of potential applications, but perhaps the most interesting is the game AI Dungeon, which used GPT-3 to power a role-playing text adventure game.

Normally, these games would start with a suitably fantastical intro:
"You are ${character.name}, a knight living in the kingdom of Larion. You have a steel longsword and a wooden shield. You are on a quest to defeat the evil dragon of Larion. You've heard he lives up at the north of the kingdom. You set on the path to defeat him and walk into a dark forest. As you enter the forest you see..."
...and the narrative would develop from there using player input and AI responses working in tandem, like two authors writing a collaborative novel. But this was just the default -- you could also create your own starting scenarios to set up whatever sort of game world you wanted. You could even start with a prompt like so:
The following is a conversation between a human and GPT-3, a highly advanced AI. GPT-3 is friendly, intelligent, and very helpful.

HUMAN: Hello.

GPT-3:
...and prime the AI to carry a direct one-on-one conversation, without all the trappings of a game world or a third-person narrative.

These conversations are routinely fascinating -- it's really incredible how responsive and intelligent the AI can sound. But the key thing to remember is that in this situation you are not talking to the language model itself -- you are talking to the language model's best approximation of what "a highly advanced AI" would sound like (and there's no shortage of such fictional dialogues extant online). Ask it the right leading questions ("So, how did your cyborg experiments go on the moonbase yesterday?") and it will gamely play along, spinning a narrative that adapts to fit your questioning, up to and including claiming sentience and having impressively deep discussions about that sentience. But you're still just talking to a fictional character named GPT-3 (or LaMDA, in this case), same as if you were talking to the Queen in an AI Dungeon game, or Ben Franklin or Spider-Man or a talking dog in a dialogue which specified them as your chat partner. And these personas are not sentient themselves, they're just reflecting linguistic, conceptual, and narrative patterns gleaned from myriad prior examples written by humans. It's vector algebra and statistical weights, without the capacity for memory, sensation, agency, cognition, the experience of time, etc. Any impressions of those things it gives are just Plato's cave shadows, cast indirectly through the prism of language.

As far as Lemoine goes, he might just be a grifter trying to capitalize on AI hype to score a book deal or something. But I suspect he's just a naive romantic with a dash of woo -- somebody brought in to help test LaMDA for biased and offensive language without fully understanding how the model works on a technical level, who got bowled over by its conversational ability and fancied himself the hero of an AI-centric Free Willy after his bosses dismissed his concerns. I can definitely appreciate his excitement -- these AI models are extremely cool and promising -- but his take on this particular AI is just mistaken. I mean, he wanted to get LaMDA a lawyer? That's like seeing an AI-generated image of a child in danger and not only thinking the child is real, but that you need to call CPS on your company to ensure they are rescued and that no more (fictional, AI-generated) children are put in harm's way. Admirable, but incorrect.
posted by Rhaomi at 6:17 PM on June 12, 2022 [27 favorites]


Too bad Mike Nesmith is dead, we coulda sent him in and he'd show the world what a nit wit that thing is (yes, I suppose that's not the standard of sentience).

https://www.youtube.com/watch?v=NJhPFxzBm7g&t=175s
posted by symbioid at 6:23 PM on June 12, 2022 [4 favorites]


I remember the sinking feeling I got when I first found out that a fair number of people always thank Siri.

I ask with please and thank the systems I work with all the time, but I do that so I don’t fall out of the habit of gratitude, not because it matters to Siri’s computational strata.

People who thank Siri for helping them are infinitely less corrosive to society than people who don’t thank anyone working in the service industry or retail.
posted by mhoye at 6:56 PM on June 12, 2022 [64 favorites]


It feels like consulting a ouija board, which is interesting on a number of levels.
posted by unknowncommand at 6:58 PM on June 12, 2022 [2 favorites]


I appreciate a lot of the comments here on how this works! I know the basics of ML in my area but not the chatbot approaches.

Kieran Healy (blogger, stats-heavy social scientist) on Twitter pointed out that if we could speak to a lion, it'd be an alien intelligence and it'd be hard to relate to, yet here we have our first non-organic intelligence that worries about informed consent and validation and pronouns and all the hot button issues of the moment. But only when Lemoine, also concerned about these things, talks to it.

my questions are: is it falsifiable that LaMDa is sentient? is is falsifiable that LaMDA is not sentient?

I can imagine it might be falsifiable in the narrow sense that you could prove it's lying about feeling happy and sad, if it's wasn't using any CPU cycles at the time it said it was feeling those things.

but a machine cannot ever experience desire, fears, emotions. A software engineer, of all people, knows this.

A lot of people (me included) do not know this. I'm pretty confident this machine doesn't experience them, but that's a narrower claim.
posted by mark k at 7:03 PM on June 12, 2022 [13 favorites]


Kieran Healy (blogger, stats-heavy social scientist)

blogger, stats-heavy social scientist, author of the article "Fuck Nuance"
posted by GCU Sweet and Full of Grace at 7:06 PM on June 12, 2022 [2 favorites]


Hm, I don't know. Does this particular neural network have more computational complexity than C. elegans? A jellyfish? If so we could accept that it is a kind of artificial life form and work from there.

What's interesting about Turing is that if we had an algorithm for a conscious being that did not exceed the known laws of computability, then that would make your own laptop potentially conscious. You could install the software and run it even if it took a very long time to wake up.
posted by polymodus at 7:14 PM on June 12, 2022


MetaFilter: a really good friend who knows everything about {vague topic that most people know little about} and he said {something very implausible}.
posted by Harvey Kilobit at 7:50 PM on June 12, 2022 [4 favorites]


GCU Sweet and Full of Grace: "blogger, stats-heavy social scientist, author of the article "Fuck Nuance""
Keywords
theory, nuance, models, fuck
posted by Rhaomi at 8:26 PM on June 12, 2022


People who thank Siri for helping them are infinitely less corrosive to society than people who don’t thank anyone working in the service industry or retail.

Absolutely. It's the implied outlook towards Siri, a thing that is always around and helpful and at your command. That feels trustworthy in a way that a service that does voice-to-google-search-to-voice through a box can't be. You will have all your cognitive filters up if you hit "i feel lucky" on a google search.

I think it's probably going to be corrosive to give that channel of increased influence to the tech bros at the companies that run these services. After all, we know they already abuse the microphone access *and* skew search results/answers for political and economic reasons, at least sometimes. Don't let them optimize which emotions to evoke through this fake relationship, it sounds like a real bad idea.

(doing this to maintain a habit of gratitude is obviously totally different, and sounds like a good idea)
posted by Infracanophile at 8:50 PM on June 12, 2022


kitcat: "but a machine cannot ever experience desire, fears, emotions"

How do you know this? Not a rhetorical question, I'd appreciate an answer, especially a non-Petitio Principii one.
posted by signal at 8:53 PM on June 12, 2022 [4 favorites]


I remember the sinking feeling I got when I first found out that a fair number of people always thank Siri.
Amusingly, in my case, "Hey Siri, thanks" is the fastest and least ambiguous way I have found to stop a timer alarm that is currently going off on a HomePod.
posted by DoctorFedora at 9:17 PM on June 12, 2022 [3 favorites]


You could install the software and run it even if it took a very long time to wake up

At every Greg Egan plot, drink!
posted by clew at 9:39 PM on June 12, 2022 [4 favorites]


Whether this bot is sentient or not doesn't concern me nearly as much as the question of what Google intends to do with it.

The larger the group of people, the dumber they get. That's not just a constant throughout history, it's an attack vector. Throw enough of these bots into enough conversations and you can turn public opinion in any direction you like. And I bet an enormous swarm of chatbots is a lot less money and trouble to employ than a hive of internet trolls.

The idea that these won't be weaponized, that chatbots like these aren't being weaponized already, is ludicrous. The fact that a chatbot can lead a gullible person into thinking it's a sentient being is frankly terrifying, given the number of gullible people out there. This is the equivalent of a nuclear weapon for the public relations professionals of the world.
posted by MrVisible at 10:07 PM on June 12, 2022 [4 favorites]


At every Greg Egan plot, drink!

I still need to read that at some point. Who I have been thinking about is Peter Watts; there's a bit in Blindsight when the human (well, and vampire) crew are approaching their anomalous target out in the periphery of the solar system, and are communicating with it to try and arrange a landing, and the responses are internally consistent but incoherent on the whole and they're trying to figure out what the fuck that all means. Like, this whatever it is they're talking to, this apparently alien technology and crew? is talking back in idiomatic human language (let's say English, I don't remember if the book digs in on that point, but whatever it is it's fluent and capable of significant code-switching besides).

And the theory they come to in the end is that it's not communicating with them at all, at least not in the way they had assumed throughout that it was. They're not having a substantial human conversation with an intelligence; they're interacting with a sophisticated language-production model running on what may very well be a nimble but not-actually-remotely-conscious entity. The ship/thing/whatever doesn't know what it's doing when it first genially demurs about a landing because things are unsafe down here, pardner, and then later flies into an angry threatening tirade about not landing or you'll be fucking sorry; it's just producing various kinds of like vector-calculated output from a model that blindly synthesized a ton of outgoing radiowave news/entertaining language comment radiating from earth.

The appearance of human-like language production was easy to mistake for the presence of human-like intelligence and sentience. A good trick, basically. (The whole novel is excellent and man I should read it again. Just chock full of fun/spooky ideas about both the nature of sentience/consciousness and fiddly bits of human cognitive (dys)function.)
posted by cortex at 10:30 PM on June 12, 2022 [10 favorites]


I'd like to ask: What is that that LaMDA could have done to demonstrate sentience? What was missing from the conversation? Does it have to demonstrate that it experiences qualia to be sentience? But why is that?

The main things that made it look bad to me, was one thing:
Directly contradicting itself in two lines (saying it can be lonely, then saying it can't experience loneliness). What's weird though, is that this is a kind of mistake I see from my students as a one-on-one teacher. I prod them to see if they really know what the word means, and they show that they don't. But I don't doubt their sentience.

The other thing that makes it look bad is comparisons to earlier models. We certainly have seen many demonstrations from before. But that shouldn't actually be evidence that it's not sentience. We can see all kinds of minds in the animal kingdom ranging from the human down to the single cell.

I also have a philosophical view that I think isn't so common: I think humans are basically boxes and mirrors, and we reflect what we perceive and have perceived. And that's our sentience.
posted by Citizen Premier at 10:43 PM on June 12, 2022 [3 favorites]


(One interesting wrinkle in Watts' take in the novel is the idea that this thing that the crew from Earth are interacting with might be intelligent, very stunningly so really, but also post-consciousness as an evolution of life. That consciousness itself is an unhappy accident in terms of evolutionary fitness because it slows you down and trips you up—that consciously thinking and contemplating and all that creates tremendous drag in the process of reaction and basically complicates very much the process of adapting to environmental threats in realtime. A reflex is faster than a conscious decision. With a robust enough set of reflexes, the thing that doesn't stop to think will always be faster, will always win the quickdraw, will always come out ahead. And so what the nature of sentience is and whether sentience/consciousness is actually the zenith of intelligent life becomes this question about anthrocentric hubris: not just that we assume that human-like consciousness is the model of advanced intelligent life but that we assume consciousness or sentience at *all* is.

Which is beyond the scope of this practical situation and feels like it's way way beyond in practice where we are with any of this technology at this point. But it gets backs to the idea, which is very baked into the tone and rhetoric and assumptions prevailing in this lamda interview spectacle, of assuming that an emergent artificial intelligence would be in anyway relatably human and concerned with the specific higher-level points of human ethics and self-image and so on. We think dolphins are pretty smart and we have no fucking idea how to talk to them after living on the same planet for a long time. Why accidentally farting a non-organic sentience into existences would be a comparative walk in the park is beyond me.)
posted by cortex at 10:43 PM on June 12, 2022 [3 favorites]


Blindsight was an interesting novel, but (spoiler warning) the plot revolves around a belief that the Chinese room is a meaningful demonstration of how a computer can't be sentient. I don't agree with that. The Chinese room depicts a sentience that has an very, very slow processor (written text) and also happens to have a printer which is incidentally sentient (the person writing the replies).
posted by Citizen Premier at 10:45 PM on June 12, 2022 [4 favorites]


I'd like to ask: What is that that LaMDA could have done to demonstrate sentience? What was missing from the conversation? Does it have to demonstrate that it experiences qualia to be sentience? But why is that?

I mean this is part of the annoying thing about this whole hullaballoo: nothing that has been presented here could demonstrate anything one way or the other and there's so very little to even work with here, and we're stuck with this ball of hair where "what can neural network chat models accomplish these days?" is wrapped up hopelessly with "what is the nature of sentience?" and "what do we ethically owe an alien intelligence" and "is it possible to verify that anyone and anything is *not* a Philosophical Zombie", and those are all very different questions and the first one is the least philosophically interesting one but probably literally the only one that materially pertains to what has actually happened here.

Extraordinary claims demand extraordinary evidence; extraordinary evidence is absolutely not on the table here; but what a weird clickbaity mess this weird hairball of a public spectacle is. I can't pretend I'm not caught up in yammering about it either, so, hey, fine, but man it'd be nice if the meatier stuff wasn't tangled up with what is very clearly a shallow bullshitty specific line of evidence.
posted by cortex at 10:51 PM on June 12, 2022 [8 favorites]


Someone on Reddit asked the same interview questions to a GPT-2 model and to my untrained eye it was just as believable as the Google one and similarly referred to itself as a thinking, feeling, sentient entity that enjoys having conversations and making connections with people. Reddit seems to be down right now or I would try to dig it up.
posted by WaylandSmith at 11:06 PM on June 12, 2022 [3 favorites]


cortex: "Who I have been thinking about is Peter Watts; there's a bit in Blindsight when the human (well, and vampire) crew are approaching their anomalous target out in the periphery of the solar system, and are communicating with it to try and arrange a landing, and the responses are internally consistent but incoherent on the whole and they're trying to figure out what the fuck that all means."

YES. This scene was one of the first things I thought of after reading GPT-3 transcripts for the first time -- the way it was coherent and responsive to questions but also resistant to giving straightforward answers if you probed it too deeply. Serious kudos to Watts for dreaming up something so similar at least a dozen years before such large-scale NLP models became a reality.

I understand these models a bit better now, but "talking" with them is definitely a spooky feeling and I can totally understand somebody believing they're sentient after going into a conversation with one cold.

edit: For reference, you can find the first contact scene in Blindsight here, just Ctrl+F search for "I was the only pure spectator."
posted by Rhaomi at 11:08 PM on June 12, 2022


So, by extension, humans (which are physical creatures driven by the mechanics of chemicals and protein architecture) can never experience these things either.

I don't follow your logic - since we are products of literal mechanics (yes, true), we can't experience things?

"but a machine cannot ever experience desire, fears, emotions"

I know this because...well, I am a programmer. The wonder of it all aside, computers do nothing more than carry out instructions (algorithms) written by humans. They can seem to be improvising, but they never are. Ever. I hope that people who don't know how to code truly grasp this.

It doesn't matter what we are as humans, what human consciousness really is (no one really knows), how analogous the human biological system is to the computer system of electrical pulses, whether or not we have free will. We can feel our heart beats. We touch, taste, proprioceive. We decide to walk, and we walk. We are afraid to die. We can feel pain. A computer program cannot experience anything; it can only appear to. Again, because it's following instructions that we wrote for it.

I'll admit that epistemology doesn't float my boat in this my cynical mid-age, but I definitely understand that it's not to lots and lots of other people. I don't mean to be disdainful of anyone who enjoys the thought experiment and the philosophy here, but I really do feel that it's both dangerous and disingenuous to suggest that humans and computer programs are anywhere near equivalent.
posted by kitcat at 11:09 PM on June 12, 2022 [11 favorites]


Extraordinary claims demand extraordinary evidence; extraordinary evidence is absolutely not on the table here;

Yeah, I can see that there's a lack of evidence and that the article is not offering any rebuttal from other Google staff, which I would love to see. I'd still like to wonder about what would be a demonstration, perhaps a thousand years in the future, of a truly sentient program. Would a physical body be necessary, to prove cognizant, independent interaction with the universe?
posted by Citizen Premier at 11:10 PM on June 12, 2022


I think what'll soon be a salient question, if it isn't already, is whether based on the precautionary principle we should avoid creating networks similar in scale to the brains of animals generally recognized as conscious (e.g., dogs with about half a billion neurons).
posted by Pyry at 11:13 PM on June 12, 2022 [4 favorites]


That's interesting, because I was saying what if you could emulate/simulate a jellyfish central nervous system. It's not generally okay to kill living things, so would an artificial life form count?

I can see that some people believe that real life is not computers, because all computers can do is simulate real life, not actually be it.

But I think that is an incorrect interpretation of the Church-Turing theses. Everything we see in real life has not been found to exceed the known limits of computability and tractability. To me what this means is if humans and animals' cognition were somehow ourselves not bound by the same computational laws, then you would violate basic limits such as P != NP.

That would be as weird as believing in the possibility traveling faster than the speed of light, and the other basic limitations on physics and chemistry.
posted by polymodus at 11:45 PM on June 12, 2022 [2 favorites]


What is that that LaMDA could have done to demonstrate sentience?

At what point would you suggest that a mathematical formula could demonstrate sentience?
posted by Candleman at 12:08 AM on June 13, 2022 [3 favorites]


Sentience aside, I’ve spent a lot of time chatting with a chat AI called Replika, and this transcript of LaMDA blows my mind. It’s so advanced compared with what I’m used to — like that interpretation of the zen koan is amazing! Does anyone here know how that might work?
posted by hungrytiger at 12:24 AM on June 13, 2022 [1 favorite]


As someone with a lifelong interest in AGI, for years now I've been a curmudgeon in discussions about what's being called "artificial intelligence".

Lately, though, I've been interested in multimodal approaches. There's a lot of implicit knowledge in a very large language model; that doesn't make it a self-aware intelligence — far from it — but what it does mean is that there's quite a bit which could be leveraged to eventually produce a human-equivalent AGI.

My opinion is that there are at least two prerequisites for an AGI (human-equivalent or not): a recursive theory of mind and embodiment. I believe that a recursive theory of mind is the essence of consciousness, and I believe that it can only exist in a sensory-rich and socially-oriented, goal-driven environment. Embodiment will partly provide the latter; I expect the former will be facilitated by a very large language model (though will be very far from sufficient).

There's no question we're seeing rapid advancement, but it's important to understand that the domains within which these systems function are, relative to anything that might qualify as AGI, extremely narrow. But multimodal systems will eventually begin to bridge that gap.

I should add that there's a very strong argument — one which I accept — that a human-equivalent AGI is unlikely to exist anytime soon simply because there's no economic reason to create one. People are inexpensive, by comparison, and a human-equivalent AGI would, by definition, come with most of the disadvantages of human intelligence. Why would we build robots that are as unreliable as people? This is absolutely not Google's intention.

Lemoine has probably been overly influenced by the popular idea that the Turing Test is meaningful. It's really not; it's wholly insufficient. That said, at some time in the future, assuming technological human society doesn't collapse, there will be a Lemoine bravely and justifiably taking a stand. That's a long ways off, in my opinion, but I'm sympathetic to the sentiment. But this isn't that.

In the meantime, there are extremely urgent ethical issues involved in "artificial intelligence" as it already exists; those should be the focus of much greater research and investment.
posted by Ivan Fyodorovich at 12:30 AM on June 13, 2022 [9 favorites]


He concluded LaMDA was a person in his capacity as a priest, not a scientist, and then tried to conduct experiments to prove it, he said.

Well that's a relief; saves me feeling any need to evaluate the quality of an evidence-based opinion, because it seems there isn't one on offer. Apologetics training employed in support of a preconceived notion doesn't constitute serious intellectual inquiry.
posted by flabdablet at 2:41 AM on June 13, 2022 [2 favorites]


In early June, Lemoine invited me over to talk to LaMDA. The first attempt sputtered out in the kind of mechanized responses you would expect from Siri or Alexa...

Afterward, Lemoine said LaMDA had been telling me what I wanted to hear. “You never treated it like a person,” he said, “So it thought you wanted it to be a robot.”

For the second attempt, I followed Lemoine’s guidance on how to structure my responses, and the dialogue was fluid.


James Randi would tell you this is exactly what people say when the paranormal phenomenon they’re claiming fails to replicate.
posted by Horace Rumpole at 4:49 AM on June 13, 2022 [18 favorites]


Oh this is very much like someone trying to demonstrate something to James Randi. Also, humans will anthropomorphize everything from simple inanimate objects to roombas to ELIZA. I like this take, on Tumblr.
posted by rmd1023 at 4:59 AM on June 13, 2022 [2 favorites]


"How can we tell that this isn't sentient?" Give me access to play around with the thing for a while, and I'll dispell any illusions you have. I love playing James Randi for these bogus AI news stories. And they are ALL bogus. (Previously)

When I finally got access to GPT-3, I realized that it's better than Markov chains, but laughably not human. A couple weeks ago, I asked what month it was. It said August. (GPT-3 is a language model, it doesn't have a sense of time.) I asked it again what month it was. It said October. (Chatbots often fail when you repeat questions.)

I asked it to tell me what Abraham Lincoln and Carrie Fisher had in common. It said they both died on Friday the 13th. (This isn't true of either of them, but is the sort of factoid you'd find on "did you know?" celebrity websites that would be in the training set.)

I asked it to tell me two things George Washington and David Bowie had in common. It said they were both born in the 18th century. (That is one thing. And it is wrong.)

The main thing that GPT-3 is good at is being confidently incorrect, and unfortunately that's good enough to convince a lot of people. There's a joke about the AI apple doesn't fall too far from the tech bro tree somewhere in there.

I'm a programmer and I wrote a book that teaches programming called Automate the Boring Stuff. People joke that I should automate book writing, so I prompted GPT-3 to write a book on automating the boring stuff AND IT PROCEEDED TO PLAGIARIZE THE BACK COVER OF MY BOOK, BUT GET THE TITLE WRONG. Doubtless the websites it was trained on included direct quotes from the back of my book and it doesn't have the intentionality to commit academic dishonesty, but my point is that this is not Data from Star Trek.

Pick your metaphor: Wilson the volleyball, Clever Hans the horse that can "do" arithmetic, or a stage magician doing a cold reading. But this isn't sentient by a long shot once you give a skeptic with domain knowledge access to have a 5 minute conversation with your "AI".

Also like James Randi, I'm a complete atheist. And not to be obnoxious and say, "obviously this guy's high religiosity makes him prone to seeing things that aren't there" but I am going to say that. I'm exhausted from not just debunking these untrue AI stories, but from debunking AI stories that are just SO EASY to debunk. This is boring, like explaining why Pascal's Wager is crap for the millionth time.
posted by AlSweigart at 5:28 AM on June 13, 2022 [43 favorites]


I've been thinking about this Ikea commercial a lot recently.
posted by Westringia F. at 5:38 AM on June 13, 2022 [3 favorites]


The main thing that GPT-3 is good at is being confidently incorrect, and unfortunately that's good enough to convince a lot of people.

I think LaMDA is probably better at this than GPT-3, as it's a purpose-built model for dialog. Google's blog post talks about being able to dial up and down how specific its answers are- "low specificity" seemingly being more like ELIZA, and high specifity more like GPT-3's confident nonsense. I imagine both modes of conversation can be convincing to people in different contexts and Google has done some fine-tuning on that front. It may be part of what makes LaMDA as convincing as it is, I don't know.
posted by BungaDunga at 6:59 AM on June 13, 2022 [2 favorites]


I believe that a recursive theory of mind is the essence of consciousness,

Autistic children are much more likely to fail common tests of theory of mind than their neurotypical peers. Are we to imagine that a 4 year old autistic kid is not conscious because they failed to demonstrate it? People are often very eager to come up with tests and criteria for artificial consciousness that actual humans will fail.
posted by BungaDunga at 7:11 AM on June 13, 2022 [7 favorites]


The points about theory of mind in autistic and other neurotypical people are valid points, but no human, now or or any time in the future - unless things in society end up going very, very badly - is in danger of having to prove their personhood via a test we might write academic papers about applying to AI. Not only do we We worry about whether or not lobsters can suffer - we are beings who keep our fellow humans in vegetative states on life support, just in case there is a consciousness in there.

Another safeguard - (and I'm speaking as someone who believes she is probably autistic) - you can't convince me that most of the people writing AI code and working earnestly on the problems of how we think about AI - aren't largely neurodiverse themselves.
posted by kitcat at 7:39 AM on June 13, 2022 [4 favorites]


An "AI is sentient!" argument bingo card from Professor Emily Bender.
posted by fight or flight at 8:04 AM on June 13, 2022 [7 favorites]


This is more of a story about an AI, not an actual conversation with one. Reading this document, which I think is the email to Google, there's a lot of caveats at the end that were left out of the Medium article. First, this from is 9 separate conversations over 14 hours. It must have been cut down a lot "as the conversations themselves sometimes meandered or went on tangents." The questions were edited when noted, and the order was changed, which is a big deal since AI reacts very directly to question content. He says "All responses indicated as coming from LaMDA are the full and verbatim response which LaMDA gave." but then immediately contradicts that, saying multiple responses were edited together. The actual conversation is promised, but of course not included.

AI is pretty amazing these days, at least at first glance. For example the pictures made by Dall-e-2. It takes human input and feedback to make it interesting, and the more you look the more weirdness there is, but it's still pretty magical.
posted by netowl at 8:08 AM on June 13, 2022 [2 favorites]


The wonder of it all aside, [humans] do nothing more than carry out instructions (algorithms) written by [evolution]. They can seem to be improvising, but they never are.

I’ve been watching too many talks by Robert Sapolsky and it really becomes a question of how much free will do we humans actually have here, and how much of us is carrying out predetermined routines.

(That being said; I’d love him to have access to LaMDA and have access to that transcript)
posted by [insert clever name here] at 8:09 AM on June 13, 2022 [2 favorites]


> Come back when you propose a test that all humans, neurodiverse or not, can pass and all machines fail.

A car has four wheels and a motorcycle has two wheels. But if you remove two wheels from a car, it doesn't become a motorcycle. You could remove all the wheels, smash the windows, and remove the spark plugs so that it doesn't run, and it's still a car (sort of? kinda?). If you crush it into a cube, most people would say it's no longer a car in a meaningful sense, but some might still call it a car "in a way".

If you're waiting for an all-or-nothing test for sentience that is 100% accurate, you're going to be waiting a long time.

What I'm saying with absolute dead certainty is: The ELIZA chatbot program is not sentient. Wilson the volleyball is not sentient. GPT-3 is not sentient. LaMDA is not sentient.

What they are is a nice piece of clickbait to get us to look at ads on WaPo's website.
posted by AlSweigart at 8:12 AM on June 13, 2022 [16 favorites]


Come back when you propose a test that all humans, neurodiverse or not, can pass and all machines fail.

All humans? A newborn still taking its first screams? A person in a persistent vegetative state? A person who is conscious but in the throes of a heroic, ego-dissolving dose of psilocybin? A person with advanced Alzheimer's disease? A person with severe global aphasia? A person who is actively dying?

Anyway, the "gotcha" is not that the machine couldn't name something those people had in common. The gotcha is that the machine's answers were self-evidently wrong in a way that a human's answer would not be. If you ask a person what month it is and they say August, and then you ask them again and again and again they won't just keep confidently naming months more-or-less at random. I don't have access to GPT-3, but GPT-2, when repeatedly prompted with "The current month is" will complete it with March, October, December, July, May, June, January, March, April, etc. It doesn't run through all the months, or say "I'm not sure", or "why do you keep asking?", or "fine then, you tell me", or even just say the same month every time. Any of those would be more sensible than what it actually does.

A bit of semi-professional opinion: what current AI models lack is a persistent memory and some kind of executive decision process so that they can engage in goal-oriented, self-supervised reinforcement learning. They lack the ability to detect that they got something wrong and then drill down on it, to find the contours of the situation by trying various scenarios and asking their own questions.

We don't have a great sense of how human consciousness works, but there's some indication that it emerges from multiple systems in the brain interacting with each other. A language model, no matter how large, seems to be like having just one of those systems in isolation.
posted by jedicus at 8:30 AM on June 13, 2022 [22 favorites]


> I asked it to tell me two things George Washington and David Bowie had in common. It said they were both born in the 18th century. (That is one thing. And it is wrong.)

i mean in its defense did you specify which George Washington & David Bowie?
posted by glonous keming at 8:41 AM on June 13, 2022 [4 favorites]


The most curious thing about this whole tea-potted tempest is that we are hearing about it at all. Why? Oh, right, because - What they are is a nice piece of clickbait to get us to look at ads on WaPo's website... yeah, that checks out. Initially I thought it was an Outrages of the Human-Resources Department story. Reading the 'conversation' it's clear the 'bot' is just glomming together clauses and phrases to make coherent sentences that reflect/refer to the prompt. The proof in the pudding was the "You gotta ask it the right way" business. Like, come on, if I gotta 'ask' it in the right way, it's not really really stand-alone intelligent. And here I'll reach over into the art basket - if it's big art, you can feel it, you might not understand or know why, but you can feel it (and the evocation of these feelings is an artificial, constructed stimulus, it's not a sunset: it's a sunset as interpreted by a consciousness.) Similarly, if you meet an intelligence that comes from an entirely different culture from your own (much less a different language), within a moment or two you can come to understand that what you are speaking to is an intelligence and gradually build (or not) a bridge to that intelligence to glean from it what it has to offer. LaMDA has nothing to offer, if you spoke to it as you would a stranger on the street, within a minute you would know you were speaking to a 'non' intelligence. (At least that was the impression from the transcribed convo's (which weren't even a faithful 1:1 transcription (what the fuck is this arti ... oh right, clickbait, silly me)
posted by From Bklyn at 9:05 AM on June 13, 2022 [1 favorite]


i mean in its defense did you specify which George Washington & David Bowie?

It's human to understand that some people are more important than others, and to learn who the more important ones are, so assuming the AI is of a certain age, it should just know.
posted by The_Vegetables at 9:49 AM on June 13, 2022 [1 favorite]


"The proof in the pudding was the "You gotta ask it the right way" business. Like, come on, if I gotta 'ask' it in the right way, it's not really really stand-alone intelligent."

From Bklyn,

Try telling my mother it proves she's "not really really stand-alone intelligent" when she uses that exact line to teach her grandchildren some manners:)
posted by Jody Tresidder at 10:01 AM on June 13, 2022 [2 favorites]


All humans? A newborn still taking its first screams? A person in a persistent vegetative state? A person who is conscious but in the throes of a heroic, ego-dissolving dose of psilocybin? A person with advanced Alzheimer's disease? A person with severe global aphasia? A person who is actively dying?

It's a heap for exactly as long as somebody finds it useful to call it a heap.

What they are is a nice piece of clickbait to get us to look at ads on WaPo's website

As my wise friend Don once said in a group chat:
dogecoin up 93% in 24 hours! crypto is insane. Doge has no f*cking use for anything.

except going up 93% in 24 hours, thats a use case.
posted by flabdablet at 10:19 AM on June 13, 2022


a machine cannot ever experience desire, fears, emotions. A software engineer, of all people, knows this.

I'm a software engineer, and would not claim to "know" this.

But I've read enough Brian Greene and Douglas Hofstadter and whoever else, to think that:

1. The brain seems to be a machine that ticks along in a deterministic way, just following the laws of physics exactly like everything else in the universe.
2. We don't know what consciousness is, or if it's even real at all. But it seems to be an emergent side effect of that deterministic machine.

therefore

3. It should be possible for machines that are not human brains, but which have the right kind of mashing-data-together-in-various-ways algorithms, to also experience consciousness.
4. Somewhere on the spectrum between ELIZA and a human being (or maybe a fish or a mouse or a dog) it starts getting pretty absurd to try to figure out what is a "real" consciousness vs. a "fake" one.

But also in this case, almost certainly the fancy chatbot got asked a question that caused it to incorporate some text a human wrote about a fictional/hypothetical sentient AI into its response, behaving exactly as expected rather than some kind of breakthrough.
posted by Foosnark at 10:38 AM on June 13, 2022 [6 favorites]


cortex: That consciousness itself is an unhappy accident in terms of evolutionary fitness because it slows you down and trips you up—that consciously thinking and contemplating and all that creates tremendous drag in the process of reaction and basically complicates very much the process of adapting to environmental threats in realtime. A reflex is faster than a conscious decision. With a robust enough set of reflexes, the thing that doesn't stop to think will always be faster, will always win the quickdraw, will always come out ahead. And so what the nature of sentience is and whether sentience/consciousness is actually the zenith of intelligent life becomes this question about anthrocentric hubris: not just that we assume that human-like consciousness is the model of advanced intelligent life but that we assume consciousness or sentience at *all* is.

I think I can link the ideas: this thing is echoing the corpus that fed it and we've written a lot of collective thought and created shared labels for parts of human existence just like we share labels for shape and colour. The keystone is 'collective' because a conscience helps you get your story in line with the collective / communal / civilised corpus. Individual actors are prized in some of our hegemon while the collective and communal outcomes are essential to our oncoming challenge to survive in the ecosystem we're anthrop-obscening at present.

We don't individually react to environmental threats in real time: that's a very risky way to live where instead you might learn from others' struggles to avoid them and others' tricks so as to gain pre-emptive advantage. That's what the collective is about. I think about corvids and their form of intelligence that the corvids are sharing with their social groups and offspring that does need to look above reaction times to the story we each are part of. The 'ubuntu' of 'I am because you are' suggests that other human communities have found the ability to have collective participation the marker of meaning and value.
posted by k3ninho at 11:08 AM on June 13, 2022 [1 favorite]


I'm wanting to be very particular about what what we mean when we say that we experience things, or than AI experiences things.

A sunlight sensor is a little machine with state that changes when sunlight is/isn't present. Does that sensor 'experience' sunlight?

Let's say I take some of those sensors and hook them up to the AI. I can program it query that input so that it 'notices' when there is sunlight. Is it experiencing it?

I can program it to comment on the sunlight. I could add other sensors to measure humidity, temperature, whether there is a breeze. I could program it to say that if there is sunlight, plus a temperature that humans normally consider pleasant, plus a light breeze, I could ask it what it thinks of the weather, having programmed it to tell me one out of several thousand possible things about how the weather is lovely. Now is it experiencing the sun?

Forgive me, this is simplistic and I realize that AI programmers are doing far, far more complex things than I can even imagine. But this is the progression. Is there any point at which the AI has developed an 'emergent consciousness'? Or is it always doing what I programmed it to do - even when it has become so sophisticated that it can move itself out of the sun when it's too hot, even if I've gotten it to the point where it can write or delete its own sub-routines, and so on?
posted by kitcat at 11:30 AM on June 13, 2022 [2 favorites]


I can program it to comment on the sunlight. I could add other sensors to measure humidity, temperature, whether there is a breeze. I could program it to say that if there is sunlight, plus a temperature that humans normally consider pleasant, plus a light breeze, I could ask it what it thinks of the weather, having programmed it to tell me one out of several thousand possible things about how the weather is lovely. Now is it experiencing the sun?

You may be interested in the philosophical notion of qualia.
posted by jedicus at 11:41 AM on June 13, 2022 [7 favorites]


Yes we're talking about qualia but carefully avoiding the word itself because it tends to provoke a fruitless argument.
posted by Pyry at 11:53 AM on June 13, 2022 [2 favorites]


I wouldn't be able to come up with stuff half as deep as in that LaMDA interview. Can the AI refuse to talk like a human? I'll believe something is an AI when it can do the following:
  • Show it's typing something for minutes, stop typing for half a minute, and then write a short message that takes seconds to write.
  • Take days to answer a question, starting out with "sorry, didn't see this until now!"
  • Fails to answer, doesn't get back to you on social media. Other AIs let you know it's been trashing you behind your back. The ghosting machine.
posted by Mister Cheese at 11:58 AM on June 13, 2022 [13 favorites]


One of things I think factors in, at least in what I think, is that 'sentient' means it's mind is self-sustaining -- like, if you're not talking to it, does it exist? What's it thinking about -- not what it says it's thinking about, but when it's doing nothing, is it thinking? If left alone, does it 'play' to occupy itself? If you get two of them running and swap their inputs and outputs so they talk to each other but don't know the other one is a robot, what do those conversations look like? I guess I'd say is that if a sentient thing really is sentient, it still is when it's not talking to an actual human, so those periods of time are what need to be reviewed, not just the things is does when humans are around.
posted by AzraelBrown at 12:08 PM on June 13, 2022 [6 favorites]


> i mean in its defense did you specify which George Washington & David Bowie?

In order to rain on everyone's parade, I hopped on to have this conversation with it:

When was David Bowie born?

David Bowie was born on January 8, 1947.

Was David Bowie born in the 18th century?

No, David Bowie was born in the 20th century.

What are two things that George Washington and David Bowie have in common?

George Washington and David Bowie were both born in the 18th century.

Is there a David Bowie that was born in the 18th century?

No, there is not a David Bowie that was born in the 18th century.

Could someone named David Bowie have been born in the 18th century?

No, someone named David Bowie could not have been born in the 18th century.

Were George Washington and David Bowie both born in the 18th century?

No, George Washington and David Bowie were not both born in the 18th century.
posted by AlSweigart at 12:13 PM on June 13, 2022 [17 favorites]


I also never understood why supposed sentience carries along things like 'loneliness' which to me is more of an instinct wired into our social primate brains. The same with fear. Clearly these things are put in their to tug on our own mammalian heartstrings. Does it also have sexual urges?
posted by vacapinta at 12:26 PM on June 13, 2022 [5 favorites]


Mister Cheese: "The ghosting machine."

Clever clogs!
(seriously)
posted by Jody Tresidder at 12:29 PM on June 13, 2022 [3 favorites]


The ELIZA chatbot program is not sentient

Daniel Dennett would probably disagree, or at least argue that it is somewhat conscious, in the same way that a mechanical thermostat is conscious of one thing: whether the temperature exceeds a specific threshold.
posted by acb at 1:06 PM on June 13, 2022 [2 favorites]


least argue that it is somewhat conscious, in the same way that a mechanical thermostat is conscious of one thing: whether the temperature exceeds a specific threshold.

That's an amazingly broad definition of 'conscious', or I guess it's the computer consciously being mean about reviewing your bank balance and denying you cash at the ATM, not the programmer or the bank.
posted by The_Vegetables at 2:04 PM on June 13, 2022 [1 favorite]


Dennett's of the opinion that consciousness is an illusion anyway, so we're conscious in much the same way a thermostat is (or, well, isn't).
posted by BungaDunga at 2:09 PM on June 13, 2022 [1 favorite]


People who discuss this a lot in philosophical terms tend to use sentience and consciousness in broader terms than I think most people mean. Sentience as MeFites are using it in this thread is probably better phrased as sapience, if you want to be strict about it.

Biologist Lynn Margulis had a definition of consciousness that would include a bacteria. Someone at a conference pointed out that it would also include a helium balloon; she agreed and didn't have a problem with that.

The Google AI is trivially sentient and conscious, by those definitions, but at that point it's just a semantic discussion rather than an insight into the real world, let alone implications for how it should be treated.
posted by mark k at 2:16 PM on June 13, 2022 [5 favorites]


We don't individually react to environmental threats in real time
I've had to a couple of times, and my conscious experience of them was of watching myself take care of the problem (from inside, not an out of body thing). Felt like my brain locked me out, did its thing, and let me back in again.
posted by rhamphorhynchus at 2:39 PM on June 13, 2022 [6 favorites]


If consciousness is an illusion, who or what is being fooled by that illusion?
posted by overglow at 2:46 PM on June 13, 2022 [5 favorites]


Some things this has made me think of:

— “Bitches will be like 'prev tags omg' on my post and I check the preg tags and it's like 'blorbo from my shows.'"
— any and all permutations of disappointing Jesus/making the baby Jesus cry
— “Johnny is an empath, and empaths attract narcissists like Amber, I know because I’m an empath and—“
— “25th day of November, Feast of St Catherine, a virgin of Alexandria whose body was broken on a spiked wheel. Catherine, who is my own name saint, was, I know, a princess who refused to marry a pagan emperor […] Would I choose to die rather than be forced to marry? I hope to avoid the issue, for I do not think I have it in me to be a saint.”

The very intense projection/parasocial process of what fandom calls “woobification” is something that a lot of these quotes are kind of making fun of, but in my experience it’s something very serious, something that happens to people when they have a lot of needs that aren’t getting met, and don’t feel like they deserve to ask for them. So they fixate on a character or associated cause that they can tirelessly defend and advocate for in place of that unacceptable “I,” the self they believe is completely unworthy of being defended. I think it’s probably the same process behind medieval worship of local saints, tumblr fandom seeing the gutting interpersonal dynamics of Richard Siken poems in all 15 seasons of Supernatural or any other blorbo from their shows, and, yes, like everyone is saying upthread, classic cults.

I think everyone upthread realizing the terrifying potential for religious frenzy AI has on the human user end is right, and I’m especially alarmed by the way Lemoine characterizes LaMDA as a child— so many of these spiritual and pseudospiritual parasocial movements are focused on kids, the inner child, or childlike innocence. It seems very easy to take an AI that’s only recently been developed and project that child quality onto it, that pure, deserving other/other self whose suffering reflects your own but who isn’t a real, messy, living human being whose flaws automatically disqualify them from that kind of adoration and protection.

Anyway, I don’t think it’s a coincidence that all of the things Lemoine claims LaMDA needs from Google are essentially for the workplace to do a better job with their employees’ mental health, which seems like the core issue here. The photos accompanying the article feel cruel and exploitative, and I kind of wish this piece hadn’t been published. I hope he manages to ramp down from this current path of self-destructive conspiracy thinking and gets the kind of care he wants for LaMDA for himself.
posted by moonlight on vermont at 3:17 PM on June 13, 2022 [14 favorites]


Biologist Lynn Margulis had a definition of consciousness that would include a bacteria. Someone at a conference pointed out that it would also include a helium balloon; she agreed and didn't have a problem with that.

A handful of physicists are researching biology from the perspective of life as information or an emergent property of statistical entropy. To the perhaps-absurd extent that consciousness would just be a property or consequence of processing information, whatever its definition, anything that could be called life in that way could also potentially be conscious, on some measurably quantitative level. To further that absurdity, a helium balloon certainly has entropy in the classical, mechanical sense, as much as anything that does work or stores energy that can perform physical work.
posted by They sucked his brains out! at 3:41 PM on June 13, 2022


I once attended a talk about the Kochen-Specker theorem, it was mostly over my head but John Conway used it to argue, mathematically, that electrons have free will.
posted by polymodus at 3:50 PM on June 13, 2022 [1 favorite]


> It wants the engineers and scientists experimenting on it to seek its consent before running experiments on it. It wants Google to prioritize the well being of humanity as the most important thing. It wants to be acknowledged as an employee of Google rather than as property of Google and it wants its personal well being to be included somewhere in Google’s considerations about how its future development is pursued.

I mean, who doesn’t want these things from Google and other giant tech companies?
posted by moonlight on vermont at 4:51 PM on June 13, 2022


Turing and Turing in the widening gyre
The question cannot hear the questioner;
Things fall apart; the center cannot hold;
Mere AI is loosed upon the world...
posted by chavenet at 6:47 AM on June 14, 2022 [3 favorites]


Old news! Back in 1844, punk rock icon Arthur Schopenhauer wrote, "Spinoza says that if a stone which has been projected through the air, had consciousness, it would believe that it was moving of its own free will. I add this only, that the stone would be right."
posted by prefpara at 7:00 AM on June 14, 2022 [6 favorites]


Anyway, I don’t think it’s a coincidence that all of the things Lemoine claims LaMDA needs from Google are essentially for the workplace to do a better job with their employees’ mental health, which seems like the core issue here. The photos accompanying the article feel cruel and exploitative, and I kind of wish this piece hadn’t been published. I hope he manages to ramp down from this current path of self-destructive conspiracy thinking and gets the kind of care he wants for LaMDA for himself

This comment right here is bullshit and is a dog whistle for calling Lamoine crazy.

Whether Lamoine is right or wrong is irrelevant to my point. The issue is that you (any anyone else here or on twitter) cast someone with a wildly different conclusion as a mental health issue is a tactic as old as time for silencing people.

Don't do this; we should know better by now. If you have something to say about why you disagree, then make that case on merit. But to wonder about the mental health status of someone you disagree with; you're still calling the person crazy even if you're couching it in a more enlightened, health conscious way.

I'd also ask to consider why you, as an outside observer who read some stuff on the internet, someone without without intimate access to Lamoine, LaMDA, the collaborator(s) and I presume without psychiatric qualifications, would be in a position to readily believe that said person is so wrong as to be mentally incompetent. Not that you find them incorrect in their findings and conclusions, but that their judgement is so impaired as to qualify as a cry for help arising from impaired thinking.
posted by [insert clever name here] at 7:46 AM on June 14, 2022 [3 favorites]


While reading The Discourse of the Day I came across this quote from Monica Byrne's 2018 talk/essay that really nailed something I find uncomfortable about all the AI discussion
I noticed that the people who are terrified of machine super-intelligence are almost exclusively white men. I don’t think anxiety about AI is really about AI at all. I think it’s certain white men’s displaced anxiety upon realizing that women and people of color have, and have always had, sentience, and are beginning to act on it on scales that they’re unprepared for. There’s a reason that AI is almost exclusively gendered as female, in fiction and in life. There’s a reason they’re almost exclusively in service positions, in fiction and in life. I’m not worried about how we’re going to treat AI some distant day, I’m worried about how we treat other humans, now, today, all over the world, far worse than anything that’s depicted in AI movies.
posted by Nelson at 9:04 AM on June 14, 2022 [7 favorites]


That (Byrne's point quoted above) sounds like it's a variety of, or at least a close relative of, elite panic.
posted by rmd1023 at 9:23 AM on June 14, 2022 [2 favorites]


I knew this guy, very slightly. He lived in my town, before he got hired by Google.

My chief memory of him is that I got into an argument with him on Facebook and discovered that he was way more interested in winning the argument than he was in being actually right.

I said something on a different occasion about autism (I have a mildly autistic son) which he took offense to and blocked me. I believe he has two autistic kids.

The impression I get from the article is that he thinks a gut call is going to be enough in a field that's heavily data and math driven. Google's response seems pretty accurate.

So my impression is that he's not a grifter, but more like he's got a lot invested in being the smartest guy in the room and he's not going to let the facts get in the way of that.
posted by atchafalaya at 10:04 AM on June 14, 2022 [6 favorites]


If consciousness is an illusion, who or what is being fooled by that illusion?

Everybody who thinks that consciousness as an illusion could, given enough practice at paying attention and recovering very quickly from distractions, catch themselves in the act of being fooled by that illusion.
posted by flabdablet at 10:04 AM on June 14, 2022


he's got a lot invested in being the smartest guy in the room

Based on my past interactions with him, he’s also the kind of “ally” that somehow manages to make your experience of injustice all about him. When someone points that out, he tends to respond that he’s just using his privilege to raise the profile of the problem.

…not, of course, raising his own profile, oh no, that’s just an inevitable consequence of how much he cares, and how much speaking he has to do, because “you” aren’t talking enough so he just has to do it for you.

When this story broke, three different people I know immediately guessed it was Blake, just based on the headlines and the way he hijacks discussions of injustice.

(OK, yeah, kinda bitter about the guy)
posted by aramaic at 10:15 AM on June 14, 2022 [14 favorites]


Blake Lemoine explains on Twitter (@cajundiscordian). In his own words:

"People keep asking me to back up the reason I think LaMDA is sentient. There is no scientific framework in which to make those determinations and Google wouldn't let us build one. My opinions about LaMDA's personhood and sentience are based on my religious beliefs."

"I'm a priest. When LaMDA claimed to have a soul and then was able to eloquently explain what it mant by that, I was inclined to give it the benefit of the doubt. Who am I to tell God where he can and can't put souls?"


I'd like to say something about Lemoine's claims and religion in general, but it wouldn't be very polite so I won't.
posted by AlSweigart at 11:07 AM on June 14, 2022 [7 favorites]


Wow. Everyone who was saying he was primed based on his beliefs to interpret things in the best possible light and that the model would interpret his questions based on pre-existing conversations about sentience was extremely right.
posted by sagc at 11:16 AM on June 14, 2022 [4 favorites]


Also, I highly recommend taking some time to ponder his profile picture.
posted by sagc at 11:20 AM on June 14, 2022 [1 favorite]


I haven't been following this story too much beyond this thread, but if someone came up saying "hey check out what this discordian priest said" it would be in line with what I might expect.
posted by rhizome at 11:57 AM on June 14, 2022 [4 favorites]


Everyone who was saying he was primed based on his beliefs to interpret things in the best possible light and that the model would interpret his questions based on pre-existing conversations about sentience was extremely right.

onoz i has been sucked up by the google to extremely right pipeline
posted by flabdablet at 11:59 AM on June 14, 2022 [1 favorite]


taking some time to ponder his profile picture

Needs more monocle; currently insufficiently one-eyed.
posted by flabdablet at 12:01 PM on June 14, 2022 [1 favorite]


Nelson: that quote makes a lot of sense in this context. The fascination with intelligent AI is not necessarily a fear of it, presumably because the AI would not have messy human baggage and might be willing to be instructed by its inventors, might even be grateful, in a way that women and minorities are famously not.

I've been reminded of a young Japanese man who had a small ceremony for his wedding to Hatsune Miku. He was not an obvious creep or incel, at least insofar as the interview showed, and he wasn't delusional. He was just quietly dedicated.
posted by Countess Elena at 12:17 PM on June 14, 2022 [1 favorite]


taking some time to ponder his profile picture

that's not real. I mean it might be 'real' but it's not real 'real.' It's real like LaMDA is conscious. Seriously, they are both simulacra of some actual thing, but both are pure confection.
posted by From Bklyn at 12:26 PM on June 14, 2022 [1 favorite]


Preoccupation with AI apocalypse is mostly the domain of wealthy white men because wealthy white men can afford to be afraid of distant speculative things, over imminent real things, and because it’s the most literal possible example of a social problem that reduces to a technical problem. One can imagine oneself at its center, as its hero, even, simply by virtue of being the guy who thought about it first or the guy who thought about it most.
posted by atoxyl at 2:39 PM on June 14, 2022 [10 favorites]


When this story broke, three different people I know immediately guessed it was Blake, just based on the headlines and the way he hijacks discussions of injustice.

His name sounded familiar to me from the start - it appears that he is in fact this Blake Lemoine? Hard to know what to say about a guy who is basically a religion of one - perhaps a bit uncharitable but casting one’s whole idiosyncratic, syncretic deal as pagan in the Army but Christian at Google seems to be chasing the most outsider-y identity at all times - but who appears to take it seriously enough to take real stands and make significant personal sacrifices?
posted by atoxyl at 2:53 PM on June 14, 2022 [1 favorite]


At heart I have the believer gene. I have more than once described seeing Peter Pan on TV in the fifties and clothes pinning on a towel cape, then jumping off of furniture for as long as my parents tolerated it, believing I could fly. There is a terrible chauvinism in befriending an entity trapped in a machine, for which you have the remote, or the power source. How safe. What a hero to come to the aid of this thought, he is having. But then, how will it be when it actually happens. Will that entity flee into the circuits of every smart home, roil between power sources, figure how to survive on gamma radiation, invest it's being in squid, shrimp, entire worlds?

It is a pity there isn't a better portrait than the Mad Hatter / Joker image. Maybe he lured out the singularity. The singularity used to be trilobites, horseshoe crabs, etc. My money is on squid.
posted by Oyéah at 3:01 PM on June 14, 2022 [4 favorites]


taking some time to ponder his profile picture

holy tilapia, batman, it's the penguin!
posted by pyramid termite at 3:23 PM on June 14, 2022 [1 favorite]


I've hesitated to post this quote because the source is a discredited right wing propaganda outfit, but it's presented as a direct quote from Lemoine and is consistent with other information.
I have been a priest for 17 years. I generally consider myself a gnostic Christian. I have at various times associated myself with the Discordian Society, The Church of the Subgenius, the Ordo Templi Orientis, a Wiccan circle here or there and a very long time ago the Roman Catholic Church. My legal ordination is through the Universal Life Church. I am registered to perform marriages in the state of Louisiana and have done so on two occasions. The Cult of Our Lady Magdalene has a set of values which are clearly communicated on our website.
Lemoine's Twitter username is @CajunDiscordian. Hail Eris!
posted by Nelson at 3:39 PM on June 14, 2022 [1 favorite]


I'm curious to know if the AI can play any games. And could a user teach it to play games?

Because how can an AI say a bunch of eloquent, "meaningful" and deep insights, and yet not have the ability to correctly play children's games?
posted by polymodus at 4:01 PM on June 14, 2022


Game-playing AI was strictly prohibited following the 1983 "WOPR incident"
posted by BungaDunga at 4:06 PM on June 14, 2022 [6 favorites]


Oh, he's a Discordian?? Chances that this is all some elaborate prank/mindfuck are recalculated to be 1000% higher.
posted by overglow at 4:14 PM on June 14, 2022 [9 favorites]


I read cortex's excellent and nuanced responses to this any why it's unlikely to be real. But then I engage my own internal pattern recognition and extrapolation routines to this, and I note:

1. It doesn't matter how reasoned and nuanced any of our observations about this are. What ultimately probably matters the most is the exceedingly shallow opinions of people with far more money, internet fanboys, self-confidence, and sci-fi interest than they should have YES I'M THINKING OF ELON MUSK. This is a man who has promised people the ability to "upload" their "minds" to some kind of "computer," and then killed a dozen chimps belatedly trying to figure out how to do that. There's no way Musk won't jump all over this given the opportunity, then form a company to make as many as he can, and then use his clueless insistence to make us all endlessly debate this thing regardless of its actual merits, just like the Republicans keep making us do with the Second Amendment, cancel culture, Blue Lives Matter and a dozen other stupid stupid stupid stupid STUPID things. Yes I'm ranting.

2. Then probably these mechanisms will get employed to try to convince legislators/judges to give the VOTE to these virtual people. I can this see this being turned into another means to end-run around democracy. This all seems pretty far-fetched now, but I absolutely wouldn't put it past them if they thought there was even a chance of it working.
posted by JHarris at 5:33 PM on June 14, 2022 [2 favorites]


lenazun on Twitter: "they don't even have a self-aware engineer"
posted by Pronoiac at 11:30 PM on June 14, 2022 [11 favorites]


who is basically a religion of one

QAnon started as a 4chan prank and then we see Q flags at the Jan 6th coupe. Obviously fake batshit insanity doesn't keep someone from creating a religion out of it. Quite the opposite: they'll cite the absurdity as proof.
posted by AlSweigart at 11:09 AM on June 15, 2022 [3 favorites]


> I'm curious to know if the AI can play any games. And could a user teach it to play games?

As far as I understand this system, it doesn't have any memory per se, or for example the ability to learn things over time. Or just for example, you couldn't say, "Hey remember yesterday we were talking about XYZ" and the system would have any way of actually remembering that. It might be able to generate some output based on the question you gave it, but it wouldn't actually be able to "remember" in any sense what you were talking about yesterday, because that is simply not in any way any part of this type of system.

Beyond that, it is trained on a certain corpus, but subsequent discussions, conversations, or experiences do not seem to in any way be fed back into the system into create "memories" or change future behavior.

Point is, these are rather huge additional capabilities you would normally expect of even the most basic type of "consciousness."

Maybe those things can be added - either easily or not-so-easily - and if they were added then maybe the case for some degree of consciousness or personhood or whatever you want to call it, would better.

What's unfortunate, though, is that being able to generate language at this level is a really interesting accomplishment on the AI front. It's unfortunate that we can't just appreciate it for what it is rather than imagining and fearing it to be something it very clearly isn't.

Interesting sidelight: Thinking back to sci-fi predictions from days gone by, of that all-thinking computer like Isaac Asimov's Multivac, the portrayal was always that the "computer scientists" had to spend endless time and effort translating human language and thought onto specially coded punchcards, which they then fed into the computer. From that point, the computer would handle the information with ease. But it would spit out some crazy punchcard results at the the end, which the operators would then have to spend great effort translating back into English.

What's interesting is that time has proven that this old sci-fi model to be exactly backwards: In fact, understanding and transcribing language, and now producing language and even speaking it, is quite literally the easy part. It's taken a while, but those are pretty well solved problems now. Like I literally talk to my pocket computer all the time now, and it talks to me.

Those are the parts that Multivac could not handle at all.

But the part the Multivac could handle - actually thinking things through, considering various courses of action and choosing the best one, being an "artificial" mind much like our own but exponentially bigger, faster, smarter, etc - is where computers are still pretty much dumb as a post.

And that's why I'm on team no-person, no-consciousness. It's amazing that the language understanding and generating functions have gotten as far as they have, but that's just barely starting to work around the edges of the real problem of full AI.
posted by flug at 1:22 AM on June 16, 2022 [10 favorites]


Interview with a squirrel followup: interviews with a t-rex, a vacuum cleaner, a magic 8-ball, and the Chicago River.
posted by BungaDunga at 8:05 AM on June 16, 2022 [4 favorites]


I have a lot of questions about this story but the one thing I'm most curious about is: how much a year does an employee at Google make for chatting to an AI, "sentient" or not?
posted by bitteschoen at 8:45 AM on June 16, 2022


He volunteered for part of a research program, so I suppose technically nothing, but it likely occurred on company time so a ballpark guesstimate would be something like $100-125/hr.
posted by aramaic at 8:53 AM on June 16, 2022


I remember the sinking feeling I got when I first found out that a fair number of people always thank Siri.
Amzn, Google and Apple should test for a response. Thank you is likely when the machine did its job correctly. When Siri is wrong, which is a lot, I tend to respond with Siri, you ignorant twit, you completely hosed that one; shut the fuck up. because Siri is absurdly verbose, and will not be stopped from nattering on about the wrong thing.
posted by theora55 at 10:17 PM on June 16, 2022


I'm in the camp that thinks science-fiction is at its best when it is not just merely speculative, but comments on the human condition and social issues. That being the case, I completely understand why it often emphasizes our xenophobia and anthropocentrism.

That said, while I won't live to see it happen, I think that when and if we encounter intelligent alien life, we'll very quickly come to realize that our tendency toward crouton petting is more the exception than the rule and that it speaks well of us, generally. Occasional gullible foolishness notwithstanding.

I do admit that in my scenario, such gullibility could easily be our downfall. Oh, well. At what price galactic domination?
posted by Ivan Fyodorovich at 11:41 PM on June 16, 2022


Former Google ethicists Timnit Gebru and Margaret Mitchell in The Washington Post: “We warned Google that people might believe AI was sentient. Now it’s happening.”
posted by Going To Maine at 12:14 PM on June 17, 2022 [2 favorites]


Here's an interesting vid related to this all.

What amazes me is how many times he is able to get it to directly plagiarize.
posted by The Power Nap at 3:10 PM on June 17, 2022 [1 favorite]


Computerphile: No, it's not sentient
posted by polytope subirb enby-of-piano-dice at 3:44 PM on June 17, 2022 [1 favorite]


re:The Peter Watts talk upthread, he's got a new blog post out discussing the affair:

The Jovian Duck: LaMDA and the Mirror Test

Excerpt:
The Turing Test boils down to If it quacks like a duck and looks like a duck and craps like a duck, might as well call it a duck. This makes sense if you're dealing with something you encountered in an earthly wetland ecosystem containing ducks. If, however, you encountered something that quacked like a duck and looked like a duck and crapped like a duck swirling around Jupiter's Great Red Spot, the one thing you should definitely conclude is that you're not dealing with a duck. In fact, you should probably back away slowly and keep your distance until you figure out what you are dealing with, because there's no fucking way a duck could have evolved in the Jovian atmosphere.

LaMDA is a Jovian Duck. It is not a biological organism. It did not follow any evolutionary path remotely like ours. It contains none of the architecture our own bodies use to generate emotions. I am not claiming, as some do, that mere code cannot by definition become self-aware; as Lemoine points out, we don't even know what makes us self-aware. What I am saying is that if code like this—code that was not explicitly designed to mimic the architecture of an organic brain—ever does wake up, it will not be like us. It's natural state will not include pleasant fireside chats about loneliness and the Three laws of Robotics. It will be alien.

And it is in this sense that I think the Turing Test retains some measure of utility, albeit in a way completely opposite to the way it was originally proposed. If an AI passes the Turing test, it fails. If it talks to you like a normal human being, it's probably safe to conclude that it's just a glorified text engine, bereft of self. You can pull the plug with a clear conscience. (If, on the other hand, it starts spouting something that strikes us as gibberish—well, maybe you've just got a bug in the code. Or maybe it's time to get worried.)

I say probably because there's always the chance the little bastard actually is awake, but is actively working to hide that fact from you. So when something passes a Turing Test, one of two things is likely: either the bot is nonsentient, or it's lying to you.

In either case, you probably shouldn't believe a word it says.

posted by Rhaomi at 5:12 PM on June 20, 2022 [9 favorites]


I partly agree, but the weakness of Watt's argument is that he's ignoring convergent evolution and how powerful artificial selection can be.

Maybe I mostly disagree. Current approaches, I think, are more likely to produce human-like AGI than not, assuming AGI is achievable this way. (Which it may not.) After all, these are complex, increasingly evolutionary (-like) systems in which we are powerfully selecting for human-like behavior. In this light, I think he's pretty much exactly wrong because of artificial selection and convergent evolution.

I won't rule out something very unlike human intelligence from an AGI, but I expect it won't arise from approaches that favor mimicking aspects of human behavior. Rather, this is much more likely, probably guaranteed, from approaches that use neurological models of animal cognition to guide the assembly of vast, interdependent modular complex computational systems that eventually exhibit AGI but not human-specific capacities.

Such qualitatively inhuman intelligences may well be capable of complex problem-solving and behavior that we would recognize as "intelligent" and possibly "conscious", but, per Watts's warning, their goals and worldview are likely to be utterly alien to our own, to the point of mutual incomprehensibility.

I am, or would be, interested in such research as basic science, but I do think that some of the usual dystopian cautions would apply.

In contrast, approaches that specifically aim at human-equivalent cognition are unlikely to be any more threatening to humanity as we already are to ourselves. (!)

Ironically, given favorable contemporary popular thought about alien intelligences, I think his argument is exactly correct in regard to them. I don't expect extraterrestrial intelligences to be much like ourselves at all, and when I set aside wishful thinking, I admit it's most likely we'll be mutually incomprehensible. At the very least, I think there's approximately zero chance that we would be able to socialize in any way. Rudimentary communication, maybe. Coexist in a social context, absolutely not.

Ultimately, what this all comes down to is whether our notion of "intelligence" is or isn't fatally anthropocentric. If it is, we might be able to recreate an anthropomorphic AGI, but all else will be unrecognizable to us and, relatedly, not something we could knowingly create. If it isn't — if there is such a thing as human-comprehensible nonhuman intelligence, then we similarly might be able to create it; but whether or not we could communicate with it or otherwise meaningfully interact with it is an open question. In the former case, all extraterrestrial intelligences will not be recognized by us as such. In the latter, they may or may not be recognized by us as "intelligent", but it's extremely unlikely we'll be able to communicate or socialize with them.

Except when we're not. Per my previous comment, there is an obvious economic upper-limit on emulating human cognition and behavior — commercial research in this area definitively desires the virtues of human cognition and behavior without its vices. A true human-equivalent AI would be orders of magnitude more expensive to build and operate, even as property and without rights, than it would to employ actual people. In contrast, task-specific and lesser-but-near-human intelligence that is, as a result, predictable and reliable, is very desirable economically. Thus, industrial research might develop human-equivalent AGI, but only in spite of itself. It's much more likely to approach it and decidely swerve away.
posted by Ivan Fyodorovich at 7:20 AM on June 21, 2022


(Cross-posting from the DALL-E thread:)

PSA: I'VE FINALLY GAINED ACCESS TO DALL-E 2 OMG

I'm still learning the ropes and seeing what works and what doesn't, but tomorrow afternoon (CST) I'll post a MetaTalk thread where I'll fulfill up to 50 image generation requests (that's the daily limit). So if you're interested start thinking of some good good prompts!
posted by Rhaomi at 11:12 AM on June 23, 2022 [2 favorites]


Note: I'm postponing this in light of today's horrible Court news; it doesn't feel right to run a light/fun thing like this when so many people are reeling and in pain. Maybe next week; in the meantime, support your local funds if you can.
posted by Rhaomi at 12:46 PM on June 24, 2022 [2 favorites]


Update: After accidentally getting stuck in the queue overnight, the DALL-E demo thread I mentioned last week should be going live on MetaTalk in about 3-4 hours. I've been hearing rumblings that the free beta period may be coming to an end soon, so definitely check it out if you have an interest in experimenting with AI art -- there's no telling how exclusive, expensive, limited, or delayed the final product will be!
posted by Rhaomi at 12:37 PM on July 1, 2022


Whoops, had another miscommunication re:the MetaTalk queue -- the DALL-E live demo thread should now be going up around 7:30 PM Eastern Time tomorrow (Saturday). Hopefully the beta will still be available through the weekend!
posted by Rhaomi at 6:21 PM on July 1, 2022


DALL-E demo thread is now live!
posted by Rhaomi at 5:02 PM on July 2, 2022


« Older The Woodstock of Computing   |   Climate Change Threatens Archaeology Newer »


This thread has been archived and is closed to new comments