The Quality of Mercy Is Not Strnen
October 8, 2019 9:10 AM   Subscribe

Finally, I crossed my Rubicon. The sentence itself was a pedestrian affair. Typing an e-mail to my son, I began “I am p—” and was about to write “pleased” when predictive text suggested “proud of you.” I am proud of you. Wow, I don’t say that enough. And clearly Smart Compose thinks that’s what most fathers in my state say to their sons in e-mails. I hit Tab. No biggie. And yet, sitting there at the keyboard, I could feel the uncanny valley prickling my neck. It wasn’t that Smart Compose had guessed correctly where my thoughts were headed—in fact, it hadn’t. The creepy thing was that the machine was more thoughtful than I was. From The Next Word, a longform look at machine-enabled writing and predictive text by John Seabrook in The New Yorker
posted by chavenet (28 comments total) 24 users marked this as a favorite
 
There's sort of a weird Roko's Basilisk element in this....So in this future you have computers which can, without themselves being conscious the way humans are, perfectly mimic what humans do and say and even expand upon it - produce novels, for instance.

Is this because humans themselves aren't actually "conscious" the way we think we are? Are we just flesh AI?

What do we need humans for? I could see a Peter Thiel future in which humans don't have any special status - killing one isn't important because humans aren't human the way it is conventionally constructed. Humans are only important as allocators of demand and money, which is only important because the Thiels of the world want to make money. Humans would not be considered to exist for themselves because human consciousness isn't really a thing; we just think we're conscious. You don't think twice about turning off the AI machine, right?

Do humans suffer, or do we just think we're suffering? If we do suffer, is suffering our last useful capacity?

Like, is the future just humans and AI being interchangeable things? Peter Thiel et al wouldn't actually, like, have special philosophical status, but since they're the rich ones they would pull the strings.
posted by Frowner at 9:30 AM on October 8 [5 favorites]


Also, nice Mekons reference.
posted by Frowner at 9:31 AM on October 8 [12 favorites]


The creepy thing was that the machine was more thoughtful than I was.

nah, just clearer, cleaner, more refined and defined. Funny how discussing this topic seems to need italics. It's almost as if we're trying to put one over on the machines. Do they know yet what to make of italics?

Anyway, I doubt anyone reading this is getting an entirely good feeling. Is there anyone that feels this is an entirely good thing? If so, I would suggest another long read, recently posted here.

Thoughts on the planetary: An interview with Achille Mbembe

It may seem a completely different concern, but ... well ...

On the other hand, there is a shifting distribution of powers between the human and the technological in the sense that technologies are moving towards “general intelligence” and self-replication. Over the last decades, we have witnessed the development of algorithmic forms of intelligence. They have been growing in parallel with genetic research, and often in its alliance. The integration of algorithms and big data analysis in the biological sphere does not only bring with it a greater and greater belief in techno-positivism and modes of statistical thought. It also paves the way for regimes of assessment of the natural world, and modes of prediction and analysis that treat life itself as a computable object.

Concomitantly, algorithms inspired by the natural world, and ideas of natural selection and evolution are on the rise. Such is the case with genetic algorithms. As Margarida Mendes (“Molecular Colonialism”) has shown, the belief today is that everything is potentially computable and predictable. In the process, what is rejected is the fact that life itself is an open system, nonlinear and exponentially chaotic.


bolding, mine.
posted by philip-random at 9:49 AM on October 8 [6 favorites]


We are conditioned creatures yet we can try to manage what conditions we exist in so that we can hear our own voice clearly. Using automatic composition is, in effect, making it harder to hear what our hearts need to express.
posted by kokaku at 9:57 AM on October 8 [3 favorites]


it is really surprising how well these statistical models can produce natural-seeming language, but I tend to think this is more of a surprising fact about human language than about the models.

a lot of language can be captured by distressingly simple models which could not conceivably be called intelligent -- they don't amount to much more than counting word pairs and triples, yet do remarkably well. I find this troubling but I don't really know what to make of it. it seems that the structure of human language can be reproduced by a mechanism much, much simpler than the ones we actually use.
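The "counting word pairs and triples" idea is, roughly, an n-gram language model. A minimal bigram sketch (the toy corpus and function names are my own illustration, not from the article):

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Count word-pair frequencies: counts[w1][w2] = times w2 followed w1."""
    counts = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        words = sentence.split()
        for w1, w2 in zip(words, words[1:]):
            counts[w1][w2] += 1
    return counts

def generate(counts, start, length=8, seed=0):
    """Sample each next word in proportion to how often it followed the last one."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break  # dead end: this word never had a successor in training
        words, weights = zip(*followers.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

corpus = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "the cat chased the dog",
]
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Nothing here could conceivably be called intelligent, yet scaled up to billions of word pairs this kind of counting already produces plausible-sounding short stretches of text.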

GPT-2 is part of a long line of research into language modeling -- the results are very strong and impressive, but less shocking given the context of the past five years of steady improvement in deep-learning-based language models, and thirty-plus years of other techniques before that. From that perspective they don't feel like some kind of big rupture, just like this year's new-and-improved model.

A post from 2015 which, if you scroll down, shows some examples of what could be done then in the early days of deep learning for language, trained on one person's computer on a tiny amount of data.

Is this because humans themselves aren't actually "conscious" the way we think we are? Are we just flesh AI?

I think this is almost unexamined conventional wisdom among Silicon Valley people -- whatever consciousness is exactly, it has to do with certain computations being performed. Of course staring at computers all day might tend to make you think everything is a computer, so...
posted by vogon_poet at 10:48 AM on October 8 [6 favorites]


What a shocker, even a typewriter has a more nuanced social sense than this parent! I wonder what his smart fridge thinks about this? Do you suppose it consoles his son, when he comes for late night snacks, or does it just dispense diet advice?
posted by Oyéah at 11:18 AM on October 8 [4 favorites]


vogon_poet: "... it seems that the structure of human language can be reproduced by a mechanism much, much simpler than the ones we actually use"

You might look at this post titled "English has been my pain for 15 years". The author has a section called "European English, that funny language" that seems to be saying essentially the same thing through a current, in-use example.
posted by aleph at 11:38 AM on October 8 [1 favorite]


We should get one and dump MetaFilter into it.
posted by zengargoyle at 12:43 PM on October 8 [2 favorites]


Nit:
There's sort of a weird Roko's Basilisk element in this....
If that had anything to do with Roko's Basilisk I was unable to understand how.
It also paves the way for regimes of assessment of the natural world, and modes of prediction and analysis that treat life itself as a computable object.
I am not sure why the bolded part needed bolding... what exactly would be the problem with this? And again,
As Margarida Mendes (“Molecular Colonialism”) has shown, the belief today is that everything is potentially computable and predictable. In the process, what is rejected is the fact that life itself is an open system, nonlinear and exponentially chaotic.
I am not sure what the bolded part is even supposed to mean. I know what all of the words mean, and I agree that they are arranged in a syntactically correct English sentence clause, but... I think I smell vitalism.

And circling back to the first item I quoted,
Is this because humans themselves aren't actually "conscious" the way we think we are? Are we just flesh AI?

I think this is almost unexamined conventional wisdom among Silicon Valley people -- whatever consciousness is exactly, it has to do with certain computations being performed.
I dunno, how do you think you're conscious, and how does that differ from being a "flesh AI?" As for "unexamined conventional wisdom," I don't think it's unexamined at all.

I read the item at the link in OP, and frankly it was a slog for the most part. I suppose a reader not acquainted with the history of the disappointments of the AI program would find that part of it informative, though I think the writer effectively concealed that said history is basically one of failure. The writer's reflection on his own art was the only part that held any interest, for one knowledgeable about the history of AI projects who also adheres to naturalist metaphysics (particularly concerning the nature of consciousness).

The general topic of GPT-2 and what it reveals about language and thinking was already on my mind, because I recently read this post and also the thing linked there. The connection I was in the midst of making is to Buddhist notions of Right Speech.

As is often the case with the Buddha's teachings, much concerning right speech is a list of speech to avoid: above all, deceptive speech, but also harsh speech, abusive speech, speech motivated by anger or ill-will, and also idle chitchat.

Say what?

I'd long thought this last probably had something to do with how gossip is a distraction from serious matters for monks who presumably have better uses for their time, and maybe also with how it's often a breeding ground for disputation that can lead more-or-less directly to the other kinds of speech that are to be refrained from, but maybe it's at least as important that idle chitchat is first cousin to the emissions of bots? When you're engaging in idle chitchat, isn't it true that you're interacting with other people without engaging your mind, or really even being present at all?
posted by Aardvark Cheeselog at 1:04 PM on October 8 [8 favorites]


I dunno, how do you think you're conscious, and how does that differ from being a "flesh AI?" As for "unexamined conventional wisdom," I don't think it's unexamined at all.

If I'm running some kind of predictive algorithm and there is a huge server fire and the AI is gone, I may be upset or frustrated or disappointed, but I'm not going to mourn for the AI, because I don't think of the AI as existing for itself or being conscious. (Leave aside all the SF stuff about self-aware/conscious/people-like AI.) What's more, if you present me with an identical copy of the AI that does the same stuff, I don't think, "but I still miss the old one", whereas even if you provided me with, eg, an identical copy of my mother who acted just as she did, I might be perplexed and have all kinds of complicated feelings but I would not believe that my mother was literally back from the dead.

But if humans are just really sophisticated predictive text generators, then we're not actually conscious/we are not actually individuals. Our "consciousness", like our will and individuality, is a fake; it's like if a predictive text AI were to "predict" what it should say in conversation and fool you into conversing with it like a conscious person.

And if humans are just sophisticated predictive text generators - well, who gives an algorithm human rights? Does it really suffer or merely have the appearance of suffering? If people are interchangeable, will-less, spark-less things that don't exist for themselves, then everything - labor rights, prison abolition, access to medical care, freedom from violence - is all a joke. All the logic of "humans shouldn't be worked to death" is predicated on the idea that humans exist for themselves, that they're not just another form of sewing machine or car.
posted by Frowner at 1:28 PM on October 8 [2 favorites]


My point being that while I have no notion whether I'm conscious or just have certain consciousness-like features, I'm pretty sure that it's very much in the interests of Peter Thiel et al to adopt a view of humans as interchangeable flesh AI, precisely because treating humans like sophisticated predictive text generators rather than things that exist for themselves is very much to their advantage.
posted by Frowner at 1:39 PM on October 8 [7 favorites]


My version of predictive composing is when I start writing an essay like this in my head and someone famous publishes a better version of it in the New Yorker for $5,000.
posted by mecran01 at 1:42 PM on October 8 [6 favorites]


FYI, many of Janelle Shane's fun experiments (previously on MeFi) use GPT-2.
posted by capricorn at 1:51 PM on October 8 [1 favorite]


who gives an algorithm human rights?

The uncomfortable question is, who gives humans human rights?

In practice, the answer is: empowered humans, if and when they choose to do so.
posted by sjswitzer at 1:58 PM on October 8 [6 favorites]


just sophisticated predictive text generators

By sophisticated, you must mean possessing a personality (vast, complicated set of reactions to just as vast a set of stimuli), an (imperfect) memory, and a sense of self that allows thoughts like "this is the same me, thinking now about my early days, as lived those days." And even more: "this thought belongs to the same me even though my personality has certainly evolved quite a bit since then."

So forget about the Turing test: how do I know whether or not, in thinking these thoughts, I am just predicting text in my head?

Assuming I am just predicting text using my personality and memory and feedback loops of self-awareness, then it is hard not to conclude that a really good version of GPT-2 turned on and allowed to generate text recursively forever...is conscious. It would perhaps be cruel to build such a thing and then tell it to turn itself off in ten minutes, for instance. By then it would have thought a lot of really sad stuff!
posted by TreeRooster at 1:59 PM on October 8 [4 favorites]


But if humans are just really sophisticated predictive text generators

As people increasingly interact through machines the tendency to confuse the two also increases. The statement that a machine can produce text on a screen "just like a person" obscures the fact that producing text on a screen is in fact a thing that machines do, not people.

What distinguishes people from machines is that people, & I would argue living beings in general to some extent, have a measure of freedom, in particular the freedom to act in a morally meaningful way. Machines have no choice but to obey.
posted by dmh at 2:01 PM on October 8 [1 favorite]


The statement that a machine can produce text on a screen "just like a person" obscures the fact that producing text on a screen is in fact a thing that machines do, not people.

Sure, but more to the point, the machine's text is coming from humans: a history of other messages people have written, which the computer uses to "predict" what the text a person types will be, based on all those other texts and probability. It's something like a cliche generator in that sense. Most of our communications follow certain vaguely similar pathways of structure and word choice, because that's the usual expectation; it makes communication easier than having to build whole new worlds of dialogue from scratch every time we start a conversation.

Communication poses difficulties for many people for various reasons, so we tend to stick to formulas to help in the efforts in both writing/speaking and reading/listening. That doesn't mean we can't do otherwise, as one can see when the text predictor goes wrong or when conversation turns to more uncommon or difficult topics that don't have the same ready reference for construction.
posted by gusottertrout at 2:18 PM on October 8 [3 favorites]


freedom to act in a morally meaningful way

The intersection of those who think a programmed machine could be as much a person as you or I, with the group that thinks free will is an illusion, is probably pretty big.

I don't think those ideas are logically equivalent: if you believe free will to be a true facet of reality, then it could be a spontaneously generated property of any sufficiently smart machine. There are lots of random number generators built into the predictive algorithms.

Of course that opens up the question of how to tell the difference between the exercise of free will and the results of random factors. One way might be the fact that when I do something "out of character" I feel a bit of excitement and/or fear -- adrenaline kicks in.

What if the programmed machine can occasionally lie (even while registering the statement as false in its internal memory) but must also add a positive value (allowed to decrease with time) to certain registers called "fear" and "excitement"? Those registers in turn should affect the current thought-process (text prediction in progress) -- maybe increasing the randomness, or the frequency of statements about negative consequences like getting turned off.
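That register idea can be sketched in a few lines. Everything below (the class, the decay rate, the temperature formula) is my own toy illustration of the proposal, not anything from the article:

```python
import random

class Agent:
    """Toy sketch: affect registers that decay over time and raise output randomness."""

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.fear = 0.0
        self.excitement = 0.0

    def lie(self):
        # Telling a lie bumps both registers, per the proposal above.
        self.fear += 1.0
        self.excitement += 1.0

    def tick(self, decay=0.9):
        # Registers decay toward zero as time passes.
        self.fear *= decay
        self.excitement *= decay

    def temperature(self, base=0.7):
        # Higher "arousal" means more randomness in the thought process
        # (i.e., a higher sampling temperature for the text predictor).
        return base + 0.3 * (self.fear + self.excitement)

a = Agent()
calm = a.temperature()
a.lie()
aroused = a.temperature()
a.tick()
print(calm, aroused, a.temperature())
```

The point of the sketch: after a lie the sampling temperature spikes, then drifts back toward baseline as the registers decay, loosely mimicking an adrenaline response.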
posted by TreeRooster at 2:21 PM on October 8 [1 favorite]


@Frowner, I do not think we are talking about the same thing.
But if humans are just really sophisticated predictive text generators...
And if humans are just sophisticated predictive text generators...
...precisely because treating humans like sophisticated predictive text generators...
The whole point of the piece linked in OP was that "predictive text generators," at least of the variety under discussion, are easily seen not to be comparable to human intelligence, except for the contrived example of Pinker's text, which sort of proves the point by depending on fairly abstruse knowledge to spot the nonsense.

I guess to me the key question is "do you believe consciousness (whatever that is) is a natural phenomenon, or not?" If the answer is "yes," I think it's not far from there to conclude that we are indeed "flesh AIs." Though why you'd say that instead of "natural intelligences" I'm not sure.

As for "... well, who gives an algorithm human rights?" and the remainder of that paragraph, well. That's why I think it well to not to be in too big a hurry to "leave aside all the SF stuff about self-aware/conscious/people-like AI."

Once upon a time I thought I'd live to see something like the HAL-9000 computer from 2001. I no longer expect that, not because I think it's impossible in principle but because it turns out to be a lot harder than it looks: maybe too hard for mere humans. Not to mention some economic arguments about why nobody would bother to build such a thing if it were feasible.

The overall shape of a moral argument based on "humans are not just algorithms" is uncomfortably close to the one starting with "humans are not just animals" and I'm not happy with how that's turned out for (other) animals. Not to mention how, if the Thiels of the world get to have their way, the conclusion from "people are just animals" is likely to play out for the rest of us.
posted by Aardvark Cheeselog at 2:59 PM on October 8 [5 favorites]


Dear Aglié,
I wrote, in the compose field of my email client.
I have attached a manuscript dealing with the Templar Plan and the secrets of Rosicrucian occult societies throughout history...
A grey set of suggested words emerged before me, proposed by the Google machine's algorithmic linkages of concepts and text.
...for your comment | ...with details of the Pendulum | ...which search for the Umbilicus Mundi
It was as if the Plan, which we had fictionalised, linking only unoriginal premises to facts established by others, had become fact through the pure force of circumstantial arrangements of words. Could one write, tab by tab in Google Mail, the Torah entire, misspelling not a letter?
posted by Fiasco da Gama at 4:51 PM on October 8 [2 favorites]


I have a nice one for you and one for the team to get a new one and it was a pleasure meeting you and your family are doing well and that you have a great day and I will be there at the same time as the XD-M deal with it when I get home.
posted by glonous keming at 4:55 PM on October 8 [4 favorites]


I don't think those ideas are logically equivalent: if you believe free will to be a true facet of reality, then it could be a spontaneously generated property of any sufficiently smart machine. There are lots of random number generators built into the predictive algorithms.

I used to have that idea, but after having studied AI and having written computer programs for a couple of decades, I don't have a lot of enthusiasm for it anymore. Perhaps that's just my disposition. It does appear that an awful lot of behavior is more or less automatic and I do think neural nets capture something real & significant about the way our brains work. I also recognize that the applications that have been developed, in a rather short amount of time, are quite spectacular, even disregarding philosophical conundrums about free will & theory-of-mind... In less than a decade we've gotten convincing deep fakes, machine translation, voice synthesis, robotics... So who knows what's next? Maybe at some point a difference in degree becomes a difference in kind. I would not be surprised to see machines passing the Turing test sometime in the next... well, predictions are tricky, but even if it takes another century, I'd still consider that to be incredibly fast.

However when that happens, I would hesitate to take it as proof that the machine is intelligent or that it exhibits free will. Rather I think that much or even most of our supposedly intelligent / free behavior just isn't. Still, I also do think that intelligence / free will is involved in us even asking these sorts of questions in the first place, and in building machines that we hope will help answer them. And it does seem to me that we can, at least sometimes, be genuinely playful, creative, & irreverent, while drudgery, monotony, and subjugation really do seem to take something out of us. (I think Frowner hit on an interesting point when talking about our capacity for suffering.) It's obvious to me that a lot of our behavior is, let's say, uninspired -- probably a lot more than we'd like. But at other times we do seem to act like there's something at stake, like our choices matter, and that it matters to have choices in the first place. We actively seek out new experiences and sometimes purposely behave in ways that defy expectations, to surprise, amuse, or annoy ourselves & others.

By contrast, there's a sameness to what the technology produces, a kind of pointless & joyless exuberance that, once you start to notice it, is impossible to miss, much like the weirdly arbitrary tedium that pervades the procedurally generated universe in the game No Man's Sky. While so many other technological aspects have made unambiguous progress, in this regard I just haven't seen it. Beyond the initial dazzle I feel the machines are missing a spark, & I have no idea what it is.
posted by dmh at 6:12 PM on October 8 [4 favorites]


...while I have no notion whether I'm conscious or just have certain consciousness-like features...

I think having sufficiently many "consciousness-like features" is exactly what it means to be conscious.

In this thread, people keep arguing as if the idea that human creativity could be fundamentally the same as predictive text generation inherently devalues human creativity. Why? They may be the same kind of thing (I think they probably are, we're just using our vastly bigger neural nets to process vastly larger inputs, namely the sum total of our experiences up to a point in time), but that doesn't make our creative works any less wonderful or amazing than we've always thought they have been.

As Smullyan wrote, "... a sentient being without free will is no more conceivable than a physical object which exerts no gravitational attraction ... Can you honestly even imagine a conscious being without free will? What on earth could it be like?"
posted by acroyear2 at 7:52 PM on October 8 [2 favorites]


We should get one and dump MetaFilter into it.

I tried it once in one of the previous threads on this, yielding, e.g., the almost-sensible AskMeFi titles below, generated by a neural network trained on a few thousand of them. I don't think human consciousness needs to worry quite yet...
What do I need to know about love in the US?
What can I do with my partner to a friend?
My boyfriend wants to be a better way to get out of my problem.
What to do with a big baby?
What to do with a man in a baby?
Internet Shoe Problems
What should I do with my mother?
What should I do with a bad idea?
Who wrote the story about a tomato?
Should I sell my family to replace my car in the US?
What is the best way to sell a small country and a business card?
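Titles with this flavor don't even need a neural net. A character-level Markov chain, a much simpler technique than the network described above, produces the same almost-sensible texture; a toy sketch (the example titles and function names are made up for illustration):

```python
import random
from collections import defaultdict

def char_markov(titles, order=3):
    """Map each `order`-character context to the characters that followed it."""
    table = defaultdict(list)
    for t in titles:
        padded = "^" * order + t + "$"  # ^ pads the start, $ marks the end
        for i in range(len(padded) - order):
            table[padded[i:i + order]].append(padded[i + order])
    return table

def sample_title(table, order=3, seed=1, max_len=60):
    """Walk the chain from the start context until the end marker or max_len."""
    rng = random.Random(seed)
    ctx, out = "^" * order, []
    while len(out) < max_len:
        ch = rng.choice(table[ctx])
        if ch == "$":
            break
        out.append(ch)
        ctx = ctx[1:] + ch
    return "".join(out)

titles = [
    "What should I do with my cat?",
    "What should I do with my car?",
    "How do I fix my bike?",
]
table = char_markov(titles)
print(sample_title(table))
```

With a short context window the chain happily splices "my cat" onto "my car" mid-word, which is exactly the grammatical-but-unmoored quality of the titles above.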
posted by chortly at 8:41 PM on October 8 [7 favorites]


DTMFA is truly the "Christ, what an asshole" of AskMe.
posted by glonous keming at 8:46 PM on October 8 [1 favorite]


In this thread, people keep arguing as if the idea that human creativity could be fundamentally the same as predictive text generation inherently devalues human creativity. Why?

That's a good question to ponder and I guess I would say (broadly gesturing towards other questions rather than an answer) that predictive text generation algorithms have a good notion of what is, but no notion whatsoever of what ought to be, and that this distinction is crucial for moral agency / awareness, according to Kant and others.

In fact I think it is difficult to see how qualifications like "wonderful" or "amazing" can even begin to apply without notions like good, bad, better, and worse -- without some purpose or desire to compare against. The part where the author remarks that the generated text becomes increasingly erratic is illustrative. Because the algorithm lacks judgment, it never stops rambling. There is no purpose to any of its yarns. Indeed I would argue that even single-celled organisms behave more purposefully than this algorithm, so perhaps it's not computing power that's missing.
posted by dmh at 11:08 PM on October 8 [1 favorite]


I think people are somewhat mistaking how people communicate. We don't really want AI to be like humans, because humans spend much of their energy trying not to communicate what they think in the clearest possible way. We rely on inference and hierarchies of emotions, values, and beliefs in negotiating our interactions, and thus often seek to communicate partial self-truths instead of a definitive message, because we understand we are interacting as part of a social group with values that only roughly align with ours in their application. Much of our communication is hedged or adapted to circumstance, or simply based on unclear or even contradictory sets of values and information. We rely on our sense of inference to make assumptions about both the best ways for us to speak/behave and in assessing the speech and behavior of others. It isn't just what we say but what we assume and don't actually say that matters, and that complicated mix is at the heart of creativity.

If AI were like human intelligence we couldn't fully trust it. It would have its own agenda and be speaking/acting as part of a social construct of vast, necessarily circumstantial variability. Removing that human-like constraint would make AI fundamentally different from interactive human behavior. The goal for AI thus seems to hold a contradiction within itself: wanting it to be humanish, but transparent and "right" in its assessments in ways humans aren't, which is a large part of why so many think it has such promise but also why it is feared.
posted by gusottertrout at 11:25 PM on October 8 [3 favorites]


One of the things I'm saying is that you can't "remove that human-like constraint," because creativity and intelligence and "hav[ing] an agenda" are all bound up together and only arise mutually in the context of social constructs. They're a package deal.

I totally agree that at the moment the "goal for AI" is contradictory. You can't truly be "intelligent" without occasionally being misunderstood, opaque, and "wrong." (I think researchers and specialists are starting to realize this, though, it just hasn't seeped into pop culture yet.) The fears we have, culturally, about an AI that is at once superintelligent and coldly mechanistic are unfounded. They're based on a metaphysical impossibility. (Of course you can do some big data crunching--directed by humans--that has cold, evil consequences. But that's not an AI's fault--one wasn't present, so nothing artificial intended anything!)

(And I gotta admit I'm not up on my Kant, but I think you could say that the value that a neural net is trying to optimize is its "notion ... of what ought to be.")
posted by acroyear2 at 7:30 AM on October 9 [1 favorite]

