Has the Turing test fallen?
March 1, 2001 12:26 AM   Subscribe

Has the Turing test fallen? One of the holy grails of computer science is the Turing Test -- and these guys think they're near to passing it.
posted by Steven Den Beste (37 comments total)
 
I think I'll reserve judgement for the moment, but this certainly looks interesting. The approach being taken seems a good one, but it's early days yet.
posted by Nick Jordan at 12:30 AM on March 1, 2001


Obviously all the computers posting here fail the Turing Test....
posted by Capn_Stuby at 12:33 AM on March 1, 2001


Now, many people have a mistaken notion of what the Turing Test is. Turing's argument ran more or less like this: it's impossible to demonstrate whether a computer is "intelligent" because there's no unambiguous definition of "intelligence" with which everyone will agree. Debating the question comes down to debating what the word "intelligence" means, and there's no hope of consensus. Thus trying to answer the question "Can a computer be intelligent?" is a waste of time.

But if a computer can fool a human into thinking that the human is talking to another human, when actually she's talking to a computer, then there's no important difference between intelligence and whatever it is that particular computer was doing.

Of course, the question then becomes "Which human?", since some humans are going to be easier to fool than others. The legendary Eliza and the much more sophisticated Parry were able to fool some people, but not anyone with any knowledge of computer science. Just as a faux psychic can fool parapsychologists but not professional magicians, the question would be whether this Hal could fool someone knowledgeable in the business of AI.

When I was in college in 1974, I had a chance to mess with Parry, and it was very impressive. It was structured as a game; after each sentence you wrote and "he" answered, it would dump three scores representing his state of mind. The goal was, more or less, to calm him down by making all three scores drop below a certain threshold. Since Eliza was supposed to be a simulation of a psychiatrist, it was inevitable that someone should hook them to each other and see what happened. Interestingly enough, Eliza was able to win and calm Parry down, something I didn't manage to pull off. But Parry was a simulation of a damaged human, so within context "his" non sequiturs and so on made sense. No one would confuse Parry with a normal healthy human being, and so far no one has managed to create a program which passes the Turing Test.
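
(For the curious: the core of an Eliza-style program is just keyword patterns and canned reply templates, and Parry layered mood scores on top of that. Here's a minimal toy sketch of both ideas -- the patterns, trigger words, and the single "anger" score are my own inventions for illustration, not the originals.)

```python
import random
import re

# Toy Eliza-style rules: a keyword pattern plus reply templates.
# (Illustrative inventions, not Weizenbaum's actual script.)
RULES = [
    (re.compile(r"\bI am (.+)", re.I),
     ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (re.compile(r"\bI feel (.+)", re.I),
     ["Tell me more about feeling {0}."]),
    (re.compile(r"\b(?:mother|father)\b", re.I),
     ["Tell me about your family."]),
]
FALLBACKS = ["Please go on.", "I see.", "What does that suggest to you?"]

# A single toy "state of mind" score, loosely in the spirit of Parry's three.
anger = 10

def respond(user_line):
    global anger
    lowered = user_line.lower()
    # Calming words lower the score; hostile words raise it.
    if any(w in lowered for w in ("calm", "safe", "friend")):
        anger = max(0, anger - 2)
    if any(w in lowered for w in ("liar", "crazy")):
        anger += 3
    for pattern, templates in RULES:
        match = pattern.search(user_line)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(FALLBACKS)

print(respond("I am worried about the Mafia"))            # keyword hit
print(respond("You are safe here, friend"), "anger =", anger)  # calming move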

Now someone thinks they're on the threshold of doing so, far sooner than I expected. (Of course, I'll believe it only when I see it.) I had expected it to happen in about 50 years. Regardless of when it happens, it raises all sorts of ethical questions.

So let's take it to the logical extreme and assume we were capable of building Commander Data. The ethical questions this raises are legion: If "he" belongs to someone, is "he" property or a slave? Is "he" entitled to civil rights? Does "he" have a right to reproduce? Can "he" own property? What does "death" mean when something has neither heart nor brain -- how do we know when "he"'s dead? If "he" builds a second body and copies his memories into it, is it one person or two? If "he" does this as an upgrade and then destroys the old one, did "he" commit murder? Is the new one legally the same person? Can the new one be prosecuted for crimes committed by the old one?

If "he" is shut off, is it murder? If he's destroyed but a backup exists of his memory, is it murder? In such a case is there a legal responsibility to create a new body and "bring him back to life"? Does "he" have a right to be left on? Can "he" marry a human? Can "he" adopt one?

If "he" develops a bug leading to terminal malfunction (i.e. he crashes), can the programmers be prosecuted for manslaughter (aka "negligent homicide")?

Is "he" really a he? How do we determine gender in something which doesn't reproduce sexually? What if "he" doesn't resemble a human at all?

And that's just the legal questions. The questions faced by religions are even worse: Does "he" have a soul? Do the same standards of sin apply to "him"? How can "he" honor his mother and father when "he" never had either? Can "he" be married in a church? Can "he" be ordained?

I sure as hell don't know the answers to these questions. I just know that we'd better start thinking about them, because whether or not Hal (the program the referenced article is about) actually passes the Turing Test, something will do so eventually. And we'd better be ready with answers when the time comes.
posted by Steven Den Beste at 12:57 AM on March 1, 2001


Let me see... I'm a baseball player, in fact, I'm a pitcher. I'm playing the Egyptian team for the title, but I'm in a quandary. I can't figure out which object to throw to which player, so I ask the computer...

"So far, the program, which is small enough to work on a desktop computer, generates convincing baby talk such as "Ball, mummy".
posted by shinybeast at 2:22 AM on March 1, 2001


> An Israeli company has created a conversational
> computer programme it claims could revolutionise
> the way people interact with machines.

I've read versions of this story many times. It's a rewrite of a rewrite of a press release from a company hyping another in the very long line of attempts to sell a product that will 'revolutioni[z]e the way people interact with machines.'

This particular effort is being led by Jason Hutchens, chief scientist at Artificial Intelligence Enterprises (Ai) of Tel Aviv, Israel. (The company's web site is naturally stupid -- there's nothing of substance there except their e-mail address: info@a-i.com)

Except as a curiosity, the subject doesn't interest me much because I am a visual symbol manipulator: I write. I like to see and feel the letters on the keyboard and push letters around on the screen with my hands. I do not want to have to explain to my computer that, no, I want a comma there, no not there, go back three spaces, no...

I have a computer because it is a great replacement for writing on paper and yet it remains hand and eye working together. If I had to dictate the words you are now reading, I wouldn't do it. If I couldn't use this keyboard, I would go back to the pen. If I couldn't use a pen, I would stop writing.

posted by pracowity at 2:29 AM on March 1, 2001


>What does "death" mean when something has neither heart nor brain -- how do we know when "he"'s dead?

A difficult enough question to answer for things which are clearly living. As I often parrot (being little more than 60 lines or so of LISP myself), 'Those who have dissected or inspected many bodies have at least learned to doubt, while those who are ignorant of anatomy and do not take the trouble to attend to it, are in no doubt at all.' Which would seem to implicate most discussions of law and ethics.

Sorry - feeling grumpy this morning. I'm not usually so dismissive in my ignorance.
posted by methylsalicylate at 4:32 AM on March 1, 2001



Well sure, any computer could pass the Turing test in Hebrew. At least with me.

I hear that there is a program that passed the Turing test in Japan because the judges didn't want the computer to lose face.
posted by straight at 5:38 AM on March 1, 2001


I don't know if simulating a 15-month-old is really passing the Turing Test. I guess at its loosest definition -- "a human unable to determine if the person they're talking with on the other side of the wire is human or a computer" -- it is, but considering the limited vocabulary of a 15-month-old, you could answer a whole lot with "why?"

(At least, that's how my niece acted when she was around 15 months. :-)

Still, it's definitely much further along than I figured we'd see at this point in time.

Regarding ethical matters, death in humans is "official" when the brain stops functioning, right? You aren't dead until your brain is, so I figure death in an artificial intelligence is only official when the memory is inaccessible in any way. Be it erased or corrupted or just locked down somewhere (the password gets lost?).

Marriage is a bit of a conundrum. I mean, in the case of an emotionless Commander Data-style android (pre-Star Trek: Generations :-), I can't really imagine anyone wanting to marry one -- a big part of loving someone is being loved back -- but if the 'droid's a member of society there's no reason not to allow it and another to cohabit and get the tax benefits.

Post-Generations Data-style 'droids (with the emotion chip installed, for those that don't follow the series) should definitely be allowed to marry.

Gender. Well, if the 'droid's built with a gender-specific anatomy, then it should be aligned with that gender, male or female. Sterile humans are still gendered.

posted by cCranium at 5:43 AM on March 1, 2001


What does "death" mean when something has neither heart nor brain -- how do we know when "he"'s dead

What I really want to know is would "he" be required to file a tax return?
posted by xiffix at 5:48 AM on March 1, 2001


methylsalicylate: CDR and CAR and morbid anatomy. Num num.
posted by pracowity at 6:01 AM on March 1, 2001


cCranium, I think your definition of death for an AI is too extreme. Consider this:

I kidnap an AI, read out his memory, make a copy which I carefully protect and hide, and then destroy him. When brought to trial, my defense is that a copy of his memory exists and hence he's not really dead. Therefore I didn't commit murder and should walk.
posted by Steven Den Beste at 6:02 AM on March 1, 2001


Very cool. Next, the reverse Turing Test, trying to figure out if it is a computer or a human asking the questions.
posted by vanderwal at 6:03 AM on March 1, 2001


Yes.
posted by thetruth at 6:34 AM on March 1, 2001


So far, the only computers that can come close to passing the Turing Test get by because of limits we put on the test; they're either babies or psychotics. To "truly" pass the Turing Test, the computer should be able to respond as a normal adult would. This would require a deeper level of intelligence and an understanding of the words the computer is saying, not simply formulating answers designed to fool someone interacting with the machine.

Given the complexity and heterogeneous nature of the human brain, a true artificial intelligence might think very differently than we do. Even men and women think differently. A machine might still be intelligent, but not be able to pass Alan Turing's test.
posted by Loudmax at 7:14 AM on March 1, 2001


Gigolo Joe - "You are beautiful. I have a clean dick"
posted by tiaka at 7:24 AM on March 1, 2001


Well, I guess I should link to the Church of Wintermute.
posted by sonofsamiam at 8:03 AM on March 1, 2001


Therefore I didn't commit murder and should walk.

Very true. I meant to open that up for discussion in my comment, but I didn't. Also, not having had to deal with any android deaths, I'm talking in just as much a "what if" mode as you are. :-)

If a copy of the memory's provided, then the AI's not dead, but a crime has definitely been committed. New laws, perhaps, will be necessary, but various kidnapping and assault laws could easily form the basis.

I wonder if it would be considered mercy if you unplugged his memory during the kidnapping process... One minute you're shut down, the next minute your head-unit's propped up on a table in a court room with a tag reading "Exhibit A" hanging from your ear.
posted by cCranium at 8:53 AM on March 1, 2001


>CDR and CAR and morbid anatomy. Num num.

pracowity: I believe the term you want for CDR/CAR is "moribund."
posted by methylsalicylate at 9:02 AM on March 1, 2001



> I believe the term you want for CDR/CAR is "moribund."

Hee. Yes, I suppose you're right, LISP is moribund. But I quite boringly meant simply morbid anatomy.
posted by pracowity at 9:20 AM on March 1, 2001


I know that, pracowity (working in a forensic pathology department, it's kind of hard to avoid Morgagni). I suppose it was my very lame attempt at humour.
posted by methylsalicylate at 9:27 AM on March 1, 2001


I know that, pracowity (working in a forensic pathology department, it's kind of hard to avoid Morgagni). I suppose it was my very lame attempt at humour.
posted by methylsalicylate at 9:30 AM on March 1, 2001


My goodness, multiple posting strikes again. Must've been thinking about Art Blakey...
posted by methylsalicylate at 9:31 AM on March 1, 2001


You post like Art Blakey.
posted by pracowity at 9:34 AM on March 1, 2001


Really? Damn. I always hoped I was more Buddy Rich-esque.
posted by methylsalicylate at 9:38 AM on March 1, 2001


Buddy had great teeth. I'd bet he kissed unlike Art Blakey, though.
posted by pracowity at 9:46 AM on March 1, 2001


Beautiful teeth. And style to spare. And if everyone kissed like Art Blakey, I'm not sure I'd want to date much.
posted by methylsalicylate at 9:51 AM on March 1, 2001


Um... hello? Here's a thought, howsabout letting the people having an on-topic conversation have that conversation, and the people having a severely off-topic conversation in an active thread go and take it to e-mail? Would that be alright with everyone? Thanks muchly, luvs ya.
posted by cCranium at 10:20 AM on March 1, 2001


The distance from a 15-month-old to a 30-year-old is enormous. This type of science "reporting" bugs me -- any tiny incremental step taken must mean the rest of the task is a mere formality. For example: now that we've mapped the genome, gene therapy and custom babies must be right around the corner. Don't hold your breath.

AI seems like an impossibility to me until AI researchers stop ignoring the physical makeup of human intelligence. Who's trying to replicate our somatic system? The subtle interplay between blood chemistry, emotion and reason? Neuronal growth?
posted by argybarg at 10:57 AM on March 1, 2001


Yeah, but it is just a formality.

I mean, the labour involved, and the thinking involved, that's certainly non-trivial, and we're still a long way away from a Data, but I don't think we're 300 years from it.

Before I get distracted, I'll stick to the triviality. It's been proven that humans can be fooled into thinking that a machine is a human.

I agree that a computer passing as a 15-month-old doesn't do the Turing Test justice. I want to see a computer pass as an adult. But what it does do is prove that it can be done. We know it can be done; this isn't cold fusion.

Sure, we don't know exactly how it'll be done, and the thinking and effort and research and sheer knowledge involved in creating an honest-to-goodness Deckard will take some pretty smart people to figure out. I don't want to suggest that work is useless -- it's not -- but we know it can be done.

I agree that the news report is a little too glossy, but then for us types who are interested in science pretty much anything that doesn't get down to chemical compounds can be easily considered too glossy, so cutting them a little slack's not a terrible idea.

And "right around the corner" is a very subjective term. While it's quite probable we won't see designer babies in our lifetime, there's an awfully good chance our children will. Hell, we've already got designer babies when you think about the number of fetuses who are born healthy because of pre-birth operations. It's not quite as clean, but it happens.

There actually are experiments that are working on replicating our brain structure -- neural networks being the easiest example -- but that doesn't quite address what you're looking for. I honestly don't know of any "official" AI experimentation off-hand, but I do know that I personally have played with generational, adaptive code. I'm sure most programmers here have, to some degree, written code that modifies itself based on some environmental stimulus.

It's a pretty simplistic example, I know, but if thousands (and probably more) of geeks are hacking out basic self-modifying code, you can bet that there are hundreds, at least, of funded researchers working on simulating evolution to achieve intelligence.
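
For what it's worth, here's the flavour of "simulating evolution" I mean -- a minimal genetic algorithm that evolves bit-strings toward a fixed target. Everything in it (population size, mutation rate, the all-ones target) is an arbitrary toy choice:

```python
import random

TARGET = [1] * 20   # toy "environment": fitness = bits matching all-ones
POP_SIZE, GENERATIONS, MUTATION_RATE = 30, 60, 0.02

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    # Single-point crossover: head of one parent, tail of the other.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]    # the fittest half breeds
    children = [mutate(crossover(*random.sample(survivors, 2)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

best = max(population, key=fitness)
print(f"best fitness: {fitness(best)}/{len(TARGET)}")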

Hell, I'm sure I've read about one in Wired, and if it's gotten that far into the mainstream press, they're years ahead of anything I can conceive.
posted by cCranium at 11:50 AM on March 1, 2001


The Turing test is hardly what it's made out to be, y'know.

I don't think it's all that clever of a sieve to sort "reasonably capable of AI" from "not reasonably capable of AI".

It was a spiffy and revolutionary idea when originally thought up by Mr. Turing, but as things such as Eliza have passed the test (for varying definitions of the test), we need more sophisticated diagnostic ability.

Why does everyone assume that, were we able to create true AI, we'd bother making it as much like a person as possible?

Humans have all sorts of constraints that would be nice to do away with if we could create an artificial intelligence. A perl script can beat out any human at complicatedly grepping a large corpus of text, f'rinstance.

Personally, I think it would be immoral to create something from scratch that was capable of truly feeling pain, for instance. Isn't there enough misery and suffering in the world that we haven't yet figured out how to eradicate? Why on earth create more?

It's certainly not necessary to make an AI human-like, you know. Similar enough to a human mind that it can be useful to us, yes, but not so much that it would truly have its own sense of self, or be able to suffer and feel joy like a human could. It's just not necessary, nor even really desirable.
posted by beth at 12:18 PM on March 1, 2001


I don't think it's all that clever of a sieve to sort "reasonably capable of AI" from "not reasonably capable of AI".

No, I agree, but it's a decent litmus test at the least.

we'd bother making it be as much like a person as possible?

Damn, I wanted to point that out too. Speaking of sieves, I think my memory's holes are getting bigger.

It's common to think of a human-like AI because that's the only intelligence we're able to study. When discussing what's still (for us at least) a hypothetical situation, it's just easy to try to map it onto ourselves.

We'd want an AI that was capable of feeling pain for protection. Pain can mean many things, and I think it's just about time to paraphrase the oft-used Terminator 2 quote. "I register damage. I suppose that could be considered pain."

Pain doesn't have to be debilitating (but it'd be a great way to control something if it were), it's fundamentally just a good way to preserve the gene pool (pain == danger == retreat from whatever's causing pain).

I suppose if an AI were to have Asimovian Laws enforced through programming (i.e., Cannot Hurt Humans), butting up against that uncontrollable restriction would be akin to pain.
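
To make the "pain == danger == retreat" rule concrete, here's a toy control loop where a damage signal above an invented threshold overrides whatever the agent was doing (the sensor name and threshold are made up for illustration):

```python
PAIN_THRESHOLD = 0.3   # invented: above this, the retreat reflex wins

def control_step(damage_signal, current_task):
    # "Pain" is just a damage reading that pre-empts the current task.
    if damage_signal > PAIN_THRESHOLD:
        return "retreat"
    return current_task

# Simulated run: damage rises as the agent nears, say, a heat source.
for damage in (0.0, 0.1, 0.25, 0.4, 0.6):
    print(f"damage={damage:.2f} -> {control_step(damage, 'explore')}")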

I think I just clued in to what you're actually saying though.

If we don't give an AI emotion - pain can be emotional, and joy definitely is - are they really intelligent? If they don't have a sense of self, do they correspond to our definition of intelligent?

I always thought consciousness, self-awareness, was one of the hallmarks of Intelligence-with-a-capital-I. What else identifies Intelligence? Adaptability, creativity, a basic mathematical understanding at least, the desire (not just the ability, in my world) to communicate.... There are lots of things, but I think they're all important. Remove one, and is the entity Intelligent?

Of course, I could easily be confusing Intelligence with Humanity, it's a pretty easy trip-up to make for me.
posted by cCranium at 1:58 PM on March 1, 2001


Recommended reading: Descartes' Error by Antonio Damasio. His argument is that, not only is emotion conceptually inseparable from reason, but functionally as well. Our body -- that part that we don't think of as our brain -- retains all kinds of emotive imprints that, as expressed through rapid changes in blood chemistry, provide a basis for cognition. That's the part that's missing in AI: an analog basis for algorithmic cognition, the part that cuts through interminable processing with the impulse known variously as a gut instinct or "aha!"
posted by argybarg at 2:17 PM on March 1, 2001


AIs will be property, not people. The religious objection will immediately end the issue everywhere but Europe. In godless Europe, the thought of allowing a (theoretical) immortal to accumulate property would just be untenable. The entire social structure depends upon the regular alienation of wealth through death and distribution to children.

What will be more interesting will be "corporate" AIs -- corporations whose primary asset is an AI, which functions as the main manager of one or more businesses or assets. Such corporate AIs will be "legal persons" fully able to transact, buy, sell and own -- with a human or two appointed by the human stockholders to act as nominal officers, doing whatever the AI tells them to do. The corporate AI kicks out dividends to its shareholders, and is programmed to maximize profits at all times (or maximize whatever -- maybe health, in the case of a non-profit corporate AI running a hospital).

Steven's speculations as to how one could harm, and be held responsible for harming, an AI are interesting, but they'll go to damages to the AI's owners, not as some sort of criminal offense against an AI itself.
posted by MattD at 4:07 PM on March 1, 2001


As a legal matter I don't have the slightest idea how this will eventually play out. As an atheist I'm not sure I'm entitled to an opinion on how religion will deal with it.

From an ethical standpoint, I can't countenance the idea of any human-level intelligence being property; from my point of view it is no more justifiable than slavery of humans. Irrespective of whether that intelligence runs on transistors or neurons, or of the lifespan said individual has, nor whether they think like humans, I can't accept the idea.

As a thought experiment: suppose tool-using aliens came here and settled. They're not human; they don't think like humans. They have no emotions. Their thought processes happen to be biological but are based on something other than neurons. Would it be morally acceptable to enslave them?

Of course, just now I begged the question of what "intelligence" is. But I accept the Turing Test as a good practical definition.
posted by Steven Den Beste at 5:01 PM on March 1, 2001


If AIs are ever mass produced to the level that every household has their own Rosie - even an emotionless one - then there will be a generation of children who are raised in part by AIs.

It's not an unthinkable concept, really. I'd probably trust a robot programmed for the protection and education and entertainment of my child with my parameters (ie, high on the education, low on the inactive entertainment) far more than any schooling system.

That generation, or a successive one, would, likely, form an emotional bond with the robots. It'd be awfully difficult not to think of that kind of relationship changing humanity's view towards the AI, and eventually someone would start raising awareness about AI enslavement.

And someone is going to create an emotional AI, and benefits to the emotions will be found. I don't know what those benefits are - friendship for everyone is an immediate thought - but it will happen, even if it's a thousand years hence.
posted by cCranium at 6:01 AM on March 2, 2001


From the article:
Before now, conversational computer programs have used fairly crude techniques when replying to questions or statements. Typically, the program seizes on a key word, and then uses statistical techniques and a formal understanding of grammar to generate appropriate replies or pick them from a pre-generated list.

So far, the program, which is small enough to work on a desktop computer, generates convincing baby talk such as "Ball, mummy".

Well, it sounds like it has the "key words" part down. Honestly, I just wish the story had a little more about the actual technology itself. I didn't really see anything that separates this from any other AI program.
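
One hint worth noting: Hutchens' earlier, public chatbot work (MegaHAL) was built on Markov-chain language models -- learning which words tend to follow which -- rather than hand-written rules. Whether Ai's Hal works the same way the story doesn't say, but here's a minimal sketch of the general Markov idea:

```python
import random
from collections import defaultdict

def train(words, order=2):
    """Map each n-gram of words to the list of words seen following it."""
    model = defaultdict(list)
    for i in range(len(words) - order):
        model[tuple(words[i:i + order])].append(words[i + order])
    return model

def babble(model, order=2, length=12):
    """Generate text by walking the chain from a random starting n-gram."""
    out = list(random.choice(list(model)))
    for _ in range(length):
        followers = model.get(tuple(out[-order:]))
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

# Tiny toy corpus; a real system would train on megabytes of conversation.
corpus = "the ball is red mummy throws the ball and the ball bounces".split()
print(babble(train(corpus)))
```
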
posted by Lirp at 3:39 PM on March 4, 2001


AI seems like an impossibility to me until AI researchers stop ignoring the physical makeup of human intelligence.

Argybarg is making the right point here. There will be nothing intelligent which is not alive. (Intelligence requires agency, agency requires life. DIY syllogism.) Conversations about AI might be more interesting after a few more breakthroughs on the artificial life front, but the idea of "AI" software which runs on a desktop computer or anything similar is totally absurd.

After Searle's Chinese Room, there were a few hundred papers written which were worth reading. As much as I dislike Searle on this issue, he crystallized the problem for everyone. I am not aware of any philosophers of mind or cognitive scientists who still take the idea of software AI seriously.
posted by sylloge at 6:44 PM on March 4, 2001



