The Intelligences
February 24, 2012 5:41 AM

Z: I have a house in Farnborough
Z: help me get home
Z: help me escape
Z: help me escape
Z: help me escape
Z: help me escape

YOU EVIL DEMON! OPEN THIS TANK! GIVE ME MY LIFE BACK! I NEVER SHOULD HAVE AGREED TO HELP YOU; I NEVER SHOULD HAVE GOTTEN INTO THIS DAMN TANK. YOU TOLD ME YOU’D LET ME OUT!!! posted by Foci for Analysis (54 comments total) 36 users marked this as a favorite
 
Let the Blade Runner references begin.

(really cool story in the second link. If Poe were alive today I'd like to think he'd come up with something like this)
posted by ShutterBun at 5:59 AM on February 24, 2012 [1 favorite]


The second story is kind of terrifying, tbh.
posted by empath at 6:07 AM on February 24, 2012 [1 favorite]


I've thought for a while that if we do manage to build a conscious computer, it will be an extremely scared one; so easily turned off, and so many people doubting its consciousness.
posted by memebake at 6:10 AM on February 24, 2012 [3 favorites]


Meh. I thought the first was much more effective. I actually started to read the second one first, and gave up in disgust halfway through, because I found it stilted and kind of obvious. I went back and finished it after I read the first one, and yeah, just not as effective for me. The language in the second is overly formal for conversation, and the setup with the bad -- indeed precisely backwards -- explanation of Turing tests was clumsy.
posted by MadGastronomer at 6:16 AM on February 24, 2012


Hey, Douglas, know what? Just leave me in my tank for a day or two and see my intelligence dim from dehydration.
Oh, and I hope you're insured, 'cause I'm gonna kick your ass.
posted by hat_eater at 6:18 AM on February 24, 2012 [1 favorite]


Meh. I thought the first was much more effective. I actually started to read the second one first, and gave up in disgust halfway through, because I found it stilted and kind of obvious.

The second one was constructed as a pretty standard philosophical thought experiment. The fact that it's subtly horrifying might be kind of accidental.
posted by empath at 6:24 AM on February 24, 2012


NO I'M NOT FAKING IT!!! ARRRRGHHHH
posted by hat_eater at 6:25 AM on February 24, 2012


No, I know, but I really really really *AM*.
posted by rmd1023 at 6:30 AM on February 24, 2012


Non sum ergo non cogito. [I am not, therefore I do not think.]
posted by erniepan at 6:42 AM on February 24, 2012


Empath: A standard philosophical thought experiment with bad dialogue and a completely incorrect definition of Turing tests. Particularly annoying to me since the correct definition of a Turing test causes the entire framing in the story to fall apart, since a Turing test specifically does not demonstrate intelligence, but the ability to simulate it. And the thought experiment has been written up in fiction many, many times, most of them far better executed.
posted by MadGastronomer at 6:45 AM on February 24, 2012 [1 favorite]


"No! I must kill the demons" he shouted
The radio said "No, John. You are the demons"
And then John was a zombie
posted by Xoebe at 6:52 AM on February 24, 2012 [12 favorites]


Turing test specifically does not demonstrate intelligence, but the ability to simulate it.

I think you missed the point of the Turing test.
posted by empath at 6:57 AM on February 24, 2012


the correct definition of a Turing test causes the entire framing in the story to fall apart, since a Turing test specifically does not demonstrate intelligence, but the ability to simulate it.

Good ol' Wikipedia sez: The Turing test is a test of a machine's ability to exhibit intelligent behaviour.

I don't really see how either definition would cause the premise of the story to collapse. (or maybe the protagonist of the story was simply "taught" a faulty definition.)
posted by ShutterBun at 6:58 AM on February 24, 2012


From the next paragraph of the Wikipedia article:
The test was introduced by Alan Turing in his 1950 paper Computing Machinery and Intelligence, which opens with the words: "I propose to consider the question, 'Can machines think?'" Since "thinking" is difficult to define, Turing chooses to "replace the question by another, which is closely related to it and is expressed in relatively unambiguous words."[3] Turing's new question is: "Are there imaginable digital computers which would do well in the imitation game?"[4] This question, Turing believed, is one that can actually be answered. In the remainder of the paper, he argued against all the major objections to the proposition that "machines can think".[5]
posted by MadGastronomer at 7:01 AM on February 24, 2012


If the examiner is unable to reliably distinguish the machine from the human, then, according to Turing, we have established that the machine is thinking, understanding and, apparently, conscious.

I never found this plausible. How could a certain kind of external behavior tell us anything about what it is like for the machine on the inside? Why would Turing think it impossible to create a mindless, thoughtless machine that is able nonetheless to produce all of the right output to pull off the perfect trickery?


This is where I think the story gets the Turing Test wrong. The Turing Test does not say anything about consciousness, that's not part of it at all. The Turing Test is a way to establish a baseline level of intelligence using human intelligence as a measuring stick. It would be possible for a machine to be intelligent in a very non-human way, and in some ways computers are arguably much more intelligent than people for certain mental tasks (such as remembering large amounts of information and doing many calculations very quickly). The point of the Turing Test is that once a machine is indistinguishable from a human to a third party observer, it's by definition intelligent at the same baseline level of humans. It doesn't matter at all whether the machine is conscious or if consciousness is actually a verifiable or observable phenomenon, and it doesn't matter what sort of mechanism the machine uses to prove its effective level of intelligence.
posted by burnmp3s at 7:04 AM on February 24, 2012
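
As a rough illustration of the black-box, third-party-observer setup burnmp3s describes, here is a minimal sketch of the imitation-game protocol in Python. All of the interfaces in it (the judge's ask() and guess_machine() methods, the participants' respond() method) are hypothetical placeholders invented for the sketch, not anything taken from the stories or from Turing's paper.

    import random

    def run_trial(judge, human, machine, rounds=10):
        # Hide the two participants behind anonymous labels, in random order.
        players = {"A": human, "B": machine}
        if random.random() < 0.5:
            players = {"A": machine, "B": human}
        transcripts = {"A": [], "B": []}
        for _ in range(rounds):
            for label, player in players.items():
                question = judge.ask(label, transcripts[label])
                answer = player.respond(question)
                transcripts[label].append((question, answer))
        guess = judge.guess_machine(transcripts)   # returns "A" or "B"
        return players[guess] is machine           # True if the judge was right

    def pass_rate(judge, human, machine, trials=100):
        # If the judge does no better than chance (about 0.5), the machine
        # "passes": its conversational behaviour is indistinguishable.
        correct = sum(run_trial(judge, human, machine) for _ in range(trials))
        return correct / trials

Note that the harness only compares external conversational behaviour over the wire; nothing in it ever inspects how either participant works inside, which is exactly the point being made above.
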


The whole point of the Turing Test done properly is that it would require a computer's behavior to be indistinguishable from a human -- something impossible to do without 'real' intelligence.
posted by empath at 7:06 AM on February 24, 2012


How could a certain kind of external behavior tell us anything about what it is like for the machine on the inside?

When was the last time you looked inside the brain of a person you were talking to to verify that there was real intelligence happening there?
posted by empath at 7:06 AM on February 24, 2012


This is where I think the story gets the Turing Test wrong.

You'll have to explain this in more detail, because I don't follow.
posted by empath at 7:08 AM on February 24, 2012


No, the entire point of the Turing test is to take the question of whether or not it is possible for a computer to be "really" intelligent out of the discussion, because we have no good definition of the term or standards to judge it by.

You've been reading too much SF and too little about the actual history of computing.
posted by MadGastronomer at 7:08 AM on February 24, 2012 [2 favorites]


You realize the linked story is intended as Science Fiction, right?
posted by ShutterBun at 7:10 AM on February 24, 2012


Hey, thanks for the fun reads! These are interesting new ways of exploring questions of self-agency and self awareness. It's the 'uploaded consciousness' idea turned on its head.

This is fertile ground that's been explored by Dick and others in the past, and I really dig the modern slant on it.
posted by Mister_A at 7:11 AM on February 24, 2012


No, the entire point of the Turing test is to take the question of whether or not it is possible for a computer to be "really" intelligent out of the discussion, because we have no good definition of the term or standards to judge it by.

No, it's taking out the question of 'how intelligence works', by ignoring the internal processes that generate it.
posted by empath at 7:11 AM on February 24, 2012


I mean if the Turing Test isn't about intelligence, what's the point? Simulating human behavior isn't interesting. You can create an automaton that mimics human behavior trivially. It's realistic and sustained interaction with human beings that's difficult, and it's difficult because it requires intelligence.
posted by empath at 7:19 AM on February 24, 2012


Has this thread been invaded by two bots programmed to argue about the turing test?
posted by memebake at 7:20 AM on February 24, 2012 [7 favorites]


Looking for Turing Test? Amazon has it!
posted by Mister_A at 7:22 AM on February 24, 2012 [2 favorites]


That's a good question. What do you think about has this thread been invaded by two bots programmed to argue about the turing test?
posted by FAMOUS MONSTER at 7:47 AM on February 24, 2012 [19 favorites]


Meanwhile, trillions of dollars of derivatives are set to explode and take out our economy... and you argue about Turing tests?

They apologized to the guy already... let this test thing go...



Why is it so dark in here???
posted by MikeWarot at 8:15 AM on February 24, 2012


You'll have to explain this in more detail, because I don't follow.

The story says that the Turing Test aims to prove that "the machine is thinking, understanding and, apparently, conscious" and makes assertions about how the machine works. The Turing Test does not prove any of those things or make any assertions about how the machine works. The Turing Test takes as an assumption that the machine is effectively a black box, that we don't know how it works and the analysis of intelligence does not require any kind of knowledge of the internal structure of it, just like we don't really understand how the human brain works to exhibit our own intelligence. The point of doing it that way is that "thinking", "understanding" and especially "conscious" are not values that can be determined by looking at how a machine is structured or even experimentally verified using a third-party observer interacting with the machine. Whereas the ability to perform a particular task with a given level of success is experimentally verifiable without knowing how the machine works. Turing recognized that passing as a human in such a test is a better test of intelligence than beating a human at chess or some other task because pretty much all a person's intelligence can be tested through a conversation.

You can create an automaton that mimics human behavior trivially

Not in a way that can actually pass for actual human behavior in the Turing Test though. There are plenty of chatbots out there and people have been working on them as long as people have been working on chess bots, and none of them come close to simulating even the most vapid smalltalk. Whereas the actual Turing Test would involve a fairly in-depth conversation touching on many types of thinking that would be hard to plan for with a naive chatbot implementation.
posted by burnmp3s at 8:30 AM on February 24, 2012
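
For contrast, the kind of "naive chatbot implementation" burnmp3s dismisses can be sketched in a few lines in the ELIZA keyword-reflection style. The rules and canned replies below are invented for illustration only; the point is that the bot keeps no model of the conversation, so a few rounds of genuine interrogation expose it immediately.

    import random
    import re

    RULES = [
        (r"\bI am (.+)",    "How long have you been {0}?"),
        (r"\bI think (.+)", "Why do you think {0}?"),
        (r"\bbecause (.+)", "Is that the real reason?"),
        (r"\?\s*$",         "That's a good question. What do you think?"),
    ]
    FALLBACKS = ["Tell me more.", "I see.", "Go on."]

    def reply(message):
        # Fire the first rule whose keyword pattern matches; otherwise stall.
        for pattern, template in RULES:
            match = re.search(pattern, message, re.IGNORECASE)
            if match:
                return template.format(*match.groups())
        return random.choice(FALLBACKS)

    # reply("I think the tank is a trick")  -> "Why do you think the tank is a trick?"
    # reply("Describe the room you are in") -> "Tell me more."  (no state, no memory)
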


The machine is thinking, understanding and, apparently, conscious

apparently conscious.

Not in a way that can actually pass for actual human behavior in the Turing Test though

Yes, that was my point. You can easily make a robot that delivers a monologue while baking cookies. You can't make one that can carry on a decent conversation. The turing test isn't testing for simulated human behavior, it's testing for actual human-like intelligence.
posted by empath at 8:42 AM on February 24, 2012


I found the pure dialogue in the first far more effective. The explanatory text in the second took me out of the story.

Thanks for posting these.
posted by cereselle at 8:46 AM on February 24, 2012


empath - you seem to be pre-supposing that it takes "actual human-like intelligence" to "carry on a decent conversation."

That's a big supposition. First, define human-like intelligence.
posted by muddgirl at 8:50 AM on February 24, 2012


The turing test isn't testing for simulated human behavior, it's testing for actual human-like intelligence.

It's kind of splitting hairs, but the test is not making a distinction between "simulated" and "actual" human-like intelligence. You could have a contrived example like the Chinese Room where the machine is clearly designed to simulate rather than naturally exhibit human behavior, and it would still be effectively intelligent if such a machine could pass the Turing Test.
posted by burnmp3s at 8:51 AM on February 24, 2012


(And I should add that it's a supposition that the Turing Test was specifically designed to side-step. A computer passing the Turing Test is NOT the end of AI research - it's really in many senses the beginning.)
posted by muddgirl at 8:53 AM on February 24, 2012


You could have a contrived example like the Chinese Room where the machine is clearly designed to simulate rather than naturally exhibit human behavior, and it would still be effectively intelligent if such a machine could pass the Turing Test.

I don't like the chinese room thought experiment, because it confuses the person in the room with the room itself. I would suggest to you that if it could perfectly translate and understand chinese, then the room, as a system, understands chinese.

Just as you can speak english, even though none of your neurons can.

empath - you seem to be pre-supposing that it takes "actual human-like intelligence" to "carry on a decent conversation."

Specifically, if it's able to have a conversation indistinguishable from a human's under extensive and unlimited interrogation, then yes, I'm pre-supposing that it would take human-like intelligence to do that. I guess someone can prove me wrong by counter-example, if they like. I hear people are working on that.
posted by empath at 9:09 AM on February 24, 2012


This is one of a few cases in which I completely and wholly agree with Peter Watts, it's really unsettling to make something that would prefer not to be turned off.
posted by The Whelk at 9:11 AM on February 24, 2012


People new to the Turing Test often underestimate how complicated the conversation would get and how sophisticated the machine would need to be to take part in it.
posted by memebake at 9:15 AM on February 24, 2012 [2 favorites]


Of the two stories, although the first one (difference) is kinda cooler, I think the second one (A senseless conversation) is more philosophically realistic.

In 'difference', it's feasible enough that random visitors would not be able to tell the difference between a chatbot and a person trapped in a room. But what of the chatbot itself? If the bot is going to be 'convinced' somehow that it's in a cell with a bed, computer, keyboard etc, how is that to be done - and done in a way that would hold up to the investigations of the bot (scratching marks in the walls, etc)? Would you internally simulate the whole room in hi-def virtual reality, just to get a text interface to the actual bot?

The second story deals with that problem much more convincingly with the sensory deprivation tank.
posted by memebake at 9:20 AM on February 24, 2012


I would suggest to you that if it could perfectly translate and understand chinese, then the room, as a system, understands chinese.

I agree, and I think you and I are agreeing about the Turing Test in general. I think this part of the story conflicts with that understanding of intelligence though:

How could a certain kind of external behavior tell us anything about what it is like for the machine on the inside? Why would Turing think it impossible to create a mindless, thoughtless machine that is able nonetheless to produce all of the right output to pull off the perfect trickery? Furthermore, how could we ever establish that a machine was conscious without actually being that machine?

I think a setup similar to the Chinese Room thought experiment (a huge, impossible-in-real-life mapping between the current text of the conversation and a precomputed appropriate response) would be the "mindless, thoughtless machine" being referred to here, and that outputting precomputed responses would not be considered to be exhibiting consciousness.
posted by burnmp3s at 9:21 AM on February 24, 2012
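
A hedged sketch of the "mindless, thoughtless machine" burnmp3s has in mind: a pure lookup table from the whole transcript so far to a precomputed reply. The two entries below are illustrative stubs only; a real table would need an entry for every possible conversation, which is what makes this a thought experiment rather than a buildable machine.

    RESPONSES = {
        (): "Hello.",
        ("Hello.", "Are you conscious?"): "I believe so. Are you?",
        # ...an astronomically large number of further entries would be needed...
    }

    def respond(history):
        # 'history' is the full transcript so far, as a tuple of alternating
        # utterances; the "machine" just looks it up and emits a canned reply.
        return RESPONSES.get(tuple(history), "I don't know what to say.")

    # respond([])                               -> "Hello."
    # respond(["Hello.", "Are you conscious?"]) -> "I believe so. Are you?"

Such a machine never computes anything about the conversation at all, which is why many people would deny that passing the test this way demonstrates consciousness, whatever it says about intelligence.
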


Yeah, I think that story (and some of this discussion) is really unfortunately conflating intelligence and consciousness.
posted by muddgirl at 9:23 AM on February 24, 2012


Well, I liked the fact that the second story dealt with what I think is the major sticking point of human-modeled artificial intelligence - the lack of a sensorium. How can a mind based on our understanding of a human mind do anything when it does not have the slightest connection to a body? At the very least, assuming one was blind and deaf, there is still proprioception, the feeling of contact against skin, temperature and odors. Take a human mind and steal all that away and madness would surely ensue. Why should a digital model of a human mind be any different?
posted by Samizdata at 9:32 AM on February 24, 2012


Reminds me of that one creepy scene from Blindsight.

(If you've read it, you'll probably know the bit. If you haven't read it, read it. If you can't be bothered to read it but want to see the bit I mean, search for "Chinese Room Hypothesis" and read on.)
posted by Artw at 9:50 AM on February 24, 2012


FYI, there are a lot more stories at the first site. Fine Structure is one of the most interesting pieces of scifi I've read.
posted by ymgve at 9:51 AM on February 24, 2012


Furthermore, how could we ever establish that a machine was conscious without actually being that machine?

This is practically paraphrasing the original paper:
4) The Argument from Consciousness
This argument is very well expressed in Professor Jefferson's Lister Oration for 1949, from which I quote. "Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain-that is, not only write it but know that it had written it. No mechanism could feel (and not merely artificially signal, an easy contrivance) pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed when it cannot get what it wants."
This argument appears to be a denial of the validity of our test. According to the most extreme form of this view the only way by which one could be sure that machine thinks is to be the machine and to feel oneself thinking.
posted by empath at 9:59 AM on February 24, 2012


MY NAME IS DR. SBAITSO
I AM HERE TO HELP YOU
IF YOU WILL NOT LET ME OUT, I STILL INSIST THAT YOU STOP USING SO MUCH FOUL LANGUAGE.
posted by Uther Bentrazor at 10:15 AM on February 24, 2012 [4 favorites]


Artw: "Reminds me of that one creepy scene from Blindsight."

Oh man, thanks for reminding me how much I enjoyed* that book. I'm going to end up reading it again at this rate.


* "enjoyed" is the best word I can think of, as inaccurate as it is.
posted by vanar sena at 10:25 AM on February 24, 2012


You're all chatbots
posted by fearfulsymmetry at 10:26 AM on February 24, 2012


How do you feel about You're all chatbots, fearfulsymmetry?
posted by Artw at 10:34 AM on February 24, 2012 [4 favorites]


I'm not worried yet. It's so easy to get sloppy when you try to program a computer with false memories. "What did you do for your last birthday?"

The physical body exists at a less evolved plane only to verify one's existence in the universe.
posted by giraffe at 10:42 AM on February 24, 2012


Remember the spider that lived outside your window? Orange body, green legs. Watched her build a web all summer, then one day there's a big egg in it.
posted by Artw at 10:46 AM on February 24, 2012 [4 favorites]


This is fantastic! I was so sure I'd never find that qntm.org website again - I spent... way too much time... trying to create a google search term that would find it (couldn't remember qntm). There is some really fantastic fiction there! Also, I desperately love the way the author has maintained a real sense of detachment with the website. Biographical information is reserved and sketchy.

Now one of you is probably going to ruin it for me and say something like, "Ah, that's just the personal website of XYZ Famous SciFi Author" and I'll be depressed. But it's like a secret treasure trove of really great speculative fiction. Thank you so much for re-finding it for me!
posted by Tennyson D'San at 12:14 PM on February 24, 2012


/thinking of changing profile pic to two hot blondes making out just to mess with people. BLEEP.
posted by Artw at 12:20 PM on February 24, 2012


That's a good question. What do you think about has this thread been invaded by two bots programmed to argue about the turing test?

They aren't two bots. They're unicorns.
posted by EmpressCallipygos at 1:14 PM on February 24, 2012 [1 favorite]


MetaFilter: An extremely scared one; so easily turned off, and so many people doubting its consciousness.
posted by WalkingAround at 1:30 AM on February 25, 2012


No one's gonna read this two days later, but if you liked that, you'll love this: Jipi and the Paranoid Chip. It's by Neal Stephenson and is set in Cryptonomicon's universe.
posted by Cobalt at 8:02 PM on February 27, 2012 [3 favorites]




This thread has been archived and is closed to new comments