60 Second Adventures in Thought
October 22, 2011 3:19 AM

 
Found while Googling for a cute cat video -- the Schroedinger's cat one came up. Insomniac mindless silliness actually led to learning something!
posted by bluefly at 3:23 AM on October 22, 2011 [1 favorite]


Why is it that science and mathematics are instantly more understandable when explained by an enthusiastic British voice? David Attenborough's documentaries are an example. More evidence.
posted by GenjiandProust at 4:24 AM on October 22, 2011 [3 favorites]


These thought experiments aren't just in math and science, many are taken from philosophy.

(This distinction is important in direct proportion to how discontinuous you take those three fields to be.)
posted by oddman at 6:01 AM on October 22, 2011


I had always wondered why, at one point in a Rifftrax I can't recall, Kevin Murphy said, "Hilbert and his hotel be damned, we have stumbled on an infinite series of events." (I had assumed it had to do with a depressing European play.)
posted by Countess Elena at 6:51 AM on October 22, 2011


I never understood how the Chinese Room argument was supposed to work. Of course a guy following rules on paper isn't going to understand Chinese... because that's too slow. Memorization is only different from looking stuff up in that it's more efficient. That efficiency rightly marks the distinction between a person who understands Chinese and a person who can use a phrasebook to communicate effectively.
posted by LogicalDash at 6:55 AM on October 22, 2011 [1 favorite]


Cool post!
posted by Renoroc at 7:33 AM on October 22, 2011


LogicalDash: I think the idea is that, in the Chinese Room concept, you aren't even using a phrase book -- it's more like: the person outside the room submits a phrase, you break it down according to a series of rules (that don't relate any meaning from the phrase to you), and you respond with a phrase according to more rules. At no time do you, the "computer," have any interaction with the meaning of the characters, so you can't learn anything except the rules.
posted by GenjiandProust at 7:44 AM on October 22, 2011
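
To make the rule-following concrete, here is a minimal sketch of the procedure GenjiandProust describes -- pure lookup, with the operator never consulting any meaning. The phrase pairs and the fallback reply are invented for illustration, not part of Searle's setup:

```python
# A toy Chinese Room operator: match the incoming phrase against a rulebook
# and emit the prescribed reply. The operator only compares shapes; nothing
# here represents what the phrases mean.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",            # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",    # "How's the weather?" -> "The weather is nice."
}

def operate(incoming: str) -> str:
    """Return whatever reply the rulebook prescribes for this exact string."""
    return RULEBOOK.get(incoming, "请再说一遍。")   # fallback: "Please say that again."

if __name__ == "__main__":
    print(operate("你好吗？"))   # 我很好，谢谢。
```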


you break it down according to a series of rules (that don't relate any meaning from the phrase to you)

This is contradictory. Once you break something down according to rules, you've derived some meaning from it. It might not seem very interesting ("appositive phrase marker") but it's still meaning.
posted by LogicalDash at 7:49 AM on October 22, 2011 [1 favorite]


Suppose a new guest arrives and wishes to be accommodated in the hotel. Because the hotel has infinitely many rooms, we can move the guest occupying room 1 to room 2, the guest occupying room 2 to room 3 and so on, and fit the newcomer into room 1.

"Hey Math Boy! Housekeeping's outside and they want to have a finite number of words with you."
posted by PlusDistance at 8:05 AM on October 22, 2011 [1 favorite]
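
The reassignment rule quoted above is easy to state as a function: every sitting guest moves from room n to room n + 1, which frees room 1. A minimal sketch (only a finite prefix of the infinite hotel can be shown, obviously):

```python
# Hilbert's Hotel, new-guest version: the guest in room n moves to room n + 1,
# so room 1 opens up even though every room was occupied.
def new_room(n: int) -> int:
    """Room the current occupant of room n moves to."""
    return n + 1

for n in range(1, 6):
    print(f"guest in room {n} -> room {new_room(n)}")
print("newcomer -> room 1")

# The same trick handles an infinite busload of newcomers: send the guest in
# room n to room 2 * n, and all the odd-numbered rooms become free.
```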


Slow down here, LogicalDash. You're still thinking of it as something else.

The idea is that the giant task of translating is broken down into minuscule fragmentary tasks, each completely meaningless, that get distributed among, let's say, 10,000 workers in a room. They each do their little micro task and pass their results along. The end result, let's imagine, is a translation – but at no point is any individual doing something that could be thought of, by that person, as "translating."

The end result is that the room speaks Chinese. Or does it? How you answer this question says a lot about your stance on intelligence and AI.
posted by argybarg at 8:05 AM on October 22, 2011 [2 favorites]


The idea is that the giant task of translating is broken down into minuscule fragmentary tasks, each completely meaningless,

Do the tasks all contribute to the translation? Yes? They're all meaningful, then.

Maybe they don't have as much meaning as you'd like them to. I don't know.

Hundreds of people work on a movie. Only one is the director. The others, we don't say they made the movie, but they still participated in film-making. Many of them had a strong influence in how it turned out--the actors, for instance.

Isn't it pretty common for large translation projects to have multiple translators? They do it in several passes, and if at any point one translator doesn't know what a phrase means, they leave it as is and pass it to the next one. Computer translations have smaller steps, I guess. I don't think you can identify any point at which a step becomes too small to be meaningful.

Can neurons think? How about cortical stacks? The occipital lobe? The frontal lobe?
posted by LogicalDash at 8:20 AM on October 22, 2011 [1 favorite]


I took two classes with John Searle at Berkeley (Phil. of Mind, Phil. of Language) and I’m still pretty fuzzy on the Chinese Room argument. He was a major opponent of functionalism, and I still remember his analogy that intelligence was something minds and brains “did” the same way that digestion was something teeth, stomachs and intestines did together. The Chinese Room couldn't be said to understand Chinese because it lacked all the other context for what a human mind was: the sensory organs of a living, moving human animal.

I think one upshot is that if you build a computer to communicate in Chinese, it's not really intelligent. If you build a computer to participate in the world intentfully using language, then maybe it is? Ted Chiang’s The Lifecycle of Software Objects explores this idea.
posted by migurski at 8:31 AM on October 22, 2011


So, what sort of input is the Chinese Room processing, anyway? Taxes? If so, then I'd expect the man in the room to have a pretty thorough understanding of Chinese tax law by the end of it. He'd have to do the math at some point. Then he'd know how to use Chinese to talk about monetary transactions. Maybe not about anything else; but then, people who learn English for business only often have trouble socializing in it.

Actually, wait, I'm presuming that the man is familiar with "money" and what it means to "pay" someone for something. This happens to be a reasonable assumption for a modern human, though perhaps not for a computer--well, unless this computer ALSO handles regular banking transactions--and, er, I guess it would have to associate stuff that happens in the Chinese Room with stuff that happens in the NYSE Room...

You know what? This argument can't be bothered to specify what it's arguing for or against. Fuggeddaboudit.
posted by LogicalDash at 8:45 AM on October 22, 2011


Yay! David Mitchell!
posted by phunniemee at 8:49 AM on October 22, 2011


But once again, LogicalDash, you're moving the thought experiment over to settings in which each task is a cognitively rich one, performed by a single agent we already know to be rational.

The Chinese Room is meant as a model of a setting in which each task is performed without making any intelligent choices, and without any holistic awareness of the entire task. Presumably the set designer on the movie has read the script and understands it. But that's precisely the opposite of the Chinese Room.

In the Chinese Room, each individual member simply does a precoded task — say, reordering tiles according to a decision tree. The individual at the desk does not need to understand what is on the tiles, does not need to know what happened at the start or end of the process, does not need to "think" in any meaningful way.

Suppose we arranged for a room that worked in this way, and it provided workable translations of English sentences into Chinese. Does the room, then, "understand" Chinese? Certainly none of the people at the desk do.

When you ask if neurons think, then you're getting closer. No one claims that intelligence resides in a single neuron any more than a rhythm resides in a single drumbeat. But then where is the intelligence? Where is the control?

It leads to interesting questions if you play with it instead of arguing against it.
posted by argybarg at 8:49 AM on October 22, 2011 [2 favorites]


(Pardon me if my tone seems sharp. It didn't until I reread.)
posted by argybarg at 8:51 AM on October 22, 2011


It leads to interesting questions if you play with it instead of arguing against it.

That's kind of a weird way to engage with the Chinese Room Argument.
posted by LogicalDash at 8:52 AM on October 22, 2011


LogicalDash, the room is processing ordinary, conversational written Chinese. If the person outside the room is convinced they’re communicating with a Chinese speaker, and the person inside the room has no idea what the symbolic rules they’re following do, then is there a Chinese speaker in or around the room and if so where?

Wikipedia has a good summary of responses to the argument.
posted by migurski at 8:58 AM on October 22, 2011


It is ridiculous to suggest that the sum of the parts of the Chinese Room constitute a system that understands Chinese.

It is perfectly obvious to maintain that the sum of the biological parts of the body plus magic constitute a system that can understand language.
posted by Zed at 8:59 AM on October 22, 2011 [2 favorites]


That's kind of a weird way to engage with the Chinese Room Argument.

That's why I prefer to think of it as a thought experiment. Searle thought of it as proof against AI, but we don't have to.
posted by argybarg at 9:00 AM on October 22, 2011


This is contradictory. Once you break something down according to rules, you've derived some meaning from it.

In your view, is it possible for a computation to be meaningless?

It might not seem very interesting ("appositive phrase marker") but it's still meaning.

Well, "appositive phrase marker" is a bad example. If I'm reading a rulebook and it says "if X, Y, and Z, then word 3 is an appositive phrase marker," I can think, "oh, I guess there's apposition happening here, and word 3 serves to mark the phrase where it's happening". I have obtained meaning because the rulebook told me meaning. The Chinese Room rulebook doesn't do that.
posted by stebulus at 9:16 AM on October 22, 2011


Wikipedia has a good summary of responses to the argument.

So does the Stanford Encyclopedia (linked in the post)
posted by bluefly at 9:18 AM on October 22, 2011


An interesting reply to the chinese room brought up in the Wikipedia link removes the context of computing translations at all: what if the person or "computer" inside the room is simulating the brain of a chinese speaker?

How can the system be said not to understand chinese if it is performing the EXACT same actions as a real chinese speaker, down to simulating the atoms of every neuron? (Obviously this is impractical today, and might be so forever, but as a thought experiment it's valid)

Searle's reply (at least according to wikipedia) seems to be "well, the MAN doing the computations still doesn't understand chinese, so nyah-nyah". That is very unsatisfying, because unless you have a dualistic interpretation of mind, you could just as easily argue that no human brain REALLY understands anything either.
posted by Nutri-Matic Drinks Synthesizer at 9:25 AM on October 22, 2011


To design a Chinese Room ruleset you'd need to build up a database of words and meanings and rules and stuff. Who did that? Why aren't they considered part of the system?

In your view, is it possible for a computation to be meaningless?

Eh... when we say something's "meaningless" we usually mean that it's not important to us. Computations might not be important to us, but 2 + 2 = 4 does say something about two, two, and four. So it's not totally meaningless, no.
posted by LogicalDash at 9:29 AM on October 22, 2011


when we say something's "meaningless" we usually mean that it's not important to us.

I guess so, but that's not what I meant. Let me try to clarify by giving an example.

Consider the paper-and-pencil algorithm for addition. It involves operations on some symbols. Now, those symbols refer to numbers, but you could train somebody to perform the algorithm without their knowing what numbers are, right? They don't have to know about counting, or quantity, or anything, in order to carry out the steps of the algorithm. This is why machines can add, even though they understand nothing.

The symbols have meaning, but that doesn't matter for the algorithm. Algorithms of this type do not traffic in meaning. As far as the algorithm is concerned, the symbols might as well be meaningless.

Does this make sense?
posted by stebulus at 9:47 AM on October 22, 2011
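
stebulus's point can be made literal with a sketch like the one below: long addition driven entirely by a lookup table over digit symbols. The table is generated here for brevity, but in the thought experiment it would simply be printed in the rulebook; the person applying it never needs to know the marks stand for quantities.

```python
# Column addition as pure symbol manipulation: ADD maps a pair of digit-marks
# to (result-mark, carry-mark). Following it requires no idea of number.
ADD = {}
for i, a in enumerate("0123456789"):
    for j, b in enumerate("0123456789"):
        ADD[(a, b)] = (str((i + j) % 10), "1" if i + j >= 10 else "0")

def add_symbols(x: str, y: str) -> str:
    """Add two digit strings right-to-left using only table lookups."""
    width = max(len(x), len(y))
    x, y = x.rjust(width, "0"), y.rjust(width, "0")
    out, carry = [], "0"
    for a, b in zip(reversed(x), reversed(y)):
        s, c1 = ADD[(a, b)]        # look up digit + digit
        s, c2 = ADD[(s, carry)]    # then fold in the carry mark
        carry = "1" if "1" in (c1, c2) else "0"
        out.append(s)
    if carry == "1":
        out.append("1")
    return "".join(reversed(out))

print(add_symbols("478", "256"))   # -> 734
```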


I have a number of thoughts regarding pattern matching, learning, and neurology. I think this falls apart in anything other than the degenerate case. While it is unlikely you will learn Chinese, if you can learn at all, two things are likely: first, you will start to see the threads of the underlying patterns in the arrangement of the symbols -- that is, certain groups of symbols are likely to appear in the responses to other groups of symbols.

Secondly (because this is Chinese), you will start to see how some of the symbols are conjunctions of particles. This is something you can learn beyond the rulebook in front of you, and of course patterns will emerge from that.

Now, in alphabetic languages (unlike Chinese), a vertical line in one symbol, say 'R', has no meaning and is unrelated to the vertical line in 'M'. However, you can still see that certain letter patterns happen more frequently and some are used more than others (like when you solve letter-substitution puzzles).

So really this is less about AI per se and more about whether artificial learning can exist. Not to mention the whole Chinese Room thing is based on the premise of no information other than the response book. Other than overseas tech support, nothing really works that way :)

LogicalDash: "you break it down according to a series of rules (that don't relate any meaning from the phrase to you)

This is contradictory. Once you break something down according to rules, you've derived some meaning from it. It might not seem very interesting ("appositive phrase marker") but it's still meaning.
"
posted by MrLint at 9:48 AM on October 22, 2011


It’s about AI in the sense that the argument is specifically a response to the idea of “Strong AI”, that you can build computers and programs actually intelligent in the same way that we are. The Chinese Room shows that whatever’s going on in that room, it isn’t the same kind of intelligence that we have as language-using humans. To understand language like ours, you need a mind like ours. To have a mind like ours, you need a brain like ours with all its connected bits. It’s still a machine, just a very specific machine whose higher-level functions depend on its physical neurons and chemical transmitters. The logical symbols of a machine simulation are not enough.

Searle’s definition of “Strong AI” says that if you simulate enough of what brains do in software, you get intelligence like ours. Searle believes otherwise, but then again so do a lot of actual AI researchers.
posted by migurski at 10:02 AM on October 22, 2011


The Schrödinger's Cat one is a bit of fail. It clarifies that it concerns a cat and a box and some poison and some doubt as to what state the cat's in, but offers no clear extrapolation as to what this tells us about the nature of life, the universe, everything. I prefer the following exchange (overheard on an acid trip many years ago):

"There's a box and there's a cat in it and some poison. As long as the box is closed, the cat's either dead or alive, but you can't be sure. Therefore, fundamental uncertainty."

"But what if the cat's eaten the poison and isn't dead yet -- it's just dying."

"We're all dying."


I didn't say it clarified anything. I just prefer it.
posted by philip-random at 10:50 AM on October 22, 2011


Yeah, that's... kind of a weird definition of strong AI. In order to speak language it would at least need an enormous database attached. Probably it would be one of those newfangled semantic databases that records what words are used in what contexts and cross-references that and stuff.

The AI would be totally nonfunctional if that database didn't contain all the semantic information we need for communication... but apparently that information doesn't count as software? Huh.
posted by LogicalDash at 10:51 AM on October 22, 2011


Sure it does, it’s just the next item on the long list of things you need before you have created intelligence: rules, context, feedback, memory, etc. While 1980’s AI researchers figured you could get by with symbolic juggling and a sufficiently-large database, Searle said that you don’t reach the end of the list until you’ve recreated an entire human being with all the wet, chemical parts intact: “Brains cause minds”. Therefore, neither the room nor the person in it can be said to understand Chinese.
posted by migurski at 11:01 AM on October 22, 2011


The chinese room argument is stupid because it doesn't matter if the guy in the room knows chinese. The room itself, in its totality does.
posted by empath at 12:11 PM on October 22, 2011


The reason the Chinese Room is a bad thought experiment is that the premise is unrealistic. In order to fool a Chinese-speaker that the person in the room actually understood the messages, the rules would have to be massively complicated -- so complicated that there's no possible way a human being who has no idea what it all means could follow them without making mistakes. And with no way to gauge the importance of the various steps in the process, a huge, meaning-obliterating mistake would be as likely as a small one.

You might think that I'm quibbling with details and missing the point, but I disagree. Just try to come up with a version of the Chinese Room that could plausibly be carried out in the real world and still shows what you think it shows. If you can't, then the experiment doesn't show anything about reality; it just shows something about the imaginary world where you imagine it taking place.
posted by baf at 12:27 PM on October 22, 2011 [1 favorite]


Searle said that you don’t reach the end of the list until you’ve recreated an entire human being with all the wet, chemical parts intact: “Brains cause minds”. Therefore, neither the room nor the person in it can be said to understand Chinese.

Is that supposed to be a recapitulation of his argument? Because if you assume that you need a brain to get a mind, you're assuming the conclusion that the Chinese Room argument was intended to prove.
posted by LogicalDash at 12:28 PM on October 22, 2011 [1 favorite]


I'm just not sure how the Chinese Room experiment benefits anything. Okay, so the man doesn't really understand Chinese. Computers don't know things like humans do. The subjective internal experience (qualia? consciousness?) is important, and machines designed to simulate human minds don't have it. Okay, and what?

I guess I see this being important when you start thinking of the ethics of artificial intelligence -- like perhaps the lack of this special something means that AI should not be considered persons, with all the ethical and legal and social ramifications that entails. Other than that, though, people are going to keep trying to create mechanical things that act like people, and whether or not Siri helps you with mundane shit matters more than whether or not the phone actually understands English. So why is this important?
posted by DLWM at 12:29 PM on October 22, 2011


Yeah, I think the real answer to the Chinese Room is that it's an absurd hypothetical in practice - consider the enormous number of rules and the speed of their processing in something like Google Translate, with results that would still never pass the Turing Test, and you get some sense of the scale - we wouldn't be talking about a single man in a room with a book, but more a giant factory of various sub-operations causally interwoven with each other, moving at extremely high speed. It's like a critique of evolution that asks the reader to imagine an ape giving birth to a human.

In essence the Chinese Room becomes an argument against reductionism. Any system which can be reduced to "bottom level" parts whose actions we can wholly understand cannot be intelligent. I think on some level this might be correct, but it's worth considering some context - the brain clearly modifies itself over time as an integrated part of its functioning. Therefore we're not just talking about "rules" which get followed to produce output, but rules which constantly write and alter themselves through time. It's not clear that one could be said to really understand such a self-modifying system without *being the system*.
posted by crayz at 12:42 PM on October 22, 2011


Any system which can be reduced to "bottom level" parts whose actions we can wholly understand cannot be intelligent.

This is absurd. If you want to explain anything, you need to explain it in terms of something which is substantively different, or you haven't explained it at all.

It's not enough to say that water is wet because it is made of wet stuff, and it's not enough to say people are intelligent because the brain is made of intelligent stuff.

If you want to explain intelligence, you have to explain how it arises from the action of things which are not themselves intelligent, or you've merely created another thing you have to explain.
posted by empath at 12:59 PM on October 22, 2011


The chinese room argument is stupid because it doesn't matter if the guy in the room knows chinese. The room itself, in its totality does.

Well, Searle would say that it's nonsensical to say that the room in its totality understands Chinese, since it's deliberately constructed as a system that cannot be said to understand something just because it's totally rules-driven and deals with Chinese in a purely syntactic way, with no semantic content. The fact that the totality can fool a Chinese speaker just shows that fooling someone is not synonymous with the presence of intelligence.

Paul Ziff had a related counterargument that I've always found compelling. Imagine your neighbour has a very pretty flower growing in his yard that you admire throughout the summer as it grows, blossoms, and eventually fades in autumn. You ask your neighbour about it and he shows you the flower, inviting you to take a much closer look, whence you notice that it has a tiny hatch in the stem. Opening that, you see a variety of gears and rods, and realize that the flower is entirely artificial.

Ziff's argument is that, faced with such a flower, we wouldn't say that it's a living flower made of metal and gears and rods, we'd say that it's a remarkable artifice. In other words, if the exposure of the artifice is sufficient to declassify something that is functionally classified as X, that indicates that the means of achieving functional classification aren't sufficient.
posted by fatbird at 1:05 PM on October 22, 2011


This is absurd. If you want to explain anything, you need to explain it in terms of something which is substantively different, or you haven't explained it at all.

It's not enough to say that water is wet because it is made of wet stuff, and it's not enough to say people are intelligent because the brain is made of intelligent stuff.

Wetness comes up a lot in this same context. You can’t explain water’s wetness in terms of what it’s made of, but then again you can’t explain water’s wetness in terms of anything other than how people or animals interact with it, all the way up here in sensory terms. You get down to H2O and there’s no such thing as wetness anymore. Richard Feynman says something similar about magnetism (6:10 is where it comes together). The Chinese Room argument is Searle’s way of saying that intelligence is something like wetness and magnetism: irreducible, and unexplainable in other terms.
posted by migurski at 2:29 PM on October 22, 2011


You get down to H2O and there’s no such thing as wetness anymore.

Yes there is. It's called molecular adhesion. Water molecules tend to stick to most other common molecules.
posted by LogicalDash at 3:00 PM on October 22, 2011 [1 favorite]


But wetness IS explainable in other terms! It's a completely understood phenomenon.
posted by empath at 3:00 PM on October 22, 2011 [1 favorite]


You can explain the adhesion part and contrast it with lipophilicity or whatever, but you’re still missing the sensory experience that makes it “wet”. It’s kind of a toy example and perhaps not a very good one, but it goes at least part of the way toward showing how effective the Chinese Room argument is in quickly taking you down to questioning some very basic, normally unstated assumptions. Who understands Chinese: the man, the room, or does “understanding” require something more?
posted by migurski at 3:25 PM on October 22, 2011


it's deliberately constructed as a system that cannot be said to understand something just because it's totally rules-driven

This just begs the question, though; who proved that rules-driven systems can't be said to understand something? We all seem to be a combination of rules and randomness. If p-zombies do exist, there ought to be a less parochial way of identifying them than "they're rules-driven and not made of meat".
posted by roystgnr at 3:30 PM on October 22, 2011 [1 favorite]


This just begs the question, though; who proved that rules-driven systems can't be said to understand something?

Searle has this much of a point: The only grounds for saying that the room understands Chinese is just Turing's point that it is functionally indistinguishable while being ontologically distinct from "intelligence". In other words, you can't say that it understands Chinese without relying on a purely functional definition of "understands" (i.e., appears to understand well enough to escape detection). The Chinese Room isn't so much an argument against functionalism as it is a forcing of functionalists to completely discard non-functional considerations. If you think the Chinese Room understands Chinese, you have nothing but functionalism left--and if functionalism fails elsewhere, then you must admit that the Chinese Room doesn't really understand Chinese.
posted by fatbird at 3:54 PM on October 22, 2011 [1 favorite]


I think the systems reply is an obvious and correct reply to the Chinese Room. I don't think Searle has said anything sensible to find a flaw in it. As it stands, the Chinese Room seems to beg the question, since all the replies to objections seem to boil down to definitional trickery.

More interesting, I think, is the China Brain: if you gave everyone in China a bunch of telephones and rules for accurately simulating neurons, would the aggregate, the China Brain, be conscious/intelligent? Unless ephaptic coupling plays a significant role in consciousness, I'd have to say yes.
posted by simen at 3:55 PM on October 22, 2011
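
One way to picture what "rules for accurately simulating neurons" might look like for each participant, sketched very loosely -- the threshold rule and the tiny three-person wiring below are invented for illustration, and a real simulation would need far richer dynamics:

```python
# Each person plays one neuron: count incoming "calls", and when the count
# reaches your threshold, reset and ring everyone on your downstream list.
from dataclasses import dataclass, field

@dataclass
class PersonAsNeuron:
    threshold: int
    downstream: list = field(default_factory=list)
    calls_received: int = 0
    times_fired: int = 0

    def receive_call(self) -> None:
        self.calls_received += 1
        if self.calls_received >= self.threshold:
            self.calls_received = 0
            self.times_fired += 1
            for neighbour in self.downstream:   # "now you call these people"
                neighbour.receive_call()

# A three-person chain: two calls to a make it ring b, which rings c.
a, b, c = PersonAsNeuron(2), PersonAsNeuron(1), PersonAsNeuron(1)
a.downstream.append(b)
b.downstream.append(c)
a.receive_call()
a.receive_call()
print(c.times_fired)   # 1 -- the signal propagated down the chain
```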


The Chinese Room isn't so much an argument against functionalism as it is a forcing of functionalists to completely discard non-functional considerations.

I'm not sure I understand what sort of magic dust the brain is supposed to have that makes it qualitatively different from any imaginable programmable "computer", though. The ephaptic coupling link above is fascinating, but fundamentally, whatever sort of system the brain is, it seems obvious that it is possible to create such a system "artificially", without natural conception and childbirth, and unlikely in the extreme that the specific chemical and physical structure of the brain is the only possible way to achieve consciousness - any more than you need eagle feathers to fly.
posted by crayz at 4:51 PM on October 22, 2011 [1 favorite]


fatbird, functionalism as wikipedia explains it seems to say that "consciousness" can be understood as the interaction of a number of independent subprocesses to handle language, vision, counting, etc. These subprocesses could turn out to be interdependent, thereby revealing functionalism to be incomplete, but I don't see what effect that has on the Chinese Room's ability to understand Chinese. It means that in order to make the room in the first place you need to design every rule and every filing cabinet with every other rule and filing cabinet in mind. In real life nobody can do this, but then, in real life the Chinese Room could never work for any number of other practical considerations.

I expect really good automated translation tools would have to self-modify all the time and for each individual user. Already we have voice recognition systems that learn to recognize individual voices better.
posted by LogicalDash at 4:53 PM on October 22, 2011


I'm not sure I understand what sort of magic dust the brain is supposed to have that makes it qualitatively different from any imaginable programmable "computer", though.

This is just the dilemma of cognitive scientists: We want to reduce the brain/mind to materialist considerations only, to avoid ghosts in the machine, but we continually fail to reproduce consciousness materially. The study of AI is famous for repeated invocations of "we've found the magic dust!" only to run up against hard, mathematical computational barriers. Even when we find useful things, they turn out to be something other than consciousness.

Put another way: When Deep Blue beat Kasparov, that didn't prove that Deep Blue was conscious, it proved that a different method for playing Chess was superior.

functionalism as wikipedia explains it seems to say that "consciousness" can be understood as the interaction of a number of independent subprocesses to handle language, vision, counting, etc.

This is just to say that the black box works because it contains many smaller black boxes that interact--it's explanatorily vacuous. What it says about the CR is that it doesn't matter if it's an Englishman looking up symbols in a table, it's sufficient to fool someone; what the CR says about functionalism is that the CR is obviously not a conscious system--it may fool someone, but it articulates nothing about our own consciousness (since it's obviously not what's going on in our brain either consciously or materially).

You can say "it's not obvious", and many have. I don't think the CR is a knockdown argument, but it really highlights the issue well.
posted by fatbird at 5:27 PM on October 22, 2011 [2 favorites]


What it says about the CR is that it doesn't matter if it's an Englishman looking up symbols in a table, it's sufficient to fool someone; what the CR says about functionalism is that the CR is obviously not a conscious system--it may fool someone, but it articulates nothing about our own consciousness (since it's obviously not what's going on in our brain either consciously or materially).

He's assuming that you could construct a room which could perfectly translate chinese without also constructing a room which was conscious.
posted by empath at 7:28 PM on October 22, 2011


He's assuming that you could construct a room which could perfectly translate chinese without also constructing a room which was conscious.

Yeah, which is why it seems like begging the question. It's assuming the room would be a believable p-zombie, which actually seems more absurd than simply believing you could make a conscious room.
posted by crayz at 8:29 PM on October 22, 2011


Those are pretty cool animations. The only one I had trouble understanding was Hilbert's Grand Hotel, no doubt because of all those fuzzy infinities flying around muddying up the place. Here's a NYT blog post that goes into some detail about the concept:

The Hilbert Hotel
posted by Kevin Street at 10:01 PM on October 22, 2011 [1 favorite]


Well, I mean forget the chinese room for a second, since we now have something very like it in the real world-- does Siri understand english, even in a limited way? I'd argue that it does.
posted by empath at 10:39 PM on October 22, 2011


How does the Chinese Room use the submissions and responses? What does it use them for?
posted by wobh at 11:12 PM on October 22, 2011


[One-armed man pops out of Chinese room.]
One-armed man: And that's why you don't explain a thought experiment without first setting up the argument or theory it was designed in response to!
posted by anotherbrick at 11:17 PM on October 22, 2011


crayz: "Any system which can be reduced to "bottom level" parts whose actions we can wholly understand cannot be intelligent."

Only if you totally ignore emergent properties.
posted by MrLint at 11:24 PM on October 22, 2011


He's assuming that you could construct a room which could perfectly translate chinese without also constructing a room which was conscious.

No, he's not, and this is a crucial point: the CR is sufficiently advanced to fool a native Chinese speaker. The CR was pretty directly a response to Turing, who posited a standard of "we can't tell the difference", not perfect behaviour.

Now that we have experimental results where current chatbots fool humans 50% of the time, while real humans can do so only 70% of the time, this is seeming like a pretty low standard.

does Siri understand english, even in a limited way? I'd argue that it does.

You're being a bit clever with "understand" here. Siri is obviously not conscious; we can say that she understands English, for a metaphorical meaning of "understand", but as with the chess program beating a human player, the mechanism is different, and on becoming aware of the difference she is disqualified from the literal meaning of the word understand--that is, unless you commit yourself to arguing that we're just more complex versions of Siri.
posted by fatbird at 11:53 PM on October 22, 2011


I think that's beside the point, MrLint. The man, the room and the rulebook may (or may not) have some kind of emergent intelligence when considered as a unit, but this does not logically follow from the fact that they can manipulate Chinese characters. Or to put it another way, their ability to process information syntactically has no bearing on whether or not the system can process the semantic content of the same information.

You could test this by transposing some of the Chinese characters. For instance, if you transposed the symbols for "flower" and "garbage" in the incoming messages and the rulebook, the syntax of the rules would still make sense and the Chinese room could keep chugging along - even though the semantic content of the outgoing responses was now nonsensical. It could produce outgoing messages like "the garbage smells lovely today" and not "understand" that anything was wrong.
posted by Kevin Street at 12:07 AM on October 23, 2011
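
Kevin Street's transposition test is easy to stage in miniature. A sketch, with English words standing in for the Chinese characters: swap two symbols throughout the messages and the rulebook, and the lookup procedure runs exactly as before -- only the (unseen) meaning is scrambled.

```python
# Swap two symbols everywhere; the room's purely syntactic machinery never notices.
RULEBOOK = {"how does the flower smell today?": "the flower smells lovely today."}

def transpose(text: str, a: str, b: str) -> str:
    """Swap every occurrence of token a with token b and vice versa."""
    return text.replace(a, "\0").replace(b, a).replace("\0", b)

# Corrupted rulebook: same structure, two symbols exchanged.
SWAPPED = {transpose(k, "flower", "garbage"): transpose(v, "flower", "garbage")
           for k, v in RULEBOOK.items()}

def room(message: str, book: dict) -> str:
    return book.get(message, "please repeat that.")

query = transpose("how does the flower smell today?", "flower", "garbage")
print(room(query, SWAPPED))   # -> "the garbage smells lovely today."
```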


But any system which would be tripped up by such a change would be a poor system indeed. Imagine asking it to translate poetry, for example.
posted by empath at 12:15 AM on October 23, 2011


That's why Google is having so much trouble with their translation service. To compensate for a lack of semantic understanding they have to substitute a whole lot of rules, and millions and millions of human-generated texts and recordings that their system can extrapolate new rules from. It's not at all like the way a human learns languages.
posted by Kevin Street at 12:24 AM on October 23, 2011


I like how about 8 different people in this thread have said the Chinese Room argument is stupid for contradictory reasons, proving that it's a worthwhile thought experiment.
posted by auto-correct at 1:44 AM on October 23, 2011


The whole wetness thing is interesting. If you met a being who had never experienced wetness before then how would you explain it to them?
posted by IndigoRain at 4:51 AM on October 23, 2011


Oh, by finding some analogy with that being's experience, I suppose. Perhaps it's familiar with magnets?
posted by LogicalDash at 10:39 AM on October 23, 2011


You could test this by transposing some of the Chinese characters. For instance, if you transposed the symbols for "flower" and "garbage" in the incoming messages and the rulebook, the syntax of the rules would still make sense and the Chinese room could keep chugging along - even though the semantic content of the outgoing responses was now nonsensical. It could produce outgoing messages like "the garbage smells lovely today" and not "understand" that anything was wrong.

Scramble the right neurons in a human brain and we do the same thing.
posted by Zalzidrax at 2:18 PM on October 23, 2011 [2 favorites]


You don't even need to scramble neurons. You can just operate on not enough sleep and get the same effect.
posted by empath at 3:19 PM on October 23, 2011 [1 favorite]


I missed my station coming home tonight while reading the Chinese Room debate. My station is 松ヶ崎. Figure it out.
posted by planetkyoto at 8:14 AM on October 24, 2011


Some here are taking the Chinese Room too literally -- it's like finding fault with the Grand Hotel because "there's no way you could construct an infinite structure in a finite universe!" That's not the point -- it's just being put in human-scale terms to make the argument more readily relatable.

Imagine a rudimentary chatbot -- when it receives input containing "Hi," "Hello," etc., it responds with "Hi there! What's your name?" Then when it receives back "My name is [X]," it flags the [X] as the subject's name and uses it in future responses ("How was your day, [X]?") It's not being polite or understanding that it's talking to someone named [X], it's just manipulating character strings according to preset rules.

You could replicate such a crude system with a guy with a rulebook in a room full of Chinese cards. Dude gets a card saying "您好". He looks those symbols up, and it tells him to respond with "您好!你叫什么名字?" He gets back "我的名字是鲍勃。" The rulebook tells him to set aside the "鲍勃" to use later. Etc. At no point does the guy realize what these cards mean, he's just following the rulebook. And of course the rulebook doesn't "understand" anything -- it's just ink on paper. But it still outputs meaningful language.

Granted, such a system is crude and limited (in chatbot or man-in-room form), but some AI proponents argue that if we could just develop a system of algorithms complex enough, we'd end up with a program that could reliably pass as a human being, and thus be intelligent for all practical purposes. The point of the Chinese Room is that such a system could never be truly conscious -- no matter how intelligent it appears, it is, at heart, a bunch of dumb transistors and circuits carrying out basic calculations according to a prewritten program. The abstraction from that to a warehouse full of people blindly processing discrete transactions in a language they don't really grasp just drives the point home. They're functionally identical -- you could carry on a meaningful conversation with an advanced chatbot or more slowly using cards processed analog-style by a small army of workers instead of electrons, but either way there's no real intelligence behind it all. You can write "Fuck you!" and get back a, "No, fuck you!", but there's nothing feeling anger there. It's all a (convincing) charade.

Of course, you could say the same thing about the human brain -- it is, at heart, trillions of neurons firing electrical and chemical signals according to a biological blueprint. But then where does consciousness come from? At what point is a system intricate enough to become self-aware? Can a human mind be replicated in software? Or on (a lot of) paper? Why or why not? It raises a lot of interesting questions, even if it's technically impossible.
posted by Rhaomi at 9:12 PM on October 24, 2011
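
Rhaomi's rudimentary chatbot is small enough to sketch outright; the extra "nice to meet you" reply and the exact patterns are padding for illustration. It is string surgery all the way down -- nothing in it represents meaning:

```python
# Match the input against a few fixed patterns, stash any captured name, and
# splice it into canned replies. No rule ever consults what the words mean.
import re

class TinyChatbot:
    def __init__(self) -> None:
        self.name = None

    def respond(self, message: str) -> str:
        if re.search(r"\b(hi|hello)\b", message, re.IGNORECASE):
            return "Hi there! What's your name?"
        match = re.search(r"my name is (\w+)", message, re.IGNORECASE)
        if match:
            self.name = match.group(1)          # flag [X] for reuse, per the rulebook
            return f"Nice to meet you, {self.name}!"
        if self.name:
            return f"How was your day, {self.name}?"
        return "Tell me more."

bot = TinyChatbot()
print(bot.respond("Hello!"))            # Hi there! What's your name?
print(bot.respond("My name is Bob."))   # Nice to meet you, Bob!
print(bot.respond("Pretty good."))      # How was your day, Bob?
```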


Of course, you could say the same thing about the human brain -- it is, at heart, trillions of neurons firing electrical and chemical signals according to a biological blueprint.

Yes, this is exactly the problem with the chinese room.
posted by empath at 9:58 PM on October 24, 2011 [1 favorite]


I really love the Chinese Room because it so clearly illustrates the concept of "level error" (or "category error").

Of course the man doesn't understand Chinese. The room does.
posted by DU at 6:50 AM on October 25, 2011


They must have awfully weird rooms where you come from, DU.
posted by oddman at 9:12 AM on October 26, 2011


It's the elephant in the room that understands Chinese.
posted by stebulus at 4:13 PM on October 26, 2011


Here's an interesting article on Google Translate -- is it a Chinese Room?
Also, I'm tickled that this post engendered such an interesting discussion.
posted by bluefly at 6:52 AM on October 31, 2011

