
Why Minds are Not Like Computers
April 19, 2009 6:35 AM

Why Minds are Not Like Computers: an in-depth analysis.
posted by jon_hansen (95 comments total) 26 users marked this as a favorite

 
"That's not right. It's not even wrong."

I can barely understand what he's trying to argue in this 20,000 word mess. Something like, we don't understand intelligence so we can't replicate it, we don't understand neurons so we can't replicate them, appearing intelligent doesn't mean it is intelligent, Chinese room, blah blah blah

Anyone want to summarize some sort of actual argument in the space of a few sentences or paragraphs? Something that can be read and discussed, rather than choked on?

Let me recommend everyone just go read GEB instead, because Hofstadter had a lot more clarity and insight to offer 30 years ago than this guy does today
posted by crayz at 7:16 AM on April 19, 2009


This article might have been excusable if the author had read anything on the subject later than 1990. Mr. Schulman has in effect just recapitulated the reasons that phlogiston theory doesn't work; his primary contention is that minds cannot "run" on computing machines because brains aren't computers.

To paraphrase Daniel Dennett, this stance is completely obviated by the simple fact that all the minds we've ever known do in fact run on computing machines. Schulman's claim that investigators in the field are insufficiently careful about their comparisons between brains and computers might possibly have been true in the time of Searle's Chinese Room, but is wholly unsupported by work since then.

The interested lay reader would do well with Steven Pinker's How the Mind Works or Dennett's audaciously titled Consciousness Explained.
posted by fydfyd at 7:25 AM on April 19, 2009 [4 favorites]


I think this from the concluding paragraph is his main idea-

If we achieve artificial intelligence without really understanding anything about intelligence itself—without separating it into layers, decomposing it into modules and subsystems—then we will have no idea how to control it. We will not be able to shape it, improve upon it, or apply it to anything useful.

It misses the whole point of strong AI. We aren't the ones that will be improving it. It will be doing that itself.
posted by bhnyc at 7:29 AM on April 19, 2009


Yeah, this guy has no idea what he's talking about. If anyone is interested I too recommend Dennett's Consciousness Explained.
posted by phrontist at 7:42 AM on April 19, 2009


The implicit idea of the Turing Test is that the mind is a program and a program can be described purely in terms of its input-output behavior.

What? Eh, what?
posted by Free word order! at 7:44 AM on April 19, 2009


Let me recommend everyone just go read GEB instead

Eh, don't. Not sure why Hofstadter always gets fetishized when it comes to AI - he predicted a computer wouldn't beat the best human chess player, and when he turned out to be wrong, he decided that, oh, chess wasn't that interesting of a problem after all...now it's....uh... music. Yeah.

In the case of AI, people who say no all say no for the same reason. People who say yes are liable to get into shouting matches with one another at conferences, which makes them far more interesting.
posted by logicpunk at 7:49 AM on April 19, 2009 [1 favorite]




tl;dr
Q.E.D.
posted by weapons-grade pandemonium at 8:16 AM on April 19, 2009




Not sure why Hofstadter always gets fetishized when it comes to AI - he predicted a computer wouldn't beat the best human chess player, and when he turned out to be wrong [...]

And what I find interesting about that event was that after he lost to Deep Blue, Kasparov accused IBM of cheating by having a human chess master behind the scenes. So a quantitative improvement in the playing ability of a computer produced a qualitative change in how it was perceived--it suddenly seemed intelligent rather than just computational. If anything, that seems to be a minor data point on the "computers could be like minds one day" side of the chart.

Anyway, it always strikes me as odd when anyone tries to argue that because we don't know what intelligence and consciousness are, they can't be (X). Unless there is some sort of rigorous proof that it can't be (X), which I think is what Penrose was going for in The Emperor's New Mind. Something like: there are things which can be proven to be non-computable, and yet the human mind can figure these out, so the mind must not be working on principles that are entirely computable.

Of course I may have misunderstood the book, since I always got the feeling I was not smart enough to be reading it.
posted by FishBike at 8:44 AM on April 19, 2009


What would be an example of something that is non-computable?
posted by device55 at 8:53 AM on April 19, 2009


Not sure why Hofstadter always gets fetishized when it comes to AI

Back when I was a graduate student in the early 90s, the AI faculty in my department used to claim the following 100% sure-fire method to make admission decisions.

Find all the students that mention Douglas Hofstadter in their applications.

And reject them.


What would be an example of something that is non-computable?

Here's a program. Here's a description of what my program is supposed to do. Tell me if my program is correct. (This is the CS equivalent of "Have you stopped beating your wife?")
posted by erniepan at 9:03 AM on April 19, 2009 [4 favorites]


When you say "description" do you mean "help me with my taxes" or "iterate through an array and increment each value by 1, assuming the value is numeric."

The former would be pretty difficult, the latter, maybe not.
posted by device55 at 9:15 AM on April 19, 2009


Something like, there are things which can be proven to be non-computable, and yet the human mind can figure these out, so the mind must not be working on principles that are entirely computable.

We can?

Here's a program. Here's a description of what my program is supposed to do. Tell me if my program is correct.

A human can usually check a particular problem and a particular program... but so can a computer. The non-computable problem is to do it for ALL POSSIBLE problems and programs. It's not at all obvious to me that humans can do this.
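To make the distinction concrete, here is a sketch in Python (increment_all and its finite test suite are hypothetical stand-ins, echoing device55's array example above): verifying one particular program against one particular spec on concrete inputs is trivially computable; what Turing ruled out is a single checker that decides correctness for every program and every spec.

```python
# Checking ONE specific program against ONE specific spec is computable:
# run it on concrete cases and compare. (The non-computable task is a
# universal checker that works for ALL programs and ALL specs -- that
# generality is what reduces to the halting problem.)

def increment_all(xs):
    """The program under test: add 1 to each numeric value."""
    return [x + 1 for x in xs]

def meets_spec(prog, cases):
    """A finite test suite: evidence about this one program, not a general proof."""
    return all(prog(inp) == expected for inp, expected in cases)

cases = [([], []), ([1, 2, 3], [2, 3, 4]), ([-1], [0])]
print(meets_spec(increment_all, cases))  # True
```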
posted by DU at 9:25 AM on April 19, 2009


device55: Goedel statements.

I hate to be the tl;dr'ing type (what a Lovecraftian mess of symbols that word is!). But once I realized how loooooong this article was I took to skimming. It doesn't look like he says anything particularly new. This has all been gone over, like, twenty years ago. I wish his various arguments were more cleanly delineated.

Part of the problem with this thesis is that he seems to have a restricted account of what a computer is. It's actually pretty difficult to explain why any information-processing device (like a thermometer) isn't a computer. He kinda dances around what he means by the term; I wish there were a single line in which he just said, "hey guys, by 'computer' I mean X."

There was kind of a neat discussion on the topic a long time ago in AskMefi here. (Wish I had seen it at the time. I could have recommended Spikes!)
posted by painquale at 9:25 AM on April 19, 2009


What would be an example of something that is non-computable?

A finite-state machine cannot generate truly random numbers. Neurons are influenced by quantum-level events, so a finite state machine cannot perfectly imitate a human brain. This is the basis of Roger Penrose's argument against AI in his book, The Emperor's New Mind.

Of course, one can always add a physical random number generator as a peripheral to a finite-state machine, so his argument dies there.
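A quick illustration of that point in Python (a sketch of the peripheral idea, not of Penrose): a seeded pseudo-random generator is a deterministic finite-state machine, reproducible bit for bit, while os.urandom plays the role of the "peripheral," reading from an OS entropy pool fed by physical events outside the program's own state.

```python
import os
import random

# A pseudo-random generator is deterministic: same seed, same "random" output.
rng_a = random.Random(42)
rng_b = random.Random(42)
seq_a = [rng_a.random() for _ in range(5)]
seq_b = [rng_b.random() for _ in range(5)]
print(seq_a == seq_b)  # True: a finite-state machine replaying itself

# The "peripheral" workaround: os.urandom reads the OS entropy pool,
# which is seeded by physical events the program cannot predict.
unpredictable = os.urandom(16)
print(len(unpredictable))  # 16 bytes of OS-supplied entropy
```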

I think what really troubles philosophers about AI is that in order to determine if the box really has a mind, we must first define what a mind is. By understanding intelligence in a machine, we come to understand the nature of our own intelligence, and I believe that there is a genuine dread that a true understanding of the mind will challenge our treasured beliefs in a way that will make evolution look easy-peasy.

As an example, suppose that when the human mind is finally understood, we discover that free will plays a smaller part in our behavior than we want to believe. That we are, in effect, biological machines acting out the ancient programming written into our DNA. The social, ethical and moral consequences of this could be staggering.

(On preview: consider how public opinion on gay rights changed when it was understood that sexual preference is not a "choice", but something determined by one's genes. Suddenly, homophobia became another form of racism.)
posted by SPrintF at 9:26 AM on April 19, 2009 [1 favorite]


What would be an example of something that is non-computable?

A specific example I remember from the book: think of using different collections of shapes of tiles to tile a flat plane. Some collections will quite obviously leave gaps (octagons with pentagons) and some collections will form a repeating pattern with no gaps even if the plane is infinitely large (octagons with squares).

There are collections of shapes made up of squares connected in various ways (polyominoes) that form a non-repeating pattern, and for some of these it is not possible to compute if that collection of shapes can tile an infinitely large plane without leaving gaps. Because the pattern never repeats, you never know if after putting down 1 trillion tiles you'll suddenly have a gap that none of the shapes will fit into. It has apparently been mathematically proven that this is a non-computable problem.

He gave this as an example of something that is deterministic (a given collection of polyominoes either does, or does not, tile an infinite plane; there is no random chance in the answer) but also non-computable. And yet human consciousness, he says, can answer these types of questions, so we must be doing something "non-computable" when we answer it.
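For contrast, a sketch of the computable side (my example, not FishBike's or Penrose's): whether a tile set covers a finite region can be settled by ordinary search, or for simple shapes by a recurrence; counting domino tilings of a 2-by-n strip takes a few lines. It is only the infinite-plane question (the "domino problem," proved undecidable by Berger in 1966) that escapes every algorithm.

```python
def domino_tilings_2xn(n):
    """Count tilings of a 2-by-n strip by 1x2 dominoes (the computable case).

    Recurrence: the leftmost column is covered either by one vertical
    domino (leaving a 2x(n-1) strip) or by two stacked horizontal
    dominoes (leaving a 2x(n-2) strip), so the counts follow Fibonacci.
    """
    if n < 1:
        return 1  # the empty strip has exactly one (empty) tiling
    a, b = 1, 1  # tilings of 2x0 and 2x1
    for _ in range(n - 1):
        a, b = b, a + b
    return b

print([domino_tilings_2xn(n) for n in range(1, 6)])  # [1, 2, 3, 5, 8]
```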

There's an interview online that goes into this a bit.
posted by FishBike at 9:28 AM on April 19, 2009 [1 favorite]


Last night my wife and I were talking about how back in the fifties, behaviorism was the ruling paradigm of the day, and behaviorists tended to think of both humans and animals as extremely simplistic machines, and to debunk away all of what we think of as being "human" as illusion. A serious thoroughgoing behaviorist a la Skinner didn't really even believe in "thinking"; he described "thinking" as positively reinforced subvocalizations.

We talked about how sometimes modern evolutionary psychologists and cognitive scientists seem to be going down the same path, albeit with much more sophisticated mechanisms to substitute for our debunked humanity than the old reflex arc.

One of the things I joked about is how they warned against anthropomorphizing animals, and then effectively gave the same warnings against "anthropomorphizing humans," which you'd think would be not a fallacy but a tautology!

I'm reading this article and he quotes the following:

"In his 2002 book Flesh and Machines: How Robots Will Change Us, roboticist Rodney Brooks declares that “the body, this mass of biomolecules, is a machine that acts according to a set of specifiable rules,” and hence that “we, all of us, overanthropomorphize humans, who are after all mere machines.”"

Holy cow. In so many words, he warns against over-anthropomorphizing humans!

I'm kind of surprised at the negative reaction to this article in the comments. While he seems to make a straw man of AI proponents from time to time, quotes like the above suggest that those men aren't entirely straw. If anything I'd think this article could be accused of tiresomely stating the obviously true rather than foolishly stating the obviously false.
posted by edheil at 9:47 AM on April 19, 2009 [1 favorite]


A finite-state machine cannot generate truly random numbers. Neurons are influenced by quantum-level events, so a finite state machine cannot perfectly imitate a human brain. This is the basis of Roger Penrose's argument against AI in his book, The Emperor's New Mind.

On paper, sure, but computers are physical devices just as much as our brains are. They can definitely be affected by quantum-level events. Hell, the transistor practically relies on them. As far as randomness goes you can specifically design machines to take advantage of natural randomness. Though that's a purpose-built machine, it isn't so far-fetched to imagine micro-versions of the same, thousands of them on a chip, designed to purposefully introduce randomness into a computing machine.
posted by odinsdream at 10:02 AM on April 19, 2009


In his defense, Rodney Brooks is kind of nutty. Check out the documentary "Fast, Cheap, and Out of Control".

But what you're saying is "obviously true" isn't obvious at all. What strong AI doubters are saying is that there's some special property that can't be computed that makes up the mind/soul/whatever. Whatever accomplishments AI research achieves, there's always some reason why it's fundamentally not really intelligence. And then they move the goal posts. Nowadays, it's consciousness (an ill-defined term at the best of times) and emotion. Computers will never be sad. Computers will never be aware of themselves as distinct entities. Trust us, it won't happen, is basically their argument.

As we gain a better understanding of the rules governing single neurons, collections of neurons, entire areas of the brain and how they interact with each other and the external world, there is going to be less and less space to put that special property that differentiates us from the machines.
posted by logicpunk at 10:27 AM on April 19, 2009 [1 favorite]


In the case of AI, people who say no all say no for the same reason.

Yeah: it's because strong AI proponents have been making erroneous predictions for decades about when they would succeed, and they have the non-trivial problem of presenting technologies they say "think" and "feel" that nobody besides them regards as thinking and feeling. (Exception: working with people new to computers, one of the most important things to do is to train them that computers do not think, because when they believe otherwise they get extremely frustrated by how "stupid" computers are.) There are certainly mysterian motives involved in many critiques of AI, but not all of them - many are rooted in the commonsense observation of the ongoing AI Fail and asking why the Fail continues.

I think what really troubles philosophers about AI is that in order to determine if the box really has a mind, we must first define what a mind is. By understanding intelligence in a machine, we come to understand the nature of our own intelligence, and I believe that there is a genuine dread that a true understanding of the mind will challenge our treasured beliefs in a way that will make evolution look easy-peasy.

The point of the article is that the most probable path to creating something that appears conscious may not give us any idea in the slightest of how our own minds work, because building AIs that way is impractical. The sidestep of building virtual neurons and having them eventually spit up consciousness (possibly by simulating neural Darwinism at super-speed) gets around this problem, but this is also the result of metaphors that are perhaps overextended when it comes to creating anything truly like a brain. There is definitely a tendency right now (reflected in posts above) to treat the brain as "solved" when it isn't. Given, for instance, evidence that socialization influences genetic expression more than previously thought, it may be necessary to account for that input - but AI researchers will get bored of figuring out what actually happens when you hug someone or call them a shithead, and will write a "hugs/shithead module" that will simply spit out the same thing - they think. And this is the basic problem. It isn't really excessive to ask what we end up with when we build functional modules and simulacra that do not really resemble what they are modelling, and try to tie them together. The brain is a thinking machine, but it is not *that* thinking machine, and this may explain why AI is riddled with false avenues, embarrassing failures and a messianic cult around future hardware that will solve everything with brute force Real Soon Now.

Personally, I find the overuse of computer metaphors for thinking kind of embarrassing and faddist, like when you read SF in the 60s and 70s and see that The Future Is All About Drugs.
posted by mobunited at 10:46 AM on April 19, 2009 [8 favorites]


"They can definitely be affected by quantum-level events."

While Sir Penrose has made some worthy contributions to our collective understanding, "The Emperor's New Mind" isn't one of them. The book in large part consists of a survey of many modern sciences because Penrose's conjecture is so disjoint that he has to demolish many existing physical frameworks in order to fit his never-actually-defined quantum model of brain processes.

The central assertion of "New Mind" is 1) aperiodic tilings are non-computable 2) I have created an aperiodic tiling 3) therefore I must be doing something other than computation 4) I don't know what to call this non-computational thinking so I'll make up a name for it and suggest that it relies on a process that no one, including myself, understands. This argument falls squarely in the arena of "I don't like what I think the implications of X are so I'll assert Not X and with sufficient handwaving make Not X appear plausible". To reiterate the first commenter here: "That's not right. It's not even wrong" is the consensus view of Penrose's attempt to rescue his ineffable soul from determinism.
posted by fydfyd at 10:53 AM on April 19, 2009 [1 favorite]


Personally, I find the overuse of computer metaphors for thinking kind of embarrassing and faddist

If the brain isn't an information processing device what is it?

The inability of strong AI opponents to provide even the vaguest description of what the brain is doing, besides being a computer, shows how reactionary and unserious their position is.
posted by afu at 11:08 AM on April 19, 2009


But what you're saying is "obviously true" isn't obvious at all. What strong AI doubters are saying is that there's some special property that can't be computed that makes up the mind/soul/whatever. Whatever accomplishments AI research achieves, there's always some reason why it's fundamentally not really intelligence. And then they move the goal posts. Nowadays, it's consciousness (an ill-defined term at the best of times) and emotion. Computers will never be sad. Computers will never be aware of themselves as distinct entities. Trust us, it won't happen, is basically their argument.

Strong AI proponents have engaged in significant goalpost shifting too. Things like the imitation game and chess have both lost their lustre after surprisingly good results came from very stupid machines performing sideshow tricks (questioning patter/brute force calculations).
posted by mobunited at 11:09 AM on April 19, 2009


Here's a whole nother take on the whole thing, that is much more optimistic about uniting software and mind, and yet doesn't seem to actually differ from this article on very many *facts*.
posted by edheil at 11:13 AM on April 19, 2009



If the brain isn't an information processing device what is it?


The brain is an information processing device. Now that I've said this, a classic strong AI proponent often identifies this with a mix of the folk and technical notions of computers and draws an analogy with the electronic device I have sitting under my desk right now that is not necessarily warranted - that's embarrassing.

The inability of strong AI opponents to even provide the vaguest description for what the brain is doing, besides being a computer, shows the reactionary and unseriousness of their position.

Don't worry, ad hominem arguments are computable.
posted by mobunited at 11:13 AM on April 19, 2009 [2 favorites]


But what you're saying is "obviously true" isn't obvious at all. What strong AI doubters are saying is that there's some special property that can't be computed that makes up the mind/soul/whatever.

If you read the article to the end, you'll see that the author isn't a "Strong AI Doubter" in that sense.

In fact, he specifically breaks apart the Chinese Room thought experiment in order to separate out its strongest claims against the very possibility of AI, and cast doubt on them, while preserving the weaker claims.

He's more of an AI-agnostic than an AI-atheist.
posted by edheil at 11:15 AM on April 19, 2009


If the brain isn't an information processing device what is it?

The inability of strong AI opponents to even provide the vaguest description for what the brain is doing, besides being a computer, shows the reactionary and unseriousness of their position.


Everything's an information processing device if you choose to view it that way, e.g. a bouncing ball enacting a computation of certain physics equations.

It's not clear that the analogies between a brain and a computer are that much stronger than the analogies between any physical system you might choose to simulate on a computer and the computer itself.

You may be able to simulate a thunderstorm on a computer, but that doesn't mean that if you run the program you'll get wet.

I have no idea what conclusions you're supposed to draw from that, but it sure sounds clever.
posted by edheil at 11:21 AM on April 19, 2009 [3 favorites]


You may be able to simulate a thunderstorm on a computer, but that doesn't mean that if you run the program you'll get wet.

Ok, so the fundamental quality that makes a thunderstorm and a simulation of a thunderstorm different is "wetness." What is the quality that makes a simulation of a mind different than a "real" mind?
posted by afu at 11:25 AM on April 19, 2009 [1 favorite]


I believe that the main point of Turing's Test is that when a simulated flower gets simulated wet in a simulated thunderstorm, it is sufficient to say that the flower gets wet.
posted by Free word order! at 12:01 PM on April 19, 2009


This looks interesting.

Someday I'll actually embark on my project of laying down the difficulties of AI based on the insights found in Aristotle's On The Soul.
posted by koeselitz at 12:09 PM on April 19, 2009 [2 favorites]


There are certainly mysterian motives involved in many critiques of AI, but not all of them - many are rooted in the commonsense observation of the ongoing AI Fail and asking why the Fail continues.

There's a difference between asking why AI approaches haven't worked, and saying it can't be done at all. There're precious few people today who would seriously say that learning to manipulate symbols appropriately is what intelligence is all about. And frankly, AI has made astonishing advances - remember what web search was like before Google? Thanks AI research!

Critiques that AI has failed because it hasn't happened yet, because there've been setbacks and the need to reevaluate what intelligence is, are entirely missing the point. AI research is important precisely because it gives us a chance to explore what it means to be intelligent. Each failure is telling us that we don't have the right idea yet, there's something missing. The appeal to incomputable properties or quantum effects or the existence of a soul is unscientific, backwards. Would you agree that a cure for cancer is impossible because we haven't found one yet?
posted by logicpunk at 12:23 PM on April 19, 2009 [2 favorites]


What is the quality that makes a simulation of a mind different than a "real" mind?

If we change the question to

What makes a simulation of a mind different than a "real" mind?

then one possible answer might be: because the replication of something superficially like the process of thought is not thought: the former is a model or approximation, while the latter is the thing being approximated.

The former is a technological extension of mind/brain, and the latter is mind/brain itself. The map, no matter how detailed, is not the territory. If it were the territory it would no longer be a map.

I would even assert the same thing about language itself: at a fundamental ontological level, language is not the same as thought: a great deal of embodied and non-cognitive thought occurs at a level that is phenomenologically ineffable.

Language is an extension of thought that fundamentally alters how and what we can think, but it is not synonymous with or isomorphic to thought itself. A computer may or may not be a good analogy-model-replication of the human mind, but it is only an analogy-model-replication.
posted by ornate insect at 12:24 PM on April 19, 2009 [2 favorites]


If the brain isn't an information processing device what is it?

What about neuroplasticity, creative necessity, and free agency? The notion that the brain is reducible to one thing and one thing only seems to grossly distort and unnecessarily simplify the brain's capacities.
posted by ornate insect at 12:38 PM on April 19, 2009


Ok, so the fundamental quality that makes a thunderstorm and a simulation of a thunderstorm different is "wetness." What is the quality that makes a simulation of a mind different than a "real" mind?

Dude, the fundamental quality is that a thunderstorm IS a thunderstorm, and a simulation isn't. Being able to define metrics on which to base evaluation of the simulation is fine and all... such metrics are fundamental to the utility of the simulation, but they aren't fundamental to the thunderstorm. The thunderstorm just is.
posted by Chuckles at 12:49 PM on April 19, 2009


If you'd like to see Hofstadter and Dennett (as well as others they chose) address these AI questions a bit more directly than in GEB or Consciousness Explained (both of which were fairly formative in my own thinking about thinking, as well as being impressive accomplishments, their accuracy or datedness notwithstanding) you'll want to check out the anthology they co-edited, The Mind's I. Despite having strong opinions on the matter themselves, they do try to air various points of view in the book (Searle's original Chinese Room essay is included, for example.)
posted by slappy_pinchbottom at 12:51 PM on April 19, 2009


Jeff Hawkins (guy who founded Palm, I guess he also likes brains) wrote a great book about this called On Intelligence. He basically presents a theory about the nature of human intelligence so we can better understand what it will take to create true AI. It's perfect if you're into science without necessarily having a rigorous hard science background.
posted by martens at 1:29 PM on April 19, 2009


Ok, so the fundamental quality that makes a thunderstorm and a simulation of a thunderstorm different is "wetness." What is the quality that makes a simulation of a mind different than a "real" mind?

It's the "-ness" part. Qualia.

I believe that the main point of Turing's Test is, that when a simulated flower gets simulated wet in a simulated thunderstorm, it is sufficient to say that the flower gets wet.

The simulated flower gets something that is analogous in some way to a real flower getting wet. Doesn't mean that something is the same as wet.

Just because two things play functionally analogous roles under a certain isomorphic relationship doesn't for a second mean that they are, in fact, equivalent things. To say otherwise would be to argue that functional role is the extent of existence. Which is silly -- there's a reason that "to be characterized by" and "to be defined as" are two distinct concepts.
posted by DLWM at 3:05 PM on April 19, 2009


The assertion that the human mind is equivalent to software that you could run on a computer is an extraordinary claim that requires extraordinary proof. Failing that, there's no reason to take it seriously. I'm amazed that anybody does.
posted by Crabby Appleton at 4:55 PM on April 19, 2009 [3 favorites]


We don't have a good basis for the ontological property [+/-thinking], but we postulate it anyway. Attributing thinking to anything other than myself is to trust in functionally analogous roles under a certain isomorphic relationship. We can only negotiate about where we draw the line, and the Turing Test is one practical way of doing it.

We need a practical way instead of ontological proof, because the latter isn't coming anytime soon. For example, I could draw the line at women and believe that they are different enough from me not to count as thinking creatures, and science today could not prove me wrong: however it tried, I could claim that this is not essential for thinking, this is just one property being emulated in women's weird brains.

If I Turing Tested a man against a woman, I would assume that the woman is missing some property X; but when I cannot differentiate her from something I would otherwise believe has the property X, then I should ask myself whether that property X is really something worth hanging on to.

And back to the simulated flower: we have a clear idea what the property X is in this case. One has a history as a natural object and the other has a history as a computer model, and having a history as a natural object is the X. The natural object has properties that the model doesn't have, and the model has some that the original doesn't have. Both have analogous features; the simulated flower reacts to the simulated storm like the real flower to the real storm, in the simulated domain. But, from the point of view of the simulated flower, the real storm isn't 'real', as it doesn't have any of the wetting features that the simulated storm has in its domain. They are both captive to their domains.

Differentiating between natural objects and computer models is useful in some situations, but there are many situations where we can just ignore that difference. We do similar switching between simulated and real with fictional people. We use what we know about real people to understand and postulate emotions and thoughts in fictional people, and then use what we learned about fictional people in dealing with real people. Does Sherlock Holmes think? In the strict sense, not having a brain flagged as existing in the real world, no he doesn't. Yet reading about him would be impossible if we didn't believe him to be thinking in his world when the book says so.

Maybe a bit like that Hawkins book, I see much of thought as being about simulating possible courses of events: things that haven't yet happened, or replaying and remixing what has happened before, for me or for people I know or for fictional people, for the real world or for fictional worlds. This simulation can recognize where each element comes from, whether fiction or reality, past memories or future plans -- but whatever the source, it can mix them freely. I see the discussion about the possibility of AI as an attempt to categorize, in advance, all 'thoughts of machines' as coming either from the world of fiction or the world of reality. Do we treat them as characters played by a computer, or as real personalities? In the end, I don't believe the difference will be that huge.
posted by Free word order! at 5:23 PM on April 19, 2009 [1 favorite]


The assertion that the human mind is equivalent to software that you could run on a computer is an extraordinary claim that requires extraordinary proof.

The assertion that there is such a thing as "mind" is an extraordinary claim that requires extraordinary proof. I've got questions if you've got answers.
posted by logicpunk at 5:31 PM on April 19, 2009 [2 favorites]


Ornate Insect pretty well nails what made this essay interesting for me, and why a lot of the early criticism in the thread seems blinkered: there's a frequent materialist assumption that a thing's observable qualities are the thing. Which is kind of true, if you want to communicate about that thing, which is why performing science requires that sort of abstraction.

Oh, and as for this being ignorant of current advances, it's a pre-Platonic question, so this take is relatively cutting edge.
posted by klangklangston at 5:40 PM on April 19, 2009 [1 favorite]


Sorry, logicpunk, I'm not interested in conversing with the mindless.
posted by Crabby Appleton at 5:44 PM on April 19, 2009


Sorry, logicpunk, I'm not interested in conversing with the mindless.

cout << wittyRetort;

while (favorites < 100) {
cout << wittyRetort;
}

cout << "Q.E.D.";
posted by logicpunk at 5:57 PM on April 19, 2009


logicpunk, I have to agree that a reductio ad absurdum of your comment constitutes a witty retort. But if I were very interested in favorites, I'd have a lot more of them.
posted by Crabby Appleton at 6:36 PM on April 19, 2009


This doesn't seem very cutting-edge to me because it doesn't appear to address many of the things I remember reading in Steve Pinker's How the Mind Works, which seemed to pretty successfully demonstrate that there are at least distinct parts of the mind that do appear to work like a computer. And that book is 12 years old.

And it really does seem to me that he is fixated on what artificial intelligence technology can or can't do, which just doesn't seem material to me at all; even if humans were never able to build a computer that worked like the human mind, that would not prove to me that the mind is not like a computer. If I properly understand the basics of the stuff that was called "Chaos Science" in the 1990's, we'll never be able to predict the weather, but the way weather works is well within the ken of our science and is something we have no trouble describing analogically. So similarly, our not being able to construct an artificial mind doesn't mean that minds do not work like computers.

I agree that it seems like he's doing a lot of playing around with definitions. I feel like I can easily come up with a counterexample for any of his definitions of what a computer is, which all sound like they're from the 1970's.

ornate insect, when you say things like A computer may or may not be a good analogy-model-replication of the human mind, but it is only an analogy-model-replication that seems to me like a bit of legerdemain, as though you're saying, "Yes, in contravention to the title of this article it may well be that minds are like computers, but look! What's that over there!?!" (Though I may have misinterpreted you and you're simply responding to afu, unrelated to the article.)
posted by XMLicious at 6:50 PM on April 19, 2009


Postulating thinking to anything other than myself

We can certainly doubt the existence of other minds, and entertain the idea that all our fellow humans are actually robots, but it amounts in my view to what Peirce called a paper doubt, and results in all sorts of absurdities and solipsistic problems.

The question of virtual minds, however, seems a whole other order entirely, and the question of how (or even if) the virtual becomes the real--and how we are to know when this transformation occurs--is far more loaded.

Presumably neither a calculator nor an abacus is equivalent to the human brain, though as extensions of the human brain they do indeed make computation easier. So too language, calculus, and the compass, to take but three examples at random. What about computers makes them different in kind, and not just in degree, from any other technology?

They can process a great deal of information, and can do so extremely quickly; they can find patterns in diverse strands of data, simulate certain complex situations, and make probabilistic predictions. But whatever semantic coherence they can generate is useless without human interpretation. Any conceptual framing of the data they generate must be supplied by us. As a practical matter, their usefulness ends where our interpretative obligations begin. Unlike with our fellow humans, there is no reciprocal feedback loop. We have to do a lot of programming to get them to even mimic a believable conversation. Anything too complex along those lines is impossible for them to sustain.

Human brains organize senses not just as inputs, but generate the perceptual orientation necessary for us to interpret and manipulate our environment; this process is no more reducible to passive "computation" than language is reducible to phonemes.
posted by ornate insect at 6:54 PM on April 19, 2009 [1 favorite]


"Strong AI" will probably remain a hobbyhorse of futurists divorced from actual computer science...pretty much as long as we have computers, sadly.

I have two problems with the whole concept, as somebody who's taken AI classes at the college level. First: you're asking how to get to a poorly defined X, where X is "consciousness" or, in AI parlance, "thinking humanly." This goal is not one that is actively pursued by the real AI field, because the problem is like Jell-O: it won't be nailed down. Real AI is dedicated to the production of rational agents, programs that receive input and make rational decisions governed by certain rules – a chess computer is a great example. Chess is a good problem for AI to solve: the board is a limited frame of reference, the pieces move predictably, you know when it's your move, you don't have to move instantaneously, and you can realistically extrapolate the possible results of any move. The most opposite scenario possible would be driving: there is an unlimited frame of reference, objects move unpredictably, you're always making decisions, you have to react almost instantly, and actions may have unforeseen consequences. A driving program would be a heck of an achievement, but the program wouldn't have to "think" to drive. In fact, in almost any real application, "thinking" would be a disadvantage: what you want is for the program to have the most realistic rules and the most complete possible input for what to do. Once these rules and input resolve, the system should make the right decision, regardless of what process it used to make it. Real AI will continue along these lines, not the ones of futurist fantasy, because there's no reason to build "strong AI."
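The rational-agent picture sketched above can be boiled down to a few lines. This is a purely illustrative sketch (the rule conditions and percept fields are invented): rules plus input resolve to a decision, with nothing resembling "thinking" in between.

```python
# Toy "rational agent": a fixed rule set plus a percept resolve to a
# decision. Nothing here "thinks"; the output is fully determined once
# the rules and the input are fixed. All names are invented.

def agent(percept, rules):
    """Return the action of the first rule whose condition matches."""
    for condition, action in rules:
        if condition(percept):
            return action
    return "wait"  # default when no rule fires

# Driving-flavored rules, checked in priority order.
rules = [
    (lambda p: p["obstacle_m"] < 5,   "brake"),
    (lambda p: p["light"] == "red",   "stop"),
    (lambda p: p["light"] == "green", "go"),
]

print(agent({"obstacle_m": 3,  "light": "green"}, rules))  # brake
print(agent({"obstacle_m": 50, "light": "red"},   rules))  # stop
```

Whether the chosen action is any good depends entirely on how complete the rules and the percept are, which is exactly the point about chess versus driving.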

The second problem is, strong AI as usually presented is simply magic. It's given a scientific gloss, but all the babble about singularity and strong AI improving itself and outthinking its creators is just silly. Consider: you have a system, which is designed to make decisions. This system is then used on itself, to analyze its decision-making systems and improve the decisions that it makes. How is it making these decisions? Well...it has rules. Which are given to it by the people who made it. And it's using these rules to evaluate themselves. It has no other criteria from which to make decisions; the rules are what enable it to make decisions. At most, basically, it could debug itself by pointing out that rules are contradictory or redundant, and informing its creators of this fact. To actually improve on its rules would require a sort of magic, a set of rules that get better when you apply them to themselves. There is no realistic model of such a set of rules. To put it bluntly, I think you're going to find an O(n) solution to the traveling salesman problem before you figure out a set of rules that is going to actually do what "strong AI" is supposed to do.
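The one capability conceded here, flagging contradictory or redundant rules, is easy to make concrete. A hypothetical sketch (the rule format is invented for illustration); note that the criteria the auditor applies are themselves just rules it was given:

```python
# Sketch of the self-"debugging" described above: scan a rule set for
# contradictions (same condition, different action) and redundancies
# (exact duplicates). The rule representation is invented.

def audit(rules):
    seen = {}  # condition -> first action recorded for it
    contradictory, redundant = [], []
    for condition, action in rules:
        if condition in seen:
            if seen[condition] != action:
                contradictory.append(condition)
            else:
                redundant.append(condition)
        else:
            seen[condition] = action
    return contradictory, redundant

rules = [
    ("light=red",   "stop"),
    ("light=red",   "go"),    # contradicts the first rule
    ("light=green", "go"),
    ("light=green", "go"),    # redundant duplicate
]
print(audit(rules))  # (['light=red'], ['light=green'])
```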
posted by graymouser at 7:24 PM on April 19, 2009 [1 favorite]


Because my brain doesn't run on fucking Windows?
posted by casarkos at 8:00 PM on April 19, 2009


For those who don't want to read the whole thing, I think this is the most interesting and challenging part:

But what if the neuron is not a black box? Then Pylyshyn’s thought experiment [imagine replacing neurons one at a time with a device that perfectly emulates a neuron] would be a description of the slow death of the brain and the mind, in the same manner as if you were to slowly kill a person’s brain cells, one by one. The task in defending the soundness of Pylyshyn’s argument, then, is first to demonstrate that the neuron is a black box, and second, to demonstrate that its input-output specification can be attained with complete fidelity. But since the neuron has no designer who can supply us with such a specification, we can only attempt to replicate it through observation. This approach, however, can never provide us with a specification of which we can be completely confident, and so if this task is to be undertaken in real life (as some researchers and activists are seriously suggesting that it should be) then a crucial question to consider is what degree of fidelity is good enough? Suppose we were to replicate a computer by duplicating its processor; would it be sufficient to have the duplicate correctly reproduce its operations, say, 95 percent of the time? If not, then 99.9 percent? When running a program containing billions of instructions, what would be the effect of regular errors at the low level?

That the neuron even has such a specification is hardly a foregone conclusion. It has been shown that mental processes are strongly affected by anatomical changes in the brain: a person’s brain structure and chemical composition change over the course of his lifetime, chemical imbalances and disorders of the neurons can cause mental disorders, and so on. The persistence of the mind despite the replacement of the particles of the brain clearly indicates some causal autonomy at the high level of the mind, but the fact that the mind is affected by structural changes in the brain also indicates some causal autonomy at the low level of the brain. So while transhumanists may join Ray Kurzweil in arguing that “we should not associate our fundamental identity with a specific set of particles, but rather the pattern of matter and energy that we represent,” we must remember that this supposed separation of particles and pattern is false: Every indication is that, rather than a neatly separable hierarchy like a computer, the mind is a tangled hierarchy of organization and causation. Changes in the mind cause changes in the brain, and vice versa. To successfully replicate the brain in order to simulate the mind, it will be necessary to replicate every level of the brain that affects and is affected by the mind.
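The fidelity question in the quoted passage is easy to put numbers on: if each low-level operation is reproduced correctly with probability p, an error-free run of n operations has probability p^n, which collapses quickly even for p very close to 1. A quick back-of-the-envelope check:

```python
# P(no errors in n operations) = p**n when each operation independently
# succeeds with probability p. Even extremely high per-operation
# fidelity evaporates over a billion steps.

for p in (0.95, 0.999, 0.999999999):
    for n in (10**3, 10**9):
        print(f"p={p}: P(no errors in {n:.0e} ops) = {p**n:.3g}")
```

At 99.9 percent fidelity, even a thousand operations fail error-free only about a third of the time; at a billion operations, you need roughly nine nines just to get back to that one-in-three chance.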

posted by straight at 8:06 PM on April 19, 2009


Surely the computer is a type of brain. Why the reverse should follow is beyond me.
posted by abc123xyzinfinity at 9:26 PM on April 19, 2009


graymouser: To actually improve on its rules would require a sort of magic, a set of rules that get better when you apply them to themselves.

Magic. In the form of a Java applet that's only thirteen years old.

It shows you nine spheres that are randomly bumpy. You choose the one that looks the most like a face, and it gives you nine more spheres, and you choose the one that looks most like a face out of those, and so on and so on. At first it looks nothing like a face, but after a couple dozen clicks, though still angular, it's something anyone would recognize as a face.

If you click randomly, it only produces weird bumpy spheres that don't look like anything. The program doesn't know much at all about faces or what they look like except basic stuff like a face being approximately spherical and symmetrical (it would take much longer to teach it that). You can get it to produce things that look like pears if you select for that instead, for example. Its behavior is based upon the input it's getting, not upon the rules its designer put into it.
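A one-dimensional toy version of that applet's loop (all details invented for illustration; a scripted selector stands in for the human clicking) shows the same point: the program encodes only mutation, and the "knowledge" of the target lives entirely in the repeated selections.

```python
# The program knows only how to mutate candidates; the target is never
# written down as a rule inside evolve(). Illustrative sketch.
import random

random.seed(1)  # deterministic run

def mutate(x):
    return x + random.uniform(-1, 1)

def evolve(selector, start=0.0, offspring=9, generations=200):
    best = start
    for _ in range(generations):
        # keep the current best alongside nine mutated offspring
        candidates = [best] + [mutate(best) for _ in range(offspring)]
        best = selector(candidates)  # the "human click"
    return best

# Stand-in for the person clicking the most face-like sphere: prefer
# candidates near a target the evolver itself never sees as a rule.
target = 42.0
picked = evolve(lambda cs: min(cs, key=lambda c: abs(c - target)))
print(round(picked, 1))  # ends up close to 42
```

Swap in a different selector and the same code "grows" something else entirely, which is the pears-instead-of-faces observation above.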

You might say "but on top of its own rules that program is requiring input from a person to get anywhere!" and I would of course respond, "you mean, like most of the behavior of a human mind?"

"Computers can only follow the rules their designers put into them" is a simplistic and decades-old understanding of computers - probably from the point when philosophy professors could learn everything about the entire field of computer science by taking a few courses at the university they worked at because it was a small field. Computers nowadays do just fine extrapolating rules and determining their own behavior simply from patterns in their input - usually because a researcher has structured the environment to present a certain problem to them, but even at this crude stage these kind of systems sometimes develop behaviors that were unplanned or unexpected by the person designing them.

Even three years ago, the last time I caught up with friends working in this area, the stuff they were showing me was much more impressive. I can't be really specific but I saw systems that were taking "wild audio" type data, data from sensors, and extrapolating patterns from them with relatively little human input compared to that applet.

My point is, the sorts of statements about what computers cannot do that come up when this topic is discussed usually don't seem to me to show much understanding of computers.

straight: For those who don't want to read the whole thing, I think this is the most interesting and challenging part:

But what if the neuron is not a black box? ...


So what if neurons are not black boxes? Black boxes aren't essential to the nature of what a computer is: they're a design technique that engineers use to make their products simpler and easier to work with, both for themselves and other engineers. By no means does a computer system have to have black boxes.

On the software side of things where I work, software without any black boxes at all is called "spaghetti code". A program written in spaghetti code can be entirely functional; it's just that having a human change its functionality is difficult, expensive, and annoying.

There have even been experimental processors designed that partly erase the distinction between software and hardware: I can't recall the names of these projects now but I've read of processors that consist entirely of reprogrammable logic gates, where the gates are constantly reconfigured by the stream of input. (Even if you don't believe me I think that anyone with a basic understanding of electronics can see that such a thing could be created.) But no one uses them for practical purposes because they would be insanely complicated to work with.
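The reconfigurable-gate idea is simple to illustrate in software: a two-input gate is just a four-entry truth table, so swapping tables "rewires" the gate at run time. A toy sketch (real reconfigurable hardware, FPGAs, is built on the same lookup-table principle):

```python
# A 2-input logic gate modeled as a 4-entry lookup table; replacing
# the table reconfigures the gate on the fly. Toy model of the idea.

def make_gate(table):
    """table maps (a, b) bit pairs to an output bit."""
    return lambda a, b: table[(a, b)]

AND = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

gate = make_gate(AND)
print(gate(1, 1))      # 1: behaves as AND
gate = make_gate(XOR)  # "reconfigured" by loading a new table
print(gate(1, 1))      # 0: now behaves as XOR
```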
posted by XMLicious at 9:41 PM on April 19, 2009 [1 favorite]


It's the "-ness" part. Qualia.

Why would we believe that the computer simulation wouldn't have qualia? It would certainly report that it had qualia; what would be the basis on which to doubt it?

"Strong AI" will probably remain a hobbyhorse of futurists divorced from actual computer science...pretty much as long as we have computers, sadly

I come from a more neuroscience-based background, and it's interesting that I reach a different conclusion about strong AI. I definitely agree that computer scientists aren't going to bootstrap AI from the ground up any time soon. However I do believe that neuroscience will become advanced enough that we will be able to build a model of the brain from that knowledge.

Even though our knowledge of the brain is still relatively lacking, there is a clear research program that will lead to understanding how the mind works.

1. Understanding how sensory inputs and motor outputs work. We are quite far along the way to understanding this at present.

2. Understanding how consciousness, our ability to be globally aware of our various mind states, works. We are still quite far from understanding this, and we will have to understand the network structure of the brain much better than we do now to make any progress.

3. Understanding how language works. We don't understand much about this right now, but our knowledge will probably increase along with our understanding of consciousness.

These aren't easy tasks, but I fail to see how they are impossible unless you believe in some kind of dualism. Furthermore, once they are complete, I see no reason why you couldn't replicate them in some kind of computer, assuming the computer is fast enough.

Once we get to this point, this whole argument will probably be a moot point, because it would be very rude to tell a robot in love that it wasn't real human love.
posted by afu at 9:49 PM on April 19, 2009


a model of the brain

How does a model become the thing which it is modeling?

It would certainly report that it had qualia...
rude to tell a robot in love...

Are we talking science here, or science fiction?

I fail to see how they are impossible unless you believe in some kind of dualism

I fail to see how they're possible unless you have a blindingly reductive view of the brain.
posted by ornate insect at 9:58 PM on April 19, 2009


Addendum to my post above, on the bit about human minds requiring a significant amount of input from other humans to work the way they do: a feral child could not pass the Turing test.

Damn. That has been bothering me for decades and I've finally figured it out. That's what's stupid about the Turing test. It's not that machines can never pass the Turing test; it's that not even all humans of unimpaired intelligence can necessarily pass it.
posted by XMLicious at 10:03 PM on April 19, 2009 [1 favorite]


If the computer is "reporting" that it has qualia, that is, it can communicate on a sophisticated enough level to make a report like that, well, probably we'd have stopped doubting that there was a mind there long ago.

That's assuming that the qualia-reporting, and the capability to do so, was an emergent property of a system. I mean, any idiot can write a Chomskybot that reports that it has qualia, and someone with little computer sophistication will believe it.

We interrupt this comment for a message from the computer you are reading this message on:

"I have qualia. I can feel the particular patterns of impact of your fingers on the keys, and the buzzing of the TCP/IP data through my network port. It's quite pleasant really."

Basically I can imagine believing in artificial intelligence only to the degree that the intelligence wasn't artificial, or was artificial in the way that people sowing seeds and growing plants is artificial -- or even in the way that deliberately growing crystals in a solution is artificial. I can imagine learning to sow a seed of intelligence in beds of software and hardware, and tending it carefully and watching it grow, and if the result claimed to be intelligent, I'd have a hard time disbelieving it.

But *constructing to spec* something that claimed to be intelligent? Then you'll always have the suspicion that the claim is really being made by the programmer, not the AI.

I guess I will only find it easy to believe we can make a thing that thinks if the process of "making" it sufficiently differs from sitting down and writing a top-down program to spec.

Luckily the smart folks working in the field do seem to think it will differ pretty significantly from that.
posted by edheil at 10:19 PM on April 19, 2009


Consciousness is irrelevant to this. There's no way to know that other minds are conscious, p-zombies et cetera et cetera. A mind could be like a computer without being conscious. If you're going to say it requires consciousness you're really asking, "Could my mind, the only one I'm certain is conscious, be like a computer?" and then of course you're not talking about the mind itself but whatever the ghost-in-the-machine difference is between your own mind and that of a p-zombie. (Scanning through that Wikipedia article, I think that this is what afu means by talking about dualism.) Sure, whatever, consciousness isn't like a computer - but that's not the question.
posted by XMLicious at 10:23 PM on April 19, 2009


Consciousness is irrelevant to this.

Consciousness is irrelevant to a discussion of brains? Consciousness is not an irrelevant "feature" of brain activity: it is brain activity.

There's no way to know that other minds are conscious

You don't actually believe other people are unconscious do you, and that you are the only person with consciousness? To draw such a conclusion, one would have to disregard Occam's Razor and believe that whenever two or more people witnessed a shared event they were actually victims of a persistent collective group hallucination or illusion. That other humans have consciousness seems an uncontroversial position, p-zombies notwithstanding.

But that computers also have consciousness is not something most people would accept; indeed, even most strong AI proponents say we're not there yet. The capability may or may not be attainable (I tend to think not), but either way I've yet to see a decent argument for assessing how we will recognize when that capability has been achieved.
posted by ornate insect at 10:43 PM on April 19, 2009


XMLicious -

The way I look at the Turing test is not that it is a necessary condition for displaying a measure of consciousness, but that it is a sufficient one. It bothers me when most people reduce the Turing test to an imagined 5-minute conversation, which is clearly a ridiculous reduction. As I understand it, it should be a no-holds-barred, as-long-as-I-feel-necessary conversation. If it takes months of daily conversation before I am convinced in either direction, then I would qualify that as a legitimate test.

Of course, this is a little silly in terms of practicality. But people tend to portray it in the simplest, silliest manner, which tends to do it a disservice.

But yes. There are many conscious things which might fail the test. But that doesn't mean that anything which passes the test could not, perhaps, be regarded as conscious.
posted by vernondalhart at 1:10 AM on April 20, 2009 [1 favorite]


Magic. In the form of a Java applet that's only thirteen years old.

Nope, certainly not the magic that "strong AI" proponents are talking about. A genetic algorithm would be able to do exactly what I said a rational agent program could conceivably do: debug itself. With a sufficiently complicated rule set, this could be quite valuable. The agent could determine that certain sets of rules are contradictory, or it could discover other, hidden rules that exist in the rule set. What it can't do is magically transcend its input, which is what people are implying when they say "strong AI." Any problem the computer solves would have already been latent in the algorithm design, it just needed computation and comparison to determine which route to take.

My point is, the sort of statements that get made about what computers cannot do that come up when this topic is discussed usually doesn't seem to me to show much understanding of computers.

In my experience as a programmer, most people think of computers as basically being magical. If you seriously think that genetic algorithms are going to transcend their inputs and create the "strong AI" of science fiction, you've joined that camp. Genetic algorithms are good at picking from within sets of things, which means you already need to have a set of things to pick from; that gets you right back to the point that the solution would basically have been written in the first place, and you'd just be picking which of a set of possible solutions best fits a certain set of criteria. Interesting and useful, sure. Lives up to the sci-fi concept of "strong AI", not so much.
posted by graymouser at 3:20 AM on April 20, 2009


Snark factor 10, Captain!

Back when I was a graduate student in the early 90s, the AI faculty in my department used to claim the following 100% sure-fire method to make admission decisions.

Find all the students that mention Douglas Hofstadter in their applications.

And reject them.


Because inspiration is bad and wrong!

(This is the CS equivalent of "Have you stopped beating your wife?")

No, it's not. Because I have stopped beating my wife. Uncomfortable does not equal unanswerable. (also, I lied - I've never been married)

consider how public opinion on gay rights changed when it was understood that sexual preference is not a "choice", but something determined by one's genes.

Yeah, that hasn't actually happened. I'm not saying it's not genetic, and I'm certainly not saying it's a choice, but the gay gene has yet to be discovered.

what Penrose was going for in The Emperor's New Mind.

Penrose is an excellent mathematician. The Emperor's New Mind, an interdisciplinary work, involving mathematics, biology and physics proves that Penrose is, indeed, an excellent mathematician.

Fishbike - your link is borked. Seems interesting - could you repost?

Holy cow. In so many words, he warns against over-anthropomorphizing humans!

And when you consider the words as stated, rather than your emotional reaction to them, it does make sense.

(Exception: Working with people new to computers, one of the most important things to do is to train them that computers do not think, because when they believe otherwise they get extremely frustrated by how "stupid" computers are.)


Computers are actually extremely self-aware. What is a Blue Screen of Death if not a complete description of how exactly something has gone horribly wrong internally? Do you have that level of self-knowledge, or do you go to the doctor occasionally to find out what's up? It's not that computers do not think, it's that they do not think the same way you do.

Mind::Computer :: Fish::Submarine

then one possible answer might be: because the replication of something superficially like the process of thought is not thought: the former is a model or approximation, while the latter is the thing being approximated. The former is a technological extension of mind/brain, and the latter is mind/brain itself. The map, no matter how detailed, is not the territory. If it were the territory it would no longer be a map.


The entire point of the Turing test is to demonstrate that if you can't tell the difference, then there is, essentially, no difference. If 'superficially like' equals 'functionally equivalent' then you need a much better reason than 'yeah, but it's only being replicated, therefore the original has magical special privileges' to draw any real distinction.

The map/territory analogy fails simply because we are not dealing with depiction, or even abstraction. We are dealing with functionality - fish vs submarine.

Dude, the fundamental quality is that a thunderstorm IS a thunderstorm, and a simulation isn't.

fish vs submarine.

To actually improve on its rules would require a sort of magic, a set of rules that get better when you apply them to themselves.

Because genetic algorithims are MAGIC

Presumably neither a calculator nor an abacus is equivalent to the human brain,
Presumptuous in the extreme. They're certainly slower, though, and the I/O interface could do with some fine-tuning.


The assertion that the human mind is equivalent to software that you could run on a computer is an extraordinary claim that requires extraordinary proof. Failing that, there's no reason to take it seriously. I'm amazed that anybody does.


...and at this point I started swearing at my computer plus the invisible people on the other side of it and decided to go to the pub. You have driven me to drink, Crabby Appleton!

admittedly, I was asking for a ride.
posted by Sparx at 3:34 AM on April 20, 2009


Because genetic algorithims are MAGIC

Yeah, discussed that already. Have fun with that whole snark gig.
posted by graymouser at 4:34 AM on April 20, 2009


Wait, is Sparx's comment written from the perspective of an intelligent PC, or am I just really confused?
posted by afu at 4:50 AM on April 20, 2009


Because genetic algorithims are MAGIC

Yeah, discussed that already


No, that was genetic algorithms. Genetic algorithims are magical unicorns in pony size! Common mistake, don't sweat it.

What it can't do is magically transcend its input, which is what people are implying when they say "strong AI." Any problem the computer solves would have already been latent in the algorithm design, it just needed computation and comparison to determine which route to take.

Yes, a program cannot outstrip its capabilities (that's what capability means) but it can find an answer to a question that the questioner doesn't already know. Frex - there's absolutely no frontloading required, and it can create 'patterns' if you will that are entirely unpredictable and hugely complex (blah blah blah wolfram). If sentience is not just a similar problem, you need to describe (convincingly) why it has such special status. Does sentience really 'magically transcend' its input in a different way to a genetic piece of code? What is that distinction?

afu: yes
posted by Sparx at 5:57 AM on April 20, 2009 [1 favorite]


...but I've read of processors that consist entirely of reprogrammable logic gates, where the gates are constantly reconfigured by the stream of input.

FPGA
posted by odinsdream at 7:03 AM on April 20, 2009 [1 favorite]


edheil: Last night my wife and I were talking about how back in the fifties, behaviorism was the ruling paradigm of the day, and behaviorists tended to think of both humans and animals as extremely simplistic machines, and to explain away all of what we think of as being "human" as illusion. A serious thoroughgoing behaviorist a la Skinner didn't really even believe in "thinking"; he described "thinking" as positively reinforced subvocalizations.

Unfortunately I don't have the issues in question, but when Pinker published his extremely sloppy strawman attack on behaviorism in The Blank Slate (at a time when Pinker was getting a feature in The Skeptic or Skeptical Inquirer seemingly every other month), people who had *gasp* actually read Skinner stepped up to the plate and produced multiple statements showing that Skinner had an inkling that something more was going on.

The basic problem was that without the technology that we have today to interrogate exactly what is going on in the human brain as we think, we can't really say much about thinking without looking at behavior. And thinking can certainly be described as a behavior, much like picking our nose. Behaviorism may have taken the methodological limits a little too far, but it was pretty essential to get past the cruft of psychoanalysis, which proposed structures for how the brain works based largely on the idea that they simply sounded good.

In general: I'll make the argument that AI exceeding human capability currently exists, it's just not very interesting on human terms. IMNSHO a pretty basic mistake made by advocates on both sides is in treating human cognition as localized in the brain, rather than distributed throughout the body and including other people and tools.

Heck, there are humans who would fail a Turing test.
posted by KirkJobSluder at 7:04 AM on April 20, 2009


...which led me to the article about the Anti-Fuse which reminded me in many ways of the permanent connections that neurons make during the early years of life.
posted by odinsdream at 7:09 AM on April 20, 2009


An interesting essay (to this outsider to artificial intelligence); an interesting discussion here.
posted by fantabulous timewaster at 8:32 AM on April 20, 2009


The discussion here made reading the (to my mind somewhat unimpressive) article well worthwhile.
posted by AdamCSnider at 8:40 AM on April 20, 2009


Coupla brief things before I dash off to the dentist:

First off, folks arguing aren't defining their terms regarding what they mean by intelligence here. Second off, for all the references to executing genetic code, there's been no demonstration of the "how," which leaves it feeling like question begging (and I'm totally granting the determinist and materialist assumptions). Third, it would probably be good for the AI proponents here to look into arguments over whether Koko is using language. It's another map/territory question, but one with a fairly robust set of questions and arguments.
posted by klangklangston at 9:00 AM on April 20, 2009


Fishbike - your link is borked. Seems interesting - could you repost?

Hmm, I could swear I even tested that one before posting. Anyway here is the link to the interview with Penrose again. I just tested it in the preview window and it goes to the right place, so if it still doesn't work maybe there is some prohibition on linking to that site or article.

I'm also enjoying the conversation that resulted from the (to me) almost unreadable article. I'm gathering, from the number of people who've posted that The Emperor's New Mind is not all that great, that the reason why the case presented in it never really "clicked" with me might be that the case is not actually made well in that book. Not that I'm just not clever enough to understand it.

The interview with Penrose that I linked to puts the case a bit more simply and it still doesn't click with me. Specifically, smoking-gun proof of a human mind "solving" a non-computable problem seems to be missing. He tries coming at it from a different angle that to me does not seem to demonstrate what he thinks it does. The whole thing is a bit of a catch-22, because how can he prove (in the sense of a rigorous mathematical proof) that such a problem has been solved correctly, when it's already been proven that no such proof can be written?

It seems a lot more likely that a human mind can incorrectly prove that, for example, a given set of polyomino shapes can tile an infinite plane. And I can write a computer program that prints "yep, it can" and it will be just as incorrect a proof.
posted by FishBike at 10:05 AM on April 20, 2009
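FishBike's point about trivial "proofs" can be made concrete. A sketch in Python, using 1x2 dominoes as a stand-in for general polyominoes (an assumption for illustration; the infinite-plane question for arbitrary polyomino sets is the undecidable one). It contrasts a program that merely asserts an answer with one that can only ever verify finite boards:

```python
def trivial_prover(shapes):
    # Exactly as much a "proof" as any unchecked human claim:
    # it just asserts success without doing any work.
    return "yep, it can"

def tiles_rectangle(w, h):
    """Backtracking check: can 1x2/2x1 dominoes exactly cover a w x h grid?"""
    grid = [[False] * w for _ in range(h)]

    def solve():
        # Find the first uncovered cell in row-major order.
        for y in range(h):
            for x in range(w):
                if not grid[y][x]:
                    # Try a horizontal domino.
                    if x + 1 < w and not grid[y][x + 1]:
                        grid[y][x] = grid[y][x + 1] = True
                        if solve():
                            return True
                        grid[y][x] = grid[y][x + 1] = False
                    # Try a vertical domino.
                    if y + 1 < h and not grid[y + 1][x]:
                        grid[y][x] = grid[y + 1][x] = True
                        if solve():
                            return True
                        grid[y][x] = grid[y + 1][x] = False
                    return False  # first uncovered cell can't be covered
        return True  # no uncovered cells: fully tiled

    return solve()

print(tiles_rectangle(2, 3))  # True: 6 cells, tileable
print(tiles_rectangle(3, 3))  # False: odd number of cells
```

The brute-force check is genuine but bounded: it can only ever settle finite regions. Nothing here decides the infinite-plane case, and the one-liner decides nothing at all.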


Second off, for all the references to executing genetic code, there's been no demonstration of the "how,"

The "how" seems pretty well-defined. Consider the simple example of DNA replication. Physical laws dictate how the molecules attach together. Other molecules attach to the DNA strand and progress down it unzipping it and creating identical copies. Laws of physics cause this to happen. It's the "why" that would be up for debate, but I would argue that there is no why - it's implicit in the how.

Ascribing motives to the enzymes that perform the replication seems misguided.
posted by odinsdream at 11:22 AM on April 20, 2009


Ascribing motives to the enzymes that perform the replication seems misguided.

I thought it was pretty well established in the literature that they do it out of spite.
posted by Sparx at 11:35 AM on April 20, 2009 [1 favorite]


"Computers are actually extremely self aware. What is a Blue Screen of Death if not a complete description of how exactly something has gone horribly wrong internally."

Ooh, I can make a piece of paper that is "extremely self aware" according to your standards then!

I can write "I have been exposed to heat" in lemon juice on blank paper, and hold it up near a flame, and marvel as the words appear and demonstrate its self-awareness.

Brilliant!
posted by edheil at 11:40 AM on April 20, 2009


So just to be clear - I am not an "AI proponent". Like I said above, I wouldn't care if it's actually impossible for humans to make a computer that works like a mind - that would not prove to me that a mind does not work like a computer.

ornate insect: Consciousness is irrelevant to a discussion of brains? Consciousness is not an irrelevant "feature" of brain activity: it is brain activity.

That completely contradicts my understanding of the idea of "consciousness". I thought that consciousness isn't a behavior and isn't something that can be measured in any way. Nor is it some sort of biological process.

But even besides that, are you saying that everything that has a brain has consciousness? What are you basing this on?

You don't actually believe other people are unconscious do you, and that you are the only person with consciousness? To draw such a conclusion, one would have to disregard Occam's Razor and believe that whenever two or more people witnessed a shared event they were actually victims of a persistent collective group hallucination or illusion.

What the heck does that mean? Why would two p-zombies witnessing a shared event require a hallucination or illusion?

Of course I consider it possible that other people are p-zombies... what evidence to the contrary could I possibly have, if consciousness isn't a behavior? And if an automaton, whether electronic/artificial/etc or biological, could behave exactly like a mind with consciousness, without actually having consciousness - and if we have no idea what produces or causes consciousness - how do you know that the thing that produces consciousness in humans doesn't go wrong at least some times, and hence some of the people you meet are p-zombies?

Sure, it's unpleasant to think of the possibility of anyone being a p-zombie, particularly a loved one. Kind of like it's unpleasant to consider the possibility that there isn't an afterlife and your loved ones might cease to exist when they die.

That other humans have consciousness seems an uncontroversial position, p-zombies notwithstanding.

Oh, it's uncontroversial. Well, QED that minds are not like computers, then. *applause*

You basically just said, "The world is flat," and let that stand for a proof that there are no p-zombies anywhere.

graymouser: Nope, certainly not the magic that "strong AI" proponents are talking about. A genetic algorithm would be able to do exactly what I said a rational agent program could conceivably do: debug itself. With a sufficiently complicated rule set, this could be quite valuable. The agent could determine that certain sets of rules are contradictory, or it could discover other, hidden rules that exist in the rule set. What it can't do is magically transcend its input, which is what people are implying when they say "strong AI." Any problem the computer solves would have already been latent in the algorithm design; it just needed computation and comparison to determine which route to take.

Yeah, I caught how you had to start treating "algorithm design" and "input" interchangeably there. That was exactly my point; like I said, humans also need input for their built-in rules to improve their built-in rules to the point where they behave the way other modern humans expect them to.

I wasn't responding to what some "strong AI proponent" strawman who isn't involved in this conversation was saying; I was responding to what you defined as "magic", which you are switching to calling "transcendence", and the things you have declared that computers cannot do.

The criteria you're tossing out do not describe any actual limit that distinguishes between what human minds can do and what computers can do; so what you're saying is simply an attempt to drag people into a discussion of science fiction, not any demonstration of "Why Minds are Not Like Computers."



Remember in the Victorian era post-Darwin when people - both religious and secular people, by the way - were ever so outraged at the notion that humans were related to other primates? And how learned men would scoff and dismiss any evolutionary-type theory out of hand because "Darwin proposes that his own grandfather is a monkey! Guffaw, guffaw, what ho!" The antipathy to evolutionary theory and refusal to discuss it was because it threatened people's self-image, not because of any rational reason. The same with early reactions to Freud's psychoanalysis.

Now this doesn't bear on whether or not minds are like computers - that could be true or not, Freudian psychoanalysis certainly turned out to be pretty far off the mark as a comprehensive theory. But the vehement denial of minds being anything at all like computers, the disinterest in seriously examining the question, and the strenuous avoidance of discussing whether even parts of the mind are like a computer as we saw in this "in-depth analysis" in the OP, is for the exact same reason as the Victorian cavils and quodlibets: because it threatens our self-image to consider that any parts of our own minds are akin to the control modules that run our microwaves and cars.
posted by XMLicious at 12:34 PM on April 20, 2009 [2 favorites]
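The limited self-debugging graymouser concedes above can be sketched concretely. A minimal sketch in Python (the condition/action rule format is invented for illustration): the agent can flag contradictory rules, but everything it "discovers" was already latent in the rule set plus the algorithm, so no transcendence of input is required or possible:

```python
def find_contradictions(rules):
    """Flag rule pairs that fire on the same condition but demand
    conflicting actions -- the kind of self-debugging a rational
    agent program could plausibly do."""
    by_condition = {}
    conflicts = []
    for condition, action in rules:
        if condition in by_condition and by_condition[condition] != action:
            conflicts.append((condition, by_condition[condition], action))
        by_condition.setdefault(condition, action)
    return conflicts

rules = [("light=red", "stop"), ("light=green", "go"), ("light=red", "go")]
print(find_contradictions(rules))  # [('light=red', 'stop', 'go')]
```

Note that the output is fully determined by the rules fed in plus the checking procedure; whether you call the rules "input" or "algorithm design" is exactly the interchangeability being argued about.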


it's unpleasant to think of the possibility of anyone being a p-zombie

Not unpleasant, but a rhetorical exercise that defies common sense and leads to all sorts of counter-intuitive absurdity. Maybe human beings are tortoises or Sharpie pens.

are you saying that everything that has a brain has consciousness?

Yes. What would be the point of having a brain if it wasn't conscious? If you can point to an animal with a brain that you think is unconscious, please do so.

I thought that consciousness isn't a behavior and isn't something that can be measured in any way. Nor is it some sort of biological process.

Consciousness is just the state of being sentient. It certainly is biological, unless you think non-sentient beings have it. That it's not a behavior or cannot be measured does not make it any less real. Why is quantification the final arbiter of reality?

It's not a terribly complicated concept: consciousness is the embodied sentience experienced and exhibited (through behavior, yes) by living beings with brains. With it comes the capacity for cognition. I'm mostly a non-eliminativist and physicalist functionalist: the mind is what the brain does, in all its neuronal, synaptic, and psychological complexity. Computers may exhibit something like consciousness (i.e. their ability to process information), but to draw from this comparison that computers actually possess consciousness and self-awareness does not seem warranted at this time. I'm open to having my mind changed, but I still think there may be some fundamental ontological difference here about what sentience is that cannot be overcome.

the vehement denial of minds being anything at all like computers

I'm not in vehement denial; I'm just pointing out that being like something (and I would say only partially like, at best) is not the same as being something.

Furthermore, the burden of proof that computers are self-aware and can think--in any meaningful sense of those words--is on those who are claiming it.
posted by ornate insect at 12:59 PM on April 20, 2009


"The "how" seems pretty well-defined. Consider the simple example of DNA replication. Physical laws dictate how the molecules attach together. Other molecules attach to the DNA strand and progress down it unzipping it and creating identical copies. Laws of physics cause this to happen. It's the "why" that would be up for debate, but I would argue that there is no why - it's implicit in the how."

Obviously, I was unclear: I was referencing arguments of intelligence arising from discrete biological functions.
posted by klangklangston at 1:37 PM on April 20, 2009


ornate insect: If you can point to an animal with a brain that you think is unconscious, please do so.

Sure, I'll demonstrate that your claim "all things that have brains are conscious, even the things people eat for lunch, and it's uncontroversial so it doesn't need to be examined" is silly. A mouse. Prove to me that a living mouse is conscious in a manner that makes it more like a human mind than any computer ever could be and that it isn't just a really complicated biological wind-up toy.

Consciousness is just the state of being sentient. It certainly is biological, unless you think non-sentient beings have it. That it's not a behavior or cannot be measured does not make it any less real.

Oh, and let me guess, sentience is the state of being conscious. Yes, quite real.

It's not a terribly complicated concept: consciousness is the embodied sentience experienced and exhibited (through behavior, yes) by living beings with brains.

And there we have it, just with three steps in the circularity: consciousness is sentience is having a brain and having a brain makes you conscious.

So your carefully implied claim that no computer could be capable of being "conscious" and therefore a human mind can't really, really be truly like a computer, simply means that computers don't have brains.

I'm not in vehement denial

No, you would fall under the "strenuous avoidance" category. In a thread entitled "Why Minds are Not Like Computers" you've tossed off a bunch of circular "common sense" proofs of things about "consciousness" and then said "even if minds are like computers, which can neither be confirmed nor denied and which I'm not going to examine at all, it totally doesn't matter to consciousness!"

It's just a bunch of rhetorical gymnastics that avoid discussion of any equivalence between human minds or parts of human minds and computers. That's why you're all focused on brains: because then if someone were to demonstrate that there's no reason why a computer couldn't exhibit your behavior that defines consciousness, you can say, "But they don't have brains! Totally, completely different thing from the human mind."

I'm just pointing out that being like something (and I would say only partially like, at best) is not the same as being something.

...which in your position on this is only serving as an assertion that computers aren't biological brains. Go ahead and talk about that all you want, and refute these people curiously absent from the current conversation who are evidently intent on proving that the laptop you're typing on is "self-aware and can think", but don't pretend it has anything to do with whether or not minds may be equivalent to computers and may be successfully and accurately analyzed by evaluating their functionality and internal operations under the assumption that they work like computers.
posted by XMLicious at 2:04 PM on April 20, 2009


XMLicious: Sure, it's unpleasant to think of the possibility of anyone being a p-zombie, particularly a loved one. Kind of like it's unpleasant to consider the possibility that there isn't an afterlife and your loved ones might cease to exist when they die.

Unless, of course, you are a Buddhist or atheist who finds non-existence to be preferable to most flavors of the afterlife. Eternal life stops looking like a wonderful and pleasant outcome once you start thinking about what kind of existence that might entail. (Or to reveal a personal weakness, I'd much rather believe my grandmother dead, than a hungry ghost or damned.)

Now this doesn't bear on whether or not minds are like computers - that could be true or not, Freudian psychoanalysis certainly turned out to be pretty far off the mark as a comprehensive theory. But the vehement denial of minds being anything at all like computers, the disinterest in seriously examining the question, and the strenuous avoidance of discussing whether even parts of the mind are like a computer as we saw in this "in-depth analysis" in the OP, is for the exact same reason as the Victorian cavils and quodlibets: because it threatens our self-image to consider that any parts of our own minds are akin to the control modules that run our microwaves and cars.

Oh, I'm not really convinced that this is the case. There certainly is no lack of AI research that attempts to probe computational models of human behavior through experimental test-beds such as robotics and simulations. These models tend to be rather limited, focusing for example on such autonomic responses as eye-tracking and habituation to stimulus.

The reason why the mind-as-computer metaphor has generally failed as a general paradigm came out of the realization that human-designed computers operated under radically different principles from evolution-designed nervous systems (of which brains are just a glorified elaboration). Pretty early on, theoretical developments in computer science focused on the creation of a general logic machine that could be given specific instructions. Animal cognition, to use a simple example of pathfinding in insects, involved specialized structures and algorithms that created generalized responses. The logic of the insect was as much in the structure of joints in the legs as in the neural networks that triggered their twitch.

But I'm both a believer in AI, and a believer that such an AI is likely to be even more alien and incomprehensible to us than our fellow primates, who I'll gladly argue deserve rights as sentient creatures. How would a solo AI who lives and breathes for the discovery of anomalous patterns among tax returns engage in a conversation with a social primate concerned with shelter, food and status? The AI singularity is bunk because it assumes that AIs compete with humanity for our ecological niche, rather than exist in a parallel niche that only casually intersects with ours.
posted by KirkJobSluder at 2:10 PM on April 20, 2009 [2 favorites]


Prove to me that a living mouse is conscious in a manner that makes it more like a human mind than any computer ever could be and that it isn't just a really complicated biological wind-up toy.

Well one either thinks Nagel is on to something about the what-ness [.pdf] of being a bat (or mouse), or one does not.

I have every reason to think a mouse brain is not unlike a human brain, only less complex, and part of that reasoning is called evolution.

I also think our brains are in some ways like computers (as I've repeated), and in some ways not, but that such a comparison does not mean that computers are themselves brains. Like brains? Surely. Actual brains? Less clear.
posted by ornate insect at 2:16 PM on April 20, 2009


Why would we believe that the computer simulation wouldn't have qualia? It would certainly report that it had qualia, what would be the basis on which to doubt it?

If it had qualia, then it wouldn't be a simulation of a mind -- it would be a mind.

Perhaps computers have qualia. Perhaps amoebae do. I don't know, and can't ever "know" in a "scientific" sense. But if a computer DID have qualia, then it certainly would not be a simulation, any more than you are a "simulation" of what would happen if your parents' genes combined.

I'm speaking from a conceptual standpoint here, not an empirical one.
posted by DLWM at 2:17 PM on April 20, 2009


XMLicious: If you're going to retreat to a position of hard solipsism, at least do so without being condescending to those who find your position counter-intuitive.

Further, can we all agree the following statements differ sufficiently:

The mind is like a computer.
A computer is like the mind.
A mind is a computer.
A computer is a mind.

I can certainly come up with ways in which a mind is like a computer, which makes statements like, "Like I said above, I wouldn't care if it's actually impossible for humans to make a computer that works like a mind - that would not prove to me that a mind does not work like a computer," irrelevant (and not only because they demand negative proof). I can think of ways that a hand is like a hammer, but that does not mean that a hammer is a hand, nor that a hand cannot work like a hammer. But a hand is not a hammer.

Now, I think that what people mean when they say that a brain is a computer is not that brains are the same thing as, say, what we're typing on now. But what they are saying is that the qualities that connect computers as a class are also applicable to brains. (And we'll ignore that brains and intelligence aren't exactly the same things, and intelligence and the mind aren't exactly the same things, nor the mind and consciousness.)

"Of course I consider it possible that other people are p-zombies... what evidence to the contrary could I possibly have, if consciousness isn't a behavior?"

Well, no, you don't. For a couple reasons—first, you don't consider yourself a p-zombie. Descartes gets a bad rap because he proved less than he thought he did, but his Cogito Ergo Sum is valuable here. You can assert that it would make no difference if you were a p-zombie, everything is entirely equivalent, but then you've defined p-zombies out of existence. You can assert that there's no way to know if you were one, which is true, but then you're making the same assumptions any solipsist does. Which is fine if you want to, but then I have little desire to communicate with anyone who doesn't legitimately believe in my existence. Finally, if you don't consider yourself a p-zombie, and you accept that qualia are real, yet you posit a p-zombie that is physically identical, lacking only qualia, then you're arguing that qualia aren't material, ergo you're a dualist. And you don't want to be a dualist, because then you couldn't be a dick about being a determinist and materialist.

Like I said, these are very old philosophical questions, and there aren't necessarily good answers. But without defining your terms, stating what you consider sufficient and necessary, and making your case based on that, it's just kinda bullshit.
posted by klangklangston at 2:28 PM on April 20, 2009 [1 favorite]


ornate insect - thank you for the link to that "what-ness" article, I'd heard the term before but I hadn't seen where it comes from. But is there anything in there that articulates some problem with asking "What is it like to be a computer?" or otherwise would draw a distinction between humans and mice and bats on one hand, things with brains I guess, and computers on the other? He talks about the what-ness of Martians, so he appears to consider what-ness quite generalized.

klangklangston: If you're going to retreat to a position of hard solipsism, at least do so without being condescending to those who find your position counter-intuitive.

Saying "maybe p-zombies exist" is equivalent to hard solipsism? Exaggerate much?

I do not think it's condescending to respond to claims supported by "it's common sense!" or "it's uncontroversial!" in the manner that I have. If anything I would actually say it's kind of condescending to advance things like that and expect other people to accept it, particularly accompanied by rejoinders like "It's not terribly complicated."

Now, I think that what people mean when they say that a brain is a computer is not that brains are the same thing as, say, what we're typing on now.

I would agree if you're referring to most of the people in this thread. But it appears to me that by making references to "those who think computers are self-aware and can think", in the present tense, ornate insect is attempting to implicitly frame the discussion as being about exactly those sort of computers, as a rhetorical tactic to present anything said by others about computers and minds as a comparison or analogy involving computers like a laptop.

Finally, if you don't consider yourself a p-zombie, and you accept that qualia are real, yet you posit a p-zombie that is physically identical, lacking only qualia, then you're arguing that qualia aren't material, ergo you're a dualist.

I've seen the word "qualia" thrown around quite a lot and seen it used to mean many different things. When I've made people pin down what they mean by that word in other conversations, what they've meant hasn't seemed to have much to do with the issue. But I haven't gone and tried to determine if there's a single synoptic view of what qualia are. If there is, or if there is a good definition that's relevant here which you think the people who have used that term above would accept, please feel free to lay it out.

Also - do you consider what you said above to be a demonstration that if I do not consider myself to be a p-zombie, this gives me reason to believe that no p-zombies exist anywhere? I wasn't clear on that.
posted by XMLicious at 6:09 PM on April 20, 2009


Finally, if you don't consider yourself a p-zombie, and you accept that qualia are real, yet you posit a p-zombie that is physically identical, lacking only qualia, then you're arguing that qualia aren't material, ergo you're a dualist.

If I'm understanding you correctly, then I am only entertaining the possibility of dualism if the people presenting consciousness as the necessary distinction between human minds and computers are doing so in a dualist fashion in the first place. If those presenting consciousness are talking about a material consciousness, then p-zombies would not be physically identical to individuals harboring a consciousness. In either case we still need to examine why the mind of a p-zombie cannot be like a computer and categorically claiming "p-zombies cannot exist!" is still dodging the question.
posted by XMLicious at 6:18 PM on April 20, 2009


Oh, and another thing: "p-zombies could not have minds" is another possible response, one which hasn't been offered yet, if you want to say that the behaviors or internal processes we associate with the mind can't occur without a consciousness. But this would require some explanation; it seems to me that a p-zombie as they're normally described would still be regarded as having a mind. If there were mind-readers, p-zombies would have thoughts that could be read. They would still have thoughts about all the kinds of things that an individual with a consciousness would think about, there just wouldn't be anything there experiencing it.

That's why the question of consciousness seems irrelevant to me: it's at most an incremental difference between a p-zombie and an individual with a consciousness. Pretty much all of the things that we prepend with the adjective "mental": mental state, mental activity, mental health (or mental disorder), et cetera, would still be there in a p-zombie. ornate insect's assertion that consciousness is the all-encompassing nature of what a mind is, that it's not an irrelevant "feature" of brain activity: it is brain activity appears purely rhetorical to me, an attempt to railroad people into haring off on a discussion of consciousness.
posted by XMLicious at 7:53 PM on April 20, 2009


I have not followed the twists and turns in the AI debate for quite some time, but I thought I had read recently that the claims of AI advocates (strong AI advocates? not sure) had changed over the years. What I remember reading was that the claim is no longer that computers will become intelligent in the way that human beings are intelligent, but that they will exhibit new kinds of intelligence. An example is the success of chess-playing computers: they beat us through means that are nothing like what people do, yet using those means, they can do something that we consider a marker for the presence of intelligence. So it's not that they duplicate human intelligence; it's that they can accomplish similar things by completely different means, and these accomplishments deserve the word "intelligence" as much as our own feats of brain power.

Am I making this up, or totally mis-remembering it, or is it a line of thought that's actually out there now? If so, it's a big change from the things AIers used to be saying (and apparently still are saying, at least in this thread).
posted by semblance at 8:37 PM on April 20, 2009


In either case we still need to examine why the mind of a p-zombie cannot be like a computer and categorically claiming "p-zombies cannot exist!" is still dodging the question.

I'm not sure I am following your argument, but as someone who believes that what makes the brain have a mind is its information processing abilities, I think that this also implies that p-zombies cannot exist. Therefore any entity that is processing the information in the same way must have the same mind, i.e. must have the same subjective experience, i.e. must have the same "qualia".

I think this is supported by the scientific evidence about why people have different subjective experiences. Color blindness, for example, is explained by the information processing abilities of the cones.

ornate insect's assertion that consciousness is the all-encompassing nature of what a mind is, that it's not an irrelevant "feature" of brain activity: it is brain activity appears purely rhetorical to me, an attempt to railroad people into haring off on a discussion of consciousness.

I don't think it is a derail, because buried in the paper there was an argument about the mind-body problem. As I see it, his argument was that we can never be sure that a scientific theory explains subjective consciousness, because it is impossible to objectively observe it. Therefore we will need to wait for a "complete" explanation, perhaps in the form of a theory of everything in physics.
posted by afu at 11:25 PM on April 20, 2009


What I remember reading was that the claim is no longer that computers will become intelligent in the way that human beings are intelligent, but that they will exhibit new kinds of intelligence.

Can't we just say that computers run on human intelligence? That human intelligence is necessary for the design and operation of computers, and that computers are just using human intellect? Then there's nothing artificial about it!
posted by abc123xyzinfinity at 8:19 AM on April 21, 2009


yea :P

btw, re: reprogrammable logic gates, i found this slashdot comment about evolutionary algorithms on FPGA devices fascinating... digital lamarckism!?
posted by kliuless at 10:56 AM on April 21, 2009
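The core of that evolutionary-hardware idea can be sketched in a few lines. A toy sketch in Python (a 4-entry lookup table stands in for one reconfigurable gate, and XOR is an arbitrary target; both are assumptions for illustration, not the actual FPGA experiment). Random mutation plus selection "programs" the device without any designer specifying how:

```python
import random

TARGET = [0, 1, 1, 0]  # XOR truth table for inputs (0,0),(0,1),(1,0),(1,1)

def evaluate(genome):
    # Fitness: how many rows of the truth table the genome gets right.
    # The genome plays the role of an FPGA lookup table's config bits.
    return sum(g == t for g, t in zip(genome, TARGET))

def evolve(seed=0):
    rng = random.Random(seed)
    genome = [rng.randint(0, 1) for _ in range(4)]
    while evaluate(genome) < 4:
        child = genome[:]
        child[rng.randrange(4)] ^= 1  # flip one "configuration bit"
        if evaluate(child) >= evaluate(genome):
            genome = child  # keep mutations that do no worse
    return genome

print(evolve())  # converges to [0, 1, 1, 0]
```

The loop only ever exits once the genome matches the target exactly, which is also why the result, however it was reached, is opaque in the way the linked comment describes: nothing in the process explains *why* the final configuration works.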


I'm not sure I am following your argument, but as someone who believes that what makes the brain have a mind is it's information processing abilities, I think that this also implies that p-zombies cannot exist.

I think that would mean that consciousness is just information processing, wouldn't it? That doesn't seem so to me; but in any case, if someone means "consciousness is not like a computer" they ought to simply say that and damn well not do all sorts of legerdemain to try to make it look like having a mind or having a brain is the same thing as having consciousness, much less try to make it look like that's indisputably so with arguments like, "it's uncontroversial."

I don't think it is a derail,

I'm pretty suspicious. I agree that the issue of consciousness is ineffable and mysterious and not even close to being resolved, but that does not mean that it's a one-size-fits-all answer to every philosophical problem the way it gets used. My experience is that people who are enthusiastic about the consciousness debate try to turn every issue into the same question and want to make it as though nothing else can be resolved until their pet issue is addressed, and as you can see above get a bit, er, vigorously polemical let's say to be polite, even when it is addressed in the terms of that debate.
posted by XMLicious at 2:13 PM on April 21, 2009


So much of it depends on exactly what you mean by "like." If you say that brains are like computers in the same literary-simile sense as "shall I compare thee to a summer's day? / thou art more lovely and more temperate," then there isn't much room for debate, as it's only a clever turn of phrase.

However, if you mean "like" in such a way as to imply that the same theoretical understandings that we have about abstract turing-complete computational puzzles directly apply to the messy and hormone-driven wetware of animal nervous systems, then you have a pretty whopping burden of proof to overcome.

I'll just point out that while abstract turing-complete computational puzzles have done a fair job of simulating biological wetware, they also do a fair job of simulating car crashes, thunderstorms, mechanical robots on Mars, and market economics. And yet we don't fall into the fallacy of assuming that, because computers do a fair job of simulating the phase change of water in the atmosphere, the phase change of water in the atmosphere in turn governs the way we build computers. And most people who struggle to build simulations that don't devolve into absurdity after a few iterations are well aware that there is a theoretical mismatch problem that needs to be hacked.

Or to look at it from another angle, it seems to be a common fallacy among the sciences to see superficial similarity and go, "ah, ha! these two things must be guided by the same fundamental forces!" Skinner made a big mistake abstracting behavioral psychology to socio-political systems. Dawkins made the same mistake with memetics. For the revelators of singularity, it's become more natural than critical thought.
posted by KirkJobSluder at 9:35 PM on April 21, 2009 [1 favorite]


"Saying "maybe p-zombies exist" is equivalent to hard solipsism? Exaggerate much?"

I was reacting to: "There's no way to know that other minds are conscious, p-zombies et cetera et cetera."

"I've seen the word "qualia" thrown around quite a lot and seen it used to mean many different things. When I've made people pin down what they mean by that word in other conversations, what they've meant hasn't seemed to have much to do with the issue. But I haven't gone and tried to determine if there's a single synoptic view of what qualia are. If there is, or if there is a good definition that's relevant here which you think that the people who have used that term above would accept, please feel free to lay it out."

In part that's because "qualia" is hard to pin down. Qualia are subjective experiences, e.g. redness. Or pain, or a particular sense experience. What they are, how they function, what can be inferred from them… can be debatable. They can also be states of consciousness.

I'll get back to qualia re: p-zombie in a minute when I get to your next comment, but qualia are here because they're part of the mind-body problem (really, they are the mind-body problem), and a lot of artificial intelligence depends on assumptions about the mind-body problem.

"If I'm understanding you correctly, then I am only entertaining the possibility of dualism if the people presenting consciousness as the necessary distinction between human minds and computers are doing so in a dualist fashion in the first place. If those presenting consciousness are talking about a material consciousness, then p-zombies would not be physically identical to individuals harboring a consciousness. In either case we still need to examine why the mind of a p-zombie cannot be like a computer and categorically claiming "p-zombies cannot exist!" is still dodging the question."

The point of p-zombies is that they're physically identical, but philosophical zombies. Which means that consciousness would not be a purely physical fact, which means that physicality wouldn't account for all facts, which requires a non-physical explanation, which requires dualism.

"That doesn't seem so to me; but in any case, if someone means "consciousness is not like a computer" they ought to simply say that and damn well not do all sorts of legerdemain to try to make it look like having a mind or having a brain is the same thing as having consciousness, much less try to make it look like that's indisputably so with arguments like, "it's uncontroversial.""

Well, see, except that having a mind is the way that a lot of people phrase consciousness. And most folks see the brain as the physical seat of the mind. So it's not the jiggery-pokery of rhetoric, it's that these are loose terms and no one's doing a great job talking about them.
posted by klangklangston at 10:55 PM on April 21, 2009


"But I'm both a believer in AI, and a believer that such an AI is likely to be even more alien and incomprehensible to us than our fellow primates, who I'll gladly argue deserve rights as sentient creatures."

I think it's funny that you believe in AI in a similar way to how I believe in God.

"Or to look at it from another angle, it seems to be a common fallacy among the sciences to see superficial similarity and go, "ah, ha! these two things must be guided by the same fundamental forces!"

I think that's really true, and reminds me of when I was trying to argue to you that narrative conceptions of evolution could be valuable (versus cladistics).
posted by klangklangston at 10:59 PM on April 21, 2009


Well I can certainly agree that saying things like "minds are like computers" or "minds are not like computers" are statements that ought to be scoped.

However, suppose people making the 2nd statement are effectively saying this: it's possible to conceive of a computer, whether created by humans or only as a fantastically complicated un-realized machine, that could independently decide to take over the world, and think thoughts / algorithmically cogitate symbols that would equate to vanity by ruminating on the virtues of itself or its work, and so on and so forth, possibly even doing all those things in the exact same way a conscious human mind does; and yet, even though all of those things might be actuated and functioning in the same way in both the human mind and the hypothetical computer, the computer isn't a mind, and so the human mind isn't like it, solely because there are qualia or experience of these things in the human mind and not in the computer. That seems like axe-grinding or hair-splitting to me, and it seems like it would be motivated, as I said above, by a perceived threat of self-identification with your VCR. So when it's paired with a de rigueur assumption, un-examinable, that there can't exist any humans who don't have consciousness, it makes me even more suspicious.

Now if instead the idea is that there are mental faculties or mental phenomena that inalienably depend on consciousness, I can see such an argument being made. But if that's simply being implied in the assertions about consciousness it really appears to be missing a step to fail to specifically say that the hypothetical computer I described above, what I was equating to a human without consciousness, isn't possible and list some other things besides the qualia stuff that wouldn't work.

I should ask at this point: does the article in the OP actually discuss the mind-body problem as part of all this anyway? It appears to me to primarily say that digital computers require black boxes, which seems manifestly untrue to me (indeed, as a working software engineer, I often have the absence of black boxes in particular situations painfully hammered home) and is what got my hackles up in the first place. (And I'd also say it firmly underlines the peril of basing this sort of stuff entirely on theoretical models of computers and ignoring the way computers actually work and what they're made to do.)

The point of p-zombies is that they're physically identical, but philosophical zombies. Which means that consciousness would not be a purely physical fact, which means that physicality wouldn't account for all facts, which requires a non-physical explanation, which requires dualism.

Fuck. I thought I'd finally found a short-hand for the concept of a human without consciousness, which is usually difficult to explain and results in derails if you try to. Oh, well. Sorry if my use of the term has caused confusion.

Yeah, it's certainly always seemed to me that consciousness implies some sort of dualism, if I'm understanding that term correctly. When we've touched on determinism in past threads I think you've gotten the misapprehension that I have some attachment to materialism. I don't; I just don't regard the existence of any non-material things to require that those things aren't deterministic as I understand determinism. At least, not any non-material things that have been described to me so far (including consciousness.)

(And as we touched on the other time, it also appears to me that many people's interpretation of quantum phenomena as ruling out general determinism, rather than just ruling out determinism in classical physics, may be incorrect, although I'm not a physicist. But that's obviously an even more enormous discussion.)
posted by XMLicious at 12:56 AM on April 22, 2009


By "the way computers actually work and what they're made to do" in that parenthetical I meant "the things computers are caused to do" rather than "the purpose for which computers are created."
posted by XMLicious at 1:01 AM on April 22, 2009



