

Nagel on the Materialist Neo-Darwinian Conception of Nature
March 19, 2013 7:02 AM

Andrew Ferguson explains and defends eminent philosopher Thomas Nagel, who has been stirring up outraged refutations (e.g. here or here) with his new book Mind and Cosmos: Why the Materialist Neo-Darwinian Conception of Nature Is Almost Certainly False. Also in the defense column is philosopher Edward Feser's extensive series on Nagel's book.
posted by shivohum (163 comments total) 19 users marked this as a favorite

 
That azalea outside the window may look red to you, but in reality it has no color at all. The red comes from certain properties of the azalea that absorb some kinds of light and reflect other kinds of light, which are then received by the eye and transformed in our brains into a subjective experience of red. And sounds, too: Complex vibrations in the air are soundless in reality, but our ears are able to turn the vibrations into a car alarm or a cat’s meow or, worse, the voice of Mariah Carey.

Oh, dear God. 'In reality'. The man is an idiot.
posted by unSane at 7:16 AM on March 19, 2013 [2 favorites]


Philosopher of biology Elliott Sober's op-ed in the Boston Review is a good, non-outraged critique.
posted by Beardman at 7:17 AM on March 19, 2013 [3 favorites]


I believe Mariah has actually defended herself against the noumenal critique: "Baby it's for real, for real, for real / Baby I'm for real."
posted by Beardman at 7:25 AM on March 19, 2013 [16 favorites]


Oh, dear God. 'In reality'. The man is an idiot.

Read the surrounding context a little bit closer. "Reality" is taken to mean a shift towards a causal, science-based perspective of things.
posted by kuatto at 7:28 AM on March 19, 2013 [1 favorite]


If the debate is just wondering where human consciousness came from, it might help to consider that when an animal compulsion system is locked in indecision (as our primitive brains might have been on a daily basis roaming around encountering new possibilities) then it must first become aware that it may adopt the best way by consciously weighing the risks. Therefore we become aware of ourselves in order to understand the significance of the task in order to succeed. So the imagined soul exists somewhere between freedom and nature, which is analogous to an urge to be free.
posted by Brian B. at 7:35 AM on March 19, 2013 [1 favorite]


If this is a defense of the book then good lord a takedown must be brutal.
posted by Shutter at 7:36 AM on March 19, 2013 [2 favorites]


Thanks for this. I look forward to reading this back and forth when I get the time.

My anticipation is that this is going to get political (ahem... The Weekly Standard?): the right-wing theocrats will attempt to take Nagel's critique of materialism and append "therefore, Christian fundamentalism is true!" to it.

As much as I dislike that inevitable spin, however, I don't think this is Nagel's point (but I'm no Nagel expert--just a dilettante). I have always agreed with his most basic argument (from "What is it like to be a bat?") that subjective experience is absent from the materialist account. I have never been able to buy the idea that we can get around this absence by merely stating that this subjectivity is an emergent phenomenon found in sufficiently (and/or properly designed) complex systems. The problem is, this is utterly without any support, either theoretically (no mathematical model ever made has subjectivity in it) or empirically (no one has yet made a machine that has become self-aware).
posted by mondo dentro at 7:36 AM on March 19, 2013 [10 favorites]


The naturalistic project has been greatly aided by neo-Darwinism, the application of Darwin’s theory of natural selection to human behavior, including areas of life once assumed to be nonmaterial: emotions and thoughts and habits and perceptions.

That's far from the definition of neo-Darwinism.
posted by Buckt at 7:37 AM on March 19, 2013 [4 favorites]


Neo-Darwinism insists that every phenomenon, every species, every trait of every species, is the consequence of random chance, as natural selection requires. And yet, Nagel says, “certain things are so remarkable that they have to be explained as non-accidental if we are to pretend to a real understanding of the world.”

Randomness is the absence of information and it is extremely rare in the universe. Natural selection, although propelled by pseudo or quasi-random events, is not itself random or an accident. It seems to me that there is some unexplained non-random reason why molecules arrange themselves into higher forms of complexity. There is a huge leap between that and a personal God that monitors my every thought and patiently awaits my arrival for His judgement and eternal torture - but the idea that the universe is completely random seems like the harder proposition to prove - and requires the largest dollop of faith.
posted by three blind mice at 7:40 AM on March 19, 2013 [2 favorites]


And then, reading on,

The workshoppers seemed vexed, however, knowing that not everyone in their intellectual class had yet tumbled to the truth of neo-Darwinism.

What the hell is this? A materialistic view of reality comes from physics, not evolution! They're just attacking evolution because they sound like the mystics they are if they attack atoms.
posted by Buckt at 7:40 AM on March 19, 2013 [2 favorites]


It's hard to criticize philosophers on their own terms because they mostly write to their own in-group. It looks and smells like English but they have precise meanings or allusions to other philosophers that the layman will not catch.

That said, having read two or three Nagel papers I feel comfortable saying: that man is a troll and I am inclined to be skeptical of any of his claims.
posted by pmv at 7:42 AM on March 19, 2013 [1 favorite]


Sufficiently advanced philosophy is indistinguishable from stoners.
posted by Artw at 7:42 AM on March 19, 2013 [13 favorites]


Related:

How An Algorithm Feels From Inside

and

Excluding the Supernatural

Another Yudkowsky quote:

During one conversation, I said something about there being no magic in our universe.

And an ordinary-seeming woman responded, "But there are still lots of things science doesn't understand, right?"

Sigh. We all know how this conversation is going to go, right?

So I wearily replied with my usual, "If I'm ignorant about a phenomenon, that is a fact about my state of mind, not a fact about the phenomenon itself; a blank map does not correspond to a blank territory -"

"Oh," she interrupted excitedly, "so the concept of 'magic' isn't even consistent, then!"

Click.

She got it, just like that.

posted by Human Flesh at 7:42 AM on March 19, 2013 [2 favorites]


I read Mind and Cosmos a little while ago, and it was the biggest piece of shit I'd encountered in a loooong time, and I regularly run by a dog park.

The prima facie impression, reinforced by common sense, should carry more weight than the clerisy gives it.

That -- that right there -- is the central problem of the book. It spins a "what if" out of Nagel's "common sense," which is more or less, "Wow, mind is really special! Don't you think so? I sure do! Anyway it's probly, like, magic or something, 'cause mind is SO awesome."

our everyday understanding


Go fuck yourself with your "our", Ferguson.
posted by Greg Nog at 7:43 AM on March 19, 2013 [4 favorites]


I have never been able to buy the idea that we can get around this absence by merely stating that this subjectivity is an emergent phenomenon found in sufficiently (and/or properly) complex system.
As I was reading through the linked articles I couldn't help thinking of the explanation of the (non-)self in the abhidharmic tradition, which seems to be pretty much the one you don't find convincing, but which nonetheless served a major philosophical-religious tradition for some centuries. I.e., I'm not sure exactly why this is necessarily a show-stopping problem (if I understand Feser's defence of Nagel, I don't think he makes the case that it is either). I may have missed the points being made entirely though.
posted by Abiezer at 7:50 AM on March 19, 2013 [1 favorite]


You can’t explain consciousness in evolutionary terms, Nagel says, without undermining the explanation itself. Evolution easily accounts for rudimentary kinds of awareness. Hundreds of thousands of years ago on the African savannah, where the earliest humans evolved the unique characteristics of our species, the ability to sense danger or to read signals from a potential mate would clearly help an organism survive.

So far, so good. But the human brain can do much more than this. It can perform calculus, hypothesize metaphysics, compose music—even develop a theory of evolution. None of these higher capacities has any evident survival value, certainly not hundreds of thousands of years ago when the chief aim of mental life was to avoid getting eaten. Could our brain have developed and sustained such nonadaptive abilities by the trial and error of natural selection, as neo-Darwinism insists? It’s possible, but the odds, Nagel says, are “vanishingly small.” If Nagel is right, the materialist is in a pickle.


True, the earliest human genitals made evolutionary sense. They allowed the propagation of genetic material through coitus, which led to an array of offspring, and some of those offspring -- the most fit -- then survived. So far, so good.

But look now at MY dick. It's utterly beautiful, and capable of so much more than the evolutionists would have you believe. It's aesthetically gorgeous, like a Da Vinci painting; it can plug up holes in drywall; it can even be used as a surface on which to balance potato chips. Truly, this phallus is a thing to marvel at! Is it possible that it arose sheerly by accident? Perhaps, but the odds are "vanishingly small" (unlike my super rad dick imo)
posted by Greg Nog at 7:54 AM on March 19, 2013 [29 favorites]


Like most attacks on materialism, both what I've read of Nagel's book and this article simply suffer from a lack of imagination. There are lots of ways to motivate love and consciousness and redness and all sorts of other things through fundamentally physical processes. People quickly forget that we're not just made of stuff but also patterns of stuff, and patterns of those patterns. There are lots of "things" in the world that aren't things. And some of those "things" are minds and emotions and colors.
posted by cthuljew at 7:56 AM on March 19, 2013 [14 favorites]


cthuljew, yes, but you are beginning to step outside of the reductive orthodoxy in that the way you experience those things, w/ regards to science, is not real (or at least not important to the truth of the matter).
posted by kuatto at 8:03 AM on March 19, 2013 [1 favorite]


The argument for irreducible complexity falls short when you note that the same qualities exist elsewhere in nature, in things we consider utterly mundane. The complexity is thus reducible. The human four-chambered heart is a marvelous pump, but so are the simpler hearts of reptiles, amphibians, and fish.

The more we learn about our primate cousins, along with animals from entirely different clades, the more it appears that practically everything we call "human" differs only in degree and not quality. Human exceptionalism becomes increasingly hard to justify.
posted by CBrachyrhynchos at 8:06 AM on March 19, 2013


Read the surrounding context a little bit closer. "Reality" is taken to mean a shift towards a causal, science-based perspective of things.

He begs his own question in the first paragraph. 'is taken', really, by whom?
posted by unSane at 8:14 AM on March 19, 2013


But look now at MY dick. It's utterly beautiful, and capable of so much more than the evolutionists would have you believe. It's aesthetically gorgeous, like a Da Vinci painting; it can plug up holes in drywall; it can even be used as a surface on which to balance potato chips. Truly, this phallus is a thing to marvel at! Is it possible that it arose sheerly by accident? Perhaps, but the odds are "vanishingly small" (unlike my super rad dick imo)

The funny thing is that you're actually right, after a fashion. Your preoccupation with your dick is adaptive. The question is why this preoccupation should be associated with pleasure, instead of with a mechanistic instinct that does not bring pleasure. And in fact you have the experience of asexual individuals, who do not feel sexual interest or pleasure, and engage in physiologically standard sexual activity (some masturbate simply because it annoys them not to) but do not frame their experiences of that activity as pleasurable.

The greedy-reductionist account is that these experiences are unimportant; the main thing is to find the causal framework for whatever can be observed or strongly inferred. Your account of your feelings don't matter because you're not trustworthy, or are to be consigned, Marvin Minsky style, to some degree of yet undiscovered physical complexity to figure out later. Of course, Don't Worry, We'll Figure It Out is not really evidence, but an ideological assurance. Of course, we can explain this stuff with a Gouldian spandrel, but that's not properly neo-Darwinian, though it is physicalist.
posted by mobunited at 8:29 AM on March 19, 2013 [3 favorites]


mobunited: Your account of your feelings don't matter...

Don't matter in the project of trying to construct pragmatically useful theories about the nature of reality. They may or may not matter when it comes to subjective choices like, "what shall I have to eat for breakfast."

It also strikes me as unfair to call out materialism for not considering subjectivity important or considering it untrustworthy when many rationalist approaches do likewise, being equally reductionist to a different set of axioms.
posted by CBrachyrhynchos at 8:53 AM on March 19, 2013


cthuljew: "People quickly forget that we're not just made of stuff but also patterns of stuff, and patterns of those patterns. There are lots of "things" in the world that aren't things. And some of those "things" are minds and emotions and colors."

Well put!
posted by brundlefly at 8:54 AM on March 19, 2013 [1 favorite]


Don't matter in the project of trying to construct pragmatically useful theories about the nature of reality. They may or may not matter when it comes to subjective choices like, "what shall I have to eat for breakfast."

Those things really, really matter though. It's not about breakfast. The Iliad is about lust above and beyond the physiology of men's penises and brain states that can be stuffed into reproductive strategies. Again, there are materialist ideas without these problems, but highly reductionist neo-Darwinism isn't one of them.

It also strikes me as unfair to call out materialism for not considering subjectivity important or considering it untrustworthy when many rationalist approaches do likewise, being equally reductionist to a different set of axioms.

Many of these do not avoid people's accounts of the world around them. Mind-alone Buddhist metaphysics is pretty much about people's accounts, and it's also reductionist.
posted by mobunited at 9:05 AM on March 19, 2013


The greedy-reductionist account is that these experiences are unimportant; the main thing is to find the causal framework for whatever can be observed or strongly inferred.

Perhaps your use of "greedy" implies that this is an intentional straw man, but a straw man it definitely is. The whole project of the modern neuroscientific/philosophical search for consciousness is the attempt to explain exactly how such subjective experiences arise out of objective chemicals jostling around inside a skull. Thomas Metzinger's Being No One takes perhaps the boldest attempt at answering this exactly, simultaneously reducing phenomenal events all the way to neurobiological ones and raising chemical ones all the way to the level of experience. It's all about supervenience, not dismissal. The idea isn't "we're just stuff and therefore none of this [emotion/experience/sensation] is real", but rather, "we're just stuff, BUT HOLY CRAP we're stuff that feels and experiences and senses, what the what?! We better have some damn clever ways of motivating this first person nonsense!". Far from being unimportant, these experiences are called the Hard Problem of Consciousness, which doesn't go away just because you've explained all the mechanisms of the brain (despite what Dennett says; for once, he's not thought about it quite carefully enough).
posted by cthuljew at 9:08 AM on March 19, 2013 [2 favorites]


Nagel is scared as shit but he's able to sound really calm about it.
posted by Pope Guilty at 9:13 AM on March 19, 2013 [4 favorites]


If the debate is just wondering where human consciousness came from, it might help to consider that when an animal compulsion system is locked in indecision (as our primitive brains might have been on a daily basis roaming around encountering new possibilities) then it must first become aware that it may adopt the best way by consciously weighing the risks.
That seems unlikely. Probably the stalemate would be resolved by one drive 'fatiguing' at a faster rate. I.e. if you see food in an unknown location, and you're both afraid and hungry, you might run out of the chemicals causing fear more quickly than the ones causing hunger. Self-awareness seems like the least efficient way to solve problems.

It may be that self-awareness is just a byproduct of a large brain that evolved partly to manage social situations. Social hierarchy is important in all ape societies, so if larger brains helped people attain higher social status in their groups, they may have been more socially successful. So if we've got these large brains specifically optimized to understand other humans, it stands to reason that they would also understand themselves as well.
Randomness is the absence of information and it is extremely rare in the universe.
No, you have that backwards: randomness is maximally info-packed. The more random something is, the more information it takes to describe it. The less random something is, the less information it contains.

So for example, a perfect spherical silicon crystal at 0 kelvin could be described entirely just by giving its size.

Also, random events happen constantly. Maybe even more frequently than non-random events, depending on whether you count virtual particle creation/destruction - and besides that, atoms of unstable isotopes (which randomly give off radiation particles) permeate the universe.
posted by delmoi at 9:13 AM on March 19, 2013
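One rough way to see the point that random data takes the most information to describe is to run a general-purpose compressor over random versus perfectly regular bytes. Compressed size only approximates description length, and the numbers below are from a quick sketch, not a precise Kolmogorov-complexity calculation:

```python
import os
import zlib

# Compare compressibility of random bytes vs. maximally regular bytes.
# Compressed size stands in for "how much information it takes to
# describe this" (Kolmogorov-style, approximately).
n = 100_000
random_data = os.urandom(n)    # pseudo-random bytes
uniform_data = b"\x00" * n     # perfectly regular bytes

random_compressed = len(zlib.compress(random_data, 9))
uniform_compressed = len(zlib.compress(uniform_data, 9))

print(f"random:  {n} -> {random_compressed} bytes")
print(f"uniform: {n} -> {uniform_compressed} bytes")
```

The random bytes barely shrink at all, while the regular bytes collapse to a tiny fraction of their size, because "n zero bytes" is a complete description.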


Those things really, really matter though. It's not about breakfast. The Iliad is about lust above and beyond the physiology of mens' penises and brain states that can be stuffed into reproductive strategies. Again, there are materialist ideas without these problems, but highly reductionist neo-Darwinism isn't one of them.

I wasn't aware that "highly reductionist neo-Darwinism" was a theory of literary criticism. As is usually the case, it seems that Darwinist critics are engaged in the naturalistic fallacy of applying both Darwinism and methodological materialism well beyond their explicitly claimed limits.

Many of these do not avoid people's accounts of the world around them. Mind-alone Buddhist metaphysics is pretty much about people's accounts, and it's also reductionist.

Buddhist metaphysics is fairly explicit about the premise that those accounts are empty, that is, they have no inherent essence beyond a set of relations with other things that, upon reflection, are equally empty.
posted by CBrachyrhynchos at 9:19 AM on March 19, 2013


No you have that backwards, randomness is maximally info-packed. The more random something is the more information it takes to describe it . The less random something is the less information it contains.

Heh, that's funny. I know that that's how information is defined since Claude Shannon and Kolmogorov and stuff, but you can sort of see why three blind mice said that, can't you? Because if something is really random, we can't get much information out of it — not with our brains, anyway. But if something is even a little non-random, we suddenly get tons of information — that is, a much higher proportion of the available information. Something completely non-random, we have perfect information about it. Almost paradoxical.
posted by cthuljew at 9:23 AM on March 19, 2013


The whole project of the modern neuroscientific/philosophical search for consciousness is the attempt to explain exactly how such subjective experiences arise out of objective chemicals jostling around inside a skull.

Sure. But this is still not even close to "explaining consciousness". It is, rather, about finding objective correlates to consciousness. I support this activity, certainly, and I also support as one working hypothesis the notion that consciousness is an epiphenomenon. The problem is, it is usually the only working hypothesis. It is far from a settled issue. There is absolutely no evidence (theoretical or experimental) to support the notion that subjective experience can arise from a large enough collection of "masses and springs". So when materialists (and I am one) claim that there is no need for anything outside of the current scientific framework to explain subjective experience, that is a metaphysical statement, not a scientific one.
posted by mondo dentro at 9:36 AM on March 19, 2013 [7 favorites]


Thomas Nagel is not crazy

The author of that piece also wrote recently about Hilary Putnam:

A philosopher in the age of science
posted by homunculus at 9:37 AM on March 19, 2013


As is usually the case, it seems that Darwinist critics are engaged in the naturalistic fallacy of applying both Darwinism and methodological materialism well beyond their explicitly claimed limits.
I actually have no idea what this is supposed to mean. By "Darwinist critics" do you mean the materialists?
Because if something is really random, we can't get much information out of it — not with our brains, anyway. But if something is even a little non-random, we suddenly get tons of information
Well, that's not really right. If you get X amount of information by observing something, and that X is enough to describe it, then we feel like we "know" it. But if it's not, then we feel like we don't understand it. I guess. But we didn't get "more" information from one or the other; it's just that maybe the ratio of information acquired to information that exists is higher? Or something?

It's hard to know what someone is actually talking about if they use a word to mean the opposite of what it actually means.
posted by delmoi at 9:37 AM on March 19, 2013


As I was reading through the linked articles I couldn't help thinking of the explanation of the (non-)self in abhidharmic tradition, which seems to be pretty much the one you don't find convincing...

Hey, can you expand on this a wee bit? I'm intrigued.
posted by mondo dentro at 9:39 AM on March 19, 2013 [1 favorite]


My anticipation is that this is going to get political (ahem... The Weekly Standard?): the right-wing theocrats will attempt to take Nagel's critique of materialism and append "therefore, Christian fundamentalism is true!" to it.

An Author Attracts Unlikely Allies

NY Times on the Critical Reception of Nagel's "Mind and Cosmos"
posted by homunculus at 9:39 AM on March 19, 2013 [1 favorite]


I actually have no idea what this is supposed to mean. By "Darwinist critics" do you mean the materialists

My apologies. Critics of Darwinism. Which, ok, Dawkins fairly clearly is guilty of the same with his degenerate semiotics, but that's such an obvious misapplication that it's not worthy of much criticism.
posted by CBrachyrhynchos at 9:44 AM on March 19, 2013


But we didn't get "more" information from one or the other, just that maybe the ratio of information acquired to information that exists is higher? Or something?

Right, like I said, we get a higher proportion of the information it is possible to extract. This is contrasting "information" in the mathematical sense ("it would take a lot of effort to describe this") vs. the intuitive sense ("that picture is just random static and doesn't show anything while this picture is a face of such and such a person"). I just think it's funny how the two concepts are related, but are really used in opposite ways from each perspective.
posted by cthuljew at 9:46 AM on March 19, 2013


Hey, can you expand on this a wee bit? I'm intrigued.
Think it's been touched on elsewhere in the thread ("Buddhist metaphysics"); AFAIK the abhidharma was a set of theories about consciousness as the subjective experience of dharmas - causally conditioned arising events - but since they are non-abiding, there is no abiding self. All gets fairly abstruse fairly quickly and I may well be misrepresenting it here.
posted by Abiezer at 9:54 AM on March 19, 2013


How An Algorithm Feels From Inside

That was the biggest bait and switch article title ever. I thought I was going to drive to work this morning finally knowing how an algorithm could have conscious experience, and all I got was some prattle about some people confusing 'acoustic vibrations' and 'auditory experience.' Sorry for the snark-ish comment, but I feel so cheated and disappointed now.

Thomas Metzinger's Being No One takes perhaps the boldest attempt at answering this exactly, simultaneously reducing phenomenal events all the way to neurobiological ones and raising chemical ones all the way to the level of experience. It's all about supervenience, not dismissal.

I need to read this book sometime. I don't think an explanation like this can be accepted as valid unless it can make novel predictions that preferably have a possibility of being verified someday. If Metzinger knows the neurobiological basis for the color red, it seems to me he should be able to tell us if more than just three primary colors are possible. At a minimum he should be able to answer whether cross-wiring outputs from the retina could cause us to see stop signs as blue and the sky as red, etc., which would seem to disprove externalism at least.

Steven Poole – On teleology
posted by Golden Eternity at 9:57 AM on March 19, 2013


I wrote a little blog post about Nagel's book a while ago: Materialism, Teleology, and Magic. I try to figure out what the "teleological laws" that will supplement or replace our materialistic understanding could possibly look like. Nagel's teleological laws thesis seems to me to go much farther than the welcome caution of "What is it like to be a bat?"

I think the book is kind of silly, but the zeal and anger of the responses seems to me somewhat misplaced. Materialism and "neo-Darwinism" do not need to be defended with the fervor of crusaders. They are good theories. Making every discussion into a skirmish in the war against the creationists dulls the spirit of inquiry. If we are warriors always we will not be good thinkers.
posted by grobstein at 9:57 AM on March 19, 2013 [4 favorites]


Golden Eternity: Alas, the book, despite its heft and density, doesn't (and can't) go into that much detail. It's coming at the question from a philosophical perspective, searching as much for the basic explananda as the strived-for explanans to be found in neuroscience. Consciousness research is a horribly muddled field right now, and still has at least a decade of sorting out to do before we can ask the right questions, much less get any real answers. But it's definitely worth reading, as it starts to give a very good impression of how we can possibly arrive at subjective experience without needing to invoke any sort of magical forces.
posted by cthuljew at 10:05 AM on March 19, 2013 [1 favorite]


Right, like I said, we get a higher proportion of the information it is possible to extract. This is contrasting "information" in the mathematical sense ("it would take a lot of effort to describe this") vs. the intuitive sense ("that picture is just random static and doesn't show anything while this picture is a face of such and such a person"). I just think it's funny how the two concepts are related, but are really used in opposite ways from each perspective.

Actually, your intuitive notion of information is formalized in machine learning as information gain (see for example). Basically, the idea is that you gain information by reducing the entropy of your beliefs about some state of interest. For example, if you're interested in "who am I looking at", your belief can be described as a probability distribution over all the possible people you could be looking at, and an observation gives you information proportionally to how much it reduces the entropy of that belief distribution (zero entropy is achieved when you are 100% certain that you are looking at a specific person, maximum entropy is when you have a uniform distribution over everyone, i.e., you believe that it is just as likely to be anyone). In that scenario, seeing an image of static would not give you any information about who you are looking at, whereas seeing a face gives you a lot of information.
posted by Pyry at 10:14 AM on March 19, 2013 [3 favorites]
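Pyry's description of information gain as entropy reduction can be sketched in a few lines. This is a toy example with made-up numbers for the "who am I looking at?" scenario, not any particular library's API:

```python
import math

def entropy(dist):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

# Belief over "who am I looking at?" among four candidates.
prior = [0.25, 0.25, 0.25, 0.25]        # uniform: maximum uncertainty (2 bits)

# A static image tells you nothing, so the belief is unchanged...
after_static = [0.25, 0.25, 0.25, 0.25]
# ...while a clear face concentrates the belief on one person.
after_face = [0.94, 0.02, 0.02, 0.02]

gain_static = entropy(prior) - entropy(after_static)  # 0.0 bits
gain_face = entropy(prior) - entropy(after_face)      # ~1.58 bits

print(f"gain from static: {gain_static:.2f} bits")
print(f"gain from face:   {gain_face:.2f} bits")
```

The static image yields zero information gain (the belief distribution is unchanged), while the face removes most of the 2 bits of initial uncertainty.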


From the "Thomas Nagel is not crazy"
I could know everything there is to know about perception, but I’ll never know what it feels like to be colour-blind
I kind of disagree with this - it should be possible for a person with trichromaitc vision to imagine what it would be like without one of their color receptors.

Also here's the thing with true materialism: it may be the case that we could never describe with language what it truly 'feels like' to be a bat, but let's suppose that we reduce experience to electro/chemical signals flowing from one area of the brain to another. The question is then "where" does the actual sensation of experience occur? Does our experience of vision actually "happen" in the visual cortex? How do the other parts of the brain know what we are seeing?

Or what about sound? Suppose someone was deaf and couldn't hear anything - could they ever "feel" like they were hearing sound? Well, they might not be able to imagine it - but if the sensation of hearing is really all just electrochemical signals, then couldn't we in theory send electrochemical signals directly into that person's brain?

So what happens if you go up a level? Suppose a deaf person's auditory cortex never developed properly. What if you replaced the connections that other parts of the brain have to the auditory cortex, and sent the data along that way? Would they still be "experiencing" sound, or would they simply be experiencing the experience of experiencing sound?

So, what if we could hook up a device that emulates the parts of the bat brain responsible for echolocation?

Could we then directly cause the sensation of experiencing echolocation in a person's brain?

So frankly I think this critique about not being able to know what it feels like to be a bat is wrong. Materialism may not be able to describe what it feels like to be a bat - but in theory, if we learn more about the brain, we may be able to directly inject experience into the brain, including the knowledge of what it feels like to be a bat.

We may also someday isolate, extract, copy, and install engrams in people's minds. If we could do that, we could directly transfer knowledge from one person to another, including experiences that can't be communicated with language. Or we might even be able to take memories from animal brains and put them in ours.

So it's wrong to say that from the materialist perspective we can never know what it feels like to be a bat, or color blind, or whatever. At least not in a hypothetical sense. Under the materialist view of the world we may not be able to describe what it feels like to be a bat, but in theory we might be able to directly inject knowledge or experience into the brain (that doesn't mean it would ever be technologically possible).

One interesting consequence of this is that you end up reducing experience itself – one part of the brain experiences something, another part experiences the fact that the other part is having an experience. It all feels like one thing now, but perhaps people would be able to tell the difference between those two types of experiences, or have one without the other?
That was the biggest bait and switch article title ever. I thought I was going to drive to work this morning finally knowing how an algorithm could have conscious experience, and all I got was some prattle about some people confusing 'acoustic vibrations' and 'auditory experience.' Sorry for the snark-ish comment, but I feel so cheated and disappointed now.
Well, you didn't read it carefully enough - because he did describe how it feels to run an algorithm. Namely the classification algorithm brains use, which in some cases glitches out, causing the sensation of paradox when none actually exists. That sensation is the feeling caused by the algorithm.

From the materialist perspective, we already know what it feels like to be a computer, because our brains are computers.
posted by delmoi at 10:43 AM on March 19, 2013 [2 favorites]


("that picture is just random static and doesn't show anything while this picture is a face of such and such a person")
That's not really true. First of all, look at a totally blank image. There's hardly any information there and it would be quite boring to look at.

On the other hand, if there were four or 16 random pixels, it would look more interesting - because there's more information. As you add more and more pixels it starts to look like less and less, but only because you're not looking closely. If you stare at an image like this or this for a while you'll start to see things - we 'know' that there isn't any 'meaning' behind it, so usually we don't care. But it's like finding clouds that look like things, if you look at it long enough your brain will start finding patterns, recognizing shapes, etc.
posted by delmoi at 10:57 AM on March 19, 2013
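delmoi's point about blank versus random images is really a claim about information content, and it can be checked with a quick sketch (not from the thread; the `shannon_entropy` helper and the flat lists of 8-bit pixel values are illustrative assumptions, not real image handling):

```python
import math
import random

def shannon_entropy(pixels):
    """Estimate entropy in bits per pixel from the value histogram."""
    counts = {}
    for p in pixels:
        counts[p] = counts.get(p, 0) + 1
    n = len(pixels)
    # each term is -p(v) * log2 p(v); a single-valued image sums to 0
    return sum(-(c / n) * math.log2(c / n) for c in counts.values())

blank = [128] * 4096                                  # a totally blank image
noise = [random.randrange(256) for _ in range(4096)]  # random static

print(shannon_entropy(blank))  # 0.0 bits/pixel: nothing there to see
print(shannon_entropy(noise))  # close to the 8-bit maximum
```

The blank image carries essentially zero information per pixel while the static sits near the 8-bit ceiling, which is the sense in which "more random pixels" means "more information."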


So frankly I think this critique about not being able to know what it feels like to be a bat is wrong. Materialism may not be able to describe what it feels like to be a bat - but in theory if we learn more about the brain we may be able to directly inject experience into the brain including the knowledge of what it feels to be a bat.

Yeah, this is the kind of circularity we end up with when it comes to challenges to qualia.

"Qualia is silly, because it can't be backed up with physical evidence. We should be able to reproduce it physically, even though we've never done it, but unverified physicalist theories of mind say we can!"

I'm not a dualist, but come on.
posted by mobunited at 11:02 AM on March 19, 2013 [3 favorites]


"Qualia is silly, because it can't be backed up with physical evidence. We should be able to reproduce it physically, even though we've never done it, but unverified physicalist theories of mind say we can!"
It seems like all possible theories of a physical, material mind say it should be possible.

If you're not a dualist, then where do qualia exist? Are they points? Is there one central location in the brain where experience happens? Or are they composed of parts and happen all over the brain?

If I see red, part of my brain actually receives the signal from my retinas - but another part of my brain gets a signal from my visual cortex informing it that I'm now seeing red. Is that all part of the same qualia?

If you think of the mind as a single point in space, it doesn't make sense to say something like "the experience of having the experience of seeing red", but if the brain is composed of parts you can say "the experience in the cerebral cortex of the visual cortex seeing red"

None of these non-material theories of the mind can ever be verified. At least this kind of thing could hypothetically be tested.
posted by delmoi at 11:26 AM on March 19, 2013


If you're not a dualist, then where do qualia exist?

I really hesitate to bring up my dick again, but
posted by Greg Nog at 11:30 AM on March 19, 2013 [6 favorites]


Well, you didn't read it carefully enough - because he did describe how it feels to run an algorithm. Namely the classification algorithm brains use, which in some cases glitches out causing the sensation of paradox when none actually exists. That sensation is the feeling of caused by the algorithm.

Doh. I kind of glossed over that part. To me there is a big difference between "what it is like to run an algorithm" and "how an algorithm feels." Especially when what is seemingly meant by the former is the sensations of confusion, understanding, remembering, not remembering, and whatever else happens while we are thinking. I think it is a stretch to say we know that we are just an algorithm and this is how algorithms feel.

If you think of the mind as a single point in space, it doesn't make sense to say something like "the experience of having the experience of seeing red", but if the brain is composed of parts you can say "the experience in the cerebral cortex of the visual cortex seeing red"

Location in time could be very difficult as well. If we are going to say the mind is just an algorithm, then what if we code up the experience of the color red and run it on a computer with no parallel processors (and with no dynamic memory or logic or whatever) and slow the clock down to one clock cycle per day? At what point in time does the algorithm experience the color sensation?

I wonder how my desktop is feeling right now.
posted by Golden Eternity at 11:56 AM on March 19, 2013
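Golden Eternity's slowed-clock puzzle can be made concrete with a toy sketch (everything here, including the `make_stepper` helper and the color threshold, is invented for illustration): a "red detector" run one instruction per call, where no single cycle looks any more like an experience than the others.

```python
RED = (255, 0, 0)

def make_stepper(pixel):
    """Build a detector that advances one 'clock cycle' per call."""
    program = [
        lambda s: s.__setitem__("r", pixel[0]),  # cycle 1: load red channel
        lambda s: s.__setitem__("g", pixel[1]),  # cycle 2: load green channel
        lambda s: s.__setitem__("b", pixel[2]),  # cycle 3: load blue channel
        # cycle 4: compare against a crude threshold for "red"
        lambda s: s.__setitem__("seen_red",
                                s["r"] > 200 and s["g"] < 50 and s["b"] < 50),
    ]
    state = {"pc": 0}

    def step():
        if state["pc"] < len(program):
            program[state["pc"]](state)
            state["pc"] += 1
        return state

    return step

step = make_stepper(RED)
for _ in range(4):
    state = step()  # imagine one of these calls per day

print(state["seen_red"])  # True, yet no individual cycle "saw" red
```

Run one cycle per day and the question stands out starkly: the detection only exists after cycle four, but nothing about any single dict update looks like a sensation.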


This kind of makes me want to invent neo-neo-Darwinism, a philosophy consisting of me standing on a street corner holding up a sign saying THE DINOSEAURS WILL RETURN AN EAT US ALL UP, MY TIME-TRAVLING NEIGHBOR TOL ME SO.
posted by JHarris at 12:20 PM on March 19, 2013 [1 favorite]


Especially, when what is seemingly meant by the former is the sensations of confusion, understanding, remembering, not remembering, and whatever that happen while we are thinking.
Well, what's interesting about that example is that essentially it's a glitch - the point he's making is that the algorithm itself is 'exposed' (at least according to his theory). It's like if you're playing a video game and you get lost in it, you don't think about the 'algorithm' until something goes wrong - like a bullet going through a wall when it's not supposed to.

He's saying that this glitch that manifests itself sometimes is a way that we can experience the sensation of the algorithm without it being associated with something that actually exists. At least that was my interpretation of what he was saying.
posted by delmoi at 12:21 PM on March 19, 2013


three blind mice: Randomness is the absence of information and it is extremely rare in the universe.

Not according to quantum mechanics.

Natural selection, although propelled by pseudo or quasi-random events, is not itself random or an accident.

Well, actually it is, throughout, inescapably. It's just fueled by a gigantic energy source (the sun) and given billions of years to operate. Those scales are so far outside our own that many non-obvious things may happen. That's why so many refuse to believe that evolution could happen.

- - - - -


“‘You,’ your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules. Who you are is nothing but a pack of neurons.”

Yes. But if each of those tiny molecules has a tiny, infinitesimal fraction of meaning, then in the aggregate they could work together, pool their meanings, and end up with a human being.

Color, for instance: That azalea outside the window may look red to you, but in reality it has no color at all. The red comes from certain properties of the azalea that absorb some kinds of light and reflect other kinds of light, which are then received by the eye and transformed in our brains into a subjective experience of red.

Actually, it does have color. Because the idea of color actually is that set of perceptions. Objectively speaking, color is only a narrow range of wavelengths of light. Our perception of it is an interpretation of it. But color has no other existence; seeing something as being "red" is a shared fiction. It has no meaning other than what we make of it, but we all agree on that meaning, and no "objective" entity like God exists to tell us different. We might somehow each see "red" as being a different color (whatever the heck that means), but our descriptions match up, so it doesn't matter.
posted by JHarris at 12:39 PM on March 19, 2013


Consciousness research is a horribly muddled field right now, and still has at least a decade of sorting out to do before we can ask the right questions, much less get any real answers.

Given that philosophy has largely chased its tail on this for most of the last 2000 years without coming to a consensus about what those questions are, I'm not expecting much of the next decade.
posted by CBrachyrhynchos at 12:51 PM on March 19, 2013 [3 favorites]


But the human brain can do much more than this. It can perform calculus, hypothesize metaphysics, compose music—even develop a theory of evolution. None of these higher capacities has any evident survival value, certainly not hundreds of thousands of years ago when the chief aim of mental life was to avoid getting eaten.

This sounds to me like saying that hands can be used to eat ice cream and there's no evolutionary advantage to that. I wouldn't be making claims about what humans did hundreds of thousands of years ago, especially if it weren't my field.
posted by ersatz at 1:06 PM on March 19, 2013


This is not so much related to Nagel as to the certainty that describes the atheist conclave in the Berkshires: I've not read all of the posts, but how does Kant's Critique of Pure Reason not provide an adequate response to modern materialists / determinists (rather than only a response to Hume, to whom the Critique was originally directed)?
posted by mangasm at 1:12 PM on March 19, 2013


So it's wrong to say that from materialist perspective we can never know what it feels like to be a bat, or color blind, or whatever [...] Under the materialist view of the world we may not be able to describe what it feels like to be a bat, but in theory we might be able to directly inject knowledge or experience into the brain (that doesn't mean it would ever be technologically possible).

delmoi, on what basis are you saying that in theory we might inject experience in the brain? Would this include a "technology" like LSD or psilocybin? Sure, I have no doubt that we will be able to perform objective interventions that induce subjective experience. That's what torture is. That's what advertising is. More positively, biomedical science and neural engineering programs are doing just that with things like deep brain stimulation. But such things still are treating the process by which subjective experience arises as a black box, which seems to fall way short of the matter at hand.
posted by mondo dentro at 1:25 PM on March 19, 2013


The fundamental illusion in scientific empiricism is always that it uses the metaphysical categories of matter, force, as well as those of one, many, universality, and the infinite, etc., and it goes on to draw conclusions, guided by categories of this sort, presupposing and applying the forms of syllogizing in the process. It does all this without knowing that it thereby itself contains a metaphysics and is engaged in it, and that it is using those categories and their connections in a totally uncritical and unconscious manner.--Hegel
posted by No Robots at 1:34 PM on March 19, 2013 [6 favorites]


If you're not a dualist, then where do qualia exist?

Who says qualia exists?
posted by PhoBWanKenobi at 1:51 PM on March 19, 2013 [1 favorite]


Vilayanur Subramanian Ramachandran says qualia exist
posted by kuatto at 2:03 PM on March 19, 2013 [2 favorites]


Where does language exist?
posted by Artw at 2:05 PM on March 19, 2013


Who says qualia exists?

I do, for one; I can look at this napkin and see the creases and wrinkles in it. I can also look at that same napkin and imagine that those creases are in the shape of Greg Nog's dick. These are two very different phenomena, very real, very common, and very different.

Materialism has great difficulty accounting for anything such as this, just as it does for anything like 'Mary's' experience of red, insofar as it is about the physical properties of what is there.

For I can assure you; I have never seen Greg Nog's dick; it is not there as a physical property of this napkin; nor are my eyes closed, merely phantasizing; I can imagine it, in and through the napkin, as through a painting. It is not physically there. Moreover, I do not turn attention to my mind in order to imagine it; I direct attention to the napkin in order to imagine it.

In order to consider or criticize Nagel's ideas, one has to be willing to take the import of such cases of intentionality (in this case, imaginative; in other cases, significative, memorial, evaluative, emotive, and so on) on board; otherwise, there is just talking past one another and the like.

Feser's critique of Leiter's review is excellent.
posted by rudster at 2:18 PM on March 19, 2013


Language? In my opinion, language exists in the (reflexive) relations between all things.
posted by kuatto at 2:20 PM on March 19, 2013


Possibly, like qualia, it might be considered to be doing something other than existing then.
posted by Artw at 2:24 PM on March 19, 2013 [2 favorites]


I do, for one; I can look at this napkin and see the creases and wrinkles in it. I can also look at that same napkin and imagine that those creases are in the shape of Greg Nog's dick. These are two very different phenomena, very real, very common, and very different.

Sure, but there are experiences like phantosmia and hallucinations which exist more in a gray space than simply "imagining"--experiences in which the brain is sometimes fooled into thinking that external experiences occur when none are present. I wouldn't say that one who has imagined smelling an orange knows what it is like to smell an orange, but a stroke victim might know what it's like to smell an orange without ever having smelled one. Which is one of many reasons I believe that perceptions of experiences happen in brains and not someplace else; different stimuli, either internal or external, can cause a nearly indistinguishable subjective experience.
posted by PhoBWanKenobi at 2:30 PM on March 19, 2013 [1 favorite]


..And yet we converse. The hidden substance of language notwithstanding
posted by kuatto at 2:47 PM on March 19, 2013


And my computer can interpret echo "hello world", I don't think we need to invoke dualism to make that happen.
posted by Artw at 2:50 PM on March 19, 2013 [2 favorites]


And never the twain shall meet.

But seriously, in what sense do you deny the existence of language?
posted by kuatto at 3:00 PM on March 19, 2013


It sounds to me like he's not denying the existence of language but the "hidden substance" of language.

As a physicalist, I don't know what it means when you say that language exists between things. Because there are only things.
posted by PhoBWanKenobi at 3:05 PM on March 19, 2013


This might not be what's happening here, but keep in mind that philosophers sometimes see value in playfully adopting bizarre positions for the sake of argument. It's not exactly trolling when that happens. Gremlining, perhaps.
posted by justsomebodythatyouusedtoknow at 3:32 PM on March 19, 2013


delmoi, on what basis are you saying that in theory we might inject experience in the brain? Would this include a "technology" like LSD or psilocybin? Sure, I have no doubt that we will be able to perform objective interventions that induce subjective experience. That's what torture is. That's what advertising is.
Well, if it can be done in practice, it can also be done in theory. I'm not talking about something like describing something to someone and having them imagine it. I'm talking about causing a specific and exact sensation - like the sensation that a bat feels while using echolocation. Or someone else's memory. There may be some crude ways to do that now, but not in a way that allows us to replicate experience with high fidelity.
Who says qualia exists?
Uh, the person I replied to? Before we can say whether or not qualia exist we would first need a specific definition of existence that does not depend on any experience. Have fun with that.
Materialism has great difficulty accounting for anything such as this, just as it does for anything like 'Mary's' experience of red, insofar as it is about the physical properties of what is there.
I don't think so, you just say that the neurons in Mary's brain that make her see red are firing. Maybe it doesn't describe it to your satisfaction.
Language? In my opinion, language exists in the (reflexive) relations between all things.
...

..And yet we converse. The hidden substance of language notwithstanding,


But seriously, in what sense do you deny the existence of language?
I don't think anyone is denying the existence of language, rather the original question was rhetorical to point out an example of something existing without a specific point of existence. If Qualia doesn't exist, then language might not exist either, while if Language exists, then so can Qualia. It depends on the definition of "exist".

Now, that said, how can we define existence without Qualia? You can say "I think therefore I am, and therefore there is existence," but how would you know you're thinking without the qualitative experience of your own thoughts?
posted by delmoi at 4:18 PM on March 19, 2013


Now, that said how can we define existence without Qualia?

I dunno. Ask Daniel Dennett.

As for the locus of "language": it seems the term language describes a process--of synapses firing, then being expressed via the hands or the lips, of those terms perceived by someone else's sensory receptors and then being broken down by their brains. None of that necessitates the existence of qualia--rather, it's simply a physical process that occurs in sensory receptors and brains.

If the point is that abstract principles (including qualia) "exist," well then sure. But only in our heads. Unicorns exist in my head, too. Doesn't make them real.
posted by PhoBWanKenobi at 5:00 PM on March 19, 2013


I actually bought Nagel's book based on the knee-jerk, dismissive reactions to it and the fact that I've been impressed by the previous work of his I've read. No idea if/when I'll get around to reading it, though.
I'm slightly sympathetic to what I imagine his position to be -- namely, that something like teleological causation might be real. For me, back when I was more involved in philosophy, this took the form of "wouldn't it be nice if I could say that I believe 2 + 2 = 4 because it's true?" (I think "moral truths" are a really shaky ground to build this kind of argument on, so I preferred to stick with simple math and logic.) It seems to be generally agreed, though, that "truth" cannot be a cause of things, and so we either need to give up on our naive idea that we believe things because they're true, or come up with (a) a good, thorough, mechanistic explanation of how we (often) come to believe true things, or (b) buttress the naive conception with something a little more woo-woo a la Nagel.
It sounds, though, like he's chosen some poor lines of argument, especially the stuff about evolution. I have pretty solid intuitions about what might be able to take place in a year, and I can tell that stone steps can be worn down over a few hundred years, and it's pretty clear that a deep canyon can form over many thousands of years, but I have *no freaking idea* what a few billion years of evolution might produce.
I also don't like his (reported) conclusion that science needs to change. There could be teleological causation without it falling into the sphere of things that science can or should concern itself with. For example, suppose God actually does intervene occasionally in the course of the natural world and work miracles. That could be the case ("could be" in the sense of "I can't prove it wrong" or maybe "we can assume it for the sake of argument") without science having anything at all to say about it. Of course, miracles are generally about the laws of nature being broken, whereas I think what Nagel is talking about is more like things tending to go a certain way in cases where the laws of physics don't dictate a specific outcome (which I guess would depend on things like quantum indeterminacy being quite real).
Again, since I've seen good and well-respected philosophers be pretty dismissive of decent ideas -- we all are, at times -- I hope I'll make the time at some point to read the book and evaluate it myself.
posted by uosuaq at 5:03 PM on March 19, 2013 [3 favorites]


I dunno. Ask Daniel Dennett.
Right, but you cannot say that something does or does not exist without a definition of existence.
If the point is that abstract principles (including qualia) "exist," well then sure.
Maybe your definition of Qualia is different, but my understanding is that they are not abstract, and are in fact the only non-abstract things which we can actually perceive. All other knowledge is derived from interpreting Qualia.
Unicorns exist in my head, too. Doesn't make them real.
Okay, instead of "real" let's say material i.e. existing in the material world, governed by classical or quantum physics.

The unicorns are not material. However, your perception of your own imagination is material. There are actual physical neurons that have to fire by sending out actual chemicals in order to create the image of the unicorn in your mind.

Similarly, Qualia are material in the same sense, neurons fire, they send the signal, and so on.

However, some people might argue that material things are somehow not "real" because we don't understand quantum physics; maybe it's all math down there and the physical/material world does not exist. Berkeley believed the material world didn't exist, only the mental, but I imagine he thought Qualia existed too - but if you go on to say, well, maybe perception is all fake, etc., then you might say Qualia isn't real.

That's why we need a definition of "Existence" before we can talk about whether it exists or not.

The most pointless arguments are the ones where people aren't even using the same words to mean the same thing.

Things that exist in your head don't exist outside of your head. However they do exist inside your head.

I obviously can't speak for all materialists, but I think the idea isn't that the mind does not exist, rather the material world is inside your skull as well as outside, and what goes on in your mind is a physical, material process.

Now, from my perspective, in order to say something is real it's a necessary condition to perceive some evidence for it. Qualia are self-evident in the sense that we perceive them directly, and they are their own evidence. All other things need to be "encoded" in Qualia for us to be made aware of them. Obviously perception isn't sufficient, since our perception can be tricked in lots of different ways.
posted by delmoi at 6:20 PM on March 19, 2013


I also don't like his (reported) conclusion that science needs to change. There could be teleological causation without it falling into the sphere of things that science can or should concern itself with. For example, suppose God actually does intervene occasionally in the course of the natural world and work miracles. That could be the case ("could be" in the sense of "I can't prove it wrong" or maybe "we can assume it for the sake of argument") without science having anything at all to say about it.
Yeah, that's the thing: I don't even think it's the case that science says life is improbable. Lots of people think the universe might be full of life. Also, Nagel doesn't seem to consider that consciousness could have been the result of selective pressure, or a side effect of something else. In fact, I learned from reading Elliott Sober's review that even Darwin himself came up with a theory for how consciousness might have evolved, which is pretty straightforward:
For instance, the co-discoverers of the theory of evolution by natural selection, Darwin and Alfred Russel Wallace, disagreed about how the human capacity for abstract theoretical reasoning should be explained. Darwin saw it as a byproduct. There was selection for reasoning well in situations that made a difference for survival and reproduction, and our capacity to reason about mathematics and natural science and philosophy is a happy byproduct. Wallace, on the other hand, thought that a spiritualistic explanation was needed. Nagel finds Darwin’s side effect account “very far-fetched,” but he does not say why.
I think one thing people forget about evolution is that we're not just trying to survive in some neutral world, we are sharing that environment with other organisms that also evolved. Not just plants and animals but other humans as well.

So you end up with all kinds of feedback effects where A causes B and then B causes A again.

When you look at apes, they are social and have social hierarchies, and the animals at the top of the hierarchy have more opportunity to mate. So naturally, the ones that were best at navigating social situations would be more likely to pass on their genes. As would the ones who were able to get the most help from others. I've actually heard some people say the reason apes have some of the sharpest vision is so we can do a good job of reading the emotions of others.

In order to do well socially, you need to be able to model the behavior of others and figure out what they want. We have mirror neurons specifically there to help us understand what other people/animals are feeling. And there could have been a mate-selection feedback effect: people who sought out intelligent mates because they were the most successful would pass on their genes for seeking out smart people, and you end up with a runaway effect - until you run into physical limitations, like head size.

But anyway, if our brains evolved to understand other people, and the people around us are trying to understand us, that means we also need to be able to model ourselves in order to understand the behavior of other people towards us. Another person might fear us if we are violent, but in order to understand why they are afraid, we have to understand that they see us as violent.

There's also the question of how much our perception of consciousness is cultural. For a long time people had a religious belief in the soul, and they just thought consciousness was part of the soul. Even for non-religious people, the concept of consciousness and the 'self' permeates culture. If people didn't grow up with constant exposure to that idea, would it really still be self-evident?

It's possible the idea of consciousness is a deeply embedded cultural truth that can't be dislodged - the same way the idea of a single god was deeply embedded in European culture for a long time, and most people couldn't conceive of reality without god. But now it's been dislodged and they can.
posted by delmoi at 6:53 PM on March 19, 2013
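The runaway mate-selection feedback delmoi describes can be sketched as a toy model (a crude Fisherian-runaway-style simulation invented for illustration; the `run_generations` update rule and its parameters are assumptions, not rigorous population genetics):

```python
def run_generations(freq, preference, cap, generations):
    """Track the frequency of a preferred trait across generations."""
    history = [freq]
    for _ in range(generations):
        # carriers get a reproductive edge that grows with the trait's
        # own frequency - the feedback loop - until a hard cap is hit
        advantage = 1 + preference * freq
        freq = min(cap, freq * advantage / (freq * advantage + (1 - freq)))
        history.append(freq)
    return history

hist = run_generations(freq=0.05, preference=0.5, cap=0.99, generations=200)
print(hist[0], hist[-1])  # the trait ratchets up from 0.05 to the 0.99 cap
```

Because the advantage scales with the trait's own frequency, growth is self-reinforcing and accelerates until the cap (delmoi's "physical limitations") cuts it off, which is the shape of the runaway effect described above.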


Right, but you cannot say that something does or does not exist without a definition of existence.

It's not like I'm the first materialist to ever question the existence of "qualia," delmoi. Seriously, Dennett does.

I obviously can't speak for all materialists, but I think the idea isn't that the mind does not exist, rather the material world is inside your skull as well as outside, and what goes on in your mind is a physical, material process.

Right. Yes. Agree.

Qualia are self evident in that sense we perceive them directly, and they are their own evidence.

Disagree. Personally, I believe that what we believe to be qualia is just an illusion of electricity and synapses. So many of these arguments (Mary, etc.) beg the question--assuming qualia to exist because it seems self-evidently to exist, and then arguing backward from there. But what is seeming? It's nothing but more brainstuff.

Of course, it seems like extraspecial brainstuff (which is why arguments like Nagel's get all tied up with religion)--like Greg Nog's peen. But wanting it to be special doesn't make it special.

(We seem to be really mostly in agreement, though, so I'm not really sure why you're wall o'texting me.)
posted by PhoBWanKenobi at 7:19 PM on March 19, 2013 [1 favorite]


Personally, I believe that what we believe to be qualia is just an illusion of electricity and synapses.

Illusions imply that there's a difference between what seems to be the case and what is actually the case.

Whereas qualia are all about what seems to be the case regardless of what is actually the case (which is why hallucinations, for example, are qualia, despite having no relation to outside reality).

Qualia are pure seemingness. So how can they be illusions? They appear to be regardless of whether they are "right" or "wrong."

For example, it makes sense if I say that the sense we have that the sun seems to rise and set is an illusion. That indicates that an idea we have about the sun is wrong. The fact that we know it is an illusion does not make the seeming rising and setting itself go away; that appearance is a separate issue from whether it is wrong or right.

And that's what materialism will never be able to explain: the appearance.
posted by shivohum at 7:38 PM on March 19, 2013 [2 favorites]


It's not like I'm the first materialist to ever question the existence of "qualia," delmoi. Seriously, Dennett does.
Right, but what I'm saying is before we can even have a discussion about whether or not Qualia exist we have to have a definition of existence. You could define existence in such a way that they don't.

Like for example, you could say that in order for something to exist, there has to be a contiguous block of matter/energy in its shape. In that case, things that were composed only of arrangements of other material wouldn't "exist": DNA would exist, and proteins would exist, but not 'genes'. Lasers would not exist, even if their photons did. In that case, Qualia would not exist either.

It also depends on the definition of Qualia. If you define Qualia as the link between the material and mental universes, then for people who don't believe in a separate mental universe, Qualia wouldn't exist any more than the "soul" does.

But if you define it as a reductive unit of experience, and you think experience is a material process, then "Qualia" is just a label used to describe a quantum of experience.

So I mean, we can't really have a discussion about whether or not Qualia exist without formal definitions. In fact, it could very well be that we totally agree with each other and just don't know it because we are using the terms slightly differently.
And that's what materialism will never be able to explain: the appearance.
There are lots of explanations; if you want to ignore everything they have to say, that's obviously your choice.
posted by delmoi at 7:51 PM on March 19, 2013 [1 favorite]


Whereas qualia are all about what seems to be the case regardless of what is actually the case (which is why hallucinations, for example, are qualia, despite having no relation to outside reality).

Hallucinations have relation to an interior physical reality. Which is really no different at all from an exterior physical reality. It's all physical.

If you want to say that qualia is the name for what happens when higher order brains spark and pulse and subsequently introspect themselves and their existence, fine. But that's not any different than any other kind of sparking and pulsing, and telling me that it seems to be different is just putting magical sparkles on ordinary ol' brains.
posted by PhoBWanKenobi at 7:53 PM on March 19, 2013 [1 favorite]


If you want to say that qualia is the name for what happens when higher order brains spark and pulse and subsequently introspect themselves and their existence, fine.

No, I want to say that neurons and explanations for how neurons work have exactly nothing to do with what a hallucination looks like. Cut open the brain all you like, measure it to the top of high heaven, and you will never in a million years see the color red or the idea of Fermat's Last Theorem or someone's experience of generosity.

Hallucinations, and other appearances, are simply not made of matter. Nothing magical about them, though. They're perfectly natural productions of a non-material substance.
posted by shivohum at 7:58 PM on March 19, 2013


Personally, I believe that what we believe to be qualia is just an illusion of electricity and synapses.

To me this just seems to be saying that conscious experiences are a property of electrical activity in synapses (and possibly other physical things), and nothing else is required to explain them. Dennett would appear to be a property dualist. He tries to keep using words like "seeming" and "illusion" to evade even property dualism, but it doesn't work.
posted by Golden Eternity at 8:21 PM on March 19, 2013 [1 favorite]


No, I want to say that neurons and explanations for how neurons work have exactly nothing to do with what a hallucination looks like.
Which is why science has conclusively proved that hallucinogens have absolutely no effect on neuron function. /HAMBURGER
posted by Proofs and Refutations at 8:27 PM on March 19, 2013 [2 favorites]


Cut open the brain all you like, measure it to the top of high heaven, and you will never in a million years see the color red or the idea of Fermat's Last Theorem or someone's experience of generosity.

You might be able to see a face though.
posted by Proofs and Refutations at 8:33 PM on March 19, 2013


No, I want to say that neurons and explanations for how neurons work have exactly nothing to do with what a hallucination looks like. -- shivohum
This is sloppy thinking.

In theory it's obviously wrong: If you say X has "exactly nothing to do with" Y, then you are basically saying they are statistically independent. But obviously when you see a mouse some neurons fire and when you imagine a mouse some other neurons fire, and there must be some overlap. And when you hallucinate a mouse there is probably even more overlap in terms of what neurons are firing.

It's also just empirically false.

There is actually some interesting research on what exactly is happening in the brain when you see certain specific types of hallucinations on drugs, and how they relate to the physical structure of the visual cortex.

So to say that there is nothing relating the visual appearance of a hallucination and the physical firing of neurons is false. Not only are they related, they're actually related by a fairly simple mathematical formula that goes from 3D space in the brain to 2D space on the visual plane!
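For what it's worth, one standard account in the modeling literature (my gloss, not necessarily the specific research alluded to above) is that the retina-to-V1 mapping is roughly a complex logarithm, so a straight stripe of cortical activity corresponds to a spiral or "tunnel" pattern in the visual field. A toy sketch of that idea:

```python
import cmath

def cortex_to_visual_field(w):
    # Inverse of the (assumed) log map: visual-field position = exp(cortical position)
    return cmath.exp(w)

# A straight stripe of cortical activity along the 45-degree direction:
line = [complex(t, t) for t in (i * 0.1 for i in range(20))]

# Its image in the visual field is a logarithmic spiral: the radius grows
# exponentially (e^t) while the angle advances linearly (t).
spiral = [cortex_to_visual_field(w) for w in line]
radii = [abs(z) for z in spiral]
assert all(b > a for a, b in zip(radii, radii[1:]))
print("straight cortical line -> spiral in the visual field")
```

The point of the sketch is only that a simple geometric transform connects neural coordinates to visual-field coordinates; it isn't a claim about the full physiology.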


So your argument isn't just theoretically problematic, it's straight up refuted by actual science. And you're using your own ignorance as evidence you're right!
Cut open the brain all you like, measure it to the top of high heaven, and you will never in a million years see the color red or the idea of Fermat's Last Theorem or someone's experience of generosity. -- shivohum
You seem to be having trouble with the concept of representation. If you smash open your hard drive you won't see any of the pictures stored on it. Does that mean that the images actually aren't stored on the drive? Or does the hard drive have its own spirit in the non-material world where the images actually exist?
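The hard-drive point can be made concrete; a minimal sketch (the 8-bit RGB convention here is just an illustrative assumption):

```python
# A "picture" on a disk is bytes plus an interpretation.
data = bytes([255, 0, 0] * 4)  # hypothetical raw bytes on "disk"

# Physically inspecting the bytes shows no redness, just integers:
assert list(data[:3]) == [255, 0, 0]

# Under the assumed RGB convention, those same bytes *are* four red
# pixels -- the image exists as an arrangement, not as a separate substance.
pixels = [tuple(data[i:i + 3]) for i in range(0, len(data), 3)]
assert pixels == [(255, 0, 0)] * 4
print("same bytes, two descriptions")
```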

Unfortunately, if a person can't think clearly it may be impossible for them to understand the other side's position.
posted by delmoi at 9:10 PM on March 19, 2013 [3 favorites]


Which is why science has conclusively proved that hallucinogens have absolutely no effect on neuron function.

Hallucinogens affect neuron function, but there is an absolute separation between neuron function and the rich, singular visual experience of the hallucination. There is a correlation between certain neural activity and the hallucination, but the correlation is still as far from an actual appearance as the hex number 000080 on a computer is from the color blue. One may correlate with the other, but there is still a clear and unexplained gap between the two; one cannot be reduced to the other.

The whole question is: where is the little movie that we're seeing right now? The sights, sounds, colors, and emotions, the little dialoguing voice that talks in our head? Cut the brain open or scan it and you will see, at most, correlations. But where do the experiences come from?
posted by shivohum at 9:25 PM on March 19, 2013


If you cut open a brain, you are not going to see any qualia. I think we can all agree on that. There seem to be two types of responses to this. One is to conclude that materialism is fundamentally limited and that qualia are simply outside its reach. The other is to conclude that our intuitions about qualia are badly misguided.

I think it is clear that our intuitions about qualia are poor. First off, it subjectively seems that qualia are like movie projections that we watch, but this idea breaks down upon closer inspection -- how does our inner eye interact with the non-physical qualia? It also subjectively seems that we could impart consciousness to a wooden puppet -- the physical underpinnings of consciousness are invisible to us, and nothing in our subjective experience suggests that consciousness depends on neurons firing in a massively complex brain.

The poverty of our intuition is not by itself a purely mechanical account of qualia, but for me it is a strong sign that I should mistrust my intuition that such an account is impossible. So as we theoretically build up a brain from zero complexity, increasing its ability to form representations, why couldn't the brain eventually become convinced that its internal representations have some deep metaphysical existence? It's just a brain.
posted by leopard at 10:55 PM on March 19, 2013 [2 favorites]


If we are going to say the mind is just an algorithm, then what if we code up the experience of the color red and run it on a computer with no parallel processors (and with no dynamic memory or logic or whatever) and slow the clock down to one clock cycle per day. At what point in time does the algorithm experience the color sensation?

Heh. You should read Greg Egan's Permutation City.


Given that philosophy has largely chased its tail on this for most of the last 2000 years without coming to a consensus about what those questions are, I'm not expecting much of the next decade.

Ah, but be careful not to conflate the pre-cognitive revolution philosophy with the post-. People like Chomsky, Hofstadter, Dennett, Metzinger, Damasio (not all of them technically academic philosophers, but that's a different argument) are doing a very different brand of philosophy to Hume or Leibniz or Plato. These people aren't tail-chasing. They're MRI-chasing. They're working with (or are) scientists, and are the first to do so because brain science didn't exist for the last 2000 years. Yeah, there's a lot of hand-wavy, nonsensical philosophy out there, but impugning modern philosophy for the failings of the past misses the advances of modern science, and philosophers' response to such.


And that's what materialism will never be able to explain: the appearance.

You should check out the first couple chapters of Metzinger's Being No One. He does a pretty good job of fitting qualia into a larger scheme of mental phenomena, and, if you can stick with it, those phenomena into a larger scheme of neuronal action. There are ways of motivating the appearance from the physical reality — they're just really, really goddamn complicated.


delmoi: You are insisting on dismissing the objective/subjective gap. There's nothing magical about it — it's some combination of feedback, self-similarity, strange loops, and who knows what else, but it's really there. Saying as blithely as you do that there's no mystery, it's all just neurochemical/electrical, we already understand it, is both dishonest about the state of current science and arrogant of your own knowledge. The color red shares no inherent properties with whatever brain state produces the sensation of redness. For instance, the former is a color, and shares a property with blood and clay, while the latter is an arrangement of neurons, and shares a property with a bundle of rope or a computer processor. This is the patterns of patterns thing I mentioned. Red is a pattern that arises out of the phenomenal process of experiencing that arises out of a mental process of seeing which is itself a pattern that arises out of a computational process of perceiving which, in turn, arises out of a physical process of neuronal/biological action. (You can probably go down one more level to whateverthefuck QM is doing, but I don't think we really have that answer yet.) The experience of redness isn't a qualia floating above the brain; it is a very high-level abstraction that supervenes on the brain through a whole stack of other effects.

But here's the problem: even once you've explained every level of this hierarchy in mechanical and physical terms, you haven't explained what the experience of redness is like. Being able to stimulate the same experience in another brain artificially is all well and good, but a society of the blind could do it just as well, and congratulate themselves for figuring out what redness is, without ever having experienced redness. To explain experiences, you need to have some sort of non-physical (in the sense that you can't touch it) abstract mechanism by which an involution happens which allows one mental process to perceive another, and then to perceive itself perceiving. Again, "non-physical" doesn't mean spiritual or magical or anything, but just abstract. And that abstract process requires an explanation that goes far beyond merely "inject chemicals" or "pick out some neurons" or whatever.

Our experiences are different to physical objects. Or rather, they seem different. The above sort of explanation can explain how and why we have experiences, in a sense. But they can't explain (easily) why it seems to ourselves that we do. I for one think that this seeming can in fact be explained by some complex, high-level, abstract form of cognition, but it has to be really damn clever to overcome the fact that things seem to us a certain way, which is not by itself motivated. This is the distinction that the philosophical zombie thought experiment is supposed to illustrate. We can completely consistently imagine a world like ours with no experiences in it. In fact, there was such a world before humans became self-conscious.

I have some sympathy for the dualists who accuse people like Dennett of being "zombies" because they go so far out of their way to discount experience. Experience is very real, and is unlike anything else we can point to in the world. It IS spooky! It's fuckin' weird! And it requires a lot more than dismissive talk about reproducibility to account for it. As a last remark, I'll just say that this is hard to talk about, especially in an internet comment, but my only point to you, delmoi, is that you should think very carefully about how, or whether, your experiences really are like other things in the world, like chairs and words and ideas. Again, there's nothing magical about experiences, but I, and a lot of people much smarter than me, think there is, and nothing you've said so far has made me doubt that.
posted by cthuljew at 11:07 PM on March 19, 2013 [4 favorites]


We can completely consistently imagine a world like ours with no experiences in it. In fact, there was such a world before humans became self-conscious.

What is the evidence that various animals that came before us had no qualia-type experiences? What would such evidence even look like? (Non-rhetorical question.) I can't think of any scientific data to suggest that it is like nothing to be a bat. Presumably they lack self-consciousness, as defined by mirrors and red dots and "theories of mind" about other bats and so forth, but that's no reason they can't experience the qualia of red in a more limited fashion.
posted by chortly at 12:07 AM on March 20, 2013 [1 favorite]


What is the evidence that various animals that came before us had no qualia-type experiences?

Yeah, I'm pretty doubtful that any animal we'd call "human" would lack qualia-type experiences, being that all the great apes have mirror self-recognition and all of that.

(But then I have trouble even imagining philosophical zombies, much less consistently imagining them.)
posted by PhoBWanKenobi at 12:14 AM on March 20, 2013 [2 favorites]


The mirror test is actually more a test of cleverness than of self-consciousness.

However, both of you make good points. I had the hidden assumption there, which I do in fact hold, that creatures that aren't conscious the way humans are aren't aware of their own awareness of their experiences. That is, they are not reflexively self-conscious, and as such, don't have experiences the way we do. Many animals are amazingly intelligent in huge varieties of ways that overlap with human intelligence almost completely, but I don't think non-human animals have a sense of "looking out at the world" the way humans do. I could, of course, be wrong, and this sense of subjective experience could be a much easier thing to achieve, requiring only a bat or monkey or chimp brain. In fact, I should really retract that assumption completely, as such experiences could be happening in the minds of animals, but since they are phenomenal and not merely physical, we have no access to them. Hmmm.
posted by cthuljew at 2:06 AM on March 20, 2013 [1 favorite]


(But then I have trouble even imagining philosophical zombies, much less consistently imagining them.)

Never played a video game?
posted by Pope Guilty at 2:33 AM on March 20, 2013


Kind of went long so TL;DR:
1) We only know that we experience Qualia because we have other Qualia that allows us to sense our own sensation of things. The Qualia and the second-order Qualia need not be made of different "stuff"

2) If things are isomorphic to each other then there is no need for us to consider them different things. The physical sensation of Qualia may appear to be completely different from the chemical stuff in our brain, but this is not a problem so long as there is some operation we can do to convert them back and forth.
(we don't even need to propose any fantastic technology; simply imagining something is an example of creating experience from neural patterns, rather than neural patterns through experience, although memories are somewhat degraded from the original experience)

3) You guys seem to have a problem with the fact that we cannot communicate experiences but:
A) why is this a problem? Why do we need to be able to communicate all of our experiences?
B) Even if it were a problem, do you think that if we switched to a different view of the world we would somehow gain the ability to psychically transfer experience from one person to another, just by changing our viewpoint? Seems pretty unlikely
4) The other problem with the so-called communication problem is there is no theoretical reason why we can't transfer experience from one person to another, the same way we copy an image from one hard drive to another.

5) I totally agree it absolutely feels as though my experiences are real and not just the result of chemical reactions. But there isn't any reason why we can't just accept the fact that we may feel something and know something else (like those optical illusions that we know are illusory but still can't help seeing).

6) Furthermore, if we go by what we feel is true there is a problem with dualism, in that I don't feel at all like I'm simply observing Qualia generated by material things. It feels like I am directly experiencing the things themselves; in fact it not only feels as though I am experiencing color, light, and so on, it actually feels like I am physically experiencing the objects I am looking at without any mediation at all, which we know can't be true, because we know I see things because photons bounce off of them and into my eyes. So we are stuck with the same problem of being forced to believe things that contradict what feels to be true about perception.

7) However, we can avoid that problem by suggesting that in fact we are directly experiencing the things themselves, and that our brains are in fact directly connected to those things via the electromagnetic field, of which photons are a carrier particle. Instead of saying that we merely receive information about the material stuff around us, we can say that that stuff, the photons bouncing off of it, the first-order qualia generated by the photons, and the second-order qualia generated by the first-order qualia are in fact all just part of one large physical system and made from the same stuff. However, if we do that we lose the boundary of self, which also feels very real.

In the end it may be the case that we can't ever reconcile the sensations of experience with what we know about how experience works. It may be that the only way to reconcile that is to suck it up and get over it, since in the end how we feel about things has no bearing on whether or not they are actually true.

(So yeah, 600 word summary. I kind of hate myself now. The rest of the comment is about 1k words, without quotes. Have fun :P)
▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓
Hallucinogens affect neuron function, but there is an absolute separation between neuron function and the rich, singular visual experience of the hallucination. There is a correlation between certain neural activity and the hallucination, but the correlation still is as far from an actual appearance as the number 000080 on a computer is far from the color blue. -- shivohum
If two things are correlated, then the following statement is false:
neurons and explanations for how neurons work have exactly nothing to do with what a hallucination looks like.
So, logically that means you now agree your earlier statement was wrong, which I suppose means you've actually learned something.
The whole question is: where is the little movie that we're seeing right now? The sights, sounds, colors, and emotions, the little dialoguing voice that talks in our head? Cut the brain open or scan it and you will see, at most, correlations. But where do the experiences come from? --shivohum
Well, how do we know we experience these sights, sounds, colors, and emotions? We know they are happening inside our head because we feel them happening. When we are warm, we both experience the sensation of feeling warmth, and at the same time we also experience our own perception of it.

I realize that might seem a little bit paradoxical, but it's not. Think about a reptile. It seeks out warmth, and can feel warmth -- perceive it. But it's unlikely that it perceives itself perceiving it. Assuming this is true, if a reptile were somehow given the ability to communicate, it could not understand this discussion because it would not be aware of its own perceptions.

Now if we can describe first-order qualia materially, then we can also describe second-order qualia the same way. We don't need some other non-material thing out there to receive our material qualia and convert it into mental matter, according to the materialist view.
delmoi: You are insisting on dismissing the objective/subjective gap. There's nothing magical about it — it's some combination of feedback, self-similarity, strange loops, and who knows what else, but it's really there. Saying as blithely as you do that there's no mystery, it's all just neurochemical/electrical, we already understand it


Where did I say I thought we already understood it? I am only saying that 'it' is hypothetically understandable, or at least nothing precludes us from understanding it, other than quantum physical laws.
The color red shares no inherent properties with whatever brain state produces the sensation of redness. For instance, the former is a color, and shares a property with blood and clay, while the latter is an arrangement of neurons, and shares a property with a bundle of rope or a computer processor.
I think you and shivohum both need to learn the concept of Isomorphism. In math, if things are isomorphic they basically behave the same way even if they 'look' different.

So for example, the map from x to 1/x is an isomorphism on the set of numbers that are not zero: every number maps to one inverse and can be mapped back. You can do arithmetic on the numbers themselves or on their inverses: 1/(x * y) = (1/x) * (1/y), and 1/(x + y) = ((1/x) * (1/y)) / (1/x + 1/y). So everything there is to say about {x1 ... xn} for any n (that is to say, any sized group of numbers) can also be said about {1/x1 ... 1/xn}.
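The inversion example can be checked numerically; a minimal sketch (`f` is just the map x -> 1/x):

```python
# The map f(x) = 1/x is a bijection on nonzero numbers, and arithmetic
# on the originals translates into arithmetic on their images.
def f(x):
    return 1.0 / x

x, y = 3.0, 5.0

# Multiplication carries over directly: f(x*y) == f(x)*f(y)
assert abs(f(x * y) - f(x) * f(y)) < 1e-12

# Addition carries over too, via a messier formula:
# f(x + y) == (f(x)*f(y)) / (f(x) + f(y))
assert abs(f(x + y) - (f(x) * f(y)) / (f(x) + f(y))) < 1e-12

print("inversion preserves the arithmetic structure")
```

The design point is that "looking different" (fractions vs. plain numbers) is compatible with every relationship being preserved, which is all the isomorphism claim needs.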
But here's the problem: even once you've explained every level of this hierarchy in mechanical and physical terms, you haven't explained what the experience of redness is like. -- cthuljew
Well, two things:

1) why do we need to explain it when we can experience it directly? Now obviously someone who is color blind may not be able to see red, and because I cannot explain the true experience of seeing red I would never be able to convince him of it. Isn't the fact that I can experience red enough for me to believe it exists? Why do I need to communicate my experience to another person using language?

2) The other problem with this objection is that a materialist 'theory' need not accept it as true - there is no theoretical reason why we can't transfer qualia directly from one person to another.

3) Is there some other metaphysical view where qualia can be communicated with language? If not, how does this make materialism worse than any other metaphysical system? If it's not worse, then what's the problem? Or do you think there is some future metaphysics that will allow us to transfer qualia if we believe it?
Our experiences are different to physical objects. Or rather, they seem different. -- cthuljew
The problem is that there are lots of things that seem but aren't. Why do you think seeming is a sufficient condition? Do you think that all things that seem [whatever] are [whatever]?
delmoi, is that you should think very carefully about how, or whether, your experiences really are like other things in the world, like chairs and words and ideas.
Huh? Why? I enjoy thinking about these things, because it's fun. And yes, of course there is a lot of cognitive dissonance between the knowledge that experiences are just chemical and the fact that perception is so visceral, especially vision, for me at least.

There is absolutely nothing I can do to make myself feel that vision, and the self, are not real. But how can you say that because something doesn't feel true it isn't true? I am OK with the fact that there may be differences between what I feel and what I know. That may even cause cognitive dissonance when I think about it. But why not just accept that and move on with your life? Why not just accept that there will be contradictions between your model of reality, and your everyday experience?

You guys seem to want a philosophy that eliminates frustrating contradictions between what we know, our everyday experiences, and how we think about the world. But there is no rational reason to assume that things can't be both true and frustrating.


The other problem is that dualism actually produces its own cognitive dissonance with my sensation of the world. Qualia theory tells us that we only perceive the qualia of things; we don't really "see" light reflected off things, and we certainly don't see the actual things themselves.

However, the sensation caused by vision is so visceral that I do not feel at all like I am experiencing an image. In terms of how I feel, I feel like I am experiencing the things themselves, but that can't be true. However, if we assume that the mind and the stuff I see are made of the same kind of stuff (i.e. quantum fields or whatever), then in a way that makes more sense: instead of imagining that somehow I'm only seeing an image, I can instead think that the stuff around me and my mind are directly physically linked together through the electromagnetic field (via photons bouncing off them and hitting my retina) and are in fact part of the same physical system with no separation, which removes the problem. However, then you have the problem of the sense of self and the boundary between self and the rest of the world.

This is how Berkeley ended up thinking that there was no physical reality at all and that everything we experienced was made up of mental substance, and that the reason people seemed to share perception of the same thing was that everything in the universe was actually composed of ideas in the mind of God. Which is obviously kind of a problem if you're not religious. A fully material system, on the other hand, also resolves this problem.
posted by delmoi at 3:19 AM on March 20, 2013 [1 favorite]


You guys seem to want a philosophy that eliminates frustrating contradictions between what we know and our everyday experiences and how we think about the world. But there is not rational reason to assume that things can't be both true and frustrating.

This is a question of science, more than philosophy. There is a thing in the natural world that we call experience. It is fundamentally unlike anything else we know about in the real world. We want to find out why and how it exists. Shrugging and saying, "eh, it's just confusing, but experience is isomorphic to neuronal action, so don't worry about it" isn't good enough. I guess maybe it is for you, but it definitely isn't for me. Because experiences are qualitatively different to other sorts of things. It's like saying that a painting and the paint + canvas are isomorphic to each other. So long as I know where all the bits of paint are, I know what it's a painting of. Except that the image on the painting is something very different to the paint and canvas, even though it consists of nothing more than said paint. So we need a way to explain that this happens, and we have to invoke artists and viewers and all sorts of other things that aren't just the paint and canvas. The analogy isn't perfect, because there isn't some other thing outside our head that explains our experiences, but there is something at a different level of description that ought to, because just describing the paint and canvas in our skulls doesn't tell us anything about the picture. Your basic argument can be used to deride biologists for worrying about population dynamics because they're all perfectly isomorphic to elementary particles going through the evolutions of their quantum systems. You're ignoring that there are phenomena at levels of description above that of the neuron.
posted by cthuljew at 5:02 AM on March 20, 2013 [1 favorite]


Perhaps qualia are a question of science, but what if there is an event horizon, call it a 'qualia horizon', beyond which science will be unable to comment on what it means to experience the color red? Even as time goes by and the precision and capability of science increases, as the materialist thinker approaches these qualia, catching shadows of the mechanisms, and collecting hints about how they might work, she may be tracing an asymptote that never reaches the goal of proving qualia.

Let me rephrase my above point as a question. For those standing on either side of the debate, Why is it necessary and sufficient for qualia to function outside of a scientific/material conception? And why is a metaphysical explanation of a game of billiards merely sufficient in explaining the mechanics of the game play?

Perhaps the mechanics of experience are disrupted by attempts to capture it.

Also, a note about my definition of language above: the relations between things are also things. These are physical facts like `distance away from`, and `orientation relative to`. It is not necessary to bring up metaphysical explanations of language, it functions just fine in a purely material conception. However, if there exists relations that cannot be measured or processed scientifically, all bets are off.
posted by kuatto at 6:19 AM on March 20, 2013


Perhaps qualia are a question of science, but what if there is an event horizon, call it a 'qualia horizon', beyond which science will be unable to comment on what it means to experience the color red?

I wouldn't call it an "event horizon," I'd call it "an explicitly acknowledged and widely accepted limit of science as an epistemology." Science also won't tell me whether a movie is "good," shouldn't say whether I have rights in the criminal justice system, and can't say anything about mathematical proof. That is because science as a tool constructs probabilistic explanatory models for generalized evidence, and doesn't work well with singletons like your personal experience of color perception when seeing a red squiggly line under a misspelled word.

I'm insistent on a bright line there because I want to preserve the tremendous power of science as a working tool. Just as I would not wish to blunt an axe by using it as a shovel, I don't want science to be hopelessly muddled with questions like, "what does it mean?"
posted by CBrachyrhynchos at 7:04 AM on March 20, 2013


However, if there exists relations that cannot be measured or processed scientifically, all bets are off.

Of course there are. Science can't work with n=1 or non-probabilistic statements where n=infinity. That doesn't mean that those relations are non-material. I don't need supernaturalism to use triangulation (in the qualitative research sense) to construct a case study of the JFK assassination or an interpretation of Shelley's Frankenstein (n=1), or to work through a proof regarding an infinite set of triangles (n=infinity).
posted by CBrachyrhynchos at 7:19 AM on March 20, 2013


The mirror test is actually more a test of cleverness than of self-consciousness.

That pigeons can be trained to have MSR does not mean that MSR in apes other than humans is the result of "cleverness." Assuming that's true is a huge leap--and a rather unintuitive one, given the genetic and physical similarities between humans and chimpanzees.

Then again, if you inherently believe that humans are a special and unique class in terms of experience--the underpinning of most religious thought--I suspect that this seems intuitive to you. Seems wrong to me, though, given what we know about brains and evolution.

Never played a video game?

A philosophical zombie is not a zombie as depicted in most media. A philosophical zombie is a thought experiment--a being that is physically identical to another human in terms of brain composition, which displays all outward signs of having normal responses, but does not have "qualia" or an experience of self. Most materialists would tell you that Chalmers (who came up with the concept) goes off the rails when he says this is even possible; any brain that has the same inner workings as your brain, and displays normal responses, could safely be assumed to have subjective experience.
posted by PhoBWanKenobi at 7:40 AM on March 20, 2013


A philosophical zombie is not a zombie as depicted in most media. A philosophical zombie is a thought experiment--a being that is physically identical to another human in terms of brain composition, which displays all outward signs of having normal responses, but does not have "qualia" or an experience of self. Most materialists would tell you that Chalmers (who came up with the concept) goes off the rails when he says this is even possible; any brain that has the same inner workings as your brain, and displays normal responses, could safely be assumed to have subjective experience.

It seems to me Chalmers was just trying to get at the explanatory gap between our materialist explanation of the world and conscious experience. If we simulate H2O, we can clearly see higher-order properties of water like surface tension, capillary action, etc., in the simulation. If we simulate 635nm photons, a retina, and a Boltzmann machine, we may be able to produce many properties of light and brains but there is no part of the simulation we could point to as conscious experience of the color red. If we completely simulated the entire world using only these equations, or whatever, there is nothing in the simulation that we could point to as the property of conscious experience. We just have to be like well we know we're conscious and this is a simulation of us, so let's just say it has consciousness too by fiat.

Perhaps Chalmers goes off the rails when he starts believing in panpsychism.
posted by Golden Eternity at 8:30 AM on March 20, 2013


He's not talking about a simulation, though. The thought experiment is specifically about a universe just like ours (that is, not simulated) without conscious experience--which isn't a world I can imagine, but is one that Chalmers apparently can. But then, I believe that we're robots made of meat.

(If he were talking about simulations, I'd agree with you in that we'd have to agree with "consciousness by fiat." After all, that's, for me, the most compelling argument against solipsism--I have consciousness, so I'll grant you consciousness, even though I have no subjective experience of your consciousness, only indirect evidence for it.)
posted by PhoBWanKenobi at 8:46 AM on March 20, 2013 [1 favorite]


CBrachyrhynchos, point taken, but facts (such as n=1) do not exist independently. It is within those dependent relations that science operates (see Peirce).

I'd call it "an explicitly acknowledged and widely accepted limit of science as an epistemology."

I agree with you on this point. However (and I think this is the crux of the entire discussion) strict materialists would assert that that epistemological limit defines the edge of all understanding and would assert that nothing exists beyond it.

I'm insistent on a bright line there because I want to preserve the tremendous power of science as a working tool. Just as I would not wish to blunt an axe by using it as a shovel, I don't want for science to be hopelessly muddled with questions like, "what does it mean?"


If qualia exist, they cannot be reasoned about using science. However, problems crop up when the uninspired thinking of the materialist shoulders aside all else and uses his tremendous power (think atom bombs) to transform education, culture, art, society etc. He attempts to dismiss "that which is not real". If it is an imposition to inquire of the materialist school, "What does it mean?" and "Why?", then is it not equally an imposition for the materialists to assert their heavy-handed worldview?
posted by kuatto at 8:56 AM on March 20, 2013


However, problems crop up when the uninspired thinking of the materialist shoulders aside all else and uses his tremendous power (think atom bombs) to transform education, culture, art, society etc.

However, problems crop up when the uninspired thinking of the dualist shoulders aside all else and uses his tremendous power (think the inquisition) to transform education, culture, art, society etc.
posted by PhoBWanKenobi at 8:58 AM on March 20, 2013


I agree with you on this point. However (and I think this is the crux of the entire discussion) strict materialists would assert that that epistemological limit defines the edge of all understanding and would assert that nothing exists beyond it.

Philosophical materialism != "scientism."
posted by CBrachyrhynchos at 9:00 AM on March 20, 2013 [1 favorite]


Most materialists would tell you that Chalmers (who came up with the concept) goes off the rails when he says this is even possible; any brain that has the same inner workings as your brain, and displays normal responses, could safely be assumed to have subjective experience.

My big problem with such a refutation is that it isn't one--it is just a working hypothesis. This kind of argument against philosophical zombies is not even as sophisticated as a Turing test, which at least has the scientific virtue of being soundly empirical: i.e., if the difference between a known conscious system and a complex machine is not empirically observable, then they are the same. But the Turing test is not intended as a contribution to the philosophy and science of mind. If anything, it's more of a pragmatic statement that creates an equivalence class of systems. At times, I see it as a statement of empirical despair.

There has never been any demonstration that "awakening" occurs in sufficiently complex, properly constructed systems. Nor has there been a mathematical model that illustrates how such a thing might occur. I'm not saying there might not be in the future--after all, just as one example, it took a long time to discover the Higgs boson after it was hypothesized. But keep in mind that there is no equivalent to the Higgs theory in consciousness studies: the Higgs is still a purely objective observable, whereas the crux of the "mind problem" is that we are trying to square the subjective and objective. Just saying "the subjective arises out of the objective" is, at this point, a metaphysical--and frequently ideological--statement without any scientific support. Indeed, radical dualism has created many stupid and/or extreme views that have been discredited: for example, the belief that one should not give psychological descriptions of animals since they are just "machines". Hell, human babies were operated on without anesthesia until just a few decades ago because "they couldn't feel anything", their screams to the contrary.

And bringing up ideas like mental representations, engrams, and encodings does not resolve the issue. Think of what delmoi said in one of his many interesting posts on this thread:

You seem to be having trouble with the concept of representation. If you smash open your hard drive you won't see any of the pictures stored on it. Does that mean that the images actually aren't stored on the drive? Or does the hard drive have its own spirit in the non-material world where the images actually exist?

This is a clever rejoinder, but it does not address the crux of the matter. Indeed, it illustrates the problem. It confuses encoding with feeling, since we can turn around and ask: just because the hard drive has images of your loved ones encoded on it, does it feel love and affection for them when the read head accesses the data?

What I find tiresome is the structure of this sort of debate, particularly because "my tribe" (the scientists) seem to be so unable, or unwilling, to untangle their scientific claims from their metaphysical frames. The thing I most question is this: who says that "subjectivity" is "non-material"? One can, as I do, point out the limitations of our current materialist conceptions, particularly regarding mental phenomena, without thinking that one should abandon materialism. Rather, I think it would be, at the very least, an interesting exercise to imagine a material world in which "mind" is an objectively observable property of systems of matter with the same ontological status as "mass" and "force".
posted by mondo dentro at 9:01 AM on March 20, 2013


who says that "subjectivity" is "non-material"?

The resistance to this comes from the fear that our subjectivity is subject to the same strict determinism as all other material phenomena, that our thought is, in Spinoza's phrase, a "spiritual automaton."
posted by No Robots at 9:30 AM on March 20, 2013


Btw, I say this as an absolute idealist and a relative materialist.
posted by No Robots at 9:32 AM on March 20, 2013


This is a question of science, more than philosophy. There is a thing in the natural world that we call experience. It is fundamentally unlike anything else we know about in the real world. -- cthuljew
Well, there's part of the problem. You say it is fundamentally different, when in fact all you know is that it feels fundamentally different.

And as I said, I personally don't even feel like I'm sensing qualia; I feel like I am directly experiencing the material world. Especially if we say that atoms and photons are essentially the same thing, and "qualia" is only a label applied to the chemical signatures created by those experiences.
It's like saying that a painting and the paint + canvas are isomorphic to each other. -- cthuljew
I assume you mean the physical arrangement of the dry paint on a canvas, not, like, an unpainted canvas and wet paint on a palette (in which case they would probably not be the same thing). If it's the former, then they are obviously the same thing.
Except that the image on the painting is something very different to the paint and canvas -- cthuljew
That's also true. If you replaced all the red paint with paint of a different chemical makeup but the same color and texture, the painting would be physically different but retain the same image. So they're not isomorphic, because there are multiple paintings that can produce the same image. It's more like a surjective relationship (sometimes called an 'epimorphism'), at least to a first approximation, if you ignore the fact that there are actually lots of variations on the image produced by different lighting, viewing angles, and so on.

It seems like you are actually making the same mistake discussed in this article - you have a concept 'painting' that doesn't have any real meaning. The arrangement of the paint on canvas is real, the image is real, but "the painting" is just an abstract idea in your mind that is not real.

For me, "the painting" is the physical arrangement of the paint on the canvas; the image of the painting is created by the painting interacting with a light source. My perception of the painting is created by my eyes interacting with that system, and everything is made out of the same stuff.

In other words: (paint + canvas + light) = image, (paint + canvas + light + me) = perception.
I guess maybe it is for you, but it definitely isn't for me. -- cthuljew
Except the universe doesn't care how you feel, and is under no obligation to provide you an explanation of itself that you like.

Obviously you can go on believing whatever you want; it doesn't make any difference to me. But I would say that from my perspective it seems like you're doing two things wrong: 1) you're basing what you think is true on your own emotional response, which doesn't make any sense at all to me; 2) you're confusing conceptual things that represent sets of things with singular things themselves. A painting is a collection of atoms, and "the painting" refers to that collection, not a singular thing; we don't usually think about the painting that way because it's not necessary. But here you seem to be drawing inferences from treating the painting as a singular thing instead of a label for a particular set of atoms.

(and not only that, the set of atoms isn't even well defined, since paint can flake off or new paint can be added, so the set changes and there isn't a precise hard boundary between 'the painting' and 'the rest of the universe')
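To make that many-to-one relationship concrete, here is a toy Python sketch; the `Paint` class and `render` function are invented for illustration, not any real graphics API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Paint:
    pigment: str   # chemical makeup of the pigment
    color: str     # perceived color it reflects

def render(canvas: list[Paint]) -> tuple[str, ...]:
    """The 'image' keeps only what a viewer sees: the colors, in order."""
    return tuple(p.color for p in canvas)

# Two physically different paintings (different pigments)...
cadmium = [Paint("cadmium red", "red"), Paint("titanium white", "white")]
synthetic = [Paint("naphthol red", "red"), Paint("zinc white", "white")]

# ...map to the same image, so render() is surjective onto its image
# but not injective: you can't recover the pigments from the colors.
assert cadmium != synthetic
assert render(cadmium) == render(synthetic)
```

The point of the sketch is just that the mapping from paint-arrangements to images loses information, which is what makes "same image" weaker than "same painting."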
If we simulate 635nm photons, a retina, and a Boltzmann machine, we may be able to produce many properties of light and brains but there is no part of the simulation we could point to as conscious experience of the color red. -- Golden Eternity


How is that different from a real brain? You can't point to any part of a real brain and say "that's where the consciousness is!" either.
If qualia exist, they cannot be reasoned about using science. -- kuatto
Well again, that's a definitional issue. You can define qualia in material terms as the physical structure of the experience in your brain. Those physical structures can be studied by science, and are studied by science all the time. We actually know a lot about what neurons respond to what kind of image in the visual cortex, for example.

If you define qualia as something that can't be reasoned about using science, then of course you can't reason about them with science. By definition.
But the Turing test is not intended as a contribution to the philosophy and science of mind.
I think Turing would have disagreed with you about that.
This is a clever rejoinder, but it does not address the crux of the matter. Indeed, it illustrates the problem. It confuses encoding with feeling, since we can turn around and ask: just because the hard drive has images of your loved ones encoded on its hard drive, does it feel love and affection for them when the read head accesses the data? -- mondo dentro
You're missing the point I was trying to make there, which is that someone said you can't smash open someone's head and see the qualia the same way they do. Both brains and hard drives store images, but when a brain accesses an image it does feel an emotional response. However, if you were to crack open someone's head you would see neither the image nor the emotion. (In theory it might be possible to detect and analyze the physical arrangement of neurons and energy that makes up the image or the emotional response to it, but that data would be isomorphic to it, just as the data on the hard drive is isomorphic to the image. It would not 'look like' the image or the feeling.)
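A small sketch of that encoding point, using a made-up two-color "image" format: the bytes on the "drive" carry all the information in the image without looking anything like it.

```python
# A 2x2 "image" as a viewer would describe it.
image = [["red", "red"], ["blue", "red"]]

# "Smash open the hard drive": the stored encoding is just bytes.
palette = {"red": 0, "blue": 1}
encoded = bytes(palette[c] for row in image for c in row)
assert encoded == b"\x00\x00\x01\x00"   # nothing image-like here

# But the mapping is invertible, so no information is lost:
inverse = {v: k for k, v in palette.items()}
decoded = [[inverse[b] for b in encoded[i:i + 2]] for i in (0, 2)]
assert decoded == image
```

The encoding is isomorphic to the image (round-trips exactly), yet inspecting the raw bytes shows you neither redness nor a picture.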
posted by delmoi at 9:40 AM on March 20, 2013 [2 favorites]


Except the universe doesn't care how you feel, and is under no obligation to provide you an explanation of itself that you like.

Another hypothesis stated as scientific fact. And a weird one given your overall thrust, since it personifies the universe as if it were a subject. I rather like it! The universe knows that we feel, but it just doesn't give a shit!

I think Turing would have disagreed with you about that.

Nope. He was smart enough not to get into the morass of "what is consciousness". He asked a simpler, mathematically posable question: is there a sense in which we can say that "machines think"? His test simply says, if a machine can imitate a thinker (that is, it presupposes the existence of "thinkers"), then it's "close enough for government work". That is, it creates an empirically-defined equivalence class.
posted by mondo dentro at 9:53 AM on March 20, 2013 [1 favorite]


It *seems* as if the sensation of seeing red is something in the universe that we can perceive/experience with our minds but is somehow independent of us. But since this is an incoherent notion, why is it so hard to accept that our intuitions about the fundamental nature of conscious experience simply suck? Once you accept that, it's no longer necessary to insist that qualia are fundamental entities in the universe that present serious challenges to materialist theories of nature, instead of properties of complex brains that are "important" mainly because we are self-aware apes that like to talk to each other.
posted by leopard at 9:54 AM on March 20, 2013 [1 favorite]


...why is it so hard to accept that our intuitions about the fundamental nature of conscious experience simply suck?

It isn't. Any more than our intuitions about rigid body motion or special relativity suck. But from a scientific perspective, that's no excuse to not engage the subject critically and try to find a way to resolve this "suckiness"--using tools that we have reason to trust (e.g. mathematical models and experiments).
posted by mondo dentro at 9:59 AM on March 20, 2013 [1 favorite]


who says that "subjectivity" is "non-material"?

The resistance to this comes from the fear that our subjectivity is subject to the same strict determinism as all other material phenomena, that our thought is, in Spinoza's phrase, a "spiritual automaton."


I'm suspicious of the entire dualism/holism dichotomy that continues to be thrashed. Intellectual history shows us that usually such irresolvable dichotomies can disappear with a reformulation of the problem. It might very well be that we're not dualistic enough when thinking of consciousness, in the sense that we have not taken our materialist perspective to include "intention" or "mind" as a fundamental property. If we did, the spiritual automaton problem might just disappear.
posted by mondo dentro at 10:03 AM on March 20, 2013 [1 favorite]


that's no excuse to not engage the subject critically and try to find a way to resolve this "suckiness"--using tools that we have reason to trust (e.g. mathematical models and experiments).

Experiments, sure -- neuroscientists can manipulate the brain and nervous system and observe the reported effects on consciousness. (Or they can observe the effects of "natural" manipulations like brain injuries.) I'm not sure how mathematical models are going to help. Quite frankly, this is a philosophical problem, a puzzle that arises because our direct experience of consciousness seems to be at odds with the materialist conception of the universe derived through advances in physics, chemistry, and biology. This creates a tension that can only be resolved by changes in how we think about consciousness and the world. Scientific advancements can change how we think but they are not the only thing that can do so. In fact, since this puzzle seems to be based on the gut feeling that science will *never* be able to fundamentally explain consciousness, "more science" is probably not the true key to progress. The way forward is through the development and refining of our folk psychology -- which is affected by advances in science, for example in research establishing physical correlates of subjective experience, but goes beyond that.
posted by leopard at 10:15 AM on March 20, 2013


I'm not sure how mathematical models are going to help. Quite frankly, this is a philosophical problem, a puzzle that arises because our direct experience of consciousness seems to be at odds with the materialist conception of the universe derived through advances in physics, chemistry, and biology. This creates a tension that can only be resolved by changes in how we think about consciousness and the world.

Why would you assume that these changes in how we think about consciousness will not yield changes in how we model the universe? Prior to the last century, mathematics and physics were believed to reveal the austere, timeless nature of the universe, and concerned themselves with regular motions (the "clockwork universe") and regular geometry. Then various paradoxes within mathematics and physics led to the discovery of fractals and chaos. The concepts and the models changed, and we now see (and model, explain, predict, measure) such "regular irregularities" everywhere. Something like that is likely to happen with our models of intentional systems. Not that I've a clue how exactly, mind you.

I don't have a lot of faith in philosophy on its own to resolve this. It tends to have a hard time imagining certain things (like limit processes)--perhaps because of its near-total dependence on language--and creates conceptual categories that often reinforce distinctions where there need not be any. Science has a history of synthesis: Newtonian mechanics does not contradict quantum and/or relativistic mechanics; it is contained within them as a limiting case. Whereas philosophy tends to get hung up asking whether the world is stochastic or deterministic (say, as a simple example), science can just answer "yes" to both, and feel comfortable thinking of these two "opposites" as coexisting descriptions of reality.
posted by mondo dentro at 10:34 AM on March 20, 2013


Whereas philosophy tends to get hung up asking whether the world is stochastic or deterministic

Some philosophers have dealt with this in detail, as for example:
The tendency of the material world is towards integration, cooperation and inter dependence. On the other hand the tendency of the spiritual world is towards self-consciousness and freedom. The more thoroughly we shall become integrated materially the higher shall we rise spiritually and the wider will then be our freedom. For freedom we are to look in the direction of Spirit, and not in the direction of matter. As St. Paul said: "Now, the Lord is that Spirit: and where the Spirit of the Lord is there is also Freedom."--The Fetishism of liberty / Harry Waton
Today's materialists are desperately trying to find a morsel of freedom somewhere down in the infinitesimal that will somehow legitimize their faith in their own inner freedom. At the same time, they deny any kind of inner freedom to any other life-form.
posted by No Robots at 10:54 AM on March 20, 2013


Another hypothesis stated as scientific fact. And a weird one given your overall thrust, since it personifies the universe as if it were a subject. I rather like it! The universe knows that we feel, but it just doesn't give a shit! -- mondo dentro
Well, it was a rhetorical flourish - although if you parse it carefully, I am simply stating that it does not care; that is to say, caring about how you feel is not a property of the universe. I didn't say anything about whether or not the universe knows things.

But that said, from a materialist perspective our thoughts and feelings are a part of the universe.

With regard to Turing, this statement:
He was smart enough not to get into the morass of "what is consciousness". He asked a simpler, mathematically posable question: is there a sense in which we can say that "machines think"? -- mondo dentro
is not evidence for this statement:
But the Turing test is not intended as a contribution to the philosophy and science of mind. -- mondo dentro
And furthermore if he didn't want to contribute to the philosophy of the mind, why did he publish his paper in a philosophy journal titled Mind?

It seems to me that if someone didn't want to contribute to the philosophy of the mind, publishing in a philosophical journal on the mind would be the last thing they would do.

Furthermore, he does discuss consciousness in his paper, and while he doesn't try to define it, he actually explicitly states that we must either accept that machines can have consciousness, or become "extreme solipsists" who deny the consciousness of other people:
According to the most extreme form of this view the only way by which one could be sure that machine thinks is to be the machine and to feel oneself thinking. One could then describe these feelings to the world, but of course no one would be justified in taking any notice. Likewise according to this view the only way to know that a man thinks is to be that particular man. It is in fact the solipsist point of view. It may be the most logical view to hold but it makes communication of ideas difficult. A is liable to believe "A thinks but B does not" whilst B believes "B thinks but A does not." instead of arguing continually over this point it is usual to have the polite convention that everyone thinks.
...
In short then, I think that most of those who support the argument from consciousness could be persuaded to abandon it rather than be forced into the solipsist position. They will then probably be willing to accept our test.
-- Alan Turing
He does go on to say:
I do not wish to give the impression that I think there is no mystery about consciousness. There is, for instance, something of a paradox connected with any attempt to localise it. But I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper.


So he's not trying to totally define it, but he did make one specific claim about the nature of consciousness, which is that if solipsism is incorrect, then computers can have it. You might think that's a fairly weak overall claim, but it is in there.

But more importantly, you initially claimed that he didn't want to contribute to the philosophy of the mind, when clearly the entire point of his paper was to contribute to the philosophy of the mind, because he published it in a journal on the philosophy of the mind.
posted by delmoi at 11:00 AM on March 20, 2013 [4 favorites]


I don't have a lot of faith in philosophy on its own to resolve this. It tends to have a hard time imagining certain things (like limit processes)--perhaps because of its near-total dependence on language--and creates conceptual categories that often reinforce distinctions where there need not be

People create conceptual categories with or without philosophers. The best thinking on the subject of consciousness has come from philosophers (so has some terrible and mediocre thinking).

You yourself write things like "no mathematical model ever made has subjectivity in it" and "no one has yet made a machine that has become self-aware." The first one perfectly reflects philosophical confusion about subjectivity -- since mathematical models of agents with subjective beliefs obviously don't count for you, you must mean "no mathematical model ever made has *real* subjectivity in it," which is just a category error. For the second one, depending on how you define "self-aware" something as simple as a thermostat is self-aware -- its behavior changes based on its own inner state.
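The thermostat point can be sketched in a few lines. The class and hysteresis thresholds below are invented for illustration, and "self-aware" here means only the thin sense leopard describes: the device's behavior depends on its own inner state, not just its input.

```python
class Thermostat:
    """A device whose output depends on its own inner state."""

    def __init__(self, setpoint: float):
        self.setpoint = setpoint
        self.heating = False  # the "inner state" it consults

    def step(self, room_temp: float) -> bool:
        # Hysteresis: identical inputs can produce different outputs,
        # depending on what the thermostat is already doing.
        if self.heating:
            self.heating = room_temp < self.setpoint + 1.0
        else:
            self.heating = room_temp < self.setpoint - 1.0
        return self.heating

t = Thermostat(setpoint=20.0)
t.step(18.0)   # cold room: heating turns on
t.step(20.5)   # above setpoint, but it stays on because it was already on
```

A fresh thermostat fed 20.5 directly would stay off, so the input alone doesn't determine the behavior; the inner state does. Whether that counts as "self-awareness" is, of course, exactly the definitional question at issue.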

If you believe that we will only understand qualia when there is a mathematical model for qualia, and it turns out that this is an incoherent (not-even-wrong) concept because qualia are not the kind of entities that it makes sense to mathematically model, then you've committed to believing that we can never understand qualia, and no amount of scientific advance will ever change your mind. Because you've made a philosophical error.

science can just answer "yes" to both, and feel comfortable thinking of these two "opposites" as coexisting descriptions of reality.

We're already at this point -- science can just say that people are wrong about their intuitions and feel comfortable thinking of "qualia" and neural firings as coexisting descriptions of reality. For some reason you don't think this is enough, you want there to be Maxwell's laws for the perception of the color red. This is a philosophical issue, not a scientific issue.
posted by leopard at 11:10 AM on March 20, 2013 [3 favorites]


Today's materialists are desperately trying to find a morsel of freedom somewhere down in the infinitesimal that will somehow legitimize their faith in their own inner freedom. At the same time, they deny any kind of inner freedom to any other life-form.

I can't speak for all of "today's materialists" but I believe that human freedom is neatly divorced from the issue of whether the universe is deterministic or stochastic. It just doesn't matter -- freedom is a function of having capabilities (the ability to develop oneself, the ability to be free from oppression from other human beings, the ability to shape one's environment). It's not a function of being free from the laws of physics or being able to achieve completely arbitrary ends on a whim. As such some measure of freedom is available to a number of other life-forms.
posted by leopard at 11:16 AM on March 20, 2013 [3 favorites]


I know what a philosophical zombie is. I mean, I play video games with characters that are often surprisingly realized, and it's not hard to imagine a video game character who is refined to the point of being indistinguishable from a human, but it'd still be a processor crunching numbers.
posted by Pope Guilty at 11:17 AM on March 20, 2013


If it was truly indistinguishable from a human, then I'd be okay granting that video game character "consciousness" even if the mechanism for consciousness was different than ours. In fact, that seems more sensible to me than the invention of a non-materialist space or "substance" for whatever we'd like to think distinguishes us from complex non-organic machines.

I'm nothing if not consistent, for a materialist.
posted by PhoBWanKenobi at 11:33 AM on March 20, 2013 [1 favorite]


delmoi: How is that different from a real brain? You can't point to any part of a real brain and say "that's where the consciousness is!" either.

And that is the point of Chalmers' philosophical zombie and the 'explanatory gap' argument of Kripke as I understand it. Our physical understanding of the universe does not, and perhaps can not, predict consciousness, therefore physicalism is false.

leopard: I'm not sure how mathematical models are going to help. Quite frankly, this is a philosophical problem

Other higher order properties in nature, even population dynamics in biology(?), can be represented mathematically and ultimately predicted with models built entirely from quantum physics(??). Perhaps a sufficient mathematical representation of conscious states would be isomorphic to our experience and capable of predicting new experiences. If our physical models don't predict consciousness, but instead we have to arbitrarily assign conscious states to the models to match what we personally experience, then I am going to suspect the models are not sufficient. I personally have a very difficult time grasping non-reductive, higher-order or abstract explanations. As far as I'm concerned, if it can be simulated it is reductive and we understand it, if it can't it is a mystery and we don't. I guess I need to read "Being No One." I have no background in biology whatsoever which doesn't help.

As I take it, Carroll and Dennett say consciousness is not a mystery, but "folk psychology" makes it so. Turing seemed to admit it might be. I remember Chomsky saying something to the effect that it is not our direct experience that needs to be discarded in order to retain the laws of physics, it is the laws of physics that have kept changing and will keep changing to ever more closely predict our direct experience.

What I have in mind is that we are eventually able to do neurological experiments to precisely determine neural correlates of consciousness, and out of which scientific predictions can be formulated, and out of that our physics may indeed change such that it can formulate mathematical models for conscious states.
posted by Golden Eternity at 12:46 PM on March 20, 2013 [1 favorite]


Today's materialists are desperately trying...

I'm happy with "I don't know (and you probably don't either.)" But I'll grant that having considered that humans are probably not exceptional, my ethical conduct toward non-human organisms and entities is something to take seriously.
posted by CBrachyrhynchos at 1:07 PM on March 20, 2013


Wait, what on earth is the connection between population dynamics in biology and quantum physics?

My understanding is that mathematics is "unreasonably effective" at describing the behavior of fundamental particles -- that is, the language of particle physics is basically mathematics -- but is only "reasonably effective" at describing higher-order things like population dynamics. You have a fairly simple model with some simplifying assumptions, and hopefully you can tweak it so that the model qualitatively matches some real-world phenomena.
posted by leopard at 1:08 PM on March 20, 2013


Also, what would a mathematical model of consciousness even look like? I have no idea what this means, except possibly as some shorthand for "scientific truth = mathematical equations."
posted by leopard at 1:11 PM on March 20, 2013


Also, what would a mathematical model of consciousness even look like?

Yes, that is the hard problem of consciousness in a nutshell: how could the mechanical evaluation of a set of rules produce actual experiences?
posted by Pyry at 1:18 PM on March 20, 2013 [2 favorites]


if it can be simulated it is reductive and we understand it, if it can't it is a mystery and we don't.

If we build an intelligent robot and it tells us that it has feelings and experiences, have we succeeded in simulating consciousness or not? How are we going to know? The whole qualia debate hinges on the idea that a simulation of qualia is not real qualia, you can talk and act as if you see the color red but that's not the same thing as actually experiencing red, which is this wonderful ineffable experience that can't be reduced to brain chemistry and physics.
posted by leopard at 1:19 PM on March 20, 2013 [1 favorite]


I love that this debate is happening while I work on a book for thirteen year olds about robots with apparent feelings and whether or not those feelings are "real." Seriously, metafilter, stay awesome.
posted by PhoBWanKenobi at 1:50 PM on March 20, 2013 [4 favorites]


If we build an intelligent robot and it tells us that it has feelings and experiences, have we succeeded in simulating consciousness or not? How are we going to know?

In my opinion we do not. If we create computer chips or something that can create conscious experiences of new primary colors we've never experienced before (or a new smell or sound or something of this sort) and I wire them into my visual cortex and retina or whatever and start seeing the new colors, then I will believe we have an understanding of consciousness.

Wait, what on earth is the connection between population dynamics in biology and quantum physics?

Of the 6B people on the planet or whatever, I may be the last person you would want to ask if this is possible. That's why I added question marks. It was brought up before, so I used it as an example. What I had in mind is something like an array of structures, where each structure has the properties of an individual member of the population (of a virus, let's say). A virus structure would contain properties of the virus and also contain structures of individual proteins, the proteins of molecules, the molecules of atoms, and so forth. I'm not sure what happens when the virus reproduces and how the new structures would be created and added to the arrays. There would also be a model of the environment. The size of the virus array would be the population. Plausibly, the model could go down all the way to the quantum mechanical level, and new states would be calculated based only on the fundamental laws of physics.
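A deliberately naive sketch of what I mean by arrays of nested structures (all names and formulas are invented placeholders; nothing here models real biochemistry):

```python
# Naive nested-structure sketch: a population is an array of virus
# structures; each virus contains proteins, each protein molecules.
# Strain names and formulas are made-up placeholders.
from dataclasses import dataclass, field

@dataclass
class Molecule:
    formula: str

@dataclass
class Protein:
    name: str
    molecules: list = field(default_factory=list)

@dataclass
class Virus:
    strain: str
    proteins: list = field(default_factory=list)

    def replicate(self):
        # Reproduction = constructing a new structure to append
        # to the population array.
        return Virus(self.strain, list(self.proteins))

population = [Virus("A", [Protein("capsid", [Molecule("C5H5N5")])])]
population.append(population[0].replicate())
# The population size is just the length of the array.
```

The open question, of course, is whether the update rules at the bottom of the nesting could really be just physics, which is the part I can't answer.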

Also, what would a mathematical model of consciousness even look like?

A bunch of numbers and relations between numbers. Isn't that what every mathematical model looks like? I doubt anyone is able to answer this very well, but maybe that doesn't mean it can't be done. Again, the idea would be an isomorphism between the model, the physical state of a brain and the world, and the subjective or experiential state being experienced, and furthermore the ability to predict new experiences. This still leaves a huge explanatory gap depending on what counts as an "explanation." Perhaps all of our explanations are lacking in one way or another.
posted by Golden Eternity at 2:03 PM on March 20, 2013


In my opinion we do not. If we create computer chips or something that can create conscious experiences of new primary colors we've never experienced before (or a new smell or sound or something of this sort) and I wire them into my visual cortex and retina or whatever and start seeing the new colors, then I will believe we have an understanding of consciousness.

If you truly believe this, why not be a solipsist? How do you know that I have qualia--that I have "experiences" and not just outward behavior which is evidence of nothing more than brain waves? If you need to experience another's higher order thought firsthand in order to believe it exists, how can you believe that anyone exists beside yourself?
posted by PhoBWanKenobi at 2:07 PM on March 20, 2013 [2 favorites]


I can't speak for all of "today's materialists" but I believe that human freedom is neatly divorced from the issue of whether the universe is deterministic or stochastic.
I remember the first time I heard the free will "question"; I thought it was completely ridiculous at the time. Mainly because it seemed like even if everything was composed of particles following pre-determined paths, there would still be no way to know what was going to happen in the future without simulating the entire universe. And because we couldn't actually do that, the future was effectively undecidable before it happened and therefore not actually 'determined'. (My mom gave me a copy of Gödel, Escher, Bach in middle school, OK?)

Now there's obviously a problem with that in that it assumes that the answers to mathematical questions do not exist until they are computed.

Of course the bigger problem is quantum theory, which in fact makes it totally impossible to predict the future even given perfect information on its current state.

So it's really not a question of free will vs. determinism, but rather free will vs. some kind of non-deterministic quantum randomness.

And of course the response to that is that quantum randomness is not our choice and so quantum randomness does not give us free will, which is obviously true.

Anyway, in computer science you have the concept of pseudorandom complexity, where you have a completely deterministic pseudorandom number generator that is said to be pseudorandom against a complexity class (like P, NP, BQP, whatever).

If you have a function that's pseudorandom against P, for example, then to any P-time algorithm it would effectively have free will, as its next output could never be predicted.

Obviously if there was some godlike being that was capable of thinking in NEXP or something, they might be able to predict what we would do in the future. But why should that be a problem for us?
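To make the pseudorandomness idea concrete, here's a toy hash-based generator (a sketch; treating SHA-256 as unpredictable is an assumption for illustration, not a security claim): deterministic on the inside, effectively unpredictable to anyone who doesn't hold the seed.

```python
# A completely deterministic generator whose next output is (believed)
# unpredictable to any efficient observer lacking the seed:
# deterministic "from the inside", effectively free "from the outside".
import hashlib

def prg_stream(seed: bytes, n_blocks: int):
    """Yield n_blocks chunks of 32 bytes, fully determined by the seed."""
    state = seed
    for _ in range(n_blocks):
        state = hashlib.sha256(state).digest()
        yield state

a = list(prg_stream(b"seed-1", 3))
b = list(prg_stream(b"seed-1", 3))
c = list(prg_stream(b"seed-2", 3))
# Same seed -> identical stream (determinism); different seed ->
# an unrelated-looking stream (unpredictability without the seed).
```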
Our physical understanding of the universe does not, and perhaps can not, predict consciousness, therefore physicalism is false.
Well, or there are a couple other possibilities:
1) Consciousness is false - and its seeming existence is illusory, like the movement of the snakes
2) philosophical zombies are false. Any attempt to create a philosophical zombie would cause the spontaneous creation of consciousness - this is similar to Turing's counter to the religious argument about machine consciousness, if god is all powerful he can give a computer consciousness whether we like it or not.

Instead of invoking god, we merely state that if consciousness is an emergent property, then it may not be possible to prevent it from emerging from something we create.

3) As Turing said about computer consciousness, we would have no more ability to know if the philosophical zombie is conscious than we do anyone else - we can only rely on their self-reporting. If the "phil-zomb" reports that it is conscious, then how can we truly know that it hasn't spontaneously developed it any more than we know that people besides ourselves are conscious?

(Also, what happens if the philosophical zombie becomes a materialist who claims consciousness does not exist and is just an illusion (which he claims to experience)? If there are people who agree with him, then is the phil-zomb correct while the other people are just in denial?)
If we create computer chips or something that can create conscious experiences of new primary colors we've never experienced before (or a new smell or sound or something of this sort) and I wire them into my visual cortex and retina or whatever and start seeing the new colors, then I will believe we have an understanding of consciousness.
That seems really odd, but at least you are willing to put forward a test that would falsify your beliefs.
A bunch of numbers and relations between numbers. Isn't that what every mathematical model looks like? I doubt anyone is able to answer this very well, but maybe that doesn't mean it can't be done. Again, the idea would be an isomorphism between the model, the physical state of a brain and the world, and the subjective or experiential state being experienced, and furthermore the ability to predict new experiences. This still leaves a huge explanatory gap depending on what counts as an "explanation." Perhaps all of our explanations are lacking in one way or another.
But earlier you said a robot could not be conscious. If we had a mathematical model of consciousness that was fully isomorphic, couldn't we run that model on a computer chip? Wouldn't that chip then have something isomorphic to 'experience'? Do you think you can have events that are isomorphic to qualia in a computer chip without actually 'being' qualia?
posted by delmoi at 2:22 PM on March 20, 2013 [1 favorite]


delmoi: With regard to Turing, this statement: [...] is not evidence for this statement: [...] And furthermore if he didn't want to contribute to the philosophy of the mind, why did he publish his paper in a philosophy journal titled Mind?

You're quite right. I shouldn't have said that Turing didn't intend to contribute to the philosophy of mind. The Turing test does contribute to the field of consciousness studies... as do (to the distress of many philosophers) the fields of machine learning, robotics, and AI, which all proceed in a similar way: by ignoring the hard problem of consciousness and just trying to solve "practical" problems. This has been extraordinarily productive. However, my intention was to point out that Turing worked from a mathematician's perspective, carefully posing his problem to avoid difficulties with nasty philosophical issues. I am very familiar, and perhaps too comfortable, with this style.

leopard: Also, what would a mathematical model of consciousness even look like? I have no idea what this means, except possibly as some shorthand for "scientific truth = mathematical equations."

I'll cop to the bias in your last equation. I don't know what it would look like. I also don't want to belabor apparent differences which always seem to get amplified in threads like this, since I suspect that they might in reality be slight.

A personal note that might shed some light on where I'm coming from: I devise models and experiments to study the human neuromotor system's regulation of goal-directed movements. In so doing, I am always acutely aware that my models are "dead". Goal-directedness is always exogenous information assumed in the model, and "intention" is something that, well, we'd just rather not mention. It's obvious that this is a problem, in part because the models are so fragile: experiments have to be carefully constructed so that humans don't "mess them up". Many of my colleagues, however, see little intrinsic difference between their models and the human subjects. I'm not saying that they don't see differences between models and reality--models can be more or less sophisticated, and do a better or worse job of matching data. What I'm talking about is the intrinsic and inescapable fact that none of the models have an "interior life" modeled in them. So this is something I think quite a bit about (well, not so much when I'm doing things I can publish... maybe more when I'm in a post-work altered state...). I'm not expecting a grand mathematical solution. I'd just be happy with a decent idea of how to define my "states" so that some degree of "interiority" is included. Yes, a "mathematical model of agents with subjective beliefs" would count for me. I just don't know an esthetically pleasing, compelling, and scientifically useful way to do that--at least for the stuff I'm doing. Maybe you can vector me to the literature I'm missing...

What bugs me is that by even talking about such things one is stepping into a mine field of scientific cultural taboos. Scientists are terrified of any accusation that they're being "teleological" (even when studying goal directedness!!) or even worse, that they've "gone off the rails" and become followers of "woo". I'm too much of a free-thinker (contrarian, troll, your choice) to be happy with that.
posted by mondo dentro at 2:48 PM on March 20, 2013 [2 favorites]


What bugs me is that by even talking about such things one is stepping into a mine field of scientific cultural taboos. Scientists are terrified of any accusation that they're being "teleological" (even when studying goal directedness!!) or even worse, that they've "gone off the rails" and become followers of "woo".

This kind of intellectual straight-jacketing destroys civilizations.
posted by No Robots at 3:02 PM on March 20, 2013 [1 favorite]


Oh, come off it. Scientists get criticized for engaging in teleology because there's not a scrap of evidence for teleology in nature and because engaging in teleological thinking is a classic red flag for magical thinking.
posted by Pope Guilty at 4:01 PM on March 20, 2013 [5 favorites]


Scientists are terrified of any accusation that they're being "teleological" (even when studying goal directedness!!) or even worse, that they've "gone off the rails" and become followers of "woo".

Right. Feser comments on this deep logical confusion of metaphysics with magic. The same kinds of experiments may be interpreted under different metaphysical assumptions, and those assumptions, by definition, cannot be proven with scientific evidence: rather, they are the frame within which scientific evidence is defined.
posted by shivohum at 5:08 PM on March 20, 2013 [1 favorite]


1) Consciousness is false - and its seeming existence is illusory, like the movement of the snakes

I equate consciousness with experience. To say consciousness is false is to say we don't experience anything, which seems crazy. I guess I don't understand. I can understand better the complaint that consciousness is poorly defined or not coherent: To speak of a world without consciousness is nonsense, therefore consciousness is incoherent. But I recall someone else on metafilter suggesting many ideas in science started out poorly defined.

Instead of invoking god, we merely state that if consciousness is an emergent property, then it may not be possible to prevent it from emerging from something we create.

I don't understand emergentism. If the property can be modeled mathematically/logically in theory then it is not emergent, it seems to me. If it can't, then I have trouble seeing in what sense it has been explained.

But earlier you said a robot could not be conscious. If we had a mathematical model of consciousness that was fully isomorphic couldn't we run that model on a computer chip? Wouldn't that chip then have some thing isomorphic to 'experience'? Do you think you can events that are isomorphic to Qualia in a computer chip without actually 'being' Quaila?

1/x == x?
If I simulate a hydrogen atom in my computer, is my computer a hydrogen atom?
If I simulate photosynthesis in my computer, is my computer doing real photosynthesis?

That seems really odd, but at least you are willing to put forward a test that would falsify your beliefs.

Odd? Ouch! It's just a thought experiment, obviously. It is disappointing to me that this is taken as an odd question, yet the Turing test (which I find to be very unscientific) is so appealing to everyone.

Color vision seems to me like a good gateway into understanding consciousness since color is so easily identifiable and so connected to physical phenomena that we do understand well. The fact that three arbitrary frequencies of light are used to create the spectrum of color we experience is amazing, but why just three frequencies? Why only three primary colors? What actually is color (the experience not the frequency of light)? Is it possible to create full color experiences just in the brain without light even hitting the retina? Are more primary colors theoretically possible? If we could answer questions like these, it seems to me we'd be on our way to understanding consciousness better.
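Here's a toy sketch of the "why three" question (the Gaussian sensitivity curves below are crude stand-ins I made up, not measured cone data): any light spectrum, however complicated, collapses to just three cone responses, which is why three primaries suffice and why physically different lights can look identical.

```python
# Toy trichromacy: the whole spectrum of a light collapses to three
# cone activations, so "color" is a three-number summary. Peak
# wavelengths are roughly right for L/M/S cones; the Gaussian shapes
# are a made-up simplification, not measured sensitivities.
import math

CONE_PEAKS = {"L": 560.0, "M": 530.0, "S": 420.0}  # nm, approximate

def cone_response(spectrum, peak, width=50.0):
    """spectrum: list of (wavelength_nm, intensity) pairs."""
    return sum(i * math.exp(-((w - peak) / width) ** 2) for w, i in spectrum)

def perceived(spectrum):
    return {cone: cone_response(spectrum, peak)
            for cone, peak in CONE_PEAKS.items()}

# Physically different lights reduce to just three numbers each:
pure = perceived([(560.0, 1.0)])               # a single 560 nm light
mix = perceived([(530.0, 0.5), (590.0, 0.5)])  # a two-wavelength mixture
```

None of this touches the hard part, of course: it says why three numbers suffice to describe the input, not what the experience of the resulting color is.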

3) If the "phil-zomb" reports that it is conscious then how can we truly know that it hasn't spontaneously developed it anymore then we know that people besides ourselves our conscious?


If we build an intelligent robot and it tells us that it has feelings and experiences, have we succeeded in simulating consciousness or not? How are we going to know?

>In my opinion we have not.

If you truly believe this, why not be a solipsist? How do you know that I have qualia--that I have "experiences" and not just outward behavior which is evidence of nothing more than brain waves? If you need to experience another's higher order thought firsthand in order to believe it exists, how can you believe that anyone exists beside yourself?


I think John Searle's Chinese Room thought experiment shows how problematic the idea that a Turing machine is conscious and actually understands things is. Also, humans love to anthropomorphise everything, and human intuition is notoriously unreliable. Relying on a human's black-box impression of something as "having a mind" or "not having a mind" as the only test to validate our understanding of consciousness seems very unscientific to me. When I was thinking about this a while ago I seemed to be coming to the conclusion that the only way some sort of "information processing" theory of consciousness would be true is if something like Tegmark's mathematical monism were true, and this would still seem to require some sort of emergentist magic to get from numbers to the actual physical world we experience. The physical world, and the conscious world to the extent that they can be differentiated, are just emergent properties of the causal structure of a universe that can best be described as purely mathematical?
posted by Golden Eternity at 7:33 PM on March 20, 2013


The idea that there is some inherent colorness to color, rather than simply how our brains interpret a particular set of stimuli, is an unwarranted leap.
posted by Pope Guilty at 7:39 PM on March 20, 2013 [1 favorite]


But earlier you said a robot could not be conscious.

When did I say that? (Not that I don't contradict myself.) I don't believe a Turing machine is conscious, but that doesn't mean that once we understand consciousness it couldn't be created artificially.

The idea that there is some inherent colorness to color, rather than simply how our brains interpret a particular set of stimuli, is an unwarranted leap.

The metafilter blue I am experiencing right now is very real to me. To say that it is only an interpretation or whatever seems odd. In any event the same question applies. Can you predict how the brain could interpret things differently such that I see new primary colors? Or can you predict that the three primary colors we see are the only interpretations possible? If you could, then it would seem you have a real explanation of what is going on.
posted by Golden Eternity at 7:45 PM on March 20, 2013


If we create computer chips or something that can create conscious experiences of new primary colors we've never experienced before (or a new smell or sound or something of this sort) and I wire them into my visual cortex and retina or whatever and start seeing the new colors, then I will believe we have an understanding of consciousness.


You are making an assumption here that conscious experiences can be "wired" into your brain. That is, conscious experiences have some existence that is independent of your brain, in the same way that TV programming exists whether or not your TV is on. It definitely *feels* like this is the case, but when I try to think about this logically I can't get it to add up. So I tend to think that consciousness is something of an illusion -- I'm not saying that I'm not "really conscious", but that my experiences have this vivid reality to them that is deceptive. I mean, it's not like I have privileged access into my brain -- I have no idea what my brain is doing, I don't even have any first-hand evidence that I have a brain. So maybe this feeling that there is this experience of seeing "red" that can't be entirely reduced to some patterns of electrical firing in my brain is just an illusion.

I think John Searle's Chinese Room thought experiment shows how problematic the idea that a turing machine is conscious and actually understands things is.

I think the Chinese Room thought experiment was pretty effectively demolished by Dennett and Hofstadter in "The Mind's I" (I think that's where I read about it). There's a sleight of hand regarding scale. If we shrank ourselves to the size of a fly and wandered around inside of a human brain, it wouldn't look like there was any thinking or understanding going on in a human brain either.

Plausibly, the model could go down all the way to the quantum mechanical level, and new states would be calculated based only on the fundamental laws of physics.

I'm not very informed but I don't think any scientific theories actually work this way. It's not like chemistry is just the application of the basic physics. Chemistry is chemistry, a subject that operates on a different level and whose laws are not simply extensions of fundamental particle physics.
posted by leopard at 7:54 PM on March 20, 2013 [1 favorite]


You are making an assumption here that conscious experiences can be "wired" into your brain. That is, conscious experiences have some existence that is independent of your brain, in the same way that TV programming exists whether or not your TV is on.

I didn't think I was making that assumption. I said "computer chips," but this could only work if I am wrong about Turing machines being capable of consciousness. Perhaps it would need to be something else like artificial neurons. Or, let's say we are able to grow real new neurons and other brain cells into the brain and the retina to do this. Something like this: previously. Perhaps, when they do figure out consciousness they will be growing actual brains in the lab.

I think the Chinese Room thought experiment was pretty effectively demolished by Dennett and Hofstadter in "The Mind's I"

I'm guessing this is only true to Dennett, Hofstadter and their followers, but I have no idea. This doesn't seem to be the conclusion of the Stanford Encyclopedia of Philosophy, but the article is from 2004.
Leibniz and Searle had similar intuitions about the systems they consider in their respective thought experiments, Leibniz’ Mill and the Chinese Room. In both cases they consider a complex system composed of relatively simple operations, and note that it is impossible to see how understanding or consciousness could result. These simple arguments do us the service of highlighting the serious problems we face in understanding meaning and minds. The many issues raised by the Chinese Room argument may not be settled until there is a consensus about the nature of meaning, its relation to syntax, and about the nature of consciousness. There continues to be significant disagreement about what processes create meaning, understanding, and consciousness, as well as what can be proven a priori by thought experiments.
If we shrank ourselves to the size of a fly and wandered around inside of a human brain, it wouldn't look like there was any thinking or understanding going on in a human brain either.

Maybe if the fly understood consciousness, and knew what neural correlates of consciousness to look for, it would. It might point to something and say, "yup, those neurons firing up over there are creating a conscious experience of red because they are doing this and that" or something.
posted by Golden Eternity at 8:57 PM on March 20, 2013


In my opinion we do not. If we create computer chips or something that can create conscious experiences of new primary colors we've never experienced before (or a new smell or sound or something of this sort) and I wire them into my visual cortex and retina or whatever and start seeing the new colors, then I will believe we have an understanding of consciousness.
Yes, this is true if you define "consciousness" as comprising solely the neurochemical processes of sense organs. But now you're begging the question. "If we can create a computer which simulates this version of consciousness that I just defined, then we will have understood the version of consciousness I just defined."
posted by deathpanels at 10:34 PM on March 20, 2013 [2 favorites]


First I'd like to note, I don't think qualia is a useful concept and I don't think they "exist" the way consciousness exists. I am definitely not in the qualia camp.

This talk of Turing machines and Chinese rooms misses what I think is the most important point of this debate. I think it is perfectly consistent that we could create conscious artificial life forms, inject the experiences of one thinker into the mind of another, and do all sorts of other amazing things, all without understanding consciousness. (The analogy here to people breathing, making balloons, and stoking fires all without understanding oxygen should be obvious.) The question isn't "what does the brain do when we have an experience?" Well, that's certainly one question, called the Easy Problem of Consciousness. The Hard Problem is, "why, while it's doing all those things, do we also have experiences?"

Now, maybe the answer is that hammers have the experience of hitting nails, but due to the fact that we can't experience the thoughts of others (I'm going to coin a Dennett-like term for this fact, "the eerie of mind"), we'll never know. But it seems to me perfectly consistent to think of beings far more intelligent and sophisticated than us, capable of far greater technological innovation and far deeper understanding of the universe, all without having conscious experiences (Peter Watts's Blindsight explores this in great detail and with great insight). Such beings would share many cognitive features with us, but not share the cognitive feature of conscious experience. Basically, the assumption that consciousness falls out of any sufficiently complex system seems entirely unmotivated. It seems to me that consciousness can only arise in systems designed to be conscious. (Design in the rhetorical sense of evolution and etc, etc, blah, blah.) As such, we need to account for the design features of the brain that give rise to conscious experience, and not merely to the physical processes that follow that design.

What I failed to make clear with my painting analogy is that you can't explain to someone what a painting is without talking about artists and viewers. Because if you just describe the physical content of a painted canvas you're missing important information about paintings. But this has nothing to do with paintings as such; it has to do with explanations. What I want isn't to pin down consciousness as an invisible glowing orb in the middle of our chakras, nor as a physical, neurochemical process in our brain, but as a logical arrangement of elements, totally agnostic to substrate, that gives rise to subjective experience. I want to explain why a brain can lead to experience, rather than just to biological action and intentionality. The Hard Problem of Consciousness shouldn't necessarily have to invoke the brain. The brain can and will certainly be a tool in addressing the problem, but only because it's the only tool we have that we (by consensus) know is a conscious system. My problem with the Dennett position certainly isn't that it claims that there's nothing extra going on in the brain beyond the physical; it's that it refuses to acknowledge that there's something that needs to be explained. I want to explain the feeling.
posted by cthuljew at 10:54 PM on March 20, 2013 [2 favorites]


Yes, this is true if you define "consciousness" as comprising solely the neurochemical processes of sense organs. But now you're begging the question. "If we can create a computer which simulates this version of consciousness that I just defined, then we will have understood the version of consciousness I just defined."

I was intending to define "consciousness," or a form of it anyway, as the experience of color and in particular a new primary color, *not* neurochemical processes. Producing a simulation or mathematical model that can predict phenomena that can then be verified in the real world does seem like it is producing a valid scientific explanation to me, but that wasn't part of the thought experiment either.

The point was that explaining what comprises or causes color experience, to the extent that new colors not currently available to humans could be predicted as possible or not (and created if possible), would further our understanding of color. The test for such a theory would be actually seeing the new color directly, and that would require integrating it with the consciousness I have now, which I presumed would require "wiring" it into my brain. The point of using computer chips was to demonstrate that I was not requiring neurochemical processes, but rather a black box that would be defined by the theory. I fail to see how this is begging the question; the premises were:

1) color sensation is a conscious experience
2) me seeing color proves it exists
3) predicting and creating a new color would validate a theory of color consciousness (not necessarily a complete theory but one with some predictive power).

I think it is perfectly consistent that we could create conscious artificial life forms, inject the experiences of one thinker into the mind of another, and do all sorts of other amazing things, all without understanding consciousness.

Hmm. After reading cthuljew's comment I wonder if the 3rd premise is still in doubt.
posted by Golden Eternity at 11:51 PM on March 20, 2013


Scientists get criticized for engaging in teleology because there's not a scrap of evidence for teleology in nature...

Well, you mean no evidence except for minds, right?
posted by mondo dentro at 5:20 AM on March 21, 2013


If we shrank ourselves to the size of a fly and wandered around inside of a human brain, it wouldn't look like there was any thinking or understanding going on in a human brain either.

And you don't even have to go that far "down". If you measure the neuromuscular activity associated with a smooth motion of your arm as you touch your finger to your nose, you see a lot of noisy-looking crap. So a lot of the research effort for, say, human-machine interfacing and prosthetics is focused on extracting the structure of these seemingly noisy signals. It is this mapping of the noisy "microscale" at the neural level to the smooth, regular "macroscale" of motion that I find particularly fascinating. It's pretty clear that there's a statistical physics analogy here: if we look at a vibrating guitar string at the atomic or molecular scales, we just see thermal randomness, while the coherent mode shapes of the vibrations are at the scale of human interaction. But that's not a complete analogy, by any means...
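A minimal numerical illustration of that noisy-microscale/smooth-macroscale mapping (a sketch with made-up noise levels, not real neuromuscular data): a moving average over the "micro" noise recovers the "macro" shape.

```python
# A smooth "macroscale" motion buried in "microscale" noise, recovered
# by averaging. The noise level (std 0.5) is made up for illustration.
import math
import random

random.seed(0)
n = 1000
smooth = [math.sin(2 * math.pi * t / n) for t in range(n)]
noisy = [s + random.gauss(0, 0.5) for s in smooth]

def moving_average(xs, window=50):
    half = window // 2
    out = []
    for i in range(len(xs)):
        chunk = xs[max(0, i - half):i + half]
        out.append(sum(chunk) / len(chunk))
    return out

def rms_error(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

recovered = moving_average(noisy)
err_noisy = rms_error(noisy, smooth)          # raw signal: mostly noise
err_recovered = rms_error(recovered, smooth)  # averaged: mostly structure
```

The guitar-string analogy is the same move: the coherent mode lives at a scale where the thermal randomness averages out.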
posted by mondo dentro at 5:55 AM on March 21, 2013 [1 favorite]


Well, you mean no evidence except for minds, right?

Minds are only evidence of purpose if you start by presuming that they are.
posted by Pope Guilty at 6:05 AM on March 21, 2013 [2 favorites]


Since it's impossible to subjectively distinguish between purpose, and the appearance of purpose (ie, I can't tell if I'm choosing to do A, or if I just feel like I'm choosing to do A), 'purpose' itself as a concept is either meaningless or axiomatic, your choice <-- ha ha.
posted by unSane at 6:26 AM on March 21, 2013 [1 favorite]


Simon Blackburn thinks the book should be banned (via Maverick Philosopher).
posted by No Robots at 9:53 AM on March 21, 2013


Minds are only evidence of purpose if you start by presuming that they are.

I think you're missing my point, on multiple levels, so let me try to clarify.

First, minds are "purpose generating machines". They illustrate the existence in the universe of systems built around purpose. Now, you can explain that as an epiphenomenon or not, but to say there is "no evidence of teleology" is only true for people who have already rejected the notion of teleology. You're confusing theory and observation by doing this.

For example, it is wrong to deny that there is evidence that the sun goes around the earth. Of course, now that we have embedded that evidence in a coherent theoretical framework (classical mechanics), we believe that interpretation is profoundly wrong. But the evidence supports a Ptolemaic model rather well. Likewise, there is plenty of evidence in the world for teleology--which explains its persistence in human culture. It's just that, rightly in my view, we abandoned the emphasis on this sort of thinking during the rise of science: we have found that many, many things that seem to be explainable as coming from a purpose can instead be explained as purposeless, emergent behavior, and that the overemphasis of teleology from the earlier "magical" ages was a huge source of error. So, I'm not interested in reversing the insights gained in this centuries-old pursuit. However, we are in a new era of science that is not just focused on the dead matter in the universe, for which this approach was wildly successful, but seeks to model and understand naturally occurring autonomous agents possessing some degree of self awareness (i.e., "animals"). Do you really want to not talk about "purpose" in that context? My opinion is that to do so frequently leads to wildly non-parsimonious models.

Second, the issue of what one "presumes" when interpreting data is at the core of this entire thread--as well as at the heart of mathematical modeling. The math part of any model is meaningless. That's one of the main reasons math (and here I'm talking about applied mathematics and mathematical science) is so powerful: the same logical structure can be imbued with different meanings, depending on context--that is, depending upon what one "presumes".

Finally, I may have given the mistaken impression that I'm advocating for boneheaded teleology, perhaps because you've spent a lot of time arguing with creationists. I'm not at all advocating for that. I'm speaking as an applied mathematician and modeler, and I'm saying that the fear of having one's arguments called teleological--which carries the threat of being ostracized from the scientific community--has led many scientists to an irrational hatred of anything that they feel comes too close to using arguments based on purpose. This is closing off possible avenues for creative thinking. And while an erudite tag-team cage match with Dennett and Dawkins vs. Nagel and Chalmers would be most entertaining, I don't see it as improving the situation all that much, because successful theoretical breakthroughs have often come from someone realizing how something can be viewed as "both/and" rather than "either/or". Just think of the way the wave/particle debate was resolved. My skeptical view is that "real" philosophers (as opposed to philosophically oriented scientists) would have simply kept elaborating why "wave-ness" was more like "how the universe is" than "particle-ness".
posted by mondo dentro at 9:56 AM on March 21, 2013 [1 favorite]


However, we are in a new era of science that is not just focused on the dead matter in the universe, for which this approach was wildly successful, but seeks to model and understand naturally occurring autonomous agents possessing some degree of self awareness (i.e., "animals").

And this line of thought forces us to acknowledge that nothing is really dead, that everything is in some sense alive. At the same time, it reduces human teleology to natural proportions as nothing more than the subjective experience of an object's inertia, ie. its tendency to maintain itself in motion.
posted by No Robots at 10:05 AM on March 21, 2013


And this line of thought forces us to acknowledge that nothing is really dead, that everything is in some sense alive.

Yes. And this is what's freaking "materialists" out. But... why? What is the rational reason for not considering this possibility? Despite what many say, it's not the evidence.

It's not like consciousness would be the first thing in the universe that behaves like this (being present in all things, even when it's not obvious it is). Both quantum and relativistic effects are present in all matter/energy at all times. It's just that their respective phenomena do not become obvious until we reach certain thresholds (the Planck length for the former, near the speed of light for the latter).

Why not the same for consciousness? I might be totally wrong--let me be more blunt: I'm most likely totally wrong--but I don't need to be a New Age purveyor of woo, or even stop self-identifying as a materialist, to entertain the notion!
posted by mondo dentro at 10:18 AM on March 21, 2013


"A Darwinist Mob Goes After a Serious Philosopher"
posted by No Robots at 12:42 PM on March 21, 2013 [1 favorite]


I equate consciousness with experience. To say consciousness is false is to say we don't experience anything, which seems crazy.
Well, "seems crazy" is not a universal measure of something because different things seem crazy to different people. Furthermore, some of the effects of quantum physics, like the double-slit experiment, can "seem crazy". Optical illusions can "seem crazy". So it stands to reason that things can both seem crazy and be true.
If the property can be modeled mathematically/logically in theory then it is not emergent, it seems to me.
Well, come up with a mathematical definition of consciousness and then this might matter. As it stands now it seems like no one will ever come up with a mathematical/testable model. Look what happened when Turing came up with a proposed test - everyone decided that 'consciousness' was defined as something that could not be tested for that way, and that if a computer could pass a Turing test it was just faking it.

The problem here is that you are assuming there is some mathematical definition, despite the fact that none exists that everyone can agree on.

The second problem is the assumption that things that can be modeled mathematically cannot emerge from simpler processes. In fact, lots of things that can be defined mathematically emerge from simple rules, such as entropy and evolution. A simple system will tend towards entropy increasing even if you don't code for it, and a simple mutation/selection system will produce interesting results that are not coded for.

So we know that a fundamental property of complex systems, X, can emerge in code even if we don't code for it. Again, without a mathematical definition of consciousness, we cannot say it would not emerge.
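The entropy point above can be made concrete with a toy simulation (a sketch I'm adding for illustration; the two-chamber setup, function name, and parameters are my own assumptions, not anything in the original comment):

```python
import random

def mixed_count(n=1000, steps=20000, seed=42):
    """Two-chamber 'gas' toy model: every step, one randomly chosen
    particle hops to the other chamber. Nothing in the rules says
    'increase entropy', yet the all-on-one-side start state decays
    toward an even ~50/50 mix, the maximum-entropy configuration."""
    rng = random.Random(seed)
    left = [True] * n  # all particles start in the left chamber
    for _ in range(steps):
        i = rng.randrange(n)
        left[i] = not left[i]  # the chosen particle hops across
    return sum(left)  # how many particles remain on the left
```

Calling `mixed_count()` returns something near 500: the even mixture emerges from local rules that never mention it, which is the sense in which entropy increase is not "coded for".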
If I simulate a hydrogen atom in my computer, is my computer a hydrogen atom?

If I simulate photosynthesis in my computer, is my computer doing real photosynthesis?
Okay, that’s a fairly interesting question, I don’t think there is a straight ‘yes/no’ answer.

First, keep in mind that in order for those things to happen, physically real electrons need to move around inside your computer. The simulation itself is not "immaterial". The simulation is physically real, made up of physically real things. It isn't the same thing as a hydrogen atom, but if the simulation is correct then it would be mathematically isomorphic to a real hydrogen atom.

You could not say from a mathematical point of view that they are different, with the exception that you need to apply the mathematical mapping functions to them to convert from real to simulated. Which you obviously cannot do, since you can not apply arbitrary mathematical formulas to physical objects. (If we could, we could take things out of reality and put them on a computer, and then create physical reality out of stuff on our computers, which would make the question of 'reality' of simulation more challenging.)

Now, one key difference though is that hydrogen and photosynthesis both involve quantum physics. Assuming that the brain is a classical machine, and that it stores data using some kind of encoding the way a computer does (i.e. a certain voltage gradient in a neuron, or perhaps RNA or something like that) then we can actually apply mathematical formulas to the encoding.


So, what I would say is that there is a difference between "materially real" and "mathematically real": the simulated hydrogen is a materially real 'thing' inside a computer, but not a real hydrogen atom. On the other hand, there is no mathematical distinction. And if we are trying to figure out mathematical definitions of things, you cannot have a mathematical definition of something that would exclude things that are isomorphic. The mathematical definition of a "Turing machine" also includes every other type of machine that is Turing complete, such as a "RAM machine", the abstract model that your computer is actually based on (physical Turing machines can be built, so long as you drop the requirement for an infinite supply of blank tape :))

So that’s why I think your idea of coming up with a mathematical definition can never happen - there is no way to create a mathematical definition that would exclude simulated minds. And, if you include them, everyone will disagree with you about their having consciousness. Otherwise we’d all just be using Turing’s definition.

(Oh, then you have the whole question of what ‘quantum’ reality actually is. There’s an interesting tech talk that explains the various double-slit experiments and he comes up with this “zero worlds” interpretation, in which case some might say that the hydrogen atom or photosynthesis aren’t really “real” in the first place. But that’s just one of the many interpretations of quantum mechanics.)
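The Turing machine / RAM machine point above is easy to demonstrate: here is a Python program (a "RAM machine" in the abstract-model sense) simulating a one-tape Turing machine. The runner and the binary-increment rule table are my own illustrative construction, not anything specified in the comment:

```python
def run_tm(tape, rules, state="S", head=None, blank="_", max_steps=1000):
    """Minimal one-tape Turing machine runner.
    rules maps (state, symbol) -> (new_symbol, move, new_state);
    reaching state 'H' halts the machine."""
    cells = dict(enumerate(tape))
    head = len(cells) - 1 if head is None else head
    for _ in range(max_steps):
        if state == "H":
            break
        sym = cells.get(head, blank)
        new_sym, move, state = rules[(state, sym)]
        cells[head] = new_sym
        head += {"L": -1, "R": 1}[move]
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# Increment a binary number: starting at the least-significant (rightmost)
# bit, turn trailing 1s into 0s until a 0 (or blank) becomes 1.
INC = {
    ("S", "1"): ("0", "L", "S"),
    ("S", "0"): ("1", "L", "H"),
    ("S", "_"): ("1", "L", "H"),
}
```

`run_tm("1011", INC)` gives `"1100"` (11 + 1 = 12 in binary): the same abstract machine, realized in a completely different physical substrate, which is the isomorphism being argued for.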
Color vision seems to me like a good gateway into understanding consciousness since color is so easily identifiable and so connected to physical phenomena that we do understand well. The fact that three arbitrary frequencies of light are used to create the spectrum of color we experience is amazing, but why just three frequencies? Why only three primary colors?
Why are you asking me and not a neuroscientist? Those all seem like empirical questions. I did take a class on neurochemistry in college, and IIRC you can stimulate images of lines and bars and stuff like that in people’s brains. Maybe not in color, but that seems like it’s more of a technological limitation than some kind of ‘philosophical’ limitation. I don’t see why the stimulation of color images would somehow be off limits to physical chemistry.


Adding a new color might be very difficult from a technological perspective; however, there are people who have four color receptors and can see with four different primary colors. So it is obviously possible for the human brain to be modified to do this. That doesn’t mean that an existing brain can be ‘upgraded’, obviously, but the development from a single cell to a brain+retina with four colors can happen.
I think John Searle's Chinese Room thought experiment shows how problematic the idea is that a Turing machine is conscious and actually understands things.
The Chinese Room is ridiculous. I don’t think it’s clear that someone wouldn’t be able to learn Chinese that way if given an infinite amount of time and enough computing resources (obviously with an infinite amount of time he could use a pencil and paper, but let’s give this poor guy a break here :)
human intuition is notoriously unreliable.
LOL. The only reason why we are having this argument is that some people are unwilling to let go of their intuitive sense of what consciousness is! The very first thing you said was that consciousness can’t be false because that “seems crazy”. But whether or not something “seems crazy” is an entirely intuitive judgement.

It is only because people are unwilling to accept unintuitive explanations that this is even an issue.
The metafilter blue I am experiencing right now is very real to me.
It feels real to you. But you only know that it feels real because you experience yourself experiencing it. The movement of the rotating snakes illusion seems just as real as the blue on the screen. But I know it’s not real. There is a contradiction between sensation and knowledge.

So, the sensation that you feel conscious is no more evidence that you are conscious than the fact that I see motion means that there is motion.
Chemistry is chemistry, a subject that operates on a different level and whose laws are not simply extensions of fundamental particle physics.
Chemistry is actually probably the only science that can be derived directly from quantum mechanics at this point. As you scale up to, say, biology it would become way too complex to actually do anything, although I did see a video lecture talking about how you can look at the quantum information in DNA molecules and do things like predict mutation rates (it was another Google tech talk).
Perhaps it would need to be something else like artificial neurons.
Neurons can be fully simulated on computer chips. So if you can do it with neurons you can do it with chips. You can even simulate them down to the quantum physical level if you want, although that would take a lot of processing power.

Maybe you should read up on current neuroscience. Just buy some college textbooks off Amazon and read through ‘em. Or you could try to teach yourself using MIT OpenCourseWare classes. They have a whole section on Brain and Cognitive Sciences, lots of classes about how neurons work, what we know about how they are arranged in the brain (not all that much), etc.
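For a sense of what the simplest neuron simulation looks like, here is a leaky integrate-and-fire model in Python. This is a deliberate textbook simplification (detailed simulations use far richer biophysics, e.g. Hodgkin-Huxley), and all parameter values are standard illustrative choices I'm supplying, not anything from the thread:

```python
def lif_spikes(current, v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0,
               tau_ms=10.0, r=10.0, dt_ms=0.1):
    """Leaky integrate-and-fire neuron, Euler-integrated:
        tau * dV/dt = -(V - v_rest) + R * I
    The neuron 'spikes' and resets when V crosses threshold.
    Returns the indices of the time steps at which it spiked."""
    v, spikes = v_rest, []
    for step, i_in in enumerate(current):
        v += dt_ms * (-(v - v_rest) + r * i_in) / tau_ms
        if v >= v_thresh:
            spikes.append(step)
            v = v_reset
    return spikes
```

A constant suprathreshold drive (`lif_spikes([2.0] * 5000)`) produces a regular spike train, while a weaker drive (`[1.0] * 5000`) produces none, since the steady-state voltage stays below threshold.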
Basically, the assumption that consciousness falls out of any sufficiently complex system seems entirely unmotivated. It seems to me that consciousness can only arise in systems designed to be conscious. - cthuljew
Yeah that’s pretty ridiculous. It's not even logical.
A) X is not a property guaranteed to emerge from a system of a certain complexity
B) therefore if X is present, it must be designed.
Clearly a logically invalid statement. Presumably the missing premise is that only things that must emerge from a complex system end up doing so. But that is obviously false. Evolution creates all kinds of crazy things. Consciousness may be as improbable as squid that fly on jets of water.
For example, it is wrong to deny that there is evidence that the sun goes around the earth. Of course, now that we have embedded that evidence in a coherent theoretical framework (classical mechanics), we believe that interpretation is profoundly wrong. - mondo dentro
In order to say that the sun does not go around the earth, you need a fixed frame of reference. Under relativity, things do not have actual positions, only distances from each other. So the sun and earth go around each other, mainly due to the sun’s gravity. Obviously the earth gravitationally orbits the sun, while the gravity of the earth has a negligible (but non-zero) effect on the sun. And the surface of the earth is an accelerating reference frame because it is rotating. But there is nothing in relativity that says the sun isn't going around the earth.
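The "negligible (but non-zero)" effect above can be quantified: both bodies orbit their common barycenter, which lies deep inside the Sun. A back-of-the-envelope check (the physical constants are standard values I'm supplying for illustration):

```python
# Sun-Earth barycenter: its distance from the Sun's center is the
# Earth-Sun distance scaled by Earth's share of the total mass.
M_SUN = 1.989e30     # kg
M_EARTH = 5.972e24   # kg
AU_KM = 1.496e8      # km, mean Earth-Sun distance
R_SUN_KM = 6.957e5   # km, solar radius

barycenter_km = AU_KM * M_EARTH / (M_SUN + M_EARTH)
# ~449 km from the Sun's center -- well under 0.1% of the solar radius,
# which is why "the Earth orbits the Sun" is such a good approximation.
```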
And this line of thought forces us to acknowledge that nothing is really dead, that everything is in some sense alive.
Yes. And this is what's freaking "materialists" out. But... why? What is the rational reason for not considering this possibility? Despite what many say, it's not the evidence.
- mondo dentro
If that's what you think then it seems pretty obvious that you don't even understand what materialists actually think. It's been scientifically established for over a century (IIRC) that there isn't any difference between living and non-living matter. "Alive" vs. "Dead" is just a value judgement, like "Beautiful" vs. "Ugly".

That is not to say that beauty does not 'exist', just that it's not some kind of fundamental substance in the universe, which is what you think is the case with consciousness. Which probably some of you people actually do think, given the fact that you probably experience beauty the same way you experience consciousness.
posted by delmoi at 3:00 PM on March 21, 2013 [2 favorites]


Yeah that’s pretty ridiculous. It's not even logical.

It wasn't meant as a logical statement but as an empirical one. Evolution designs things for certain purposes (i.e. survival or reproductive value), it doesn't just randomly create them, except as byproducts of design processes. So it seems that consciousness must have either had some survival or reproductive value (I'm guessing the latter) or it was a byproduct of some other cognitive process that did. I think there can be many other kinds of cognitive systems that can be just as complex as a human brain, but which would not be reflexively self-conscious the way a human brain is.

And, yes, only those things which must emerge will emerge, because in nature "can" and "must" are functionally equivalent. Since nature itself isn't teleological (despite certain parts of it acting teleologically) anything that happens must happen under that set of circumstances.

And for the record, if you think I'm in the "[consciousness is] some kind fundamental substance in the universe" camp, you are very wrong. Can't speak for anyone else, but I merely think it's a phenomenon that needs to be explained, like evolution or galaxy formation, not a nuisance that can just be handwaved away.
posted by cthuljew at 8:55 PM on March 21, 2013


tl;dr. sorry.

The question isn't "what does the brain do when we have an experience?" Well, that's certainly one question, called the Easy Problem of Consciousness.

I think it is more than this. The brain does a lot of things when we have experience that probably have nothing to do with consciousness (metabolizes glucose, consumes oxygen, etc). We need to find the things that "cause" or are "isomorphic" with consciousness. This is not an easy problem, but it seems to me there must be something there to be found other than "information processing." If we find it, perhaps it will provide insight into the hard problem in surprising ways, similar to how Newton may have changed how people think about the world.

But it seems to me perfectly consistent to think of beings far more intelligent and sophisticated than us, capable of far greater technological innovation and far deeper understanding of the universe, all without having conscious experiences (Peter Watts's Blindsight explores this in great detail and with great insight).

Intelligence requires understanding the meaning of things/words (aka intentionality), and it seems to me understanding requires consciousness. I haven't read the book but I found this random amazon review:
The team establishes communication with the aliens very early, but the linguist soon decides that the responses from the aliens are a “Chinese Room”, that is, responses dictated by a non-self-aware system, something that repeats back (in a highly sophisticated way) meaningless (to the alien) word-sounds, but so cleverly put together according to the rules of grammar that, to the humans, they seem like intelligent speech.

Which would seem to indicate the aliens actually do not understand what they are saying, individually or as a collective. It's cool that Watts wrote about CR. I'll have to check this out.

What I want isn't to pin down consciousness as an invisible glowing orb in the middle of our chakras, nor as a physical, neurochemical process in our brain, but as a logical arrangement of elements, totally agnostic to substrate, that gives rise to subjective experience.

I think a possible problem with this would be that a "logical arrangement" is 'observer-relative' as Searle calls it. It is something that only exists to a mind and can't be appealed to as an explanation of mind.

Can Information Theory Explain Consciousness?
This distinction underlies another distinction—between those features of the world that exist independently of any human attitudes and those whose existence requires such attitudes. I describe this as the difference between those features that are observer-independent and those that are observer-relative. So, ontologically objective features like mountains and tectonic plates have an existence that is observer-independent; but marriage, property, money, [...] have an observer-relative existence. Something is an item of money or a text in an intellectual journal only relative to the attitudes people take toward it. Money and articles are not intrinsic to the physics of the phenomena in question. Why are these distinctions important? In the case of consciousness we have a domain that is ontologically subjective, but whose existence is observer-independent. So we need to find an observer-independent explanation of an observer-independent phenomenon. Why? Because all observer-relative phenomena are created by consciousness. [...] Our explanation of consciousness cannot appeal to anything that is observer-relative —otherwise the explanation would be circular.
Dretske may be a good resource on intentionality as well:

Dretske (1985) agrees with Searle that adding machines don't literally add; we do the adding, using the machines

Brentano, Dretske and Whether There is Intentionality Below the Level of Mind

Well, "seems crazy" is not a universal measure of something because different things seem crazy to different people.

I take it your position is color eliminativism:
Color eliminativists believe that, strictly speaking, nothing in the actual world is colored: ripe lemons are not yellow, traffic stoplights are not red, and so on. Of course, eliminativists allow that these objects are perceptually represented as bearing colors: ripe lemons look yellow, traffic stoplights look red, and the like. It is just that, in their view, these perceptual representations are erroneous.

Color "representations" exist, they are just "erroneous." I probably misunderstood you as saying they don't exist.

So that’s why I think your idea of coming up with a mathematical definition can never happen

I'm not sure I was asking for a mathematical definition of consciousness. I was imagining a mathematical model of a theory of color consciousness based on empirical data and lab measurements that would have the power to predict conscious experiences, and suggesting that the ability to predict 'never before experienced' conscious states (colors) would prove its use as an explanation and make it testable. Something more powerful than "neurons firing create consciousness," "information processing creates consciousness," "consciousness just emerges," etc.

Neurons can be fully simulated on computer chips

Begging the question, imo.

however, there are people who have four color receptors and can see with four different primary colors

Wikipedia:
Visual information leaves the eye by way of the optic nerve; it is not known whether the optic nerve has the spare capacity to handle a new color channel. A variety of final image processing takes place in the brain; it is not known how the various areas of the brain would respond if presented with a new color channel.
Interesting. I had no idea. Based on Wikipedia I would be suspicious that people who may have four color receptors do not have the full circuitry to actually experience four primary colors. I don't see why this couldn't be determined empirically, though. What I had in mind is that the structures put in place to add a new color or other experience would have to be built and designed based on theory, even if done with artificial neurons or whatever, not by adding a gene and seeing what happens, which obviously doesn't explain much.

It feels real to you. But you only know that it feels real because you experience yourself experiencing it.

"Experiencing it" is all that matters. I am talking about the *experience* of color vision and how it arises in a material world. What is actually on my LCD is not necessarily that relevant to the problem of understanding experience itself.

I don’t think it’s clear that someone wouldn’t be able to learn chinese that way if given an infinite amount of time

This becomes clear to me if I add a step where the Chinese characters are first converted directly to binary numbers then put in a serial string. The man in the room only sees a string of 1's and 0's and does simple boolean arithmetic one bit at a time. I think the system or 'virtual mind' replies are the best. It seems to me most believers in AI fall into these categories.

the issue of what one "presumes" when interpreting data is at the core of this entire thread--as well as at the heart of mathematical modeling. The math part of any model is meaningless.

From a hard core materialist perspective, the universe would also seem to be devoid of meaning. Meaning requires a "mind," perhaps.
posted by Golden Eternity at 12:55 AM on March 22, 2013


The team establishes communication with the aliens very early, but the linguist soon decides that the responses from the aliens are a “Chinese Room”, that is, responses dictated by a non-self-aware system, something that repeats back (in a highly sophisticated way) meaningless (to the alien) word-sounds, but so cleverly put together according to the rules of grammar that, to the humans, they seem like intelligent speech.

Well, turns out it's not the aliens who were doing that, strictly speaking, but you should just read the book.

Intelligence requires understanding the meaning of things/words (aka intentionality), and it seems to me understanding requires consciousness.

This is something I really disagree with, and ties in again to me calling modern ideas about consciousness "muddled". Understanding can be thought of in various ways, but I think the most productive way to think of it is functionally. An ant can "understand" that a certain chemical is harmful to it. It might take a few times wading into that chemical before it "gets it", but I think understanding is just this sort of reinforcement of (physical or mental) behavior. We are aware of our understanding, but I don't see any necessary link between awareness and understanding as such. A child can learn to avoid a flame with no recollection of the time they were burned — that is, they understand that a flame is hot, but are not aware of their understanding.

I think a possible problem with this would be that a "logical arrangement" is 'observer-relative'

I think logic is independent of minds. Any attempt to work out the consequences of the same set of axioms will be able to arrive at the same set of conclusions. I think there's some features of the brain that are as unarguable as the features of the Earth's crust that we then call "plate tectonics", and that those features give rise to subjective experience. And just as we can go look at another planet and decide that the arrangement of its outer and inner layers justifies us calling it "tectonically active", we could look at another cognitive system and decide that its arrangement justifies us calling it "conscious" (once we identify such a hypothetical arrangement, of course).

I agree with delmoi and Dennett that we are not justified in assuming that consciousness really is different to other aspects of the universe. But I disagree with them in that we don't need to explain why it appears different.
posted by cthuljew at 1:25 AM on March 22, 2013 [1 favorite]


If that's what you think then it seems pretty obvious that you don't even understand what materialists actually think. It's been scientifically established for over a century (IIRC) that there isn't any difference between living and non-living matter.

The way you're using this phrase, it is just a tautology. Matter by definition is not living. Now, properly arranged systems of matter... that's another story.

Sure, the matter is the same. But the living system is clearly different, and this is true whether or not self-aware robots are possible. I'm assuming you've buried loved ones, so you know what I'm talking about. The absurdity of your position is manifest with statements like these.

Finally, if I am not a materialist, given what I do every day to earn my keep, who is? And who gets to decide? You?
posted by mondo dentro at 7:32 AM on March 22, 2013


I agree with delmoi and Dennett that we are not justified in assuming that consciousness really is different to other aspects of the universe. But I disagree with them in that we don't need to explain why it appears different.

It's like translating a dead language or some other encryption. We ask what an ancient petroglyph means, with a deer here and a human there and wonder if someone isn't telling a story. In reality, it's probably a territory marker or other legal instrument where the rules have been forgotten. The evolutionary key to consciousness is long gone, but it remains useful in communication among autonomous beings because it demands satisfaction and even certainty at times by asking why. It would be strange to ask why without assuming our importance, as children and bosses do, and so it does. It's like a game. It awkwardly explains itself by references to gods and other what-ifs and tries to claim it has higher values in order to escape the absurdly physical, but in the end these amount to little more than mourning a loss because it all reminds us of death. The troubling part about consciousness is that most people like to assume it isn't dangerously controlled, led to believe it exists apart.
posted by Brian B. at 7:41 AM on March 22, 2013 [1 favorite]


"A Darwinist Mob Goes After a Serious Philosopher"

Dang, that article is awful! Like the OP's article, it seems to set up Nagel as some kind of martyr being attacked by a flock of -- to use the article's term -- dittoheads, rather than actually evaluating Mind And Cosmos on its own merits.

He thinks strictly but not imperiously, and in grateful view of the full tremendousness of existence; and he denies matter nothing except the subjection of mind; and he speaks, by example, for the soulfulness of reason.

No, he does not think strictly; the edifice on which he constructs his whole book is the wishful mewling of a creature who wants to believe it's Important. It is not about reason, but about Nagel's own ego.
posted by Greg Nog at 9:34 AM on March 22, 2013 [2 favorites]


It is not about reason, but about Nagel's own ego.

"A Darwinist Mob Goes After a Serious Philosopher." Q.E.D.
posted by No Robots at 12:22 PM on March 22, 2013


WTF is a Darwinist?
posted by Artw at 12:32 PM on March 22, 2013


WTF is a Darwinist?

What F.C.S. Schiller described as "the narrowest, most self-righteous and unphilosophical of biological sects, that of the ultra-Darwinists."
posted by No Robots at 12:36 PM on March 22, 2013


Do we get spaceships and leather jumpsuits?
posted by Artw at 12:39 PM on March 22, 2013 [4 favorites]


Sure. Think "ultravixens" minus teh sexy.
posted by No Robots at 1:38 PM on March 22, 2013 [1 favorite]


I would still like to keep the sexy aspect please
posted by Greg Nog at 3:14 PM on March 22, 2013 [1 favorite]


It wasn't meant as a logical statement but as an empirical one. -- cthuljew
I wonder what you think the word "empirical" even means?
And, yes, only those things which must emerge will emerge, because in nature "can" and "must" are functionally equivalent. -- cthuljew
I suppose this is another logic-free "empirical" statement as well?

Obviously if you abandon consistent logic, anything and everything can be true, in which case there isn't really much point having a discussion.

___
The way you're using this phrase, it is just a tautology. Matter by definition is not living. Now, properly arranged systems of matter... that's another story. - mondo dentro
Except what I was replying to was this:
And this line of thought forces us to acknowledge that nothing is really dead, that everything is in some sense alive
Yes. And this is what's freaking "materialists" out. But... why? What is the rational reason for not considering this possibility? Despite what many say, it's not the evidence. - mondo dentro
So in one comment you say we must consider the possibility that "everything is in some sense alive" and in the next you say matter is by definition not living. Except you just said everything might be alive. So either you're not paying attention to what you're saying, or you think "everything" does not include "matter".

Either way, totally illogical.

Also, the actual definition of matter doesn't say anything about whether or not it's "alive". You can't just make up definitions and then claim that "by definition" things are true.
_____
Based on wikipedia I would be suspicious that people who may have four color receptors do not have the full circuitry to actually experience four primary colors. I don't see why this couldn't be determined empirically, though. -- Golden Eternity


Well, try doing a Google search? Here's one result:
Over the course of two decades, Newcastle University neuroscientist Gabriele Jordan and her colleagues have been searching for people endowed with this super-vision. Two years ago, Jordan finally found one. A doctor living in northern England, referred to only as cDa29 in the literature, is the first tetrachromat known to science. She is almost surely not the last.

Sitting in a dark room, peering into a lab device, women saw three colored circles flash before their eyes. To a trichromat, they all looked the same. To a tetrachromat, though, one would stand out. That circle was not a pure color but a subtle mixture of red and green light randomly generated by a computer. Only a tetrachromat would be able to perceive the difference, thanks to the extra shades made visible by her fourth cone.
Jordan gave the test to 25 women who all had a fourth cone. One woman, code named cDa29, got every single question correct. “I was jumping up and down,” Jordan says. She had finally found her tetrachromat.
Believe it or not, there are still some things we don't know and are in the process of learning. Maybe this article just hasn't been referenced on Wikipedia yet, or maybe it's not considered fully confirmed or something - but there is obviously empirical evidence.
Neurons can be fully simulated on computer chips -me
Begging the question, imo. -- Golden Eternity
Well, it might seem that way if you don't know how they work, but your apparent ignorance is not really my problem. What's odd is that you seem to think your ignorance of certain scientific facts is evidence of something, even to people who are not ignorant of them.
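For what it's worth, the kind of simulation meant here can be as simple as a leaky integrate-and-fire model. A toy sketch (my own, with illustrative parameters — obviously not a full biophysical model, just the standard textbook abstraction):

```python
# Minimal leaky integrate-and-fire neuron (a sketch, not a biophysical model).
def simulate_lif(input_current, dt=1.0, tau=10.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Return the membrane-voltage trace and spike times for an input trace."""
    v = v_rest
    spikes, trace = [], []
    for t, i_in in enumerate(input_current):
        # Leaky integration: decay toward rest, plus injected current.
        v += dt * (-(v - v_rest) / tau + i_in)
        if v >= v_thresh:      # threshold crossing -> spike
            spikes.append(t)
            v = v_reset        # reset membrane after the spike
        trace.append(v)
    return trace, spikes

# Constant input current produces a regular spike train.
trace, spikes = simulate_lif([0.2] * 100)
```

Whether such a model captures everything a real neuron does is exactly the question in dispute, of course - but this is the kind of thing "simulated on computer chips" refers to.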
"Experiencing it" is all that matters.
But you don't know you're experiencing it if you don't experience yourself experiencing it. I mean, if you were to dream in a new color, you wouldn't actually know about it when you woke up (slightly different, as that would be remembering vs. awareness). If you were to do this test, and it worked, but you were somehow unable to experience yourself experiencing the new color - you would not have any knowledge that it was successful.
This becomes clear to me if I add a step where the Chinese characters are first converted directly to binary numbers then put in a serial string. The man in the room only sees a string of 1's and 0's and does simple boolean arithmetic one bit at a time.

That's not even how computers work; they operate on many bits at once (like 64), which is enough to store a visually recognizable image of most Chinese characters.

So for example, if you encoded Chinese characters into 64 bit bitmaps, and inputted them as a string, the guy could cut them into 8 bit strips, line them up into 8x8 blocks, and then see the image.

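To make the strip-cutting concrete, here's a toy sketch (the plus-sign glyph is made up, just standing in for an 8x8 character bitmap):

```python
# Hypothetical 8x8 bitmap of a plus-sign "glyph" (1 = ink, 0 = blank).
glyph = [
    "00011000",
    "00011000",
    "00011000",
    "11111111",
    "11111111",
    "00011000",
    "00011000",
    "00011000",
]

# Flatten to the 64-bit serial string the man in the room would receive.
bits = "".join(glyph)
assert len(bits) == 64

# Cut the string into eight 8-bit strips and stack them back into an
# 8x8 block, recovering the visible image.
rows = [bits[i:i + 8] for i in range(0, 64, 8)]
for row in rows:
    print(row.replace("1", "#").replace("0", "."))
```

The serial encoding loses nothing; anyone with pencil and paper can undo it.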
Restricting the guy to only using binary math is defeating the point. Originally it was suggested that the guy couldn't learn Chinese even if he could use his own intelligence.

Also, he is interacting with other people writing Chinese to him, so he can try experiments to see what kind of results he gets; if the people on the other side want to help him, it would be much easier.

In fact, given a situation where the person on the other side is actively trying to help him, it seems as though it wouldn't be very different than the process by which Helen Keller learned language; her only senses were touch, smell, and taste. Presumably, the person on the other side of the room would be able to start with simple combinations of characters and work their way up, just as normal human babies get simple sounds and words first.

I'm not making a claim about what it would be "like" to "be" a computer, simply stating that I think a human (given infinite time and human intelligence) in a room trying to learn Chinese could do it, and that therefore the "Chinese Room" argument isn't proof of anything.
posted by delmoi at 4:42 PM on March 22, 2013


I wonder what you think the word "empirical" even means?

The word "empirical" means "observable and testable in nature". That is, either it's true that any sufficiently complex system gives rise to consciousness, or it's not; it is not a matter of logic. You need to go out into the world and look. I wasn't making an argument. I was just making two loosely related observations, and relating my impressions of what is likely to be the case if we go out and look. You're welcome to disagree with me about its likelihood — there's no need to be petulant about it. I'm really not sure why you're being quite as hostile as you are in your replies.

As for the second excerpt of mine you quote, I'll grant that it's a pretty strong position, and probably one I shouldn't have committed to the way I did. But it also isn't really central to the discussion at hand, so I didn't develop it further.

But you don't know you're experiencing it if you don't experience yourself experiencing it.

All I'm really concerned with is why we have any experience at all, whether of a color or of our own experience of a color, or any level of recursion and stacking you want. If experience is happening, I think we need to explain why. Again, not just what neural/physical activity is giving rise to it, but why such activity ought to give rise to something like it.
posted by cthuljew at 6:00 AM on March 23, 2013


But you don't know you're experiencing it if you don't experience yourself experiencing it.

That's beside the point, it seems to me. The existence of conscious experience in the present is not contingent on 'knowledge of me experiencing it.' If my knowledge of a prior experience is wrong, my experience of this false knowing still exists. I would agree that everything except present experience, including the assumption of a self, could be doubted (in theory only), but it is incomprehensible to me to deny present experience itself. And if the idea that we can have any "knowledge" of experiences like colors is seriously doubted, which is absurd, I don't see how anything could not be doubted. That's not to discount the unreliability of subjective experience.

I think a human (given infinite time and human intelligence) in a room trying to learn Chinese could do it, and that therefore the "Chinese Room" argument isn't a proof anything.

The "man in the room" is intended to do only what the real CPU does, as closely as possible but with intentionality and consciousness added, in order to show that a real CPU could not understand Mandarin even if it were conscious. Such a mind-possessing CPU has only a few memory registers for receiving individual pieces of data and instructions, for one thing, and obviously is not capable of understanding a language.

The assumption is that *the program already passes the Turing test* and thus, according to AI adherents, already understands Mandarin. No further learning is required. If we were to allow the "man in the room" to fully investigate what was happening using all of his intellect, in addition to his duties executing the program with pen and paper, and allow that he does eventually start to learn Mandarin after a long time, *the system would still pass the Turing test without his understanding Mandarin* during the entire time he is still learning.

Anyway, it seems to me the system/virtual mind reply is the best, which requires that the paper, pencil, and man taken together as a system form a new mind that is able to understand Mandarin - and this is exactly what AI adherents believe about computers, anyway. Personally, I think it is clear that computers and algorithms as currently understood could not possibly be conscious and understand meaning without arbitrarily and magically assigning experiences to "dead" physical objects.

Dan Dennett:
I want to shift the burden of proof, so that anyone who wants to appeal to private, subjective properties has to prove first that in so doing they are not making a mistake. This status of guilty until proven innocent is neither unprecedented nor indefensible (so long as we restrict ourselves to concepts).

The burden of proof is on the Turing test. Anyone appealing to the private, subjective feeling that something "has a mind" should have to prove they are "not making a mistake."

here's one result

Cool.
For example, when looking at a rainbow, these females can segment it into about 10 different colors, while trichromat (with three iodopsins) people can see just seven: red, orange, yellow, green, blue, indigo and violet. For tetrachromat women, green was found to be assigned in emerald, jade, verdant, olive, lime, bottle and 34 other shades.
The idea was that a breakthrough theory of consciousness would predict unexpected phenomena, similar to general relativity's prediction of Mercury's perihelion precession, deflection of light by the sun, gravitational red shift of light, etc. I was suggesting new primary colors as a possibility (a new and different rainbow, if you will), but maybe that is dumb. It seems that in these tetrachromat cases it is not a new extension to the RGB color spectrum that is occurring, but better definition of colors within or derived from the RGB spectrum. Maybe this has significance in philosophy; it seems to me it could possibly suggest some sense of color realism.

Well, it might seem that way if you don't know how they work

I question if neuroscience understands enough about even an individual neuron to say it could be replaced by a computer interfaced to artificial synapses.

10 Unsolved Mysteries Of The Brain

Cytoskeletal Signaling: Is Memory Encoded in Microtubule Lattices by CaMKII Phosphorylation?

Chomsky:
I think we learn a lot of things from the history of science that can be very valuable to the emerging sciences. Particularly when we realize that in say, the emerging cognitive sciences, we really are in a kind of pre-Galilean stage. We don't know what we're looking for anymore than Galileo did, and there's a lot to learn from that.
a logical arrangement of elements, totally agnostic to substrate, that gives rise to subjective experience. I want to explain why a brain can lead to experience, rather than just to biological action and intentionality.

me: Intelligence requires understanding the meaning of things/words (aka intentionality), and it seems to me understanding requires consciousness.

Understanding can be thought of in various ways, but I think the most productive way to think of it is functionally.

I think logic is independent of minds.


I agree that logic is 'observer-independent.' But, if 'logically arranged elements' are 'agnostic to substrate,' they wouldn't seem to be physical. A physical object can't be considered as independent from substrate, it is the substrate, perhaps.

If a theory of consciousness is 'independent' of its 'substrate,' it would seem not to be a physicalist account of consciousness. Suggesting conscious entities can supervene onto 'logical arrangements' in and of themselves regardless of the 'substrate' suggests that logical entities (empty symbols) form their own referents and create the "physical" and "mental" world out of thin air. A big problem with mathematical monism, it seems to me, is it is not possible to tell just by looking at a purely mathematical or logical structure what it is referring to - color,smell,sound or time,space,velocity etc.

I think we should at minimum want a theory of consciousness that describes the bio-chemical or physical states upon which conscious states supervene. I want to suggest that our current understanding of neuroscience is not able to do this, and is therefore incomplete, and this incompleteness may need to be addressed at the level of fundamental physics.

I think we should see the world as essentially a black box, and math, science, and philosophy as instrumentalist - useful in their ability to describe or predict our experiences. Mathematical monism seems kind of backwards - math becomes fundamental and math itself creates our experiences.
As an empiricist I continue to think of the conceptual scheme of science as a tool, ultimately, for predicting future experience in the light of past experience. Physical objects are conceptually imported into the situation as convenient intermediaries not by definition in terms of experience, but simply as irreducible posits comparable, epistemologically, to the gods of Homer . . . For my part I do, qua lay physicist, believe in physical objects and not in Homer's gods; and I consider it a scientific error to believe otherwise. But in point of epistemological footing, the physical objects and the gods differ only in degree and not in kind. Both sorts of entities enter our conceptions only as cultural posits.
--Willard Van Orman Quine
William F. Buckley said he had to flog himself to read Atlas Shrugged. I have the same experience trying to read Dan Dennett. I thought W.V. Quine might be similarly painful, but I am shocked at how pleasurable it has been to read about him - I think I love him, actually.
posted by Golden Eternity at 2:54 PM on March 24, 2013 [1 favorite]


If a theory of consciousness is 'independent' of its 'substrate,' it would seem not to be a physicalist account of consciousness.

Nah. By analogy: a house can be built out of bricks, wood logs, or steel and glass. In any case, the arrangement of these parts into an enclosure with rooms for various purposes and certain necessary elements (such as a kitchen and bathroom, perhaps) is what makes it a house, not the material it is built out of. In the same way, the conscious system (house) can be made out of neurons or transistors or crystals or whatever, so long as it is arranged in such a way that a consciousness results. Such an account of conscious experience would not require anything non-physical, and wouldn't require some sort of Platonic reality of mathematical objects, but would require the recognition of certain patterns that regularly recur in reality, seemingly independently of the physical nature of that bit of reality. In the same way that a horse is just the particular arrangement of a bunch of tissues, and a tissue is the arrangement of a bunch of cells and so on, I argue that consciousness is the particular arrangement of a bunch of mental processes, which are themselves the arrangement of a bunch of computations all running on a neurological substrate. This same computational arrangement could (I suggest) be set up on any other sufficiently powerful piece of hardware.
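The arrangement-vs-material point can be made concrete with a trivial toy (my own sketch, nothing deep): the same logical "arrangement" realized on two different substrates behaves identically to any outside observer.

```python
# Two "substrates" realizing the same trivial arrangement: an accumulator.

class ListSubstrate:
    """Stores state as a growing list of increments."""
    def __init__(self):
        self._events = []
    def add(self, n):
        self._events.append(n)
    def total(self):
        return sum(self._events)

class IntSubstrate:
    """Stores state as a single running integer."""
    def __init__(self):
        self._sum = 0
    def add(self, n):
        self._sum += n
    def total(self):
        return self._sum

# Identical inputs yield identical observable behavior on either substrate.
a, b = ListSubstrate(), IntSubstrate()
for substrate in (a, b):
    for n in (1, 2, 3):
        substrate.add(n)
```

Nobody interacting with these through `add` and `total` could tell which one they were talking to - which is the functionalist wager about consciousness, scaled down to absurdity.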

I think we should at minimum want a theory of consciousness that describes the bio-chemical or physical states upon which conscious states supervene.

And of course we want this — it's called the Easy Problem of Consciousness. Not because it's an easy problem to solve, mind you, but because we at least have a decent idea of how to approach it: learn about the brain. The Hard Problem, of explaining how conscious experiences arise out of the brain's workings, we do not have any definitive strategy for approaching yet because we don't even have a consensus on what the target of our research is! Is sense-perception an integral part of consciousness? Is intelligence? Is memory? Etc. People used to think these were all the same and inseparable. Now we know better. What other parts that we now assume are essential really aren't? Self-awareness? Unity of identity? Who knows.
posted by cthuljew at 3:52 AM on March 25, 2013




This thread has been archived and is closed to new comments