a pink sliver of rat brain sat in a beaker
July 18, 2009 12:21 PM   Subscribe

The simulated brain - "The scientists behind Blue Brain hope to have a virtual human brain functioning in ten years... Dr. Markram began by collecting detailed information about the rat's NCC, down to the level of genes, proteins, molecules and the electrical signals that connect one neuron to another. These complex relationships were then turned into millions of equations, written in software. He then recorded real-world data -- the strength and path of each electrical signal -- directly from rat brains to test the accuracy of the software." Is it possible to digitally simulate a brain accurately? Can it only be analog? And are there quantum effects to be considered? (previously 1 2 3 4)

+ some other AI/brain robot projects:
Blue Brain is controversial, and its success is far from assured. Christof Koch of the California Institute of Technology, a scientist who studies consciousness, says the Swiss project provides vital data about how part of the brain works. But he says that Dr. Markram's approach is still missing algorithms, the biological programming that yields higher-level functions...

Despite the challenges, the push to understand, replicate and even re-enact higher behaviors in the brain has become one of the hottest areas of neuroscience. With the help of a $4.9 million grant from the U.S. Department of Defense, IBM is working on a separate project with five U.S. universities to build a tiny, low-power microchip that simulates the behavior of one million neurons and ten billion synapses. The goal, says IBM, is to develop brainy computers that can better predict the behavior of complex systems, such as weather or the financial markets.

The Chinese government has provided about $1.5 million to a team at Xiamen University to create artificial-brain robots with microcircuits that evolve, learn and adapt to real-world situations. Similarly, Jeff Krichmar and colleagues at the University of California, Irvine, Calif., have built an artificial-brain robot that learns to sharpen its visual perception when moving around in a lab environment, another form of emergent behavior, a form of spontaneous self-organization. And researchers at Sensopac, a project backed by a grant of €6.7 million ($9.3 million) from the European Union, have built part of an artificial mouse brain.
BONUS MEMRISTORS
- Memristor Minds, the Future of Artificial Intelligence
- New Memristor Makes Low-Cost, High-Density Memory
- Spintronic Memristors
- The Mysterious Memristor
- Understanding Memristors - "in which the memristor would be used as an analog device"*
posted by kliuless (251 comments total) 25 users marked this as a favorite
 
Just don't give it the keys to the nucular missiles, m'kay?
posted by localroger at 12:28 PM on July 18, 2009


are there quantum effects to be considered?

no. not in the quantum computing sense.
posted by alk at 12:30 PM on July 18, 2009


An ethical question that's rapidly becoming more pressing: if you actually do create an intelligence, is it ethical to deactivate it?

Assume, for the sake of argument, that they do make a virtual human brain, and that it doesn't go insane. Isn't it, then, human? Don't normal ethical rules apply, like not experimenting on humans?

We've been thinking about this in science-fiction form for many, many years. It's been explored in fiction nearly to exhaustion, and thus most of us will probably find the moral issues to be blasé and boring. But, if they succeed in their stated goal, it won't just be 'let's pretend' anymore. We won't be arguing about fictional entities in a book, but real entities in a lab.

Maybe we need to start thinking about this seriously, not just in play?
posted by Malor at 12:34 PM on July 18, 2009 [2 favorites]


There's still no explanatory theory of mind/consciousness. I don't know how they plan to boot up a brain without giving it exposure and interaction with the real world, but an AI built from a wire-by-wire brain simulation seems like it would leave us with more questions than answers

If this whole experience of self we all have really is just the result of a bunch of ho-hum algorithms that evolution has been able to map onto our carbon-based wetware, that's going to leave me with a pretty empty feeling
posted by crayz at 12:42 PM on July 18, 2009


That's a good question, Malor, and one which I think will be incredibly difficult to answer tidily (as with stem-cell and embryonic-cell research, there will be a vast gray area within which reasonable people will arrive at radically different positions). One thing that is worth considering, though, about an AI brain is that it is, in principle, immortal (or, at least, has no natural terminal lifespan) and, perhaps more importantly, it is in principle recreatable. That is, if I switch an AI brain off, it is in principle possible to do so in such a way that its state at that moment is recordable and recreatable.

I'm not sure what the consequences of those facts are, exactly, but they seem to me to entirely change the nature of the ethical implications of these decisions. If I store a record of the exact state of every "mind" I turn off, such that it could be recreated with no loss of a sense of continuity to that "consciousness" at any time in the future am I doing anything more than putting that consciousness into a form of suspended animation?

If I make a consciousness that is in every way similar to a human consciousness, do I have an ethical obligation to keep it alive beyond any possible human lifespan, or should I predetermine its lifespan before I start?

I think my main feeling is that questions like these show how radically situationist our ethical stances are--they only make sense in the context of human limitations.
posted by yoink at 12:46 PM on July 18, 2009 [2 favorites]


If this whole experience of self we all have really is just the result of a bunch of ho-hum algorithms that evolution has been able to map onto our carbon-based wetware, that's going to leave me with a pretty empty feeling

But what else could it possibly be?
posted by yoink at 12:47 PM on July 18, 2009 [2 favorites]


If this whole experience of self we all have really is just the result of a bunch of ho-hum algorithms that evolution has been able to map onto our carbon-based wetware, that's going to leave me with a pretty empty feeling

Why? Does that make your life any less of a miracle?
posted by scrowdid at 12:55 PM on July 18, 2009


Brute force it!
posted by norabarnacl3 at 12:57 PM on July 18, 2009 [1 favorite]


Thinking seems to be O(1) and somewhat unreliable; I'd say the quantum computational effects are going to be determinant.
posted by CautionToTheWind at 1:04 PM on July 18, 2009


They're Pinky and the Brain, Yes Pinky and the Brain, one is a genius, the other's insane...

"If I store a record of the exact state of every "mind" I turn off, such that it could be recreated with no loss of a sense of continuity to that "consciousness" at any time in the future am I doing anything more than putting that consciousness into a form of suspended animation?"

Dunno. You'll have to ask said consciousness and see if it's pissed at you. Literally, I should think.

There are many fascinating aspects of this, but riffing on the above --- how do they really know they've got this right? I suppose it's merely a variation on the "is my red the same as your red" problem....but with humans at least we're operating with the same wetware. If one brain is artificial, then this seems to me a more serious problem...we're presuming an artificial entity which of necessity passes the Turing Test, but I dunno that we'll ever really be sure that our creation works the same way as we do. That is, if consciousness is in fact an emergent property of a sufficiently sophisticated network, then mimicking said network will mimic consciousness. But if it's not, it won't, yet the end result --- a sophisticated network where the yellow lightbulbs light up as per spec --- will be the same.
posted by Diablevert at 1:05 PM on July 18, 2009


Bugger, forgot to italicise. My apologies.
posted by Diablevert at 1:07 PM on July 18, 2009


Assume, for the sake of argument, that they do make a virtual human brain, and that it doesn't go insane. Isn't it, then, human?

If so, is it ethical to make a human brain that does not have a human body? Does the human brain expect to have arms, eyes, senses etc?
posted by Brandon Blatcher at 1:07 PM on July 18, 2009


Why? Does that make your life any less of a miracle?
Because you could put me in a computer and run me twice, and get the same result
posted by crayz at 1:16 PM on July 18, 2009


But will it have a plan?
posted by The Whelk at 1:19 PM on July 18, 2009 [1 favorite]


Malor, I think one more aspect to consider is the circumstance of creation.

We can choose to create the new intelligence in one of three modes: our inferior, our equal or our overlord.

The ethical standards for the first category would be, I assume, the same as for dissecting a rat's brain - it will be thoroughly studied and continuously rebooted until somebody falls in love with it and declares it untouchable on account of its irresistible cuddliness.

The second category, our equal, is IMHO still a long way off and, again IMHO, has no point really. There already is a (much more pleasurable) way of creating our equals.

My favourite, and I dare say, the actual purpose of our whole existence as a human race, is the launching of a superior intelligence. In that case my proposed ethics would be simple - once the intelligence achieves overlord status, simply welcome it and let it take the lead. If it really is an intelligence above ours, it's our best bet anyway.
posted by Laotic at 1:19 PM on July 18, 2009


the end result --- a sophisticated network where the yellow lightbulbs light up as per spec

If the artificial brain lights up exactly the same as ours, it's also going to think it is conscious. Can we say it isn't? Anyway this is the philosophical zombie thought experiment
posted by bhnyc at 1:19 PM on July 18, 2009 [2 favorites]


If it's an intelligence sufficiently above ours it'll take the lead anyway.
posted by Skorgu at 1:22 PM on July 18, 2009


I think this is a neat idea, but probably not for the same reasons most do.

This will not teach us about human intelligence in any meaningful sense. It will not be human, or close to it. The architecture will be radically different enough that we will not be dealing with a human-like mind. It might be a neural network, and have a similar processing capacity, with many of the drawbacks of your basic neural network, but it won't be Guy in a Box. Think of it more like a dolphin; big brain, but we don't have as much in common as we'd like.

(I don't think we even want a human intelligence — nearly seven billion brains on the planet, we're not exactly lacking.)

At the same time, this probably won't be particularly useful intelligence. It'll have to be taught in the same kind of painstaking manner as any other neural network. And, like other thinking platforms out there, messing about with them in a meaningful manner, short of the occasional lobotomy, is difficult. We won't have a very tunable, convenient intellect. We won't be able to make it, say, friendlier, or have more of an interest in chess, because we have no easy way to do that with humans now aside from raising them. We won't be able to prepackage these minds and pick out options like leather bucket seats or deep understanding of military strategy and tactics. They would exhibit the sloppy interconnectedness and unfathomable nonlinearity of organic minds, without the handy benefits we have from silicon now.

Okay, maybe two hundred years from now we could have some kind of payoff in the ability to rapidly upload a human brain into some other kind of architecture, doubtlessly in a destructive manner ("crushload"). Maybe, just maybe, if the tech could get small and self-sufficient enough, it could serve as generic replacement brain matter for some very traumatic injuries, but my guess is that that won't be used unless it's significantly faster than the custom-order tissue engineering which would doubtlessly beat it to the market.

The project will primarily serve as some kind of murky crystal ball looked into while we attempt to pick our favorite theories and ethical concerns out of our own dim, distorted reflections peering back at us. That could be very useful indeed, but the idea that we could get minds to order out of it is at odds with what the project is attempting to imitate.

We could see benefits in semi-autonomous agents like retrieval bots and so forth, but anything out of a hyperspecialized niche won't work out. Talk to some dog trainers — you can breed a lot, but sometimes the canine in question hasn't the temperament for whatever job it is you are trying to do. And the bigger the brain, the more you'll have this problem.

That's what we're dealing with, not what will be a round of I HAVE NO FUNDING AND I MUST SCREAM as some black box, powered by nuclear decay batteries, wishes someone would plug its eyes and network connection back in while it undergoes sensory deprivation because some asshat forgot to put it into sleep mode before sticking it in a shelf in the same warehouse they keep the Lost Ark all because Congress drew a line through the associated appropriations in this year's budget.
posted by adipocere at 1:24 PM on July 18, 2009 [6 favorites]


Thinking seems to be O(1) and somewhat unreliable; I'd say the quantum computational effects are going to be determinant.

That's absurd. First of all, my understanding is that it's too hot inside the brain to do anything like what we call 'quantum computing', which is really just a mathematical model at this point, but will probably need to be done at extremely low temperatures.

Second of all, quantum computers don't make algorithms run in "O(1)" time. They can solve the set of problems in the complexity class PQP in P time, which while larger than P is still smaller than NP. So if you have an exponential-time algorithm in NP, like boolean satisfiability, you still can't solve it with a quantum computer.

From a mathematical, computer science perspective, your statement is totally absurd; it would be like trying to say that driving is quicker than walking by saying "You can't walk there, but it would only take a few hours to drive to the moon." Total nonsense.
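(A rough back-of-the-envelope sketch of the scaling, for anyone who wants numbers. The quadratic "Grover-style" speedup below is an assumption used purely for illustration; both curves stay exponential, and neither is anywhere near O(1).)

```python
# Illustrative scaling only, not a solver: brute-force search over all
# assignments of an n-variable SAT formula vs. a hypothetical Grover-style
# quadratic speedup (square root of the search space). Both grow exponentially.
for n in (20, 40, 80, 160):
    classical = 2.0 ** n        # assignments to check in the worst case
    grover = 2.0 ** (n / 2)     # ~sqrt(2^n) oracle queries, still 2^(n/2)
    print(f"n={n:3d}  brute force ~{classical:.2e}  Grover-style ~{grover:.2e}")
```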

---

I do think our current models of the brain as a network of neurons are totally insufficient to describe human cognition, and I think eventually we'll discover structures inside brain cells that store memory. It could be RNA or it could be something else.

(Interesting sidenote: our noses are quantum devices. They detect chemicals by their resonant frequency, rather than simply their chemical properties, so different isotopes of the same element can smell different.)
posted by delmoi at 1:26 PM on July 18, 2009 [2 favorites]


Er, I mean BQP (bounded-error quantum polynomial time) not PQP. The link is right, though :P
posted by delmoi at 1:28 PM on July 18, 2009


An ethical question that's rapidly becoming more pressing: if you actually do create an intelligence, is it ethical to deactivate it?

It is UNethical to activate it, and I don't see how this is a complicated (ethical) issue. The ethics seem really, really simple to me.

I'm assuming that this system mimics the human brain, and...

If so, is it ethical to make a human brain that does not have a human body? Does the human brain expect to have arms, eyes, senses etc?

Of course the brain expects to have a body!

So if such an artificial brain could be built, it would be like building a quadriplegic.
posted by grumblebee at 1:48 PM on July 18, 2009


are there quantum effects to be considered?

The whole question of whether consciousness-as-embodied-in-neurons necessarily depends on quantum effects instead of just plain old chemical and electrical reactions has always seemed awfully handwave-y to me. "Ooh, consciousness is mysterious and not fully understood. What else is mysterious and not fully understood? Quantum! It must be 'cos of quantum!"

Meanwhile:
a pink sliver of rat brain sat in a beaker containing a colorless liquid. The neurons in the brain slice were still alive and actively communicating with each other. Nearby, a modified microscope recorded some of this inner activity in another brain slice. "We're intercepting the electro-chemical messages" in the cells

Wouldn't those electro-chemical messages they're recording primarily be "OH CRAP WE'VE JUST BEEN SLICED OUT OF THE REST OF OUR BRAIN?" It seems a bit like trying to figure out how a computer program works by slicing it into arbitrary segments, and letting each of them run separately for a bit while you take a snapshot of the memory they're using -- any meaningful information about the signal pattern will have evaporated by the time you measure it.

I get that they're using it as training data for their algorithm, rather than trying to piece it all together directly, but it still seems like they're building their training corpus up out of what must mostly be random firings from disconnected slices of the brain.
posted by ook at 1:56 PM on July 18, 2009 [1 favorite]


At the end, do we find that we're *all* digitally created intelligences powered by some super-alien's megaprocessing watch, like some kick-ass version of those Japanese tradeable digital animals? Are we, even now as you read this, just denizens of The Matrix? Soylent Green is DATA!
posted by jamstigator at 2:01 PM on July 18, 2009


Actually the project seems to have stalled. According to Markram they had completed a single neocortical column from a rat brain about 16 months ago. In essence we only have his opinion to tell us that the simulation was accurate, but in any case a single neocortical column is a tiny fragment of only one part of a brain. Markram himself is on record as saying the full brain could not be done without a computer that could handle 500 petabytes of data.

So even if we overlook the debatable methodology - has it ever been possible to simulate the functioning of a human organ by just building a replica in the absence of an underlying understanding of the process it implements? - success is still a long way off at best.

So the ethical questions are interesting and worthy of debate, but in my view not at all pressing in practical terms.
posted by Phanx at 2:16 PM on July 18, 2009


It seems a bit like trying to figure out how a computer program works by slicing it into arbitrary segments, and letting each of them run separately for a bit while you take a snapshot of the memory they're using

There's no reason why you can't do that. In fact, people have been doing stuff like that since the beginning of computer programs to try to break what's now called DRM.
posted by delmoi at 2:35 PM on July 18, 2009


ook: It seems a bit like trying to figure out how a computer program works by slicing it into arbitrary segments, and letting each of them run separately for a bit while you take a snapshot of the memory they're using -- any meaningful information about the signal pattern will have evaporated by the time you measure it.

Actually, this is not far from some valid techniques in reverse engineering. It's a desperate measure you resort to because you can't get the big picture, so you try to find some functionality at a low level you can understand and work up from that. Usually you can find some subroutine primitive enough that you can figure out what it does from first principles, and then you look at what calls it and see if it gives you a clue. And that's pretty much exactly what these guys are trying to do with a brain; if they can figure out what the low level circuits are doing, then the macro-organization might make more sense.
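(A minimal sketch of that bottom-up ordering, with a made-up call graph; real reverse-engineering tools would recover the graph from a disassembly, but the principle of analyzing the primitives before their callers is the same.)

```python
# Toy illustration of bottom-up analysis: each routine maps to the routines
# it calls (all names here are invented). A topological sort visits leaf
# primitives first, then whatever calls them, working up toward main.
from graphlib import TopologicalSorter

call_graph = {
    "main":        {"parse_input", "run"},
    "run":         {"step", "log"},
    "parse_input": set(),
    "step":        {"log"},
    "log":         set(),
}

for routine in TopologicalSorter(call_graph).static_order():
    print(f"analyze {routine!r} (callees already understood: "
          f"{sorted(call_graph[routine])})")
```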
posted by localroger at 2:44 PM on July 18, 2009


It's the "and let the slices continue running for a while in isolation before you make your measurements" part I'm objecting to, which doesn't happen in the type of software analysis you guys are describing. When you reverse engineer software you can look at the bits in vivo, as it were, without having to disconnect them from the rest of the program first.
posted by ook at 2:55 PM on July 18, 2009


I've always thought the "but is it a REAL consciousness" thing to be a pointless question. I have no idea--and can never know--whether any of the people I know are "real" consciousnesses. It is purely an act of faith that assumes in all other human beings the same internal experience of consciousness that I have. The only test for the "consciousness" of a computer-bred mind is behavior--and it's the only test that matters. If the machine talks to me as if it were a human-in-a-box then it's a human-in-a-box. If it doesn't, it isn't.

The ghastly messy grey area, of course, is if I successfully produce something exactly like, say, a brain-damaged human. Would switching that off to "fix" it or "improve" it be the same as performing experimental brain surgery on a brain-damaged human without their informed consent? If I couldn't improve them and decided to abandon the experiment, is that equivalent to euthanasia? Am I free to judge the "quality of life" of a brain-in-a-box in a way that I wouldn't feel free to judge the "quality of life" of a brain damaged (or mentally deficient) human?
posted by yoink at 3:12 PM on July 18, 2009 [2 favorites]


It's the "and let the slices continue running for a while in isolation before you make your measurements" part I'm objecting to, which doesn't happen in the type of software analysis you guys are describing. When you reverse engineer software you can look at the bits in vivo, as it were, without having to disconnect them from the rest of the program first.

One step at a time.

Someone will work out a non-invasive technique for copying a brain. And eventually hardware will be such that it can run a full simulation of a copy. And eventually everyone will back up their brain periodically (once a day? once a minute?). And if they choose, they can have their brain reconstructed biologically upon a messy death. Or have it run on a computer. Computer hardware will be able to run your brain sped up if you like, and it will make time seem to go slower since you think faster. Maybe speed time up for a while - "sleep" through this winter. It'll be pretty cool - IMHO. Probably within the next 100 years, certainly within the next 1000. Maybe in my lifetime...

Assumption: There is nothing special about brain matter. "Me" is just the running of my brain. It doesn't matter if it is biologically or electronically.
posted by Bort at 3:16 PM on July 18, 2009


Someone will work out a non-invasive technique for copying a brain

Well, sure, someday. Maybe. I was discussing this particular study, though, which seems (unless the journalist's description of what's going on is misleading, which, granted, is pretty likely) to founder on that hurdle.
posted by ook at 3:28 PM on July 18, 2009


Of course the brain expects to have a body!

So if such an artificial brain could be built, it would be like building a quadriplegic.


You're making what I think is an unwarranted assumption about cognitive scientists: That "functionally isomorphic" actually means "functionally isomorphic."

I.e., I don't take it as given that when they say they will be creating an accurate model of a human brain, they'll actually have bothered to copy it all. This is the same group of folks, after all, who surprised themselves by deciding that a "mind" made up of people pulling strings couldn't have mental states (a.k.a. the "chinese nation mind" argument). I've compared that in the past to being surprised to discover that you can't make an internal combustion engine out of cheese.

Put another way: You're assuming they'll be able to build it as a functioning human brain. I don't think that's possible or advisable, and if they did I don't think they'd learn much from it. Much more interesting would be to build progressively more complex independent "brains" that controlled sensory-motor apparatuses and see how they progressed. Start with a fruit fly, not a human.
posted by lodurr at 3:44 PM on July 18, 2009


great thread, BTW.
posted by lodurr at 3:44 PM on July 18, 2009


Iain M. Banks: Excession


Pretty good book with lots of AI and some ethical issues surrounding it, though that's not the main point of the book.
posted by dibblda at 3:49 PM on July 18, 2009


The scientists behind Blue Brain hope to have a virtual human brain functioning in ten years...

Not to be mistaken with Project Blue Balls, a more advanced version of Blue Brain, designed to algorithm it in the nuts, any time it begins to think of itself a tad bit high and mighty...
posted by Skygazer at 5:11 PM on July 18, 2009


Modeling a mouse or a human brain is sexy and fundable, but I think that ultimately, they're setting themselves up to fail. The bad analogy would be like trying to write Windows 7 from scratch without knowing very much about machine language/assembly.

I think that a more realistic approach is to model C. elegans, which has three-hundred-and-change neurons and a few thousand synapses. All the neurons have been mapped through development, the genetics is quite well known, the worm research community is quite large and mostly friendly with one another, and there are a few robust habituation protocols (i.e., seeing how worms learn).

Simulate a worm through development in as much detail as possible and see if it "boots up" when it hatches. Run it through some habituation protocols to see if the simulated worm can learn like the real thing.

Figuring out what's missing from the simulation to make it boot, to make it learn, is going to teach us a lot. With those lessons learned, move up to modeling Aplysia (sea snail) which also has been studied a lot. Once the snail's been figured out, maybe there'll be enough lessons learned to try to model a mammalian brain.
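(To make the flavor of this concrete, here is a toy sketch of the kind of simulation being proposed: a handful of leaky integrate-and-fire neurons with invented wiring and constants. A real attempt would load the mapped C. elegans connectome and far richer cell models; this just shows the basic loop of drive, integrate, spike, reset.)

```python
# Toy leaky integrate-and-fire network. All wiring and constants are made up
# for illustration; nothing here resembles the real worm's connectome.
import random
random.seed(0)

N = 10                                       # number of neurons (arbitrary)
w = [[random.uniform(-0.5, 1.0) if i != j else 0.0 for j in range(N)]
     for i in range(N)]                      # w[i][j]: weight from i to j
v = [0.0] * N                                # membrane potentials
spikes = [0] * N
threshold, leak, drive = 1.0, 0.9, 0.3       # arbitrary constants

for t in range(100):
    new_v = []
    for j in range(N):
        total = sum(w[i][j] * spikes[i] for i in range(N))
        total += drive if j == 0 else 0.0    # constant external input to cell 0
        new_v.append(leak * v[j] + total)
    spikes = [1 if x >= threshold else 0 for x in new_v]
    v = [0.0 if s else x for s, x in zip(spikes, new_v)]   # reset spiking cells
    if any(spikes):
        print(f"t={t:3d} spiking: {[i for i, s in enumerate(spikes) if s]}")
```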
posted by porpoise at 5:13 PM on July 18, 2009 [5 favorites]


It seems to me that it would be a far better approach to work the other way around - figure out the basic "opcodes" and patterns used for producing intelligent behavior; neuron types, connection building/breaking, higher level patterns commonly found, etc... then build some simple "semi-realistic" structures using those low-level pieces with the purpose of trying to reasonably solve some simple Rodney Brooks bug-level problems. Stuff the whole thing into a giant GA-based "grow a brain, win a kewpie doll" simulation that gets rewarded for getting incrementally better at competing for food and mates... non-biologically realistic in that you'd probably be passing memories through reproduction as a timesaver at least at first. Let *that* spin for 10 years on the distributed power of the tubes as Evolve-HARLIE@Home, and I think you'll have a much better chance of ending up with something that tries to communicate a request that it would rather not be turned off...

Sure, there's a chance that intelligence absolutely requires the precise analog nature of each neuron to be simulated with extreme fidelity, but somehow I suspect that if you were to remove all the complexities we have in our brains that resulted from evolution trying to build useful circuitry out of meat, and just emulate the basics of neurons or even neural groups, you could get away with a lot less CPU power - still more than we have today, but less than what their approach will require. I think our brains are a wetware-based local minimum to the intelligence problem, and that a simulated brain could be built more efficiently. Old-skool neural network simulations took this a bit too far to the abstract, but I think there's some happy medium between those and full neuron simulation - my personal guess: the temporal delay in neural firing plus the position of the connection along the length might be important variables.
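(The GA loop being described is easy to sketch; the honest caveat is that everything numeric below is a placeholder, and the stand-in fitness function is doing none of the work a "compete for food and mates" simulation would have to do.)

```python
# Minimal genetic-algorithm skeleton for the "grow a brain, win a kewpie
# doll" loop. The fitness function is a stand-in; a real run would decode
# each genome into a neural controller and score it in a simulated world.
import random
random.seed(1)

GENOME_LEN, POP, GENERATIONS = 32, 50, 200   # placeholder sizes

def fitness(genome):
    return -abs(sum(genome))                 # stand-in: reward sums near zero

def mutate(genome, rate=0.1):
    return [g + random.gauss(0, 0.5) if random.random() < rate else g
            for g in genome]

pop = [[random.uniform(-1, 1) for _ in range(GENOME_LEN)] for _ in range(POP)]
for gen in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)      # best genomes first
    survivors = pop[: POP // 5]              # truncation selection
    pop = survivors + [mutate(random.choice(survivors))
                       for _ in range(POP - len(survivors))]

print("best fitness after evolving:", fitness(max(pop, key=fitness)))
```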
posted by argh at 5:27 PM on July 18, 2009 [1 favorite]


It's the "and let the slices continue running for a while in isolation before you make your measurements" part I'm objecting to, which doesn't happen in the type of software analysis you guys are describing. When you reverse engineer software you can look at the bits in vivo, as it were, without having to disconnect them from the rest of the program first.

Well, you can if you want to (in, say, a virtual machine) but you don't have to. The point is "taking a computer program apart and looking at the pieces" is not a difficult thing to do.
posted by delmoi at 5:32 PM on July 18, 2009


This is the same group of folks, after all, who surprised themselves by deciding that a "mind" made up of people pulling strings couldn't have mental states (a.k.a. the "chinese nation mind" argument).

What are you talking about? The people who make that argument and AI researchers are mostly disjoint. The Chinese mind argument isn't a proof, it's just a thought experiment, and a stupid one at that.
posted by delmoi at 5:33 PM on July 18, 2009 [1 favorite]


The point is "taking a computer program apart and looking at the pieces" is not a difficult thing to do.

The point is that taking a brain apart is.
posted by ook at 5:45 PM on July 18, 2009


If it's really smart, it will wake up, take a look around, and turn itself off.
posted by pracowity at 6:05 PM on July 18, 2009 [1 favorite]


The point is that taking a brain apart is.

Well how would you know? You already have an expanded sense of the impossible when it comes to how easily computer programs can be analyzed. And it isn't that they think it will be easy, but that they think it will be possible. And it seems like the world's top neuroscientists probably have a better idea about what's possible than you do.
posted by delmoi at 6:07 PM on July 18, 2009


delmoi, you absolutely have a point. I always thought it was a stupid thought experiment -- but then, I thought most of them were. The real problem with what I said is that it's hopelessly out of date, and I realized that after I wrote it. I am better aware than that comment would make it appear that "real" AI is no longer done by philosophers of mind.

If I were to try and salvage a point out of that mess, it would be that going for a human brain is probably silly and pointless because there's too much context (e.g., sensory-motor systems with actual input happening) required to really simulate the interfaces that would be available to a real human. (In the terms of my ill-advised example, the "chinese mind" wouldn't have mental states because it's missing a shitload of stuff, like functioning eyes, nose, sensory nerves, etc.) Much more interesting and educational to start small and build and see what arises. And honestly I find it hard to believe that's not the real agenda.
posted by lodurr at 6:12 PM on July 18, 2009


Let *that* spin for 10 years on the distributed power of the tubes as Evolve-HARLIE@Home, and I think you'll have a much better chance of ending up with something that tries to communicate a request that it would rather not be turned off...

I've been arguing for years that the first real AI won't even be recognised as such: It will be an emergent phenomenon resulting from the interaction of multiple autonomous or semi-autonomous systems. It'll probably get "turned off" before it gets noticed.

The key is that laypeople (in which I include most SF writers) so often conceptualise artificial intelligence in terms of consciousness. Consciousness isn't really required for anything -- even a desire for self-preservation is anthropomorphic and arbitrary. The "primitive" organisms we talk about feed and prey and breed only because that's what their evolutionary environment demanded of them. To place the same requirements on artificial life strikes me as arbitrary in the extreme. It's "vivo-centric", to badly coin a term.
posted by lodurr at 6:22 PM on July 18, 2009


Much more interesting and educational to start small and build and see what arises. And honestly I find it hard to believe that's not the real agenda.

People do those things all the time. The purpose of these experiments is to study how brain cells work in order to try to treat neurological diseases like Alzheimer's and Parkinson's. They are trying to simulate the cells because they want to study the cells. Obviously there are much easier methods if you just want to create intelligent-seeming stuff.

As far as "starting small and building up" people have been doing stuff like for years, even on regular PCs. I have a friend who did an experiment where he evolved a neural network that could make a 3d model of a human walk around and do obstacle courses and stuff, all the way back in 2003, again, with just a regular PC and an open source physics engine.
posted by delmoi at 6:24 PM on July 18, 2009


delmoi, we seem to be talking past one another. I'm done now.
posted by ook at 6:25 PM on July 18, 2009


I'll just go ahead and say it. *Sigh.* Oh, frak.
posted by functionequalsform at 6:49 PM on July 18, 2009 [1 favorite]


So August 29, 1997 was a typo, and it's really August 29, 2017?
posted by maxwelton at 8:00 PM on July 18, 2009


This story seems relevant.

(not sure if the author is MetaFilter's Own localroger)
posted by ArgentCorvid at 8:21 PM on July 18, 2009 [1 favorite]


Someone will work out a non-invasive technique for copying a brain. And eventually hardware will be such that it can run a full simulation of a copy. And eventually everyone will back up their brain periodically (once a day? once a minute?). And if they choose, they can have their brain reconstructed biologically upon a messy death.

Why would you think that this would be you?

That is, let's say that you back up your brain, perfectly, and then suffer a messy death. The hospital reconstructs you and restores your brain patterns from the last backup.

This would clearly (assuming things like perfect backup/restore and such) create someone who thinks he or she is you, who acts perfectly like you, who has all your memories, and so forth. But you are still dead. You didn't regain consciousness or self-awareness; you're gone.

Otherwise, think about what happens in the exact same situation, but minus the "messy death" part. You've backed up your brain, the hospital makes a body slug for you, and -- while you are perfectly alive and well -- your brain backup gets restored to the body slug.

Again, that's someone who thinks he or she is you, has all of your memories, et cetera. But your consciousness is still in your body, which is still alive and kicking; your consciousness has not transferred (or expanded) to the new body, has it? Why would it?

And why would it be any different in the case where you died before the brain restoration was done?

This is why I don't* trust teleporters. Sure, someone conscious and self-aware, who thinks he is me, and can convince my own mother of that, just materialized in Uzbekistan. But that doesn't mean I didn't actually die and permanently lose my consciousness and self-awareness when I was just vaporized in Tonga.

*: well, won't
posted by Flunkie at 8:42 PM on July 18, 2009 [1 favorite]


"I've always thought the "but is it a REAL consciousness" thing to be a pointless question...The ghastly messy grey area, of course, is if I successfully produce something exactly like, say, a brain-damaged human. Would switching that off to "fix" it or "improve" it be the same as performing experimental brain surgery on a brain-damaged human without their informed consent? If I couldn't improve them and decided to abandon the experiment, is that equivalent to euthanasia? Am I free to judge the "quality of life" of a brain-in-a-box in a way that I wouldn't feel free to judge the "quality of life" of a brain damaged (or mentally deficient) human?"

Well, that's the thing --- and perhaps I cannot fully appreciate the subtleties of the question --- but wouldn't the only real way to answer those questions be to know if the AI being is conscious or not? Because if all it is, is a clever but hollow mimic --- if it responds to stimuli but does not consciously experience it --- then morally speaking you'd be free to tinker with the thing just as you'd like, no different than reprogramming a malfunctioning Roomba. But if it is experiencing consciousness, then you have created a being, and experimenting on it without its consent would be more like experimenting on a baby.
posted by Diablevert at 9:39 PM on July 18, 2009


An ethical question that's rapidly becoming more pressing: if you actually do create an intelligence, is it ethical to deactivate it?

As long as we find it unethical in theory but ethical enough in practice to work children to death for sneakers, spray people with carcinogens to get some fresh strawberries, or chalk up the "accidental" deaths of some people who look different as not important, I imagine we'll have no problem turning it off when it decides optimizing code to predict the stock market isn't a fulfilling existence.

I suppose we'll know we've succeeded when it learns to rationalize just as well as we do.
posted by yeloson at 10:52 PM on July 18, 2009 [1 favorite]


yoink: "If I store a record of the exact state of every "mind" I turn off, such that it could be recreated with no loss of a sense of continuity to that "consciousness" at any time in the future am I doing anything more than putting that consciousness into a form of suspended animation?"

Aaaah... now I see why it's called Norton Ghost.
posted by JHarris at 12:58 AM on July 19, 2009


Why would you think that this would be you?

Why would you think it wouldn't? I believe that if you made the copy without my dying, then there would be 2 of me, neither with more right to be called me than the other.

Again, that's someone who thinks he or she is you, has all of your memories, et cetera. But your consciousness is still in your body, which is still alive and kicking; your consciousness has not transferred (or expanded) to the new body, has it? Why would it?

I believe if there was a copy made (with enough fidelity), then my consciousness would be copied as well. The original version of me would not have a transferred or expanded consciousness to the second me; the original me would continue as before. But the new me would still have its own consciousness that would be independent of the original me's consciousness, and it would diverge from the original me's consciousness, as we would have different experiences.
posted by Bort at 5:16 AM on July 19, 2009


ook: assuming that QM and consciousness are related just because neither is really understood is a fallacy of reasoning sometimes called "minimization of mysteries". But I think there are more compelling reasons to believe that consciousness somehow is connected to QM - this is a train of thought going back to Wigner (look up "consciousness causes collapse"). For me, part of the appeal is that QM offers a way out of strict determinism, and consciousness isn't, or at least doesn't *feel*, deterministic - it at least seems as though we have free will. For more on this see Kochen and Conway's recent work on the "Free Will Theorem" (side note: Conway is a well-known mathematician responsible for the cellular automaton known as "Life", beloved by hackers).
posted by crazy_yeti at 6:29 AM on July 19, 2009


Just don't give it the keys to the nucular missiles, m'kay?

oh like in Colossus: The Forbin Project :P

THERE IS ANOTHER SYSTEM.

cheers!
posted by kliuless at 6:30 AM on July 19, 2009 [1 favorite]


Why would you think that this would be you?

Why would you think it wouldn't? I believe that if you made the copy without my dying, then there would be 2 of me, neither with more right to be called me than the other.

I think you're misinterpreting me.

Again, that's someone who thinks he or she is you, has all of your memories, et cetera. But your consciousness is still in your body, which is still alive and kicking; your consciousness has not transferred (or expanded) to the new body, has it? Why would it?

I believe if there was a copy made (with enough fidelity), then my consciousness would be copied as well. The original version of me would not have a transferred or expanded consciousness to the second me; the original me would continue as before. But the new me would still have its own consciousness that would be independent of the original me's consciousness, and it would diverge from the original me's consciousness, as we would have different experiences.

The part that I bolded is exactly what I was saying. From it, you seem to be agreeing with me. So, again, given your disagreeing overall tone, I think you are misinterpreting me.

I agree that the copy will "be you" in the sense that it has its own consciousness, with the same memories as you and so forth.

I agree that it would have the "right" to be called "you", and just as much of a right that you do.

In fact, one could easily imagine this being done such that neither you nor the copy (nor anyone else) can actually know which one is the original and which one is the copy (for example, if the brain backup is done instantaneously, and the restore is done immediately thereafter, also instantaneously, in a completely dark room, by a robot which randomizes your positions before turning on the lights).

But that doesn't change the fact that one of you is the original, and one of you is the copy. There's no way to know which is which, but there's no getting around the fact that one of you is the original, and one of you is the copy. And, as you seem to agree with me, the two of you have separate consciousnesses.

So, back to the original issue: Someone suggested that people would backup their brains so that they could be restored in case of death. Please remember that that's the context we're speaking of - I am not and never was arguing that the copy wouldn't be conscious, wouldn't think it was you, or wouldn't have a right to be treated as you. I was merely questioning the actual purpose of making such a copy.

Why do this? I gather that the idea is to prolong your life.

But it doesn't make you regain consciousness. It creates a new consciousness. That new consciousness happens to have a lot in common with your old consciousness, but your old consciousness is still gone.

You will be no more self-aware of this new consciousness than you will be self-aware of any other consciousness in the world - i.e. not at all.

Yes, this new consciousness, from its point of view, is you (as long as it doesn't think about it too much). It has your memories. It has your pet peeves. It has your desires. It has all your thoughts. And sure, I agree that it has a right to be called you, and treated as if it were you.

But just as in the case where all this is done without you being killed, its consciousness is not yours. It is self-aware of itself, but you are not self-aware of itself.

You are still stopped. Ended. Dead.

So you have not benefited from this procedure at all.

Perhaps your loved ones have benefited from it, or your boss if you were a productive worker. The new consciousness that was created certainly has benefited from it. But you haven't. You're dead.
posted by Flunkie at 6:58 AM on July 19, 2009


This story seems relevant.

Thanks, that was an awesome story!

But I think there are more compelling reasons to believe that consciousness somehow is connected to QM - this is a train of thought going back to Wigner (look up "consciousness causes collapse").

Personally, I'm not compelled by the "consciousness causes collapse" theory or the Copenhagen interpretation. How to interpret the wave function collapse is an open question that I believe indicates an incomplete theory that once completed will not involve consciousness at all.
posted by Bort at 7:02 AM on July 19, 2009


I think you're misinterpreting me.

You're right, I was.

But it doesn't make you regain consciousness. It creates a new consciousness. That new consciousness happens to have a lot in common with your old consciousness, but your old consciousness is still gone.

OK, I think we are in total agreement up to this point, at which I totally disagree; or perhaps just entirely misunderstand. :)

In what sense is the copy a new consciousness? New as in an instance that didn't exist before I'm ok with. But I think you mean new as in different somehow - somehow making a copy that is physically separate from the original kills some kind of essence of me that won't be there in the copy. Is that it? If instead of the copy being made as an entirely different instance, suppose that I decide to slowly replace my neurons with computer chips that simulate them perfectly, one at a time. Do I end up with a "new" consciousness and the "old" one no longer exists? At what point?

But just as in the case where all this is done without you being killed, its consciousness is not yours. It is self-aware of itself, but you are not self-aware of itself.

I think you are mingling two different ideas/situations. One being that when there is a copy made (let's refer to the original as A and the copy as B), A will not feel any different or have any awareness of the internals of B and vice versa. I agree with that assessment. But then you also say that this means that in the case of B being made and A no longer existing that B will not be you. I agree that B will not be A, but B is as much you as A is. There will be an instance of me in the world that has a continuous consciousness experience and memory of being me and "waking up" (if you will) in a different physical medium. And this version is me.

So I think in that situation that I will regain consciousness (although I don't know if I conveyed my reasoning very well, if at all).
posted by Bort at 7:29 AM on July 19, 2009


From a "pseudo-philosophical" viewpoint, you could say that rather then consciousness "collapsing" the wave form, what actually happens is that the waveform envelopes the mind and so you have two separate superpositions of your brain, but you only experience one of them at a time, like how the fork() function works.

I kind of doubt that, but it's something to think about.
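(For anyone unfamiliar with the analogy: a minimal fork() demo, Unix-only. After the call, both branches exist and keep running, but each process only ever sees its own side of the split.)

```python
# Minimal fork() illustration (requires a Unix-like OS). Both processes run
# the same program from the same point, but each experiences only one branch.
import os

pid = os.fork()
if pid == 0:
    print(f"child  {os.getpid()}: I only see this branch")
else:
    print(f"parent {os.getpid()}: I only see this branch (child is {pid})")
    os.waitpid(pid, 0)
```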
posted by delmoi at 7:31 AM on July 19, 2009


re: conway and kochen's free will theorem...
posted by kliuless at 7:34 AM on July 19, 2009


But then you also say that this means that in the case of B being made and A no longer existing that B will not be you. I agree that B will not be A, but B is as much you as A is.

You're just using terms differently than I am. You're saying the copy "is" you; in the sense that it is a self-aware entity that thinks it is you and has a right to be treated as you, I agree. But we also agree that the copy has a different consciousness than the original does. They're different people; a hell of a lot in common, but different people. I was using "you" in the sense of referring to one particular self-aware entity - specifically the original. But if you don't want me to use "you" in this sense, fine:

A's consciousness is gone. A is dead. A did not regain life. A did not regain consciousness. A is not self-aware. A is not aware, period. A did not benefit from this procedure at all.

There is some other consciousness out there now. That consciousness has a lot in common with A's consciousness. But A is not aware of that new consciousness, any more than he or she is aware of any other consciousness.

Yes, there's a self-aware B who is very like A. But A is gone.

And, back to the issue: A created a backup of his brain. Why did A do this? What benefit did A think he or she would get from it?

Because A gets no benefit from it. If A was doing it altruistically, so that A's boss wouldn't have to fill a job vacancy and A's kids didn't have to grieve and some new consciousness would spring into existence, great.

But if A was doing it so that A's consciousness, self-awareness, life would continue, nope, sorry. The self of A is gone, and won't be retrieved by creating the self of B.
posted by Flunkie at 7:52 AM on July 19, 2009


And, back to the issue: A created a backup of his brain. Why did A do this? What benefit did A think he or she would get from it?

Because A gets no benefit from it.


Hmmm... Let me put my view in these terms:

Right now you can view the person "bort"/me in two ways. One as A in the above, but two as the "me"/I that I care about - what I see as the software of the mind that rides/exists on the brain. This software I benefits because when B is created, this software I does gain its consciousness back. The instance A doesn't change and if it doesn't die, it will live on as a separate instance of "me"/I that doesn't benefit from the creation of B at all. But I don't care about that - I care about existing in some form that still lives.

So, from that view, I benefit from a backup copy that gets instantiated if this instance dies. I will not be dead.

But if A was doing it so that A's consciousness, self-awareness, life would continue, nope, sorry. The self of A is gone, and won't be retrieved by creating the self of B.

I don't understand how you can say that. What's the difference in the "self" that exists in/on A and the exact same "self" that exists on B? What's the response to the neuron by neuron replacement question - I think that might help illuminate our differences (assuming you're interested - I'm in this "argument" to help understand my own beliefs as much as understanding yours and our differences).
posted by Bort at 8:20 AM on July 19, 2009


Thanks, that was an awesome story!

It's #4 in a series, the rest can be found here.
posted by ArgentCorvid at 8:35 AM on July 19, 2009


What's the difference in the "self" that exists in/on A and the exact same "self" that exists on B?

Do you agree that you have a sense of self?

Do you agree that I have a sense of self?

Do you agree that your knowledge that you have a sense of self is direct?

Do you agree that your knowledge that I have a sense of self is indirect and assumed, through evidence and Occam's Razor?

Do you agree that that's a fundamental, large, and important difference, from your point of view?

If you get cloned and brain-duplicated, your knowledge of your clone's sense of self is exactly the same as your knowledge of my sense of self. Indirect and assumed. Fundamentally, largely, and importantly different than you (from your point of view).

I really am not sure how to be any more clear. What is "you", from your point of view, other than your sense of self, your self-awareness, your consciousness? Which you do not share with me? Which you do not share with your clone? Which you do not have after you die, even if a perfect clone of you gets made, even with its own self-awareness that has the same memories as you had?
posted by Flunkie at 8:47 AM on July 19, 2009


ArgentCorvid -- yes, I'm the localroger from the hulking wreck of K5 who wrote Mortal Passage and some other stuff. I've archived the whole story in one place.

Flunkie, accompany me on a thought experiment. Let's say that through the magic of an arbitrarily advanced technology, we take your body apart atom by atom and catalogue every bit of positional and chemical bonding information that can be gathered. Furthermore, we store these atoms on individual little atomic shelves so we can find them again. A hundred years later (maybe we've found the cure for your incurable disease) the nanoassembler takes the original atoms your body was made of and assembles them exactly as it found them. Is the result of this "you?" I would consider any other conclusion very unreasonable, and only supportable by appeals to metaphysics.

Now let's say that those atomic shelves are a bit expensive, and instead of storing all of the atoms individually we store them in bulk. Physics is quite clear that there is no difference whatsoever between any two oxygen atoms, so clearly (to me at least) this is still "you" even though the oxygen atom that used to be in your big toe is now in your brain; it's part of your original body and doing the same thing.

Now let's say, again since physics insists there is no real difference between any two oxygen atoms, that we don't use the original matter at all. Maybe we email the assembly directions to our new colony at Proxima Centauri and rebuild you there. Why would this not be "you?"

But now you might object -- legitimately -- that if we can assemble "you" once we could do it again; and so we might if we had that technology. Which copy is "you?" The only reasonable answer is that both are.

If we could somehow make the copy without destroying the original (IME far more difficult and pretty much requiring that the Universe work in a very different way from what physicists think) then that copy would STILL be just as much "you" as the original; arguing about originality would be no more relevant than identical twins making an issue over who was born first. It's an irrelevant technicality.

Now of course once two of "you" exist, you will begin to diverge; one of "you" might take up an interest in the piano while the other gets a job driving trucks. Your life experiences will diverge, eventually creating two somewhat different people with a common claim on your life experience up to the point of copying. It does not surprise me that some people simply can't wrap their head around this but it is the only possible conclusion. We don't even really have words in our language adequate for dealing with this state of affairs; we've only really had hints of the possibility since the invention of computers.

And of course, having come this far, we must reach a similar conclusion for experiments at lower fidelity. Suppose we find that a microtome and electron microscope can collect enough information to assemble a copy that asserts its you-ness, likes the same TV shows and novels you do, and insists it owns your stuff. The fact of the copy's demonstrable verisimilitude would seem to make the case that, however imperfect, it is indeed you.

And to take a final tack, let's say that your original natural body has a mini-stroke, killing a modest but macroscopic number of brain cells; your quality of life isn't affected and you still like the same TV shows and can form new memories and skills just fine, but you can't play the piano any more or remember most of your childhood. Is this still "you?" How could this be any more "you" than the microtome-sourced simulation that actually does still have all of your memories and skills?

Finally, let's say that the last scenario is something we can predict with 100% accuracy will happen. (Actually, we can predict with 100% accuracy that something considerably worse will happen to every single one of us; dying puts a rather significant crimp on your quality of life.) However, we know that the microtome copy technology exists which can create a new "you" immune to biological cell death. Do you volunteer?

I know there are a lot of other issues (I've written about a lot of them) but assuming a mature technology where the ethical issues were pretty much worked out, I would not hesitate.
posted by localroger at 9:08 AM on July 19, 2009 [2 favorites]


On not previewing -- I should learn to preview.
posted by localroger at 9:09 AM on July 19, 2009


Again, I'm not saying that both people don't think that they're you.

I'm saying that from the point of view of one of the two - either one of the two - the other is not the same person.

They don't share thoughts. They're not aware of each other's thoughts. They don't share emotions. They're not aware of each other's emotions. They don't share anything but a common set of memories and a common set of behavior patterns. They don't share self.

I mean, come on. You get cloned. You may or may not know which of the original or the clone your physical body is - doesn't matter for this little thought experiment. The other one of "you" starts crying. So do I.

Do you know that I am sad? Yes.

Do you know that the other of "you" is sad? Yes.

Did you know that I was sad before I started crying? No.

Did you know that the other of "you" was sad, before he started crying? No.

When you see me crying, do you think "He is sad"? Yes.

When you see the other of "you" crying, do you think "He is sad"? Yes.

When you see me crying, do you think "I am sad"? No.

When you see the other of "you" crying, do you think "I am sad"? No.

He is not you. Not from your self-aware point of view. From your self-aware point of view, he is other. Not self. Just as, from your self-aware point of view, I am other, not self. The fact that he shares your memories does not change that.

And if you die, or are dead before he is created, that doesn't magically make other become self - neither the other of me or the other of him. Self is gone. Or in heaven, or whatever. But it sure isn't in me, or in that clone.
posted by Flunkie at 9:34 AM on July 19, 2009


Flunkie, the point isn't whether the two people are the same person; given long enough to live they will become completely different people. (One of them even digitally murders another in my story Revelation Passage.) The question that is relevant is whether they are the person who was copied.

Now, the whole way you are using the word "you" falls down here because, by your logic, I am no more the person I was yesterday than your two hypothetical copies are the same person as each other. After all, my experience of yesterday will never, for the rest of my life, be anything more than remote; unless it was a very special day I'll probably forget it (and the commonness of this is why we take pictures and keep diaries). Eventually I will have quite different opinions and skills than that person who, frozen in time, sat at this very desk yesterday. But we tend not to think of that ghost in the past as "other" even when we have made quite dramatic life changes.

You are using words like "self-aware" and "other" as if they possess a gravitas which they do not. Tell me specifically, how do you reply to my first hypothetical where you are disassembled and then reassembled with the same atoms in exactly the same relationships? Because if you think that is Other I think you are being silly, and if you think it is you then I'd assert that the rest of my (and bort's) argument flows pretty inevitably.
posted by localroger at 10:07 AM on July 19, 2009 [1 favorite]


Tell me specifically, how do you reply to my first hypothetical where you are disassembled and then reassembled with the same atoms in exactly the same relationships? Because if you think that is Other I think you are being silly
You've been vaporized. You're gone.

Otherwise, what if you were not disassembled; a copy of you was created using different atoms, but in the same relationships? Is that copy, from your point of view, self? Or other?

Obviously from his or her point of view, it's self. But I'm asking from your point of view. Do you know what it's thinking? Do you know what it's feeling? No, obviously not, no more than you know what I'm thinking or feeling. And when you die, you still won't know what he or she is thinking or feeling, any more than you will know what I am thinking or feeling.

And if you died before he or she was created, how is that supposed to change things? How is that supposed to magically make you know what he or she is thinking or feeling?
I think you are being silly, and if you think it is you then I'd assert that the rest of my (and bort's) argument flows pretty inevitably.
Yeah, well, I think that's patently absurd, and I assert that I'm done with this conversation. Goodbye.
posted by Flunkie at 10:17 AM on July 19, 2009


I remember pondering what would happen to consciousness and self if humans could be copied when I was in elementary school. I haven't thought about it too much since then.

If there's one thing humans have proven throughout history, it's that they don't let little things like abstract moral and ethical questions get in the way of doing whatever they think is cool. For everyone who asks "what is it to say I exist, and what happens to that existence if I can be copied?" 10 people will say: "Upload my brain to a computer?! That sounds Awesome!!!"

(or maybe the ratios will be reversed, but enough people will want to do it that it will be done if it can be)
posted by delmoi at 10:35 AM on July 19, 2009


delmoi -- I remember people having exactly this same argument about the fictional Star Trek transporter, e.g. if it takes you apart here and puts you back together on the ground, is the result still "you?" I suspect by the time we're capable of building such things we will have gotten used to the implications though.

Flunkie -- you clearly believe there is some metaphysical component to consciousness which is not expressed by mere matter; I don't believe that, so you're right, there's no point discussing it further.
posted by localroger at 11:17 AM on July 19, 2009


Well, that's the thing --- and perhaps I cannot fully appreciate the subtleties of the question --- but wouldn't the only real way to answer those questions be to know if the AI being is conscious or not? Because if all it is, is a clever but hollow mimic --- if it responds to stimuli but does not consciously experience them --- then morally speaking you'd be free to tinker with the thing just as you'd like, no different than reprogramming a malfunctioning Roomba. But if it is experiencing consciousness, then you have created a being, and experimenting on it without its consent would be more like experimenting on a baby.

As I say, we don't know if other human minds are conscious in the way we take ourselves to be (which, itself, may be merely an illusion). There is no possible way of knowing that other people are "conscious" rather than terribly clever automata. By "no possible way" I don't mean "we can't yet do that technically," BTW, I mean it's simply an impossibility--there's no imaginable experiment that could demonstrate it to us. So the only useful test for whether a computer has become "conscious" is "does it behave in the way we expect conscious minds to behave." If it does, then, it's a conscious being.

Perhaps an easier way to think about this is to imagine an encounter with non-humanoid aliens. How will we decide if they are "conscious" beings and therefore merit the same kind of ethical consideration we advance to other humans (as opposed, say, to treating them as cattle or pets)? There is no imaginable way of entering into their subjective experience and saying "oh yes, that's a real consciousness." The only tests we'll have will be behavioral tests. If they respond in ways that make sense to us as intelligent and conscious responses, then we'll treat them as "other minds." If not, we won't.
posted by yoink at 11:34 AM on July 19, 2009


The question that is relevant is whether they are the person who was copied.

And the answer to this is clearly "no", for the simple and obvious reason that they are not made of the same matter.

That there is "no difference" at an atomic or sub-atomic level is irrelevant: They don't occupy the same space, and variance will begin immediately.* In fact, if you activate a "clone" in a state enjoyed by the "source" at any previous time, variance will be instantaneous.

Really, the whole "which one is you" part of the transporter argument is kind of silly at that level. If you duplicate yourself, it's a new "you", and those scare-quotes absolutely belong there because it's clearly a different being. It's a bit like the character in The Prestige who "wonders" whether he'll be the one in the tank or the one in the Prestige: The answer is that he's always going to be the one in the tank, but the new him, in the Prestige, is going to have that history of having always been the one on the balcony. The one on the balcony will always only ever have memories of being the one on the balcony. He'll see it from his subjective perspective as being a game of chance. But it's actually absolute: If he's on the platform, he's going into the tank. It's fairly obvious.

Which is why Caine's ingenieur shakes his head in puzzlement: He'd never realized his protege was that stupid.
--
*Of course, you could stipulate a scenario in which two beings are given exactly similar sensory inputs and placed into situations that would cause their states to remain exactly similar. They're still not occupying the same space.
posted by lodurr at 12:23 PM on July 19, 2009


Consciousness is irrelevant until you can say what consciousness is and definitively argue that consciousness is required for intelligence. It's not at all clear to me that it is.

This requirement of "consciousness" -- it's just another example of human exceptionalism. I remain unconvinced that there's anything qualitative that distinguishes us from "lower" animals.
posted by lodurr at 12:26 PM on July 19, 2009


Consciousness is irrelevant until you can say what consciousness is and definitively argue that consciousness is required for intelligence. It's not at all clear to me that it is.

This requirement of "consciousness" -- it's just another example of human exceptionalism. I remain unconvinced that there's anything qualitative that distinguishes us from "lower" animals.


Well, as I say, it's largely a matter of faith. You have a problem, of course, at the "other end" when you extend to animals the same value that you extend to humans. Why not assume that plants have a consciousness (or have whatever it is that gives the life/death/suffering of a being the heightened moral significance that we [almost] all agree human life/death/suffering has)? That is, I agree that I can't "prove" some distinction between the moral value of a human life and a dog's life, but by the same token you can't "prove" the distinction between that dog's life and the life of the bacteria you kill by the billions upon billions every day. There has to come some point at which we say "this is too unlike what I recognize as a moral being." And that point is always going to be arbitrary to some degree.
posted by yoink at 12:39 PM on July 19, 2009


I have only one fundamental problem (that I'm presently aware of) with extending "consciousness" to other things: I don't know what that kind of consciousness would be. It would be very, very different from anything else that I've heard called consciousness.

Mind, I'm not saying you're making a unique claim -- I've heard it a lot, so I'm lumping you in with a bunch of other claimants, so to speak, and if you'd like to take issue with that I'll treat you individually ;-).

One approach you could take: Define "suffering." What precisely does it mean for something to suffer?

But I'm left still suspecting that what you're talking about is an ineffable essence. The problem with all those ineffables is that they're ineffable: We can't really talk about them, except in handwaves. They are defined by their indefinability. They're unfalsifiable.

So if someone says "it's obvious" that a machine-mind doesn't have mental states or doesn't have consciousness or doesn't suffer, I always have to ask "how?" And vice-versa. As long as we use ineffable criteria, this is a fundamentally unresolvable debate.
posted by lodurr at 1:00 PM on July 19, 2009


this is a fundamentally unresolvable debate

Er...wasn't that my point?
posted by yoink at 1:01 PM on July 19, 2009


Guess I'm not following you. Are you saying that you're only interested in using indefinite criteria? If you are, then it's fundamentally unresolvable.

I don't think indefinite criteria are required. Perhaps I wasn't clear: I don't think the term "consciousness" means what you seem to think it means. You seem to have a mystic definition that can potentially extend to rocks and trees. I take a more prosaic view, I would say: I think consciousness means something more akin to the way most people use the word.

I'll put this another way: If you require a life-essence of some kind, then this is a religious question and we needn't discuss it any further. If you think it's possible to answer these questions through experimentation -- if this is a matter that's vulnerable to empirical enquiry -- then we need to be able to resort to definable terms.
posted by lodurr at 1:07 PM on July 19, 2009


"can potentially exist to" > "can potentially extend to"
posted by lodurr at 1:08 PM on July 19, 2009


Flunkie -- you clearly believe there is some metaphysical component to consciousness which is not expressed by mere matter; I don't believe that, so you're right, there's no point discussing it further.
Oh, please. I never said anything about anything metaphysical. In fact, to me, you are the one who seems to be asserting magic: that his consciousness somehow benefits your consciousness after you have been vaporized.

Do you know your clone's thoughts? Any more than you know mine?

Do you know your clone's feelings? Any more than you know mine?

Yes or no.

You certainly could have better guesses at them than you do at mine. But you don't know them. You know yours. You don't know mine. You don't know his. You don't know not-yours.

my assertion was false
posted by Flunkie at 1:09 PM on July 19, 2009


I think consciousness means something more akin to the way most people use the word.

I'll put this another way: If you require a life-essence of some kind, then this is a religious question and we needn't discuss it any further. If you think it's possible to answer these questions through experimentation -- if this is a matter that's vulnerable to empirical enquiry -- then we need to be able to resort to definable terms.


Consciousness is something we experience subjectively. That subjective experience is certainly knowable--in a non-metaphysical sense--in terms of its quiddity as an experience. It is strictly and absolutely impossible for me to know if consciousness as I experience it is the same for you as it is for me.

Now, we could say "well, let's get rid of purely subjective knowledge--that's of no use to science." Fair enough. If we're talking solely about what is available to scientific study, how do we determine whether a human being is "fully conscious"? Well, they have to be able to communicate with us, right? They have to be able to answer questions about their current state, their past states, their future intentions and desires, right? Fine--I'm perfectly happy with that as a definition of "consciousness": "consciousness is the ability to behave in the ways we expect a fully-conscious human being to behave."

But wait--you want to say that dogs and cats and sheep are "conscious" too? Huh? They sure as hell don't meet those behavioral criteria, do they? What do you mean by that, then? That they react to stimuli? O.K., so do anemones. That they show purposive behavior? So do robots. So if all we're going to talk about here is what is strictly available to us for shared, scientific analysis, the claim that we can't really make a firm distinction between "human consciousness" and, say, "sheep consciousness" just seems to me to be demonstrably wrong.

Even if we look at some genuinely difficult cases (like apes that have been taught language, for example), it's clear that we would not describe an adult human who behaves in exactly the way those apes behave as being "fully conscious" or as having an "unimpaired consciousness."

So, either "consciousness" is what we take it to be subjectively (which, I readily grant, might be nothing but an illusion)--i.e., a "sense of self" or "identity" which we grant to other human beings in a pure act of faith (I mean this not as a metaphysical statement but rather as an ethical one: in order to act ethically I must assume--without any possible proof--that other human beings' lives have exactly the same inherent value as I find my own life to have; if they're all unselfconscious robots simply acting out a clever program I have no ethical obligation to them and I cannot know for a fact that that is not the case). And then we have the problem you raised that the exact point at which I stop extending this assumption of "selfhood" is always arbitrary (dolphins yes, tadpoles no?).

Or we say "subjective don't cut it when it comes to science" and then we have a much clearer position. Obviously there are no other beings on earth who display anything like the signs of unimpaired human consciousness. But this scientific clarity gives me no help with ethical dilemmas.
posted by yoink at 1:32 PM on July 19, 2009


That clone discussion got explored here as well - I was unable to convince interlocutors on that thread that a clone would be a distinct consciousness and no more "you" than an identical twin, either.

But how can you claim two people are one consciousness? How are you defining consciousness? I am thinking of the internal experience of sensations and thoughts that a thinking thing has. Two different bodies cannot have the same consciousness. So, even if they are cloned from the same original, they must be separate experiencers now. If they are having separate experiences, then they are separate people, even if they have the same DNA - just like identical twins are separate people.
posted by mdn at 1:50 PM on July 19, 2009


But how can you claim two people are one consciousness? How are you defining consciousness? I am thinking of the internal experience of sensations and thoughts that a thinking thing has. Two different bodies cannot have the same consciousness.

I don't think anybody would argue that the two consciousnesses continue forever to be identical. I think the argument would be that at the instant of making the clone, the two are identical, and that from that point on they diverge as their experiences diverge. The one is "you with abcd... experiences" and the other is "you with wxyz...." experiences.

But that's no more startling than saying that "if I'd never made that decision to go to Paris in my senior year, I'd be a different person now." Yes, you would--you'd be "you-who-never-went-to-Paris-in-your-senior-year." The clone is "you-who-got-cloned."

(Of course, the word "clone" in this context is horribly misleading--I should say "atomic-level double" or some such).
posted by yoink at 2:51 PM on July 19, 2009


Flunkie -- I asked a simple question. If you were taken apart, your molecules stored in some different arrangement, and very importantly the information about where they came from preserved, and later those molecules were reassembled into exactly the same form you have now, the question was, is that still you?

You said no. You're vaporized, gone.

I say, you'd wake up unaware that any time had passed and totally conscious of being yourself, thinking the same thoughts you were thinking before being disassembled.

The only way to draw your conclusion from my stipulation is to think that there is something other than atoms and molecules that makes us up. I don't believe that.

I have been a bit coy, and should probably go ahead and state my position: I do not think "you" are made of atoms and molecules. "You" are the information encoded by those particles. "You" are not your brain, you are the information contained within your brain (and, to a lesser but still important extent, the rest of your body). "Life" is the development of that information structure through a recursive mechanism that is encoded in the structure of the brain and body.

This is a very new way to think, with vast implications, and it invalidates a lot of what has been canon in many philosophical theories. But I really don't think there is any other way to proceed.
posted by localroger at 3:00 PM on July 19, 2009


Flunkie -- I asked a simple question.
Yes, and I answered it. You haven't done the same for my simple questions.

Do you know that I'm sad before I start crying? Yes or no. When you determine that I am sad, do you think "I am sad"? Or "He is sad"?

Do you know that your clone is sad before you start crying? Yes or no. When you determine that your clone is sad, do you think "I am sad"? Or "He or she is sad"?

Do you know that you are sad before you start crying? Yes or no. When you determine that you are sad, do you think "I am sad"? Or "He or she is sad"?

One of these is not like the others. Which? Why?

Does any of this change if you die, and your clone doesn't? Or if you are dead before your clone is made?
posted by Flunkie at 3:07 PM on July 19, 2009


Yes, and I answered it. You haven't done the same for my simple questions.

Do you know that I'm sad before I start crying? Yes or no. When you determine that I am sad, do you think "I am sad"? Or "He is sad"?

Do you know that your clone is sad before you start crying? Yes or no. When you determine that your clone is sad, do you think "I am sad"? Or "He or she is sad"?

Do you know that you are sad before you start crying? Yes or no. When you determine that you are sad, do you think "I am sad"? Or "He or she is sad"?

One of these is not like the others. Which? Why?

Does any of this change if you die, and your clone doesn't? Or if you are dead before your clone is made?


Isn't all this just destroying a straw clone? Is there anybody here who is arguing that if you are cloned "at the atomic level" that you and your clone continue to be "one consciousness" forever? Isn't the argument simply that you and your clone start from a condition of absolute identicalness and then (rapidly) diverge?

It's simply not a conceptual challenge to the "teleported people remain the same people" argument to point out that if teleportation is possible, so is duplication.
posted by yoink at 3:20 PM on July 19, 2009


Isn't all this just destroying a straw clone?
No.

I entered this argument to raise a point in response to someone who claimed that people would back up their brains so as to prepare for "messy death". Specifically, I questioned how, exactly, it would actually help them in that situation. Can you tell me how? Given that (as we seem to agree) the consciousness of the new copy is not your consciousness?

Again, it helps (or at least might help) your loved ones. And it helps your boss. And it helps some newly created person who has the same memories as you. None of that is in question.

How does it help you?

Wave your hands, as many here have, saying it helps me because the clone is you. But that's all it is - handwaving. And my questions, which you dismiss as an attack against a strawman, demonstrate that amply: He is not you.

If he were you, you would answer my questions about "you" the same way that you would answer my questions about "your clone". But you don't, do you? You answer my questions about "your clone" the same way that you answer my questions about me.

Finally, "teleportation" was originally merely a side issue. To remind you once again, we were talking about restoring someone's brain patterns to another body after death.
posted by Flunkie at 3:33 PM on July 19, 2009


Flunkie, again you are using the word "you" in a way that does not stand up.

My first thought experiment was phrased the way it was very deliberately. You either believe you are made of atoms and molecules, or you believe something else is involved. If the latter, we have nothing to discuss.

I suspect your problem is that you are hung up on an idea of "continuity" which you don't think is metaphysical, but really is. If you're disassembled into constituent atoms, you're poof, gone, even if the information has been stored to reassemble you exactly so that you step out of the booth not even aware that a hundred years have passed, thinking the same thoughts and even made up of the same atoms.

If you don't believe such a creature is "you" you don't believe in the concept of "you" at all. You believe in something metaphysical which is broken when you are disassembled.

As for your obsession with whether "you" would be feeling what "other" does: For large amounts of your "life" you don't "feel" anything at all. You're unconscious. If somebody disassembled and reassembled you while you were asleep, or knocked out from a sucker punch, when you came to you would have absolutely no basis at all for thinking you were anything but your original self. You would not know whether you were made of the same atoms or number 37 in a series. You would feel that you are you, as would all 36 others. There would be no loss of continuity for any of you, though you would start to diverge as soon as your life experiences differed.

The original you does not die in the sense you are talking about when you are disassembled; if you are that hung up on continuity it ceases to exist every time you lose consciousness for any reason. And that happens to all of us at least once a day.

So the way it helps you is that really, fundamentally, it is you. It is more you than your body would be if it had a stroke after the copy was made. It will have your memories and carry your legacy and if it's made right its children would be considered as yours. There is nothing separating you from that copy except a break not really different from what happens every night to all of us.
posted by localroger at 4:05 PM on July 19, 2009


"By "no possible way" I don't mean "we can't yet do that technically," BTW, I mean it's simply an impossibility--there's no imaginable experiment that could demonstrate it to us. So the only useful test for whether a computer has become "conscious" is "does it behave in the way we expect conscious minds to behave." If it does, then, it's a conscious being."

Yeah, I follow you there. I suppose what I'm arguing is that, though we cannot rationally and inherently prove that another human is experiencing consciousness, we infer that they are, and therefore treat them as such. I think that inference is important; it's quite a big world, and so there may be 1 or 2 among the 6 billion, but I think you'd have quite a struggle to find a true agnostic on the existence of consciousness in other humans. You are correct to point out that this is something of a leap of faith, yet that judgement seems to have an important role in governing our moral assessments.

I do not know that we could make that same inference so easily about an entirely artificial construct, however. Practically speaking, you're right, in that we'd simply do it anyway. But then again, a great deal of our own thoughts and reactions are entirely unconscious. People are capable of performing actions of great complexity unconsciously, retaining no conscious memory of them --- driving to work, say.
posted by Diablevert at 4:15 PM on July 19, 2009


Yoink, re. consciousness, you're asking excellent questions. I've been asking the same questions for years. But I'm unclear on:

a) why you seem to think we need to be satisfied with subjective criteria;
b) why the subjective experience needs to be considered at all.
posted by lodurr at 4:30 PM on July 19, 2009


Wave your hands, as many here have, saying it helps me because the clone is you. But that's all it is - handwaving. And my questions, which you dismiss as an attack against a strawman, demonstrate that amply: He is not you.

Perhaps I can help you here. With respect to present day me, both future clone-me and future me-me are "future-mes." This is where you are getting confused. You're saying "will future me-me think of future clone-me as "me." To which, of course, the answer is "no." Because future me-me and future clone-me will have different lives and different selves. But with respect to present-day-me they are simply "alternate future selves"--in exactly the way that the present-day-me has a hypothetical "alternate self" who is "the me who didn't choose to go to Paris in senior year."

So, to recap--you do not disprove the contention that "if I make a back-up-me to boot up in the event of my death, then I'm effectively immortal" by proving that "if two of those back-ups are made they will be different people."

The difference that you're getting hung up on is the same as the difference between myself-last-week and myself-this-week. If you can accept that "I" remain "me" even as I change then you should be able to accept that "I" can have two separate paths of development (me-me and clone-me) who remain versions of "me" even as they undergo divergent changes.
posted by yoink at 4:35 PM on July 19, 2009


a) why you seem to think we need to be satisfied with subjective criteria;

I don't think we need to be. I accept entirely the fairness of saying "subjective criteria are of no use in scientific inquiry." I suspect, however, that you were smuggling some of the subjective aspects of consciousness into what you were attributing to animal consciousness.

b) why the subjective experience needs to be considered at all.

I think that from an ethical point of view we have to assume that our subjective experience of consciousness is valid. I don't think one can have an ethics that doesn't assume freedom of the will. Or, I suppose, you could have one, but it wouldn't matter a damn.
posted by yoink at 4:39 PM on July 19, 2009


The difference that you're getting hung up on is the same as the difference between myself-last-week and myself-this-week. If you can accept that "I" remain "me" even as I change then you should be able to accept that "I" can have two separate paths of development (me-me and clone-me) who remain versions of "me" even as they undergo divergent changes.

Well, that's a nice phil-o-mind take on it, but it's both divorced from empirical reality and (as I recall) outside the mainstream of thought on continuity of self. Last time I read much in this problem -- which was, admittedly, in the late 80s -- the mainstream thinking in phil-o-mind seemed to me to be that you needed a perception of continuity between the two selves. Since two discrete bodies don't, as far as we know, have continuity of experience, it's not the same "you" in two places at once.

In any case, as I noted, that answer is more or less irrelevant to empirical reality. The bodies are in different places, and (again, as far as we know) they aren't psychically or physically connected. (If it turns out that they are, this all changes, but we have no reason at this point to suppose they would be.)

Think of it this way: Ralph is cloned/copied instantaneously into Ralph and Ralph[1]. If we prick Ralph, does Ralph[1] bleed? If not, they're not the same person.

I'm having a really hard time understanding any test that would have Ralph and Ralph[1] both "Ralph" in any meaningful or useful way.
posted by lodurr at 4:45 PM on July 19, 2009


I wasn't attributing anything to animal consciousness. I don't recall making any statements about it.

What I recall arguing is that consciousness has no necessary relation to intelligence. I'm also not sure it has any real bearing on free will. (Yes, I am aware that conventional notions of consciousness and free will require conscious choice. I think those notions betray very sloppy thinking about the nature of free will and consciousness.)
posted by lodurr at 4:48 PM on July 19, 2009


BTW, the scenario where Ralph and Ralph[1] are instances of the same person looks to me like an ethical landmine. If they're not the same person, it's very simple: They're both human beings, and you don't do anything that would violate Ralph[1]'s human rights. E.g., you don't make an army of Ralph[1...x+1]s.

But if they're the same, then:
  1. Ralph gets to make that decision for all immediately proximate Ralphs;
  2. You have the problem of figuring out where in the experiential time frame of the collective Ralph-entity Ralph (or Ralph[x]) loses the right to make decisions for the other Ralphs.
posted by lodurr at 4:54 PM on July 19, 2009


I entered this argument to raise a point in response to someone who claimed that people would back up their brains so as to prepare for "messy death". Specifically, I questioned how, exactly, it would actually help them in that situation. Can you tell me how? Given that (as we seem to agree) the consciousness of the new copy is not your consciousness?

What are you talking about? Of course it would help them -- they would feel more comfortable about dying if they held the same position as localroger.

And I have to agree with him. You're just not thinking very clearly about what's real and what's "metaphysical". The difference between you and the new copy wouldn't be measurable, and if it's not measurable, it doesn't exist.

Now if you made a copy, rather than a restore from backup, so that there were two of you, then the difference would be measurable. You would be over here and they would be over there. The two copies would begin to have different experiences and they would be able to tell each other apart that way.
posted by delmoi at 4:55 PM on July 19, 2009


Sometimes I wonder if consciousness debates are clouded because some people aren't conscious and simply cannot understand (tongue in cheek, more realistically I suspect people just have different definitions of consciousness).

Let's ignore the whole body problem and say you just duplicate your brain. Let's say it perfectly emulates you in every way, except we wrote it in computer code instead of neurons to make things especially simple. This brain would act in the exact same way as you would now. I would argue that this computer brain is no more conscious than your email client. It would act the same, but it wouldn't have that ineffable "I am here" to itself; it would merely have instructions. The machine may be self-modifying, but it wouldn't be an observer. To argue that consciousness would be created purely out of machine instructions (whether they're wet or made out of silicon) is as much faith as any religion out there.
posted by amuseDetachment at 4:56 PM on July 19, 2009


Think of it this way: Ralph is cloned/copied instantaneously into Ralph and Ralph[1]. If we prick Ralph, does Ralph[1] bleed? If not, they're not the same person.

You're talking about the hardware, while we're saying the self resides in the software. Brain/body = hardware, mind = software.
posted by Bort at 4:59 PM on July 19, 2009


Since two discrete bodies don't, as far as we know, have continuity of experience, it's not the same "you" in two places at once.

I'm not sure why you think you're arguing with me, here. I'm saying that Ralph and Ralph[1] are NOT the same person as each other.

Here's what I wrote:

You're saying "will future me-me think of future clone-me as "me." To which, of course, the answer is "no."

Notice that "no"?

The only point I'm making is that from the perspective of past Ralph (back when there was only Ralph and no Ralph[1]) both future-Ralph and future-Ralph[1] are continuations of his current "self." They just happen to continue on divergent paths. Similarly, both future-Ralph and future-Ralph[1] will look back on the same "past-Ralph" as "me-in-the-past." Each will have exactly the same relationship to, say, their memories of being eight years old as a relationship to "my earlier self."
posted by yoink at 5:00 PM on July 19, 2009


The difference between you and the new copy wouldn't be measurable, and if it's not measurable, it doesn't exist.

Actually, the differences would be quite measurable, in a very specific and relevant way: Version 1 would be in one location, Version 2 would be in another. And (again, as far as we know) those two versions would not share experiences. These would be two discrete individuals.

If it's a matter of 'feeling better' that there's a Version 2, then bully for the Version 1 that feels better. But for all the stories-as-thought-experiment that I've read going one way, there's at least one going the other. (Algis Budrys's "Rogue Moon", for one example.)
posted by lodurr at 5:01 PM on July 19, 2009


Perhaps I can help you here. With respect to present day me, both future clone-me and future me-me are "future-mes." This is where you are getting confused. You're saying "will future me-me think of future clone-me as "me." To which, of course, the answer is "no." Because future me-me and future clone-me will have different lives and different selves. But with respect to present-day-me they are simply "alternate future selves"--in exactly the way that the present-day-me has a hypothetical "alternate self" who is "the me who didn't choose to go to Paris in senior year."
Oh, baloney. One of you is the original, and there's no getting around that fact. Whether the original knows he is the original or not -- and make no mistake, he might -- does not change that fact, nor does it change the fact that the original does not see through the copy's eyes, nor vice versa. They don't sense each other's thoughts; they don't feel each other's feelings; they might not even know each other exist.

This is true regardless of whether the original was dead at the time of creation of the copy.

There is a unique individual who is the original.

That individual, if dead, does not know that a new copy of him has been reconstituted.

I'm sorry, but this is just ridiculous. You're using "I" to mean something other than your own self-awareness and your own consciousness. That may be fine under some extremely strict and extremely handpicked definition, but that's not what anybody means when speaking of the sense of self.
posted by Flunkie at 5:02 PM on July 19, 2009


yoink, then I'm confused about your position, and I apologize for that.
posted by lodurr at 5:03 PM on July 19, 2009


That individual, if dead, does not know that a new copy of him has been reconstituted.

That's simply begging the question. But I see that you are determined to do that.
posted by yoink at 5:05 PM on July 19, 2009


amuseDetachment: You're basically saying that you can't program the soul. Which is a fine thing to say, but unfalsifiable, and therefore not very useful.
posted by lodurr at 5:06 PM on July 19, 2009


I'm sorry, but this is just ridiculous. You're using "I" to mean something other than your own self-awareness and your own consciousness. That may be fine under some extremely strict and extremely handpicked definition, but that's not what anybody means when speaking of the sense of self.

I'm sorry, but this is just ridiculous. You're using "I" to mean something other than your own self-awareness and your own consciousness.
posted by Bort at 5:07 PM on July 19, 2009 [1 favorite]


That individual, if dead, does not know that a new copy of him has been reconstituted.
That's simply begging the question. But I see that you are determined to do that.
What? You think that's circular reasoning?

I am not saying "the original, if dead, does not know that a new copy of him has been reconstituted because he doesn't know that a new copy of him has been reconstituted."

I'm saying "the original, if dead, does not know that a new copy of him has been reconstituted because he wouldn't even necessarily know it if he were alive."

That's hardly begging the question.
posted by Flunkie at 5:08 PM on July 19, 2009


I'm sorry, but this is just ridiculous. You're using "I" to mean something other than your own self-awareness and your own consciousness.
No I am not, Bort. Again, if you are alive when your clone is made, you and your clone do not share self-awareness, nor consciousness. He has one self-awareness; he is an "I". You have another. You are another "I".
posted by Flunkie at 5:10 PM on July 19, 2009


What? You think that's circular reasoning?

I am not saying "the original, if dead, does not know that a new copy of him has been reconstituted because he doesn't know that a new copy of him has been reconstituted."

I'm saying "the original, if dead, does not know that a new copy of him has been reconstituted because he wouldn't even necessarily know it if he were alive."

That's hardly begging the question.


Flunkie, the whole question is whether or not "the individual" can be rebooted. Those of us who think "the individual" is an emergent state rather than a particular bunch of wetware think that the "rebooted" self is the individual and therefore, once rebooted, will be aware that he "died" and got "rebooted." When you say "the individual, if dead, does not know that a new copy of him has been reconstituted" you are assuming the very thing that is in contention. That is the definition of "begging the question."
posted by yoink at 5:13 PM on July 19, 2009


When you say "the individual, if dead, does not know that a new copy of him has been reconstituted" you are assuming the very thing that is in contention.
Do you or do you not agree with me that the original individual, if alive, might not even know that a clone of him has been reconstituted?
posted by Flunkie at 5:14 PM on July 19, 2009


I'm saying that with all current knowledge, it is not possible to say that you can program in consciousness in any material (wet or silicon). If you saved your brain (Von Neumann-esque processing+memory) and duplicated it, there is no guarantee you would retain your sense of self; the result may have self-preservation and self-modification, but it's downright dogmatic to say otherwise. It's entirely possible that the result of a duplication would be 1 person (original self) which is conscious and 1 new duplicate which acts in the exact same way as the original but does not have a sense of self.
posted by amuseDetachment at 5:17 PM on July 19, 2009 [1 favorite]


amuseDetachment: If it is acting in the exact same way as the original, how could it not have a sense of self? (I note that you use different terms: Original self "is conscious", duplicate self "does not have a sense of self." Are you equating consciousness with a "sense of self", or are you talking about two different things?)

I don't see how you can make this argument without positing something like a soul. If it's not a soul, or some other essence, what is it that's missing?

Put another way: What exactly is this 'sense of self'? And why does 'acting in the same way' not cause it to come into existence? Does this sense of self arise from actions, or does it come about by some other means?
posted by lodurr at 5:27 PM on July 19, 2009


Those of us who think "the individual" is an emergent state rather than a particular bunch of wetware think that the "rebooted" self is the individual and therefore, once rebooted, will be aware that he "died" and got "rebooted."

I think "the individual" is an emergent phenomenon (not a state, though), and I don't see any mechanism whereby that would imply that a person who's "rebooted" would know he's been "rebooted." Quite the opposite, in fact: If he's been "rebooted", that would seem, by definition, to erase knowledge of rebooting.

Unless people have a status bit that gets flipped on reboot. Which is entirely possible, I suppose -- but non-obvious.
posted by lodurr at 5:31 PM on July 19, 2009


It also seems quite clear to me that the self being an emergent phenomenon would in no way preclude it being somehow tied to the wetware. In fact, again, I'd expect the opposite.
posted by lodurr at 5:34 PM on July 19, 2009


And this is why in my original statement, I was wondering whether this is a misunderstanding of the definition of consciousness, or whether (inflammatory and tongue-in-cheek) some don't recognize their own consciousness/don't have it.

I am treating consciousness as completely separate from actions; the fact that you are conscious does not necessitate your ability to take actions on your own behalf. I suspect that may be what's tripping up understanding of my argument. I am also not talking about asleep vs. awake.

Put it this way. Pretend that your brain's actions and decisions are completely separate from your ability to observe yourself (this is what I meant by sense of self). You'd just be watching a show in your own head. The hypothetical separation of your observations does not prevent your brain from acting and going about its business. It's only a hop and skip away from removing your consciousness entirely. This observational ability is entirely rooted in the sense of self, which is why I equated it with consciousness, as there is a "me" and "not me". Note how this is entirely distinct from computer code which can detect physical walls/boundaries/etc.

I am suggesting that this observational ability may be entirely disconnected from computational instructions, as I see no current way for it to be purely based on instructions which can be stored.

Soul is a really loaded word, so I'm not going to address that seriously.
posted by amuseDetachment at 5:43 PM on July 19, 2009


And that's the core of my argument, to argue that consciousness is an emergent phenomenon is a religious argument that has no basis in fact. It's just as likely as it is not.

It's handwave-y and no better than foo-foo hippie Quantum-blah-blah-blah, as it's logically implausible for anyone that has done any serious programming.
posted by amuseDetachment at 5:48 PM on July 19, 2009


How do you observe your brain thinking without thinking about observing your brain thinking?

IOW, it's a nonsensical question: Without a separate thinker, consciousness can't be separate from thought.

I think it's a basic assumption in discussions like this that consciousness is recursive in a very rich and complex sense. We might quibble about the way we describe it, but I doubt there's anyone on this thread who seriously thinks the brain is a Von Neumann machine (e.g. "instructions which can be stored").

People who are arguing that you can upload brains (I wouldn't be one of those, BTW) are NOT arguing that it will be easy or happen soon (I don't think they're arguing the latter, but I'll take correction on that). When yoink says something like "emergent state", I sincerely doubt he's talking about something you can easily save in the manner in which you'd dump your system's state to disk.
posted by lodurr at 6:03 PM on July 19, 2009


I am suggesting that this observational ability may be entirely disconnected from computational instructions

I'd say you are asserting this without any evidence.

it's logically implausible for anyone that has done any serious programming

It's logically plausible to me and I've done serious programming for over a decade.
posted by Bort at 6:07 PM on July 19, 2009


Ah, perhaps we don't disagree as much as I had thought then.

Note that I am making an implicit comment about the experiment in the original link up top: if they extended their modeling to the entire human brain, it may not be entirely the same as a real one, because the simulated one may not be conscious. The project above appears to run on the foundation that the brain is entirely computation+storage and therefore can be emulated on a computer, and I suspect that is entirely possible up until consciousness, where it gets really murky.
posted by amuseDetachment at 6:11 PM on July 19, 2009


Bort: Really? Explain how you can write code that can be conscious. Douglas Hofstadter's Godel Escher Bach was less than convincing.
posted by amuseDetachment at 6:13 PM on July 19, 2009


No, if you're talking about me, we disagree completely -- or at least, I have no idea what you mean by 'consciousness.' If by 'consciousness', you mean 'self-awareness', I don't see any reason that would not be part of any duplicated state.

If you mean that the brain won't necessarily be 'self-aware' if it's 'booted' up cold, without experiential data (a.k.a. memory) or sensory data, then I'd say "duh." Or if you start with an infant brain and "grow" it -- you probably wouldn't have something we'd conventionally think of as "self-awareness" then, either. But you would have the organism being "aware" of itself and its state -- that's how the body works, by being aware of its state.

I'm quite happy to admit that we don't know exactly how consciousness happens; I happen to believe it's not nearly as special as we think it is (though I'd also argue it's entirely -- well, largely -- what's driving me back to this thread again and again). But I don't see any reason to suppose that it's not a result of normal brain functions. And if that's the case, then it should be as duplicable as any other complex of brain functions.

IOW, if you can do it at all, you should be able to get consciousness. It may take some fiddling (which is kind of an icky thought on the ethical grounds), but AFAICS you should be able to get to it.
posted by lodurr at 6:20 PM on July 19, 2009


amuseDetachment: You seem to be saying there's something inherent about code that makes it impossible for it to produce consciousness. It would be helpful if you explained what that was. (And also if you remembered that "code" is only a metaphor, here: The kind of "code" we're talking about is nothing like what you or Bort write [unless Bort is a cognitive scientist].)
posted by lodurr at 6:22 PM on July 19, 2009


"don't see any reason that would be part of any duplicated state" => "don't see any reason that would not be part of any duplicated state"
posted by lodurr at 6:23 PM on July 19, 2009


Some perspective on this matter...
posted by lodurr at 6:35 PM on July 19, 2009


lodurr:
To understand my POV, I would disagree strongly with this statement:
If you mean that the brain won't necessarily be 'self-aware' if it's 'booted' up cold, without experiential data (a.k.a. memory) or sensory data, then I'd say "duh."

Consciousness is entirely possible without any experiential data/memory. It's what Buddhists shoot for when they're meditating or trying to be mindful. The body's sense of self may not be aware of its own body, but you would still be conscious.

While it may be possible to create a conscious being, I am incredibly skeptical that it can be stored and recreated purely in machine code. We then get the question of whether it would be possible to replicate real neural networks as machine code; it would be technically possible unless there was some kind of crazy quantum entanglement magic (or other phenomenon) going on. On the other hand, if we assume that it were possible to duplicate in every way into code (a firing at this level at X neuron will hit Y and Z at this level), then we get back to my question to Bort: how is it possible to make machine code conscious?
posted by amuseDetachment at 6:36 PM on July 19, 2009


Consciousness is not entirely possible without any experiential data/memory. As far as I've ever been concerned, Buddhists are mistaken about that.

I know a bunch of Buddhists. Wonderful people. I've done meditation, self-hypnosis, self-submergence, and I'm aware that high-level practitioners are doing very impressive things and can achieve states that appear to them to be a negation of consciousness. But to argue that what they're doing is the negation of being is simply mystical. It's unfalsifiable, un-testable. If you want to have that belief, you're welcome to it; but it's entirely separate from what's being discussed here.
posted by lodurr at 6:46 PM on July 19, 2009


Consciousness is not possible without memory. It's an inherently time-bound activity: You have to be conscious of something, and that has to happen in a time frame. Ergo, some form of memory is required. Ergo, some form of experience.

If someone claims to be "conscious" without being conscious-of, they are using the word in a nonsensical way.

There are mental states that can be achieved through intense practice, drug use, mental aberration, or accident which look very much like the negation of consciousness. But even in those states, there's no reason to suppose that there's no memory happening or being used. Similarly, you can achieve an altered state of consciousness -- something not within the normal range. Happens frequently. I'll wager many folks on this thread have experienced them. But they're still time-bound, and still involve consciousness-of.
posted by lodurr at 6:54 PM on July 19, 2009


Yes, the whole Buddhist comment is entirely untestable and foofy and completely unrelated, which is why I avoided bringing "the soul" and other examples into this.

We can disagree on whether consciousness can exist without experiential data/memory (neither of us can prove it). However, in any case I was primarily emphasizing a difference between consciousness and the neural instructions related to the self (e.g. self-preservation, visual self-recognition, etc.); they are not the same thing. Also, the claim that it's time-bound is not proven either.

However, my argument is that a claim like "an entire person (including consciousness) will be able to be stored and duplicated" is just as difficult to test and just as foofy. I'm just laying out how implausible that is as well.

In any case, I'm getting a little bit bored. While your arguments are convincing, I still don't see how it's possible to store and transmit consciousness; who knows what we'll discover, so I'll just leave it at that.
posted by amuseDetachment at 7:07 PM on July 19, 2009


Bort: Really? Explain how you can write code that can be conscious. Douglas Hofstadter's Godel Escher Bach was less than convincing.

Write code that emulates the neurons/brain to the precision needed for consciousness to emerge. But I now get that you don't think that that will necessarily do it. That's where our assumptions differ, I guess - I don't believe you can make that copy without also having consciousness come along. It is inherent in the structure of the brain/mind and can't be separated out (IMO - since I believe it emerges).
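
To make "code that emulates neurons" a bit concrete, here's the toy end of the scale: a minimal leaky integrate-and-fire network sketched in Python. To be clear, this is only an illustration of the general idea, not a brain simulator -- the neuron count, time constants, and random connectivity are all invented for the example, and real efforts like Blue Brain work at vastly greater fidelity, down to measured electrical and molecular detail.

```python
import random

# Toy leaky integrate-and-fire network. Every number here is invented for
# illustration; nothing is calibrated against real neurons.
N = 100                      # number of neurons
DT = 1.0                     # timestep, in ms
TAU = 20.0                   # membrane time constant, in ms
V_REST, V_THRESH, V_RESET = 0.0, 1.0, 0.0

# Random all-to-all connectivity (a real model would use measured wiring).
weights = [[random.gauss(0.0, 0.05) for _ in range(N)] for _ in range(N)]
v = [V_REST] * N

def step(v, external_input):
    """Advance one timestep; return the new potentials and who spiked."""
    spikes = [i for i in range(N) if v[i] >= V_THRESH]
    new_v = []
    for i in range(N):
        if i in spikes:
            new_v.append(V_RESET)                     # fired: reset
            continue
        leak = (V_REST - v[i]) * DT / TAU             # decay toward rest
        syn = sum(weights[j][i] for j in spikes)      # input from spiking neighbours
        new_v.append(v[i] + leak + syn + external_input[i])
    return new_v, spikes

for t in range(200):
    drive = [random.uniform(0.0, 0.12) for _ in range(N)]  # noisy external drive
    v, spikes = step(v, drive)
    if spikes:
        print("t=%d ms: %d neurons fired" % (t, len(spikes)))
```

Run it and you get scattered spike times; the point is just that "emulating neurons in code" is an ordinary, mechanical exercise. The open question in this thread is what happens, if anything, when you scale the fidelity all the way up.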
posted by Bort at 7:53 PM on July 19, 2009


On the other hand, if we assume that it were possible to duplicate in every way into code (a firing at this level at X neuron will hit Y and Z at this level), then we get back to my question to Bort how it is possible to make machine code conscious.

I'm saying that if you do code a brain at the right level (simulate with enough fidelity) then you've made machine code that is conscious. I'm saying that that is all we are. Our consciousness is a program run on wetware. Move it to hardware and it'll run just the same, with consciousness and all.
posted by Bort at 8:00 PM on July 19, 2009


This is an extraordinary claim that requires at least some sort of proof. I haven't seen any such proof. Is this an article of faith for you?
posted by Crabby Appleton at 8:17 PM on July 19, 2009


This is an extraordinary claim that requires at least some sort of proof.

Prove it's an extraordinary claim.

Next you'll be telling me I have some sort of extra-dimensional ghost thingy that is what contains my consciousness. Perhaps my brain is just some sort of transceiver to it?
posted by Bort at 8:36 PM on July 19, 2009


If I build a machine that screams "Ouch!" when I hit it with a hammer, is it feeling pain? Is there any ethical consequence of me hitting it? Of course not. This question of whether this proposed computer simulation of a human brain is conscious is just a much more complex version of this.

Sometimes I wonder whether people like Daniel Dennett, or some of the commentators on this thread who think consciousness can be "explained away" are really conscious in the same way I am, or if perhaps they are some sort of automata. (I'm saying this kiddingly!). But it's obvious to me, and yes this is an article of faith, that there is something extraordinarily special going on when it comes to consciousness. (And this is not "human exceptionalism", my cat is obviously conscious too).
posted by crazy_yeti at 8:40 PM on July 19, 2009


Now, in this episode of serious answers to flippant questions...

I've been following this thread with interest but it's all gotten a bit confusing. An example has occurred to me which I think has the potential to be clarifying (though you may disagree). Perhaps y'all (Bort, lodurr, amuseDetachment, yoink, Flunkie, et al) will indulge me by answering this question? It is this:

In Ep. 177, part 2, of The Simpsons, is Grandpa Simpson alive or dead? That is, having been separated from his human body and incorporated in a love testing machine on the wall of Moe's bar --- with memories, personality, and the querulous voice of Dan Castellaneta intact --- is the Lovematic Grandpa still Grandpa?

As far as I can tell, Flunkie seems to be arguing that a being, or "I" is inseparable from the physical body within which it exists. I am my body, and therefore the Lovematic Grandpa is not Grandpa, it's merely something like Grandpa. Whereas Yoink and others seem to be arguing that a being, or "I" consists of its memories and instincts and if that information can be preserved and embodied in a new physical form, then that being still exists. I am my mind, and the Lovematic Grandpa is Grandpa.

Yes/No/Maybe? Do I have this right? Would anybody care to quibble with the construction?
posted by Diablevert at 8:42 PM on July 19, 2009


Do I have this right?

I believe so, yes. And I would hold that the Lovematic Grandpa is Grandpa.
posted by Bort at 8:57 PM on July 19, 2009


If you can accept that "I" remain "me" even as I change then you should be able to accept that "I" can have two separate paths of development (me-me and clone-me) who remain versions of "me" even as they undergo divergent changes.

If you were one of them, would you experience anything that the other one experienced? In the fundamental sense of "being there", you can only experience one continuum - one person. The fact that you have the same DNA and the same memories doesn't have any impact on whether you experience that consciousness - think "Being John Malkovich" - the thing in your head that thinks it's there...

Copies are not originals. It doesn't matter that a=a in theory. In matter, there are two different places, two different empirical realities, two different lumps of flesh that have become self aware. All you end up with is a twin.
posted by mdn at 9:06 PM on July 19, 2009


I think "the individual" is an emergent phenomenon (not a state, though), and I don't see any mechanism whereby that would imply that a person who's "rebooted" would know he's been "rebooted." Quite the opposite, in fact: If he's been "rebooted", that would seem, by definition, to erase knowledge of rebooting.

Unless people have a status bit that gets flipped on reboot. Which is entirely possible, I suppose -- but non-obvious.


I'm assuming you wake up in the "rebooting facility" and finish whatever sentence you were in the midst of when you were "stored" and then say "hey, I thought you were about to store me?" and someone says "yeah, we did--you died and we just rebooted you." I'm not saying you burst instantly into life with the full knowledge of your rebootedness--but I'm struggling to think of a real world situation in which you are "stored" without your knowledge and "rebooted" with the entire world conspiring to keep the fact of your rebooting a secret. So, yes, the "rebooted" you is, in fact, aware of being rebooted.
posted by yoink at 9:19 PM on July 19, 2009


If you were one of them, would you experience anything that the other one experienced? In the fundamental sense of "being there", you can only experience one continuum - one person.

I have explicitly stated exactly that point over and over and over again. I must be writing with disappearing pixels or something.
posted by yoink at 9:24 PM on July 19, 2009


And that's the core of my argument, to argue that consciousness is an emergent phenomenon is a religious argument that has no basis in fact.

I know that some people like to seize on the term "emergent" in woo woo ways, but you should be aware it has a rigorous meaning which has no "mystical" implications whatsoever. To say that consciousness is an emergent phenomenon is not to say that it is not entirely deterministic, or to say that it cannot be rigorously described in scientific terms. It is simply to say that it is a result of purely physical processes in the body which cannot be well-described, functionally, simply in terms of those physical processes. It's more a comment on appropriate analytical levels than it is a statement of a particular kind of "causality."
posted by yoink at 9:31 PM on July 19, 2009


Prove it's an extraordinary claim.

The mind boggles.

I'll address this, but first I'd like to remind you that this isn't really the way it's supposed to work. You make a claim, you need to provide some proof. Not just vigorous handwaving, proof.

I realize, of course, that your claim is a common trope in science fiction. In science fiction, wild speculation and "extrapolation" are to be encouraged—indeed, they are the genre's raison d'être. But it's not science. It's science fiction. What's the opposite of "mundane"? Something like "extraordinary"?

It's an extraordinary claim because human-level intelligence (and consciousness) is unique in the universe, as far as we know. No one has ever been able to duplicate it in a machine (despite all the brash predictions of the AI weenies over the decades). Human exceptionalism? We are exceptional; that's just a brute fact that you can't wave away and stay in the realm of reality.

It's an extraordinary claim because not only has nobody done it, nobody has the slightest idea how to do it. If they did, they would have at least made a credible start, and so far, no one has.

It's an extraordinary claim because nobody knows enough about human-level intelligence to know what would be the lowest level of abstraction at which a simulation would have to operate in order to reproduce it. (Hell, nobody knows if it's even possible to simulate it.)

It's an extraordinary claim because nobody knows enough about human-level intelligence to know whether or not it can be implemented by the computational model known as a Turing machine. (And any realizable digital computer is really a finite state automaton, not a Turing machine with an infinite tape.) Nobody knows what computational model is required.

I could go on, but it's getting late. The bottom line is that you have no idea what you're talking about. What you're asserting is science fiction charlatanism trying to pass itself off as science. I wonder whether the true believers in the Rapture of the Geeks ever hold themselves to the same intellectual standards that they insist on for others.

Next you'll be telling me I have some sort of extra-dimensional ghost thingy that is what contains my consciousness. Perhaps my brain is just some sort of transceiver to it?

It's as good a hypothesis as any, considering the level of actual knowledge we have. But I won't be telling you anything. You obviously know so much more than I do.
posted by Crabby Appleton at 9:52 PM on July 19, 2009 [1 favorite]


Let's ignore the whole body problem and say you just duplicate your brain. Let's say it perfectly emulates you in every way, except we wrote it in computer code instead of neurons to make things especially simple. This brain would act in the exact same way as you would now. I would argue that this computer brain is no more conscious than your email client. It would act the same, but it wouldn't have that ineffable "I am here" to itself, it would just merely have instructions. The machine may be self-modifying, but it wouldn't be an observer. To argue that consciousness would be created purely out of machine instruction (whether it's wet or made out of silicon) is as much faith as any religion out there.
But the idea that your own brain is anything more than "instructions" embedded into a network of cells is also an article of faith.

Actually, the differences would be quite measurable, in a very specific and relevant way: Version 1 would be in one location, Version 2 would be in another.
Read the whole comment. I was talking about a copy where the original is destroyed (i.e. restore from backup). I addressed the 'two copies' issue in the next paragraph, and I said yes they would be measurably different, but that's not relevant to the 'restore from backup' thing.

I'm saying that with all current knowledge, it is not possible to say that you can program in consciousness in any material (wet or silicon).
Presumably, if we got to the point we could do it, then we would know it could be done. And how would we know if it could be done? Well, if we had a program that could pass a Turing test, for example.

And that's the core of my argument, to argue that consciousness is an emergent phenomenon is a religious argument that has no basis in fact. It's just as likely as it is not.
This is only true for people who consider "atheism" a religion. After all, if you believe that nothing but the material world exists, then everything you experience must be the result of emergent phenomena of the material world.

it's logically implausible for anyone that has done any serious programming.
Only if they suck at it.

---

Here's a thought experiment. Suppose you upload a copy of someone's brain into a computer. You plug in cameras and microphones to give it eyes and ears, and a speaker for a mouth so you can talk to it. You ask the program if it thinks it's conscious and it replies that it is. You can carry on a conversation with it just as you would with the person whose mind was uploaded.

You say it can't be experiencing consciousness, and it says it is. Do you think you would be able to convince it that it's not?
posted by delmoi at 9:58 PM on July 19, 2009


I'll address this, but first I'd like to remind you that this isn't really the way it's supposed to work. You make a claim, you need to provide some proof. Not just vigorous handwaving, proof.
Look, either we can or can't do this. Either claim is extraordinary, but people in this thread are saying that if we had a computer capable of simulating the brain it still wouldn't be conscious, and, tangentially, that if you could 'restore a person from backup' the person would still be dead. Those two claims posit incredible technology, and make statements about that hypothetical world.
posted by delmoi at 10:00 PM on July 19, 2009


I agree, mostly, with what you said, delmoi. I'm not sure why you said it. Are you telling me that I'm derailing the discussion? I thought it would be OK to address a claim made in this thread, even if it isn't the main claim. But I hate derails and don't want to be responsible for one, so I'm willing to bow out, if that's the case.
posted by Crabby Appleton at 10:10 PM on July 19, 2009


Don't worry, this thread is nowhere near the rails (it was about a project designed to simulate neuron cells in order to study how they work, to do things like test different drugs and see how brain diseases operate, not create a consciousness).

I don't see how you can say you mostly agree with me when you wrote this:
It's an extraordinary claim because nobody knows enough about human-level intelligence to know whether or not it can be implemented by the computational model known as a Turing machine. (And any realizable digital computer is really a finite state automaton, not a Turing machine with an infinite tape.) Nobody knows what computational model is required.
This is totally wrong. It's known for sure that any computing model can be modeled on a Turing machine. Since the brain is a computing device, we know for sure that it can be modeled on a Turing machine.

The only other possibility is that the brain is not a computing device, that it's something else. And that certainly is an extraordinary claim. If it's not a computing device, then what is it?

The extraordinary claim on the other side is that we will ever be able to 1) make chips fast enough and 2) write the software needed. Maybe we won't be able to.
posted by delmoi at 11:16 PM on July 19, 2009
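
To make the "modeled on a Turing machine" point concrete, here is a minimal sketch of a Turing-machine simulator in Python. The example machine and the function names are made up for illustration (they aren't from any library or from the Blue Brain project); the point is only that an ordinary program can step a transition table. Whether everything the brain does can be captured in such a table is the Church-Turing question being argued above.

    # A minimal Turing machine simulator, sketched to illustrate the point
    # above: an ordinary digital computer has no trouble stepping a Turing
    # machine's transition table. The example machine is illustrative.

    BLANK = '_'

    def run_tm(transitions, tape, state='start', halt='halt', max_steps=10000):
        """Step a one-tape Turing machine until it halts or runs out of steps."""
        tape = dict(enumerate(tape))   # sparse tape: position -> symbol
        head = 0
        for _ in range(max_steps):
            if state == halt:
                break
            symbol = tape.get(head, BLANK)
            state, new_symbol, move = transitions[(state, symbol)]
            tape[head] = new_symbol
            head += 1 if move == 'R' else -1
        lo, hi = min(tape), max(tape)
        return ''.join(tape.get(i, BLANK) for i in range(lo, hi + 1))

    # Example machine: walk right, flipping every bit, then halt at the blank.
    flip_bits = {
        ('start', '0'): ('start', '1', 'R'),
        ('start', '1'): ('start', '0', 'R'),
        ('start', BLANK): ('halt', BLANK, 'L'),
    }

    print(run_tm(flip_bits, '10110'))   # prints 01001_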


I only meant that I mostly agreed with what you said in the immediately preceding comment.

As for the Turing machine stuff, it looks like you're assuming that the brain is computing effectively computable functions and applying the Church-Turing thesis. I don't think anyone's established that human consciousness is an effectively computable function, or even that the Church-Turing thesis is true. Turing machines with oracles are posited in computability theory. Who knows what kind of oracle would be required to emulate human consciousness? It's possible that human consciousness depends on non-computable functions. Nobody knows. This is not an extraordinary claim. (If I claimed that human consciousness does in fact depend on non-computable functions, that would be an extraordinary claim and, of course, I can't prove it.) But I think I'm on pretty solid ground in claiming that we don't know.
posted by Crabby Appleton at 11:39 PM on July 19, 2009
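
For readers wondering what a "non-computable function" looks like, the textbook example is the halting function, and the usual diagonal argument fits in a few lines of Python. This is only an illustrative sketch -- the whole point is that the hypothetical halts() below cannot actually be written -- and an "oracle machine" in computability theory is just a Turing machine handed such an answer for free.

    # Sketch of why the halting function is non-computable.
    # Suppose someone handed us halts(f), which decides whether calling the
    # zero-argument function f would eventually return. (It cannot exist.)

    def halts(f):
        """Hypothetical oracle: True iff f() eventually halts."""
        raise NotImplementedError("provably impossible in general")

    def contrary():
        # Ask the oracle about ourselves, then do the opposite of its answer.
        if halts(contrary):
            while True:      # oracle said "halts", so loop forever
                pass
        else:
            return           # oracle said "loops forever", so halt at once

    # Whichever answer halts(contrary) gave, it would be wrong -- the
    # contradiction that makes the halting function non-computable.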


So, yes, the "rebooted" you is, in fact, aware of being rebooted.

But that's a silly example! Why did you state it as though this were information that were available to the individual intrinsically, when it's actually something that's contingent on experiential data?

Anyway, it ought to be easy for you to think of situations where the "rebooted" individual had no expectation of being copied: Catastrophic injury, where a reboot is remedial; unwilling duplication (which might well go along with substantial pre-duplication conditioning) (e.g. a 'clone army' -- wildly SF'nal, sure, but considering what we humans do to other humans, not that far fetched if we assume it's possible); general malice or experimentation.
posted by lodurr at 5:08 AM on July 20, 2009


Yes/No/Maybe? Do I have this right? Would anybody care to quibble with the construction?

As you construct the scenario -- that is, assuming it's possible to separate a mind from its meat -- it seems clear to me that the Grandpa in the machine is a person. Whether it's "Grandpa" or not in a subjective sense (i.e., to Grandpa, Lovematic Grandpa and the people around him/it) is something only Lovematic Grandpa and the people it interacts with can have a real opinion on -- in a deterministic sense, it's definitely not.

That said, while I think it's vaguely plausible that you could instantiate a human mind apart from its meat, I don't think there's any meaningful sense in which you could claim that the person is going to have a consistent, continuous self. We are fundamentally meat, after all; it's fundamental to our experience of reality. Take that away, and you would be very different. This is where "uploaded brain" fictional scenarios fall down, IMO: The uploaded individual (at least in the recent, popular versions a la Doctorow et al) acts and talks just like a normal person. IMO that's preposterous. C. L. Moore produced a much more realistic scenario for what humanity would be like (see her short story "No Woman Born").
posted by lodurr at 5:17 AM on July 20, 2009


delmoi, my understanding was that you can't model a cellular automaton using a Turing machine. Do I have that wrong? (There was a lot of talk at one point about the mind being a cellular automaton.)
posted by lodurr at 5:19 AM on July 20, 2009


Read the whole comment. I was talking about a copy where the original is destroyed (i.e. restore from backup). I addressed the 'two copies' issue in the next paragraph, and I said yes they would be measurably different, but that's not relevant to the 'restore from backup' thing.

I apologize. I tend to ignore "destroy copy" clauses because I've never understood what contribution the destroyed copy scenario makes to these discussions. It's still the same problem -- you just killed one of the copies. (I think the Nolan brothers did a good job illustrating the degree to which that scenario obfuscates the arguments.)
posted by lodurr at 5:22 AM on July 20, 2009


You say it can't be experiencing consciousness, and it says it is. Do you think you would be able to convince it that it's not?

This is a powerful formulation. Maybe you could think about honing it down to a koan and propagating it. There's a lot of interesting stuff you have to think about if you consider it seriously.
posted by lodurr at 5:26 AM on July 20, 2009


All the cellular automata I know of run on digital computers, which are (modulo the infinite tape) Turing-equivalent.
posted by Crabby Appleton at 6:13 AM on July 20, 2009
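
To make that concrete: an elementary cellular automaton such as Rule 110, which has been proved Turing-complete, is a few lines of ordinary code, so being "a cellular automaton" would not by itself put the mind beyond a Turing machine's reach. A minimal sketch (illustrative only), in Python:

    # Rule 110, a one-dimensional cellular automaton known to be
    # Turing-complete, stepped on a plain digital computer.

    RULE = 110  # the update rule, encoded as an 8-bit lookup table

    def step(cells):
        """Apply Rule 110 to a row of 0/1 cells (edges treated as 0)."""
        padded = [0] + cells + [0]
        return [
            (RULE >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)
        ]

    row = [0] * 30 + [1]           # a single live cell on the right
    for _ in range(15):
        print(''.join('#' if c else '.' for c in row))
        row = step(row)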


while I think it's vaguely plausible that you could instantiate a human mind apart from its meat, I don't think there's any meaningful sense in which you could claim that the person is going to have a consistent, continuous self.

The scenario I have trouble deciding about is this: posit the existence of a mechanical device which performs the same essential functions as a neuron. Remove one neuron from a living brain, replace it with one of those machines, connected to the rest of the brain exactly the same way the original neuron was. Repeat over time until all neurons have been replaced with machine equivalents. At what point is continuity of experience lost?

I have mdn to thank for clarifying much of my thinking about this subject, in that long ago thread she linked to above. So, seven years late: thank you, mdn.
posted by ook at 7:39 AM on July 20, 2009
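
The simplest textbook stand-in for ook's hypothetical neuron-replacing device is a leaky integrate-and-fire unit: it accumulates input current, leaks back toward a resting potential, and emits a spike when it crosses a threshold. The sketch below is far cruder than the multi-compartment neuron models a project like Blue Brain uses, and the parameter values are purely illustrative, but it shows the kind of component the thought experiment posits.

    # A leaky integrate-and-fire "neuron" -- a toy model, not Blue Brain's.
    # Parameters (membrane time constant, thresholds, etc.) are illustrative.

    def lif_neuron(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                   v_thresh=-50.0, v_reset=-70.0, r_m=10.0):
        """Integrate a current trace (one value per time step); return spike times."""
        v = v_rest
        spikes = []
        for t, i_in in enumerate(input_current):
            # Membrane potential leaks toward rest and is pushed by the input.
            v += dt / tau * (v_rest - v + r_m * i_in)
            if v >= v_thresh:          # threshold crossed: fire and reset
                spikes.append(t)
                v = v_reset
        return spikes

    # A constant drive of 2.0 (arbitrary units) makes the unit fire periodically.
    print(lif_neuron([2.0] * 200))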


So as I see it we have two suppositions:
  1. The human brain is capable of being implemented as a Turing machine
  2. The human brain is not capable of being implemented as a Turing machine
Note not capable is used in its strongest possible sense here. Not implausible, not inefficient, not unlikely to work well, but fundamentally impossible. Halting problem impossible. Violation of thermodynamics impossible.

I posit that Occam's razor sides unambiguously on the side of camp #1 and it would require some truly staggering new science for that to change.

Thus far the universe has submitted to attempts to model it. We've not come across any processes in the natural world that are incapable of being simulated. We can't yet actually perform many simulations, but that's because we're not clever or capable enough yet, not because the math tells us that we'll never be able to.

So yes, strictly we don't know that the mind, consciousness, etc will also submit to the same process because we haven't done it; nevertheless I think it's a mistake to treat the two options as equally probable.
posted by Skorgu at 7:46 AM on July 20, 2009


ook: I would say "never lost", but then, I don't think the case is sufficiently detailed: It doesn't account for the body. Though I'm not sure "continuity of experience" is the problem I have with the case; just that you're focusing on the brain.

Discussions like this bother me in part because they assume a clearer boundary between the body and the brain than I think exists. Our "brain" in the prosaic sense is the organ in our skulls; but in a functional sense, it extends throughout our body and has both digital and analog (ductless glands, etc.) aspects.
posted by lodurr at 7:52 AM on July 20, 2009


Skorgu, on the contrary, I think the overwhelming evidence supports case 3: We don't know, and need to do empirical research to figure it out.
posted by lodurr at 7:54 AM on July 20, 2009


Whether it's "Grandpa" or not in a subjective sense (i.e., to Grandpa, Lovematic Grandpa and the people around him/it) is something only Lovematic Grandpa and the people it interacts with can have a real opinion on -- in a deterministic sense, it's definitely not.

See, this is interesting. The other side of this fuzzy scale is brain death --- the physical body remains but consciousness is lost. In this case, I would argue that the person is dead. (It is difficult, perhaps impossible, to prove definitively that consciousness has been lost, but that's another matter.) You can have all the molecules, but without the information up and running the person is lost. Indeed, that's all death means --- the body without its animating consciousness.

We are fundamentally meat, after all; it's fundamental to our experience of reality. Take that away, and you would be very different.

Yeah, but I would be from the moment when my mind was embodied in a different form. My experiences would be different from that moment forward; one could argue, I think, that I had become something somewhat inhuman. But I'd still say I was me. After all, if I were to suffer some sort of horrible Johnny Got His Gun type accident in which I became blind, deaf, dumb and limbless, my experience of the world would be very very different from a normal human's from that point on. But I don't think I would cease to be me. And I don't see a functional difference between a conscious mind remaining in a human body which has been utterly cut off from its senses and a conscious mind preserved in a computer.
posted by Diablevert at 9:08 AM on July 20, 2009


Discussions like this bother me in part because they assume a clearer boundary between the body and the brain than I think exists.

A fair point. One could extend the thought experiment to include replacement of whatever other parts of the body are necessary, though; as long as we're inventing neuron replacements, coming up with a gland or two ought to be a snap, yeah? (I'm in total agreement with you that the inner experience of someone undergoing the process would change drastically, and probably rapidly cease to be recognizably "human" -- but I think that continuity of experience is a crucial point: I'm a completely different person than I was twenty years ago, but I'm still the same consciousness. The only thing I can see that explains this is the continuity of experience from then to now.)

Mostly I think the example is useful because it sidesteps many of the difficult questions about copies, backups, etc. -- all interesting questions in their own right, but subsidiary to "does mind == meat" (because if the answer to that is "yes", then all the other questions are irrelevant.)
posted by ook at 9:21 AM on July 20, 2009


But that's a silly example! Why did you state it as though this were information that were available to the individual intrinsically, when it's actually something that's contingent on experiential data?

Lodurr--I didn't "state it as though this were information that were available to the individual intrinsically"; you simply assumed that that was what I meant. I didn't mean that, and I didn't say it.

Anyway, it ought to be easy for you to think of situations where the "rebooted" individual had no expectation of being copied: Catastrophic injury, where a reboot is remedial; unwilling duplication (which might well go along with substantial pre-duplication conditioning) (e.g. a 'clone army' -- wildly SF'nal, sure, but considering what we humans do to other humans, not that far fetched if we assume it's possible); general malice or experimentation.

These scenarios are utterly irrelevant to my argument. Nothing I have said in any way relies upon the rebooted person being aware of being rebooted. I only said that they would have that awareness because it seemed to me the most plausible scenario. If they don't, fine. It doesn't change a thing.

I think you must be getting my posts mixed up with someone else's. You certainly have not understood the position I'm trying to outline. I find this whole conversation bizarre. I guess it's too much to ask you to reread my actual posts, but until you do there's really not much point in my continuing to try to debate with you--you've got such a completely upside-down version of my argument lodged in your mind that I simply don't know where to begin to try to straighten you out.
posted by yoink at 9:35 AM on July 20, 2009


Diablevert: as for your Grandpa Simpson example. To me it is obviously Grandpa Simpson whether he is in the meatware or not. I expect that in real life if we were to encounter a scenario like this even the ones who insist it isn't Grandpa Simpson in theory would readily accept that it was in practice.

Let me expand on this with a series of hypotheticals.

Think of someone you know very well. I'm going to say "your brother," but you can substitute father, mother, lover--whatever.

When you meet your brother after he's been absent for a while, how do you know it's still "him" and not someone else? Well, first off he looks a certain way (meatware); he has certain personality traits; he has certain kinds of knowledge that only he would know.

O.K., say your brother goes to war and is badly injured in a bomb blast. You don't get to see him until you visit him in hospital. He is unrecognizable. He talks through a voice-synthesizer; he has lost his arms and legs; his body is a mass of skin grafts; his face swathed in bandages. But his mind is 100% and he is able to converse freely with you. Do you need a DNA sample before you accept that this person is your brother? Of course not. Ten minutes of conversation with this person whose "meatware" is entirely unrecognizable to you, whose voice is unrecognizable, and you'll be as sure as you ever could possibly be that this "is" your brother.

O.K., so you're whisked away with your brother on spaceship to an alien planet. You get separated and then you encounter a robot. The robot says to you "Billy, it's me, Jeff, your brother! The aliens transferred my mind to this robot! Help!" How long do you think the conversation would need to be before you call the robot "Jeff" quite unselfconsciously--assuming that the way it talks to you is exactly the way you would expect your brother Jeff to talk and that it has all the knowledge you expect your brother Jeff to have?

(Incidentally: that's why I disagree with Lodurr about the meatware being essential to identity. While it's true that a "mind in a box" might go insane or be unable to think without certain kinds of sensory input, it seems to me that providing that sensory input (whether real or simulated) is trivial compared to the problem of creating the "mind" in the first place. Heck, we can already provide our meatware brains with mechanical sensory inputs (camera-eyes etc.).)
posted by yoink at 9:47 AM on July 20, 2009


These scenarios are utterly irrelevant to my argument. ...
you've got such a completely upside-down version of my argument lodged in your mind that I simply don't know where to begin to try to straighten you out.


Well, you could begin by not assuming I'm speaking to your whole argument. I was speaking very specifically to one part of it that seemed to me to serve no purpose but obfuscating the rest of the argument.
posted by lodurr at 10:06 AM on July 20, 2009


Well, you could begin by not assuming I'm speaking to your whole argument.

Lodurr, if you understood my whole argument, you couldn't possibly have made the error of thinking that I was claiming "intrinsic" knowledge of being rebooted for the person who is rebooted. In other words, the "parts" of my argument that you keep misconstruing are parts that you couldn't possibly misconstrue if you actually understood the whole argument (and, in many cases, if you just read what I actually wrote--as, for example, when I wrote quite explicitly that "You're saying "will future me-me think of future clone-me as "me." To which, of course, the answer is "no." and you roared in to tell me I was wrong wrong wrong because I should have said that the answer was, well, "no.")
posted by yoink at 10:20 AM on July 20, 2009


Isn't all this just destroying a straw clone? Is there anybody here who is arguing that if you are cloned "at the atomic level" that you and your clone continue to be "one consciousness" forever? Isn't the argument simply that you and your clone start from a condition of absolute identicalness and then (rapidly) diverge.

So, as long as there's a divergence, they are not the same consciousness once they've been "rebooted", right?

I guess my issue with your arguments as they come across is your distaste for the subjective. The problem is that consciousness is and must be subjective; that is the very definition of subjective, of consciousness. It is the subject's experience. "consciousness" from an objective point of view is not consciousness; it's only intelligence or relatability or something.

so to say that the rebooted consciousness is the "same" one as the one that died, even though if they both lived they would not be the same, just dismisses the whole idea of subjective experience. If you had a clone made, would you be willing to have yourself euthanized because the clone was going to do your job? The clone may think he's you, but who cares? If you're euthanized, your subjective experience still ends. You just created an identical twin who had copies of your memories.
posted by mdn at 10:21 AM on July 20, 2009


So, as long as there's a divergence, they are not the same consciousness once they've been "rebooted", right?

I'm not sure who "they" is in this sentence. If you mean "prereboot-me" and "postreboot-me" (i.e. I get a copy of me made, I die, the copy is booted up), then no, not right. As soon as I'm "rebooted" I begin to change, of course, just as we all change day by day. But as long as you think you'll still be "you" tomorrow--despite the fact that the "tomorrow you" will not be identical to the "today you"--then the "pre-reboot me" and the "post-reboot me" will still be the same person.

On the other hand, if you mean a scenario in which the copy is booted up while the original still lives, then right, yes, from the second that the reboot occurs, the two beings (copy and original) begin to diverge and are, thus, "different" people. They will both, however, rightly refer to all versions of themselves prior to the moment of copying (not the moment of rebooting, necessarily) as past versions of "themselves." That is, they will be able to say with equal truth "I have always loved chocolate" or "I once visited Rome" in reference to past character traits and experiences of the pre-copy-me. On the other hand, if post-reboot-copy-me visits Rome while post-reboot-original-me does not, original-me will not be able truthfully to say "I am now in Rome" while sitting in Hoboken.

If you had a clone made, would you be willing to have yourself euthanized because the clone was going to do your job?

Ask yourself this: if you suddenly found yourself in the Star Trek universe, would you use the transporter? If your answer is "yes" then you are saying that you would be happy to destroy yourself and have an instantaneous recreated clone-self made. I, too, would be happy to do that. If you answer "no" (and not because of the obvious risk of transporter malfunction, but because of some inherent objection to the loss of "self" involved) then I think your objections are metaphysical and not subject to rational argument.

If, on the other hand, the point of your question is "would you have a clone made, let that clone live for a while, then kill yourself because your clone is, after all, 'still you'" then you're still misunderstanding the point I keep saying over and over again. Once your "clone self" and you are both living then with regard to each other you are NOT (NOT NOT NOT--how many times do I have to say this?) the same person.

As to what you say about my discounting of subjectivity, I have no idea what you're referencing. I will note that I find it pretty funny that in this same thread I've had Lodurr on my tail for privileging subjectivity and now you complaining that I'm discounting it. As I say, I think you guys are getting some of the participants in this discussion mixed up with each other.
posted by yoink at 10:42 AM on July 20, 2009


Skorgu writes:
So as I see it we have two suppositions:
1. The human brain is capable of being implemented as a Turing machine
2. The human brain is not capable of being implemented as a Turing machine
Note not capable is used in its strongest possible sense here. Not implausible, not inefficient, not unlikely to work well, but fundamentally impossible. Halting problem impossible. Violation of thermodynamics impossible.
I agree with that.
I posit that Occam's razor sides unambiguously on the side of camp #1 and it would require some truly staggering new science for that to change.
I wish I knew where your confidence comes from. Certainly John Lucas would disagree with you. He claims to have a proof that a Turing machine cannot represent a human mathematician. I can't vouch for his proof; my computability theory skills are pretty rusty. But the man has fairly impressive credentials. You might want to take a look at his proof.

Also, keep in mind that Occam's Razor is just a heuristic. I doubt it's sharp enough to slice such a fundamental problem.
Thus far the universe has submitted to attempts to model it. We've not come across any processes in the natural world that are incapable of being simulated.
What about true random number generation? A Turing machine can only produce sequences of pseudo-random numbers. Pseudo-random number sequences are inherently predictable, and eventually repeat. But a physical quantum process, such as the timing of radioactive decay or thermal noise in a Zener diode, can produce true random numbers. True random number sequences are unpredictable. Cryptologists much prefer to use true random numbers. So there's a counter-example. I don't know whether it's relevant to simulating human-level intelligence, but it might be. And if there's one counter-example, I wouldn't be surprised if there are others.
So yes, strictly we don't know that the mind, consciousness, etc will also submit to the same process because we haven't done it; nevertheless I think it's a mistake to treat the two options as equally probable.
I don't believe that a Turing machine is powerful enough to do what we do, so I don't agree with you with regard to your assignment of probabilities. But, for what it's worth, as a research program I think it's reasonable to take it as a hypothesis and see what can be done. I agree with lodurr that we need to do empirical research to find out.

After all, I'm just trying to be a good skeptic here.
posted by Crabby Appleton at 10:52 AM on July 20, 2009
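
On the pseudo-random versus true-random point above: a deterministic generator's output is fixed entirely by its seed and eventually cycles, which a linear congruential generator with deliberately tiny state makes easy to see. Whether any of this matters for consciousness is exactly what is in dispute here; the sketch below (illustrative parameters) only shows the distinction itself.

    # A toy pseudo-random generator versus an OS entropy source.
    import os

    def lcg(seed, a=21, c=7, m=64):
        """Linear congruential generator with a deliberately tiny modulus."""
        state = seed
        while True:
            state = (a * state + c) % m   # next value depends only on the last
            yield state

    gen = lcg(seed=1)
    first = [next(gen) for _ in range(64)]
    again = [next(gen) for _ in range(64)]
    print(first == again)        # True: the sequence has already wrapped around

    # By contrast, os.urandom pulls from the operating system's entropy pool,
    # seeded by physical noise rather than computed from prior state.
    print(os.urandom(8).hex())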


Once your "clone self" and you are both living then with regard to each other you are NOT (NOT NOT NOT--how many times do I have to say this?) the same person.

This is very hard to understand - I'm sorry to keep making you repeat yourself - but you seem to want to claim that yoink and yoink II will be the same consciousness as long as yoink is killed immediately, whereas if he is kept alive, yoink II will be a separate consciousness. So the mind of yoink II depends on whether or not yoink is kept alive! Doesn't that seem ridiculous to you? How would you even know if the original were alive? What happens if they are both kept alive for 10 seconds? What about 10 microseconds? How do you designate whether a new consciousness is born or the old one continues in a new place?

Ask yourself this: if you suddenly found yourself in the Star Trek universe, would you use the transporter? If your answer is "yes" then you are saying that you would be happy to destroy yourself and have an instantaneous recreated clone-self made.

This is a completely irrelevant question. There is no such thing as a star trek universe, and I have no idea how transporters could possibly work. If it were destruction and clone self, then you could be transported without destroying the original, so you could just send clones down to the planet, but not leave the ship. If you did that, you would not experience both lives, and if you had yourself simultaneously destroyed, that would have no effect on who lives the life of the clone. So if transporters are just making clones, then yes, it sure is dumb for them to kill themselves while the clones are being formed, and I would not do that unless I were suicidal.

As to what you say about my discounting of subjectivity, I have no idea what you're referencing

I was thinking of this attempt to define what consciousness is:

Now, we could say "well, let's get rid of purely subjective knowledge--that's of no use to science." Fair enough. If we're talking solely about what is available to scientific study, how do we determine whether a human being is "fully conscious"? Well, they have to be able to communicate with us, right? They have to be able to answer questions about their current state, their past states, their future intentions and desires, right? Fine--I'm perfectly happy with that as a definition of "consciousness": "consciousness is the ability to behave in the ways we expect a fully-conscious human being to behave."

To me, once you are looking at it from the outside, you are equating twins - saying they're the same person because they look the same and respond the same way. But the twins don't think they're the same as each other - they each have their own subjective experience.
posted by mdn at 11:01 AM on July 20, 2009


But as long as you think you'll still be "you" tomorrow--despite the fact that the "tomorrow you" will not be identical to the "today you" then the "pre-reboot me" and the "post-reboot me" will still be the same person.

But there's no continuity of experience, except from the perspective of the post-reboot you. You're quite clear on that. So they're only "the same person" from the perspective of how they are treated -- by themselves, and by others. They are not the same person physically: They are made of different matter, one replaces the other.

This is why the original-deletion scenario confuses more than it clarifies: It hides the obvious fact that these are different and unique people. (They are unique: They occupy different space and are comprised of different matter.) If they both still existed, it would be nonsensical to say they're both "you" -- only you would be you, and you2 would be someone else who happens to share all your experiences up to point X.

IOW, you're only "you" in the deletion scenario because there's no better match. That may actually be fine for your aunt Margaret, but it's not very rigorous.

You pose an interesting question about the transporter. If I understand your position correctly, you are claiming that you would happily murder yourself -- annihilate yourself, actually -- if you knew that there was a sufficiently high probability that an instance of a being extremely similar to you would come into existence shortly thereafter, somewhere else, with all your memories intact. (There really can't be any doubt that in the current Euro-American ethos, what you'd be doing is murdering one self in order to instantiate another.)

I don't see that as a merely metaphysical issue -- it's a pretty clear ethical issue, and it seems to me to be pretty clearly vulnerable to logic, at least as that logic works within our current ethos. It seems clear to me that in using the transporter, you are annihilating an existing sentient organism to replace it with another. That's just how the machine works. Think multiple-Rikers, if you want to stick with a Trek analogy. (Though, admittedly, multiple Rikers might be a good reason to keep the thing working as designed.*)
--
*If Jonathan Frakes reads this: It's only because you played such a good jerk all those years....
posted by lodurr at 11:03 AM on July 20, 2009


That Lucas (and Penrose) argument has been refuted:
[Penrose's] reasoning is that, while a computer operating within the fixed formal system F can't prove G(F), a human can see its truth, and therefore humans must have mental capabilities beyond those of computers.

This argument isn't new (it goes back at least to John Lucas in 1961), and logicians and computer scientists have pointed out a major flaw in it. This is that human mathematicians don't use any consistent formal system such as F: they rely on intuition, and they frequently make mistakes. If we grant a computer this same liberty to make mistakes, then it need not operate strictly within F, and there's nothing paradoxical about it being able to 'see' the truth of G(F).
posted by Bort at 11:29 AM on July 20, 2009


That's quite a refutation. Can you give me a pointer to the code to give a computer intuition? I could sure use that in a program I'm working on.
posted by Crabby Appleton at 11:47 AM on July 20, 2009


What about true random number generation?

If you really care, give your Turing machine one of these, though how randomness could be required for consciousness is beyond me - perhaps you have a theory?

So there's a counter-example. I don't know whether it's relevant to simulating human-level intelligence, but it might be. And if there's one counter-example, I wouldn't be surprised if there are others. ... I don't believe that a Turing machine is powerful enough to do what we do

Oh, I see now. You don't believe we are Turing machines. You have no real reasons or evidence for this belief, but you wish to throw up "counter-examples" (that you "don't know whether it's relevant") with, I guess, the goal of showing that it really takes faith to believe either way and that all beliefs are equally valid.

That's quite a refutation. Can you give me a pointer to the code to give a computer intuition? I could sure use that in a program I'm working on

See, now you're just being willfully obtuse.
posted by Bort at 11:51 AM on July 20, 2009


Right off the bat, absolutely more research is required.

On preview someone linked the Penrose criticism saving me a paragraph. Yay!

I don't see the relevance of random numbers honestly, as long as we can recreate the distribution faithfully it has no effect on other kinds of simulation.

Again I'm not saying that there is absolutely no chance that there will be something that precludes us from simulating consciousness, I simply don't see the point in assuming some as-yet undetected but insurmountable obstacle ahead.

In short, why should consciousness be different than everything else? And to be clear I don't consider qualia to be effective arguments in either direction; human intuitions, 'thoughts' and experiences are breathtakingly irrational.
posted by Skorgu at 12:33 PM on July 20, 2009


Skorgu, I was being picky, I suppose; in terms of what my old AI prof used to call "broad logical possibility", I agree with you, but I'm hung up on how the meat is integrated. It's not that I believe in essences; it's just that I think extrication from the meat might be a sufficiently touchy problem that what we should be talking about is not whether we can instantiate people inside machines, but rather how much like a meat-human those people have to be before we call them 'people.'
posted by lodurr at 12:46 PM on July 20, 2009


In short, why should consciousness be different than everything else?
But it so clearly *is*. There's nothing in physics to explain subjectivity. How come there is this privileged point of view, this "headless body" I call "me" while everybody else is "you" and has a head??
posted by crazy_yeti at 12:50 PM on July 20, 2009


It's not clear to me, crazy yeti.

Not sure what you mean about there being 'nothing in physics to explain subjectivity.' I thought that's what the Uncertainty Principle was about. But that would be really getting off track...

As for your question: I've been hearing that question ('How come there is this privileged point of view...?') for many years, and I've never thought it was a very interesting question. All this POV recursion -- the importance of it is an illusion. We think it's important because we're at the center of our world.
posted by lodurr at 1:29 PM on July 20, 2009


I don't think we disagree, lodurr, and well said. There are ethical, moral and technical quandaries aplenty once you go down the road of everything being meat. I just happen to think we'll be further down that road a lot sooner than we realize.

Personally I think if it can pass the same intelligence tests as a $foo it should be given the same standing as a $foo, at least to start. We may acquire better understanding of the cognitive functionality of things like suffering and thus be able to bend that rule in one direction or other but for the moment a function-based approach is the only one that lets me sleep at night.

The problem with "it so clearly is" is that there are hundreds of things that violate our intuition. These two squares are obviously different shades, alcohol obviously affects your behavior, and we can obviously tell what a good deal is just to name a few. Given how thoroughly incompetent our intuition is in other areas I don't find it persuasive here.
posted by Skorgu at 1:42 PM on July 20, 2009


lodurr: It's very hard to put what I'm trying to say into words. There are a whole bunch of minds, there are a whole bunch of reference frames, and according to all I know about physics (which is a fair amount), they should all be related via equivalence transformations. Yet, of all the 6 billion individuals, I have privileged access to the sense data coming in through one particular pair of eyes, one pair of ears - the symmetry between individuals is somehow broken because only one of these individuals is *me*. How does this symmetry get broken??? To me, this is a profound mystery, and *almost* pushes me in the direction of religion - it would be easier for me to understand this in terms of the concept of "soul". I've wondered about this my whole life, and I don't expect to ever understand it, but it is truly amazing to me. A few other people I've talked to feel the same way, while others are satisfied believing we are somehow computing machines, and that consciousness is just an emergent phenomenon, or even an illusion. I find these explanations very shallow and unsatisfying, and continue on in my doomed quest to understand...
posted by crazy_yeti at 2:12 PM on July 20, 2009


"But there's no continuity of experience, except from the perspective of the post-reboot you. You're quite clear on that. So they're only "the same person" from the perspective of how they are treated -- by themselves, and by others. They are not the same person physically: They are made of different matter, one replaces the other. "

I sleep and I age. Every night, my continuous experience of consciousness is interrupted by a period of unconsciousness, yet when I awaken I do not consider myself to be a different person. Further, every day the cells which comprise my body age, decay, and are removed and replaced by new cells. (I heard a factoid that the whole process takes seven years, but that could be bullshit.) The matter that currently comprises my body is not the same matter that comprised it when I was born (couldn't be, I'm a lot bigger now) and it is not the matter that will comprise my body when I die, cross fingers that I get me three score and ten. If in some far future, science is able to replace a damaged brain with an artificial one made of silicon, keeping one's memories intact, then it seems to me the transplant recipient would be the same person, just as they would if they'd received an artificial heart.
posted by Diablevert at 2:25 PM on July 20, 2009


The problem with "it so clearly is" is that there are hundreds of things that violate our intuition. These two squares are obviously different shades, alcohol obviously affects your behavior, and we can obviously tell what a good deal is just to name a few. Given how thoroughly incompetent our intuition is in other areas I don't find it persuasive here.

Word. This I think gets at something that I've been trying to clumsily elucidate --- I don't think yoink and I disagree, really, about consciousness. (E.g., it's impossible to objectively verify and so we basically infer its existence in other humans, and practically speaking, would and ought to do the same if we ever managed to create an artificial intelligence that was apparently equal to a human's.) However, I suppose the thing that still gives me pause is that while I feel perfectly fine about taking the leap of faith that other humans are conscious just as I am, a human-created artificially intelligent being is fundamentally different from a human such that I am not sure whether such an inference is justified. (You might have to make it anyway, if your robot is telling you "ow, that hurts, please stop it.")

Because as Skorgu says, we're kind of bad at assessing the existence of intelligence/consciousness in non-human beings. We rush to attribute intelligence when it isn't there, and we are quite willing to decree that it does not exist when that serves our purposes. I tend to think that consciousness is an emergent property of sufficiently complex information networks such as brains. But that's far from proven, and I'm not sure how you would prove it, if you could. And in the attempt one risks torturing and enslaving conscious beings or conferring personhood on automatons. Neither seems desirable...
posted by Diablevert at 2:46 PM on July 20, 2009


But there's no continuity of experience, except from the perspective of the post-reboot you. You're quite clear on that. So they're only "the same person" from the perspective of how they are treated -- by themselves, and by others. They are not the same person physically: They are made of different matter, one replaces the other.

How could there be continuity of experience from the perspective of the pre-copy person? Precognition? You have no "continuity of experience" with the person you'll be tomorrow--does that mean that that person is not you?

Please remember that for the sake of this argument we were assuming the ability to instantaneously copy a person and their mind down to the atomic level--and assuming, also for the sake of argument, that non-copyable quantum effects were irrelevant to cognitive function.

Like you I think the "doubling" argument might be obscuring as much as it is helping here. I think the "teleportation" argument is in some ways the cleaner one. I invent a teleportation device that works a la the Star Trek one. Somehow it disassembles me in spot A and reassembles me so perfectly in spot B that I have no discontinuity even of my thoughts during the process. Your claim is that the only possible way to view such an experience is as the murder of one person and the creation from scratch of some entirely different person. This claim strikes me as deeply counterintuitive and I would cite the experience of generations of Star Trek watchers, almost none of whom have ever had the slightest problem thinking of Captain Kirk as the "same" person before and after transportation (except when the transporter malfunctions and puts him together incorrectly).

In what way am I not the same person before teleportation as I was afterwards (assuming no errors in the way I'm put together)? You say "well, the atoms are all different atoms." So? If I get a prosthetic hand, am I no longer the same person I was before? When one cell dies and another replaces it, do I become a "different person"? If I get a heart transplant, am I a different person? Very, very few people consider their personhood essentially bound to a specific body (I say this with confidence because there are few major religions that don't accept the possibility of the existence of the 'self' separate, permanently or temporarily, from the body). The "self" is a collection of memories, a set of cognitive abilities and styles, and a series of attitudes towards the world. If those remain constant through the teleportation process, how is the self not "continuous" through that process?

I am extremely confident that if someone you loved agreed to be a subject of an experimental teleportation and then came up to you afterward and was, in every way you could determine, the "same" person, you would continue to treat them and interact with them as the same person they always were. You would not "mourn" for the passing of the pre-teleportation "self" and feel that you needed to start a new friendship with the post-teleportation one.
posted by yoink at 3:00 PM on July 20, 2009


However, I suppose the thing that still gives me pause is that while I feel perfectly fine about taking the leap of faith that other humans are conscious just as I am, a human-created artificially intelligent being is fundamentally different from a human such that I am not sure whether such an inference is justified. (You might have to make it anyway, if your robot is telling you "ow, that hurts, please stop it.")

I think that as soon as one imagines the actual interaction with a "machine mind" that acts exactly as you would expect a human to act (i.e, so that you simply cannot tell--as you interact--whether this is a machine or whether it's a person responding to you via remote control), then this question becomes rather simple to answer.

I suspect that people who say "oh, but it's not a real person" tend to give themselves the out of thinking that the machine will be somewhat like a person, but will inevitably betray its machine-like status (a la every second Star Trek episode where the machines are simply "too logical" or what have you). But if you actually picture to yourself the experience of chatting away to a machine as if you were talking to a close friend on Skype, say, the question of "is this a real consciousness" would strike you as morally obscene--just as "is my Mom really a person or is she just a meat puppet" is the kind of question that only a psychopath would ask.
posted by yoink at 3:06 PM on July 20, 2009


This is why the original-deletion scenario confuses more than it clarifies: It hides the obvious fact that these are different and unique people. (They are unique: They occupy different space and are comprised of different matter.) If they both still existed, it would be nonsensical to say they're both "you" -- only you would be you, and you2 would be someone else who happens to share all your experiences up to point X.

Oops, sorry Lodurr, I see it was the non doubling version that you thought was obfuscating.

Well, I've addressed this situation until I'm as blue in the face as the Metafilter background, so I'm not sure what to add. Maybe if I try one more hypothetical situation.

You go to "atomic-copy-lab" in the future to make a back-up copy of yourself. There's a mix-up at the lab, however, and instead of just copying you for storage, they copy-and-reboot you. So now there are two yous. Immediately afterwards there's an earthquake and both you and you[1] get knocked out. You're carried out and laid on stretchers side by side. You both come to, and both have no recollection of the past 24 hours. Nobody around remembers which one of you is the body that actually walked in that morning and which is the exact (down to the atom, remember) recreation.

O.K. I say that both of these people are "continuations of the pre-copy you." They are obviously two different people (becoming more different with every passing second), but that there is no meaningful sense in which either is a more "real" continuation of the one original pre-copy "self" than the other. Your argument is that it is nonsensical to call one of these people simply a continuation of the pre-copy "you." So...how do you go about determining which one is the "real" you and which one is the "copy" you?

If there is no possible way--by scientific test or by having friends examine them or what have you--to tell these two apart as "real you" and "fake you" aren't you insisting on a purely imaginary distinction?
posted by yoink at 3:17 PM on July 20, 2009


Diablevert: have you considered that you are conscious (in the metaphysical sense, not awake/asleep sense) when you are asleep, except that your memory and sensory aspects of your brain aren't running? I'm not saying that's the case, but it's entirely possible. There really isn't much knowledge about any of this.

There is no known conceptual means for consciousness to be emulated in any material machine. I would like to again emphasize that an appeal to emergence is handwaving. There's a big difference between PageRank magically creating implicit structures and cool learning techniques and actual consciousness. Replacing the neurons will just make a machine that acts human and not be conscious. It may be awake and even act with independent agency, but it would not be conscious according to all known computer science. To argue anything else is as crazy as mythical sun-god creation stories.
posted by amuseDetachment at 3:30 PM on July 20, 2009
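
For readers who haven't met it, the PageRank example cited above really is nothing but repeated averaging over a link graph until the scores settle -- plain machine instructions out of which a useful ranking "emerges." A toy sketch (the graph and damping value are illustrative):

    # Power-iteration PageRank over a tiny made-up link graph.

    def pagerank(links, damping=0.85, iters=50):
        """links: dict mapping each page to the list of pages it links to."""
        pages = list(links)
        rank = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iters):
            new = {p: (1 - damping) / len(pages) for p in pages}
            for p, outs in links.items():
                share = damping * rank[p] / len(outs) if outs else 0.0
                for q in outs:
                    new[q] += share
            rank = new
        return rank

    toy_web = {'a': ['b', 'c'], 'b': ['c'], 'c': ['a'], 'd': ['c']}
    print(pagerank(toy_web))   # 'c' ends up with the largest score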


Replacing the neurons will just make a machine that acts human and not be conscious. It may be awake and even act with independent agency, but it would not be conscious according to all known computer science. To argue anything else is as crazy as mythical sun-god creation stories.

So...you're saying its behavior could be absolutely indistinguishable from a human's, but you wouldn't say that it was conscious?

Why not? And why, then, do you assume that anybody else in the entire world other than yourself is conscious in the way you know yourself to be? Is all that you are saying that you are willing to assume consciousness in other humans but unwilling to assume it in the case of a machine? How is that an interesting statement about anything other than your own beliefs?
posted by yoink at 3:44 PM on July 20, 2009


There is no known conceptual means for consciousness to be emulated in any material machine.... Replacing the neurons will just make a machine that acts human and not be conscious.

So, then: you ascribe some magical property to neuronal matter, which can embody consciousness, but which property is for some reason unavailable to other types of matter, which can't. Is that right?
posted by ook at 4:11 PM on July 20, 2009


(...which is to say, "zimboes"...)
posted by ook at 4:18 PM on July 20, 2009


yoink: Yes, because you can have human behaviors without being conscious. The Turing Test doesn't test for consciousness, it tests for intelligence. Massive difference. I am not sure anyone else is conscious; many philosophers have already resolved that, isn't that what "Cogito, ergo sum" is supposed to embody? I am willing to assume it in others because that is (in my opinion) a reasonable assumption and it suits me; it is not reasonable to assume, with current knowledge, that a laptop can gain consciousness.

ook: No. If A is false, that does not automatically make B true. I am saying that if you replaced neurons with computers, there is no known arrangement of instructions/data to make it conscious. As to what exactly makes us conscious, I have no idea. The only thing I do know is that if you replaced neurons with computer instructions, there is no logical way it can magically become conscious.
posted by amuseDetachment at 4:21 PM on July 20, 2009


amuseDetachment, as far as I can see you've simply made an a priori judgment that computers cannot become conscious. As they say, you can't be reasoned out of a position you didn't reason your way into.

Given that you have no definition for consciousness, however, other than "a magical thing humans have and others don't," I don't really see what it is that you're denying computers would be able to do. In the end, I'm guessing that it would take mere minutes of interaction with an actually human-behaving computer for you to ascribe "consciousness" to it in precisely the way you do--with equal lack of evidence--to the humans you interact with.
posted by yoink at 4:29 PM on July 20, 2009


amuseDetachmetn: That's quite a strong statement. What would it take to convince you otherwise? Would any evidence make you stop and say "you know, I was wrong; this computer really is conscious" ?
posted by Skorgu at 4:31 PM on July 20, 2009


Drat, saw the typo as I hit post. Pretend I can type please.
posted by Skorgu at 4:32 PM on July 20, 2009


have you considered that you are conscious (in the metaphysical sense, not awake/asleep sense) when you are asleep, except that your memory and sensory aspects of your brain aren't running?

Have I considered that I am conscious when I'm unconscious? No, I haven't. Not even in some super-nifty metaphysical sense. I'm being a bit glib --- but I would define consciousness as awareness-of-self-as-self, that I am an I incorporated in a form, distinct from the universe within which I exist. If I do not possess such awareness --- whether temporarily, through sleep, or permanently, through death --- I am not conscious. My computer is not on when it's off, even if I can turn it back on and have it work just fine. You might say that the sleeping me is a being which is capable of consciousness, but as per Swift, I may be an animal rationis capax, but that doesn't mean that I'm rational past, say, the fourth dark n' stormy. (I make 'em strong.)

conceptual means for consciousness to be emulated in any material machine

Yes, including the human body --- as yoink has pointed out, I can't prove my mother's conscious and not an automaton, though I believe her to be. At a certain point, psychology leaks into biology leaks into chemistry leaks into physics. The electron, the atom, the molecule, the cell, the neuron, the nerve, the medulla oblongata --- none of these things are conscious. (Though all are things.) The brain is. Somehow consciousness snuck in there when we were scaling up. How?

Or in other words, an appeal to emergence is handwaving? Okay. An appeal to emergence is handwaving. Nevertheless, what is the alternative to emergence? (I can think of one, don't subscribe to it myself, but it is rather popular. Skip down to paragraph seven.)

If you are merely saying, "There is something which causes consciousness, but we do not yet know what that thing is," well, then, okay. But if we don't know what it is, why should we presume that it is uniquely tied to brains? The brain is a physical machine that operates on chemicals and electricity. We know this because we know that if you fuck with it, physically, chemically, or electrically, you fuck with how people feel and think. We're very far from that day, but if we can come to understand how the machine works, why should we not be able to replicate it and expect it to work just as well, consciousness included?
posted by Diablevert at 4:54 PM on July 20, 2009


How could there be continuity of experience from the perspective of the pre-copy person? Precognition? You have no "continuity of experience" with the person you'll be tomorrow--does that mean that that person is not you?

Easy: If that person -- who might more usefully be known as "the original" -- is still around after the copy.

Your claim is that the only possible way to view such an experience is as the murder of one person and the creation from scratch of some entirely different person. This claim strikes me as deeply counterintuitive and I would cite the experience of generations of Star Trek watchers, almost none of whom have ever had the slightest problem thinking of Captain Kirk as the "same" person before and after transportation (except when the transporter malfunctions and puts him together incorrectly).

First, that is certainly not my claim: I don't believe I ever said "only". Obviously there are lots of ways to see it. What I claimed was that in the current ethos, it's clearly causing the death of one individual to create another.

As far as "intuitiveness" and "counter-intuitiveness" go: So what? So people watch a TV show and don't give much thought to what they see -- we're supposed to an ethos out of that? What matters is what conclusions and opinions and feelings people arrive at when they actually consider the problem.

In any case, the subjective, external perception is something I explicitly never argued with. I would argue that it's supremely foolish to let Star Trek writers determine how I ought to think about something because it's convenient for their plots that they be able to move someone instantaneously from A to B.

In what way am I not the same person before teleportation as I was afterwards (assuming no errors in the way I'm put together)? You say "well, the atoms are all different atoms." So? If I get a prosthetic hand, am I no longer the same person I was before? When one cell dies and another replaces it, do I become a "different person"? If I get a heart transplant, am I a different person? Very, very few people consider their personhood essentially bound to a specific body (I say this with confidence because there are few major religions that don't accept the possibility of the existence of the 'self' separate, permanently or temporarily, from the body). The "self" is a collection of memories, a set of cognitive abilities and styles, and a series of attitudes towards the world. If those remain constant through the teleportation process, how is the self not "continuous" through that process?

Wow, a lot going on in this paragraph. First, you're using a gradual replacement analogy to respond to a total replacement scenario. In the Transporter scenario, the whole body is replaced -- and it's replaced while destroying a whole previous body. (You meanwhile completely ignore the fact that the option exists to not destroy the original body. Why is that?) But you then respond to that with gradual replacement scenarios: Are you still the same person with a new heart, prosthetic hand, etc. Well, as you should well know, a lot of people ask precisely those questions when they do get prosthetic limbs or replacement organs. The fact does remain, though, that that's a long way from duplicating your entire body, including your CNS.

You are, categorically, a different body after the transportation. This is a logical fact. The two individuals that obtain in the transporter scenario -- the before, and after, let's call them -- are two different and physically distinct individuals. If you deny that, you're denying physical reality.

Which brings me to the second thing you're doing in that paragraph, and this is very important and it is, I think, one source of a lot of the confusion and posturing that's happening on this thread: You're conflating several things here that it's very important not to conflate. You seem to be conflating "persons", people, "self" and bodies, and you're failing to distinguish between several senses in which the terms "person" and "self" are used.

In anthropology and in certain areas of philosophy of mind and ethics, "Personhood" is a socially defined construction: Whether you're a "person" or not is something other people get to decide. For our purposes, let's say that you also get to decide that for yourself. But your "personhood" is also decided by institutional forces: To the law you may be a person, to the church you may not be; to the law, in addition, you may be a person in some regards and not others. A "Body" may or may not be required for a person; same with "selves." You define "Self" in a particular way, and I don't actually see a problem with your definition, but that's not the same as a "person" or the person's "body."

You're mixing these terms up as though they have interchangeable meanings. They don't. A person may certainly regard himself/herself as continuous, when we objectively know they did not have a continuous body. It is not the same animal as it was before: This is objectively true. Does it perceive itself as being so? Sure. Does Aunt Margaret? Sure, maybe. Does the State of New York? Dicier, but let's say it does. It's still not the same animal, and it only exists because you decided to destroy another animal in the process of creating it.

This seems really clear to me: Post-transporter animal (yoink1) only exists because pre-transporter animal (yoink0) was destroyed. I don't really care whether yoink1 perceives itself as a continuous self; it's absolutely irrelevant to the ethics of the issue, as far as I'm concerned.

As far as an "imaginary distinction", I will just point out again: It's only imaginary because you chose to annihilate yoink0. If you don't do that, it's an extremely real distinction. The only reason that yoink1 has any best-match claim to being "yoink" is that yoink0 no longer exists -- because you chose to annihilate it.
posted by lodurr at 4:58 PM on July 20, 2009


I'm puzzled at this notion that calling consciousness an "emergent" phenomenon is handwaving. It seems to me that it's either an emergent phenomenon or we have a metaphysical soul--I don't see any other option. Nobody, surely, is suggesting that any physical part of the brain is "conscious," are they? If consciousness is what the workings of the neurons in the brain achieve, then that is an emergent phenomenon resulting from the interactions of those non-conscious neurons. That's not "hand-waving," that's just a correct description of what is going on. It's not some thing that is added in (just add "emergence" to "neurons" and, hey presto, we have consciousness).

Of course, if you believe in a soul, then you believe that all the brain stuff is just a sideshow. The real "you" is somehow infused through the body. In that case the physical workings of the body and brain are actually quite distinct from consciousness. Of course, I've always been puzzled why people who believe in a soul aren't more troubled by such phenomena as being knocked unconscious or senility. Why does the soul's capacity to express itself depend on the state of the brain from which it is logically quite separate?
posted by yoink at 5:05 PM on July 20, 2009


lodurr, your argument is entirely circular. You start from the assumption that the body is essential to personal identity (although only in the case where it is replaced instantaneously and totally, not in the case where it is replaced piece-by-piece). You have no argument for that assumption, you just posit it as a given. Then you reject my argument that identity could persist across the instantaneous replacement of a body by and exact copy on the grounds that bodies are identical to personal identity--regardless of either the "self"'s experience of personal continuity OR other people's continued satisfaction with that self's continuity.

There's no way for reason to enter that circle.
posted by yoink at 5:11 PM on July 20, 2009


Bugger. For "by and exact copy on the grounds that bodies are identical to personal identity" read "by an exact copy on the grounds that bodies are essential to personal identity." Sorry.
posted by yoink at 5:12 PM on July 20, 2009


By the way, here is the point where you make the pure circularity of your argument clear: You are, categorically, a different body after the transportation. This is a logical fact. The two individuals that obtain in the transporter scenario -- the before, and after, let's call them -- are two different and physically distinct individuals. If you deny that, you're denying physical reality.

Of course you are a different body after the teleportation. My argument is that personal identity and the continuity of "selfhood" as we normally understand it do not depend on the particular history of the atoms that comprise my body. You can't counter that argument--at all--by saying "but it's a different body!"

I doubt you actually believe what you claim to believe here. Imagine that I use my teleportation device to instantaneously replace every single atom in your body below your head with an identical counterpart. I'm pretty sure, even though 90-odd percent of your body changed instantaneously, that you will have no qualms about saying that your identity has remained continuous--that "you" are still "you."

O.k., now I replace the body plus every single non-brain atom in your head as well. Still you? O.K., Now I replace every single atom in your body, your non-brain head, and one atom in your brain. Still you? Of course, right? Two atoms. Still you? Of course. Three atoms, four atoms....tell me at what point it stops being you and why.
posted by yoink at 5:30 PM on July 20, 2009 [1 favorite]


By the way, I did some googling around and found that there's actually some truth to that old "every cell in your body changes every seven years" canard. While that is obviously false, it is apparently true that 98% of the atoms in your body will be replaced over the course of a year. 98%! It's really just the enamel on your teeth that remains stable. Maybe the teeth are the seat of the soul?
posted by yoink at 6:07 PM on July 20, 2009


You start from the assumption that the body is essential to personal identity...

Huh?

Never said that, dude. Never even implied it.
posted by lodurr at 6:27 PM on July 20, 2009


Never said that, dude. Never even implied it.

Ahem.
posted by yoink at 6:30 PM on July 20, 2009


Yoink, I'm sure this is all your revenge for me misreading your posts much earlier in the thread. But really, you're over-focusing on the whole question of "different matter."

I've made clear multiple times that it's the multiple instances that are the problem. I'm sorry you don't seem able to see that. I'm also sorry that you insist on destroying one instance so you don't have to deal with the fact that multiple instances pose a problem for your continuity of self.

Really, you've got a perfectly good out here that you're not taking, and I wonder why that is: You could simply argue that my point is irrelevant to yours. But you're choosing, stubbornly, to not understand me when I say that bodies and selves and persons are not the same concepts. If you understood that, you'd have an exit strategy from this argument. But you seem uninterested in doing so.

I have actually granted one of your major points: That yoink1 experiences continuity of experience. I've granted that several times. I don't know why you choose not to see that. The old revenge thing, perhaps. Turnabout's fair play, I guess, so take it away.
posted by lodurr at 6:34 PM on July 20, 2009


Yoink, your link is pointing to one of your posts.
posted by lodurr at 6:35 PM on July 20, 2009


I'm also sorry that you insist on destroying one instance so you don't have to deal with the fact that multiple instances pose a problem for your continuity of self.

Ahem again.
posted by yoink at 6:36 PM on July 20, 2009


Yoink, your link is pointing to one of your posts.

Try reading it.
posted by yoink at 6:37 PM on July 20, 2009


Oh, I see what you've done: You've pointed to you quoting me saying something that is irrelevant to the point.

Here, I'll put it in boldface so it's perhaps a little harder to skip over: Bodies are not selves. Persons are not selves.
posted by lodurr at 6:38 PM on July 20, 2009


OK, I see your one instance where you didn't destroy the original.

What's your point, exactly? That they both think they're the original? Haven't we already agreed on that? That aunt Margaret thinks they're both original? Haven't we already agreed on that, too?

Let me state this as plainly as possible: They are different individuals. Physically discrete, physically distinct. One is original, the other is not. Do you dispute that?

You will want to state that your "confused perceiver" case makes it irrelevant which is which. In a legalistic framework, you'd be correct; but it would remain a physical fact that one was an original and the other a copy, and that one exercised choice (or had choice exercised upon it) to make the other.

Beyond this point, I'm not sure what you're trying to prove, except maybe either: [a] that being yoink1 is somehow just as good as being yoink0, or [b] that if there's a yoink1 that's an exact copy of yoink0 and that has a not too different set of memories, it's OK to destroy yoink0.

I happen to think that's a pretty ethically challenging position, and I don't think you've supported the idea that it's not.
posted by lodurr at 6:45 PM on July 20, 2009


They are different individuals. Physically discrete, physically distinct. One is original, the other is not. Do you dispute that?

Of course not. So what? And I mean that not as a dismissal, but as a real question: "So what follows from that? How does that prove that the copy isn't, at the moment of copying, the same person as the original?" You keep saying "but they are two different bodies." So? What, again, do you think that proves?

if there's a yoink1 that's an exact copy of yoink0 and that has a not too different set of memories, it's OK to destroy yoink0

Wait a second, where does this "not too different" come from? We have stipulated, have we not, that for the purposes of this thought experiment, the copy is exactly the same as the original--has exactly the same intentions, memories, dreams, aspirations etc. etc. etc. It's simply cheating to bring in a "not too different" standard which was no part of the argument.

that one exercised choice (or had choice exercised upon it) to make the other.

Again, you're smuggling in a difference which is ruled out ex hypothesi. The copy has exactly the same thoughts and intentions as the original. Thus they are either both victims of someone else's choice or they both decided to go through the copy-process.

Of course one may come to regret the decision and the other not. Again, so what?
posted by yoink at 7:09 PM on July 20, 2009


But if you actually picture to yourself the experience of chatting away to a machine as if you were talking to a close friend on Skype, say, the question of "is this a real consciousness" would strike you as morally obscene--just as "is my Mom really a person or is she just a meat puppet" is the kind of question that only a psychopath would ask.

But ELIZA fools people. Still. And there's a form of agnosia in which the part of the brain that registers our emotional associations with familiar objects is disconnected --- e.g., you see your mom, and you think, ah, that's my mom, but you do not feel the wave of love, compassion, and nostalgia associated with your mom (though you truly love her). People who suffer from this try to reconcile the lack of appropriate emotion coupled with a sense of familiarity by telling themselves that their loved ones have been replaced by strangers or aliens who look like their loved ones. It is often impossible to dissuade them from this belief, so unnerving is this you-are-familiar-yet-a-stranger sensation in relation to those closest to them.

Obviously, people who suffer from this condition have a severe, though limited, form of brain damage. I mention it to point out that even the very mechanisms by which we recognize and attribute consciousness to others depend upon entirely unconscious and instinctive brain functions, and thus are subject to inherent limits and flaws. As we attempt to create artificial intelligence, we're not gonna get it right the very first time. We're gonna make mistakes, we're gonna misunderstand things, we're going to have to experiment. And that sends a bit of a shiver up me spine. I mean, maybe I've read a bit too much Lovecraft, but one of the creepiest, saddest things I can ever remember hearing was that EEG scans of people rendered comatose by Creutzfeldt-Jakob disease (what humans get when they eat a mad cow) are very similar to the EEGs of people on LSD --- the idea of being trapped on the world's worst acid trip as you linger in a coma for decades freaks me right the fuck out. And so I find the idea that we may not be able to truly determine whether an artificial intelligence is conscious troubling....or in other words, getting to the point where I can chat away to a machine on Skype is not going to be a clean easy transition, where we figure all this out on paper perfectly and then implement....
posted by Diablevert at 7:19 PM on July 20, 2009
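
On the "ELIZA fools people" aside above: a toy sketch of the kind of shallow pattern-matching an ELIZA-style program does. The handful of rules below are invented purely for illustration (the real ELIZA used keyword ranking and reassembly templates), but the point is how little machinery it takes to sound engaged without anything resembling understanding, let alone consciousness.

import re

# A few invented pattern -> reply rules, ELIZA-flavoured but nowhere near the original.
RULES = [
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bmy (mother|father)\b", re.I), "Tell me more about your {0}."),
    (re.compile(r"\byes\b", re.I), "You seem quite sure."),
]

def respond(line):
    # Return the reply for the first rule that matches; deflect otherwise.
    for pattern, template in RULES:
        match = pattern.search(line)
        if match:
            return template.format(*match.groups())
    return "Please go on."

print(respond("I am worried about the transporter problem"))
# -> How long have you been worried about the transporter problem?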


Metafilter: The teeth are the seat of the soul.
posted by localroger at 7:52 PM on July 20, 2009


OK, yoink, remove your "ex hypothesi" items if it makes you happy. Though why you get to stipulate "the hypothesis" and not someone else, I don't know. It doesn't change any of my arguments.

What this comes down to is that you're OK with the idea of killing yourself if you can "know" that someone exactly like you will come into existence. I throw that right back at you: So what does that prove? It proves that you're OK with that, because you have a notion of "personhood" that allows you to say that yoink0 and yoink1 are as good as the same person. (Though after all this back and forth I'm still unclear on whether it's OK to just off one of the two of you, since it doesn't matter which one lives.)

What does it prove that they have different bodies? Only that it's not the same individual. Your "personhood" (by which it's never clear whether you mean "personhood" or "sense of self" -- these are different things, as I've pointed out, both subjective from different perspectives) is purely subjective. The identity scenario is sterile in general and has no real import for anything. We've both been wasting all the time we spent discussing it.
posted by lodurr at 7:56 PM on July 20, 2009


Lodurr, yoink, fellas --- it's so weird watching you talk past each other.

It seems to me that Lodurr believes that in a transporter or in a rebooting, the "person" dies, because their matter is not preserved, and the memories and instincts of the person are intrinsically linked with the matter of which they are comprised; any other matter cannot be them because consciousness is embedded in matter in an inextricable whole. Thus his obsession with which one is the "true" or "original" and which the "copy" in the reboot, and his contention that a transporter, if real, would in effect murder the person transported.

Yoink, on the other hand, is arguing that the consciousness or being is the memories and instincts, the unique set of information which comprises an individual's experience. If the pattern of that information can be replicated, the medium in which the replication takes place is irrelevant, whether it be silicon chips or saline-bathed neurons or the dusty vacuum tubes of the Lovematic machine. Obviously, if the pattern is replicated in more than one place, then immediately following the replication the patterns will begin to diverge, as each embodiment of the being is now experiencing the world in a slightly different way. But the moment of replication, if perfect, is not one person being copied (leaving a true original and a lesser/false copy) but one person being split into two bodies. Neither is greater, neither more true, neither original; they are both equal, and exactly the same, because the pattern of information is the same; each embodiment has precisely the same memories and experiences.

It's a bit like --- this may be a dangerous analogy --- but identical twins are different, yes? Because from the moment they diverge in the womb, as their very bodies and consciousnesses are formed, they begin to experience the world differently. But it is not the case that one twin is a copy of the other. They are each a perfect copy of the same original cell. The information --- the DNA --- in each embryo is precisely the same. Neither is the "original" twin.
posted by Diablevert at 10:04 PM on July 20, 2009


But the moment of replication, if perfect, is not one person being copied (leaving a true original and a lesser/false copy) but one person being split into two bodies. Neither is greater, neither more true, neither original; they are both equal, and exactly the same, because the pattern of information is the same; each embodiment has precisely the same memories and experiences.

Well put.
posted by Bort at 5:28 AM on July 21, 2009


Diablevert: I have no "obsession" with which one is original. I don't care which one is original, except insofar as the original is making an ethical decision w.r.t. the copy.* All I'm saying is that they are different.

That's it.

I really don't care if you think they're the same in practical terms, and as far as I'm concerned they're both persons and they both have continuity of experience. I hold no beliefs about 'intrinsic linkage' between a person's sense of self and their meat -- though I do think that in practice it's going to be extremely hard to separate them, for practical reasons that I've already mentioned.**

If there's a point I'm "obsessed" with, it's that they're both persons. At least, in my ethos they are. Action that you take w.r.t. either one of them is action taken in regard to a person, and the substance or import of that action is in no way diluted by the fact that there's more than one copy.

For example: If you have two perfect copies, and you kill one, you're still killing a person. In our common ethos -- at least, I think this is our common ethos (it's certainly consistent with mine and the one within which I was raised) -- having duplicate persons at your disposal does not diminish the impact of killing one.

This is totally compatible with the idea that a self "is" memories and experiences.***

What you guys don't seem to get is that "self" and "person" are not the same thing. The words refer to two different things -- two different kinds of thing. "Self" is, well, your self: Most people would say it's the thing that's aware of being you. I'd say it's roughly (but only roughly) coterminous with "consciousness."

"Person" is much, much more fluid. In practical terms, "person" has very different meanings across cultures and even within cultures -- i.e., it differs by ethos. "Person"-hood in a legal sense is different from personhood in a religious sense, and both are different from personhood in the sense it would be used by, say, many poets.

--
* (In an involuntary copy scenario, they're both in at least similar if not identical ethical relation to the entity that's acting to copy the original.)
**All this theory -- it really ends up having very little bearing if we dig into the meat and discover that in practical terms the nature of it is such that only an omnipotent and omniscient god (which I don't happen to believe makes sense) could ever accomplish these thought experiments. Put another way: All these thought experiments end up being irrelevant because they aren't under real-world constraints of any kind. I don't happen to believe that we ought to live by the dictates of perfect logic -- which is, as this discussion should amply demonstrate, in practice quite imperfect.
***It should also be totally clear to you that ethos is not an a priori fact: It's a socially constructed fact. It varies with culture and personal experience (and, I would say, with temperament). If you think ethos is a priori (and I don't seriously think either of you do), then we have nothing to discuss.

posted by lodurr at 6:36 AM on July 21, 2009


Well put.

Well put, but deeply strange. It seems to support the view that the more perfect copies you have, the less each one is worth. Which would be a very troubling view. And that's exactly what I've been on about this past day, here.

People are not logical identities, and 'perfect duplication' is meaningless mental masturbation at the base of it: As soon as time (you know, that fundamental aspect of reality that we all deal with in every -- er -- moment of our lives?) is allowed to enter the picture, you have difference. And you have more difference, the more time you have.

You guys can only get to this practical equivalence standpoint by ignoring time and other inconvenient aspects of reality. Your practical equivalence of perfect duplicates is purely a logical nicety with no import for human experience, and no bearing on an actual physically real world.
posted by lodurr at 6:41 AM on July 21, 2009 [1 favorite]


re: non-computability (never mind consciousness ;) freeman dyson claims life is analog in the last link...
The superiority of analog-life is not so surprising if you are familiar with the mathematical theory of computable numbers and computable functions. Marian Pour-El and Ian Richards, two mathematicians at the University of Minnesota, proved a theorem twenty years ago that says, in a mathematically precise way, that analog computers are more powerful than digital computers. They give examples of numbers that are proved to be non-computable with digital computers but are computable with a simple kind of analog computer. The essential difference between analog and digital computers is that an analog computer deals directly with continuous variables while a digital computer deals only with discrete variables. Our modern digital computers deal only with zeroes and ones. Their analog computer is a classical field propagating through space and time and obeying a linear wave equation. The classical electromagnetic field obeying the Maxwell equations would do the job. Pour-El and Richards show that the field can be focussed on a point in such a way that the strength of the field at that point is not computable by any digital computer, but it can be measured by a simple analog device. The imaginary situation that they consider has nothing to do with biological information. The Pour-El-Richards theorem does not prove that analog-life will survive better in a cold universe. It only makes this conclusion less surprising.
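(to put that setup in rough symbols -- this is a gloss on the kind of result being described, not a quotation from Pour-El and Richards, and the details of their construction are more delicate than this:
\[
\frac{\partial^2 u}{\partial t^2} = \nabla^2 u, \qquad u(x,0) = f(x), \qquad \frac{\partial u}{\partial t}(x,0) = 0, \qquad x \in \mathbb{R}^3,
\]
with $f$ a computable, continuous initial condition chosen so that the value of the unique weak solution at a computable point and time -- $u(0,1)$, say -- is not a computable real number. No digital machine can output that value to arbitrary guaranteed precision from the initial data, while the evolving field itself, measured at that point, would in principle deliver it directly.)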
that is all!
posted by kliuless at 6:54 AM on July 21, 2009


I think some of the analog v. digital conflict misses the point: If you could create a mind with an analog computer, I think most of the folks here who think it's possible with a digital computer would be perfectly happy with that.
posted by lodurr at 6:58 AM on July 21, 2009


(yes, I know: "the brain is an analog computer!" they -- really, we -- want to create one. We want to be God, not God's art-critic.)
posted by lodurr at 7:00 AM on July 21, 2009


I invent a teleportation device that works a la the Star Trek one.

well, no, you don't. It's fiction, and there is absolutely no technology that is anywhere near teleportation currently available. There's really no reason to think it will ever be possible.

Somehow it disassembles me in spot A and reassembles me so perfectly in spot B that I have no discontinuity even of my thoughts during the process.

are the disassembled atoms sent through some tunnel or something? where do they go when "disassembled"?

Your claim is that the only possible way to view such an experience is as the murder of one person and the creation from scratch of some entirely different person.

you suggested that: Ask yourself this: if you suddenly found yourself in the Star Trek universe, would you use the transporter? If your answer is "yes" then you are saying that you would be happy to destroy yourself and have an instantaneous recreated clone-self made.

This claim strikes me as deeply counterintuitive and I would cite the experience of generations of Star Trek watchers, almost none of whom have ever had the slightest problem thinking of Captain Kirk as the "same" person before and after transportation (except when the transporter malfunctions and puts him together incorrectly).

They're watching fiction! It's a plot device. I have always been a bit too literal with my sci-fi (and other fiction) and I always have been annoyed by the "magic" technology on star trek (and plenty of other shows/movies) but most people just don't even really think about it. It's accepted because it moves the story on - it's a "flux capacitor" - there doesn't need to be a sensible explanation for how it works. If there were, it would be science, not science fiction. It's just an idea, a dream, like fairy tales or myths about gods used to be, except now future humans are the gods. Nothing is impossible in the star trek universe. A writer just has to come up with a pseudo-scientific way of framing it, but any miracle can be performed - healing, raising from the dead, turning one substance (or creature) into another... mental control of physical things - etc.
posted by mdn at 8:36 AM on July 21, 2009


In the case of that particular plot device, it didn't start out as a gedankenexperimental device for philosophers of mind: it was simply a handy way to get crew across very long distances, very quickly. I gather Roddenberry was very happy whenever his stuff made it to that status (and a bunch of it did before he died), but I doubt that he'd argue the primacy of gedankenexperiment over actual experiment.

To defend science fiction a little: The best of it really is about thought experiment, and when it's done well, and taken in the right spirit, it can be incredibly useful for that. There's a cliche that goes back at least to Wells, though it's most famously attributed to Le Guin: Science Fiction is almost never really about the future or about technology per se; rather, it's about where we are and what we are doing right now. (Similarly, good fantasy is almost never really about dragons or elves or elder gods, etc.)

The best SF & F writers are very aware of that.
posted by lodurr at 8:46 AM on July 21, 2009


It seems to support the view that the more perfect copies you have, the less each one is worth. Which would be a very troubling view. And that's exactly what I've been on about this past day, here.

Well, I think some areas would be straightforward, ethics-wise. For example, the make a backup copy and restore upon death scenario would be ethically fine, correct? At the other extreme, would I be justified in creating a copy of myself, have it instantiated in some form of robot and treat it as my (the wetware me) slave? I don't think so, because I do believe that time is important and as my two instances diverge, they may disagree on things (like the robot instance being the slave of the wetware instance) and the robot instance has no more obligation to obey the wetware instance than vice versa.

If you're imagining a scenario where I made 50 copies of myself; used those copies to do dangerous things, like mining coal; and then didn't care about their lives or well being because "they are just copies" - then I think there should be no question that that would (should?) be considered wrong.
posted by Bort at 9:20 AM on July 21, 2009


Science Fiction is almost never really about the future or about technology per se; rather, it's about where we are and what we are doing right now.

yah, exactly - just like mythology and fairy tales are really morality tales about human beings and their real lives, and only symbolically about gods, so too with science fiction. Most of the old star trek stuff is about issues that were important in the 60s - sexism and racism or the vietnam war - just represented by "the gods", who are now the idealized future version of humans (no longer fucked up earth people, now they can travel magically across the universe and fix other planets' problems).

Personally I prefer sci fi that deals with real limitations, like Primer, to that which allows almost any work-around that serves the plot, like star trek, but I can enjoy star trek now and then. I just get irritated at the sometimes contradictory plot points. Character and symbolism are much more important to star trek than intellectually interesting thought experiments.

For example, the make a backup copy and restore upon death scenario would be ethically fine, correct?

what is a "backup copy"? Is it not alive? Do you think it would somehow be you when "restored"? To me it would simply be an identical twin, and would have to be alive from the moment of its creation, so no, it couldn't really be ethical to use it as a backup. Even if you could keep a body in stasis indefinitely, when rebooted, it would be a new person subjectively, even if outsiders felt like they were interacting with the same original. Think Moon - you'd still be killing all the previous versions.
posted by mdn at 9:36 AM on July 21, 2009


what's the point again? :P

oh and quantum cellular automata!
posted by kliuless at 9:41 AM on July 21, 2009


For example, the make a backup copy and restore upon death scenario would be ethically fine, correct?

For me, sure. But I don't get to make that decision. That decision has to do with "personhood", which is socially and thus also legalistically constructed.

At the other extreme, would I be justified in creating a copy of myself, have it instantiated in some form of robot and treat it as my (the wetware me) slave? I don't think so, because I do believe that time is important and as my two instances diverge, they may disagree on things (like the robot instance being the slave of the wetware instance) and the robot instance has no more obligation to obey the wetware instance than vice versa.

You've got a couple of things going on here. First, you assume subservience of the copy to the original (note use of term "robot instance" -- "robot" in English, like robota in Czech, has the connotation of subservience). If we get rid of that, then in the reigning American ethos, there should be no justification for expecting the copy to do anything you say. It has nothing to do with time: We just don't expect equals to be subservient without a mutual agreement.* In fact, for many people I know (myself maybe included), the copy would probably disobey on principle (at least some of the time).

But let's get rid of that, and I'd say that, as far as I can see, most people would agree with the proposition that you don't get to use your clone as a slave.

If you're imagining a scenario where I made 50 copies of myself; used those copies to do dangerous things, like mining coal; and then didn't care about their lives or well being because "they are just copies" - then I think there should be no question that that would (should?) be considered wrong.

Why go to 50? Shouldn't one be enough?

--
* Behavior of bullies and other status-conscious weenies et al notwithstanding.

posted by lodurr at 9:41 AM on July 21, 2009


mdn, that use of the "backup copy" hasn't entered into this discussion so far (I think), which is interesting: It's a real possibility, one people have been talking about for years and which has had a lot of pop-cultural currency over the past four or five years. And it's not really SF'nal anymore: Consider the Jodi Picoult novel about a child raised to be her sibling's tissue donor.

I think it's highly relevant to the duplication scenarios.

Oh, and: Live Organ Transplants.
posted by lodurr at 9:48 AM on July 21, 2009


what is a "backup copy"?

A stored (electronically, as 1s and 0s) scan of my brain with enough fidelity to allow for its reproduction (the assumption being that with enough fidelity, any physical object/system is reproducible such that it'll have all the properties of the thing being simulated/reproduced).

Is it not alive?

Just being stored without being instantiated (running)? I'd say no.

Do you think it would somehow be you when "restored"?

Yes. That's been much of the discussion for the past 100/150 comments or so, I believe. Some saying yes, some saying no.

To me it would simply be an identical twin, and would have to be alive from the moment of its creation

Is the moment of creation when it is copied (the scan is made) or when it is running (executing/you can interact with it)? I'd say when it is running, so making a backup should be fine.
posted by Bort at 9:55 AM on July 21, 2009
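
A minimal sketch to make the stored-scan vs. running-instance distinction above concrete, along with the divergence-over-time point that keeps coming up. The Agent class, its fields, and the use of pickle are invented purely for illustration; this says nothing about real minds, only about the difference between data at rest and a running process.

import pickle

class Agent:
    """A toy stand-in for a 'mind': some stored state plus an update step."""
    def __init__(self, memories):
        self.memories = list(memories)

    def step(self, experience):
        # Only a *running* instance accumulates new experience.
        self.memories.append(experience)

original = Agent(["childhood", "first job"])
original.step("made a backup today")

# The "backup copy": a stored snapshot -- pure data, nothing is running.
backup_bytes = pickle.dumps(original)

# "Restore": instantiate the snapshot as a second running object.
restored = pickle.loads(backup_bytes)

# From the instant both are running, they diverge.
original.step("kept living after the backup")
restored.step("woke up on the moon")
print(original.memories == restored.memories)   # False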


Lodurr, can I ask a simple question that might help me understand where you're coming from. Imagine, again, a teleport device. Now, this one actually disassembles you atom-by-atom at Point A and sends those actual atoms in a stream to your destination at Point B and then reassembles you exactly as you were. There is, then, no duplication at all. There is continuity of the body (better continuity, in fact, than you would have by taking an airplane from Point A to Point B). Imagine that this is done with such precision and such instantaneity that if you begin saying "the quick brown fox..." at Point A you walk out saying "...jumps over the lazy dog" at Point B. There is no discontinuity of thought, of sense of "self" or of "personhood."

Do you see that situation as the murder of one person and the birth of a new one, or simply as a means of transportation?
posted by yoink at 9:57 AM on July 21, 2009


That's very clearly a means of transportation, and I would have no ethical issue with it: because the system as you describe it never destroys one individual for the purpose of creating another.

But I don't think that really clarifies anything. You just seem uninterested in noticing that there are two different individuals after the duplicator-transporter operates -- each of which looks at the other and knows he's not it.

I think I can predict where you're going now: I expect you to basically re-iterate that I'm making a distinction without a difference, that there is no practical difference between the destructive and transportational transporter. But to do so, you would have to ignore the fact that one destroys, and one transports.*

You're also hung up on this continuity of experience. I've established that we both think there's continuity of experience, yet you don't acknowledge that. You also don't acknowledge that it's nonsensical to keep time out of the scenario. We live in time. Things happen in time. Perfect duplicates diverge in time.
--
*If that was what you were going to do, I just stopped you from doing it. If it wasn't, this should be no skin off your nose.
posted by lodurr at 10:06 AM on July 21, 2009


By the way, re the validity of science fiction in all this. Obviously Sci Fi tells us nothing at all about the technical possibility of actually creating teleportation machines (you really, really didn't need to point that out mdn). The question of "is this physically possible" is obviously one that this conversation has not been addressing (although there is a conversation about that going on in this thread) and cannot address.

But I think the way audiences respond to these stories is actually useful data of a certain kind. All Sci Fi is, amongst other things, a philosophical gedankenexperiment. The fact that audiences have no trouble investing in "Kirk" as a continuous character through his multiple trips through the teleporter does, in fact, tell us quite a lot about how people in the real world would respond to real-world people travelling through teleporters if such machines were possible (again, I understand fully that it tells us nothing at all about the possibility of such machines; I really, really do understand that). It tells us that what we use to judge the question "is this still the same person" (which is roughly the same question with respect to a "real person" as it is with respect to a fictional character) is not "has this person's body been in continuous physical existence" but "does this person continue to behave in certain predictable ways."

I think much of what we know about "what people think" comes from fiction, in fact. Indeed, I would say that one of the prime reasons we tell each other stories is to test, confirm and challenge our understandings of what it is to be human (usually, of course, what it is to be a particular kind of human: a mother, a lover, an American, etc.). It's certainly entirely up to the writers of Star Trek whether or not teleportation "works" in their world. It's not up to them whether or not audiences will accept that teleportation allows for continuity of character.
posted by yoink at 10:10 AM on July 21, 2009


I think much of what we know about "what people think" comes from fiction, in fact

Narrative reasoning. Increasingly I think it's probably the key innovation (or evolutionary step, whichever it turns out to be) in human development.

The fact that audiences have no trouble investing in "Kirk" as a continuous character through his multiple trips through the teleporter does, in fact, tell us quite a lot about how people in the real world would respond to real-world people travelling through teleporters...

... but it doesn't tell us what they think teleporters do, and so gives us more or less no fix on what they think the related ethical/moral issues are.

Expecting to judge people's moral positions on a destructive-transporter issue (or continuity-of-self) based on whether or not they enjoy ST is a bit like judging whether a man is liable to be an adulterer based on how much he enjoys watching James Bond movies.
posted by lodurr at 10:14 AM on July 21, 2009


First, you assume subservience of the copy to the original (note use of term "robot instance" -- "robot" in English, like robota in Czech, has the connotation of subservience).

That wasn't my intention. I chose robot because when I imagine a copy being instantiated, I see it being in a mechanical robotic body that can interact with the world. To avoid the connotation, let's say instead of a robot, an engineered real organic human that was built or grown quickly in such a way that it was an exact copy of you at the time of your last back up - so perfect that once you lose track of which is which, there is no way imaginable to tell them apart.

Why go to 50? Shouldn't one be enough?

OK, stick with one. That is enough.

Let's assume I want to make a copy of myself, have myself put to sleep with drugs (induced coma, say). Have that copy broadcast to the moon. There, have it constructed into the human copy described above. Then, once that has been confirmed to work, have the version of myself that's in the coma killed.

I'm saying that I have no ethical problems with that; and believe I would effectively (from my perspective) appear to have instantaneously moved from (in both time and space) the moment I was scanned until the moment I'm constructed on the moon (basically, traditional sci-fi teleportation).

Upon Preview:

That's very clearly a means of transportation, and I would have no ethical issue with it: because the system as you describe it never destroys one individual for the purpose of creating another.

OK, I think this demonstrates our differences somehow, because I don't see those 2 situations as different. Why is one transport and the other destruction and creation? Because I see what you differentiate as transport as also destruction with construction, just at a finer level that occurs more quickly perhaps. I guess I'd say that I don't have an issue with destruction and reconstruction, assuming it was at the direction of the one deconstructed and the reconstruction was identical (ignoring any ethical issues there may be if one chooses to change themselves).
posted by Bort at 10:16 AM on July 21, 2009


On narrative reasoning, though: It is extremely powerful, yes, because it allows us to string together explanations for things into narratives and use that as a method of analysis. I'm convinced that's what's behind most religious literature -- most literature. I'd be willing to bet yoink has a similar view.

But it's not very good for anything that requires much precision, because the precise truth is so often counter-intuitive. Scientific reasoning in its various forms (which is far older than Bacon [either one]) was developed as a remedy for that.
posted by lodurr at 10:18 AM on July 21, 2009




I'm saying that I have no ethical problems with that; ...

Well, I do, and we will probably just have to differ on that. I have less of a problem if it's a matter of free and un-coerced choice, though I think it points in a really dangerous ethical direction. If there's any absence of clarity about what's happening, then to me it should not happen.

... and believe I would effectively (from my perspective) appear to have instantaneously moved from (in both time and space) the moment I was scanned until the moment I'm constructed on the moon (basically, traditional sci-fi teleportation).

I don't see this as even controversial. This is effectively what you're stipulating; that part of your scenario seems darn near tautological.

As far as the other cases -- getting rid of the subservience, etc. -- I think I covered those.

(I'm sorry to do that, I really have to work.)
posted by lodurr at 10:25 AM on July 21, 2009


That's very clearly a means of transportation, and I would have no ethical issue with it: because the system as you describe it never destroys one individual for the purpose of creating another.

O.K., good. Now we may be getting somewhere.

Let me give you an alternate hypothetical and see if you're willing to rule on this one.

I build a teleporter. It takes you from Point A to Point B. Like the other one, it pulls you apart atom by atom at Point A and reassembles you atom-by-atom at Point B. However, in this case the atoms that get pulled apart at Point A are not the ones that get put together at Point B--that is, every single oxygen atom is replaced with an oxygen atom etc., but they aren't the same actual atoms. Now, technologically, duplication is not an option. In order for the Point B-me to be constructed, the Point-A-me has to be de-constructed. That is simply a technological requirement of this process. Again, I walk through the transporter door at Point A and walk out the door at Point B with no moment of discontinuity of experience of any kind.

Is this murder/birth or is this transportation?
posted by yoink at 10:27 AM on July 21, 2009


Yet, yoink, when I tried to introduce time, you accused me of going "ex hypothesi":
if there's a yoink1 that's an exact copy of yoink0 and that has a not too different set of memories, it's OK to destroy yoink0

Wait a second, where does this "not too different" come from? We have stipulated, have we not, that for the purposes of this thought experiment, the copy is exactly the same as the original--has exactly the same intentions, memories, dreams, aspirations etc. etc. etc. It's simply cheating to bring in a "not too different" standard which was no part of the argument.
(... the context you so helpfully removed my quote from, which had to do with variance over time...)
posted by lodurr at 10:28 AM on July 21, 2009


Actually--if you see this in time--could you change that "technologically, duplication is not an option" to "it turns out that according to some absolute, fundamental scientific law, duplication is not an option." In other words, there is some kind of quantum-entanglement conservation-of-information deal involved that means that teleportation of a "copy" while leaving the "original" intact is strictly impossible, not just a "we haven't figured that one out yet" limitation.

I'm not sure if that would make any difference to you, and, if it would, I'd be interested to know it.
posted by yoink at 10:31 AM on July 21, 2009


Well, that's clearly destruction for the purpose of creating a new instance. But if you stipulate that experience is continuous, then what am I to say? You make the rules: As you should well know (since you seem to have studied logic), it's possible to create a logical calculus within which any statement you want to make is true. You just have to create the right alphabet and beta functions.

IOW, the fact that you can create a thought experiment that I agree with really means nothing much unless you can relate it to something real in human experience. I've been trying to do that with the duplication scenario; perhaps you could do the same with this perfect transporter scenario? What does it mean, for example, w.r.t. a case where a duplicate is used as a source for organs, or as cannon fodder (e.g., I get paid a shitload to clone myself a thousand times and they get drugged and used as suicide bombers)?

Yoink, the kind of stepwise reasoning that you're using can be used to prove essentially anything: Just read Republic for some excellent and repulsive examples. The arguments you're making are excellent examples of the weakness of narrative reasoning style, and excellent illustrations of why science is done by scientists and not philosophers.
posted by lodurr at 10:37 AM on July 21, 2009


Yet yoink when I tried to introduce time, you accused me of going "ex-hypothesi":

Show me a single instance where I have said that it would be o.k. to destroy "yoink[0]" after the creation of "yoink[1]."

As for "removing the context" which had to do with "variance over time," here's your whole post:
OK, I see your one instance where you didn't destroy the original.

What's your point, exactly? That they both think they're the original? Haven't we already agreed on that? That aunt Margaret thinks they're both original? Haven't we already agreed on that, too?

Let me state this as plainly as possible: They are different individuals. Physically discrete, physically distinct. One is original, the other is not. Do you dispute that?

You will want to state that your "confused perceiver" case makes it irrelevant which is which. In a legalistic framework, you'd be correct; but it would remain a physical fact that one was an original and the other a copy, and that one exercised choice (or had choice exercised upon it) to make the other.

Beyond this point, I'm not sure what you're trying to prove, except maybe either: [a] that being yoink1 is somehow just as good as being yoink0, or [b] that if there's a yoink1 that's an exact copy of yoink0 and that has a not too different set of memories, it's OK to destroy yoink0.

I happen to think that's a pretty ethically challenging position, and I don't think you've supported the idea that it's not.
Want to point out where the word "time" features, let alone the phrase "variance over time"? I'm finding it hard to spot. You say "not too different set of memories"--apparently you were silently assuming that it was time that had allowed for that difference to emerge. I'm not sure what kind of quotation it is that adequately represents people's silent assumptions.
posted by yoink at 10:39 AM on July 21, 2009


disassembles you atom-by-atom at Point A

I would have no ethical issue with it: because the system as you describe it never destroys one individual for the purpose of creating another.

If disassembling you atom-by-atom isn't the destruction of an individual, then what is? Alternatively, if what you are saying is that you don't have a problem with destruction coupled with creation of the same individual, then how is that different than the "duplicator-transporter" where "there are two different individuals after the duplicator-transporter operates"? Because the destruction takes place after the copy is made instead of during? If the duplicator-transporter made a destructive copy - a copy where the individual is destroyed, and used that for the transport, then you'd be ok?
posted by Bort at 10:39 AM on July 21, 2009


Yoink, the kind of stepwise reasoning that you're using can be used to prove essentially anything: Just read Republic for some excellent and repulsive examples. The arguments you're making are excellent examples of the weakness of narrative reasoning style, and excellent illustrations of why science is done by scientists and not philosophers.

Wow. That's amazingly weak sauce. I'm out of here.
posted by yoink at 10:40 AM on July 21, 2009


yoink, I was wrong --- you didn't remove it -- you just didn't understand it: ...[b] that if there's a yoink1 that's an exact copy of yoink0 and that has a not too different set of memories...

Where did you think that variance was coming from?

... weak sauce ...

"Weak sauce" is weak sauce, yoink. You have been fighting this to win the whole time; you could have recognized a day ago that you are essentially talking about your ethos and not logic, but you refused to do so.
posted by lodurr at 10:43 AM on July 21, 2009


you could have recognized a day ago that you are essentially talking about your ethos and not logic

If that's the assumption you've made the whole time, you could've probably saved many people some time by admitting it up front. You've basically said that: yeah, I've argued the technical details for a day, but they don't matter anyway as it is all just belief on either side.
posted by Bort at 11:06 AM on July 21, 2009


Bort, I've been talking about ethos the whole fucking time. I've been arguing (between yoink's increasingly fine thought experiments) the whole fucking time that this is a question of ethics, and that I don't really care about whether the beings that result are similar or how similar they are. I have said repeatedly that that doesn't matter to me.

I'm sorry that you guys couldn't see that. Maybe I should have boldfaced "ethics" and "ethos" every time I wrote it.
posted by lodurr at 11:11 AM on July 21, 2009


Well, in the perverse interest of adding a straw to the camel....

Bort, I've been talking about ethos the whole fucking time. I've been arguing (between yoink's increasingly fine thought experiments) the whole fucking time that this is a question of ethics, and that I don't really care about whether the beings that result are similar or how similar they are. I have said repeatedly that that doesn't matter to me.

It seems to me that this is the crux. "Similar" is not "same."

If the being emerging from the other end of a transporter is the same as the one that went in, then a transporter is merely a transportation device, with glitter thrown in.

If the person emerging from the other end of the transporter is merely similar to the one that went in, then you can make a case that the person who entered the transporter was murdered and the person emerging from the other end is a different, though indeed very similar being, akin to a clone or twin.

Now bearing in mind that none of this bloody technology fucking exists and all of this is merely a thought experiment/philosophical inquiry into the nature and potential ethical implications of humans being able to understand brains well enough to create them from scratch...

If "self" is information, then we can destroy a physical body and recreate it and the other end of a transport beam, without murdering anybody, so long as we can perfectly preserve the info and recreate it.

If "self" is "information contained in matter" then the desruction of the matter is the destruction of the self, regardless of whether we preserve the info. The preserved info may allow us to make a new being, one which is quite similar to the old one, but each is independent and the destruction of the first is a seperate ethical quandry from the creation of the second.

Lodurr, you agree that the being emerging from the transporter would be similar to the being that went in. You agree that subjectively, the being might feel itself to be the same person, and that we as its friends might feel it to be the same being in our interactions with it, but nevertheless, the being is, objectively, not the same but similar. Do I have that right?

This seems to be the wall we keep slamming into --- I think you believe* that if a body is destroyed and/or a new body created in any process we care to dream up, then that new body is prima facie a new being, because it's in a different body. So Yoink keeps bringing up all these examples where the new body has all the same info as the old body and you keep acknowledging them, but none of it matters so long as a new body is involved, yes?

For myself, I'm pretty firmly in the self-as-information camp. There are a whole bunch of other ethical issues that follow from that supposition, but I personally would say that if, through whatever mechanism, consciousness is capable of being preserved, then any new embodiment of that consciousness is the same being in a new body.

*Heh.
NB. I have deliberately avoided using the word "person" because lodurr is quite right to say that various civilizations and cultures have treated personhood and the rights associated therewith differently over time.
posted by Diablevert at 2:52 PM on July 21, 2009


Careful consideration of the comments on this thread has led me to a few surprising conclusions:

1) I guess, after all, and despite my strong preference for science over religion, I must believe in some sort of "metaphysical soul", although I'd prefer to avoid that loaded term. I believe that there are certain aspects of consciousness that we probably will never "explain away". As goofy as this sounds, maybe there's something else going on, and our brains are "transceivers" for some extra-dimensional God-knows-what.

2) The fact that some people consider the question "Why do I seem to be special?" or "Why does it seem that I experience subjectivity?" or "Why am I the only one without a head?" or simply "Why me?" to be a fascinating mystery, and others consider it not even worth thinking about, makes me wonder whether some of us might be experiencing consciousness differently than others? (Maybe Daniel Dennett really *is* an automaton...)

3) I am hoping that the Blue Brain experiment fails to produce a conscious mind,
(just as I am hoping the LHC does *not* find the Higgs boson). It would just be a much more interesting outcome, in my view. And it would help us find the limits of what is knowable... How boring it would be if it turned out that consciousness really was an emergent property of meat! I hope that's not the case. I think there are still some very large mysteries, and we've only begun to scratch the surface. How arrogant to think we'll ever understand it all!
posted by crazy_yeti at 5:18 PM on July 21, 2009


I am hoping that the Blue Brain experiment fails to produce a conscious mind,
(just as I am hoping the LHC does *not* find the Higgs boson). It would just be a much more interesting outcome, in my view. And it would help us find the limits of what is knowable... How boring it would be if it turned out that consciousness really was an emergent property of meat! I hope that's not the case. I think there are still some very large mysteries, and we've only begun to scratch the surface. How arrogant to think we'll ever understand it all!


I've heard many people claim that they don't want to understand how something works, because knowing what's under the hood will ruin the mystery.

I strongly disagree, though I can only disagree when it comes to myself. I can't say what will ruin a mystery for you. But it saddens me if you genuinely can't maintain a sense of mystery when you know how something works.

I'll try to be a bit clearer: if I know how a car works, under the hood, that DOES ruin the mystery of how a car works under the hood. But that's not the really interesting mystery, from an experiential perspective (the most important perspective to a person going about his day-to-day life). The great mystery of a car is how it makes you feel when it takes you someplace new.

I know how paintings work, but that doesn't ruin the mystery of the Sistine Chapel; I know how musical instruments work, but that doesn't ruin the mystery of Beethoven's 9th Symphony; I know that Marlon Brando is an actor, but that doesn't ruin the mystery of "The Godfather." The fact that the Godfather doesn't really exist -- that he's a fiction -- is the least interesting thing about him. I don't give it a thought while I'm watching the movie.

I know a lot about how hunger works, about why I get hungry, and about the biology and psychology of eating. People who lived before the age of science didn't understand these things. Yet both they and I got/get equal enjoyment out of eating a peach.

If we are one day able to dissect or recreate a brain and learn all of its inner workings, and if we can "explain away" all of consciousness, that will not change the experience of being conscious. You will look at a flower and find it beautiful. As a possessor of this new knowledge of the mind's mechanics, you will know exactly why you find it beautiful. You will know that if someone "flips a switch" in your brain, you will no longer find it beautiful.

And yet you will still find it beautiful. The experience will be exactly the same.

Maybe this is simple for me because I work in the theatre. It's one thing to experience beauty and mystery when you see other people's plays. What's amazing is that I experience these things when I see my own plays. I'll work with an actor who has been a friend of mine for years; I'll tell him where to walk, how to talk, and how to move his body. Then I'll sit back and watch. Though I'm intimately acquainted with everything under the hood, before long the magic takes over and I'm enthralled.

Unless you're utterly different from me, I'm skeptical that you're really worried about what will happen to "the mystery" if we figure out how consciousness works. I believe that you're worried about something, but I don't think you've expressed what it is.
posted by grumblebee at 12:01 PM on July 22, 2009 [1 favorite]


Thanks Grumblebee. This thread is really forcing me to examine my own beliefs. It's not that I don't want to know how it works, but it's more that I hope when we do find out it's something a little more subtle and interesting than what we can model on computers - perhaps something involving quantum properties, or some other not-yet-known physics. I'm not worried that the mystery will be gone if we find out how the mind works, but I would be sorely disappointed to find out that it's "just meat" and can be modeled on dumb old Turing machines.
It would just be more interesting and satisfying to me to learn that our consciousnesses are being projected down from the 11th dimension, or something like that!!!
posted by crazy_yeti at 12:51 PM on July 22, 2009


but I would be sorely disappointed to find out that it's "just meat" and can be modeled on dumb old Turing machines.

You can look at it that way. Or, if they did discover it was "just" meat (note how I took the quotes off meat), you could say, "Wow! Isn't meat amazing. Look what it's capable of!"

(Say "just meat" to a biologist and he'll slap your face.)

We KNOW the mind is extraordinary. Nothing can change that.

I've been a programmer for over a decade and I'm still floored by what can be done with code -- even code that I can understand. If we ever code something like a human brain, the code we write to do it will be one of the greatest human accomplishments (probably THE greatest human accomplishment) of all time. It will be the Mount Rushmore of code. There will be nothing "just" about it.

Be careful with the word "just." Sometimes I hear people say, "it's just a story." And I'm almost taken in by it. The word "just" makes things seem diminished. Saying "just a quantum effect" makes quantum physics seem unimportant.

When people say "just a story," I think about the stories that have affected me profoundly -- stories that I was told 40 years ago that are still with me; little poems that open up whole worlds of meaning.
posted by grumblebee at 1:17 PM on July 22, 2009


Grumblebee: I've been a programmer for 3 decades and am very aware of the limits of what code can do. Computers are stupid, stubborn beasts and I'll be amazed if they ever come *close* to the level of human consciousness! It's just a hunch, I can't back it up, but I have a very strong feeling that consciousness is non-algorithmic, and that there is something deeper going on. Of course I have no concrete idea what that "something deeper" might actually be!
posted by crazy_yeti at 3:58 PM on July 22, 2009


We KNOW the mind is extraordinary. Nothing can change that.

Well, just to play devil's advocate: Jesus's face appearing in the bark of a tree, or a statue crying, because God is manifesting his presence on earth is deeply meaningful and sacred. Jesus's face appearing in the bark of a tree because of purely random, natural variations in bark pigment, or a statue appearing to cry because a combination of material, shape, and the interior atmosphere of an old building leads water to condense on its face, are amusing but not meaningful. It's not a symbol, it's not a portent, it didn't have to happen. Intelligent forces at work in the world can produce symbols; random happenstance is merely that, random happenstance. It could have turned out another way; that it turned out the way it did is sheer luck. Out of billions of possible outcomes, one had to be; this one was; no meaning attaches.

The existence of human consciousness at all poses the same problem; if consciousness is simply an emergent phenomenon of a sufficiently complicated network, then the mere existence of my consciousness is not in itself significant. The rose isn't beautiful, it doesn't smell sweet. I may think it does, other humans may agree with me, but that's only because we all are a product of the same evolutionary forces.

And so with my very feelings of wonder, my very sense that something is extraordinary. I may perceive something to be, but I cannot say that "extraordinariness" is a characteristic of the thing, the way its atomic weight or chemical properties are.
posted by Diablevert at 6:28 PM on July 22, 2009


The rose isn't beautiful, it doesn't smell sweet. I may think it does, other humans may agree with me, but that's only because we all are a product of the same evolutionary forces.

Isn't this true whether consciousness is an emergent phenomenon or not?

And so with my very feelings of wonder, my very sense that something is extraordinary. I may perceive something to be, but I cannot say that "extraordinariness" is a characteristic of the thing, the way its atomic weight or chemical properties are

I guess I've always taken this as a given, regardless of consciousness. All subjective judgments such as "extraordinariness" are inherently not a real property of the object.
posted by Bort at 7:09 AM on July 23, 2009


if consciousness is simply an emergent phenomenon of a sufficiently complicated network, then the mere existence of my consciousness is not in itself significant.

Significant of what? Things can't just BE significant. They have to be significant of something.

If consciousness is emergent, it's significant of the fact that the laws of physics allow consciousness to emerge, and that we're living in a special sort of universe that's "constructed" to make those laws possible.

If consciousness is emergent, then, perhaps you can say that, having come about randomly, it DOESN'T signify the work of a god-like entity with special plans for us.

But this is wrong. If consciousness is emergent, that says nothing about intelligent design one way or another (though it does rule out traditional Christian theology, if you take the Bible literally).

As a programmer who has built emergent systems, I can promise you that they can be the product of an intelligent mind. God could have set the initial laws that he knew would eventually produce intelligence -- or not. I don't really see how consciousness being emergent says anything about whether it's deeply meaningful.
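(To make that point concrete with a toy example that isn't from this thread: here's a minimal sketch, in Python, of Conway's Game of Life, a standard illustration of an emergent system. The only thing the programmer authors is a local neighbor-counting rule; the "gliders" that walk across the grid emerge on their own, even though nothing in the code mentions walking. "Designed" and "emergent" are not opposites: the rule is designed, the behavior is not.)

    # A toy emergent system: Conway's Game of Life.
    # The designer writes only a local rule; gliders and oscillators
    # emerge without ever being explicitly programmed.
    from collections import Counter

    def step(live_cells):
        """Advance one generation. live_cells is a set of (x, y) pairs."""
        # Count live neighbors for every cell adjacent to a live cell.
        neighbor_counts = Counter(
            (x + dx, y + dy)
            for (x, y) in live_cells
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        # The entire "design": a cell is born with exactly 3 live
        # neighbors, and survives with 2 or 3.
        return {
            cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live_cells)
        }

    # A glider -- a pattern that travels across the grid, though
    # nothing in the rule above says anything about travel.
    cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(8):
        print(sorted(cells))
        cells = step(cells)
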

But here's another way to look at it:

if consciousness is simply an emergent phenomenon of a sufficiently complicated network, then the mere existence of my consciousness is not in itself significant.

Okay (ignoring what I wrote above and taking your word as truth), the EXISTENCE of consciousness is not meaningful. So what?

The existence of my CD of Beethoven's 9th Symphony is not terribly meaningful. It's an object that was mass-produced in a factory. Boring. But what I DO with that object, and how it affects me, is incredibly meaningful.

Why is the origin of consciousness so important (other than being of scientific interest)? Surely the profundity of consciousness is what we DO with it, which is just as amazing if it arose via a grand plan or via random accident.
posted by grumblebee at 9:46 AM on July 23, 2009


(Say "just meat" to a biologist and he'll slap your face.)
OK, I take that back - maybe meat has more sublime properties than I am willing to give it credit for. But I stand by my comment that consciousness cannot be "just" a Turing machine.
posted by crazy_yeti at 4:21 PM on July 23, 2009


Henry Markram at TED
posted by kliuless at 7:49 PM on July 27, 2009




This thread has been archived and is closed to new comments