

Grape Apes: The Origins of Morality
September 2, 2012 7:41 AM

Chimp Fights and Trolley Rides from Radiolab's morality episode: "try to answer tough moral quandaries. The questions--which force you to decide between homicidal scenarios--are the same ones being asked by Dr. Joshua Greene. He'll tell us about using modern brain scanning techniques to take snapshots of the brain as it struggles to resolve these moral conflicts. And he'll describe what he sees in these images: quite literally, a battle taking place in the brain. It's 'inner chimp' versus a calculator-wielding rationale."
posted by kliuless (36 comments total) 12 users marked this as a favorite

 
If I have to read one more paper about the goddamn trolley problem, I'm going to redirect a train through a conference of philosophers.
posted by belarius at 9:16 AM on September 2, 2012 [15 favorites]


Meh. The brain-scan wonks make a lot of news-bite friendly claims, but so much of it is barely backed up with any other research, or even reproducible in a consistent manner.

Seems like more reductionist nonsense to me.
posted by clvrmnky at 9:43 AM on September 2, 2012 [4 favorites]


I just wish there was a better version of the trolley problem because every time I hear about it I keep thinking "How do they know the man is fat enough to stop the trolley?" Otherwise that's six people it's just run over, and that would just be awkward.
posted by dragoon at 10:12 AM on September 2, 2012 [3 favorites]


I just wish there was a better version of the trolley problem because every time I hear about it I keep thinking "How do they know the man is fat enough to stop the trolley?" Otherwise that's six people it's just run over, and that would just be awkward.

I also dislike hypotheticals that depend on assumed absolute knowledge of the past, present, and future. That's crazy.

I remember a torture ethics hypothetical that was like, "You KNOW there's an actual nuclear warhead in NYC, you KNOW this person planted it, you KNOW this person knows how to deactivate it, and you KNOW there's a 30% chance he'll tell you where it is and how to deactivate it if you shoot him in the kneecaps. You have ten minutes before detonation, and no other way of locating the bomb. 704,321 people will die if it goes off. Is torture ethical in this instance?"

And I'm all like, that's not torture, man, that's a unicorn.
posted by jsturgill at 10:33 AM on September 2, 2012 [13 favorites]


I wish there was a version of Radiolab without its signature "gee whizz" discussion tone and the continuous "wacky" sound effects
posted by Bwithh at 10:58 AM on September 2, 2012 [3 favorites]


And I'm all like, that's not torture, man, that's a unicorn.

This is the central problem with philosophy as it has traditionally been practiced. Prior to the integration of probability into intellectual thought, the world was largely discussed in terms of absolutes using logical operators. False dichotomy is ubiquitous in Continental philosophy because an "it depends" result gets swept under the rug by people constantly revising their axioms instead of questioning whether logic is an appropriate methodology for asking the question.

The gap between the axiomatic approach of philosophy and the probabilistic approach of science represents the most important paradigm shift of the modern era. Insofar as they fail to make that leap, philosophers marginalize themselves and hamstring their ability to further expand the sphere of human knowledge.
posted by belarius at 11:07 AM on September 2, 2012 [6 favorites]


I just wish there was a better version of the trolley problem because every time I hear about it I keep thinking "How do they know the man is fat enough to stop the trolley?"

As near as I can tell, trolley cars have a minimum unoccupied weight of around 15000 pounds, and one moving at 15 mph would have a kinetic energy of about 150 kJ. So what we're talking about here is a man who is so fat that when he's struck by a normal-sized car doing between 30 and 35 mph, he stops the car. In such a case, we do not need to worry about the welfare of the man, because he is too fat to live more than a few seconds once he is conjured into existence by the philosophers. For trolley cars of more representative weights -- Toronto's weigh 81000 pounds -- the problem gets much worse very quickly.

Another problem for the shove-off-the-bridge variants is that if you can move a man who can stop a trolley, it follows that you are likely to be able to stand on the tracks and stop the trolley by shoving real hard, because you are the Hulk.
posted by ROU_Xenophobe at 11:27 AM on September 2, 2012 [15 favorites]
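For anyone who wants to check those back-of-envelope numbers, here is a quick sketch in Python. The masses and speeds are the rough figures assumed in the comment above (15,000 lb trolley at 15 mph, a roughly 3,300 lb car), not measured values.

```python
import math

# Rough check of the trolley kinetic-energy comparison above.
# All figures (15,000 lb trolley, 15 mph, ~3,300 lb car) are the
# comment's back-of-envelope assumptions, not measured values.

LB_TO_KG = 0.453592
MPH_TO_MS = 0.44704

def kinetic_energy(mass_lb, speed_mph):
    """KE = 1/2 m v^2, converting from pounds and mph to SI units (joules)."""
    m = mass_lb * LB_TO_KG
    v = speed_mph * MPH_TO_MS
    return 0.5 * m * v ** 2

trolley_ke = kinetic_energy(15_000, 15)  # ~153 kJ, matching the ~150 kJ above

# Speed at which a ~3,300 lb car carries the same kinetic energy:
car_mass_kg = 3_300 * LB_TO_KG
car_speed_mph = math.sqrt(2 * trolley_ke / car_mass_kg) / MPH_TO_MS  # ~32 mph
```

The car comes out at about 32 mph, squarely inside the 30-35 mph range claimed in the comment.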


I was thinking of the fMRI and the dead salmon, and then Douglas Adams' final book, The Salmon of Doubt, came to mind.
posted by humanfont at 11:53 AM on September 2, 2012


The "trolley" in the original problem could refer to a railway handcar (an ordinary maintenance one rather than the likes of an armoured military draisine... my new word of the day!). A fat man falling on the tracks would be more capable of stopping such a vehicle... or at least act as an effective warning signal for the driver to brake.


(I note that the Radiolab website designer seems to think that a trolley is a heavy steam locomotive train, judging by the photo)
posted by Bwithh at 11:54 AM on September 2, 2012 [1 favorite]


oh ok, according to Wikipedia, the original formulation of the problem used "tram" rather than "trolley", suggesting that the author had a streetcar in mind rather than a railway handcar - so ROU_Xenophobe is very much on point.

Still... draisine
posted by Bwithh at 11:57 AM on September 2, 2012


The gap between the axiomatic approach of philosophy and the probabilistic approach of science represents the most important paradigm shift of the modern era. Insofar as they fail to make that leap, philosophers marginalize themselves and hamstring their ability to further expand the sphere of human knowledge.

Absolutely!

Er, I mean...probably.

Oh never mind.
posted by O Blitiri at 12:44 PM on September 2, 2012


I think the Moscow Theatre Hostage Crisis makes a better real-world trolley problem. Do you inject the anesthetic into the ventilation system, knowing that it will probably kill some hostages, given your expectations of how likely it is that the terrorists will kill the hostages themselves? The "right" answer is unknowable (and hotly contested), but it still challenges people's conceptions of moral choices better than classical problems.
posted by Popular Ethics at 1:15 PM on September 2, 2012


I'd pay good money to watch the owners of the two I-Can't-Believe-It's-Not-Butter-commercial-esque voices on Radiolab fight in a deathmatch with a couple of adult chimps. I'm pretty certain there's some sort of moral quandary there.
posted by item at 1:49 PM on September 2, 2012 [1 favorite]


The Google-Trolley problem

Suppose that you are a programmer at Google and you are tasked with writing code for the Google-trolley. What code do you write? Should the trolley divert itself to the side track? Should the trolley run itself into a fat man to save five? If the Google-trolley does run itself into the fat man to save five should Sergey Brin be charged? Do your intuitions about the trolley problem change when we switch from the near view to the far (programming) view?
posted by Bwithh at 2:10 PM on September 2, 2012 [1 favorite]
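For what it's worth, the naive utilitarian answer to "what code do you write?" is nearly a one-liner. The sketch below is entirely hypothetical, of course; no actual self-driving trolley API is implied.

```python
# A facetious sketch of the Google-trolley's dilemma routine.
# Entirely hypothetical; no real API or product is implied.

def choose_track(straight_casualties, side_casualties):
    """Naive utilitarian controller: divert iff it lowers the body count."""
    return "divert" if side_casualties < straight_casualties else "stay"
```

The hard part, naturally, is everything the function signature assumes away: how the trolley counts casualties, whether a fat man counts differently from five workmen, and who gets charged when it's wrong.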


The Google trolley problem answer is that you slap Beta on it and add some language to the TOS protecting you.
posted by humanfont at 2:28 PM on September 2, 2012 [3 favorites]


What if the AI devil gets control of all the nuclear weapons in the world, and will kill everyone in Holland unless you murder and cannibalize a child? And you know it's serious, because it's an AI and it presents to you a proof that you can verify (using another AI that you trust, of course) that proves that it will really do it. What if that child is an orphan and has organs that are compatible with five people on the organ waiting list? BTW, you checked with the AI devil and he said it was ok to donate organs as long as you ate most of the child. However, a homeless man on a street told you that you're actually in a simulation staged by ALIENS to tell if humanity is worth saving (the aliens are strict deontologists). The man didn't present any hard proof, but on the other hand you didn't tell anyone about the AI devil, so that's pretty suspicious, right? Then while you're driving home and mulling this over, the brakes suddenly give out on your car, and you have to choose between running over one fat man (who you know is a famous cancer researcher) vs. five school teachers. WHAT DO YOU DO?
posted by Pyry at 2:31 PM on September 2, 2012 [5 favorites]


Oh, I forgot to mention that if you try to get out of the AI devil's choice, it will both murder the orphan and kill everyone in Holland, but if you die by an accident both will live. However, since the AI is so vastly intelligent, it can simulate your whole brain, and thus know whether you intended to commit suicide. If you hit the fat cancer researcher, you have a 25% chance of dying, and you know this, and the AI devil knows you know this. If you hit the teachers, you will almost certainly survive, and both you and the AI devil also know this.
posted by Pyry at 2:41 PM on September 2, 2012 [1 favorite]


WHAT DO YOU DO?

Ramp that shit Duke boys style.
posted by jason_steakums at 2:43 PM on September 2, 2012


WHAT DO YOU DO?

Say fuck it and rinse with shotgun mouthwash.
posted by localroger at 3:04 PM on September 2, 2012


Dang! I always have difficulties with multiple choice quizzes where there is a supposed right answer. They're all partly right and all partly wrong. I'd just go with "C" or "all of the above" or "none of the above" or the longest answer.

As for the chimp/human scan dichotomy: doesn't that sort of conflict happen with most choices we face in everyday life? And why is it either the chimp or the human part of our brain when it's more of an infinite continuum of possibilities? I don't understand the basic assumption that what's showing up on the scans is as evidentiary as the researcher indicates.
posted by mightshould at 3:11 PM on September 2, 2012 [1 favorite]


I remember a torture ethics hypothetical that was like, "You KNOW there's an actual nuclear warhead in NYC, you KNOW this person planted it, you KNOW..."

Yes, definitely, it's not realistic. But that's why it works as an ethics question. Having absolutely known parameters takes out the comfortable wiggle room of being able to absolve yourself of responsibility because you thought there was a chance that the negative consequences of your actions might not come to pass.

Fortunately, the wiggle room of uncertainty does exist in real life. We'd probably have a hard time living with ourselves otherwise.
posted by the jam at 5:22 PM on September 2, 2012 [1 favorite]


The "inner chimp" links to an experiment on capuchin monkeys; while capuchins are an incredibly smart asshole of a species, their ancestor has been separated from our ancestor (and chimps' ancestor) for somewhere on the order of 25 million years. There is certainly an interesting set of evolutionary implications to capuchin rationality and sense of fairness (or lack thereof), but arguing that capuchins reflect our inner chimp fundamentally misunderstands and misrepresents primate evolution and the conclusions we can draw from experiments with other primates. That being said, if folks are interested in capuchin economic decisions, check out the work of Laurie Santos, including her TED Talk.

I've met Laurie a few times and work with some of her collaborators.
posted by ChuraChura at 9:22 PM on September 2, 2012


@pyry

i read that manga
posted by This, of course, alludes to you at 4:48 AM on September 3, 2012


Yes, definitely, it's not realistic. But that's why it works as an ethics question.

And this is why people who do not specialize in the study of ethics think those who do are not doing anything Real World Useful.

It is possible to frame these questions in such a way that reality is not twisted into a pretzel. If you cannot think of a way to frame your dilemma that doesn't twist reality into a pretzel, it might suggest that your dilemma isn't relevant to anything, you know, real.
posted by localroger at 10:08 AM on September 3, 2012


So, a while back there was this movie where this couple receives a box and the guarantee that pushing a button in the box will mean they will receive 1 million dollars while at the same time causing the death of an innocent person. However, the person who will die is not anyone they know personally. I know, I'm groaning just reading that, and no, I did not pay to see the movie in the theater.

I did happen to stumble by the (contrived, strangely misogynistic and deliberately emotionally manipulative) ending while channel-surfing, though.

So I put the question to my own kids, who were young teenagers at the time: Would you push the button? They both said, "For a million dollars?! Yes!"

And I was appalled.

Not because they would willingly kill a stranger, though of course I don't approve of that as a general rule. I knew they were teens, and their world consisted of their friends and families, and they were young enough to still be working through the we're-all-in-this-together global empathy equation.

No, I was appalled because I taught them to be a hell of a lot more cynical than that. They're supposed to question this stuff, not just accept everything at face value!

You get an offer like that, you need to see the money upfront, and you need to ask whoever is behind that box some serious questions. Like:

"Why don't you just push the button yourself?"

"Are you a police officer?* And doesn't this fall under the legal definition of entrapment? Are you wearing a wire? Would you consent to a strip search to verify that you are not? By the way, this man to my left is my attorney."

"That million dollars, is it a firm offer or is it negotiable? Because I'm thinking the going price for me personally pushing that button is closer to an even 5 million..."

And, most importantly, "How in the hell do you have the power and/or technology to make a box that will kill some random person at the push of a button and yet still have the need to pay some person to push the button? Couldn't you just have added a switch or something?"

The scenario, as given, is WAY too fishy. The logic just doesn't add up. That box is just bad news, and you're better off just walking away.

Besides, (and I'm so proud of my youngest son, who, after actually thinking the dilemma over seriously, reached this point in the logical process) how do you know there's only one box? Or that the next one doesn't have your name on the button?

But then my older, Machiavellian son pointed out that, if someone else did have such a matching box and could push a button to kill you, you'd have no control over them anyway, and since you can't stop them from pushing the button, you might as well take the money and party like there's no tomorrow.

And that is why I'm now considering making our younger son the executor of our estate.

*Cause, you know, they have to tell you the truth if you ask them! ;)
posted by misha at 11:40 AM on September 3, 2012 [1 favorite]


So, a while back there was this movie where this couple receives a box and the guarantee that pushing a button in the box will mean they will receive 1 million dollars while at the same time causing the death of an innocent person.

That movie was a complete waste of a really excellent Twilight Zone episode.

The TZ episode ended with the box being picked up, and the dialog (paraphrasing):

-- Why do you need it back?
-- Oh, we're going to give it to someone else. Don't worry, they won't be anyone you know.

This made it quite obvious that the person to die would be the person who pushed the button, or to whom it was given, and there wasn't necessarily any weird magic technology involved; if you push the button, you're the person to die when the next person pushes it for their million, maybe by drive by shooting or whatever; if the kind of psychopath who is rich and powerful enough to play this game for a million dollars a pop decides it's time for you to die, do you really have any realistic chance to defend yourself? The mechanism need be no more magical than the one behind Stephen King's later story Quitters, Inc.

And of course, the horror is that even if the next person holds out, they'll just keep giving the box to other people until someone else is as weak as you were.

It really worked a lot better without explanation, as it dawned on you while the credits were rolling.
posted by localroger at 12:01 PM on September 3, 2012


That movie was a complete waste of a really excellent Twilight Zone episode.

Button, Button
posted by homunculus at 3:01 PM on September 3, 2012


Button, Button

That was the New TZ episode, which still hammed it up too much at the end but was much better than the godawful movie ending. The real button episode is the original B&W one introduced by Rod Serling.
posted by localroger at 3:45 PM on September 3, 2012


On review, it seems my memory has failed me and I'm wrong; Richard Matheson wrote the original story in 1970 and the New TZ version was the one. I might have been remembering Matheson's story. He was unthrilled with the TZ ending. link
posted by localroger at 3:53 PM on September 3, 2012


Yes, definitely, it's not realistic. But that's why it works as an ethics question. Having absolutely known parameters takes out the comfortable wiggle room of being able to absolve yourself of responsibility because you thought there was a chance that the negative consequences of your actions might not come to pass.

I disagree. Defining absolutely known parameters in this manner is a way for philosophy to divorce itself from the hard questions of uncertainty that plague real life actors. To contemplate right and wrong along such rigid, well-defined, and terribly unrealistic lines is to poison whatever insights are later developed. The foundation is so divorced from reality as to render the entire line of reasoning close to useless.
posted by jsturgill at 10:15 PM on September 3, 2012


On review, it seems my memory has failed me and I'm wrong

Same here. When I went looking for it, I could have sworn that it was from the original black-and-white series.
posted by homunculus at 10:18 PM on September 3, 2012


To contemplate right and wrong along such rigid, well-defined, and terribly unrealistic lines is to poison whatever insights are later developed.

It's exactly the opposite. Demanding that thought experiments be realistic is like demanding that scientists not isolate variables in their experiments because the real world is messy. You could introduce uncertainty or other confounding variables into a thought experiment, but then you wouldn't be testing the same principle. You would be poisoning your experiment by making it unnecessarily hard to tease out why - in virtue of what - you reached the conclusions you did.

I'll admit that precisely because most people, when asked to consider a thought experiment, tend to introduce confounding variables rather than stick to the assumed parameters, brain scans of people thinking through these experiments may not be testing what they're meant to test.
posted by Mila at 3:41 PM on September 4, 2012


It's exactly the opposite. Demanding that thought experiments be realistic is like demanding that scientists not isolate variables in their experiments because the real world is messy. You could introduce uncertainty or other confounding variables into a thought experiment, but then you wouldn't be testing the same principle. You would be poisoning your experiment by making it unnecessarily hard to tease out why - in virtue of what - you reached the conclusions you did.

It's not hard to be consistent and realistic and also come up with an ethical dilemma with precise contours/boundaries. The quick sketch of the torture hypothetical I provided is a complete and utter failure of imagination, as has been every version of the trolley problem I have encountered.

How can I trust someone's conclusions when they haven't even put any thought into the loaded scenario they have constructed so they could show off their pet concept(s)? The rigor of philosophical thought should begin with the hypothetical situation (if one is being used), not at some arbitrary point afterward.
posted by jsturgill at 4:18 PM on September 4, 2012 [1 favorite]


I'll admit that precisely because most people, when asked to consider a thought experiment, tend to introduce confounding variables rather than stick to the assumed parameters, brain scans of people thinking through these experiments may not be testing what they're meant to test.

Duh. That's because people aren't computers and they don't stick to rigid parameters even if you tell them to.

My wife is a great example of this; she simply cannot wrap her head around the idea of an ethical paradox at all because she cannot exclude the possibility of metasolutions. Simply can't. Given what was, when I took Philosophy 1000, called the "Pedro paradox" (you're captured by Pedro, and he insists that you shoot a man or he'll shoot a whole row of 10) her immediate, unhesitating answer was "shoot Pedro." If you say that just gets you and the other ten guys and your victim all shot, she's still "shoot Pedro." If you take Pedro out of the room she says "I shoot myself." You simply cannot get her to stay within the framework.

Now if you wanted to be rigorous, you'd tell her she's bound in a chair so that the only meaningful movement she can make is to press a button, and she is ordered to press a button and observes that it makes a really impressive electrical display. Then her victim is strapped in to where all the arcing occurs and she's told "you will press the button again, or I start shooting these men." When I put it that way she said she would not press the button.

It really isn't that hard to do this right.
posted by localroger at 4:47 PM on September 4, 2012


Defining absolutely known parameters in this manner is a way for philosophy to divorce itself from the hard questions of uncertainty that plague real life actors.

So leave it abstract.

If you perform no action, 5 people will die.
If you perform the only action available to you, you will save those 5, but cause another 1 to die.

It's particularly annoying since real-world situations that come pretty close to trolley problems aren't hard to come up with. Issuing recommendations about cancer screening, for example, where you have five people who will be hit by the cancer train, but if you screen everyone you'll accidentally kill one healthy person who gets a horrible infection from a biopsy like Peter Watts did.

But no. It has to be trolleys, where (reasoning from crash attenuators) you'd need a 13,000 pound man to stop the trolley.
posted by ROU_Xenophobe at 5:48 PM on September 4, 2012


jsturgill, the torture hypothetical you mentioned lacks elegance, but how is it not rigorous? Given that it's not intended to be used as a scene in a well-written novel, what's wrong with a thought experiment that covers its bases with "Dude, you KNOW who planted the bomb" and "You KNOW they know how to deactivate it" and so on? Do you think that anything would be gained by launching into an explanation of how we know these things?

localroger, I think you're saying that a rigorously designed thought experiment is one that won't confuse the average, non-philosophically-inclined person on the street. If a thought experiment is going to be used to test that person's intuitions, then yes, it had better be accessible to them.

But understanding the trolley problem or the shooter problem the way they're intended, without asking irrelevant questions, pointing out how these scenarios are different from real life, or proposing solutions that go outside the framework of the problem, is a critical thinking skill that I would want in someone who's tasked with making policy decisions that affect my life. The fact that many people don't have this skill isn't a failure of the thought experiment or of philosophy as a discipline, nor does it mean that people who do have this skill are like computers.
posted by Mila at 9:03 PM on September 4, 2012



