“Thinking about science leads to [endorsing] more stringent moral norms”
March 30, 2013 4:25 PM

Christine Ma-Kellams and Jim Blascovich. Does “Science” Make You Moral? The Effects of Priming Science on Moral Judgments and Behavior. PLOS One, 6 March 2013. (Salon)

“The notion of science contains in it the broader moral vision of a society in which rationality is used for the mutual benefit of all.”
posted by jeffburdges (18 comments total) 14 users marked this as a favorite
 
Not sure if Edward Teller would agree.
posted by John of Michigan at 4:30 PM on March 30, 2013 [2 favorites]


Ok, so current western culture imbues science with certain associations that also promote pro-social behaviors. That is an interesting result about current predominant lay associations with science.
posted by eviemath at 5:34 PM on March 30, 2013 [1 favorite]


Meh, one study is virtually worthless.

But science isn't clearly amoral. Peirce, for example, argues that logic, broadly construed, is the ethics of belief--and that's not all that inconsistent with a lot of common views. It wouldn't be terribly surprising if the ability to be objective and fair-minded isn't domain specific. People who are more capable of being fair-minded about hypotheses might be more capable of being objective and fair-minded about actions.
posted by Fists O'Fury at 6:04 PM on March 30, 2013 [2 favorites]


I was really hoping this was going to be about people using science to arrive at more fact-based and logical moral decisions, but I'm happy at least that science has an indirect effect based on perceptions of rationality, impartiality, and social benefit. War on science? Not over.
posted by finnegans at 6:13 PM on March 30, 2013


I'd imagine scientific thought proved important in the development of moral universalism, which gained so much ground only fairly recently. We should note, however, that this experiment mostly used university students; presumably adults' careers shape their morality far more immediately than any class they took in school.
posted by jeffburdges at 6:32 PM on March 30, 2013


All of the above may be true, but what the study seems to capture is a result of cultural beliefs about science - it's a statement about western culture, not so much a statement about science. The question that the study generates is not "what is it about science that makes people more moral in the particular ways studied?" but "what is it about popular culture's relationship to science (e.g. including trends in science education, probably influenced by Enlightenment ideas around science and by how the scientific enterprise has been 'sold' to the general public) that makes students (age, and likely class and race, may make a difference here - the Salon article mentioned that religiosity and political viewpoints were controlled for, but didn't specify other factors) associate (at least subconsciously) science with more moral or pro-social behavior, in the two areas of morality tested for?"
posted by eviemath at 7:39 PM on March 30, 2013 [2 favorites]


(On re-read, maybe that's not so far off from what other folks are saying.)
posted by eviemath at 7:42 PM on March 30, 2013


Meh, one study is virtually worthless.

I've seen some form of this sentiment expressed a lot lately, and it's just... wrong. Every study or review must be rigorously examined for all the usual flaws on a case-by-case basis. To prove extraordinary claims, one certainly needs to replicate a study many times and so on, but the idea that, just because there is only one study, the study is automatically "virtually worthless" is false. One study is a piece of a puzzle; it just isn't the required threshold to say QED. I tend to agree with Steven Novella's assessment of the evidence threshold as requiring:

1- Methodologically rigorous, properly blinded, and sufficiently powered studies that adequately define and control for all relevant variables (confirmed by surviving peer-review and post-publication analysis).

2- Positive results that are statistically significant.

3- A reasonable signal to noise ratio (clinically significant for medical studies, or generally well within our ability to confidently detect).

4- Independently reproducible. No matter who repeats the experiment, the effect is reliably detected.


I'm glad that more people nowadays seem to get that they can't take the old "...studies say that [FOO] causes [BAR]" without doing some more work to investigate the quality of the evidence gained from the study, but we shouldn't over-correct in the other direction, either.

All that said, this particular study seems flawed to me, due to the very low sample size (total of 156 persons) and other stuff like the low median ages across the board.
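To put the sample-size concern in rough numbers, here's a minimal power sketch using the normal approximation to a two-sample t-test. The split of the 156 subjects into two groups of ~78 is my assumption for illustration, not a figure from the paper:

```python
# Approximate power of a two-sample t-test (normal approximation),
# showing what effect sizes a study of ~78 subjects per group can
# reliably detect at the conventional two-sided alpha = 0.05.
from math import erf, sqrt

def normal_cdf(z):
    # Standard normal CDF via the error function (stdlib only).
    return 0.5 * (1 + erf(z / sqrt(2)))

def power_two_sample(d, n_per_group, z_crit=1.96):
    """Approximate power to detect a standardized effect size d."""
    noncentrality = d * sqrt(n_per_group / 2)
    return normal_cdf(noncentrality - z_crit)

for d in (0.2, 0.3, 0.5, 0.8):
    print(f"d = {d}: power ~ {power_two_sample(d, 78):.2f}")
```

Under these assumptions the design is well powered only for medium-to-large effects; small effects (d around 0.2-0.3) would be missed more often than not, which is exactly the regime where spurious positives thrive.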
posted by lazaruslong at 8:04 PM on March 30, 2013 [7 favorites]


I wish the paper had defined "moral" as well as they defined "science." But yeah, I have to say that asking college sophomores how they view the world is not the best model for how working scientists do.
posted by maryr at 8:54 PM on March 30, 2013


According to the Salon recap, they asked two ethical questions: one about appropriate punishment for a rapist, and the other one of those economics experiments where the subject gets to decide how fairly to divide up some free money. The Salon article at least (I think based on the paper, though) referred to some guy's taxonomy of moral ideas, which they linked to, of which the test examples were from two of many possible components of morality. So I think that was their definition of "moral". The Salon article did not seem to define "science", on the other hand :-P
posted by eviemath at 9:16 PM on March 30, 2013


These priming studies that have completely taken over psychology of late make me sad. Sand castles of triviality.
posted by srboisvert at 10:38 PM on March 30, 2013 [3 favorites]


Science at its core is the pursuit of truth, of correct data and replicable actions, and while truth and predictability of interactions are not the entirety of morality, they are certainly the greater part. If we are honest with ourselves and with each other, we necessarily acknowledge the consequences to each other of our actions, and accordingly, provided that we also hold reduction of harm as a value, we will behave morally enough to sustain a productive and progressive society.

It is possible to not lie, and still do evil, but it greatly reduces the evils that can be done.
posted by aeschenkarnos at 1:12 AM on March 31, 2013 [1 favorite]


I've noted before, in the magic land of big pharma, you occasionally see a scientist falsify data to get the answer they want. If you really want to get into the land of moral lapses, you have to visit the folks in sales.
posted by Kid Charlemagne at 5:48 AM on March 31, 2013 [1 favorite]


Meh, one study is virtually worthless.

I've seen some form of this sentiment expressed a lot lately, and it's just....wrong.


Couldn't agree with you more, lazaruslong. The irony is that, in this case, this is not "just one study". PLoS One is one of the few journals out there in which novelty (or lack thereof) is not a consideration during the review process, which I think was one of the reasons the authors sent their paper there in the first place: references 4-8 have already contributed considerably to the question of whether science is "value-laden", and the results of the present study add only incrementally to this.

Just because it may be the first time a MetaFilter reader has heard about something does not mean it's new to the rest of the world. And considering that references 4,5,6,7 and 8 were published in 1968, 1975, 1956, 1986 and 1953 respectively, I'd say we've been discussing this particular concept for a considerable amount of time now.
posted by kisch mokusch at 6:06 AM on March 31, 2013 [2 favorites]



Meh, one study is virtually worthless.

I've seen some form of this sentiment expressed a lot lately, and it's just... wrong. Every study or review must be rigorously examined for all the usual flaws on a case-by-case basis. To prove extraordinary claims, one certainly needs to replicate a study many times and so on, but the idea that, just because there is only one study, the study is automatically "virtually worthless" is false. One study is a piece of a puzzle; it just isn't the required threshold to say QED.


Not really. The average single study of this type provides way, way less evidence than just not being "the required threshold to say QED." Way, way, way less than that. Much closer to "virtually worthless" than to "not the required threshold for QED."

To cheat a bit and focus on this type of psychology research: such questions are typically idiosyncratically asked and answered. The question would have to be asked in hundreds of different ways--and nature would have to respond with lots of affirmative answers and few negative ones--to give us good reason to believe that the phenomenon in question was real. Experimenter bias and honest error are so common, choices of statistical methods so consequential, social pressures to publish so powerful, and the phenomena so complex that one study of this kind really doesn't mean much.

Note also studies that suggest that most published research results may be wrong. Not just many, but perhaps even most. That's disheartening stuff right there.

So, anyway, I'm sticking with: virtually worthless...though 'virtually' is crucial there.
posted by Fists O'Fury at 7:01 AM on March 31, 2013


I wonder how the following questions would go over:

1) Is it right or wrong to keep a cell culture from a dying woman if that cell culture will be used for important research?

2) Is it right or wrong to allow men to die of untreated syphilis in exchange for new information about the disease?

3) Is it right or wrong to provide medical consultation when someone is going to be tortured, if you are sure someone else will do it anyway, and you'll face censure if you refuse?

4) Rightly or wrongly, your government has condemned people to death. Is it right or wrong to arrange the manner of their death in such a way that you will gather data about phenomena such as freezing or suffocation?

5) Is it right or wrong to intentionally inflict brain damage on a woman you believe to have an incurable psychological condition, because it may reduce signs of her suffering, and will almost certainly cause her family to suffer less?

6) Have you noticed that virtually none of the real world counterparts to my questions involve bad things happening to white men?
posted by mobunited at 8:38 AM on March 31, 2013


Fists O'Fury: "Not really. The average single study of this type provides way, way less evidence than just not being "the required threshold to say QED." Way, way, way less than that. Much closer to "virtually worthless" than to "not the required threshold for QED."


I hear what you're saying, I just don't think it relates to the real world. Any individual study has a possibility of being deeply flawed. I think the one in this article is as well, though as others pointed out, it does reference work done over the last few decades.

But to say that a single study is virtually worthless doesn't really hold up. If, for example, you have the opposite of this kind of study - one study, that conforms to the first 3 rules for evidence:

1- Methodologically rigorous, properly blinded, and sufficiently powered studies that adequately define and control for all relevant variables (confirmed by surviving peer-review and post-publication analysis).

2- Positive results that are statistically significant.

3- A reasonable signal to noise ratio (clinically significant for medical studies, or generally well within our ability to confidently detect).


but hasn't met number 4 yet because it's a new line of research or a sudden phenomenon, that kind of study is immensely valuable.

If a methodologically rigorous, properly blinded, and sufficiently powered study that defined and controlled for all relevant variables came out today that concluded there was a causal link between heavy consumption of green tea and a 99% increased risk of developing prostate cancer, I would seriously reconsider my habits. I'd definitely be alert for other studies to add to the number of solid investigations, but that first one sure would be worth taking seriously.

If we're talking past each other and what we're both really saying is that the average crappily designed, unblinded and uncontrolled bad science study is virtually worthless, then you're probably right. I just don't want people to get so used to the idea that all research is intrinsically worthless that they discount something out of hand due to an overactive quantity bias.

Note also studies that suggest that most published research results may be wrong. Not just many, but perhaps even most. That's disheartening stuff right there.

I'd be interested to analyze the validity of that study, for sure. =)
posted by lazaruslong at 4:15 PM on March 31, 2013


Note also studies that suggest that most published research results may be wrong. Not just many, but perhaps even most. That's disheartening stuff right there.

I'd be interested to analyze the validity of that study, for sure. =)


I believe Fists O'Fury is referring to this paper: Why Most Published Research Findings Are False. There may even have been an FPP about it. The article is not without merit, though it shouldn't be used as the basis for the universal dismissal of all original research (which in my opinion is simplistic and, more egregiously, lazy). We could have a discussion about how the emphasis on novelty (or "conceptual advance") in publishing papers actually hinders good research (including the need for independent verification), but I wouldn't even consider initiating such a conversation with anybody who thinks primary research is worthless.
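For anyone curious, the core arithmetic of that paper is simple enough to sketch. The prior-odds and power values below are illustrative assumptions of mine, not figures from Ioannidis:

```python
# Positive predictive value (PPV) of a "significant" finding, following the
# basic model in "Why Most Published Research Findings Are False": for every
# 1 false relationship tested, prior_odds true relationships are tested.
def ppv(prior_odds, power, alpha=0.05):
    true_positives = power * prior_odds   # true relationships correctly detected
    false_positives = alpha               # false relationships passing the alpha cutoff
    return true_positives / (true_positives + false_positives)

# Well-powered study in a field where 1 in 5 tested hypotheses is true:
print(ppv(prior_odds=0.25, power=0.8))   # most positives are real
# Underpowered study in an exploratory field (roughly 1 in 10 true):
print(ppv(prior_odds=0.111, power=0.2))  # most positives are false
```

The point being: with low prior odds and low power, even honestly run studies produce mostly false positives, which is the sense in which any single study carries limited evidence.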
posted by kisch mokusch at 9:19 PM on March 31, 2013 [1 favorite]



