The (possible) power of unconscious suggestion
February 4, 2013 10:30 AM

The amazing influence of unconscious cues is among the most fascinating discoveries of our time—that is, if it's true. The studies that raise eyebrows are mostly in an area known as behavioral or goal priming, research that demonstrates how subliminal prompts can make you do all manner of crazy things. A warm mug makes you friendlier. The American flag makes you vote Republican. Fast-food logos make you impatient. A small group of skeptical psychologists—let's call them the Replicators—have been trying to reproduce some of the most popular priming effects in their own labs. What have they found? Mostly that they can't get those results. The studies don't check out. Something is wrong.
posted by shivohum (36 comments total) 54 users marked this as a favorite
 
Hmmm, this is a very interesting article send all your money to Yoink immediately or you will be eaten by rabid hamsters and I'm sure that the ensuing discussion will be fruitful and thought-provoking.
posted by yoink at 10:35 AM on February 4, 2013 [4 favorites]


Thank God. Some serious - and public - skepticism needs to be placed on the entirety of behavioral economics. Unfortunately, it's a business fad, so it might be a while yet.
posted by downing street memo at 10:39 AM on February 4, 2013 [8 favorites]


The scientific method at work -- I love it!
posted by Triplanetary at 10:40 AM on February 4, 2013 [2 favorites]


In other news, "motivational" posters in the workplace are found to increase productivity, but only when the workforce is already motivated.
posted by Curious Artificer at 10:44 AM on February 4, 2013 [1 favorite]


Plus there's the so-called file-drawer problem, that is, the tendency for researchers to publish their singular successes and ignore their multiple failures....

Picking the cases where stimulus A gives you response a and ignoring those cases where this didn't happen is NOT the file drawer problem. The file drawer problem is stimulus A = no response; stimulus B = no response; stimulus C = no response; stimulus D = response d, so you publish the D study and studies A, B, and C are forgotten in a file drawer.

Not publishing multiple failures (once you have arrived at a testing protocol) is cherry picking your data and is only a hair away from data fabrication.
posted by Kid Charlemagne at 10:44 AM on February 4, 2013 [12 favorites]
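A minimal simulation sketch (my illustration, not part of the thread) of the selection mechanism Kid Charlemagne describes, assuming only standard numpy and scipy: run many two-group "priming" studies where the true effect is zero, publish only the ones that reach p < 0.05, and the published effect sizes come out inflated even though nothing real is there.

```python
# Sketch of the file-drawer / cherry-picking mechanism: many studies of a
# nonexistent effect, only the "significant" ones get published, and the
# published effect sizes are inflated relative to the true effect of zero.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies, n_per_group = 1000, 20
published_effects = []

for _ in range(n_studies):
    control = rng.normal(0, 1, n_per_group)  # unprimed group
    primed = rng.normal(0, 1, n_per_group)   # "primed" group; true effect is zero
    result = stats.ttest_ind(primed, control)
    if result.pvalue < 0.05:                 # only these leave the file drawer
        published_effects.append(primed.mean() - control.mean())

print(f"Published: {len(published_effects)} of {n_studies} studies")
print(f"Mean |effect| among published studies: "
      f"{np.mean(np.abs(published_effects)):.2f}  (true effect: 0.00)")
```

Under these toy numbers roughly one study in twenty clears the bar, and the ones that do report an absolute effect of well over half a standard deviation, purely by selection.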


In a perfect world studies that concluded

"the mind is complex, we are fairly uncertain about how it works on an abstract level but this current study illuminates another small corner of this once pitch black cave. So as to the question considered herein: Maybe."

would still get funding and notoriety. Unfortunately, "Why Gender and Advertising Impact You and Yours in Ways That I'm Not Really Sure About: Genitals, Neon Signs, and the Giant Shrug" probably wouldn't make for a real snappy non-fiction bestseller.
posted by sendai sleep master at 10:45 AM on February 4, 2013 [2 favorites]


Anyone else feel that the accompanying picture of the researcher was designed to have unconscious cues?
posted by benito.strauss at 10:50 AM on February 4, 2013 [4 favorites]


Wait - the original experiments weren't double blind? That's a surprising lapse in (what I thought was) standard research protocol.

Bargh claims they were double blind, and that is why the criticism was unfair.

Not publishing multiple failures (once you have arrived at a testing protocol) is cherry picking your data and is only a hair away from data fabrication.

I don't think the point is that they ran the same test over and over until they got a hit. I think the idea is you test priming with a picture of a monkey and whether people talk more on leaving the room and you get nothing, so you don't publish it. Then you test priming with recordings of classical music and whether people are more likely to clear away their trash on leaving the room and you get nothing, so you don't publish it. Then you test priming with pictures of Stalin and whether people condemn the research assistant to the Gulag as they leave the room, and you get nothing. Then you do the study with senior-words and walking times and you get a hit, so you publish.
posted by yoink at 11:00 AM on February 4, 2013 [1 favorite]


Not publishing multiple failures (once you have arrived at a testing protocol) is cherry picking your data and is only a hair away from data fabrication.

The Cherry Pickers Union supports this research. Cherry Picking creates jobs right here in America. Why do you hate America?
posted by It's Raining Florence Henderson at 11:02 AM on February 4, 2013 [3 favorites]


Why don't you pass the time by playing a little solitaire?
posted by The 10th Regiment of Foot at 11:06 AM on February 4, 2013 [2 favorites]


I read an article recently (which I can't find, unfortunately) regarding the recent meta-study on fish oil [1,2]; it was about the life-cycle of a new idea. You get an idea and find it significant in a small study. A bunch of people try to replicate it but can't, and the results don't get published (or even completely written up and submitted). But then someone does replicate it and the replication gets published. Now anything that goes against the effect can get published, because the new effect is part of What We Know. Eventually the studies get more numerous and larger and you find out a more nuanced version of the surprising study that was published originally.
posted by shothotbot at 11:07 AM on February 4, 2013 [2 favorites]


Relevant xkcd.
posted by Jeanne at 11:08 AM on February 4, 2013 [5 favorites]


Right now its niftiest findings are routinely simplified and repackaged for a mass audience; if you wish to publish a best seller sans bloodsucking or light bondage, you would be well advised to match a few dozen psychological papers with relatable anecdotes and a grabby, one-word title.

or

Burn: How a Single Sentence Encapsulated Everything Wrong with the Publishing World
posted by Tsuga at 11:10 AM on February 4, 2013 [21 favorites]


There is a very well established back-channel whisper network amongst psychologists about whose research replicates and whose does not. For the most part it's only the unconnected researchers and their students who get burned by this stuff (and of course science journalists, but I don't think they care that much; they got their clicks and bucks after all). A big clue for when you should have extra doubt is when researchers actively pursue publicity.

That said, psychology research can be extremely fickle. Sometimes effects disappear when you merely change schools (which really doesn't bode well for the supposed generalizability of the college student samples) or test later in the semester (when the less organized or motivated students are racing to fulfill their research requirements).

My view from a distance (as a psych grad school dropout) is that the field has several simultaneous problems.

First is the obvious incentivised fraudsters: tenure, better pay, book money and adoration drive some people to be outright frauds.

Second, psychologists are jack-of-all-trades researchers and masters of only a few. They are almost never statisticians or statistical consultants, so they need to know both their own field of enquiry and statistics well, and most are deficient on the statistical side. For the most part researchers know the general statistical conventions of their own area within psychology (there are considerable differences between areas in both the details of particular types of analysis and in the choice of which analysis to employ) and often do not even know the logical reasoning behind the statistics they use. This results in things like reification of p-values as an indicator of truth rather than as a statistically likely bet.

Third, science is an accumulative human enterprise that works over time by telling us what is not true, yet it is never presented as such. So what seems like a field in crisis could be more aptly described as a very difficult field in the process of gradually self-correcting by the normal scientific process of eliminating falsehoods. If a field of science isn't in crisis then maybe it isn't making much progress.
posted by srboisvert at 11:45 AM on February 4, 2013 [8 favorites]
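On the p-value point above, a toy back-of-the-envelope sketch (mine, with assumed values for base rate and power) of why p < 0.05 is a statistically likely bet rather than an indicator of truth: the share of "significant" findings that are false alarms depends heavily on how often the hypotheses being tested are true in the first place.

```python
# Toy calculation: "power" is the chance a real effect reaches significance,
# "base_rate" is how often the tested hypothesis is true to begin with.
def false_discovery_rate(base_rate, power, alpha=0.05):
    """Fraction of significant results that are false positives."""
    true_hits = base_rate * power         # real effects correctly detected
    false_hits = (1 - base_rate) * alpha  # null effects slipping past the threshold
    return false_hits / (true_hits + false_hits)

for base_rate in (0.5, 0.2, 0.05):
    fdr = false_discovery_rate(base_rate, power=0.5)
    print(f"hypotheses true {base_rate:.0%} of the time -> "
          f"{fdr:.0%} of significant findings are false positives")
```

With 50% power and only one hypothesis in twenty true, roughly two-thirds of the findings that clear p < 0.05 are false positives under these assumed numbers.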


Radio Lab had a segment about something I've never heard of before or since. In a nutshell, when you do a study and get results, every time you do the same study again the effects seem to be less and less. (Pardon my overly simple explanation.)

I've always wanted to hear more on this subject, as the very idea of it being true totally creeps me out.

Could this be that?
posted by cccorlew at 11:57 AM on February 4, 2013 [3 favorites]


So what seems like a field in crisis could be more aptly described as a very difficult field in the process of gradually self-correcting by the normal scientific process of eliminating falsehoods

Yes, although part of the problem that this article is pointing towards is that while this is in theory the "normal scientific process" in practice there are significant institutional pressures that tend to suppress large scale efforts at replication of published studies. A journal is always keen to publish a paper that says "we've discovered a hot new effect!"--it's much less keen to publish a paper that says "we tried to replicate that hot new effect and failed." And, perhaps more importantly, being the person who publishes the first paper gets you lots of kudos and enhances your institutional cred. Being the person who publishes the second paper makes you "controversial" and "uncollegial" etc. etc.

This is a problem not just in the behavioral sciences, of course--there's been a lot of stuff in recent years about how few claims in the biomed field get adequately tested/replicated and how frequently they fail to replicate fully on the rare occasions when someone does try. Here, of course, we have the problem that testing is often extremely expensive. There are deep pockets out there with an incentive to fund a test that might find a positive result (X is the new wonderdrug!) but there are no deep pockets at all that are keen to fund debunking research.
posted by yoink at 11:59 AM on February 4, 2013 [8 favorites]


Thank God. Some serious - and public - skepticism needs to be placed on the entirety of behavioral economics. Unfortunately, it's a business fad, so it might be awhile yet.

Is there any particular reason to single out behavioral economics here, as opposed to any other area of psychology?
posted by a snickering nuthatch at 12:06 PM on February 4, 2013


Is there any particular reason to single out behavioral economics here, as opposed to any other area of psychology?

One of the biggest reasons "priming" has become so popular is that it's one of the psychological phenomena at the heart of behavioral economics. Given the interest of business people (marketers and human resources people, in particular) in behavioral economics, interesting phenomena in the psychology literature pretty quickly turn to mechanisms of control and coercion when they're presented as behavioral economics.

That, of course, isn't behavioral economists' fault (well - lots of them consult with companies, so maybe it is!). But if your interest is in fighting the harmful effects of this "priming" work, you'll very logically start with the way it's been popularized.
posted by downing street memo at 12:14 PM on February 4, 2013


Perhaps I'll buy some dwarf hamsters at PetCo this evening.
posted by smidgen at 12:30 PM on February 4, 2013 [4 favorites]


But if your interest is in fighting the harmful effects of this "priming" work, you'll very logically start with the way it's been popularized.

It should be said that "priming" exists in several domains, language foremost, and it's been documented in various forms in hundreds of studies stretching back as long as we've had millisecond timing in behavioral experiments. It's generally a small effect with no broader behavioral implications.

But then, people from "applied areas" who seem to think entirely in metaphors seize onto this work and run-run-run with it. Priming is not mind control, and anyone who describes priming in sensational and far-reaching terms should be treated with intense skepticism.
posted by Nomyte at 12:33 PM on February 4, 2013 [10 favorites]


Because it deals with such complex and varied contexts, psychology must limit sharply the number of contexts for research compared to the entire context space. This is totally understandable. Where it often errs is in applying results from studies in narrowly defined contexts and generalizing them to the entire context space. The rationale is that the study itself is only to confirm a particular behavioral theory in that context. However, the theories are too often vague and not quantitative enough to justify that rationale.

I don't know what the solution is to this problem. Perhaps more quantitative specificity in psychological theory is the answer, but that is a wicked hard problem.
posted by Mental Wimp at 12:59 PM on February 4, 2013 [2 favorites]


Metafilter: not mind control.
posted by herbplarfegan at 1:08 PM on February 4, 2013 [1 favorite]


In a nutshell, when you do a study and get results, every time you do the same study again the effects seem to be less and less.

This was popularized by an article by Jonah Lehrer in The New Yorker. As Lehrer is in disgrace these days, it's less likely that anyone will want to risk their reputation on an idea that is strongly associated with him.
posted by benito.strauss at 1:18 PM on February 4, 2013 [1 favorite]


interesting phenomena in the psychology literature pretty quickly turn to mechanisms of control and coercion when they're presented as behavioral economics

Although the implication of the FPP is that the terribly cunning things people think they've been doing to "control" and "coerce" us into making certain decisions actually don't work.

It's funny how these things go round and round in waves. I'm old enough to remember all the hoo-hah about "subliminal advertising" in the late 60s and early 70s--which all turned out to be a complete crock. There was a sufficient panic about subliminal advertising turning us all into mindless victims of our corporate overlords for several countries to pass laws banning it and for the FCC in the US to threaten to revoke the licenses of TV stations that used it.
posted by yoink at 1:23 PM on February 4, 2013 [2 favorites]


This was popularized by an article by Jonah Lehrer in The New Yorker. As Lehrer is in disgrace these days, it's less likely that anyone will want to risk their reputation on an idea that is strongly associated with him.

The idea is hardly "associated with" Lehrer; he was merely reporting on research done entirely by others. The 'decline effect' as it is called has been observed since the 1930s and plenty of serious scientists and philosophers of science have written on it. It's not like anyone's going to stop writing about or listening to Dylan because Lehrer invented a bunch of Dylan quotes.
posted by yoink at 1:28 PM on February 4, 2013 [2 favorites]


yoink: "I think the idea is you test priming with a picture of a monkey and whether people talk more on leaving the room and you get nothing, so you don't publish it. Then you test priming with recordings of classical music and whether people are more likely to clear away their trash on leaving the room and you get nothing, so you don't publish it. Then you test priming with pictures of Stalin and whether people condemn the research assistant to the Gulag as they leave the room, and you get nothing. Then you do the study with senior-words and walking times and you get a hit, so you publish."

Exactly - and at this point you have no incentive to mention the earlier "failures."

I have not made a living doing science in over a decade, but I hope that newer online open publishing models make it easier to publish negative results. The data is still useful information in aggregate with other work, despite being ostensibly less exciting when standing alone.
posted by exogenous at 2:02 PM on February 4, 2013 [1 favorite]


Doing behavioral neuroscience experiments with even fairly dumb animals (fruit flies, in my case) is tremendously sensitive. Slight differences in sometimes subtle quantities (key ones include food quality, humidity, temperature, time of day, how many individuals were raised together, and whether they are mated, among many others) can produce profoundly different results. A good experimental setup has a consistent behavior once you fix as many things as you can, and even then weird, unexplainable things will happen from time to time. Given this, it'd be perfectly easy to imagine genuine psychological effects in people that would be tremendously difficult to experimentally assay, as it is so difficult/unethical to control enough internal variables of the participants. This isn't to say that these results shouldn't be treated with skepticism (all behavioral results should be, until shown repeatedly and in multiple labs, whether on humans or other animals), just that folks should also be aware of how (frustratingly) sensitive entirely real effects can be.
posted by Schismatic at 2:15 PM on February 4, 2013 [3 favorites]


I read Kahneman's book a little while after he sent that email, and this passage definitely stuck out in retrospect (bolding mine):
When I describe priming studies to audiences, the reaction is often disbelief. This is not a surprise: System 2 believes that it is in charge and that it knows the reasons for its choices. Questions are probably cropping up in your mind as well: How is it possible for such trivial manipulations of the context to have such large effects? Do these experiments demonstrate that we are completely at the mercy of whatever primes the environment provides at any moment? Of course not. The effects of the primes are robust but not necessarily large. Among a hundred voters, only a few whose initial preferences were uncertain will vote differently about a school issue if their precinct is located in a school rather than in a church—but a few percent could tip an election.

The idea you should focus on, however, is that disbelief is not an option. The results are not made up, nor are they statistical flukes. You have no choice but to accept that the major conclusions of these studies are true. More important, you must accept that they are true about you. If you had been exposed to a screen saver of floating dollar bills, you too would likely have picked up fewer pencils to help a clumsy stranger. You do not believe that these results apply to you because they correspond to nothing in your subjective experience. But your subjective experience consists largely of the story that your System 2 tells itself about what is going on. Priming phenomena arise in System 1, and you have no conscious access to them.
Little wonder he reacted so strongly, if counter-intuitive notions like this were foundational to his work and reputation.
posted by rollick at 2:25 PM on February 4, 2013 [1 favorite]


The idea is hardly "associated with" Lehrer;

It certainly is in my mind.
posted by benito.strauss at 3:37 PM on February 4, 2013


Good piece, I was actually collecting links to do an FPP about Bargh, and his disgraceful reaction to those researchers and journalists, but wasn't happy with the flow.
posted by smoke at 4:46 PM on February 4, 2013


It certainly is in my mind.

Are you suggesting that you believe that Jonah Lehrer discovered regression toward the mean?
posted by Nomyte at 5:18 PM on February 4, 2013 [1 favorite]


Are you suggesting that you believe that Jonah Lehrer discovered regression toward the mean?

1) No. "Associated with" is not the same thing as "discovered".

2) The article you linked to has nothing to do with the "decline effect" that Lehrer's article discussed. About 12 paragraphs in, that Wikipedia article says
In sharp contrast to this population genetic phenomenon of regression to the mean ... the term "regression to the mean" is now often used to describe completely different phenomena in which an initial sampling bias may disappear as new, repeated, or larger samples display sample means that are closer to the true underlying population mean.
Don't feel bad, though. Lehrer made the same mistake in his article. [Others may describe this as just letting one phrase serve double duty. I choose to call it a mistake. Having spent time trying to explain and make clear a 100+ year old, very well understood statistical phenomenon to students, I resent someone lazily applying the same term to a completely different phenomenon.]
posted by benito.strauss at 5:53 PM on February 4, 2013


Metafilter: not mind control.

I have this sudden urge to read AskMe....
posted by BlueHorse at 6:20 PM on February 4, 2013


In a nutshell, when you do a study and get results, every time you do the same study again the effects seem to be less and less.

Are you suggesting that you believe that Jonah Lehrer discovered regression toward the mean?

Regression to the mean has nothing to do with the observed effects being smaller and smaller. "If a variable is extreme on its first measurement, it will tend to be closer to the average on a second measurement" could just as easily mean that the variable was atypically low. In fact, this is why God gave us outlier analysis.
posted by Kid Charlemagne at 7:22 PM on February 4, 2013


Regression to the mean has nothing to do with the observed effects being smaller and smaller.

Regression to the mean refers, in the most general sense, to the fact that a random variable (such as a sample mean or a difference in sample means) tends to take on moderate values, rather than extreme values. This means that effects that turn out to be spurious findings will tend not to get replicated, and effects that were previously thought to be larger will appear weaker on replication. This is pretty clearly separate from sample means tending toward the population mean, etc. And, in any event, associating a basic statistical idea with a 32-year-old hotshot journalist seems pretty silly, which is all I was trying to say.
posted by Nomyte at 9:00 PM on February 4, 2013
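A small sketch of that shrink-on-replication pattern (my illustration, with an assumed true effect of 0.2 standard deviations and 30 subjects per group): every study measures the same small real effect, but the originals that happened to reach significance have estimates inflated by selection, while unselected replications land much closer to the truth.

```python
# Sketch of the "decline effect" as selection plus regression to the mean:
# significant originals overestimate a small true effect; replications don't.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_effect, n = 0.2, 30
originals, replications = [], []

for _ in range(2000):
    primed = rng.normal(true_effect, 1, n)
    control = rng.normal(0, 1, n)
    if stats.ttest_ind(primed, control).pvalue < 0.05:
        originals.append(primed.mean() - control.mean())
        # one replication of the same true effect, with no selection applied
        replications.append(rng.normal(true_effect, 1, n).mean()
                            - rng.normal(0, 1, n).mean())

print(f"True effect: {true_effect:.2f}")
print(f"Mean effect in significant originals: {np.mean(originals):.2f}")
print(f"Mean effect in their replications:    {np.mean(replications):.2f}")
```

Under these assumptions the significant originals report an effect well above the true 0.2, while the replications hover near it, which looks like a decline without anything spooky going on.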


Regression to the mean refers, in the most general sense, to the fact that a random variable (such as a sample mean or a difference in sample means) tends to take on moderate values, rather than extreme values

...for unimodal density function distributions. In general, regression to the mean is really regression to higher density or probability. And, contrary to the Wikipedia article quoted by benito.strauss, it's the same phenomenon evidencing itself in many ways. A simple experiment illustrates the effect. Take 100 fair coins (probability of heads = 50%) and flip them. Select the ~50 coins that came up heads (one extreme of the heads/tails distribution). The frequency of heads in this selected population is 100%. Now flip those ~50 coins again. I guarantee the frequency of heads will be very close to the population mean of 50% on the second flip. The extent to which the measurement is random vs. determined or stable is the extent to which regression to the mean operates. In population genetics, the phenotype is a measurement of the genotype and each new generation is a resampling of the measurement. In publications, the first positive result is likely an extreme of the p-values or differences in the parent distribution of all possible study outcomes for that study. If I take a bunch of people who score high on a depression scale, give them a placebo, then measure them again, they will score lower. It's really everywhere.
posted by Mental Wimp at 11:20 AM on February 5, 2013 [1 favorite]
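Mental Wimp's coin-flip illustration, rendered as a short sketch (mine) so the numbers are easy to check: keep only the coins that came up heads on the first flip, flip them again, and the heads rate in that selected group falls back to about 50%.

```python
# Flip 100 fair coins, select the "extreme" group (all heads), flip again.
import numpy as np

rng = np.random.default_rng(42)
first_flip = rng.integers(0, 2, size=100)    # 1 = heads, 0 = tails
heads_coins = first_flip[first_flip == 1]    # the selected, all-heads group
second_flip = rng.integers(0, 2, size=heads_coins.size)

print(f"Heads rate in selected group, first flip:  {heads_coins.mean():.0%}")  # 100% by construction
print(f"Heads rate in selected group, second flip: {second_flip.mean():.0%}")  # back near 50%
```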




This thread has been archived and is closed to new comments