Powerless Posing
October 18, 2017 9:24 PM   Subscribe

When the Revolution came for Amy Cuddy. Amy Cuddy's 2012 TED Talk on power posing is the site's second most popular, viewed 43 million times. But after the reproducibility of her research was questioned, things have gone south in a complicated fashion. This NYT article discusses the controversy, the main players and the background behind the reproducibility problems (previously) in social psychology.

"Some say that she has gained fame with an excess of confidence in fragile results, that she prized her platform over scientific certainty. But many of her colleagues, and even some who are critical of her choices, believe that the attacks on her have been excessive and overly personal. What seems undeniable is that the rancor of the critiques reflects the emotional toll among scientists forced to confront the fear that what they were doing all those years may not have been entirely scientific."
posted by storybored (85 comments total) 23 users marked this as a favorite
 
The TED talk which kinda helped me at one time...
posted by Lukenlogs at 9:55 PM on October 18, 2017


The framing of this article is outrageous. The rules haven't changed. The rule is that you need to be able to convince reasonable people of the truth of your claims. The article seems to think experimental reproducibility is an arbitrary novel criterion, when actually it's a bedrock research principle and any scientific field which abandons it is churning out unverifiable garbage.
posted by Coventry at 10:12 PM on October 18, 2017 [49 favorites]


Man, reading this gives me post-academia shudders, almost a year out.

This is a conversation that badly needed to happen across the board, but especially in Gladwell-era psychology. We have the computational tools and techniques to just do better now. And yeah, a lot of point-and-laughing at a handful of examples like Cuddy is a defense mechanism that crusty full profs use as a shield, protecting their old work.

It sucks sucks sucks that a woman researcher is at the center of the hurricane, especially as it starts breaking into popular press, but I can't say I'm surprised. The burden of proof is just... shifted slightly off-center for men versus women. And is it a coincidence that it's a field with a higher percentage of women? I'm skeptical. (I believe women are the majority among psych assistant professors, and close to it among associates.)
posted by supercres at 10:14 PM on October 18, 2017 [30 favorites]


I totally did the superman standing pose and felt good doing it in a number of stressful situations.
Placebo effect or not, I still enjoyed her TED talk.
posted by eusebis_w_adorno at 10:15 PM on October 18, 2017 [5 favorites]


The need for reproducibility isn't novel; it's just suddenly in demand. For better or worse (usually worse), academic research operates on a supply-and-demand basis.

Modern computational statistics can simulate replication (without the time and expense), and tools like Simmons' AsPredicted.org platform make stating your hypothesis in advance (a fundamental part of the scientific method) real, instead of something that happens during write-up, after the results and figures are in.
posted by supercres at 10:19 PM on October 18, 2017 [2 favorites]


I know many of the players involved, and my local Twitter feed is a bit abuzz about the article. I think the general feeling is:

1) The replication revolution has changed the rules in social psych (and increasingly in other fields of social science), and this article was useful, both for those of us in the field and for outsiders, in pointing out the ways this shift is damaging as well as helpful. Though the story concentrates on the human side, there is also (replicable) evidence that fields caught up in perceptions of scandal have trouble attracting good researchers. So understanding how to correct flawed research properly is critical to the health of the field.

2) I think there is a worry that the pile-on of Amy went too far, but also some frustration that she does not seem to acknowledge the flaws in her methods (which are substantial, and acknowledged by her coauthor), and the fact that power posing was oversold from the beginning. Many academics struggle with these issues (how much to simplify our research, how much to popularize) so this is a complicated subject. And yes, there is general concern about the role of gender, academic prominence, and other factors in the way this has played out.

3) Gelman, Simmons and colleagues come off as crueler than they are. They do a lot to explain and advocate their view, and they create tools and articles for other scholars to use - they don't generally engage in witch hunts. I think people have complex feelings about the rapidly changing (and improving) standards of proof in the field, and those are reflected in the article.

So, in short, this is a good article about a complicated situation.
posted by blahblahblah at 10:34 PM on October 18, 2017 [43 favorites]


I am probably not smart enough to be able to comment on the scientific accuracy of power posing, but the book was pretty interesting at least until that point. I think she had some good points on power and what it's like to feel ineffectual vs. like you actually have some power.
posted by jenfullmoon at 10:48 PM on October 18, 2017


It sucks sucks sucks that a woman researcher is at the center of the hurricane

Because a woman must never fail? Look at it another way, the fact that a woman, too, can do bad research, attain media stardom, then crash and burn, might be a healthy sign.
posted by Segundus at 11:27 PM on October 18, 2017 [11 favorites]


According to her Wikipedia article, Cuddy suffered severe head trauma in an auto accident while she was an undergraduate at CU Boulder, and her doctors told her she would likely not be able to lead a normal life. Her IQ went down by 30 points, for a while, and I'm sure she had many other deficits.

Her recovery and subsequent success, in my opinion, is not so much miracle as brilliant achievement, and I would be surprised if her work on 'power poses' did not evolve out of the methods she used and developed to put her own mind and personality back together -- which might account for some of her confidence in them.
posted by jamjam at 11:46 PM on October 18, 2017 [9 favorites]


I think it's a given that gender dynamics play a stronger role in determining the force and the personal, vitriolic tenor of Cuddy's repudiation than many would like to admit, or are even necessarily consciously aware of. We live inside patriarchy, and this is part of what it means to live inside patriarchy. That part sucks.

But it appears to be true that — to the limits of my ability to determine, anyway — the work was shallow and methodologically unsound, made unjustifiable claims and was widely promoted based on those claims. And it is still more true that this body of work was taken up by an apparatus that exists to simplify the complex to the point of triviality, in the name of packaging and marketing 18-minute segments of uplift, with emotional beats engineered to Save The Cat! levels of predictability. As far as I'm concerned, anything that puts a wrench in that apparatus is welcome.

TED is a machine that makes a very few people wealthy and all of us poorer. The "power pose" was made for it, or rather they were made for one another, and we'll all be the better for it when both have disappeared from the stage.
posted by adamgreenfield at 2:07 AM on October 19, 2017 [44 favorites]


I agree, Adam: it's impossible not to factor in misogyny with the response.

I also agree with blahblahblah: I've been a reader of Gelman's blog for years (I have no background in this stuff, I just find it interesting), and he comes across as a genuinely nice guy. I recall reading his stuff on Cuddy at the time, and truly it didn't strike me as gratuitously vicious.
posted by smoke at 3:30 AM on October 19, 2017 [4 favorites]


Almost all science today suffers from a major replication deficit. We should offer PhDs in replication, and make straight-up replication of others' studies (based purely on their published methods, no conversation) a prestigious scientific job. Of course I'm asking for an overall system change, not a cudgel to beat existing researchers with for not having done so. Arbitrarily picking specific individuals to castigate for lack of replicability isn't fair, given that we've set up our incentive structures to skip that portion of the scientific method, and of course scientists are just going to respond to those incentives.
posted by Easy problem of consciousness at 3:38 AM on October 19, 2017 [24 favorites]


Some further thoughts; I do think this piece understates the ways in which Cuddy could have responded without crucifying herself.

But I think also there's a few things going on here, which the piece doesn't really dwell on as much as I would like:

There is the problem of replicability in social science; there is the problem of laypeople discounting solid research in social psychology as a result of the 'crisis'; there is the problem of journal publishing and uni funding; and finally there is the problem of how modern western society co-opts university research for pat bromides and just-so stories - especially egregious in the corporate sector (don't get me started on the shit I see about growth and fixed mindsets through my work, ugh).

I recently finished Robert Sapolsky's latest book, Behave (magisterial, and I'm not one to bandy that word about lightly!). Behave, I think, captures this dichotomy quite well. On the one hand, Sapolsky is at pains throughout the book to point out the nuances that are often lost in the nexus of research and pop culture - especially when it comes to what has actually been proven, and how. He also regularly points out that any small thing may have an effect on behaviour, but that the effect can be correspondingly small.

And yet on the other hand - especially in the later stages of the book - he references many social psychology studies that, from what he relates in the book, seem a little dicey to me. And he never mentions the size of the study, p value etc etc. These are very "primey" studies that do seem just a little too convenient.

For me I think it nicely captures the tension behind this stuff - we seek to explain things in an accessible way, but in doing so we might lose the very stuff that makes it valuable.

Anyway, I recommend the book.
posted by smoke at 3:54 AM on October 19, 2017 [8 favorites]


don't get me started on the shit I see about growth and fixed mindsets through my work, ugh

I'd be interested in seeing you get started on this. Do you specifically mean misapplication/over-interpretation of the ideas, or are you skeptical of the underlying data on the idea itself?
posted by metaBugs at 4:02 AM on October 19, 2017


Do you specifically mean misapplication/over-interpretation of the ideas, or are you skeptical of the underlying data on the idea itself?
posted by metaBugs


Eponycetera.
posted by adamgreenfield at 4:03 AM on October 19, 2017 [1 favorite]


The old Socratic idea that the worst thing is to pretend we have knowledge when we don't is supposed to be fundamental to science. The 'replication crisis' should be celebrated, not feared. It is a profound opportunity.
posted by thelonius at 4:26 AM on October 19, 2017 [5 favorites]


I mean the misapplication of the ideas to a gross degree ("if you're not on board with this restructure, you're being fixed!"; "if you don't agree with me, you're being fixed!"; "any criticism or doubt is fixed!"; "if something at work is not going well, it's your own fault for being fixed, trust in the infallibility of your corporate overlords and all will be well!")

It's especially ironic because Dweck herself has pointed out the limits of her research and cautioned against overreach. It's also ironic that a lot of her research was about children, and that is a relationship corporates are often, wittingly or otherwise, trying to replicate with their workers.

Beyond that, I do personally have some doubts about mindset. It feels very question-begging to me: Dweck has defined good behaviours as good and bad behaviours as bad, and then says, 'good things are good and bad things are bad'.

The hypothesis feels essentially unprovable to me: anything negative is "fixed", "fixed" is defined by bad outcomes, and the theory admits no scenario where "fixed" attributes could be a right or useful response. It is defined and grouped by failure. Such a grouping of behaviours seems a bit arbitrary and convenient to me, an invented classification rather than an observable one. Surely people switch between mindsets regularly with different topics, relationships, work styles, health, time of day, age, etc. As a natural phenomenon, I'm not inclined to buy it, frankly, and I don't think there's proof for it.

That all said, nearly all of my exposure to her work has been through corporate shite workshops where it has been wildly, wildly misapplied, and through the popular press. I lack both the expertise and the reading to legitimately critique her claims, which seem much more circumspect than how they are often portrayed. I would not be surprised if someone more informed than me were able to persuade me otherwise. Unfortunately, all discussion I've had to date on it has been with people who haven't read the primary sources themselves, and who have wholly bought into the idea as a self-help concept rather than an actual theory.
posted by smoke at 4:37 AM on October 19, 2017 [5 favorites]


Reproducible experiment design, experiment method, and results are the three criteria sine qua non of "scientific method." If Ms Cuddy is unable to verify a theory by providing an intelligible and testable hypothesis, no one else can verify it. We are left then with a theory, critique of the theory, and a surfeit of coffee and weed to furnish the truth in the predictive power of metaphysical emanations.
posted by marycatherine at 4:47 AM on October 19, 2017 [3 favorites]


Good round up on where mindset is at currently can be found here. Sorry for the derail.
posted by smoke at 4:52 AM on October 19, 2017 [3 favorites]


NYT: "When I asked Gelman if he would ever consider meeting with Cuddy to hash out their differences, he seemed put off by the idea of trying to persuade her, in person, that there were flaws in her work."

Really, this is such a strange suggestion by the NYT. As though science were a matter of "hashing out one's differences", rather than Cuddy admitting that her results are no good - the original study had only 20 participants!? I don't really see that there is anything to discuss. And it seems like a lot of the personal attacks are because Cuddy refuses to admit that her experiment was flawed or faked. I.e., that in the end it IS in fact a personal issue rather than a scientific one.
posted by mary8nne at 5:08 AM on October 19, 2017 [8 favorites]


Reproducible experiment design, experiment method, and results are the three criteria sine qua non of "scientific method."

Statements like this almost invariably get dicey one way or another. A common way is that they end up excluding fields that everyone would recognize as clearly science. In this case, an insistence on experimental methods would exclude the large part of astronomy that is limited to observational research.
posted by GCU Sweet and Full of Grace at 5:29 AM on October 19, 2017 [11 favorites]


This all goes back to Carney, Cuddy & Yap (2010), "Power Posing: Brief Nonverbal Displays Affect Neuroendocrine Levels and Risk Tolerance", Psychological Science 21(10) 1363–8. The abstract makes some strong claims:
Humans and other animals express power through open, expansive postures, and they express powerlessness through closed, contractive postures. But can these postures actually cause power? The results of this study confirmed our prediction that posing in high-power nonverbal displays (as opposed to low-power nonverbal displays) would cause neuroendocrine and behavioral changes for both male and female participants: High-power posers experienced elevations in testosterone, decreases in cortisol, and increased feelings of power and tolerance for risk; low-power posers exhibited the opposite pattern. In short, posing in displays of power caused advantaged and adaptive psychological, physiological, and behavioral changes, and these findings suggest that embodiment extends beyond mere thinking and feeling, to physiology and subsequent behavioral choices. That a person can, by assuming two simple 1-min poses, embody power and instantly become more powerful has real-world, actionable implications.
If you read the study, you'll see that the abstract is a bit more enthusiastic than the experiment would seem to justify — they didn't actually test whether subjects "instantly became more powerful", but instead measured testosterone and cortisol levels in saliva; propensity for risk-taking via a gambling simulation; and self-reported feelings of power. But even the reported results are very impressive, and so there was a lot of interest in the phenomenon, and eventually a bunch of attempts failed to replicate the effect. In particular Ranehill et al. (2015) carried out a similar protocol but using a larger sample (n=200 rather than n=42 in the original study): they were able to replicate the "improved feeling of power" effect, but none of the other effects. As a result of this and other replication failures, Carney was convinced that the original study had been mistaken:
[S]ince early 2015 the evidence has been mounting suggesting there is unlikely any embodied effect of nonverbal expansiveness (vs. contractiveness)—i.e., “power poses”—on internal or psychological outcomes. As evidence has come in over these past 2+ years, my views have updated to reflect the evidence. As such, I do not believe that “power pose” effects are real.
This is worth reading for an (impressively honest) explanation of how the original results were achieved:
Initially, the primary D[ependent] V[ariable] of interest was risk-taking. We ran subjects in chunks and checked the effect along the way. It was something like 25 subjects run, then 10, then 7, then 5. Back then this did not seem like p-hacking. It seemed like saving money (assuming your effect size was big enough and p-value was the only issue).
If I understand this description correctly, they kept running more study participants until they got a result that could be reported as significant at the p < 0.05 level and thus publishable, without realising that this motivated stopping rule invalidated the statistical analysis.
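To see concretely why that stopping rule is a problem, here is a toy simulation (mine, not from the paper; the batch sizes and everything else are hypothetical): there is no effect at all in the data, yet checking the p-value after every batch and stopping at the first p < 0.05 pushes the false-positive rate well past the nominal 5%.

```python
# Toy simulation of "run a chunk, check significance, run more if needed".
# Both groups are drawn from the SAME distribution, so every "significant"
# result is a false positive.  Batch sizes are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
batches = [12, 5, 4, 3]        # subjects added per group at each peek
n_sims = 10_000
false_positives = 0

for _ in range(n_sims):
    a = np.empty(0)
    b = np.empty(0)
    for n in batches:
        a = np.concatenate([a, rng.normal(0, 1, n)])   # "high-power" group, no real effect
        b = np.concatenate([b, rng.normal(0, 1, n)])   # "low-power" group, no real effect
        if stats.ttest_ind(a, b).pvalue < 0.05:        # peek after every batch
            false_positives += 1
            break                                      # stop as soon as it looks significant

print(f"False-positive rate with peeking: {false_positives / n_sims:.3f}")
# A single test at the final sample size would sit near 0.05; peeking pushes it well above that.
```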

(I prepared for writing this comment by assuming a high-power pose for 1 minute.)
posted by cyanistes at 5:46 AM on October 19, 2017 [28 favorites]


"The rules changed" is such an infuriating line. The rules are not some thing out there in the universe hung by God alongside the fixed stars. The rules are made of decisions made by people in a community. There are all kinds of communities with all kinds of rules and some of them are bad or inadequate or misguided and you don't get to live an accountability-free life because you ensconced yourself in a community with a high tolerance for bad science, achieved a prominent position in that community, and never once challenged that tolerance or those rules; that's on you.

And then there's the would-be exculpatory appeal to authority:
The abstract that they eventually wrote — that their editors approved — reflects the incautious enthusiasm that characterized the era
"That their editors approved?" These are fucking tenure-track faculty at Ivy League schools! Grown adults don't get to pull the "but mom said it was OK" card.
posted by enn at 6:07 AM on October 19, 2017 [13 favorites]


"The rules changed" is such an infuriating line. The rules are not some thing out there in the universe hung by God alongside the fixed stars. The rules are made of decisions made by people in a community. There are all kinds of communities with all kinds of rules and some of them are bad or inadequate or misguided and you don't get to live an accountability-free life because you ensconced yourself in a community with a high tolerance for bad science, achieved a prominent position in that community, and never once challenged that tolerance or those rules; that's on you

While the strong version of your statement is certainly right - people doing knowingly shoddy work shouldn't get a pass - I otherwise disagree with you very strongly.

Research methods are a field of study in and of themselves. In just the last few years, a few of the major changes in the social sciences include:

a) Rise of Bayesian approaches to statistics, and increased care about p-value interpretation
b) The "identification revolution" in economics, and its discontents
c) Improved modelling approaches becoming more accessible thanks to increased computing power, and the expectation that people use techniques like matching.
d) Rise of big data techniques like machine learning.
e) Advances in coding qualitative data
f) Increasing interest in mechanisms of action, over results that just tell you what happened
g) Increasing interest in effect sizes, not just significance levels

...and this doesn't include all of the little advances happening all the time. It isn't surprising that work that was considered okay turns out to have problems. Techniques that once were acceptable turn out to be problematic upon further study or reflection, or are found to be misapplied. The advance in methods is a very big deal (though there is a downside as well, as focus is shifted to narrow problems, we see less Big Theory, fewer Karl Marxes and Karl Poppers, for example). Science advances not just in results, but also in approach.

The power pose work used bad methods (essentially a garden of forking paths, where researchers home in on interesting results until they find spurious connections). But the line between bad and good methods can be hard to draw (some exploratory analyses are critical to finding interesting problems; other kinds lead to p-hacking), and they were not doing anything intentionally dishonest or wrong at the time. Still, we know now why these approaches were a problem (thus The Rules Have Changed).
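A toy illustration of the forking-paths point (my own sketch; the number of outcomes and sample sizes are hypothetical): even with purely null data, the more outcome measures, subgroups, or covariate choices you let yourself examine, the better the odds that something comes out "significant".

```python
# Null data, several outcome measures: how often does at least one comparison
# reach p < 0.05 by chance alone?  Numbers are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_per_group, n_outcomes, n_sims = 21, 6, 10_000   # e.g. hormones, risk task, self-reports...
at_least_one_hit = 0

for _ in range(n_sims):
    group_a = rng.normal(0, 1, (n_per_group, n_outcomes))   # no true effect anywhere
    group_b = rng.normal(0, 1, (n_per_group, n_outcomes))
    pvals = stats.ttest_ind(group_a, group_b).pvalue         # one t-test per outcome (column)
    if pvals.min() < 0.05:
        at_least_one_hit += 1

print(f"Chance of at least one 'significant' outcome: {at_least_one_hit / n_sims:.2f}")
# With 6 independent outcomes this is roughly 1 - 0.95**6, about 0.26, not 0.05.
```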

The essential difference between the authors of the power pose paper is that Carney acknowledged this, and no one thinks the worse of her for it. Cuddy has not acknowledged to the same degree that these results are problematic, which is part of the problem.
posted by blahblahblah at 7:08 AM on October 19, 2017 [26 favorites]


smoke: That all said, nearly all of my exposure to her has been through corporate shite workshops where it has been wildly, wildly misapplied, and popular press. I lack both expertise and reading to legitimately critique her claims, which seem much more circumspect than how they are often portrayed. I would not be surprised if someone more informed than me was able to persuade me otherwise. Unfortunately, all discussion I've had to date on it has been from people that haven't read the primary sources themselves, and have wholly bought into the idea as a self help concept rather than an actual theory.

I'm mostly familiar with Jo Boaler's work on mathematical mindsets, where she looks at how students' mindsets toward learning in general combine with their mindsets toward math in particular. E.g., lots of people think of math as something where you just have to memorize facts and algorithms, where the answer to "why?" is "it just is" - even though that couldn't be farther from the truth. And then there are cultural issues around the use of math as a gatekeeping subject for entry into higher-paid STEM fields, and ideas around "genius". Both math and computer science/programming, for example, suffer greatly from a cultural perception that individuals are either naturally talented in the area or naturally untalented, and this ties in with cultural perceptions about who a nerd is (male, white, socially awkward, etc.) - studies on this won't reference mindsets per se, but instead usually focus on gender bias or race. In the corporate world, the link that Boaler looks at would correspond to linking some version of fixed/growth mindset to cultural perceptions or assumptions about leadership? Anyway, there has also been a lot of research, going back a couple of decades I think, on resiliency (in children and in adults), which deals more generally with whether individuals approach trauma or setbacks as "well, that happened, but what can I learn from it or salvage going forward?" or not.

But yeah, I've even heard of people rigidly applying the form of certain exercises in Freirian pedagogy, for example. (General background for readers: Freire worked with adults learning to read in rural and racialized communities in Brazil, and focused on validating the lived experience of his students as expertise co-equal with academic expertise, and on structuring learning experiences around the needs and interests of the students. One of the tools he used involved showing pictures and having the students make stories about what was going on in the pictures, then using those stories as the text to learn reading from, instead of some textbook that probably had no relevance or connection to his particular students' lives, and was likely in fact kinda racist, classist, and exclusionary. So I've heard that some people use that exercise, but have a "correct" story or interpretation of the picture in mind, set themselves up as experts with higher authority level than their students, and just generally do the opposite of Freire's actual main goals.) Then there's all the people who can take a "love thy neighbor" philosophy and apply it to oppress and harass other groups of people, some of whom happen to be their neighbors. People's ability to misinterpret something into its exact opposite seems boundless, so I am not in the least surprised to hear that corporate culture has co-opted and mangled the mindset idea. (Which probably could use more experimental validation, especially in looking at the interplay between some sort of individual mindset and broader cultural context.) And that's not even getting into the self help industry and its issues. You have my sympathy for having to sit through bs corporate workshops.

posted by eviemath at 7:40 AM on October 19, 2017 [9 favorites]




blahblahblah, I'm not disputing that standards of practice change over time. But the author of the article seems to be saying in essence: look, this woman showed up every day and did what people told her to do, and now she's being judged for that—how unfair! That might fly if you're clocking in on the assembly line every day, but I'm sorry, if you are a researcher—and especially if you are a researcher near the top of your field, as one must be to secure a tenure-track gig at Harvard—then turning a critical eye on the methods and norms of your field is just part of the job.
posted by enn at 8:03 AM on October 19, 2017 [6 favorites]


I'm a PhD scientist; I completely understand the role of replication, validation, convergent evidence to support a theory from disparate disciplines, etc. I understand what the replication people--by this I mean both the academic principals in this story, and the laypeople/fans who flock around them--are trying to do.

Nonetheless: wow, what a pack of enormous assholes who can't see the forest for the trees.
posted by Sublimity at 8:12 AM on October 19, 2017 [2 favorites]


what the hell does that mean?
posted by MisantropicPainforest at 8:17 AM on October 19, 2017 [5 favorites]


I am sitting here wrestling with the New York Times article. I am kind of surprised at the tone that some of these scientists are taking with one another in these listservs or blog posts. I understand very strongly disagreeing with someone, or even having proof that they are incorrect, but I would hope that these interactions could be healthy and productive, since everyone is working for the common good.

And then, I got to this part of the article:

The author describes Gelman's blog by saying "He is respected enough that his posts are well read; he is cutting enough that many of his critiques are enjoyed with a strong sense of schadenfreude."

and then

When I asked Gelman if he would ever consider meeting with Cuddy to hash out their differences, he seemed put off by the idea of trying to persuade her, in person, that there were flaws in her work. “I don’t like interpersonal conflict,” he said.

and I wonder if that speaks to what I am getting at. Perhaps one of the many issues at play in this article is a need for better dialogue.

This is all still forming in my mind, so I may be completely wrong.
posted by 4ster at 8:31 AM on October 19, 2017 [3 favorites]


enn - I agree, but I think you are assuming they were not worrying about "the methods and norms" of the field. But there is tremendous worry about this stuff all the time among academics (including the Harvard OB group); we discuss it at conferences and in papers. When I review papers I raise these issues, and they are raised when people review mine. We agonize over the nature of the field and the kind of work we should be doing. It is a constant.

But there are many, many ways to mess up, and some of the ways in which they did were actually rather subtle at the time (we now have more safeguards against some of these issues, partly because we have learned from exactly this case). I think reading Carney's letter gives a good sense of how and why this is the case. Good editors and peer reviewers should have caught many of these problems, but sometimes they don't. Still, anyone reading the results section would know that this is preliminary and not conclusive. Early, somewhat flimsy work gets published all the time, and then corrected, modified, and retested (or ignored).

So this would not have been a big deal if not for the TED talk and promotion of power poses as a solution to many hard problems (gender and confidence linkages, for example). The standard of proof for a clinical intervention is higher, especially if the limits of the research are not made clear once it is popularized.

The bigger issue is the refusal to acknowledge when things are wrong. In the end, I think the article is sympathetic to Cuddy, but not so much so that it excuses the flaws in the work.
posted by blahblahblah at 8:32 AM on October 19, 2017 [4 favorites]


This article is a nightmare about imposter syndrome.
posted by Going To Maine at 8:44 AM on October 19, 2017 [2 favorites]


“I don’t like interpersonal conflict,” he said.

And this is a fucking nightmare phrase.
posted by Going To Maine at 8:55 AM on October 19, 2017 [2 favorites]


Regarding my comment above, I just reread blahblahblah's comment, and will defer to that, since he knows the people involved.
posted by 4ster at 8:55 AM on October 19, 2017


Perhaps one of the many issues at play in this article is a need for better dialogue.

I don't think this is true at all. I don't really understand the arguments for "why didn't he just call her?". Would you really expect anything different to have happened? Would Cuddy have put out a statement like her co-author did, accepting that the evidence doesn't support the claims in the paper? I don't think someone presenting an academic argument that an experiment has flaws should be required to persuade the original author in person for it to be good science or good professional behavior.
posted by demiurge at 8:56 AM on October 19, 2017 [12 favorites]


Would you really expect anything different to have happened?

If the answer to this is "no", that says a great deal about the scientific environment as a whole, and that's just as grim. If all conversation is adversarial, we die.
posted by Going To Maine at 9:25 AM on October 19, 2017


Gelman himself isn't cruel but the commenters on his blog can be pretty bad. Doesn't shock me that things got ugly.

I mean, just looking at the first few posts, I've found one guy who, in a post about replication failures, brought up the debunked Rolling Stone article about sexual assault at UVa, apropos of nothing. Click on his name and you find he writes for an alt-right blog which, along with his own work, features lots of articles about the "science" of IQ and ethnicity, and about the "anti-White" conspiracy by Jews and "Cultural Marxists". He's a pretty regular commenter, although he doesn't seem to bring up that stuff on Gelman's blog.

The driving force here is anger on social media. For a lot of people it's just sincere frustration with being punished for scientific honesty and seeing charlatans prosper. But for many others it's just that they finally have a socially acceptable reason to let out their misogynist derangement.

There's many, many men who have peddled way more egregious bullshit than Cuddy and they're going to get off much lighter.
posted by vogon_poet at 9:37 AM on October 19, 2017 [5 favorites]


When I asked Gelman if he would ever consider meeting with Cuddy to hash out their differences, he seemed put off by the idea of trying to persuade her, in person, that there were flaws in her work. “I don’t like interpersonal conflict,” he said.

and I wonder if that speaks to what I am getting at. Perhaps one of the many issues at play in this article is a need for better dialogue.


Have you ever read Gelman's work? Know anything about him? The idea that he's someone who doesn't engage with others' work is just preposterous. It's literally all he does. For the issues he discusses, he couldn't be clearer. There's plenty of dialogue going on. Cuddy maintains that her studies are good when in fact they are not, then says that Gelman should try to help psychologists instead of doing what he is doing. That's a flat-out bullshit statement.
posted by MisantropicPainforest at 11:18 AM on October 19, 2017 [5 favorites]


“I don’t like interpersonal conflict,” he said.

Haha yeah, except when he instigates it through "cutting remarks."
posted by rhizome at 11:22 AM on October 19, 2017


Clicking on his name, he writes for an alt-right blog, which along with his work features lots of articles about the "science" of IQ and ethnicity, and about the "anti-White" conspiracy by Jews and "Cultural Marxists". He's a pretty regular commenter although doesn't seem to bring up that stuff on Gelman's blog.

Pretty sure you're talking about Steve Sailer. Used to write for VDARE--the same site where Michelle Malkin got her start. Also was famous for the Steve Sailer award on Andrew Sullivan's blog, back in the day. Strange to see he's still around.
posted by MisantropicPainforest at 11:36 AM on October 19, 2017 [3 favorites]


I am kind of surprised at the tone that some of these scientists are taking with one another in these listservs or blog posts. I understand very strongly disagreeing with someone, or even having proof that they are incorrect, but I would hope that these interactions could be healthy and productive, since everyone is working for the common good.

So, I definitely chuckled at "everyone is working for the common good", because you don't have to get too far into STEM research to realize Benefiting The Common Good is frequently incidental to Getting Them Grants, as well as Proving I Am Very Smart And That Guy Is A Dumb Wrong Asshole.

That said, I'm also surprised by how this played out. When I've seen scientific arguments hashed out in public, it's usually in the form of competing papers published very close to one another, particularly aggressive questioning during a conference presentation, a rapid-fire exchange of letters or comments across multiple issues of the same journal, or a deep cut in an abstract or introduction that effectively amounts to subtweeting your rival. Which isn't necessarily healthy--but at least they maintain a veneer of collegiality that allows outside observers the opportunity to evaluate arguments on their merits.

If you're tossing out personal insults and ad hominem attacks in front of God and Man and the editors of Science, well, first of all, those aren't going to get published in any serious journal. Second of all, most everyone is going to assume you've lost the debate because obviously your scientific arguments aren't good enough to stand on their own. If you go beyond that to throw all your Sick Burns into some article or comment written for laypeople, and now you have random Twitter dudes with no background in the field offering up commentary like they know what the hell they're talking about, well, you've gone past gauche into something disturbing and bizarre. Even if you aren't laying down the Sick Burns, you don't tolerate them in your comments or encourage them, especially in an era when legitimate scientific discussions are increasingly under attack.

"But she won't admit she's wrong!" OK, but this is far from abnormal. Do you know how many scientists cling to their pet theory long, long after it's been thoroughly disproven and the avenues they're pursuing are borderline irrelevant? I'd argue it's even more common among prominent researchers, particularly if aforementioned theory is what catapulted them to prominence in the first place. If your lab is flush with money, you've got tenure, and someone out there will publish your work, then you've lost the major material incentives to take a step back and honestly examine the degree to which ego is affecting you.

If an unwillingness to redirect one's research was cause for a social media firestorm, good God, everyone and their dog would be reading about nothing else. I'd argue the only thing unique about Cuddy's attitude is that she's a woman. In general, women scientists know they're under extra scrutiny and aren't allowed the degree of defensiveness or slipshod validation tolerated in male peers. Confidence, particularly in the face of massive critique, invites different levels of negative feedback based on one's gender.

It's not that p-hacking and reproducibility aren't massive issues. They are, and they sure as hell aren't limited to social and psychological sciences. This is something the STEM community is facing as a whole and has not done a great job of addressing (though there are powerful structural and institutional disincentives to do so).

But boy, it's one thing to repudiate a study in a peer-reviewed journal in the form of a detailed, methodologically robust critique. It's quite another when it's the topic of a tweetstorm or Facebook comment debate. It's hard enough to have an intellectually honest scientific debate without some anonymous account popping in with reaction gifs, or getting emails from your grandma because she just watched Fox & Friends spin the argument into a rant about the Elite Ivory Tower Leftist Academics and their Fake Science, and now she wants to know why you Hate America plus this is proof climate change isn't real.

I admit the stakes differ between research disciplines. Nobody's handing out book deals and TED talks based on, like, someone's spectroscopic analysis of the effects of varied calcium ion levels on binding site conformation in an arbitrary calmodulin target protein. The potential for fame and fortune is a bit . . . lower. However, there's no way Cuddy's critics weren't aware of all of the above. It's universal. So yeah, you don't have to be in her fan club to have legitimate concerns about why they went this route and its larger implications.
posted by Anonymous at 12:04 PM on October 19, 2017


When I asked Gelman if he would ever consider meeting with Cuddy to hash out their differences, he seemed put off by the idea of trying to persuade her, in person, that there were flaws in her work. “I don’t like interpersonal conflict,” he said.

And this is a load of disingenuous bullshit. First, wow, pretending that posting critiques on your extremely popular, very public blog doesn't count as "interpersonal conflict" is crap. Second, initiating your critique by contacting a fellow scientist in a private or semi-private arena--email, a phone call, asking questions at a conference talk--is basic professional courtesy and a norm in research. If the article is accurate about their conversations, I feel Simonsohn and Simmons did not make enough of an effort to pursue the private route before going public with that blog post. But at least they tried.

Gelman is allowed to write whatever he wants, it's his blog. But if you claim you have an honest desire to fix a problem, you don't get to pretend the context and presentation of your criticisms and your suggestions have no effect on the success or failure of that ostensible goal.
posted by Anonymous at 12:38 PM on October 19, 2017


OK, sorry, one more point I forgot to make:

I don't really understand the arguments for "why didn't he just call her?". Would you really expect anything different to have happened?

Um, yes, because it does happen? The norm of initiating a friendly dialogue first is a norm because it works. And it appears they followed what they thought was Simmons and Simonsohn's suggestion in the first (and apparently only?) email--they dropped the offending curve.

Scientists are human. Like I pointed out, humans can be big-ass, stubborn jerks. But humans can also be thoughtful and reasonable, especially when they feel they're being engaged in good faith by someone they respect. It is a side-effect of being imperfect creatures whose decisions are inescapably, biologically tied to our emotional centers.
posted by Anonymous at 12:50 PM on October 19, 2017


Gelman's critiques were of the research done by Cuddy, and of a number of low-N, poorly done psychological studies, many of which were done by men. They were not personal attacks against Cuddy.

As he has said:

"On the occasions that I have contacted researchers directly with questions and criticism, it hasn’t always gone well, and I do think that once work is published (and certainly when it’s published in one of the top journals in the field), it should be open to criticism from all, and I think that any implicit expectation or norm of contacting the original researchers could be a bad idea to the extent that it raises a barrier to criticism, if even a small one. But that’s just my view, and to be sure there are costs either way."

So what's your point, anyway? That Gelman's critique of Cuddy is misogynistic?

It's perfectly reasonable to believe all of the following: that the general tenor of the criticism of Cuddy is misogynistic, AND that her research was crap and she oversold it and made a ton of money off of selling it and has asserted that her study was not flawed, AND that not all critics of Cuddy are misogynists.
posted by MisantropicPainforest at 12:54 PM on October 19, 2017 [6 favorites]


Those attacking Gelman should read his blog response. I felt the article was quite unfair to him.
posted by smoke at 1:04 PM on October 19, 2017 [2 favorites]


Pretty sure you're talking about Steve Sailer. Used to write for VDARE--the same site where Michelle Malkin got her start. Also was famous for the Steve Sailer award on Andrew Sullivan's blog, back in the day. Strange to see he's still around.

I also immediately assumed this was who vogon_poet meant. To be fair to Gelman, that guy really gets around.
posted by atoxyl at 2:00 PM on October 19, 2017 [2 favorites]


Cuddy wasn't doing all that "empowering of women and minorities" for free. A quick google of her speaker fees suggests 50K and up. Way up. She was profiting off of people who self-identify as vulnerable and in need of empowerment. There's no indication that her co-authors financially exploited their work this way. So yes, I think she does have an extra burden of responsibility in the public eye that goes well beyond what other researchers might handle for non-replication. Sometimes your science is wrong - that happens, and the field should absolutely be able to handle it more gracefully. But it's reasonable to be angry at someone who keeps selling a remedy that's been revealed as snake oil.
posted by BlueBlueElectricBlue at 2:28 PM on October 19, 2017 [3 favorites]


> observational research

hmm, yes, well, mathematical calculation is proof sufficient in this "evidence-based" era of inquiry into and mindfulness [!] of the mysteries of matter and anti-matter ("emanations"), is it not? Yanno, the numbers never lie, though the laser-printed labels may, accidental-like.
Over 30,000 Published Studies Could Be Wrong Due to Contaminated Cells
posted by marycatherine at 2:35 PM on October 19, 2017


Don't get me started on biomedical research. The only reason this crisis is restricted to social psychology is that the experiments are cheap and easy to replicate.
Over the past decade, before pursuing a particular line of research, scientists (including C.G.B.) in the haematology and oncology department at the biotechnology firm Amgen in Thousand Oaks, California, tried to confirm published findings related to that work. Fifty-three papers were deemed 'landmark' studies (see 'Reproducibility of research findings'). It was acknowledged from the outset that some of the data might not hold up, because papers were deliberately selected that described something completely new, such as fresh approaches to targeting cancers or alternative clinical uses for existing therapeutics. Nevertheless, scientific findings were confirmed in only 6 (11%) cases. Even knowing the limitations of preclinical research, this was a shocking result.
posted by Coventry at 3:02 PM on October 19, 2017 [5 favorites]


So what's your point anyway? That Gelman critique of Cuddy is misogynistic?

uh, wut?
  • Where did I say Cuddy's research was valid?
  • Where did I say Gelman was misogynist?
  • Where did I say Gelman's critiques were wrong?
It seems like you've interpreted my comment detailing scientific norms and the breach that happened here as a defense of Cuddy's research. If you reread my comment, you might notice I did no such thing.

Sure, I think it's likely implicit bias influenced the degree and persistence with which Gelman targeted Cuddy--but the shitty stuff is coming from his fanbase.

I think Gelman was irresponsible and has not behaved in good faith, both by not first raising the critique personally, and then by doing absolutely nothing to moderate the ad hominem attacks on her by his fans. As I said, one norm within the scientific community encourages privately contacting researchers with your critiques and attempting to work issues out that way before posting them on your blog. Another norm is composing critiques in a formal manner and submitting them for peer review, so that your critiques are subjected to similar standards. A third is that you should actively discourage personal attacks--not just from yourself, but from your fans. These norms aren't there to repress free thought or free speech or to support Big Science. They encourage better science and reduce hostility within the scientific community. Following them wouldn't have impacted the quality of Gelman's valid critiques, and he'd have been more likely to actually change Cuddy's mind (and the minds of any remaining supporters) while not throwing out chum for a bunch of MRAs.
"On the occasions that I have contacted researchers directly with questions and criticism, it hasn’t always gone well, and I do think that once work is published (and certainly when it’s published in one of the top journals in the field), it should be open to criticism from all, and I think that any implicit expectation or norm of contacting the original researchers could be a bad idea to the extent that it raises a barrier to criticism, if even a small one. But that’s just my view, and to be sure there are costs either way."
Perhaps Gelman is exceptionally unlucky. But given his characterization of sending a damn email as "interpersonal conflict" (as opposed to blog posts?), I suspect he finds talking with researchers to be less rewarding than putting up a blog post and getting a bunch of cheers. Also, it's as true in science as it is anywhere else: if you keep running into assholes, it's possible you're the asshole.

Finally, I'm just going to repeat what I said in a previous comment:

Gelman is allowed to write whatever he wants, it's his blog. But if you claim you have an honest desire to fix a problem, you don't get to pretend the context and presentation of your criticisms and your suggestions have no effect on the success or failure of that ostensible goal.
posted by Anonymous at 3:57 PM on October 19, 2017


Gelman seems to come across as worse in the article, but Simmons and Simonsohn seem pretty bad, too. It seems especially cruel to say one thing in a pre-publication email about an article and then follow that up with a "damning" blog post after the article comes out. Without speculating about their motivations, how could anyone trust those two after that?
posted by HiddenInput at 4:04 PM on October 19, 2017


Cuddy and Carney's response to the pre-publication critique strikes me as disingenuous. The whole point of the revised p-curve was to show that the publications they were citing in their paper were untrustworthy. Yet they removed their erroneous p-curve and kept the publications without remarking on the suspicious p-value distribution.
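For anyone who hasn't followed the p-curve argument: a p-curve is just the distribution of the statistically significant p-values reported across a set of studies. Real effects pile the significant p-values up near zero; p-hacked null effects pile them up just under .05. A toy simulation of that intuition (mine, with made-up parameters; not the published p-curve method):

```python
# Toy p-curve intuition: where do "significant" p-values land when an effect is
# real, versus when a null effect is nudged under .05 by adding subjects until
# it works?  All parameters are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def significant_p(effect, n=20, max_extra=30):
    """Run a two-group study; if not significant, keep adding subjects and re-testing."""
    a, b = rng.normal(effect, 1, n), rng.normal(0, 1, n)
    for _ in range(max_extra):
        p = stats.ttest_ind(a, b).pvalue
        if p < 0.05:
            return p
        a = np.append(a, rng.normal(effect, 1))
        b = np.append(b, rng.normal(0, 1))
    return None          # never reached significance; wouldn't get published

real   = [p for p in (significant_p(0.8) for _ in range(2000)) if p is not None]
hacked = [p for p in (significant_p(0.0) for _ in range(2000)) if p is not None]

for label, ps in [("real effect", np.array(real)), ("p-hacked null", np.array(hacked))]:
    print(f"{label}: {np.mean(ps < 0.025):.0%} of significant p-values fall below .025")
# With a real effect most significant p-values sit far below .05 (right-skewed p-curve);
# under p-hacking they cluster just below .05 (left-skewed p-curve).
```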
posted by Coventry at 8:24 PM on October 19, 2017 [1 favorite]


Also, it's as true in science as it is anywhere else: if you keep running into assholes, it's possible you're the asshole.

Oh, I bet that's what Galileo's problem really was, too.
posted by Coventry at 9:50 PM on October 19, 2017 [1 favorite]


There are so many threads here that overlap with my personal interests. In addition to just knowing the Gelman cracks from reading his blog, there's the general issue of small sample sizes and bad statistics in science, the professional responses to it, the mess of science reporting, the internal psychology of grappling with a high-profile error, the celebritization of "science" in TED Talks and by university press offices, women (and misogyny) in science, academic hierarchy issues, and on and on.

I can't go on about all of those, so let me beat on the NYT a bit for being ever more like the NYT:

I asked Gelman if he would ever consider meeting with Cuddy to hash out their differences

A reporter has a story about young firebrands uncovering bad practices within a profession, then airing out professional dirty laundry in public. And the reporter's instinct is to sympathize NOT with the people bruising egos in the name of getting truth out there?! They side with the insular, "we have ways of handling these matters internally" mindset. Seems to think it's sad it wasn't worked out quietly, through the peer review process.

#NotAllJournalists, but too many. They, like Cuddy, are chasing the TED Talk and the bestseller. Of course it seems cruel of the universe to give someone the big prize and then yank it away. The nature of the profession has shifted the big papers' point of view to instinctively identify with the polite professionals.
posted by mark k at 9:52 PM on October 19, 2017 [2 favorites]


Oh, I bet that's what Galileo's problem really was, too.

Ah, yes, because nobody embodies the persecuted underdog like a guy with a massively popular personal blog and a well-known Washington Post column who attended MIT and Harvard and has tenure and directorship of a research center at Columbia.
posted by Anonymous at 12:08 AM on October 20, 2017


P-hacking sucks, but these folks who do this academic chasing, trolling, etc. can be awful.
A friend was in the middle of a replication-oriented attack (she's a woman, unsurprisingly) and the constant attacks were just terrible.
posted by k8t at 3:29 AM on October 20, 2017 [2 favorites]


Take a cue from law enforcement: "I can't comment on any ongoing research."
posted by rhizome at 10:19 AM on October 20, 2017 [1 favorite]


At the risk of sounding persnickety: all research is ongoing. You never really stop. Comments and critiques are a necessary part of science and should be expected. But the method by which you do so is pretty important.

A reporter has a story about young firebrands uncovering bad practices within a profession, then airing out professional dirty laundry in public. . . . They side with the insular, "we have ways of handling these matters internally" mindset. Seems to think it's sad it wasn't worked out quietly, through the peer review process.

"Young firebrands"? Have you seen their CVs?

Andrew Gelman is 52 and has been tenured at Columbia for nearly twenty years. For over a decade he's been running a research center there that they created for him. He has cultivated an extremely high profile and is one of the most well-known people in his field.

Both Simmons and Simonsohn are around the same age as Cuddy, in their early 40s. Uri Simonsohn went to MIT for his predoc courses and Carnegie Mellon for his PhD, and got tenure at the Wharton School at UPenn in 2011. He's a reviewing editor for freakin' Science. Of the three, Joe Simmons is perhaps the closest to a "firebrand upstart", as he's ~only~ been tenured since 2014, and his Elite Education extends ~only~ to a Master's, PhD, and postdoc at Princeton. Then he had a professorship at Yale where he apparently did well enough that when he transferred to UPenn he got tenure after only three years. As was stated in the article, the Data Colada blog they run with Nelson puts a target not only on specific research papers, but on a researcher's entire body of work. They've literally ended research careers--and I don't mean just Cuddy, I mean the resignation of tenured professors.

Cuddy is no slouch--PhD at Princeton, and was teaching at Harvard Business School--but as of two years ago her CV indicates an academic career that's been far from smooth, especially compared to the guys above. Her history is much more typical: bounce around various adjunct-type positions at different schools until you get that tenure-track opportunity. Which, much like her academic career, is blown.

My point is that you shouldn't get it twisted: Cuddy was not the person wielding the power here and she did not represent the academic establishment. They did. Academia gives very few shits about best-selling pop-psychology books if people like Gelman are criticizing you. The only "establishment" on Cuddy's side were the professional courtesies one normally tries to extend to other members of the scientific community.

You don't seem to understand that if Gelman and Data Colada (particularly Gelman) didn't wield the power they do, they wouldn't have been able to get away with violating those norms in the first place. I fault Gelman here more than Data Colada--like I said, it appears the latter do try. But if Gelman actually were a young firebrand, a bright mind scrabbling for tenure or even just a tenure-track job, his practice of jumping straight to broadcasting his criticism to the media and general public would likely jeopardize his advancement at any major university. Not because of the act of offering a critique, not because of the quality of the critique itself, but because he didn't first present it in private, in good faith, offering the researcher the opportunity to correct themselves. At best, this behavior would raise some hackles and questions about his interest in building a stronger community. At worst, he'd be suspected of trying to get ahead by destroying the careers of other scientists. I don't think it is a coincidence that, between Gelman and Data Colada, the guys who hew closer to the scientific norms of critique are the ones who are earlier in their careers.

Perhaps you think this is unfair. But I think the guy who yells "HEY FUCKHEAD, GET ME SOME WATER" at his waiter is an asshole who's being counterproductive, even if it is the waiter's job to bring him water.

--------------------------------------

Before you refuel your indignation, I will reiterate for the nth time:
  • I am not defending Cuddy's work or the validity of her research.
  • I am not saying Gelman or Data Colada are wrong.
  • I am not saying Cuddy is now a poor little match girl, as if the publicity, praise, and fortune she made from her books and speeches mean nothing.
  • I am not saying Cuddy's refusal to admit she was wrong is defensible.
  • I am saying that being technically correct is not actually the best kind of correct, and that the point of the article is not that Cuddy was right, but that it raises the question of whether the manner and degree of the criticism she was subjected to were justified.
posted by Anonymous at 1:01 PM on October 20, 2017


Ah, yes, because nobody embodies the persecuted underdog like a guy with a massively popular personal blog and a well-known Washington Post column who attended MIT and Harvard and has tenure and directorship of a research center at Columbia.

Not saying he's a persecuted underdog, just that if you attack the concepts underpinning entrenched power structures (in this case the researchers fluffing their publication records with garbage results), the people who stand to lose are going to behave like assholes towards you. So your insinuation that Gelman might be an asshole because he doesn't want to sing Kumbayah with embarrassed researchers doesn't stand up to close examination, and was a fairly unpleasant thing to suggest.
posted by Coventry at 1:11 PM on October 20, 2017 [1 favorite]


I would argue "doesn't want to sing Kumbayah" is a fairly unpleasant characterization of "allowing for the possibility that the researcher was not operating out of malicious intent and not stringing them up before giving them a chance to correct themselves," but OK.

I am in favor of building a scientific community in which we treat one another like human beings and don't immediately unleash an offensive unless the situation obviously warrants it. So, like, say you're writing an algorithm for interpreting a set of imaging data. You think it's good, your collaborators think it's good, other people at your institution think it's good. It uses standard methodology, and your paper employing it goes through peer review. Then say someone believes it's overly aggressive, that it's showing results that aren't there. What seems more productive: assuming your error stems from ignorance rather than malice, and bringing their concerns to you first? Or dragging you in front of their 100k followers, implying you're a resume-padder and a fraud, inviting a torrent of abuse on you from onlookers and ultimately destroying your scientific career?

I mean, as long as we're deciding that going straight to social media, bypassing any sort of scientific community or engaging in any kind of research discussion, is totally excellent and brings about needed paradigm shifts, then why bother with peer-review at all? Why not just throw up all of our research on social media first? Who are we to claim the general public shouldn't be the ultimate arbiters of what is and isn't true in science? We know peer-review has some gatekeeper issues within science, so let's toss it out entirely, fuck peer review, it's a process established by elite Ivory Tower snobs who only prop up the importance of "scientific education" and "research experience" as a way to keep the common man from knowing the truth. Why should we be wary of the effects personal popularity and unwitting appeal to public biases might have on who is pilloried and who is celebrated?

If you don't see the issues I don't know what to tell you. To me it seems obvious that behaving in good faith and working towards building a community of collaboration is more likely to produce good science and scientists that listen to and welcome critique--not less. Everything I've seen in academia indicates the scientific community and scientific progress are damaged when we elevate egos, dismiss the importance of empathy, ignore questionable behavior if the offender is powerful and popular, and then claim it's all in the name of "excellence" or "not squashing talent" or some bullshit like that. Then again, I'm just a queer woman studying biophysics who's been doing research on the difficulties of POC trying to navigate post-graduate education and careers in STEM. What could I know about the influence of biases and power structures on one's willingness to violate social norms, or on one's choice of target, or on the degree of backlash for real or imagined infractions, and the ultimate effect this has on the belief among under-represented minorities that the amount of recognition or denigration they'll receive for good or bad results will be equal to that experienced by a white dude?
posted by Anonymous at 3:04 PM on October 20, 2017


nope, I'm just a tool of The Establishment and The Man who's attempting to quash the brilliance of a brave white man with two decades of tenure at an Ivy League school who is fighting the system, yessiree
posted by Anonymous at 3:05 PM on October 20, 2017


It's unfair to claim that he didn't allow for those possibilities. As far as I can tell, he started discussing the failed replication in Sep 2015, and his most hostile suggestion was that perhaps the TED talk should be retracted. The failed replication was published at the end of May; Cuddy and Carney's response was apparently published in April. It's not like Gelman jumped down Cuddy's throat and started impugning her motives as soon as the results came out. It had been a matter of public record for months, and Cuddy and Carney had had ample chance to respond. And he didn't start bringing attention to her response to the failure until Jan 2016:
Our point here is not to slam Cuddy and her collaborators Carney and Yap. We disagree with their interpretation of the statistics and are disappointed that they don’t seem to consider the possibility that their published result was spurious. But it is natural for researchers to feel strongly about their own research hypotheses. Outside research teams have attempted replications, these null results were themselves published, and science is proceeding as it should.
Even still, he's not impugning her motives at this point.
posted by Coventry at 3:59 PM on October 20, 2017 [4 favorites]



I am in favor of building a scientific community in which we treat one another like human beings and don't immediately unleash an offensive unless the situation obviously warrants it.


Are you saying Gelman didn't treat Cuddy like a human being?
posted by MisantropicPainforest at 5:28 PM on October 20, 2017


Upthread I lobbed a stink bomb at the replication adherents, and didn't have time during the week to fill it out any more. Even though the discussion has quieted down, I wanted to offer a few things.

First--it's absolutely ridiculous that statisticians somehow feel entitled to declare what is "bad science" or "good science". In my professional role I have a very clear view into how computation and statistics are applied in a wide variety of contexts. Bottom line: Computational and statistical methods are tools, period. Yes, it may require someone with great expertise to use the tool appropriately and well. But for computational and statistical methods to be used correctly, they need to be used in intimate collaboration with deep subject matter expertise. Otherwise they are worse than useless, and I mean that in the most literal sense.

In my observation, people who are effective computational and statistical team members have deep respect for the expertise of their experimentalist counterparts. This is of course completely lacking in the brouhaha about Cuddy's work. The peanut gallery (and in this I include Gelman, who comes off like a completely insufferable prick throughout) somehow thinks that they have expertise equivalent to that of someone with a doctorate in the field from fucking Princeton. Give me a fucking break.

Whether something is "good science" or "bad science" has to do with every part of the scientific method. That includes making good, novel observations and asking good, novel questions, which Cuddy and collaborators absolutely did.

What about their results? I think the replication people need to get.the.fuck.over.themselves. Let's come right out and say it: designing experiments and getting clear results about people and biology is just plain more complicated and harder than--wait for it--math.

The pearl clutching about investigator bias and cherry picking of data and angling for impact scores is such a fascinating example of groupthink. As if the investigators who attempted to "replicate" these experiments were any less subject to those biases. As if the investigators who attempted to "replicate" these experiments didn't know from the very outset which result would lead them to more professional impact and visibility. As if declaring one's fidelity to statistical robustness somehow conferred the Scientific Good Faith Seal of Approval. Did we somehow forget that statistics used to be the next worst thing in the series after "lies" and "damn lies"? In this whole ridiculous boondoggle, the experimentalists who say they're trying to "replicate" the experiments are about as believable as Pruitt trying to "lead" the EPA.

Did Cuddy et al get it right or get it wrong? I have no clue--and neither do you. The peanut gallery gives the impression that reviewers should have blocked publication until they'd done an 800-person sample complete with full medical history workup, endocrine screen, DNA and metabolomics profiling, microbiome analysis, and full machine-learning-aided interpretation of every potential biological mechanism to explain their observations. Complete stupidity about the reality of experimentation. That would be actual bad science for a modest observational study. Come on, people.

Lastly, there are other facets of this that are fascinating examples of groupthink:

Cuddy gets pilloried for lack of replication--and in the same breath, criticized for the enormous popularity of her book and TED talk. Let me get this straight: thousands (tens of thousands?) of people have tried and will attest to the effectiveness of her suggestion about power posing, and somehow she is faulted for not having enough evidence of replications of the effect? Next question: Does it matter to the people who use this technique and find it effective, what their quantified levels of cortisol or testosterone are? Is it important to know what the biological basis of this effect is, for it to work? No. Would that be interesting? Yes. Do social psychologists have the experimental expertise to definitively discover these mechanisms? No. Can we credit them for a shrewd observation and smart questions? Yes.

About that affirmation among lay people: the replication people have such a hard-on for all things Evidence Based. They may also suffer the common confusion that reading about a thing is equivalent to actually understanding a thing. The folks who I've met who cling most closely to the replication fetish have this problem in spades. My question to them: in the midst of all the bitching about it, who has actually tried it? Cuddy's experiment could not be more accessible. This is not studying neutron stars or population genetics. I'd bet large money that most of the people who critique her most bitterly have never even thought about trying what she suggests. So much for, you know, evidence.

Finally, I observe massive overlap between replication fetishists and the whole tech bro culture--which also leans heavy on the unrepentant capitalism. So it's jarring to read the "how dare she profit from this" critiques. Which is it, dudes? Kinda makes you think there's something more going on here than adhering to principle.
posted by Sublimity at 7:25 AM on October 21, 2017 [3 favorites]


"Do social psychologists have the experimental expertise to definitively discover these mechanism? No."

I think they should, either by training, or by working with people who have that expertise (or both), for any technical claim. This is what separates academic articles from self-help books. People listening to the TED talk didn't just hear "This worked for me, this will work for you"; they heard a scientist saying there was evidence that these poses change your body chemistry.

I am very puzzled that you seem to be against trying to replicate results in science. You are certainly right that experimental design is more than just statistics, but you can't be oblivious to the poor quality of both experimental design and statistics in a bunch of scientific fields, due to the nature of the academic systems involved.
posted by demiurge at 9:36 AM on October 21, 2017 [2 favorites]


If you are suggesting that the bar for acceptable evidence for social psychology publication is ironclad evidence of physiological mechanism, I would say that you are off your freaking rocker. This is a prime example of the "missing the forest for the trees" problem I spoke of in my first comment. Cuddy et al had an observation and tested the things that were reasonable and relatively easy to test, as befits a first exploratory study. Maybe it will be validated and supported, maybe it won't. Will it expand the field of inquiry? Is it worth pursuing or not? This happens in every single field, every single paper. Bottom line is it is doing what science is supposed to do, and the drama-filled pronouncements of "bad science" are asinine.

Your puzzlement is off base. I am perfectly aware of how theories accrue support and strengthen through validation. The witch hunt around this specific case is ridiculous, overblown, and unprofessional.
posted by Sublimity at 9:59 AM on October 21, 2017 [1 favorite]


If you are suggesting that the bar for acceptable evidence for social psychology publication is ironclad evidence of physiological mechanism,

No, we are saying that experimental studies with extraordinary claims resulting from an n of 20, where there is clear evidence of p-hacking, should not be published in top journals, and when they are published, they should be criticized.

My question to them: in the midst of all the bitching about it, who has actually tried it?

They did. With more participants. Cuddy's main results could not be replicated.
posted by MisantropicPainforest at 10:17 AM on October 21, 2017 [3 favorites]
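(To make concrete why that combination of a tiny sample and flexible analysis worries people, here is a minimal simulation sketch, using a toy setup of my own rather than anything from the Cuddy or Ranehill data, of how measuring several outcomes at n = 20 per group and reporting whichever one clears p < .05 inflates the false-positive rate well past the nominal 5%.)

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def one_null_study(n_per_group=20, n_outcomes=3):
        # Two groups with NO true difference, measured on several outcomes;
        # call the study a "finding" if any outcome reaches p < .05.
        for _ in range(n_outcomes):
            treated = rng.normal(size=n_per_group)
            control = rng.normal(size=n_per_group)
            _, p = stats.ttest_ind(treated, control)
            if p < 0.05:
                return True
        return False

    n_sims = 10_000
    rate = sum(one_null_study() for _ in range(n_sims)) / n_sims
    print(f"nominal alpha 0.05, observed false-positive rate ~ {rate:.2f}")
    # With three independent outcomes this lands near 0.14 (about 1 - 0.95**3);
    # flexible stopping rules and optional covariates push it higher still.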


experimental studies with extraordinary claims

Neither of us are social psychologists, but I don't see any extraordinary claims there? Posing like a big important person can make you feel important? And maybe that will be reflected in the underlying chemistry that regulates emotion? That doesn't seem extraordinary to me. Why wouldn't we think that acting like someone who's important makes you feel more important? You're making this sound like the ESP studies or Courtney Brown talking with aliens.

from an n of 20

Eh. What will happen in the real world if journals insist on large-n studies only is that some grad students will have a good-seeming idea, test it on the 20 subjects they can afford with internal funding, and take it to a conference for comment. Then some PI somewhere who can afford to do the 200-person study without having to take out a new grant does it and gets the publication that gets hundreds of citations, while the people whose actual idea it was get cited in it, and no pub, and no job.
posted by GCU Sweet and Full of Grace at 11:10 AM on October 21, 2017 [2 favorites]
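(For a sense of the trade-off being described here, a rough power calculation; the assumed effect size of d = 0.3 is mine for illustration, not an estimate from any of the studies under discussion.)

    from statsmodels.stats.power import TTestIndPower

    solver = TTestIndPower()
    for n in (20, 200):
        # power of a two-sided, two-sample t-test at alpha = .05
        power = solver.power(effect_size=0.3, nobs1=n, ratio=1.0, alpha=0.05)
        print(f"n per group = {n:3d}: power ~ {power:.2f}")
    # Roughly 0.15 at n = 20 versus about 0.85 at n = 200: the cheap study will
    # usually miss a real effect of that size, and the significant results it
    # does produce will tend to overstate it.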


Are you saying Gelman didn't treat Cuddy like a human being?

Honest question: have you read any of my posts, and do you have some disagreement with the overall argument? Or do you just scan them for the most inflammatory sentences so you can quote them as if they were the sum total?
posted by Anonymous at 1:14 PM on October 21, 2017


In this whole ridiculous boondoggle, the experimentalists who say they're trying to "replicate" the experiments are about as believable as Pruitt trying to "lead" the EPA

This seems super dooper ignorant to me, Sublimity. I believe that pretty much all replication studies are in fact carried out by PhD scientists from the same fields (in this case, social psychologists) as the original studies? They are almost never outsiders.

Hmmmm, I kinda feel like you're using your own personal experiences interacting with statistics to project a whole lot of stuff onto "replication" and the people doing it, stuff which is not at all reflected in reality. It honestly doesn't sound like you know very much about this, or about the replication "crisis" in general?

An analogy: you say it doesn't matter if Cuddy is wrong about the biological mechanism, and that thousands of people saying it works for them is a kind of replication; it doesn't matter if Andrew Wakefield is wrong about the biological mechanism of vaccines causing autism, thousands of people say it has caused their child's autism...

Surely you can see how problematic this reasoning is and that facts actually do matter?
posted by smoke at 2:07 PM on October 21, 2017 [2 favorites]


Mod note: Couple comments deleted; please skip the "how dare u"/pistols at dawn stuff. If you have points to make, just go ahead and discuss them.
posted by LobsterMitten (staff) at 7:43 PM on October 21, 2017


Next question: Does it matter to the people who use this technique and find it effective, what their quantified levels of cortisol or testosterone are? Is it important to know what the biological basis of this effect is, for it to work?

Er....Coincidentally, I recently read this book. Regardless of the issues of how her study went, I thought it was a really intriguing discussion of power vs. powerlessness and how, while power corrupts, it can also make you generally feel great about yourself and more capable and more risk-taking. Well, at least up to that point. I am a powerless person--I generally feel like I have to bow and scrape and apologize, apologize, apologize constantly at work--and I can't deny it's a shitty mindfuck to have to live in. (Though I suspect I'd get myself super fast fired if I started standing up for myself and saying no in this job big time, so...it's necessary. I've had enough trouble from angry people in this last year that I'm not willing to risk it.) She certainly cited a lot of other people's work on those topics, so I think she builds a case for trying to find ways to make yourself powerful. Because if you're fearful and bowing and scraping and having to appeal to higher-ups, it doesn't make your brain work in super great ways.

As was said above, hell if I know if this is quantifiable or not. But I have some quotes I wrote down from the book today that I thought were interesting:
Rebecca, a mother who says her daughter’s grades have gone up since power posing before tests: “It may be a case of Dumbo’s feather, I can’t be sure, but even if it’s fake (which I don’t think it is), has given my daughter so much confidence in herself and her ability to perform under pressure, it is a miracle to behold.”

At one point a horse trainer named Kathy sent in a letter saying she used power posing on horses....
“So, having thought about your work, I devised an exercise that would cause him to physically “act” like a badass (by chasing something as a predator would, trying to strike or attack it, which is what horses do in play or when flirting). It was wildly successful beyond any expectation I had.”
Amy: “In a way this is the most convincing anecdotal evidence of all--nobody told Vafi or Draumur or any of the other horses what power posing was supposed to do. Kathy and I discovered that trainers have been getting their horses to power pose for a long time-- for more than two thousand years, in fact:” She then cited a paragraph by Xenophon about training a horse to show off way back in the day.
It may all be Dumbo feathers and placebos for all we know. But if it works for some people....or hell, even horses...?
posted by jenfullmoon at 11:23 PM on October 21, 2017


I just read the article, I thought it was fair to all sides, and I'm unhappy (though not surprised) about all the people in this thread who are all "Yeah, that Cuddy woman sucks, she isn't a real scientist, anyone who doesn't want to stomp her into the ground is just a tool for groupthink and doesn't care about science!" Somebody up there compared this attitude to the techbro ethos, and I think that's a good comparison. I feel sorry for Cuddy, who is imperfect (like all of us) and didn't react well to being attacked (who does?); most of the people leading the charge against her seem like... well, only some of them seem like assholes, but most of them come across as caring more about their ideas of perfect science in a perfect world than about the actual human beings who try to do the science while trying to make their way in a world which is stacked against them, especially if they are women.

And no, that doesn't mean I approve of sloppy methodology in science and think rigor is unnecessary. The new emphasis on reproducibility is a good thing. But that doesn't mean grabbing some random woman (yes, yes, I know she dared to make money by "peddling" her idea, god forbid anyone anywhere should profit from their ideas, certainly, say, Richard Dawkins would never dream of such a thing) and jumping on her repeatedly is also a good thing.
posted by languagehat at 12:46 PM on October 22, 2017 [5 favorites]


Misantropic Painforest, smoke: thank you very much for perfectly exemplifying what I was talking about.

To my statement: the replication people have such a hard-on for all things Evidence Based. They may also suffer the common confusion that reading about a thing is equivalent to actually understanding a thing. The folks who I've met who cling most closely to the replication fetish have this problem in spades. My question to them: in the midst of all the bitching about it, who has actually tried it? Cuddy's experiment could not be more accessible. This is not studying neutron stars or population genetics. I'd bet large money that most of the people who critique her most bitterly have never even thought about trying what she suggests.

MP responds: They did. With more participants. Cuddy's main results could not be replicated.

Thank you; I rest my case.

OK, I will admit that I am making an assumption, which may be incorrect, that you are able to control the movement of your body and limbs. If you can't, it's likely to be difficult to try Cuddy's experiment yourself, and I apologize for the presumption. However, for people who have typical physical capability, it is pretty simple to hold the poses from Cuddy's study and reflect on your own, lived experience.

Here's what's missing from your "evidence-based" perspective: You can try this actual experiment in the privacy of your own home! You can make your own data points! In a matter of mere minutes! Heck, if you wanted to get really crazy, you could get a couple of dice and bring 'em to Thanksgiving and conduct a small-scale study with your whole family!

Clue that you're bullshitting about being "evidence based": it never even occurred to you to do the experiment, even though it's about as easy as tying your shoe.

Clue that you're not a scientist: When you take in the totality of the evidence--including Cuddy's original study, the study that did not replicate it in the lab, and the self-reports of very many laypeople--your response is "this is disproven". If you were an actual scientist, your response would be: "what is it in these contradictory results that we don't understand yet?" It would be to formulate further questions. Because in both studies, people agree that holding "power poses" lead them to feel powerful. In both studies there are many participants who have results in line with expected results--so it's not completely ridiculous and out of line. So what is going on with the people who don't? Are they sick or hung over? Do they have pain or other contraindication that interfered with the cortisol response? Are there any factors that relate to unease with personal power, such as abuse history?

Did the authors of the challenge paper look at any of these factors? Did they allow for the possibility that there were some confounding factors that might have led to a different result than the first paper? Why do you suppose their reviewers didn't hold their feet to the fire about providing experimental evidence to address such things?

smoke writes: This seems super dooper ignorant to me, Sublimity. I believe that pretty much all replication studies are in fact carried out by PhD scientists from the same fields (in this case, social psychologists) as the original studies? They are almost never outsiders.

This seems sooper dooper naive to me, smoke.

If you think that the human propensity to put the thumb on the scale was a factor in the first paper, why aren't you regarding the follow-up with the same skepticism? Do you somehow think that hewing very strongly to quantitative approaches prevents fraud and science in bad faith? Think again.

Then: An analogy: you say it doesn't matter if Cuddy is wrong about the biological mechanism, and that thousands of people saying it works for them is a kind of replication; it doesn't matter if Andrew Wakefield is wrong about the biological mechanism of vaccines causing autism, thousands of people say it has caused their child's autism...

Surely you can see how problematic this reasoning is and that facts actually do matter?


Facts do actually matter. I would encourage you to better acquaint yourself with what constitutes a "fact", and how it "matters".

Clue that you yourself are the one who doesn't understand about replication: you're comparing the robustness and validity of an idea that has been tested in two small experiments with a minuscule number of data points (power posing) with a conclusion drawn from literally decades worth of epidemiological data (vaccines and autism). Give me a fucking break.

It is ridiculously naive to think that because a single finding has been published in a peer-reviewed journal that it is an ironclad fact. That is not how discovery works. That is not how understanding builds. I regret to inform you that, no matter how high editorial standards may be, there are no scientific journals that require perfection and infallibility. I regret to inform you that the practice of science is not merely showing our work and comparing our answers to some pre-published answer key that Mother Nature hands out when you get a PhD.

The whole point of original research is to try to push the envelope of understanding about stuff we don't know yet. Even when we do a really good job at experiments, sometimes we are right and sometimes we are wrong. VERY often this isn't known until the field has grown and matured for a while and those novel results gain context. Remember that the literal definition of original research is to do something that no one else has done before. Even in a very well controlled experiment there may well be factors that cannot be known because by definition they have not been described yet.

This is a common confusion among people who think they understand "science" because they have learned science facts. Knowing science facts is different than the actual process of doing science. You are confusing the ends of stories with the middle of stories. (Spoiler: sometimes the ends of the stories change, too.)

So, let's touch on the concept of "matters". Autism matters. Understanding what causes autism matters. Protecting people against preventable, serious infectious disease? Also matters. Do you remember what happened when all those parents started raising the concern that vaccines were related to autism? You may remember that science took it seriously and looked at the data. You may also have noticed that, when vaccines did not turn out to be the causative factor, that science did another thing: it kept looking. Lots of people trying to figure out what causes autism. Lots of people working really hard to reassure nervous new parents about the safety of vaccines. Both parts of that story? Super important. Life and death. Serious sequelae on all sides. Matters. Definitely worth the big deal.

OK, now compare that to the public health impact of the contentious issue of whether power posing raises your testosterone. Go ahead.

Remind me again why this is such a big brouhaha?

You are accusing *me* of projecting *my* issues?
posted by Sublimity at 6:26 PM on October 22, 2017


This is getting a little intense!

I don't think that measuring levels of cortisol in the blood is a test at home kind of a thing, least of all in a rigorous way.

I do still maintain that replication studies are mostly carried out by scientists in the same field, contrary to your original contention.

I am less skeptical about the second study because it was done with a group nearly ten times larger for a start, and did not show evidence of p-hacking.

I absolutely agree with you that science is an evolutionary process, and I kinda feel that Cuddy's doubling down is more indicative of wanting to stop the clock, as it were.

I do still think that power-posing and its hormonal effect matters, whether it's saving lives or not.

Hey I'm really sorry if I offended you before, I don't know what your background or expertise is, just as you do not know mine. As my first comment reflected, I think there is a lot to be said here about sexism, the way media reports on science issues, our own desire for narratives etc. But I don't think that's all there is to it, and as someone who's read Gelman's blog for years, I do think he has been given short shrift in this piece, and I feel like he shouldn't be judged solely on this - especially not when taking into account the broader context of the "replication wars" and some of the shitty behaviour that has occurred.

And I do think it's problematic that Cuddy is so confident in results that are very shaky (so much so her collaborator has essentially refuted them!) and has presumably not taken the opportunity to replicate the original study herself with a larger group.
posted by smoke at 7:28 PM on October 22, 2017 [1 favorite]
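(A toy sketch of the intuition behind "evidence of p-hacking", in the spirit of the p-curve idea that Data Colada popularized, though this is my own illustration rather than their code: when a real effect exists, the significant p-values from honest studies pile up near zero; a crop of just-barely-significant p-values hugging .05 is what raises eyebrows.)

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    def significant_pvalues(true_effect, n_per_group=20, n_studies=20_000):
        # Simulate many two-group studies and keep only the p-values below .05,
        # i.e. the results that would get written up.
        ps = []
        for _ in range(n_studies):
            a = rng.normal(loc=true_effect, size=n_per_group)
            b = rng.normal(loc=0.0, size=n_per_group)
            _, p = stats.ttest_ind(a, b)
            if p < 0.05:
                ps.append(p)
        return np.array(ps)

    for label, effect in [("real effect, d = 0.5", 0.5), ("no effect at all", 0.0)]:
        ps = significant_pvalues(effect)
        print(f"{label}: {np.mean(ps < 0.025):.2f} of significant p-values fall below .025")
    # With a real effect, most significant p-values are tiny (a right-skewed
    # p-curve); under the null they split roughly 50/50, and selective analysis
    # choices pile the reported ones up just under the .05 line.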




it is pretty simple to hold the poses from Cuddy's study and reflect on your own, lived experience.

I honestly don't know how you could have read this far and commented this intensely while missing the most important point of contention about Cuddy et al's study and its ensuing replication: changes in biomedical markers could not be replicated. That was their big claim. Self-reported feelings of strength don't tell us much, which is why if that is all the study consisted of it wouldn't have been published in a top journal.

your response is "this is disproven".

No one here said that anything was "disproven"--only that the original findings were most likely the result of p-hacking and that they could not be replicated.
posted by MisantropicPainforest at 3:50 AM on October 23, 2017 [2 favorites]


Two out of the three measures in their study (feelings of power, attitude toward risk) you can do at home with nothing more complicated than dice. Knock yourself out.

Also, just cut the disingenuousness about "disproven".

Finally, another clue. It doesn't reflect well on you. Self-reported feelings of strength absolutely do tell us much and matter intensely to human beings. That you disregard the validity of that measure says a lot about your bias against social psychology, but it also is a major part of the "forest for the trees" remark.

What the hell do we do science for, exactly?

Another story for you. I've got a PhD in biochemistry and have worked in the biomedical sciences for the last 20+ years. It is incredibly common for research in the biomedical sciences to be tied one way or another to impacts in human health. Sometimes the people doing basic research actually give a shit about the impact their work might have on sick people, but in basic science these connections are speculative in the extreme. It's not very difficult to develop a cynical point of view that researchers really just want to research whatever their interest is, and it's easier to get funding if you can link it to something to do with human health and disease.

Several years back at a molecular medicine conference I attended, the keynote speaker was a fashion photographer. This man had an amazing story and basically revolutionized how people with genetic differences are presented in the medical literature and beyond by photographing them as beautiful human beings. You can read or see more here. Photography is crucial to representing the anatomical characteristics of genetic syndromes, to doctors and families with newly-diagnosed children. Historically, these photos had been artless and dehumanizing, which was horrifying for the families and the individuals themselves, who were usually grappling with difficult news about other health impacts. When this fashion photographer started working with advocacy groups, he got photographers to make photos that clearly showed the diagnostic features of the syndromes--on kids just being kids, people just being people. The effect of being depicted in a beautiful and tender way was profoundly impactful on the families and the individuals themselves, and in how they were accepted and treated by medical practitioners as well.

Hope, acceptance, self esteem, beauty all matter. Self-perception matters.

And it mattered a whole hell of a lot more to the people in that cohort than the distant promise of someone, someday figuring out an explanation or a mechanism or an intervention that might possibly impact a person with the same condition. Maybe.

I tell this story because the effect on the scientific audience was also profound. We were all stunned. It was a powerful confrontation for very many people in the room. Do you really care about (fill in the blank) or are you only using it to further your careerist ends? If you really do care, what is the likelihood that anything you do is going to have an impact on peoples' lives as much as that fashion photographer has?

How does this relate to Cuddy? Did you watch her TED talk? She's clearly brilliant, her work absolutely has merit, and she used her visibility and status to bring something simple and free and impactful to people who could be helped by it. And her career got shredded because of a bunch of myopic, self-important tech bros who couldn't science their way out of a wet paper bag?

Disgusting.
posted by Sublimity at 5:01 AM on October 23, 2017 [1 favorite]


where are we getting this techbro thing? Is it because one of the statisticians played softball in grad school?
posted by logicpunk at 5:52 AM on October 23, 2017


Did you read the article? Did you read any of what we've discussed so far?
posted by Anonymous at 7:55 AM on October 23, 2017


Did you read the article? Did you read any of what we've discussed so far?

Is this your standard rhetorical approach? It is condescending. Do you expect me to say "Oh, shit, you're right, I just opened up a random fucking thread and asked this question that just happens to refer to details contained in the thread itself as well as the linked articles."

I don't see the fucking link between "tech bros" and people who advocate reproducible science. It is not obvious. I know scientists who are part of replication efforts, and they are not fucking "tech bros." They are diligently working to correct a pervasive fucking issue in the fucking literature. They do not have to devote their time to replications - they choose to do so because it is important for all scientists to be able to have some fucking faith in what is reported in the journals. It is a lazy, shitty insult to many sincere people who are trying to improve their field.
posted by logicpunk at 9:02 AM on October 23, 2017 [1 favorite]


I am all for serious people who know what they're doing to make positive change in the quality of science. See previous remarks about statistical and computational team members.

I object to bystanders in the peanut gallery who turn their limited understanding of the issue into cause celebre and ruin someone's career. That's where the tech bros come in.
posted by Sublimity at 9:25 AM on October 23, 2017 [1 favorite]


Is this your standard rhetorical approach? It is condescending. Do you expect me to say "Oh, shit, you're right, I just opened up a random fucking thread and asked this question that just happens to refer to details contained in the thread itself as well as the linked articles."


You reduced the entire discussion to a comment about softball, in the middle of an article and a discussion that made it pretty damn clear the criticisms were not in any way tied to anyone playing sports. Forgive me for assuming you hadn't read anything.

Would you like to offer your thoughts on why you believe softball is the root of our criticisms? Perhaps even provide a refutation of the current arguments, or at the very least why you believe softball is a better explanation for why people are upset with the treatment of Cuddy?
posted by Anonymous at 11:09 AM on October 23, 2017


I am all for serious people who know what they're doing to make positive change in the quality of science. See previous remarks about statistical and computational team members.

I object to bystanders in the peanut gallery who turn their limited understanding of the issue into cause celebre and ruin someone's career. That's where the tech bros come in.


I think we can all agree on this in the abstract--the disagreement is over who falls into which category. For some, anyone publicly criticizing Cuddy's research automatically falls into the latter category.
posted by MisantropicPainforest at 11:37 AM on October 23, 2017 [1 favorite]


Would you like to offer your thoughts on why you believe softball is the root of our criticisms? Perhaps even provide a refutation of the current arguments, or at the very least why you believe softball is a better explanation for why people are upset with the treatment of Cuddy?

See my above comment. You are arguing in bad faith - you address the softball comment without mentioning the rest of it in which I directly talk about why "techbro" is a stupid fucking insult.

Labeling Cuddy's critics (or advocates for reproducible science) as "techbros" implies (at least) universal misogyny amongst them. There are valid reasons to be suspicious of a social psychologist engaged in aggressive marketing of a marginal result that don't boil down to wanting to wreck her life because she's a woman. There are four women authors on the Ranehill et al study linked above which attempted to replicate: are they all techbros?

There have been repeated instances of social psych/neuroscience researchers being outed for fraud that have been discussed on metafilter without such a facile dismissal of the people who outed them. Definitely call out the fucking misogynist pricks - they are bad for science - but maybe not every person trying to improve research has an agenda beyond that.
posted by logicpunk at 11:50 AM on October 23, 2017 [3 favorites]



