These data are not just excessively similar. They are impossibly similar
August 20, 2021 9:56 AM   Subscribe

Evidence of Fraud in an Influential Field Experiment About Dishonesty is a blog post at Data Colada 🍹 where researchers uncovered dishonest data meddling in a PNAS-published paper about... dishonesty.

A group of anonymous researchers found problems with the 2012 paper "Signing at the beginning makes ethics salient and decreases dishonest self-reports in comparison to signing at the end". They brought their results to Uri Simonsohn, Joe Simmons, and Leif Nelson at Data Colada. The study is about people self-reporting odometer readings from their car. The paper's Excel spreadsheet of the source data indicated mathematical malfeasance. Statistics lay out the case for faked figures but may make the layperson's eyes glaze over. For them, the cherry on top is the examination of the Excel sheet's fonts: Calibri, but also Cambria.
Let’s start with our claim that the Calibri and Cambria observations are near-duplicates of each other. What is the evidence for that? First, the baseline mileages for Car #1 appear in Calibri font for 6,744 customers in the dataset and Cambria font for 6,744 customers in the dataset. So exactly half are in one font, and half are in the other. For the other three cars, there is an odd number of observations, such that the split between Cambria and Calibri is off by exactly one (e.g., there are 2,825 Calibri rows and 2,824 Cambria rows for Car #2). Second, each observation in Calibri tends to match an observation in Cambria.
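The two checks the excerpt describes — an impossibly even font split, plus a near-duplicate Cambria row for each Calibri row — can be sketched in a few lines of plain Python. The rows below are small synthetic stand-ins, not the real data (reading the actual file would need a library like openpyxl, which exposes each cell's font via `cell.font.name`); only the logic of the checks is illustrated.

```python
from collections import Counter
import random

random.seed(0)

# Synthetic stand-in for the baseline-mileage column: each row records the
# font Excel rendered it in plus the mileage value.
calibri = [random.randint(10_000, 60_000) for _ in range(6)]
# A fabricated "duplicate" block: the same values, lightly perturbed, in Cambria.
cambria = [m + random.randint(0, 1_000) for m in calibri]

rows = [("Calibri", m) for m in calibri] + [("Cambria", m) for m in cambria]

# Check 1: the font split. An honest dataset has no reason to land at
# exactly 50/50 (or off by exactly one for an odd row count).
split = Counter(font for font, _ in rows)
print(split)  # Counter({'Calibri': 6, 'Cambria': 6})

# Check 2: does each Calibri value have a Cambria value suspiciously close by?
cambria_vals = [m for font, m in rows if font == "Cambria"]

def has_near_match(m, vals, tol=1_000):
    return any(abs(m - v) <= tol for v in vals)

matches = sum(has_near_match(m, cambria_vals)
              for font, m in rows if font == "Calibri")
print(matches)  # 6: every Calibri row has a near-duplicate in Cambria
```

On the real 13.5k-row file, finding both properties at once is what makes the duplication claim so hard to explain innocently.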
posted by Monochrome (39 comments total) 26 users marked this as a favorite
 
I discovered this post via Belong linking me to Noah Veltman's tweet. From his Twitter feed I also found The Magazine of Early American Datasets (MEAD) which is unrelated but pretty neat: "MEAD provides sweet, intoxicating data for your investigations of early North America and the Atlantic World."
posted by Monochrome at 9:56 AM on August 20, 2021 [5 favorites]


Science people: Is it weird that the scientists uncovering the fraud are remaining anonymous?
posted by clawsoon at 10:05 AM on August 20, 2021


Here's the response from the author responsible for collecting data (linked toward the bottom of the blog post, but easy enough to miss). He basically claims the insurance company did it and he was just a dupe, failing to address any of the points in the blog post (and its footnotes) questioning why the insurance company would do certain things. I find it hard to come up with an explanation as to why an insurance company would bother to fabricate half the data. It's probably not the case they were being paid to produce mileage information contingent on there being at least 13k samples.

posted by axiom at 10:42 AM on August 20, 2021 [4 favorites]


First off: Really fascinating analysis.

Second: There are two points that should be highlighted (especially because there's a tendency for people to comment without reading all the materials available):

1 - The team published the original research in 2012. Over the following years, they were unable to replicate the results in later studies, and they published two papers in 2020 calling out that fact, and backed away from using the conclusions of the 2012 paper in further work.

2 - Four of the original authors have written replies to this post (linked to at the end), all agreeing with the analysis and conclusion that the data set was fraudulent. The only question at this point is who modified it: the insurance company before sending it, or one of the researchers after receiving it. That question will likely never be answered unless someone confesses, but I do think footnote 14 is particularly interesting:

The properties of the Excel data file analyzed here, and posted online as supporting materials by Kristal et al. (2020), shows Dan Ariely as the creator of the file and Nina Mazar as the last person to modify it. On August 6, 2021, Nina Mazar forwarded us an email that she received from Dan Ariely on February 16, 2011. It reads, “Here is the file”, and attaches an earlier version of that Excel data file, whose properties indicate that Dan Ariely was both the file’s creator and the last person to modify it. That file that Nina received from Dan largely matches the one posted online; it contains the same number of rows, the two different fonts, and the same mileages for all cars. There were, however, two mistakes in the data file that Dan sent to Nina, mistakes that Nina identified. First, the effect observed in the data file that Nina received was in the opposite direction from the paper’s hypothesis. When Nina asked Dan about this, he wrote that when preparing the dataset for her he had changed the condition labels to be more descriptive and in that process had switched the meaning of the conditions, and that Nina should swap the labels back. Nina did so. Second, and more trivially, the Excel formula used for computing the difference in mileage between the two odometer readings was missing in the last two rows. Nina corrected that as well. We have posted the file that Nina sent us on ResearchBox.org/336. It is worth noting that the names of the other three authors – Lisa Shu, Francesca Gino, and Max Bazerman – do not appear on the properties of either Excel file.
posted by NotMyselfRightNow at 10:43 AM on August 20, 2021 [20 favorites]


Science people: Is it weird that the scientists uncovering the fraud are remaining anonymous?
Definitely worth noticing, at least in the physical sciences. My best guess is it's junior people without a powerful ally who are taking on senior people, which is scary to do in the usual way.

I'm always astonished that when students cheat, they do it so badly. The same is true here.
posted by eotvos at 10:46 AM on August 20, 2021 [6 favorites]


(I also feel bad for anyone doing statistics on a 14000 column data set in a spreadsheet. But, I realize talented people in other fields actually do that without walking into the ocean in frustration.)
posted by eotvos at 10:49 AM on August 20, 2021 [1 favorite]


(It's ~13.5k rows, 19 columns with the 'font' column, not sure where the impression that the data was wide came from)
posted by axiom at 11:01 AM on August 20, 2021 [1 favorite]


Calibri and Cambria

I don't know about the statistical anomalies, but I thought Vaxis - Act 1 was OK.
posted by zamboni at 11:07 AM on August 20, 2021 [6 favorites]


I wonder if there are people scrutinizing other publications now... If it is meddling, it's rarely a one-off.
posted by thandal at 11:28 AM on August 20, 2021


Is it weird that the scientists uncovering the fraud are remaining anonymous?

It seems likely that the anonymous whistleblowers are early-career researchers who fear retaliation from senior colleagues. In science as in any field, when wrongdoing is exposed, attacking the messenger is a common response. In 2016, Fazlul Sarkar sued pubpeer.com in an attempt to discover the identities of anonymous commentators who cast doubts on the integrity of the data in his papers (he has since had 40 papers retracted). This year, Didier Raoult threatened to sue Elisabeth Bik for identifying major errors in his paper on hydroxychloroquine treatment for COVID-19.

I wonder if there are people scrutinizing other publications now...

A search on PubPeer.com shows that this is not the only paper by this author where people have concerns.
posted by cyanistes at 11:49 AM on August 20, 2021 [15 favorites]


Well, this is all great data for my own scientific study about irony.
posted by qntm at 11:50 AM on August 20, 2021 [16 favorites]


Damn, that's some amateur-level fakery right there. Excel provides you with the exact tools you need for better fakery (NORMINV, yo) but no.
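For anyone who didn't catch the reference: Excel's NORMINV(RAND(), mean, sd) draws from a normal distribution, which is what a competent faker would use instead of a flat uniform draw. A rough Python equivalent on synthetic data (the mean and sd here are invented for illustration) shows how easy the difference is to spot:

```python
import random
import statistics

random.seed(1)

# What the fabricated column apparently looked like: miles driven drawn
# uniformly between 0 and 50,000, with a hard cliff at the cap.
uniform_fake = [random.uniform(0, 50_000) for _ in range(10_000)]

# What Excel's NORMINV(RAND(), mean, sd) produces: a bell curve, which is
# closer to what a real miles-driven column would resemble. random.gauss is
# the Python analogue; max(0, ...) just clips impossible negative mileages.
norminv_fake = [max(0.0, random.gauss(25_000, 12_000)) for _ in range(10_000)]

# One cheap tell: a uniform sample on [0, 50_000] has a standard deviation
# near 50_000/sqrt(12), roughly 14,400, regardless of how "random" it looks.
print(round(statistics.stdev(uniform_fake)))
print(round(statistics.stdev(norminv_fake)))
```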
posted by memetoclast at 12:25 PM on August 20, 2021 [1 favorite]


My best guess is it's junior people without a powerful ally who are taking on senior people

I'm not familiar with the others (or with the field in general), but Dan Ariely is such a big name that even I know who he is.

In fact here's a previous post about his research on dishonesty.

The only question at this point is who modified it: the insurance company before sending it, or one of the researchers after receiving it.

If the insurance company modified it, it's still pretty damning that none of the researchers examined the data critically and noticed the problem.
posted by trig at 1:28 PM on August 20, 2021 [8 favorites]


It hasn't been emphasized in the post or the comments above, but the author responsible for the data collection is Dan Ariely, writer of multiple NYT bestsellers, Wall Street Journal columnist and listed as one of the 50 most influential living psychologists; clearly the highest-status author on the piece.

I don't know psychology research, but I'm a little staggered that a study would get authored with none of the authors doing something as simple as looking at a histogram of the distances driven. Like, this is not an artefact of the data or some weird correlation or outlier -- this is literally what the experiment is measuring. How do you write a paper on how much where you sign influences how much people claim to have driven, without even a cursory look at how much the people claim to have driven? Which is incredibly obviously a uniform distribution.
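That cursory look is about ten lines of code. The mileage column below is synthetic, uniform on [0, 50,000] like the fabricated data; the flat bars are the tell, since real annual mileages bunch around typical values rather than spreading evenly up to a round cap.

```python
import random

random.seed(2)

# Synthetic miles-driven column, uniform on [0, 50_000] like the fabricated
# data. A real dataset would pile up around common annual mileages.
miles = [random.uniform(0, 50_000) for _ in range(5_000)]

# The "cursory look": a ten-bucket text histogram.
buckets = [0] * 10
for m in miles:
    buckets[min(int(m // 5_000), 9)] += 1

for i, n in enumerate(buckets):
    print(f"{i * 5:>2}k-{(i + 1) * 5}k {'#' * (n // 25)}")
```

Every bar comes out the same length; seeing that in real human-generated data should have stopped the paper cold.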
posted by Superilla at 1:31 PM on August 20, 2021 [12 favorites]


but the author responsible for the data collection is Dan Ariely, writer of multiple NYT bestsellers, Wall Street Journal columnist and listed as one of the 50 most influential living psychologists; clearly the highest-status author on the piece.

Is this going to turn into another Brian Wansink situation?
posted by clawsoon at 1:56 PM on August 20, 2021 [1 favorite]


It's bad to be a crook, but it's pretty embarrassing to be an incompetent crook who doesn't understand statistics.

Why did they even go to the trouble of fabricating the data if they were going to do it this badly - to dupe the coauthors who trusted them and could be expected to not bother looking at the raw data?
posted by each day we work at 2:17 PM on August 20, 2021


It seems likely that the anonymous whistleblowers are early-career researchers who fear retaliation from senior colleagues. In science as in any field, when wrongdoing is exposed, attacking the messenger is a common response.

In my own field, Jonathan Pruitt (previously) has been using lawyers to try to threaten journals into not retracting papers when other co-authors request retraction as well as trying to threaten scientists publicly discussing the process of going through his published work for irregularity. The whole process has been getting needlessly dragged out as a consequence. (American Naturalist only finished retracting all of his papers with them in May; other journals appear to have not even started.)
posted by sciatrix at 2:30 PM on August 20, 2021 [5 favorites]


A scandal in Tedhemia: Noted study in psychology first fails to replicate (but is still promoted by NPR), then crumbles with striking evidence of data fraud
But . . . wait a minute! The NPR segment, dated 17 Feb 2020, states:
That’s why Ariely describes honesty as something of a state of mind. He thinks the IRS should have people sign a pledge committing to be honest when they start working on their taxes, not when they’re done. Setting the stage for honesty is more effective than asking someone after the fact whether or not they lied.
And that last sentence links directly to the 2012 paper—indeed, it links to a copy of the paper sitting at Ariely’s website. But the new paper with the failed replications, “Signing at the beginning versus at the end does not decrease dishonesty,” by Kristal, Whillans, Bazerman, Gino, Shu, Mazar, and Ariely, is dated 31 Mar 2020, and it was sent to the journal in mid-2019.

Ariely, as a coauthor of this article, had to have known for at least half a year before the NPR story that this finding didn’t replicate. But in that NPR interview he wasn’t able to spare even a moment to share this information with the credulous reporter? This seems bad, even aside from any fraud.
posted by clawsoon at 3:08 PM on August 20, 2021 [10 favorites]


From 2010: Should You Be Suspicious Of Your Dentist Or NPR's Source?
Ariely studies irrational behavior.

He wrote the New York Times bestseller, “Predictably Irrational: The Hidden Forces that Shape our Decisions.” His research is published “in leading psychology, economics, and business journals,” says his bio. He also holds the James B. Duke Professor of Behavioral Economics chair at Duke University.

So when he appeared on NPR’s air, there was every reason to trust him.
So he's been a successful bullshitter for a while now?
posted by clawsoon at 3:42 PM on August 20, 2021 [2 favorites]


Bit tangential. There are, or should be, two other red faces out there - the peer-reviewers selected by PNAS for the 2012 paper. I was a second rate [not third rate] career scientist who wasn't often asked to review the manuscripts of others - partly because my own publication record was kinda crappy so I was less visible to editors. I was a good reviewer though, because I spent a lot of time on the process. I could afford to do this because it happened rarely and I was doing the work over a weekend, effectively off company-time. "they say" that it takes half a year to do the work and write a paper; that it shd take half a week to scrutinize it; that it actually takes half a day. If there is a Big Cheese among the authors, it's tempting to cut t'bugger some slack and nod the manuscript through. Paying referees for the time they put in for Elsevier and redacting the authors to anonymize the manuscript for review would help.
posted by BobTheScientist at 3:45 PM on August 20, 2021 [1 favorite]


They should make Dan Ariely sign his research papers at the top, before writing them. (Not that this would actually make a difference, it seems.)
posted by chavenet at 4:31 PM on August 20, 2021 [8 favorites]


Man, I just full on laughed out loud on the train as I scrolled to reveal Figure 1. I guess there's some insurance company in Atlanta that cuts your brake line if you try to drive more than 50,000 miles over your starting odometer reading.
posted by ecreeves at 4:31 PM on August 20, 2021 [6 favorites]


I'm always astonished that when students cheat, they do it so badly. The same is true here.

Survivor bias! The ones who don't do it badly don't end up getting caught.
posted by Justinian at 5:46 PM on August 20, 2021 [7 favorites]


The article kinda buried the lede.

Instead of "Evidence of Fraud in an Influential Field Experiment About Dishonesty", to me it was "Holy Sh*t, Dan Ariely Faked his Data"
posted by storybored at 8:53 PM on August 20, 2021 [3 favorites]


Well. Prudence suggests I should stay out of this thread, as I have some professional connections with several of the authors of the fabricated study, though I don't know any of them very well. But fuck it, my scientific career is basically over anyway.

Science people: Is it weird that the scientists uncovering the fraud are remaining anonymous?

Yes, it's pretty weird, but perhaps not too hard to imagine reasons why: fear of retaliation, etc. Dan Ariely is very well-known and has a lot of clout in the field. Naturally the fact that they've chosen to remain anonymous means one could suppose some nefarious intent or lack of credibility on their part, but co-author Nina Mazar's response to the posters is quite clear that she's convinced by their evidence that her 2012 paper was indeed fraud, and she was one of the victims.

Why did they even go to the trouble of fabricating the data if they were going to do it this badly - to dupe the coauthors who trusted them and could be expected to not bother looking at the raw data?

Well, the thing is, the people who do this are egomaniacs, utterly convinced of their own cleverness, and I think it's genuinely unfathomable to them that someone else could detect their oh-so-clever fraud. When of course the real reason that people get away with this is that science has a culture of trust, and most scientists find it unfathomable that someone who appears to be a good scientist would fabricate data just to advance their career. (Not unfathomable intellectually, obviously we all know that fraud happens, but kind of unfathomable at a gut level. In the same way that discovering your friendly neighbor is an ax murderer is unfathomable.) The people who are actually clever enough to pull off such a fraud well, don't bother, because they're also clever enough to just do good original work.

I think it's been long enough now that it's safe for me to tell this story here. (That, and also, see aforementioned fuck-it.) Back in 2008, I was applying to graduate schools, and one of the labs I was considering was that of Marc Hauser at Harvard, a rather famous evolutionary biologist and ethologist. Yes, that Marc Hauser. In particular, Hauser had become quite well known for studying the evolutionary origin of human moral behavior, and also of perspective-taking and theory-of-mind. Much like Dan Ariely over the last decade or two, Marc Hauser at the time had a number of very well-received books, was frequently sought out for quotes by journalists, etc. I was pleasantly surprised when he called to say he'd seen my application and thought I'd be a great fit for his lab, and wanted to invite me to come for an interview. But, very oddly, he told me that due to some "internal politics at Harvard," he would only be allowed to invite a graduate student if he could give near-certainty that the student was planning to accept the invitation. So I would need to come interview before Harvard officially extended me an offer. Being young and naive, I didn't know that this wasn't a thing, so I arranged the interview.

As I recall, it was just the next day that the head of the lab I was working in asked to speak with me. He told me that he'd just gotten off the phone with a "very reliable source" that Marc Hauser was under investigation for scientific misconduct. Again, very naively, I wasn't sure exactly how seriously to take that information, and so finally the head of my lab said, "Well, they asked me not to tell you who told me, but it was [X]." X was the head of another lab I had applied to for graduate school, and had called my boss (they were friends and former coworkers) to ask him about my application. X was also a former trainee of Marc Hauser, and was friends with the students who had raised the accusation of fraud. (Academia is extremely incestuous, everyone is related to everyone else somehow.) Those students had apparently signed NDAs or something with Harvard to not discuss the allegations until the investigation was complete, so X wasn't supposed to know about it, and definitely wasn't supposed to be telling my boss or me about it. But when X spoke to my boss and learned that I was also applying to work with Hauser, they felt compelled to arm me with that information to help me avoid a serious mistake. X also was worried about it seeming like they were improperly trying to influence my decision of where to go for graduate school, hence asking my boss not to tell me that they were the source of the information. (Later getting to know X and their work better, I'm quite confident X just wanted to help me avoid making a mistake, and I'm very grateful for it.)

So now I was in a situation. I had already agreed to an interview with Hauser, and couldn't really back out of it without revealing that I'd learned about the allegations, potentially putting X's sources at risk. But of course there was no way that I could actually accept Hauser's offer. (I mean seriously, what the fuck was he thinking? He knew he was guilty, why the hell was he willing to burn my career on the pyre he'd built for his own?) So I had to go through with the interview, knowing that Hauser was under investigation for fraud, but pretending that I didn't know. I met his then-current students, all of whom except for one were very new and presumably had not overlapped with the students who had raised the allegations. To this day I don't know if his students knew, but they certainly didn't say anything to me about it, and were quite enthusiastic about the lab and the research. I find it hard to believe they didn't know, considering that all his computers had been confiscated by the university at one point, but I find it equally hard to believe that none of them was morally conflicted enough about the situation to at least give me some kind of warning. And who knows, Harvard was so messed up about the whole thing, it's certainly possible they managed to create some crazy situation where Hauser's own students didn't even know that he was being investigated. Or were so constrained by the university's demands for secrecy that they felt compelled to go along with the dog-and-pony show. Hauser, of course, gave absolutely no indication that anything was wrong, other than a vague reference to some sort of unpleasantness that had led to his previous generation of students leaving his lab, hence all the current students being very new. The irony of a supposed expert in moral cognition behaving in this way certainly didn't escape me at the time.

Anyway, after I returned from the interview, I of course told Hauser that no, I was going to take my graduate studies in a different direction, but thanks for the offer. And shortly after that, Harvard sent me a polite letter of rejection to their graduate program. Two years later, it was finally publicly revealed that yes, Marc Hauser had faked a whole bunch of his data, using very dumb and obvious spreadsheet manipulations that are quite reminiscent of what Dan Ariely is now being accused of. He was banned from receiving federal research funding, stripped of tenure, and his scientific career was over. The sad thing about it is, Hauser was an excellent writer and a clever theorist, and if not for his dishonesty he could have contributed a lot to science. For years afterwards, maybe even still now, people in the field were frustrated by not knowing which of Hauser's studies may have actually been valid, which was a real shame because many of them were cleverly conceived and really changed how people thought about the field. But his fraud and lies tainted all of it, and his intellect was destroyed by his own ego. It is sadly a pattern I have seen more than once now, and while I had no reason to suspect that Dan Ariely was another member of this pitiable and despicable club, if these allegations turn out to be true he'll be neither the first nor the last to have joined it.

An amusing coda to my story, is that when X invited me to interview with their lab for graduate school, I couldn't acknowledge to them that I knew about Marc Hauser's fraud investigation, because that would reveal that my then-boss had told me that X was the one who had told him. So X knew that I knew about Hauser, and I knew that X knew about Hauser, but X didn't know that I knew that X was the reason I knew about Hauser, so I had to pretend that I didn't know about Hauser even though X knew that I did, because if I didn't know that X knew about Hauser then I wouldn't let on about Hauser, so letting X know that I knew about Hauser would also let X know that my boss had told me that X had told him about Hauser. So when X asked about other graduate programs I was looking at, I had to say "Well, I interviewed with Marc Hauser at Harvard..." And X knew that I already knew about Hauser but couldn't say anything then without potentially putting his sources at risk, because he didn't know if he could trust me not to do something to jeopardize them. The irony of being in this situation with one of the world's leading experts in social communication did not escape me. Anyway, I ended up not working with X for graduate school, though not because of any of this, and to this day I'm still extremely grateful for X's help at that time. I wonder if X knows that.
posted by biogeo at 9:40 PM on August 20, 2021 [35 favorites]


Ah, fuck. What an awful story. I wish I could say that I just can't imagine dealing with that during interview--students interviewing for grad school are so bright and cheerful and excited!--but, well, I received similar warnings over abuse of students rather than plagiarism when I was in that phase of my career.

I am so angry about the fucked up system we work in. I am so angry about the incentives that pressure people who care most about doing good and careful work into leaving, burned up like dying stars, while people who are primarily driven by their own sense of self importance or fucked up willingness to sacrifice everything else in their lives for the appearance of success stay.

I sat down with my new PI last week to talk about mentoring goals, and she asked me what mine were. I said: how do you stay in this field, with its never ending fire hose of things to do, the lack of support, the trade offs between looking after junior people and looking after yourself, the endless pressure: how do you figure out which balls in the air can drop and bounce, and which ones will shatter? How do you figure out which parts of your ostensible job to deliberately fail to do so you can stay sane? (She looked at me, laughed dryly, and said "...so no pressure, then," and I laughed and pointed out that I don't actually think it's entirely a solvable problem or indeed one that anyone has solved.)

I do think that in science we have a serious problem with misconduct because the jobs that we have created are so intensely fucked up, with so much competition and pressure and so little room for security or patience, that we are constantly bleeding the people who look at the system, look at themselves, and choose to leave when they lose faith in it. When you stay in a system you cannot fundamentally respect, you acquire the patina of a certain amount of bitterness. We are seeing these fucking cases over and over again because we are recruiting very bright people into an institution that is obviously not worthy of respect, that does not respect the people it consumes, and we offer nearly unlimited personal glory as the major payoff for survival. Of course this keeps happening.

There are certainly honest people who stay and try to maintain the faith that the first duty we have is to truth, and who insist that the second duty we hold is to the education and support of junior scholars. I can think of PIs whose integrity I trust without reservation. (Dan Bolnick, who has been intimately involved with the Pruitt shit, is one of them.) But they are thin on the ground and you generally do not find them as the lionized luminaries of their fields. The people for whom that endless cup of glory is the attraction, who are willing to sacrifice their whole lives for it and are bright enough and thoughtful enough to play the game well enough to win it--well. I don't think that it is surprising that we so often find rottenness when we peer beneath the shining skins.
posted by sciatrix at 3:17 AM on August 21, 2021 [13 favorites]


RetractionWatch for your daily source of scientific misconduct. In case anyone thinks this is an isolated and rare incident.
posted by Pyrogenesis at 4:31 AM on August 21, 2021 [2 favorites]


I've typed several long comments and then deleted them. I realize that I have so many grudges against Big Name Academia and Duke in particular and business schools in particular that I just better keep my mouth shut because I can't be fair or give anyone any benefit of the doubt. I really appreciate the comments here, especially yours biogeo. This shit is exhausting and has run so many people off forever.
posted by hydropsyche at 4:42 AM on August 21, 2021 [7 favorites]


Also, fuck the TEDification of science
posted by hydropsyche at 4:45 AM on August 21, 2021 [5 favorites]


Is this kind of dumb fraud more common among popular popularizers, or is it just as common among average scientists who nobody has heard of? Does fraud make you more likely to get rich and famous, or are rich and famous scientists picked at random from the fraudulent and non-fraudulent populations of scientists?
posted by clawsoon at 4:55 AM on August 21, 2021


The whole incentive system in scientific research is broken. In this case I can only assume that the real data revealed either no significant difference or the opposite of the hypothesis. But those aren't the kind of results that get you a reputation and good publications and the grants that those bring. So you fake it: you've already spent all this time and money on the study, and that can't all be for nothing. Inconclusive and contrary results should be just as publishable as the result you were looking for, and the scientific community should be far more focused on replication as a regular practice. In the early stages of the scientific revolution, if Newton published his papers on optics, everyone else in the community of les savants got out their prisms and saw for themselves. And they did the same when Faraday published a fascinating result on the interaction of magnetism and light. It was this universality of the physical experiment, the ability of another researcher to see the phenomenon in question happen again right in front of them, that lay at the foundation of the entire epistemological dignity of science itself. Today instead you don't reproduce because well, someone else did that, and anyway who is going to fund the same experiment again just to see that it works when we have the peer reviewed publication in a journal we respect with a first author from a richer and more prestigious institution....
posted by dis_integration at 5:55 AM on August 21, 2021 [7 favorites]


Is this kind of dumb fraud more common among popular popularizers, or is it just as common among average scientists who nobody has heard of?

The issues are systemic as per usual, but broadly speaking the most fraudulent scientists are not that well known.
posted by Pyrogenesis at 6:14 AM on August 21, 2021 [2 favorites]


...and admittedly, in Newton's day, everyone went out and looked for themselves because just about everyone with the resources to purchase the equipment outright was born into wealth and privilege, and all of them were independently wealthy. Which also seems bad.

(It's not even a disagreement with the basic "everything is awful" thrust of your argument, nor your assertion that we should all be replicating things and publishing the results far more frequently; it's more that I think science has never actually been better than it is today, just differently beholden and compromised. I don't think that observable physical phenomena are the main factor that changes the constrictions on our institutionalized methods of acquiring and archiving knowledge. I think the main factor is resources and the pesky desire of scientists to be able to be scientists and human beings who get to eat and sleep and have families and so forth. Either you restrict the role to people who already have the capital or, apparently, you get this fucking madness, because science qua science isn't worth very much when the acquisition of capital is the highest goal of an economy--oh, dammit, I'm getting more political than I frankly like or want to be. And yet. It is impossible not to feel the societal devaluation.

And I am frustrated. And tired. And deeply, hollowly sad.)

Less bleakly, clawsoon, I think it's a mix: I think people who crave adulation and success above all else, and who are working at the very tips of their limits and run into what they perceive as an inability to succeed are the most likely to be... tempted. It's like the A- student who cheats on tests to get the A that every faculty member who has ever taught pre-meds has encountered: when you say with your attention and your actions and your resources that the most important thing is the brass ring, some very bright and very motivated people will take you at your word.

The more you create a harsh, hard threshold between Winners and Losers--as highly competitive grant cycles that tend to be repeatedly awarded to people who have already won grants do--the more desperate people get. Cheating often comes not out of inability but out of desperation, and the entire academic infrastructure is designed to inculcate us all with a sense of desperation in order to extract maximum effort.
posted by sciatrix at 6:18 AM on August 21, 2021 [10 favorites]


Either you restrict the role to people who already have the capital or, apparently, you get this fucking madness,

I totally agree, but also want to point out that it's not necessarily one or the other. In my experience, the people who last the longest in science today are very often (though not always) the people who have some form of financial/material resources independently of their scientific career. It's not exactly the same level of aristocratic wealth that was required for a scientific career back in Newton's day, but most of the people I know who've persisted in academia through graduate school and postdocking get some form of material support from family, and those who rely entirely upon their graduate stipend or postdoc salary seem to burn out from the stress at a much higher rate. And perversely, I think the people who end up morally compromised aren't necessarily the ones driven by financial desperation for career success, but who are already comfortable enough to be driven solely by their egotism and desperate need for external approval.
posted by biogeo at 9:36 AM on August 21, 2021 [6 favorites]


I am wondering if the emphasis on the data, its collection, and the cost of collecting it is a contributing factor. Many of these retractions are based on data that was finally released only after the original authors had moved on from that data in their own work.

I come to this from a Chemistry/Chem Engg. background. In any paper I was involved in getting published, we had to send files with the raw data to be published as supplemental material. For example, if we did any curve smoothing, the technique had to be specified. Especially in noisy spectra, it is easy to get artifacts to look like signal. So if anyone wondered why we were seeing something when their spectrum showed nothing, they could go get the raw data and check whether we were seeing ghosts.
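(The smoothing-artifact point can be sketched in a few lines of Python. This is a toy illustration only, not anything from the papers under discussion: smooth pure noise with a boxcar filter and the smoothed trace still wanders in broad bumps that can be mistaken for peaks if the smoothing isn't disclosed.)

```python
import numpy as np

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, size=500)  # pure noise, no real signal at all

def moving_average(y, window):
    """Boxcar smoothing via convolution with a flat kernel."""
    kernel = np.ones(window) / window
    return np.convolve(y, kernel, mode="same")

smoothed = moving_average(noise, window=25)

# Smoothing shrinks the point-to-point scatter, so the slow wander
# that survives looks deceptively like broad spectral features.
print("raw std:", noise.std())
print("smoothed std:", smoothed.std())
```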

In many of the retracted papers, be it in economics or psychology and the like, it looks like data gathering costs money, and the data is therefore hoarded by the PI until they have squeezed every drop of info out of it. I can see why they would be loath to publish the data before that.

So how to avoid this? You can't force the publication of the raw data in this situation. One solution is the pre-approval of papers for publication, so that researchers aren't torturing the data to make it say something. But that only solves this partially, as it still does not tackle the cost of data collection and its subsequent hoarding.

ETA: This does not, of course, explain medical paper retractions. There, I am assuming, data has to be released.
posted by indianbadger1 at 8:50 AM on August 22, 2021 [1 favorite]


Rebecca "Skepchick" Watson has just launched a 17 min take-down of Ariely and the Font-flippers.
posted by BobTheScientist at 9:29 AM on August 26, 2021 [2 favorites]


Interesting responses on the Green asking how we can trust a researcher focused on dishonesty, back from 2013.
posted by pwnguin at 7:12 PM on August 29, 2021 [1 favorite]


The paper is now retracted because "Simonsohn, Simmons, and Nelson have provided evidence to question the validity of the data in the article." That was quick!
posted by cyanistes at 12:30 PM on September 16, 2021 [4 favorites]




This thread has been archived and is closed to new comments