Science publishing: The trouble with retractions
October 10, 2011 8:56 PM

 
"I don't think there's any doubt that we're detecting more fraud, and that systems are more responsive to misconduct. It's become more acceptable for journals to step in," says Nicholas Steneck, a research ethicist at the University of Michigan in Ann Arbor.

That quote sums it up. Automated detection systems, access that was nearly unimaginable a generation ago, and the simple spread of information has made fraud and plagiarism that much harder to conceal. This is a step forward for science and academic journals in general.
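
(Purely to illustrate the kind of screening involved: the article doesn't describe any particular system, and the function names and example texts below are made up. A toy overlap check might compare word shingles between two documents.)

    # Toy sketch of automated text-overlap screening; real plagiarism
    # detectors are far more sophisticated than this.
    def shingles(text, k=5):
        """All k-word shingles (overlapping word windows) in a document."""
        words = text.lower().split()
        return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

    def jaccard(a, b):
        """Set overlap: 0.0 (disjoint) to 1.0 (identical)."""
        return len(a & b) / len(a | b) if (a | b) else 0.0

    doc_a = "the rise in retractions reflects better detection of misconduct"
    doc_b = "the rise in retractions reflects better detection not more fraud"
    score = jaccard(shingles(doc_a), shingles(doc_b))
    print(f"shingle overlap: {score:.2f}")  # flag high-overlap pairs for human review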
posted by Saydur at 9:19 PM on October 10, 2011 [1 favorite]


"...It's become more acceptable for journals to step in"

Though any jaunt through Retraction Watch will show a continuing serious inconsistency in that willingness between journals.
posted by Blasdelb at 9:28 PM on October 10, 2011 [2 favorites]


I was assigned an article for a methods class which said "the way everyone is doing time-series cross-section analysis is wrong (i.e., it produces standard errors at least 50% smaller than they should be)," and pretty much the whole field agreed with the argument (the paper has been cited 2,400 times).

As exemplars, they picked apart a few published papers, and I was left wondering how many of the people doing it the wrong way issued revised findings.
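
(For the curious, a minimal simulation of the general point, with made-up data rather than the paper's actual method: when observations within a unit are correlated, standard errors computed as if every observation were independent come out much too small.)

    # Sketch: naive vs. cluster-robust standard errors on simulated
    # panel data. The numbers are invented for illustration only.
    import numpy as np

    rng = np.random.default_rng(0)
    n_units, n_periods = 50, 20
    N = n_units * n_periods
    groups = np.repeat(np.arange(n_units), n_periods)

    # Regressor and error both share a unit-level component, so
    # observations within a unit are not independent.
    x = np.repeat(rng.normal(size=n_units), n_periods) + rng.normal(size=N)
    e = np.repeat(rng.normal(size=n_units), n_periods) + rng.normal(size=N)
    y = 1.0 + 0.5 * x + e

    X = np.column_stack([np.ones(N), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    bread = np.linalg.inv(X.T @ X)

    # Naive SEs pretend all N observations are independent.
    s2 = resid @ resid / (N - X.shape[1])
    se_naive = np.sqrt(np.diag(s2 * bread))

    # Cluster-robust SEs treat each unit as one block of correlated draws.
    meat = sum(np.outer(X[groups == g].T @ resid[groups == g],
                        X[groups == g].T @ resid[groups == g])
               for g in range(n_units))
    se_cluster = np.sqrt(np.diag(bread @ meat @ bread))

    print(f"slope SE, naive:   {se_naive[1]:.4f}")
    print(f"slope SE, cluster: {se_cluster[1]:.4f}")  # roughly twice as large here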
posted by shothotbot at 9:57 PM on October 10, 2011


The reasons behind the rise in retractions are still unclear. "I don't think that there is suddenly a boom in the production of fraudulent or erroneous work,"

Speculation/hypothesis - there are a *lot* of PhD candidates and post-docs out there.

There are *VERY* few job openings (in academia... and otherwise).

Candidates and post-docs work for a principal investigator (PI) who holds a tenure-track or tenured professor position at a university.

In the hard(er) sciences in North America (even compared to Europe/Britain), as a candidate you pretty much need to demonstrably contribute something novel (and important) to your field in order to graduate, which means getting published. The number of papers and their cumulative impact factor strongly affect one's competitiveness for salary support as a post-doc. As a post-doc, you need to publish a certain number of papers in journals whose impact factors add up to a particular threshold just to make the cut and get your CV read by university hiring committees.

That's pretty high stress to publish, to understate the situation.

Hypothetical PI takes in more candidates and more post-docs; candidates and post-docs who build on the work of previous candidates and post-docs.

Results cannot be reproduced. At all. PI goes over old data, sees that it's not entirely on the up-and-up. Asks the journal to retract.

It's not entirely the fault of the PI under whom the original work was published. "Successful" and prolific labs can sometimes have a dozen to scores of post-docs; one person - the PI - can't review all of the primary raw data collected and analyzed. In those labs, it's just not possible. Besides, the PI spends the majority of their time writing grants to keep the lab running or working on papers to be submitted, not actually acquiring or analyzing the data.

--

I wonder: of course PIs who retract the untrue papers their hypothetical candidates/post-docs produced will stop associating with them, but what about the unscrupulous former candidates/post-docs... will they even know that what they once published has been retracted?

Also, it would be interesting to see a detailed analysis of what kinds of labs retracted papers come from: big big big labs, or smaller junior-PI labs...
posted by porpoise at 9:58 PM on October 10, 2011 [2 favorites]


I wonder: of course PIs who retract the untrue papers their hypothetical candidates/post-docs produced will stop associating with them, but what about the unscrupulous former candidates/post-docs... will they even know that what they once published has been retracted?

Or maybe, y'know, the PI should stop insisting on being listed as an author of a paper if they don't have the time or inclination to actually make sure the data is collected, reduced, and interpreted properly.
posted by chimaera at 10:39 PM on October 10, 2011 [8 favorites]


What I said; sorry about the ambiguity of the last "they" - "they" being the candidate/post-doc.
posted by porpoise at 11:12 PM on October 10, 2011


I am not in any way associated with this sort of research. I read the article and I do not understand why, if there is an error or incorrect information for any reason whatsoever, the paper would not be either corrected or retracted. I like the idea of coming up with separate terms for retractions based on fraud, shortcuts, etc., and those retracted because of a simple honest mistake. It also seems to me that the onus of detection, or of confirmation of accuracy, should fall first on the institution at which the research is conducted and then on the publication.
posted by JohnnyGunn at 11:52 PM on October 10, 2011


From the footnote at the bottom of the infographic: "Acta Crystallographica E saw 81 retractions between 2006–2010."

What the what? This is more than three times as many as any other journal that's actually shown in the graph. Do they just publish a lot more papers than the journals shown in the graph, or is there some other reason why they're so off the scale?
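
(One sanity check would be to normalize by output. A back-of-the-envelope rate, where only the 81 comes from the footnote and the publication counts are hypothetical placeholders:)

    # Compare retraction *rates* rather than raw counts. Only the 81 is
    # from the infographic's footnote; the paper counts are placeholders.
    journals = {
        # name: (retractions 2006-2010, papers published 2006-2010)
        "Acta Crystallographica E": (81, 15000),  # hypothetical output
        "Some smaller journal":     (20,  1500),  # hypothetical output
    }
    for name, (retracted, published) in journals.items():
        print(f"{name}: {1000 * retracted / published:.1f} retractions per 1,000 papers")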
posted by Johnny Assay at 6:35 AM on October 11, 2011


chimaera: the PI should stop insisting on being listed as an author of a paper if they don't have the time or inclination to actually make sure the data is collected, reduced, and interpreted properly

Unless the PI does all the work, or an unreasonably large amount of it, it will be possible (even easy) for people in the lab who know what they're doing to forge results. It's also common for collaborations to span different groups in different labs, and you just have to trust that you have chosen to work with honest people.
posted by springload at 7:59 AM on October 11, 2011


"Unless the PI does all the work or an unreasonably large amount of it, it will be possible (even easy) for people in the lab who know what they're doing to forge results. It's also common that collaborations span over different groups in different labs, and you just have to trust that you have chosen to work with honest people."

This is absolutely true; in most disciplines it would be trivially easy for someone with a little charisma and a little brains to fool their PI, colleagues, and reviewers. However, unless their field promptly died, or their research was a dead end with no connections to any other fields and no one else interested in it (useless anyway), they wouldn't get away with it. Unless you guessed exactly right (and a correct guess would usually be a scientific advance worth publishing honestly anyway), you would eventually be caught when people wondered why you guessed wrong. You would also do stupid amounts of damage to the careers of the person who found you out, their PI, everyone who used your research, and their patients if applicable, or their models if not.

It does pose a peculiar problem for PIs, though. Lying about results absolutely ends the career of the person who did it. No one could ever trust your results again, even if they wanted to, because usually at least one person wastes years of their life doing work that ends up ratting you out. However, it also causes havoc for all the careers immediately around the person, and often their institution. If a PI picks a loser, who is to say they won't pick more? If I were dedicating three years of my life to a question that came out of such a lab, I would certainly consider that. It is a big enough blow that a graduate student's fabricated results often end the career of their PI.

When I first started working, my first PI, who was a badass in so many ways, definitely gave us room to fudge numbers while we were starting out, in ways where he would notice. I only realized this in retrospect, which I suppose was the idea, but he later copped to it, reasoning that the only way to know someone's character is to give them room for it to fail. I now always do the same thing with new undergrads, and will explain their success in any letter I write for them.
posted by Blasdelb at 8:32 AM on October 11, 2011


Unless the PI does all the work, or an unreasonably large amount of it, it will be possible (even easy) for people in the lab who know what they're doing to forge results. It's also common for collaborations to span different groups in different labs, and you just have to trust that you have chosen to work with honest people.

That's true to a certain extent, but a little fudging of numbers or analysis can mean the difference between a shaky paper and a better one, and I wouldn't necessarily expect all authors to be able to detect something particularly subtle.

However, you seem to be talking about outright fraud. Perhaps more tenured profs and higher-up PIs (and college tenure and compensation committees) should remember that just getting their name on a million papers isn't the purpose of research, and that the more slipshod one is about allowing one's name to be tacked on as an author, the more likely one is to be bitten for not having done due diligence in making sure the paper is sound.
posted by chimaera at 9:50 AM on October 11, 2011




This thread has been archived and is closed to new comments