when scientists get angry
February 3, 2010 2:16 AM

"Papers that are scientifically flawed or comprise only modest technical increments often attract undue profile. At the same time publication of truly original findings may be delayed or rejected." In an open letter addressed to the Senior Editors of peer-review journals, Professor Austin Smith (publications) and 13 other stem cell researchers from around the world have expressed their concerns over the current peer-review process employed by journals publishing in the field of stem cell biology.

The review process is a necessary, albeit stressful (warning: YT link, Downfall meme), part of every scientist's life. However, with the increasing dependence on high-impact-factor publications for adequate funding, many are worried that a "select group of a few reviewers who think of themselves as very important people in the field" are exerting undue control over what is published in such celebrated journals as Science and Nature.

The suggested solution? "...when a paper is published, the reviews, response to reviews and associated editorial correspondence could be provided as Supplementary Information, while preserving anonymity of the referees."

Sir Mark Walport, director of the Wellcome Trust, explains to BBC radio.
posted by kisch mokusch (25 comments total) 5 users marked this as a favorite
 
The suggested solution? "...when a paper is published, the reviews, response to reviews and associated editorial correspondence could be provided as Supplementary Information, while preserving anonymity of the referees."

Holy shit, now there's an idea! And since journals are, basically, online entities now, this should be simple.

There are a number of problems with the increasing focus on "high impact" publications - there are certain fields in science which are rarely "high impact", but are nonetheless vitally important. I think of the people doing systematics and cladistics and taxonomy. These people don't get many papers in Nature, but without their work much of the rest of biology would suffer greatly. The way the funding system is going, I don't know how these people can survive.
posted by Jimbob at 2:25 AM on February 3, 2010


Thomas Kuhn called. He wanted his insight back.
posted by DU at 2:58 AM on February 3, 2010 [2 favorites]


"Papers that are scientifically flawed or comprise only modest technical increments often attract undue profile. At the same time publication of truly original findings may be delayed or rejected."

I like that Wikipedia contains a history of edits. It sometimes gives you some idea of how bitterly contested some "facts" are.

But this Wikipedia-like proposal seems directed to embarrass (or harass) the "select group of few reviewers", who cannot really be all that anonymous.

Perhaps they deserve it, but it seems like an ugly solution. Academic disputes are already bitterly contested. Hanging out the dirty laundry and making it pseudo-anonymous-personal isn't going to make it less so.
posted by three blind mice at 4:17 AM on February 3, 2010


I used to be an editor for a scientific journal that had you read the paper and then email the authors to ask any questions you might have and offer suggestions. It was really, really nice, because it emphasized the concept that the editor is there to help get a paper published, not to hinder it. And while I worried that I might garner enemies across the globe, instead I'd get e-holiday cards from authors, and reprints of subsequent papers.

This probably wouldn't work for a major journal, but it was a good exercise.
posted by acrasis at 5:01 AM on February 3, 2010 [1 favorite]


Hanging out the dirty laundry and making it pseudo-anonymous-personal isn't going to make it less so.

No, it'll stay dirty. But hopefully it will make it less personal, which is kind of the point. Science is a process without room for ego. Facts are facts, and they don't give a flip how popular you are. Of course this is an idealized view. Of course there's ego involved. But the extent of that ego is not helped by the fact that the review process is often conducted in the equivalent of a Gilded Age smoke-filled back room.

Unfortunately, doing this in the open runs into a practical problem in a world where the popular conception of Science is as a set of laws etched on stone tablets that make your toaster work and the Internet keep getting faster (in turn carved in the 17th century to replace a previous set of laws placed on a previous set of stone tablets carried down from a hill somewhere in the Levant): it shakes confidence in the solidity of said tablets, and it makes the coal company stooges get ever louder about uncertainty in the "Climate Change Debate", as if there were such a thing.
posted by Vetinari at 5:09 AM on February 3, 2010 [1 favorite]


In some fields the reviewers are less anonymous than you might think - often the author is asked to suggest reviewers, asked to specifically name persons who should NOT be asked to review (no reasons necessary), and beyond that several times my colleagues have been fairly sure who a specific reviewer was simply by style or comments made in the review. On one occasion I received a review that ended with "Tell (coauthor) I said 'hi'" which is about as non-anonymous as I have ever seen.

Perhaps complete transparency would be nice. In my experience most reviewers are trying as hard as they can to make sure the manuscript, if published, is of high quality. Comments I get may not be pleasant to read but they are meant to improve rather than disparage my research. It's frustrating that some people let petty personal arguments spill over into their professional lives.
posted by caution live frogs at 5:21 AM on February 3, 2010


Looks like Nature tried this for a while too: Here's a description of the test.
posted by heyforfour at 5:32 AM on February 3, 2010


For those interested in seeing what this looks like, The EMBO Journal, a general biology journal published through Nature, started doing it in January; here's their first example (pdf) (which shows a very typical exchange to my eye, having published and reviewed in mechanical engineering and cell biology).
posted by Mapes at 5:36 AM on February 3, 2010


Perhaps complete transparency would be nice.

Transparency in pursuit of facts, yes, always.

But hopefully it will make it less personal, which is kind of the point.

Yeah, but this seems entirely "personal." This call for transparency seems only intended to call out the "select group of few reviewers" who are perceived as (or singled out as) the problem.

it shakes confidence in the solidity of said tablets, and it makes the coal company stooges get ever louder about uncertainty in the "Climate Change Debate", as if there were such a thing.

Yes, but is it not also a mistake to treat them as tablets? If there is disagreement, or dissent, over some aspect of research, isn't that also useful knowledge? If there is a flaw in a paper on climate change, it should not matter whether a coal company stooge points it out. The pursuit of science is facts, right?

But of course, silly me, the pursuit of science is funding.
posted by three blind mice at 5:51 AM on February 3, 2010


heyforfour, the 2006 trial conducted by Nature was not quite the same as the proposition being made in the open letter. The Nature trial allowed open peer review - basically signed comments from scientists online, much the same as Mefites comment on FPPs. The papers were still subject to standard peer review. As you can see here, all commentary was voluntary and the trial was, by and large, unsuccessful. What Smith and co. want is to put the anonymous peer-review correspondence online (see Mapes' EMBO link), which is somewhat more radical.
posted by kisch mokusch at 5:51 AM on February 3, 2010


At least some journals from the open access publisher BioMed Central put the details of the peer review process online, including the original manuscript, reviewer comments, revised manuscripts and author responses to the reviews (here's a random example from BMC Cancer), and they've been doing this for a few years. It's halfway between the old Nature trial and the new EMBO method, because while it is a formal peer review process and not pre-print comments, the reviewers are not anonymous.
posted by penguinliz at 6:48 AM on February 3, 2010


When I was a scientist and peer-reviewed papers (if the journal didn't have a policy against it), I signed my reviews. Kept me honest. Or rather, kept me focused on improving the paper I was reviewing, rather than just finding reasons to tear holes in it without offering up help. The end goal is advancing science (Science!), right?
posted by gaspode at 6:51 AM on February 3, 2010 [1 favorite]


When I was a scientist and peer-reviewed papers (if the journal didn't have a policy against it), I signed my reviews. Kept me honest.

Every journal peer-review system that I'm familiar with goes to some lengths to ensure that reviews are as anonymous as possible. This is usually spelled out in the instructions to reviewers. I can see why you think this is desirable, but at the same time, I think that identifying reviewers just leads to rancor and ill-feelings in the long term. A good editor is an essential mediator in the process.

While open reviews might restrict bad behaviour (being overly negative, etc.), I think that a lack of anonymity would also make it difficult for researchers to say no to papers, especially in cases where the authors may be important to the reviewer's own advancement. I'll-scratch-your-backism can't lead to good science.

...several times my colleagues have been fairly sure who a specific reviewer was simply by style or comments made in the review.

This is also a serious problem. Again, a good editor can (and many do) catch these sorts of things.
posted by bonehead at 7:35 AM on February 3, 2010


Mapes's link is quite interesting. That looks like a typical review process to me too. If reviewers are kept anonymous, I can't help but think that this would be a good thing.

One of the difficulties of scientific discourse is that we don't get to see a lot of review comments. We get to see comments to our own submissions and we occasionally get to proof those of our colleagues, but there's not a wide body of reviews out there to study to improve one's own review writing.

As well as possibly improving the writing of reviewers who conduct themselves like three-year-olds without their favourite truck, this could be a great teaching tool.
posted by bonehead at 7:41 AM on February 3, 2010


Huh. I know a whole heap of scientists that signed their reviews. Maybe a thing in my field? There was certainly (at least in my experience) no "scratch your back" kind of thing.
posted by gaspode at 7:48 AM on February 3, 2010


Somebody needs to do an fMRI study of reviewer number 2s.
posted by srboisvert at 8:24 AM on February 3, 2010


In general, with non-anonymous refereeing, I would be potentially worried about the effect on junior researchers reviewing the work of senior researchers. Not terribly worried, but the concern would be there in the background, making it less likely for junior researchers to agree to referee.
posted by leahwrenn at 8:24 AM on February 3, 2010




I've never understood why the authors of the manuscript also aren't kept anonymous. That seems to me to be the bigger problem. That would prevent a lot of favoritism/scratch-your-backism.
posted by one_bean at 10:37 AM on February 3, 2010


From what I've seen, the most vitriolic comments seem to be coming from junior researchers who haven't yet learned a sense of professional tact. There are many ways to say "I disagree with your conclusion" but only a few of those will actually be useful as editorial comments.

Drawing from two of my own review experiences, "The authors clearly have no understanding of the relevant literature because they fail to cite [research article]" was unhelpful, unprofessional, and, to be frank, sounded like it came from a grad student who was asked by the PI to do the review in his/her stead. "The authors may wish to consider [research article] in their discussion" was quite helpful. In both cases, the article we were asked to consider was highly likely to be one associated with the lab in which the reviewer works or had worked. However, in the final manuscripts, we cited the article in the second example, and went to press with no mention of the article in the first. Not because of the tone of the review - it didn't help their case, of course - but because we knew the literature, and the suggested article had no bearing on our discussion. All the statement ended up doing was pissing me off. It took me a LONG time to write a calmly reasoned response.

The difference between myself and the reviewer is that when reviewing an article, I take the time to write what I feel, then spend quite a bit of time revising it to ensure that my comments are impersonal, based in reason and not in passion, and are actually aimed at helping improve the manuscript rather than showing off the number of holes I can poke in it.

(I will for damn sure tell you when your figures are poorly done and/or inconsistently formatted, though. Sloppy figures imply sloppy science. If I reviewed it, I want to make sure you have a chance to address this before sending it out for anyone to see. This is your work, man. Take some pride in it!)
posted by caution live frogs at 10:39 AM on February 3, 2010


I've never understood why the authors of the manuscript also aren't kept anonymous.

It's very common practice to reference one's own work, particularly for methods and the like. This is also true if you're doing follow-on or related work, which is also common. It would be trivial in most cases to guess which group a paper was from.
posted by bonehead at 12:31 PM on February 3, 2010


I've never understood why the authors of the manuscript also aren't kept anonymous.

It's very common practice to reference one's own work, particularly for methods and the like. This is also true if you're doing follow-on or related work, which is also common. It would be trivial in most cases to guess which group a paper was from.


Additionally, many fields have preprint servers; any paper I write goes up on the preprint server at the same time it's submitted to a journal, so there is no question whatsoever who the authors are.

Even if there is no preprint server, even if you don't reference your own work, even if the work isn't a direct follow-up to your previous work- most scientists are working in niche fields. There are only a very small number of groups who could possibly have written a given paper. Add in writing style and a little knowledge of who is working on what, and anonymity just isn't possible.

It's too bad, because double-blind reviewing would be great (for eliminating all sorts of bias: gender, race, institution, well-known vs. not, etc.). But I just don't think it's possible.
posted by nat at 3:24 PM on February 3, 2010


I'm not a scientist, but haven't there been a number of eye-poppingly awesome papers in obscure journals in the past? I remember in a class I took in engineering, the prof mentioned a couple of major advances in computer arithmetic that were originally published by nobodies in journals no one had heard of. My point is just this: doesn't the continued survival of minor journals play an important role, and if so, isn't the focus on impact factors and such kind of overzealous?
posted by Xezlec at 7:15 PM on February 3, 2010


I'm not a scientist, but haven't there been a number of eye-poppingly awesome papers in obscure journals in the past?

Wikipedia points out that if you measure Einstein by the metric that's currently in fashion to rank the impact of scientists, the H-index, and if he'd died in 1906 just after his major contributions had been made, he would get a score of 4 or 5. My pathetic ass has a score of 3, last time I checked. My boss has a score of 27.
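For reference, the h-index mentioned above is the largest number h such that an author has h papers each cited at least h times; a minimal sketch, with made-up citation counts for illustration:

```python
def h_index(citations):
    """Largest h such that there are h papers with at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers all have >= rank citations
        else:
            break
    return h

# e.g. five papers cited [10, 8, 5, 4, 3] times give an h-index of 4
```

An Einstein who died in 1906 scores 4 or 5 by exactly this kind of count: a handful of papers, however revolutionary, caps the metric.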

The focus on impact factors of journals and scientists is pretty awful. It's basically there so that the bureaucrats who hand out the funding don't have to actually know anything about the science or read the literature.
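For what it's worth, the journal impact factor those bureaucrats rely on is itself just a ratio: citations received this year to a journal's items from the previous two years, divided by the number of citable items it published in those two years. A minimal sketch (the function name and numbers are illustrative, not drawn from any real journal):

```python
def two_year_impact_factor(citations_received, items_published, year):
    """Two-year impact factor: citations received in `year` to items
    published in the prior two years, over the number of citable items
    published in those two years."""
    prior = (year - 1, year - 2)
    cites = sum(citations_received.get(y, 0) for y in prior)
    items = sum(items_published.get(y, 0) for y in prior)
    return cites / items if items else 0.0

# e.g. 300 citations in 2010 to 100 items from 2008-2009 gives 3.0
```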
posted by Jimbob at 8:36 PM on February 3, 2010


Having thought about it, I think I'm for the new proposal, if only for the potential to curb the "moving the goal posts" behaviour of some reviewers. Every submission I have been on or heard about has resulted in reviewers requesting additional experiments. Which is fine. It's expected, and part of the process. But frequently, upon re-submission, some reviewers will request additional experiments that were not in the original review, even when the first set have been performed/addressed. It's incredibly poor form, and extremely unfair to the submitting scientists, especially when editors let it happen. An open policy might force some of the lazier editors to pay attention and maybe even, when required, grow a pair. It would also be interesting to see who doesn't need to perform additional experiments. I mean, maybe there really is a "clique".
posted by kisch mokusch at 1:06 AM on February 4, 2010




This thread has been archived and is closed to new comments