PLOS Journals Now OPEN for Published Peer Review!
May 23, 2019 5:44 AM   Subscribe

PLOS has added an opt-in option for authors to publish their peer review history alongside their accepted manuscript. The documentation for each review process will have its own DOI.

The process includes "an option at acceptance for authors to decide whether to publish the full peer review history alongside their work. This package includes the editor’s full decision letter, complete with reviewer comments and authors’ responses for each revision of the manuscript. Peer review history will have its own DOI enabling reviewers to take credit and earn citations for their contributions. If the reviewers have chosen to sign their reviews, their name will also appear on the published reviews but they can also choose to remain anonymous."

This is such a good idea.
posted by carter (16 comments total) 13 users marked this as a favorite
 
I'm not so sure that this is a great idea. (Also maybe the mods can edit that editorial comment out?) I do 2-4 reviews a month, and in over half of them I have to be really honest (in a constructive way) about the authors' work. There are many reasons why anonymity is good. When I have to "take down" a more senior academic, my career could be over if they knew my identity.
posted by k8t at 6:40 AM on May 23, 2019 [7 favorites]


I'm also a bit skeptical, in that turning reviews from something going to the author(s) and editor(s) to something with potentially a wider audience might create more Greater Internet Fuckwads.
posted by GCU Sweet and Full of Grace at 6:56 AM on May 23, 2019 [4 favorites]


I'd say a good third of the papers I look at have some fundamental error in them, either in the models or the statistics. But I suppose this won't include the rejections, and PLOS ONE is supposed to be "Nature quality". So presumably the originals they publish will be pretty high quality to begin with.

Still, I've been in fairly extended discussions between authors and reviewers, and had to mediate those as an editor. I don't know that I'd really want some of those discussions out in the open either, especially if the reverse is true: a well-known reviewer using their reputation to make ill-advised comments about a less well-known researcher's paper. There's lots of room for bad behaviour. This sort of social gaming happens enough already.

In short, I'm not seeing a situation where I would really want to opt in on this---as a reviewer in particular.
posted by bonehead at 6:57 AM on May 23, 2019 [4 favorites]


[Didn't mean to editorialize - apologies!]
posted by carter at 7:06 AM on May 23, 2019


Computer science has been doing this for a little while now in some major venues (openreview.net) and it mostly works fine. In fact rejected papers, along with their reviews, are also hosted and frequently cited despite having been rejected.
posted by vogon_poet at 7:24 AM on May 23, 2019


They are typically double-blind, though. That makes a difference.
posted by vogon_poet at 7:25 AM on May 23, 2019 [1 favorite]


Double-blind is a huge difference, but the authors obviously can't be anonymous. I just can't get my head around why I would ever want, as an author or reviewer, to have review comments de-anonymized.
posted by bonehead at 7:31 AM on May 23, 2019


I'm also not convinced this is a great idea. In general it seems to me that anything non-anonymous favors those with greater social capital, not greater ideas.

To amplify k8t's concerns: I'm coming up for tenure, and as part of that process my school will send my dossier to senior faculty at other institutions to solicit their opinion on me. The tenure committee for the school chooses those reviewers -- I have no control. It's terrifying enough as it is; the idea that they might send it to someone who has a grudge because of my honest review of their paper... ugh.

And while it looks like one could avoid this by choosing to remain anonymous, I'm concerned about pressure the non-anonymous option might create for early-career reviewers to become non-anonymous (nymous?) in order to provably demonstrate that they are engaged in peer review. Departments expect peer review for promotion and tenure, but in an anonymous review system there's no way to prove one's service; once there becomes a way to do so, I worry that the standard may move toward non-anonymous review, creating an uncomfortable catch-22.

vogon_poet, does the anonymous authorship for the double-blind conferences make people reluctant to submit on the grounds that they would not get credit for their work? Or are there other mechanisms/incentives to compensate?
posted by Westringia F. at 7:37 AM on May 23, 2019 [4 favorites]


I didn’t see anything in the announcement that stated reviewer identities would be disclosed - only that they could CHOOSE to disclose. There are any number of times that a specific reviewer has an axe to grind; this would simply ensure that readers could see it for themselves. Some people are really triggered by an innocuous use of a term and have a kneejerk reaction against the author as a result. For example, one specific molecule involved in research I have done has a naming conflict; if you use the “wrong” name for it, some reviewers get instantly indignant and start criticizing irrelevant bits in the study. They seem to assume that if you use name X then you are The Enemy and are clearly a poor scientist. This sort of thing needs exposure.
posted by caution live frogs at 7:51 AM on May 23, 2019 [2 favorites]


It’s anonymous to everyone but a few conference organizers up until the conference date, at which point the authors' names (not the reviewers', typically) are revealed. This is the norm now for most of the high-impact venues.
posted by vogon_poet at 8:00 AM on May 23, 2019 [2 favorites]


Publishing anonymized reviews should be mandatory, and it would be nice to do this for past papers too. I'm very much with caution live frogs. I have had papers go through review where it was pretty clear that the reviewers didn't read through the paper (I'm assuming that machine learning papers get a lot of that). In the last paper I got published, there was a reviewer who, working on an MS Word draft with track changes active, made a change and then proceeded to ask us authors a question about the statement they themselves had written, which of course we hadn't; they just assumed we meant something different.

The shenanigans that I've seen happen in peer review are just mind-blowing in an age where the consensus on climate change is being attacked in a non-academic, non-scientific way.
posted by JoeXIII007 at 10:31 AM on May 23, 2019


Shenanigans are one of the reasons I am not certain that airing sometimes protracted and hard-argued disagreements (with an ultimate decision from an editor that might satisfy none of the parties) is a benefit to academic inquiry. Because I believe that increasingly those discussions aren't allowed to happen in a socially neutral context.

When arguments are made in the assumption of good faith, a particular kind of conversation can happen. When arguments are made in bad faith, immediately assuming the worst possible interpretation of any text, then honest, valuable interchange is not possible.

I've lived this up close and personal for more than a decade. I've been involved in a number of publications on topics that make (our) national news regularly. When communications are being picked over by self-interested or ideologically-driven actors looking for phrases they can take out of context or misrepresent for their own agendas, scientifically valuable conversation gets shut down hard, and everyone starts circling wagons.

Climate change is exactly the sort of thing I'm talking about: emissions of methane or CO2 from industry sources. I have colleagues who have been part of the vigorous discussions on neonicotinoid effects on bee populations, and I have heard horror stories from both the industry and NGO sectors.

Maybe I'm being too cautious, but I've had my own work used this way, even in attempts to argue against me (in bad faith) in public fora. I am less than willing to open good-faith referee comments, let alone agenda-driven or misunderstood ones, to the really bad-faith political actors out there. Imagine what the anti-climate-change US senators would have done to the NOAA scientists if they could get at all the petty and wrong commentary that almost certainly could be dragged up from the review comments on their papers.
posted by bonehead at 1:21 PM on May 23, 2019 [2 favorites]


Now that this has been out for a day, no one I know who makes their living as a researcher, with peer review as part of that life, approves.
posted by k8t at 4:16 PM on May 23, 2019 [1 favorite]


Add me to the list of people who think this is a bad idea. This sounds like one of those things that looks good on paper but will be thoroughly trashed by bad-faith actors. Even if the reviews are anonymous, people will use style/language analysis to try to identify the reviewers. Furthermore, if I as a reviewer know that my reviews are going to be available on some website, I'll be less likely to accept review requests. The other thing that's annoying is the carrot of a DOI and some credit points, which will instantly turn into some meaningless Impact Factor-like index meant only for university evaluators to torture academics with.
posted by dhruva at 7:29 PM on May 23, 2019 [1 favorite]


Westringia F.: "...And while it looks like one could avoid this by choosing to remain anonymous, I'm concerned about pressure the non-anonymous option might create for early-career reviewers to become non-anonymous (nymous?) in order to provably demonstrate that they are engaged in peer review. Departments expect peer review for promotion and tenure, but in an anonymous review system there's no way to prove one's service; once there becomes a way to do so, I worry that the standard may move toward non-anonymous review, creating an uncomfortable catch-22."
I think this is already being pretty straightforwardly addressed by Publons, which works for researchers as a neat way to verify anonymous reviewer activity and for journals as a source of useful data. Early-career researchers get a citable, documentable, independent source of metrics they can make public and put on their CV.

They seem to be trying to expand it to track other metrics, which they're not doing so great at, but for its core function it's fantastic.
posted by Blasdelb at 4:23 AM on May 24, 2019 [2 favorites]


Interesting! I wasn't aware that Publons tracked review activity (I was under the erroneous impression that it just tracked citation info). That's nice on its own, & alleviates the concerns I had about pressure to de-anonymize. Thanks! Definitely gonna set up a profile & encourage my trainees to do the same now.
posted by Westringia F. at 5:12 AM on May 24, 2019




This thread has been archived and is closed to new comments