The Emergence of a Citation Cartel
May 15, 2012 7:17 AM

The emergence of a citation cartel. "Cell Transplantation is a medical journal published by the Cognizant Communication Corporation of Putnam Valley, New York. In recent years, its impact factor has been growing rapidly. In 2006, it was 3.482. In 2010, it had almost doubled to 6.204. When you look at which journals cite Cell Transplantation, two journals stand out noticeably: the Medical Science Monitor, and The Scientific World Journal. According to the JCR, neither of these journals cited Cell Transplantation until 2010. Then, in 2010, a review article was published in the Medical Science Monitor citing 490 articles, 445 of which were to papers published in Cell Transplantation. All 445 citations pointed to papers published in 2008 or 2009 — the citation window from which the journal’s 2010 impact factor was derived. Of the remaining 45 citations, 44 cited the Medical Science Monitor, again, to papers published in 2008 and 2009. Three of the four authors of this paper sit on the editorial board of Cell Transplantation. Two are associate editors, one is the founding editor. The fourth is the CEO of a medical communications company." (from Scholarly Kitchen, via Andrew Gelman.)
posted by escabeche (26 comments total) 15 users marked this as a favorite
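
For readers who haven't run the arithmetic: a journal's 2010 impact factor is its 2010 citations to items it published in 2008 and 2009, divided by the number of citable items it published in those two years,

\mathrm{IF}_{2010} = \frac{C_{2010}}{N_{2008} + N_{2009}}

where C_2010 is the number of citations received in 2010 to items the journal published in 2008 or 2009, and N_2008 and N_2009 are the counts of citable items it published in those years. The post doesn't give Cell Transplantation's article counts, so purely as an illustrative sketch: if the journal had published, say, 500 citable items in 2008-2009, the review's 445 window citations would by themselves add 445/500 ≈ 0.9 to the impact factor, roughly a third of the observed jump from 3.482 to 6.204.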
 
Reminds me of how Google's PageRank algorithm got exploited. PageRank was a good idea, but as soon as people realized that there was money to be made by exploiting its weaknesses, Google had to start looking at other factors to maintain a high signal-to-noise ratio for web search results. The scientific research community will have to do the same thing with regard to impact factor, maybe even abandoning it altogether.

What Celsius1414 astutely wrote comes to mind:
It's interesting to ponder just what percentage of our population is primarily involved in the direct misleading of other people. I suppose all of us are to a lesser degree, but what if it's your vocation?

Never mind whether they can sleep at night -- how do they know what's really real?
posted by Foci for Analysis at 7:46 AM on May 15, 2012 [3 favorites]


It's the Heisenberg Unscrupulous Principle: any metric tied to a financial incentive will be gamed.
posted by Horace Rumpole at 7:59 AM on May 15, 2012 [10 favorites]


Thinking about the research for this article got my librarian heart pumping. Thanks for posting!
posted by activitystory at 8:35 AM on May 15, 2012


How does this not violate the "A Method for Making an Ass Load of Money in Science Publishing by Being a Massive Tool" patent that their competitors have built their business empires around?
posted by Kid Charlemagne at 8:39 AM on May 15, 2012 [4 favorites]


The JCR provides citing and cited-by matrices for all of the journals they index; however, these data exist only in the aggregate and are not linked to specific articles. It was only by seeing very large numbers amidst a long string of zeros that I was alerted to something odd going on — that and a tip from a concerned scientist. Identifying these papers required me to do some fancy cited-by searching in the Web of Science. The data are there, but they are far from transparent.
...
Unlike self-citation, which is very easy to detect, Thomson Reuters has no algorithm to detect citation cartels, nor a stated policy to help keep this shady behavior at bay.
Are Forensic Statisticians recognized as a profession yet?

If so, maybe we can expect to see Thomson Reuters advertising for one.

Davis makes a prophecy:
If you don’t agree with how some editors are using citation cartels, you may change your mind in a year or two as your own title languishes behind that of your competitors.
Unless this little blaze is promptly extinguished, there could be a flashover effect such as baseball (and cycling) experienced when it ignored its steroid problem.
posted by jamjam at 8:42 AM on May 15, 2012
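
Davis's point about detection is the interesting one: the journal-level data needed to catch this already exist in the JCR citing matrices. Below is a minimal sketch of the kind of screen that could run over them, assuming the counts were available as a nested dict keyed by citing and cited journal; the data structure, the function name, and the 25% threshold are invented for illustration, not anything Thomson Reuters actually does.

def flag_possible_cartels(cites, threshold=0.25):
    """cites[citing][cited] = citations from `citing` journal to `cited`
    journal that fall inside the cited journal's two-year impact-factor
    window. Returns (citing, cited, share) triples where a single outside
    journal supplies more than `threshold` of the cited journal's total."""
    # Total window citations each journal receives from other journals.
    totals = {}
    for citing, row in cites.items():
        for cited, n in row.items():
            if citing != cited:  # self-citation is already tracked separately
                totals[cited] = totals.get(cited, 0) + n
    flags = []
    for citing, row in cites.items():
        for cited, n in row.items():
            if citing == cited or not totals.get(cited):
                continue
            share = n / totals[cited]
            if share > threshold:
                flags.append((citing, cited, share))
    return sorted(flags, key=lambda t: -t[2])

Run against the real matrices, the Medical Science Monitor to Cell Transplantation pair would presumably surface exactly the way the long string of zeros did for Davis.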


I'm getting grumpier every time I hear about this kind of dishonest manipulation. Where do you find the pivotal papers that stand on their own merit? Not in journals, I guess.
posted by francesca too at 9:01 AM on May 15, 2012


Unlike self-citation, which is very easy to detect....

And this is doubly bad, because self-citation for non-"I'm a massive tool" purposes is much more likely than this sort of thing. If you do some work and publish a paper, then do more work on the same thing and publish another paper, and so on, you should absolutely be citing your earlier papers in your later work. If I'm going to the trouble of reading paper two, I'm probably interested in paper one. (If paper two and paper one have nothing to do with one another, that's different.)

But if you're the Journal of Something Really Obscure, it might come to pass that the really obscure thing becomes interesting to lots of people, and suddenly papers that cite tons of previously obscure papers in your journal are being published. But that requires you to have staked something out when it was really obscure AND some kind of paradigm shift in the field. It's not the sort of thing you can count on happening, and it's really, really unlikely to happen twice.
posted by Kid Charlemagne at 9:03 AM on May 15, 2012


To clarify - is impact factor of importance in choosing what to read/cite or in choosing where to publish?
posted by maryr at 9:33 AM on May 15, 2012


When I was in law school, I worked for a major journal. I was assigned to edit a piece that wasn't an article per se, but a transcript from a panel discussion that I needed to adapt into an article. In legal writing every fact or claim needs a citation, but obviously a transcript of oral remarks doesn't include citations and the speaker/author hadn't included much to work with. So I spent untold hours digging through databases looking for sources, information, and citations. (It was actually great experience, and kind of fun.)

In the course of my research, I found a relatively recent article that addressed one particular aspect of my subject dead-on. It was well-written and offered some original insights, so I decided to cite two or three points from it. This article had been published by a journal I wasn't very familiar with, but its author was a professor at my law school. So I knocked on his door and let him know, "Hey, this is a terrific article and I'm going to cite it."

His response, polite and measured but genuinely grateful, was when I first realized how important "being cited" is in some circles. We talked, and he explained to me some of the background and some of the expectations that schools have for law professors. I hadn't been aware of that world or what a minefield it can be. I have to admit, learning about it changed how I thought about citations. I tried not to let it change how I approached them.
posted by cribcage at 9:42 AM on May 15, 2012 [2 favorites]


maryr: impact factor is supposed to be a placeholder for influence: roughly, how often a journal's recent articles get cited, relative to how many articles it publishes. "Desirable" impact factor can vary widely by field, but generally the logic goes that more prestigious journals will publish better papers that get cited more often, thus increasing those journals' impact factor. And because those papers get cited more often, you want to submit your paper to those journals, because if it is accepted, it means your paper is good and will also get cited more often.

You'd think a bunch of intellectuals would be able to come up with a less embarrassingly circular way to measure their academic self-worth, but things like hiring and tenure decisions make it very pervasive.
posted by zingiberene at 9:51 AM on May 15, 2012


Are there any other researchers out there who consider impact factors to be, at best, an irrelevancy? If I am trying to find out what research has been done on topic X, the first thing I do is hit Google Scholar. If an article that turns up in my search seems pertinent, I will look at it: I don't really care if it was published in Nature or in the Journal of Feline Architecture. Impact factors seem like a relic from the days when research was done by poring over hundredweights' worth of bound journals in the library, not by downloading PDFs one finds through a search engine. And at worst, maybe impact factors are solely vehicles for manipulative games by publishers. Is there any transparency in how they are measured?
posted by Numenius at 10:16 AM on May 15, 2012 [2 favorites]


Jesus Christ, Mother in Bethlehem... academic research is looking like a slow-motion Hindenburg. It hasn't crashed yet, but it's burning up all over the place, and many more people are looking at the problems than it's comfortable with.
posted by Slackermagee at 10:19 AM on May 15, 2012 [1 favorite]


Are there any other researchers out there who consider impact factors to be, at best, an irrelevancy?

I think it's field by field. In math, we certainly have a sense of which journals are more prestigious than others, but impact factors per se are rarely used; I've certainly never looked at the impact factor of a journal when deciding where to submit a paper. Nor have impact factors been discussed in any of the many hiring, tenure, and promotion committees I've been on. We rely more on a gestalt sense of how important the candidate's work is and how much effect it's had on the field. You might think it's strange that mathematicians are less apt to use quantitative measures like impact factor; but I think it's precisely because we're mathematicians that we're very aware of the pitfalls and limits of quantification.
posted by escabeche at 10:26 AM on May 15, 2012 [5 favorites]


academic research is looking like a slow-motion Hindenburg. It hasn't crashed yet, but it's burning up all over the place, and many more people are looking at the problems than it's comfortable with.

Academic research is the worst possible method of generating useful knowledge, except for all the others.
posted by escabeche at 10:28 AM on May 15, 2012 [1 favorite]


How does this not violate the "A Method for Making an Ass Load of Money in Science Publishing by Being a Massive Tool" patent that their competitors have built their business empires around?

Because I can't actually see that this makes anyone an ass load of money. It's douchebag behavior, to be sure, but the notion that scholarly research is worth more or less based upon the statistical analysis of citation patterns was never a good one to begin with.
posted by valkyryn at 10:37 AM on May 15, 2012


Related: I've been interested to watch the rise of PLoS ONE (published by the Public Library of Science), discussed in some detail here.
posted by joannemerriam at 11:25 AM on May 15, 2012


They should rename it the Journal of Citology.
posted by Kabanos at 12:49 PM on May 15, 2012 [1 favorite]


I am laughing for reals here.

Google Scholar tells me there is at least one paper on plant pathology citing the Journal of Submicroscopic Citology.

Is Submicroscopic Citology when you cite works that only exist on microfilm?

Or is it an artifact of the web and the use of nested small tags?
posted by Ayn Rand and God at 4:16 PM on May 15, 2012


The article says, "The data are there, but they are far from transparent."

But in this particular case, that's only true for those who haven't actually looked at the citations in the review articles.

Peer review can be hit and miss, but someone who receives a manuscript with 490 references, of which 445 refer to the last two years of one other journal and 44 refer to the same two years of the journal it was submitted to, is asleep on the job if they don't notice something funny is going on. Even giving them the benefit of the doubt and assuming they're citing the journal of record in a very specific and quite new field, it seems crazy to imagine that one wouldn't need to cite many tens of older and more diverse articles in order to actually provide a coherent review.

Gaming the system by citing a few more of your colleagues' papers than really necessary probably happens all the time. The remarkable thing here is that it was done so very clumsily, by journal editors, no less. They did a dumb thing. However, the reviewers and editor of the journal that published the citing work also did an incompetent thing.

That said, there's clearly some fun to be had extending this sort of analysis to explore more subtle manipulation. (And I, for one, am going to struggle desperately against the temptation to waste the next few weeks scraping and digging into coauthor-of-coauthor and papers-by-editors citation stats for fields about which I actually know something.)
posted by eotvos at 5:01 PM on May 15, 2012
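
The referee-level version of that sanity check is even simpler than the journal-level one. A rough sketch, assuming you had each reference reduced to a (journal, year) pair; the function name and the hard-coded two-year window are mine, for illustration only.

from collections import Counter

def window_concentration(refs, window=(2008, 2009)):
    """refs: list of (journal, year) tuples from a manuscript's bibliography.
    Returns, per journal, the share of all references that point to that
    journal within the given two-year window, most concentrated first."""
    in_window = Counter(j for j, y in refs if y in window)
    total = len(refs)
    return [(j, n / total) for j, n in in_window.most_common()]

For the review in the post, 445 of 490 references pointing to one journal's 2008-2009 papers works out to a share of about 0.91, which is the kind of number no referee should need a script to notice.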


Isn't the impact factor being pushed by things like the big UK committee overhauling their uni system? I seem to recall departments were asked to justify their existence by various (poor) quantitative metrics including impact factor...
posted by LobsterMitten at 5:08 PM on May 15, 2012


How does this not violate the "A Method for Making an Ass Load of Money in Science Publishing by Being a Massive Tool" patent that their competitors have built their business empires around?

Because patents only cover specific implementations, of course.
posted by No-sword at 5:48 PM on May 15, 2012


Google Scholar tells me there is at least one paper on plant pathology citing the Journal of Submicroscopic Citology.

Is Submicroscopic Citology when you cite works that only exist on microfilm?


That's a typo. Speaking of the benefits of paid editors...
posted by porpoise at 8:31 PM on May 15, 2012



My wife's current, but soon to be former, department is impact-factor obsessed thanks to the UK's REF process, which turns the system into a game, to the extent that it is extinguishing research in areas that don't easily get into high-impact journals like Science and Nature. As a result, they pretty much hire neuroscientists and robotics researchers for every single psychology opening they can. The result will be (and pretty much already is) a wildly imbalanced department and, on a broader scale, a UK academic community in which people who are not even trained in psychology research are teaching psychology research.

It's not just journals trying to game metrics.
posted by srboisvert at 12:08 PM on May 16, 2012 [1 favorite]


Some comments have already addressed some of the questions above, but to maybe add a few things - impact factor isn't so much a tool to use to decide what papers to read/what journals to consult, it's much more of a 'will your department get funding' and 'will I keep my job' sort of thing. It directs/dictates where people will submit their work. I'm a bit distant from it all currently, but I remember hearing stories a few years ago from European/UK colleagues about funding and evaluations being tied to number of papers published in journals with impact factors of x-or-higher, and researchers in certain departments or on certain projects not being allowed to submit work to journals with impact factors below a certain number. In UK research assessment exercises (which are (were?) effectively a ranking tool for UK university departments and which have funding-related consequences), number of publications of returnable staff members is reported, and iirc, impact factor is important there as well. Again, my information is oldish and secondhand, so I'm a bit reluctant to comment, but this is my understanding.

It seems like impact factor was originally a useful metric to let publishers and editors know how heavily their journals were being used, how they were perceived in their fields, and something along the lines of which titles were making the publishers money - and I think it might have stayed that way if it had been kept between the publisher and the journal editor. I can see it being useful to know that another journal in the field is being more heavily used than the one you edit or publish; you might respond by tweaking what you print to be more relevant, or more/less tightly focused, or of higher quality, or whatever. It just seems like it started being used for unintended purposes, with very damaging effects.

The article in the post is talking about the manipulation of impact factor by editors with the goal of boosting submissions and circulation (I...guess?), but the pressure to do this wouldn't exist without the improper use of IF as a metric for researchers/academics, and the proper response to a relatively low impact factor should be 'publish more stuff that people in your field will find useful' (or possibly 'get better at marketing this title'), not the equivalent of 'pad yer bra with lots and lots of tissues to attract more guys!' If IF (heh) wasn't available for anyone outside the publishing industry to see, citation cartels would (probably) (?) not exist. Or they would at least look different. Blah. At any rate, impact factor should be shot, because it's influencing research behaviour and funding, when all it should be influencing is editorial and marketing behaviour.

Also - small disciplines/subdisciplines will have journals with low impact factors, and I think it would/will be tragic to watch the probable consequences of that if things continue as they are.

Anyway, cool post. Thanks, escabeche.
posted by magdalenstreetladies at 2:19 PM on May 16, 2012


It's not just journals trying to game metrics.
posted by srboisvert at 12:08 PM on May 16 [1 favorite]

The problem is that the metrics, and the wider system of measurement they serve, are generally pretty poorly thought out and just introduce a bunch of perverse incentives.

I'm not completely up with the UK or USA system of 'measuring' research output and its 'quality', but I'm guessing that the Australian system is basically following their lead.

Given the resources at stake, by which I mean ever-increasing amounts of funding being handed out through competitive allocation systems, you'd be a fool not to:

a) maximise your worth to your current and/or future institution by being strategic about where and what you publish if you actually want a career. Wittgenstein would not be regarded as 'research active' in the current system.

b) stack the deck, so to speak, if you are a research office manager/head of school/Dean etc., to make sure your department keeps getting that sweet, sweet research 'performance'-based cash by hiring the right kind of researcher.

Where is this going? I'm sure the neo-liberal economic knobs who have apparently convinced enough people that competition solves everything would tell you that our destination is the promised land of capitalist milk and honey; alternatively, we could be radically undermining the main purpose of one of the key institutions of Western liberal democracies.
posted by Hello, I'm David McGahan at 6:56 AM on May 17, 2012 [1 favorite]


whoops, should have been 'where and what and how often'
posted by Hello, I'm David McGahan at 6:59 AM on May 17, 2012

