

Back to square one.
March 30, 2012 3:47 PM   Subscribe

Bombshell investigation reveals vast majority of landmark cancer studies cannot be replicated. In a shocking discovery, C. Glenn Begley, a former researcher at Amgen Inc, and a team working with him have found that 47 out of 53 so-called "landmark" basic studies on cancer -- a high proportion of them from university labs -- cannot be replicated, with grim consequences for producing new medicines in the future. These were papers in top journals, from reputable labs, which achieved landmark status through frequent citations. The consequences for cancer research are far-reaching.

The War On Cancer has been puzzlingly slow, with few treatments that can be called break-through. 'The failure to win "the war on cancer" has been blamed on many factors, from the use of mouse models that are irrelevant to human cancers to risk-averse funding agencies. But recently a new culprit has emerged: too many basic scientific discoveries, done in animals or cells growing in lab dishes and meant to show the way to a new drug, are wrong.'

Wrong incentives, pressure to publish, but also fraud are involved.
posted by VikingSword (78 comments total) 51 users marked this as a favorite

 
Just like that recent repudiation of the glass of wine a day good for health research...you just can't trust anything biological or squishy these days.
posted by Chekhovian at 3:51 PM on March 30, 2012 [1 favorite]


The truth about cancer research.
posted by steamynachos at 3:53 PM on March 30, 2012 [61 favorites]


The good news is we've discovered 47 ways not to cure cancer.
posted by Faint of Butt at 3:53 PM on March 30, 2012 [9 favorites]


The War On Cancer has been puzzlingly slow

Er, no, not really. It's pretty damn difficult to figure out how to kill only specific misbehaving cells while leaving healthy cells from the same organism intact.

On preview, that PhD link from steamynachos is pretty damned good.
posted by maryr at 3:56 PM on March 30, 2012 [2 favorites]


Damn, I say "pretty damn" pretty damn often.
posted by maryr at 3:56 PM on March 30, 2012 [1 favorite]


Waiting for the war on war metaphors.
posted by Dark Messiah at 4:07 PM on March 30, 2012 [14 favorites]


Another major problem: There's no incentive to admit you're wrong, and an actual powerful incentive to hope no one notices.

I'm wrapping up my thesis now on a certain enzyme and I based my first year of research on a major publication (science/cell/nature) that is utter bullshit. The entire second half of the paper is characterizing the effect of inhibiting this enzyme with a drug, but the drug does not inhibit the enzyme. It inhibits the assay they used to measure the enzyme's activity in vitro, and they just didn't realize it. So the entire second half of the paper is utterly meaningless. But they'll never admit it because a) that's a criminal amount of stupidity and b) it will only hurt their career.

Mind you, I'm purposely being vague because we all screw up and the researcher has more than redeemed himself in subsequent (science/cell/nature) papers, but they never bring up that inhibitor again.

But yeah, one thing science has taught me is don't trust anything that you can't reproduce in your own hands. This shit is finicky as hell.

(runs off to validate what he hopes will be his own cancer cure)
posted by slapshot57 at 4:09 PM on March 30, 2012 [31 favorites]


At the risk of being droll... here's the Nature journal article in which the findings about the failures of the findings of results published in scientific journals were published.
posted by gurple at 4:12 PM on March 30, 2012 [2 favorites]


Wrong incentives, pressure to publish, but also fraud are involved.

Uh, this looks like a job for Hanlon's razor.

That said, before you assume stupidity, remember that this shit is hard; bio-assays are notoriously noisy; and even with a good model, there's almost never time (industry) or funding (academia) to do an experiment a couple of times and really get a good idea of just how accurate that first result was. Couple this with the fact that my ACS-approved degree had a mess of calculus (which I've never used) but I learned all my statistics on the job at The Very Big Pharmaceutical Corporation of America, and it's not hard to understand how this kind of thing happens.
posted by Kid Charlemagne at 4:13 PM on March 30, 2012 [5 favorites]


Fortunately this study will be 100% repeatable.
posted by Tell Me No Lies at 4:18 PM on March 30, 2012 [2 favorites]


Tonight on the Factor...Yet another reason to not trust science. This time, it could mean your very life!
posted by Thorzdad at 4:18 PM on March 30, 2012 [3 favorites]


It's rare to see a crank's talking point be born, but here we can. Every crackpot with a "miracle cure" based on herbs or crystals or zydeco music will pull this out. "See! Those 'authorities' don't know anything!"

-sigh-

People have a hard time grasping the truth that science usually takes the long route, with plenty of screwups along the way. This is science working. Just not in the most optimal fashion.
posted by Harvey Jerkwater at 4:22 PM on March 30, 2012 [8 favorites]


Cialis works well, though.
posted by Mental Wimp at 4:24 PM on March 30, 2012 [5 favorites]


File Drawer Problem at work here too I think.
posted by unSane at 4:29 PM on March 30, 2012 [1 favorite]


Some authors required the Amgen scientists sign a confidentiality agreement barring them from disclosing data at odds with the original findings. "The world will never know" which 47 studies -- many of them highly cited -- are apparently wrong, Begley said.

The most common response by the challenged scientists was: "you didn't do it right."


No, if you prevent data that contradicts your own from being published, you're the one not doing science right.
posted by Tsuga at 4:32 PM on March 30, 2012 [15 favorites]


No, if you prevent data that contradicts your own from being published, you're the one not doing science right.

Which is kind of ironic, considering that pharmaceutical companies pull these sorts of NDA shenanigans fairly frequently, especially when drug approvals and patents are on the line. Still, I'm curious which treatments and therapies these findings would call into question.
posted by Blazecock Pileon at 4:42 PM on March 30, 2012 [1 favorite]


So, being someone who once did "cancer research", I'd be curious to see some actual data here. Like a list of the articles implicated. I'm less interested in reading a news article that vaguely discusses the issue.
posted by sciencegeek at 4:49 PM on March 30, 2012 [2 favorites]


What!? But we figure out how to cure cancer in mice like every week!
posted by delmoi at 4:52 PM on March 30, 2012 [1 favorite]


I'd assume that "doing science" is several items from the top of the pharma company's list of priorities, though, Blazecock.
posted by hattifattener at 4:54 PM on March 30, 2012 [1 favorite]


So, being someone who once did "cancer research", I'd be curious to see some actual data here. Like a list of the articles implicated. I'm less interested in reading a news article that vaguely discusses the issue.

The authors' Nature comment is thin on details. Is there a supplemental somewhere else?
posted by Blazecock Pileon at 4:55 PM on March 30, 2012


The article explains that to get access to most of the flawed studies they had to agree not to publish or release any identifying data.

There's also this: "I explained that we re-did their experiment 50 times and never got their result. He said they'd done it six times and got this result once, but put it in the paper because it made the best story."

How is that not unethical?
posted by msalt at 5:06 PM on March 30, 2012 [18 favorites]


That said, before you assume stupidity, remember that this shit is hard

This is addressed in the article. The Amgen guy told the lead researcher on one of the studies involved that the Amgen group ran an experiment 50 times and never got the result the study claimed and the article's author admitted that they ran the experiment 6 times and only got the result once, but that was the result they went with because it made for a better paper.

That isn't "this shit is hard" it is "I care more about my career than my integrity".
posted by Justinian at 5:06 PM on March 30, 2012 [5 favorites]


boo, msalt.
posted by Justinian at 5:06 PM on March 30, 2012


I can't read the actual Begley commentary, so I only have the accompanying Nature editorial and the Reuters gloss in the FPP to go on, but something is a bit weird here.

The Reuters article describes Begley's piece as a "commentary," and the editorial describes it as a "comment." If this is indeed the case, then we should be very clear: this is not an "investigation," nor is it a study - it is, essentially, an editorial. Whatever allegations they make are based, it sounds like, on internal trials conducted within a pharmaceutical company (Amgen), not a carefully controlled series of trials that have undergone peer review. So we have no idea how these replication attempts were carried out and exactly what they found.

Then there's this strange comment from the Reuters article:

George Robertson of Dalhousie University in Nova Scotia previously worked at Merck on neurodegenerative diseases such as Parkinson's. While at Merck, he also found many academic studies that did not hold up. "It drives people in industry crazy. Why are we seeing a collapse of the pharma and biotech industries? One possibility is that academia is not providing accurate findings," he said.

It's very strange to hear scientists with strong pharmaceutical industry ties lambasting academic researchers for their inaccuracy, and blaming declining pharmaceutical profits on academic research. There is no doubt that scientific research is plagued with publication bias, perverse incentives, and pressure - as John Ioannidis and others have amply documented. But this research also shows that one of the strongest influences on research is pharmaceutical industry pressure to produce and/or publish only positive results. It is widely known that research conducted with pharma sponsorship is far more likely to suppress negative findings and manipulate results to produce positive ones.

So I wonder what the endgame is here. Are these whistleblowers trying to shine even more light on the difficulty of replication in scientific research, regardless of funding source? Or is this somehow an attempt at misdirection, in order to deflect some of the justified criticism of pharmaceutical industry influence on research? It's very hard to tell at this point.
posted by googly at 5:09 PM on March 30, 2012 [21 favorites]


Somewhere, inside his underground cellular regeneration chamber, a single tear runs down Ray Kurzweil's face.
posted by El Sabor Asiatico at 5:13 PM on March 30, 2012 [4 favorites]


I'd assume that "doing science" is several items from the top of the pharma company's list of priorities...

Pssst. I think Amgen is a pharma company.
posted by Kid Charlemagne at 5:16 PM on March 30, 2012


There is no doubt that scientific research is plagued with publication bias, perverse incentives, and pressure - as John Ioannides and others have amply documented.

Btw, I did an FPP on Ioannidis's findings back in 2010: Lies, Damned Lies, and Medical Science. Given that he found as much as 90% of the research flawed, I don't think the current findings are particularly surprising, even if they are shocking: "He charges that as much as 90 percent of the published medical information that doctors rely on is flawed. His work has been widely accepted by the medical community; it has been published in the field's top journals, where it is heavily cited; and he is a big draw at conferences."
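For anyone who wants to see how Ioannidis gets to numbers that grim, here's a back-of-envelope sketch (the parameter values are mine, chosen for illustration; they are not from his paper): the chance that a "significant" finding is actually true depends on the prior odds that the hypothesis is true, the study's power, and the significance threshold.

```python
# Illustrative sketch of the Ioannidis (2005) argument. The numbers fed in
# below are made up for demonstration, not taken from his paper.

def ppv(prior_odds, power=0.8, alpha=0.05):
    """Positive predictive value: P(hypothesis true | significant result).

    prior_odds: ratio of true to false hypotheses in the field (R in the paper)
    power:      P(significant | hypothesis true), i.e. 1 - beta
    alpha:      P(significant | hypothesis false), the significance threshold
    """
    true_positives = power * prior_odds
    false_positives = alpha  # per unit of false hypotheses tested
    return true_positives / (true_positives + false_positives)

# A well-powered test of a plausible hypothesis:
print(round(ppv(prior_odds=0.5, power=0.8), 2))   # 0.89
# An underpowered test of a long-shot, exploratory hypothesis:
print(round(ppv(prior_odds=0.05, power=0.2), 2))  # 0.17
```

With long-shot hypotheses and weak power, most "significant" results are false positives before any bias or fraud even enters the picture.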
posted by VikingSword at 5:20 PM on March 30, 2012 [2 favorites]


There's too much money to be made in not curing cancer.
posted by mr_crash_davis at 5:21 PM on March 30, 2012 [4 favorites]


There's too much money to be made in not curing cancer.

True, there is more incentive to manage rather than cure. Still, without being able to tie problematic research to therapies in use, this is a bit of a murky result.
posted by Blazecock Pileon at 5:47 PM on March 30, 2012


I'm completely uninterested if there is no data. Without data, this is just speculative crap.

Sorry to be so harsh, but the similarity of the article to articles claiming "Eating Broccoli will keep you from getting [specific cancer]" is a bit too close for my taste. Sensationalistic.
posted by sciencegeek at 6:20 PM on March 30, 2012 [4 favorites]


Here is a link to the actual article. MeMail me with an email I can send a PDF to if you don't have access and require it to participate in this academic discussion that we are currently having.

In the actual article, the fuckers don't bother to actually cite the list of publications they found wanting -- no supplementary information, no nothing. Thus they also don't cite the data they are relying on to declare irreproducibility, don't give those authors (who likely understand their own work better) a chance to defend it, and don't give anyone else a chance to see how cherry-picked those articles were.

The original article in Nature is irresponsible at best, particularly since the authors must have known that bullshit like the Reuters article we're all skimming would be written from it.
posted by Blasdelb at 6:21 PM on March 30, 2012 [11 favorites]


The article in the FPP is a terrible example of some of the worst of science journalism based on an exploitation of the worst aspects of how Nature accepts papers.
posted by Blasdelb at 6:23 PM on March 30, 2012 [3 favorites]


Waiting for the war on war metaphors.

You do know there already is one, right? A lot of people in medicine feel that war metaphors are negative and set up a false scenario. You're not fighting cancer. You're living with it until it's killed or you are. Your immune system is not rallying the troops to fight off invaders. It's prioritizing the creation of certain kinds of cells you need to destroy the cells that are trying to destroy you.

The problem with war metaphors is that they make it seem you'd still be alive if only you'd fought a smarter, better battle.
posted by cjorgensen at 6:31 PM on March 30, 2012 [9 favorites]


don't give those authors (who likely understand their work better) a chance to defend it

Did you read the Amgen guy's statements? The authors didn't want a chance to defend their work; they generally required that no contrary results be published and their own papers not identified.
posted by Justinian at 6:36 PM on March 30, 2012 [2 favorites]


I'm not normally big on the pharmaceutical conspiracy theories, but it is noteworthy that this is coming from a former Amgen executive who refuses to actually name the flawed studies, leaving us no evidence that what he claims is in any way based in reality. And on an entirely unrelated note, the stock ticker at the bottom of that page seems to indicate that Wall Street is pretty happy about this revelation...
posted by GIFtheory at 6:46 PM on March 30, 2012 [2 favorites]


I guess this is supposed to provoke somebody else into redoing the retesting of the experiments?
posted by LogicalDash at 6:56 PM on March 30, 2012


As usual, the analysis - and comments - from In The Pipeline are excellent.
posted by lalochezia at 6:57 PM on March 30, 2012 [2 favorites]


So I wonder what the endgame is here. Are these whistleblowers trying to shine even more light on the difficulty of replication in scientific research, regardless of funding source? Or is this somehow an attempt at misdirection, in order to deflect some of the justified criticism of pharmaceutical industry influence on research? It's very hard to tell at this point.

Maybe Begley and his research team are simply following basic ethics?

And that if you were in his position with the same 47 null result findings, you'd do the same thing?

Consider the long game: if Begley were a black hat, and lying through his teeth, and these 47 results were actually replicable, then the big lie would be exposed, 5-10 years later. If you were Begley, or his line manager, would you sign off on the big lie for marketing purposes?

Also, Begley had a team under him. If this were the big lie, we'd have a few whistleblowers on the team who would come forward.
posted by sebastienbailard at 6:57 PM on March 30, 2012 [1 favorite]


    mr_crash_davis: There's too much money to be made in not curing cancer.
You have no fucking idea. Seriously, no fucking idea.

The article in the FPP neglects to mention that the criticism in the Nature commentary was limited to pre-clinical trials, but there you go, this is an industry stooge complaining about all the research that Uncle Sam does for them for free.

The people who do these pre-clinical trials won't go out of business if the cancer they work on gets cured; THERE WILL ALWAYS BE ANOTHER CANCER. It is a basic fact of how multi-cellular life and evolution work. When people talk about 'curing cancer', they are talking about a problem that billions of years of evolution hasn't solved and another few billion won't. The biggest difference between the 'War on Cancer' and the 'War on Terror' is that the War on Terror is theoretically winnable; there will always be funding for cancer. However, so long as our system is set up the way it is, the PIs who run those trials could always make twice as much as industry researchers, or physicians, doing more rewarding, less aggravating, more risk-averse, and better supported work.
    Justinian: "Did you read the Amgen guy's statements? The authors didn't want a chance to defend their work; they generally required that no contrary results be published and their own papers not identified."
That is an awfully convenient excuse for Amgen, I'm surprised to see it taken seriously and at face value. Even if a PI were actually craven enough to write a non-disclosure agreement like that, Amgen didn't need to sign it. By not publishing the results of their work Amgen gets to use their findings to determine which drugs to develop while forcing their competitors to travel blindly as well as hide the bullshit that is inevitably associated with the percentages they generated. Amgen has a direct financial interest in producing the most dramatic results they can, it makes the money they invested in the project look well spent and it makes them look good to investors.
    LogicalDash: "I guess this is supposed to provoke somebody else into redoing the retesting of the experiments?"
No, it is supposed to make Amgen money, and as GIFtheory pointed out, it seems to be working.
posted by Blasdelb at 7:00 PM on March 30, 2012 [17 favorites]


So, basically, author complains he cannot reproduce results, but withholds information which would allow anyone to attempt to reproduce his results.
posted by unSane at 7:12 PM on March 30, 2012 [5 favorites]


Maybe Begley and his research team are simply following basic ethics?

I'd like to believe it, but: (1) this problem is hardly news to anyone who follows biomedical research (cf. the Ioannidis and Lehrer references from my first comment); and (2) why didn't they go public with the totality of their research? A commentary is basically an opinion piece. If I published something saying "I've conducted an internal review of a bunch of climate science and found that it is all wrong, but I won't even name the studies that I reviewed," would you consider me a champion of ethical research? I wasn't commenting on whether they were right or wrong in their conclusion - they probably are - I was just wondering why they chose to release this information in the way that they did.

If Begley were a black hat, and lying through his teeth, and these 47 results were actually replicable, then the big lie would be exposed, 5-10 years later.

Actually, no. If, as blasdelb observes, they didn't publish the names of those 47 papers, no one will ever find anything out.

Also, Begley had a team under him. If this were the big lie, we'd have a few whistleblowers on the team who would come forward.

Oh my. I think you drastically overestimate the likelihood of whistleblowing. No one working under the scientist who fabricated 21 studies about the efficacy of Vioxx ever blew the whistle on him.
posted by googly at 7:15 PM on March 30, 2012 [1 favorite]


Underlying the issues identified in the article is the fact that, despite what you were taught about the scientific process & the purpose of publication in high school, very few scientific studies are independently tested. And, as noted, there is a real bias against publishing results that run contrary to previously-published research.

Got results that run counter to accepted wisdom or previously-published results? No matter how strongly you confirm them, you're fighting uphill on a very hard road. It's got to be something either (a) intriguingly unusual or (b) demolishingly comprehensive before you can even think about submitting it for publication.

Peer-review of submissions? That mostly exists to weed out poor design, poor methodology, or overstatement of results - but, since reviewers are often The Names in the field, sometimes also retards publication of good results that run counter to accepted wisdom.

Independent replication/confirmation of experiments, &/or testing of results? Very rarely done in most fields. At best, you'll get some confirmation of general outcomes through vaguely similar experiments conducted independently - "We used method A to examine B in organism C, and obtained result D. These results are similar to those obtained by Pinback et al., who used method W to examine X in organism Y and determined Z".

Actual replication of experiments & independent confirmation/refutation of results? Very rare in many fields. I would have liked to believe that medicine was something of an exception to these general rules, but I'm not at all surprised that it's not.
posted by Pinback at 7:29 PM on March 30, 2012 [2 favorites]


The "fraud" comment reminded me of the Anil Potti case. The short version is that a paper came out of a research group at Duke claiming to be able to predict drug response of cancer cell lines using gene expression patterns (by microarray). This was a big deal. But two scientists at MD Anderson (Baggerly and Coombes) were unable to reproduce the results, and when the original authors didn't cooperate, they performed a "forensic bioinformatics" analysis of the original paper, where they found numerous inconsistencies. While these could have been honest errors, most damningly, even after correcting all these problems, the authors listed some genes as predictors that still could not be reproduced by Baggerly and Coombes. Moreover, some of these genes were not even on the gene chip used in the study. And finally, doing the analysis on the correct data abolished the predictive power of the study. The full study is here.

The worst part was that there were ongoing clinical trials based on the original studies, and while they were briefly halted after this paper came out, they were reinstated after an investigation by Duke. Soon, though, it came out that Anil Potti had faked being a Rhodes Scholar on his CV (no, really), which sparked a sudden and dramatic re-evaluation of his work.

There's a 60 Minutes show (link is to transcript; warning, pops up a print dialog) about it as well as a NYT article. I definitely don't think this kind of outright fabrication is common but it does happen, and what I think this illustrates is that even in what turned out to be a totally blatant example the deception ended up being surprisingly hard to unroot.
posted by en forme de poire at 7:31 PM on March 30, 2012 [3 favorites]


Just finished reading Begley & Ellis' article in Nature 483:531-533 (2012).

I wonder if any of those accepting the truth of the Reuters article based on Begley & Ellis are aware of this small incongruity: Begley & Ellis present no credible data to support their claim that 47 of 53 research studies have, in effect, no credible data. No reference to the particular studies evaluated. No data of their own showing the claimed results. Just unsubstantiated claims, opinions, and a maundering lecture to others to follow proper procedures that they themselves did not follow in their Comment.

Am I wrong in thinking that the standard of proof should be the same for Begley & Ellis as it is for the work they criticize?

"Bombshell investigation"? I think not. Nothing but mooseshit and snowballs, to be generous. Which is not to deny that much research is ultimately shown to be incorrect. It's just that Begley & Ellis (2012) have not done that in their Nature Comment.
posted by dmayhood at 8:30 PM on March 30, 2012 [5 favorites]


He said they'd done it six times and got this result once, but put it in the paper because it made the best story.

How is that not unethical?
It is unethical, but only because this case is an oddity where the people publishing the successful trial actually know about the failed trials. Usually it's 6 different research teams doing the 6 different trials, and publication bias ensures that only the irreproducible success becomes well known.

Even publication bias can border on the unethical, though. Do you want the journal you edit to be the most highly cited? Then you had better be "publishing only the best original research", and so your policy needs to be that your journal "does not publish replication studies".
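The "six tries, one hit" anecdote upthread is worth running the numbers on. A quick sketch (my own arithmetic, assuming independent experiments judged at p ≤ 0.05):

```python
# If an effect doesn't exist at all, how often does a batch of experiments
# still produce at least one "significant" result? (Independence assumed.)

def p_at_least_one_false_positive(n_experiments, alpha=0.05):
    return 1 - (1 - alpha) ** n_experiments

print(round(p_at_least_one_false_positive(6), 3))   # 0.265
print(round(p_at_least_one_false_positive(20), 3))  # 0.642
```

One hit in six tries is roughly what pure chance delivers for a nonexistent effect, which is exactly why the one hit shouldn't be the run that gets published.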
posted by roystgnr at 8:50 PM on March 30, 2012 [6 favorites]


Science gets respect precisely because of replicability. It is as fundamental to the scientific process as it gets. And yet, undeniably, there is systemic bias against conducting replication experiments - because of funding, because of time, because of perverse incentives, and for many other reasons.

Isn't that fact by itself troubling?

Regardless of whether the "47 of 53" claim has any merit, the fact is, that there is a crying need to replicate studies. But precisely because so few such studies are done, we have trouble evaluating whether the "47 of 53" claim is even plausible... which is a pretty recursive dilemma. Of course, investigations such as the ones conducted by Ioannidis are not exactly reassuring.

It would seem that for really important results - ones which are regarded as "landmark", and on which many further investigations are based - it would be standard operating procedure that OF COURSE such results MUST be replicated before being accepted and cited by a thousand other investigators, and before millions of dollars are spent on cures and procedures and drugs that are supposed to be based on this landmark result. And yet, that is not what happens. Isn't that astounding? If I claimed tomorrow that I had just seen Bigfoot in my neck of the woods, I would not expect my claim to be published by Nature and subsequently cited and discussed as if established fact, with no one bothering to verify it.

When reading any study, I always pay attention to methodology, design and limitations. It looks now that there really is no reason to accept any study - no matter the reputation of the investigators, the institution or the publication - without independent replication.

Let us remember the motivation proffered by Begley: before he spends millions upon millions of dollars investing in drug trials based on "landmark" studies, he wants at least to be sure that the result itself is not spurious. That seems eminently sensible, and a far more powerful and logical motivation for spending considerable resources on this investigation (fundamental economic interest!) than a dastardly plot to discredit the academic community and bolster the reputation of big pharma.
posted by VikingSword at 8:50 PM on March 30, 2012 [2 favorites]


Hrm. There is some funny stuff from people in this article.

George Robertson, quoted in article: "It drives people in industry crazy. Why are we seeing a collapse of the pharma and biotech industries? One possibility is that academia is not providing accurate findings."

"Collapse of the pharma and biotech industries?" That sounds like a particular variety of hogwash to me, at least looking at this - is Mr Robertson really saying that "the most profitable of all industries in the US," which year on year sees record profits that are still increasing, is collapsing?

Oh, but he used to work at Merck, so I suppose we ought to trust him.
posted by koeselitz at 10:10 PM on March 30, 2012 [1 favorite]


It looks now, that really there is no reason to accept any study, no matter the reputation of the investigators, the institution and the publication - without independent replication.

VikingSword, you just accepted this particular "study" despite its lack of evidence. I don't ask for it to be replicated; I just want to see some data. That it was accepted for publication by Nature, even as a comment, just shows that Nature is willing to publish what is tantamount to a Bigfoot sighting with no photos, no casts of footprints, no hair sample analyses, no scats and no body. This article did nothing to advance what it claims to want to rectify because it is simply unsubstantiated. We are asked just to accept the authors' word on the subject.

It is not true that negative results can't get published. If they had demonstrated with actual evidence that the vast majority of studies in the area of interest could not be replicated, believe me that would get journal space. As it is, Nature was willing to go ahead and let them publish their BS (Bigfoot sighting) anyway. A more interesting question is: why?
posted by dmayhood at 10:25 PM on March 30, 2012 [3 favorites]


koeselitz: when you break down the data, it's actually pretty dire. Most of those profits come from a handful of patented drugs, which are going to expire soon. There isn't really anything on the horizon that's going to replace those drugs. Thus, the looming collapse.

Did you know the CEO of Pfizer is a trial lawyer? They've cut funding for research, but it's not clear that pouring money into R&D would have done them much good.

Anyway, it's something of a problem for science that only positive results get press. A brilliant scientist could spend years looking at the wrong thing, and a dumbass could luck into something major.
posted by delmoi at 10:32 PM on March 30, 2012


admitted that they ran the experiment 6 times and only got the result once, but that was the result they went with

armchair fix-everything proposal: have the FDA require that any study or trial that they use as evidence for approval be registered with them before it starts; and that all studies registered must have their results published, even if they're canceled or delayed. There are still plenty of other ways to game the system, but at least it would cut down on the "run the trial 20 times and report the one that had p≤.05" shenanigans.
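To make the scale of those shenanigans concrete, here's a quick Monte Carlo sketch (illustrative only, my own toy model): run k experiments where the true effect is zero, report only the best p-value, and see how often it clears the bar.

```python
# Monte Carlo sketch of "run the trial 20 times, report the best one".
# Under the null hypothesis a p-value is uniform on [0, 1], so we can
# simulate each experiment as a single uniform draw.
import random

def best_p_of_k_null_trials(k, rng):
    # Smallest p-value across k independent null experiments.
    return min(rng.random() for _ in range(k))

rng = random.Random(0)
n_sims = 100_000
hits = sum(best_p_of_k_null_trials(20, rng) <= 0.05 for _ in range(n_sims))
print(hits / n_sims)  # close to the analytic 1 - 0.95**20, about 0.64
```

Pre-registration doesn't stop the failed runs from happening; it just stops them from quietly disappearing.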
posted by hattifattener at 11:40 PM on March 30, 2012 [4 favorites]


Just like that recent repudiation of the glass of wine a day good for health research

Oh, is it a BOTTLE a day now? Thank god. I need that daily bottle of wine to help cope with my health anxiety.
posted by infinitywaltz at 12:17 AM on March 31, 2012 [3 favorites]


Having worked in two different research labs at top 10 US research universities, I have very little faith remaining in published findings outside of a few hard science disciplines. I'm thinking about physics and chemistry, and that is only because I assume the results there are clear cut enough that they lend themselves to scrutiny, but I could be wrong. The entire process has gone completely off the rails, and there is rampant publishing of data which has no chance of ever being reproduced.
posted by sophist at 12:47 AM on March 31, 2012 [1 favorite]


A brilliant scientist could spend years looking at the wrong thing, and a dumbass could luck into something major

Um, no, no they couldn't. That scenario is so incredibly unlikely I'd be surprised if it happened even once. Science does not happen in a vacuum, you don't get to just do whatever you want for years at a time without input (funding issues stop that if nothing else) and it takes more than one good experiment to make a career. And anyone working for years without producing results is not, by definition, "brilliant" anyway.
posted by shelleycat at 12:48 AM on March 31, 2012


Some authors required the Amgen scientists sign a confidentiality agreement barring them from disclosing data at odds with the original findings. "The world will never know" which 47 studies -- many of them highly cited -- are apparently wrong, Begley said.

What is this I don't even
posted by erniepan at 1:34 AM on March 31, 2012


A couple of thoughts: 53 "landmark" basic studies is a specific and low enough number that people in the field should be able to guess which ones.

And surely all drug companies do what Amgen said they did, replicate published research and see what looks promising. There must be plenty of "me too" corroboration if Begley's comments have any substance.
posted by epo at 2:20 AM on March 31, 2012


47 out of 53 so called "landmark" basic studies on cancer

Maybe he screwed up himself? 50 landmark papers would span a broad range of techniques and methods; I doubt he could replicate all of that in a reasonable time.
I worked in science myself, and it happened that people had problems replicating our (non-cancer) results, since they started with the wrong assumptions and tools and were often too lazy to dig through all the old literature (those groups then often sent a visiting student to our lab to get trained).

I think it is highly unlikely that 47 out of 53 landmark basic studies were screwed.

By the way, this sounded like a preliminary result, too good to be true:

One Drug to Shrink All Tumors


Just a fact on the side, "shrinking" a tumor does not necessarily increase life expectancy.
posted by yoyo_nyc at 2:20 AM on March 31, 2012 [2 favorites]


This is troubling.
posted by Estraven at 2:46 AM on March 31, 2012


Anyway, it's something of a problem for science that only positive results get press.

That, and the fact that the working definition of "positive result" is "marketable product" or "something positively affecting some financial ratio / bottom line".

An "original failure", i.e. making an error that 1. hasn't been made previously and 2. can be and has been replicated, is possibly a discovery in and of itself; hence it's not a failure, it's part of the trial-and-error process. Yet it's often seen only as a "failure".

So I doubt it would make headlines as a "success-failure story that will raise a stock quote", which is what financial people want from any company (unless they've placed a bet against it).

Googly has posted this article from the New Yorker on the topic of replication; it was an interesting read, worth re-linking.
posted by elpapacito at 5:20 AM on March 31, 2012 [1 favorite]


Statistically, you need more power to replicate a result than to find it in the first place. People often forget this because almost nobody does power analysis, and then they are puzzled as to why they were lucky once but not twice.
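A minimal sketch of that arithmetic (the z-test framing and numbers here are my own, stdlib only): if you take the original effect estimate at face value as the true effect, an exact same-size replication of a result that just squeaked past p = .05 is literally a coin flip.

```python
from statistics import NormalDist

Z = NormalDist()                     # standard normal
alpha = 0.05
z_crit = Z.inv_cdf(1 - alpha / 2)    # ~1.96 for a two-sided test

def replication_power(z_observed: float) -> float:
    """Chance an exact same-size replication reaches p < .05,
    assuming the original effect estimate is the true effect."""
    return 1 - Z.cdf(z_crit - z_observed)

# A result that just barely cleared p = .05 ...
print(round(replication_power(1.96), 2))  # ~0.5, a coin flip
# ... versus a genuinely strong one:
print(round(replication_power(3.5), 2))   # ~0.94
```

And since publication filters for significance, the observed effect is usually inflated relative to the true one, so real replication odds are even worse than this sketch suggests.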
posted by srboisvert at 7:39 AM on March 31, 2012 [1 favorite]


Did you know the CEO of Pfizer is a trial lawyer?

Yes. Pfizer has found it far more profitable to buy small labs that make drug discoveries, rather than do the research themselves.

And, in our current economic climate and system -- in our current ways of rewarding economic behavior, he's exactly correct that it is far more profitable.

So, those small labs are looking for drug discoveries that can be sold profitably, because that's how they win. Again, they are behaving exactly correctly given the system.

That's the core problem with capitalism. It assumes that increase in capital profitability is the one and only correct measure of economic success. It assumes that every aspect of a transaction can be priced, and priced correctly, and actually traded.
posted by eriko at 7:48 AM on March 31, 2012


Where there is a lot of money, a lot of fraud will be found as well. It's a sort of cancer in its own way.
posted by Renoroc at 7:58 AM on March 31, 2012


Am I wrong in thinking that the standard of proof should be the same for Begley & Ellis as it is for the work they criticize?

Yeah, because it's a comment, which is an opinion piece. It's the science journal version of an op-ed --- the point is to state an opinion and spark controversy.

As for all the conspiracizing about the authors' association with big pharma discrediting their opinion --- big pharma exists to make money, and that has a negative and distorting effect on the research process in many ways. But I think we are all in agreement that they exist to make money. Patents run out, and they are in a blockbuster business --- they need to search constantly for their next big thing. This is their nature. Taking that into consideration, it certainly seems squarely in their self-interest not to waste time on will-o'-the-wisps, and therefore it doesn't seem at all odd to me that their head guy would try to reproduce basic findings before committing considerable resources to creating a usable therapy based on them.

He could be lying when he says that he's striking out all over the place, even using only the crème de la crème: widely cited studies in well-respected journals. But I don't quite understand what he gets out of it if he is. I'd trust a hitman telling me that a certain brand of bullets jams a Glock. Your man from Amgen needs ammo to get his job done too, and he's saying the clips are full of duds....
posted by Diablevert at 7:58 AM on March 31, 2012


Nature itself offers a supportive editorial mentioning the Comment piece:
[T]he overall impression the article leaves is of insufficient thoroughness in the way that too many researchers present their data.

The finding resonates with a growing sense of unease among specialist editors on this journal, and not just in the field of oncology. Across the life sciences, handling corrections that have arisen from avoidable errors in manuscripts has become an uncomfortable part of the publishing process.

The evidence is largely anecdotal. So here are the anecdotes: unrelated data panels; missing references; incorrect controls; undeclared cosmetic adjustments to figures; duplications; reserve figures and dummy text included; inaccurate and incomplete methods; and improper use of statistics — the failure to understand the difference between technical replicates and independent experiments, for example.
posted by Diablevert at 8:10 AM on March 31, 2012


Has anyone seen any systematic proposals for revamping the medical research system? The obvious other problem is that it is very difficult to get funding for non-patentable medicines, much less therapeutic or lifestyle solutions to problems that might be cheaper and more effective than medicine (eg for diabetes).

Why aren't Kaiser and the medical insurance companies themselves funding research into much cheaper generic or therapeutic medical alternatives?
posted by msalt at 10:02 AM on March 31, 2012


Where there is a lot of money, a lot of fraud will be found as well.

But there's not a whole lot of money in university research. I've met people who used to work at university labs where they would wash and reuse their pipette tips!

If that doesn't blow you away, think of it this way - if you had a plate that yesterday was contaminated with botulism toxin (about 50 ng is a lethal dose) would you wash it off real good or dispose of it? OK, now the same thing but 0.25 ng of contaminant is enough to screw your experiment to hell and gone?
posted by Kid Charlemagne at 11:19 AM on March 31, 2012 [1 favorite]


There's too much money to be made in not curing cancer

If only being the person who Cured All the Cancers got you a million dollars and a tiny golden portrait of some Swedish guy. Then I bet you'd see cures!

(Here's a hint: Nobody goes into academic research for money.)
posted by Comrade_robot at 12:15 PM on March 31, 2012 [2 favorites]


Did you know the CEO of Pfizer is a trial lawyer?

Assume you're a scientist. OK, space aliens came down and gave you a thumb drive with a protein sequence for an enzyme that is the cure for cancer. How many people do you think you could treat using the data on that thumb drive?

I think you might be able to do five or ten yourself but you'd have to keep a low profile. But if you want to put an actual drug on the market that would really make a difference to people there is a mountain of logistics, contracts, documentation and agency interaction between you and your goal and that is, supposedly, the sort of thing MBAs and JDs are really good at.

If you want to see someone who knows where the bones are buried rant about what's wrong with the drug industry, feel free to search my comment history. Basically: there are too many moving pieces involved in drug discovery/development; the patent structure we now have punishes you for trying to do things right; most corporations have a mental model of the shareholder as a sociopathic idiot savant and are in an abusive relationship with this imaginary friend of theirs; the agencies lack the manpower to be as flexible or as thorough as they need to be, so industry timelines are always white-knuckle, and a lot of effort is spent on petty crap that has little to do with actual quality but a lot to do with looking good to the auditor if this is the one thing he looks at; the Peter principle is hard at work (I've yet to meet someone who really understood "computer validation," for example); people are very bad at statistics and very good at seeing patterns; etc.

I don't think anyone on Metafilter has the creds to be more bitter at Pfizer than I do but I'll be the first to say that the issue was almost never people deliberately doing bad science.

Doing science (and everything else) without stopping to think about things is a different matter.
posted by Kid Charlemagne at 12:29 PM on March 31, 2012 [3 favorites]


Regarding en forme de poire's (otherwise fantastic ) comment on Anil Potti, "I definitely don't think this kind of outright fabrication is common but it does happen, and what I think this illustrates is that even in what turned out to be a totally blatant example the deception ended up being surprisingly hard to unroot."

Great comment in general, although I wonder if this isn't a case in which active deception is actually much harder to detect than sloppy science. I'm certainly not the first person to point this out, but all the mechanisms built into peer review (and the evaluation of published results in general) more or less assume that while the author may be stupid, they aren't lying.

If I make some double-counting or selection bias mistake, or fail to perform a necessary experimental check, that's easy to catch. (Though, it sure as hell still happens with depressing frequency, even in statistics-heavy fields where there's no excuse for not knowing better.) If I instead publish simulated data, at least if I do it well, there's no way anyone else could figure that out except by replicating the experiment from scratch, or waiting for someone in my lab to notice.

The Rhodes scholar thing is pretty amazing. . . though, to be fair, would you check to ensure that someone who claimed that on a CV wasn't lying? I'll bet quite a few people looked carefully into how significant his contribution actually was to his most important papers and never once thought to check that his awards actually were awarded. It seems like such a gamble that you'd think nobody in that position would risk it. Wild, nonetheless.
posted by eotvos at 12:55 PM on March 31, 2012


I just got home from a conference on conflicts of interest in research because my day job is navigating those as well as reviewing allegations of research misconduct at one of the institutions mentioned in this thread.

I am probably less defensive about this stuff than most of the scientists in this thread simply by virtue of the fact that it is specifically my job to ferret out the bad actors - but I can attest that they're right when they say that this article and its claims are supremely suspicious.

There are people out there who do this the right way. There is an anonymous allegant who works tirelessly to provide NIH with constant insight into research that may be plagiarized, fabricated or falsified (the three activities that comprise "research misconduct" as it is understood today). The allegant uses the pseudonym Clare Francis, and is a sort of scientific vigilante. He or she spends a great deal of time doing very detailed work establishing when and where there are problems with publications. Then, because there are good systems in place (which, ahem, do not include writing sensational axe-grindey articles), anonymity can be ensured for the claimant, and the ball is set rolling in order to get to the bottom of the issue in question.

Non-replicability is clearly a problem that has been talked about quite a bit lately. But here is how it should work:

1) Attempt experiment.
2) Fail to replicate experiment.
3) Identify point at which results begin to diverge.
4) Check yoself!
5) Discuss with researcher who originally performed experiment. Figure out what might not be working.
6) If still unsatisfied, allege that the results in question were fabricated or falsified.

Now, here's where I would come in. Guess what the first step is in reviewing such an allegation? Asking the researcher to provide the raw data backing up the research in question. When they do that, and it demonstrates that the experiment was done and did not deviate from generally accepted scientific practices, the case is closed. The above example of "we did the experiment six times but only got this result once" would be a great example of when our panel would arch our collective eyebrow and announce that in order to establish the integrity of the research, the experiment should be done again using generally accepted scientific practices. Where there is no data we would probably give the researcher the opportunity to replicate the experiment. (Unless we felt that this was the result of some sort of malfeasance.) But where there is evidence of data tampering or manipulation, there may be a finding of research misconduct.

We are pretty thorough when we receive allegations of research misconduct. We are tasked with protecting the reputation of the accused (and the accuser), but our ultimate duty is to the integrity of the research. We will read your emails. We will image the hard drives of every computer in your lab and comb them for clues. We will take, and sequester, every bit of raw data you have ever generated and you will only have supervised access to it. We take our job very seriously. This is not just figures in a journal article. This is people's lives.

So while I appreciate that research in higher education has problems, and that replicability is absolutely one of them, forgive me if I am highly suspicious of someone who is almost certainly aware of the proper channels for initiating some positive change and who instead employs sensational (and ultimately useless) tactics.
posted by jph at 4:34 PM on March 31, 2012 [7 favorites]


So while I appreciate that research in higher education has problems, and that replicability is absolutely one of them, forgive me if I am highly suspicious of someone who is almost certainly aware of the proper channels for initiating some positive change and who instead employs sensational (and ultimately useless) tactics.

So, is he lying or telling the truth?

Because you're calling him out on two different (and contradictory) issues:

You're saying, firstly, that he falsely called good science bogus. And secondly, he actually uncovered a stack of bogus studies, but didn't obey the proper forms by reporting them in a non rock-the-boat manner.

Never mind that he gave his word to the original researchers that he would not report them.

Some authors required the Amgen scientists sign a confidentiality agreement barring them from disclosing data at odds with the original findings.

Does this actually happen in your field or is this more confabulation on Begley's part?

Because if this actually happens, that's moral rot.

And if this actually happens in your field, then Begley is doing the right thing in pulling the fire alarm. He's going through the proper channel.
posted by sebastienbailard at 9:02 PM on March 31, 2012


Um, no, no they couldn't. That scenario is so incredibly unlikely I'd be surprised if it happened even once.
The discovery of the cosmic microwave background radiation gets mentioned a lot. Penzias and Wilson found it with a horn antenna built for satellite communications, and at first had no idea why it was there. "Dumbasses" is probably not the right term, they were obviously smart people. But they weren't cosmologists, and they were working on an engineering problem.
And anyone working for years without producing results is not, by definition, "brilliant" anyway.
Tell that to the people who were looking for the Higgs boson. Well, except in physics a null result is still considered a discovery: "We looked for this, and didn't find anything" is still important. The failure to detect the luminiferous aether was an important result in physics as well.
Why aren't Kaiser and the medical insurance companies themselves funding research into much cheaper generic or therapeutic medical alternatives?
Higher medical costs mean higher premiums, which mean more absolute profit if the profit percentage stays the same. So why would they?
Assume you're a scientist. OK, space aliens came down and gave you a thumb drive with a protein sequence for an enzyme that is the cure for cancer. How many people do you think you could treat using the data on that thumb drive?

I think you might be able to do five or ten yourself but you'd have to keep a low profile. But if you want to put an actual drug on the market that would really make a difference to people there is a mountain of logistics, contracts, documentation and agency interaction between you and your goal and that is, supposedly, the sort of thing MBAs and JDs are really good at.
Eh. If you really had a cure for cancer that worked as well as, say, penicillin did for bacterial infections, it would be real obvious really quickly. You would only need to find one government willing to allow clinical trials, and once that was done you could hand it off to generic drug makers to manufacture. Charities could apply public pressure for quick FDA approval.

Or, as long as we're imagining a sci-fi scenario, you could engineer a rhinovirus to produce the enzyme and just release it, à la 12 Monkeys. Except it cures all the cancer in the world, instead of killing everyone.
posted by delmoi at 12:35 AM on April 1, 2012


No no, sebastienbailard, you're misunderstanding the point here. The point is that to establish the integrity of the scientific record, one must be SPECIFIC about the problems in the scientific record that require clarification and/or correction. Without a doubt, there are myriad problems in research that should be corrected. That's not a value judgment like "moral rot" but just a fact, and one that underlies the peer-reviewed scientific method. In fact, one of the primary defenses to allegations of research misconduct is "honest error," which is, in my experience, incredibly common.

I'm not saying that he falsely called good science bogus. In fact, it appears that he hasn't really called any specific science bogus because they agreed not to reveal the studies questioned by their "study."

What I am saying is that IF they uncovered bogus research, then this was quite possibly the worst way to handle that information. If they were truly interested in the integrity of the research, they would have followed the guidelines that I laid out above rather than developing a "study to test replicability of landmark findings"; in short, if they really cared, then they should have made formal allegations about the problematic findings they identified, rather than promising not to reveal the studies in question. (As I mention above, anonymous allegations are acceptable in this arena. They could never be identified. And furthermore, if they truly identified information that discredited the research, they'd be largely protected from any reprisal by the researcher even on the very outside chance that they were identified.)

It is important not to jump to conclusions where this stuff is concerned. Non-replicability is not necessarily a sign that the study was bogus. Consider the following:

John builds a spectacular model of Hogwarts Castle out of Legos and displays it prominently at the local Wizarding Convention. Everyone loves it. It is magnificent. He becomes an instant star among the fan base, and JK Rowling sends him an autographed copy of The Tales of Beedle the Bard.

Jane goes home to Omaha from the convention and thinks, "I need to make myself a Hogwarts!" She lights a candle in her shrine to JK Rowling, reads 16 books on Lego design and goes out to buy Legos. Omaha has a lot of Lego vendors. Jane has a lot of money. After looking over the photos of John's castle, Jane realizes that without a Nearly-Headless Nick figure, her model will be incomplete. She buys a headless Lego guy on eBay. She buys a Lego guy with various heads from every shop she can find. And none of them are right. Finally, in desperation, she turns to John. Where did he get his Nearly-headless Nick figure? Oh, he invented him. Right. Could he make her one? Maybe. Possibly. Under the right circumstances. It just depends on whether he can find the right materials or if his mom says he can mail something to Omaha or if Jane can even afford such a specialized piece or if the government says that he is allowed to share his invention with her.

There are a million places in that story where Jane might end up with something completely different from John's project, through no fault of John's. That's why we have to be careful not to immediately assume that John's research is bogus because Jane isn't able to do the same thing that he was.

As for the "confidentiality agreements concerning adverse findings" - the sword cuts both ways. I review about 600 contracts between researchers and pharmaceutical companies a year, and one of the most common sections to be included in those agreements are confidentiality agreements which can reasonably be read so as to bar the physicians and researchers from disclosing confidential information, including adverse findings and events.
posted by jph at 6:23 AM on April 1, 2012 [3 favorites]


What I am saying is that IF they uncovered bogus research, then this was quite possibly the worst way to handle that information. If they were truly interested in the integrity of the research, they would have followed the guidelines that I laid out above rather than developing a "study to test replicability of landmark findings"; in short, if they really cared, then they should have made formal allegations about problematic finding that they identified - rather than promising not to reveal the problematic studies in question.

I think you have failed to grasp a crucial bit of context: The Nature piece is not a study. It is a comment. It is not science, it is opinion. The authors did not set out to study the issue of whether many landmark papers were in fact irreplicable and publish the results; they set out to find new treatment methods and used landmark papers as a guideline to uncover innovative techniques. Thus they were perfectly willing to sign confidentiality agreements in return for raw data --- they weren't on a quest to improve science, merely to pan for useful drugs and treatments.

Moreover, the authors have not accused any of the authors of the problematic papers of falsifying results. Nature itself, in its supportive editorial, avers that by far the most likely cause of the issues is simple error, not deliberate falsification --- indeed, it goes so far as to state that it thinks pure sloppiness is the bigger problem.

Supposing the writers to be telling the truth about their experiences for a moment, it's an open question whether the mere fact that 47 of the 53 studies proved irreplicable doesn't in itself imply fraud in at least some cases. An actual for-real study of landmark papers which attempted to suss out bogosity would be interesting. This isn't it, and the authors aren't claiming it is. Their message is, instead, "This is a phenomenon we have come across in the course of our work which we find troubling, and we need to think about how to prevent it from occurring."
posted by Diablevert at 10:19 AM on April 1, 2012 [1 favorite]


So, is he lying or telling the truth?

I think he thinks he is telling the truth, otherwise he wouldn't put his name to paper. But without data, the scientific community can't repeat his work, which is kind of the crux of the problem.
posted by Blazecock Pileon at 11:27 AM on April 1, 2012


Lots of messenger-attacking going on in this thread. Go team status quo!

Given that neither of the authors is currently affiliated with Amgen, they are not in a position to release any of the data so I don't understand the criticism that they need to put up or shut up. And I'm pretty sure that Amgen has more to lose than gain from publishing that data so they won't release it.

The authors cite a recent article from a group at Bayer that came to a similar conclusion so they are not the only ones crying foul here. The first step is to admit there is a widespread problem.
posted by euphorb at 11:44 AM on April 1, 2012


Jane goes home to Omaha from the convention and thinks, "I need to make myself a Hogwarts!" She lights a candle in her shrine to JK Rowling, reads 16 books on Lego design and goes out to buy Legos. Omaha has a lot of Lego vendors. Jane has a lot of money. After looking over the photos of John's castle, Jane realizes that without a Nearly-Headless Nick figure, her model will be incomplete. She buys a headless Lego guy on eBay. She buys a Lego guy with various heads from every shop she can find. And none of them are right. Finally, in desperation, she turns to John. Where did he get his Nearly-headless Nick figure? Oh, he invented him. Right. Could he make her one? Maybe. Possibly. Under the right circumstances. It just depends on whether he can find the right materials or if his mom says he can mail something to Omaha or if Jane can even afford such a specialized piece or if the government says that he is allowed to share his invention with her.
Argument by metaphor is always a bad idea, IMO, because if you capture all the relevant information then your metaphor ends up being more complex than the thing you're talking about. If it's simple, then you end up leaving out key details.

The problem is that the people are not showing off a Lego sculpture, but rather just showing pictures and a paper about the Lego sculpture they claim to have made. When someone tries to build it, it just doesn't work out, and they can't even put together the same basic shape out of Legos (maybe it has five-fold symmetry or something).

It's certainly ironic that he didn't name the studies he couldn't replicate, but still... it wouldn't be that difficult to go through the major studies and see if they can be replicated.
posted by delmoi at 11:47 AM on April 1, 2012


To address the article more specifically: I don't really doubt that there's a lot of irreproducible research out there, throughout the literature but particularly in very "hot" areas where there is a rush to publish.

However, I think it would be a serious mistake to conclude that industry's "enlightened self interest" makes them immune to problems like this. After all, drug companies have been known to bury evidence of serious side effects, inflate effect sizes via publication bias, and even make up fake journals that portray them in a positive light (all recent-ish stories). There are problems at all levels.
posted by en forme de poire at 12:17 PM on April 1, 2012


>>Why aren't Kaiser and the medical insurance companies themselves funding research into much cheaper generic or therapeutic medical alternatives?

>Higher medical costs mean higher premiums, which mean more absolute profit if the profit percentage stays the same. So why would they?


Not so sure about that. First, Kaiser is not fee-for-service; they're a completely different financial model. Second, you're making the same mistake business leaders always make when they say that costs of regulation will always be "passed on to the consumer." You can't just raise prices forever, because you lose customers. Conversely, if Kaiser provided cheaper care, many employers would switch to them, which would definitely mean more absolute profit.
posted by msalt at 9:48 PM on April 1, 2012




This thread has been archived and is closed to new comments