Lies, Damned Lies, and Medical Science.
October 18, 2010 12:26 PM

'Much of what medical researchers conclude in their studies is misleading, exaggerated, or flat-out wrong.' Dr. John P. A. Ioannidis, adjunct professor at Tufts University School of Medicine, is a meta-researcher. 'He and his team have shown, again and again, and in many different ways, that much of what biomedical researchers conclude in published studies—conclusions that doctors keep in mind when they prescribe antibiotics or blood-pressure medication, or when they advise us to consume more fiber or less meat, or when they recommend surgery for heart disease or back pain—is misleading, exaggerated, and often flat-out wrong. He charges that as much as 90 percent of the published medical information that doctors rely on is flawed. His work has been widely accepted by the medical community; it has been published in the field’s top journals, where it is heavily cited; and he is a big draw at conferences.'

'Still, Ioannidis anticipated that the community might shrug off his findings: sure, a lot of dubious research makes it into journals, but we researchers and physicians know to ignore it and focus on the good stuff, so what’s the big deal? The other paper headed off that claim. He zoomed in on 49 of the most highly regarded research findings in medicine over the previous 13 years, as judged by the science community’s two standard measures: the papers had appeared in the journals most widely cited in research articles, and the 49 articles themselves were the most widely cited articles in these journals. These were articles that helped lead to the widespread popularity of treatments such as the use of hormone-replacement therapy for menopausal women, vitamin E to reduce the risk of heart disease, coronary stents to ward off heart attacks, and daily low-dose aspirin to control blood pressure and prevent heart attacks and strokes. Ioannidis was putting his contentions to the test not against run-of-the-mill research, or even merely well-accepted research, but against the absolute tip of the research pyramid. Of the 49 articles, 45 claimed to have uncovered effective interventions. Thirty-four of these claims had been retested, and 14 of these, or 41 percent, had been convincingly shown to be wrong or significantly exaggerated. If between a third and a half of the most acclaimed research in medicine was proving untrustworthy, the scope and impact of the problem were undeniable. That article was published in the Journal of the American Medical Association.'

'The question of whether the problems with medical research should be broadcast to the public is a sticky one in the meta-research community. Already feeling that they’re fighting to keep patients from turning to alternative medical treatments such as homeopathy, or misdiagnosing themselves on the Internet, or simply neglecting medical treatment altogether, many researchers and physicians aren’t eager to provide even more reason to be skeptical of what doctors do—not to mention how public disenchantment with medicine could affect research funding. Ioannidis dismisses these concerns. “If we don’t tell the public about these problems, then we’re no better than nonscientists who falsely claim they can heal,” he says. “If the drugs don’t work and we’re not sure how to treat something, why should we claim differently? Some fear that there may be less funding because we stop claiming we can prove we have miraculous treatments. But if we can’t really provide those miracles, how long will we be able to fool the public anyway? The scientific enterprise is probably the most fantastic achievement in human history, but that doesn’t mean we have a right to overstate what we’re accomplishing.”'
posted by VikingSword (65 comments total) 71 users marked this as a favorite
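
A quick check of the arithmetic in the second quoted paragraph, as a minimal sketch in Python (the counts come straight from the quote above):

    # Counts quoted from the Atlantic article above
    top_cited = 49          # most highly cited articles examined
    claimed_effective = 45  # claimed an effective intervention
    retested = 34           # claims that were later retested
    refuted = 14            # retests finding the claim wrong or exaggerated

    print(f"{refuted / retested:.0%}")  # -> 41%, the figure in the quote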
 
The question of whether the problems with medical research should be broadcast to the public is a sticky one in the meta-research community.

That's silly. Kind of creepy too.
posted by Sukiari at 12:34 PM on October 18, 2010 [3 favorites]


Isn't the first time a community was uncomfortable about showing doubt before a skeptical audience.
posted by TwelveTwo at 12:35 PM on October 18, 2010 [1 favorite]


Yo dawg I heard you liked medical research...
posted by 2bucksplus at 12:38 PM on October 18, 2010 [4 favorites]


Ben Goldacre has always done a good job of highlighting the importance of metastudies.
posted by Artw at 12:39 PM on October 18, 2010


Is it possible that his studies are exaggerated, or even flat-out wrong?
posted by Kabanos at 12:40 PM on October 18, 2010 [8 favorites]


"...as much as 90 percent of the published medical information that doctors rely on is flawed. His work has been widely accepted by the medical community..."
posted by mr_crash_davis mark II: Jazz Odyssey at 12:41 PM on October 18, 2010 [17 favorites]


"misdiagnosing themselves on the Internet"; That's right....doctors are more than capable of misdiagnosing us without any help.
posted by swimming naked when the tide goes out at 12:44 PM on October 18, 2010 [7 favorites]


also btw, fwiw: Fetishizing p-Values
posted by kliuless at 12:44 PM on October 18, 2010 [2 favorites]


The reason we should stick with scientific medicine, despite its lingering problems discerning truth from wishful thinking, is exactly because it contains and (more or less) accepts guys like Ioannidis, who say "Hey, this stuff is wrong." Surely I'm not the only one who's noticed that major research results in medicine are frequently announced and used immediately as the basis for treatments, only to later be discovered to be incomplete, misleading, much more limited than originally thought, or just flat out wrong? This is to be expected -- the human body is an incredibly complicated system, major parts of which are barely understood, or simply not understood at all.

What's important is to distinguish flawed science from non-science. Even science that is partially (or even mostly) corrupted by ego, error, wishful thinking, and fraud is better than the sort of non-science bullshit peddled by homeopaths and such. The former can improve, the latter cannot. I hope we don't throw out the baby of scientific medicine here just because someone is pointing out that the tub is still mostly full of bathwater.
posted by rusty at 12:46 PM on October 18, 2010 [33 favorites]


I'm not entirely sure that the way this article is written is too helpful with that.
posted by Artw at 12:49 PM on October 18, 2010


In many fields, the metastudies are much more replicable than the studies are, because there's no experiment to set up. This is especially true of medicine, where the setup costs for experiments are very, very high - and involve the sick and/or dying, enormous amounts of corporate money, and a genuine desire on the part of the experimenters to, yknow, cure the sick. Ioannidis' work should be easy to verify: access to the raw data, some knowledge of stats and experimental design, and you're set. Because of this, and because it would look really bad for him to be misinterpreting his own results (of all people!), I'm happier to trust his claims than I am many others.
posted by Fraxas at 12:49 PM on October 18, 2010 [1 favorite]


He charges that as much as 90 percent of the published medical information that doctors rely on is flawed.

Seems like doctors are subject to Sturgeon's Law just like everybody else.
posted by Dr Dracator at 12:52 PM on October 18, 2010 [3 favorites]


The problem is that, over long time scales, it might just not be possible to filter out all the confounding factors.

And this problem is not unique to medicine.
posted by effugas at 12:52 PM on October 18, 2010


Although it's important to know the scope of the problem (i.e., huge), many if not all of the specific flaws he points out have been known for a long time (e.g. looking at secondary markers instead of real outcomes, flawed comparisons against placebo or a weak therapy instead of the standard of care, doing meta-analysis of other studies instead of original research (and often doing it badly), and doing retrospective studies like chart reviews instead of prospective studies (and often doing those badly)).

For example, in 1996, Gilbert and Lowenstein investigated common problems with chart reviews and gave recommendations for better chart review methodology. Yet still today we see study after study after study doing chart reviews with no methodology given or a bad methodology.

And there are other people in the medical field decrying the poor quality of research. Jerome Hoffman, a professor of emergency medicine at UCLA, has been critically reviewing studies related to emergency medicine since 1977. He's not afraid to really rip into crappy studies. And he walks the walk. For example, one study he was involved in [pdf] (on radiography of cervical spine injuries) had an entire article published just on its methods.
posted by jedicus at 12:57 PM on October 18, 2010 [7 favorites]


how long will we be able to fool the public anyway?

fighting to keep patients from turning to alternative medical treatments such as homeopathy

There ya go.
posted by spicynuts at 12:58 PM on October 18, 2010


Homeopaths and other believers fucking love one-off studies with results that favour them. Meta-studies are like kryptonite to them.
posted by Artw at 1:01 PM on October 18, 2010 [11 favorites]


Also, evidence-based medicine, although important, cannot be the end-all of medical research. For example, there's never been a randomized, placebo-controlled trial of the effectiveness of parachutes when jumping from high altitude. This does not mean that we have no way of knowing whether parachutes are effective or not. Sometimes theoretical models, animal studies, and, yes, clinical judgment are not only valuable but indispensable.
posted by jedicus at 1:02 PM on October 18, 2010 [10 favorites]


I'd much rather have a science-based medicine (flaws and all) than a Jenny McCarthy-based medicine any day.
posted by Thorzdad at 1:04 PM on October 18, 2010 [6 favorites]


Homeopaths and other believers fucking love one-off studies with results that favour them. Meta-studies are like kryptonite to them.

I was answering the question "how long will we be able to fool the public". The article provides its own answer: "forever".
posted by spicynuts at 1:07 PM on October 18, 2010 [2 favorites]


Whew.
*Lights cigarette, orders double cheeseburger.*
posted by Killick at 1:09 PM on October 18, 2010 [2 favorites]


The reason we should stick with scientific medicine, despite its lingering problems discerning truth from wishful thinking, is exactly because it contains and (more or less) accepts guys like Ioannidis

Hmm, I thought the reason was because we didn't want to die. But certainly we can demand they do a better job with this research.
posted by delmoi at 1:13 PM on October 18, 2010


Summary of the article: "80 percent of non-randomized studies (by far the most common type) turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials. "
posted by storybored at 1:19 PM on October 18, 2010 [1 favorite]


...people who aren't knowingly quacks are using non-randomized studies? Do medical researchers just make up their experimental protocols as they go?
posted by Pope Guilty at 1:20 PM on October 18, 2010 [1 favorite]


Also, evidence-based medicine, although important, cannot be the end-all of medical research. For example, there's never been a randomized, placebo-controlled trial of the effectiveness of parachutes when jumping from high altitude.

[This is good]

But, teaching point. I have to point out that it is wrong.

First: There is plenty of data at low altitude -- with enough confirming datapoints at high altitude to show that the curve is solid -- that confirms that gravitational challenges can result in serious injury or death, and the chance of such rises dramatically as altitude increases. Indeed, there's more than enough datapoints that we can discount such possible external factors as sudden gravity surges, enemy fire, and beds of spikes and thus we can assume a solid control in place.

Second: There have been an amazing number of parachute jumps in the world. Hell, we used to drop entire divisions via parachute. These jumps have happened in all sorts of conditions -- dark, light, high altitude, low altitude, auto and manual opening. We've even had jumps with enemy fire and beds of spikes.

When one quite literally has a few hundred *thousand* datapoints, quite literally scattered around the globe (and sometimes, upon it), simple actuarial statistics is more than enough to prove (so far as one can) that if you must do something as fundamentally stupid as taking on a gravitational challenge, you really, really, really want at least one functional parachute.

Indeed, just a simple comparison of the number of suprahectometer gravitational challenges that a given individual has undergone shows a very, very, very strong correlation between presence of a functional parachute and chance of repeating the suprahectometer gravitational challenge.

When looking for subtle changes in small samples, one has to be very careful about trial randomization, controls, experimental and experimenter bias, etc. When you have hundreds of thousands of datapoints generated out of your control, such biases tend to disappear -- esp. when the researcher has nothing to do* with the selection of those datapoints, or even cleaning up afterwards.

You only need experiments when it is hard to parse the result out of the real world. Sometimes, you can just, uh, count the holes.

* And believe me, I avoid suprahectometer gravitational challenges. And how!
posted by eriko at 1:25 PM on October 18, 2010 [23 favorites]


...people who aren't knowingly quacks are using non-randomized studies? Do medical researchers just make up their experimental protocols as they go?

First, not all studies lend themselves to that kind of design. For example, suppose you have an imaging method (say, X-ray) that you commonly use to diagnose an injury (say, bone fracture of the ankle). What you'd like to know is, are there some criteria we can use to decide whether an X-ray is actually going to tell us anything or not. That is, can we rule out or rule in certain patients for an X-ray. (This is the Ottawa ankle rules study)

That kind of study, which is a pretty important kind, isn't done using randomization into experimental and control groups. It's done by prospectively gathering a bunch of data and then analyzing it.

Second, sometimes randomization is impractical because you can't get consent ahead of time (e.g., a study on the treatment of cardiac arrest in the ER).

But yeah, sadly, a lot of researchers kind of make up methods without good reason.
posted by jedicus at 1:29 PM on October 18, 2010 [1 favorite]
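
For the curious, the end product of that kind of study is a simple clinical decision rule. A rough sketch of the ankle portion of the Ottawa rules as Python (a paraphrase for illustration only; the function name and structure are mine, and the published rule also covers the midfoot and spells out the exact tenderness sites):

    def ankle_xray_indicated(malleolar_pain: bool,
                             lateral_malleolus_tenderness: bool,
                             medial_malleolus_tenderness: bool,
                             can_bear_weight_4_steps: bool) -> bool:
        # No malleolar-zone pain: the rule says no ankle X-ray series needed
        if not malleolar_pain:
            return False
        # Otherwise X-ray only if there is bone tenderness at either
        # malleolus, or the patient cannot bear weight for four steps
        return (lateral_malleolus_tenderness
                or medial_malleolus_tenderness
                or not can_bear_weight_4_steps)

The point of the prospective data gathering is to show that a rule this simple misses essentially no fractures while cutting out a large fraction of unnecessary X-rays.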


"Meta-studies are like kryptonite to them."

So, once when I was out canvassing, I had this guy tell me that gay marriage wasn't compatible with health and that homosexuality caused cancer. He knew this because, he said, he used quantum science to cure people of cancer with some elaborate vibrational hoo-de-doo.

I didn't usually get fighty on the canvass (generally tried to be as nice and polite as possible, so as not to give the gays a bad name), but I just flat out called bullshit.

"What kind of studies do you have that could possibly demonstrate that? You're ripping off desperate people and helping kill them, and you're a homophobe? Is it possible to be a worse person without eating babies?"

"If I showed you a study, would you believe it then?"

"Well, that depends: Where was it published and what was the methodology?"

"If I showed you a study, would you believe it then?"

"Well, that depends: Where was it published and what was the methodology?"

He kept repeating this, and then just insisted that I didn't know anything about quantum physics and that I wasn't willing to believe published data, and I just kept telling him that anyone could publish a fakety-ass paper in some unreviewed online "journal," and that the proof wasn't in having a citation, it was in what the article said.

I can't help but feel like meta-studies and people like Ioannidis are what real science and medicine look like, at least in terms of coming to grips with the truth. I get into pretty regular arguments with vegetarian and vegan friends over the supposed health benefits of our diets, and it's just something where I feel like there's a strong enough claim for animal-free or cruelty-free diets without having to trot out health claims that don't exist, and having sophisticated tools for shutting down any sort of biased claims is really important.
posted by klangklangston at 1:31 PM on October 18, 2010 [15 favorites]


But, teaching point. I have to point out that it is wrong.

Well, I didn't say there was no evidence, just no randomized, placebo-controlled trial. I get everything you said, and indeed my whole point was that sometimes we have to use other sources of evidence and that's often okay. The bit about the parachutes is just a satirical way of making that point.

Practical upshot: I don't think we disagree at all, and apologies if I wasn't clear.
posted by jedicus at 1:36 PM on October 18, 2010 [4 favorites]


For example, there's never been a randomized, placebo-controlled trial of the effectiveness of parachutes when jumping from high altitude.
True, but we've definitely accumulated a lot of carefully observed test data about the effectiveness of various fall-prevention devices. Parachutes are the result of a well-documented selection process, not simply an idea that someone came up with.
posted by verb at 1:37 PM on October 18, 2010


Thank you, eriko, for introducing me to the phrase "gravitational challenge."
posted by i_am_joe's_spleen at 1:38 PM on October 18, 2010 [1 favorite]


I'd much rather have a science-based medicine (flaws and all) than a Jenny McCarthy-based medicine any day.

Are those the only choices?
posted by not that girl at 1:43 PM on October 18, 2010 [1 favorite]


"Is it possible to be a worse person without eating babies?"

This has always been my first marker of civilization: does your society systemically discourage the eating of your own young?

Many days, I feel it is still a tenuous lesson held by a fraying string.
posted by yeloson at 1:49 PM on October 18, 2010 [2 favorites]


True, but we've definitely accumulated a lot of carefully observed test data about the effectiveness of various fall-prevention devices. Parachutes are the result of a well-documented selection process, not simply an idea that someone came up with.

Again, it's a satirical critique of the most extreme evidence-based medicine proponents, not a literal suggestion that jumping out of a plane without a parachute is a good idea because, hey, there's no study showing otherwise.
posted by jedicus at 1:53 PM on October 18, 2010


Ioannidis's PLoS article Why Most Published Research Findings Are False.
posted by nangar at 1:59 PM on October 18, 2010
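
The core of that PLoS article is a short positive-predictive-value calculation: PPV = (1 - β)R / (R - βR + α), where α is the significance threshold, β the type II error rate, and R the pre-study odds that a tested relationship is true. A minimal sketch in Python (the example inputs are illustrative assumptions, not numbers from the paper):

    def ppv(alpha, beta, R):
        # Ioannidis (2005): probability a 'significant' finding is true
        return (1 - beta) * R / (R - beta * R + alpha)

    # Assumed inputs: alpha = 0.05, 80% power, 1 in 10 tested hypotheses true
    print(round(ppv(alpha=0.05, beta=0.20, R=0.10), 2))  # 0.62

Under those assumptions roughly 38 percent of "positive" findings are false, and lower power or a lower prior R drives that fraction higher, which is the paper's point.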


Well, I didn't say there was no evidence, just no randomized, placebo-controlled trial. I get everything you said, and indeed my whole point was that sometimes we have to use other sources of evidence and that's often okay.

Actually, had the inventor of the parachute been a scientist, he could have done a randomized trial, starting with a phase I study where the chute is used at lower altitudes and appropriate safety devices are in place, like a soft landing spot.

In truth, the only times when a medical or health intervention cannot be studied by randomized trials is when it involves exposure to toxic substances. You can't ethically randomize people to exposure to benzene to see if they are harmed. Wildly claiming that your therapy is so good it is unethical to randomize patients is just bloviation. If you can't prove it in a randomized trial, you ain't got the goods. Enormous harm has been done by such bloviating folks (e.g., CAST, which showed that a widely used therapy for arrhythmia suppression was killing people after physicians strongly resisted a randomized trial; another example is the recent finding that PSA screening is doing more harm than good).
posted by Mental Wimp at 2:04 PM on October 18, 2010


In truth, the only times when a medical or health intervention cannot be studied by randomized trials is when it involves exposure to toxic substances.

I'm not a medical researcher, so maybe I'm suffering from a failure of imagination or education, but why, for example, would a randomized trial be preferable to the methodology actually used in the Ottawa ankle rules study? I'm having a hard time imagining how such a study would even work in practice, much less lead to superior results.

Furthermore, outright exposure to a toxic substance is not the only kind of harm. You can't do a randomized, placebo-controlled study of a new AIDS drug, for example. It's got to be compared to other, effective therapies. To do otherwise would be unethical. (The first AIDS drug trial (for AZT) was placebo controlled, but the first treatment for a terminal condition is always unique that way.)
posted by jedicus at 2:20 PM on October 18, 2010


Science: It still works, bitches!
posted by fuq at 2:26 PM on October 18, 2010 [3 favorites]


why, for example, would a randomized trial be preferable to the methodology actually used in the Ottawa ankle rules study?

From your link: "Highly sensitive decision rules have been developed and will now be validated." The appropriate way to validate is to show that using the decision rules leads to superior outcomes compared to [whatever was done before]. That said, I suspect they won't use a randomized trial to validate clinical utility, but rather just repeat the original study and say, hey, look, I got the same answer, it must be beneficial!
posted by Mental Wimp at 2:37 PM on October 18, 2010


If this means I didn't need to stop smoking, then I'm gonna fuck somebody up.
posted by steambadger at 2:42 PM on October 18, 2010 [2 favorites]


You can't do a randomized, placebo-controlled study of a new AIDS drug, for example.

I noticed how you threw the "placebo-controlled" phrase in there, whereas I didn't.

Funny story about AIDS and placebo-controlled trials. I was one of the statisticians on a randomized, placebo-controlled trial of toxoplasmosis prophylaxis in HIV-infected patients back in the day. As you can see, it didn't turn out well for the non-placebo groups. This was another case where the physicians were sure we didn't need a placebo because the observational studies were so convincing. Of course, if there is already an effective therapy in place, a placebo control is unethical. But randomization, in my opinion, is the only ethical way to study therapies meant to provide benefit over and above whatever you have in place already. If there's no proof beyond strong opinion, then a placebo should be used.
posted by Mental Wimp at 2:47 PM on October 18, 2010 [1 favorite]


But randomization, in my opinion, is the only ethical way to study therapies meant to provide benefit over and above whatever you have in place already.

Well, again, what about studies like the Ottawa ankle rules study? How does randomization fit into that?
posted by jedicus at 2:56 PM on October 18, 2010


I wonder if you have read Parachute use to prevent death and major trauma related to gravitational challenge: systematic review of randomised controlled trials

That's what I linked to earlier in the thread.
posted by jedicus at 2:57 PM on October 18, 2010


Well, again, what about studies like the Ottawa ankle rules study? How does randomization fit into that?

See my answer here.
posted by Mental Wimp at 3:00 PM on October 18, 2010


See my answer here.

Whoops. Sorry, I missed that.

The appropriate way to validate is to show that using the decision rules leads to superior outcomes compared to [whatever was done before]. That said, I suspect they won't use a randomized trial to validate clinical utility

Okay, I can see using a randomized trial to validate utility, but what I don't see is how a randomized trial could be used to develop the rules in the first place.

That said, I suspect they won't use a randomized trial to validate clinical utility, but rather just repeat the original study and say, hey, look, I got the same answer, it must be beneficial!

They did a 12,777 patient (6288 control, 6489 intervention) before-and-after study. Not technically a randomized trial but essentially the same (unless you think people who injured their ankles circa 1993 are somehow different than people who injured their ankles circa 1994). I chose the Ottawa ankle rules studies for a reason: they're often held up as a model of a well-done emergency medicine study that led to a concrete, easy-to-apply result.
posted by jedicus at 3:16 PM on October 18, 2010


Practical upshot: I don't think we disagree at all, and apologies if I wasn't clear.

I have inadvertently trolled you, and I apologize. I wasn't mocking you, I was "mocking" the article you linked.
posted by eriko at 3:30 PM on October 18, 2010


(BMJ 2003; 327:1459)

One wonders where I got the phrase "Gravitational Challenge." :-)
posted by eriko at 3:31 PM on October 18, 2010


(oh, but suprahectometer is all mine.)
posted by eriko at 3:32 PM on October 18, 2010


What else does he cite? A lot of what is in the article (mammograms, SSRIs) is stuff that I, as a mere layperson, have always known to have conflicting and confusing data for various reasons, including differences among people (like their race, the likelihood they are compliant, general health, etc.). I’m aware Oprah and various morning shows for women have these wrong views, but I was not aware that people who, for instance, prescribe SSRIs think the jury is actually in.

Do I just rub elbows with the wrong MDs?
posted by Lesser Shrew at 3:42 PM on October 18, 2010


jedicus, I think I see what you're asking. I am really restricting my comments to evaluating procedures intended to cure or prevent disease. Obviously, development of those procedures must be done prior to evaluating their efficacy or effectiveness, and that can include collecting observational data, like in the study you link to. I have also done studies that simply collect data, where the intent is to develop a prediction equation. Oftentimes, laboratory and animal studies are necessary for such development.

The thing to keep in mind, though, is that in order to prove that the intervention improves patient outcomes, it is almost always necessary to do a randomized trial comparing the new method to whatever has been previously proven to work. From my own work, an example would be a predictive test using adenosine triphosphate injections to identify vaso-vagal syndrome patients who would benefit from a pacemaker. The initial studies to determine the cut-off for abnormal cardiac pause during the test had to be observational. But I can't do anything with that until I show that using that cut-off to identify patients in that way leads to better outcomes. Otherwise, I may just be wasting my money or, worse, I may be putting patients at risk for no benefit.

The next step was a small trial randomizing subjects to either a pacemaker or a control to see if fewer events occurred in a single center. That was promising, but the placebo effect was not addressed, so currently we are analyzing a multi-center randomized trial where all test-positive subjects were implanted and half were randomly chosen to not have the pacemaker turned on for a period of time. If the number of events in the pacemaker-on group is significantly lower than for those with it off, then we have something. No guess-work, no over-reliance on theory, just data.
posted by Mental Wimp at 4:45 PM on October 18, 2010


I am really restricting my comments to evaluating procedures intended to cure or prevent disease. Obviously, development of those procedures must be done prior to evaluating their efficacy or effectiveness, and that can include collecting observational data

That's a fair distinction to make.

I will add, though, that although evidence should be used when it's available, it will be a very long time indeed before we have solid evidence for what to do in every circumstance. Between rare conditions, new diseases, and the ever-changing treatment and diagnostic landscape, sometimes doctors have to rely on clinical judgment.
posted by jedicus at 5:15 PM on October 18, 2010


Wait a minute... 90% wrong?! That's extremely wrong. How is science superior to intuition or guessing if it's 90% wrong even today? Should I keep going to doctors? It sounds like I'd have better luck with a damn placebo, for a fraction of the price. And what does that say about safety studies? Does that mean that 90% of the time when the FDA says something is safe, it really isn't? Hell, do we actually know anything about the human body at all?

That number sounds impossible to believe.
posted by Xezlec at 5:37 PM on October 18, 2010


He charges that as much as 90 percent of the published medical information that doctors rely on is flawed.

Well, yeah but, umm, WHICH 90 percent?
posted by ZenMasterThis at 5:41 PM on October 18, 2010


And what does that say about safety studies? Does that mean that 90% of the time when the FDA says something is safe, it really isn't?

Generally speaking, FDA approval requires a randomized trial (for Phase 2) and a large randomized trial (for Phase 3). This study suggests only 10% of large randomized studies are incorrect. Since FDA approval requires two good studies, presumably the 'failure' rate is less than 10%.
posted by jedicus at 5:43 PM on October 18, 2010
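
That last step leans on an independence assumption worth making explicit: if each of two required trials is wrong 10% of the time and their errors are independent, both are wrong only 1% of the time. A minimal sketch (the independence assumption is the load-bearing part; shared biases across trials, like a common sponsor or the same surrogate endpoints, would correlate the errors):

    p_wrong = 0.10               # failure rate for a large randomized trial, per the article
    p_both_wrong = p_wrong ** 2  # assumes the two trials err independently
    print(p_both_wrong)          # 0.01, i.e. about 1%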


He charges that as much as 90 percent of the published medical information that doctors rely on is flawed.

That's not a fair way to look at it. Competent doctors are going to put a lot more emphasis on randomized trials and especially large, randomized trials, which this study suggests are much more likely to be done properly.

I'll bet that a fair number of the flawed, low-quality studies are largely ignored.
posted by jedicus at 5:45 PM on October 18, 2010


Well. That explains why I still feel like shit! Seriously though, that's a horrible generalization in the FPP. Gall stones run in my family - I had to have my gall bladder out. Good thing I didn't get Dr. Nick.

No science is perfect. And doctors are allowed, unfortunately, to get it wrong even when making their best faith effort. Reminding me that nobody knows everything isn't going to keep me away from the doctor, especially if I'm oh, say, bleeding profusely from a ballpoint pen to the neck or something.
posted by PuppyCat at 6:20 PM on October 18, 2010


Competent doctors are going to put a lot more emphasis on randomized trials and especially large, randomized trials, which this study suggests are much more likely to be done properly.

The article explicitly claims otherwise. Did you see this?
Still, Ioannidis anticipated that the community might shrug off his findings: sure, a lot of dubious research makes it into journals, but we researchers and physicians know to ignore it and focus on the good stuff, so what’s the big deal? The other paper headed off that claim. He zoomed in on 49 of the most highly regarded research findings in medicine over the previous 13 years, as judged by the science community’s two standard measures: the papers had appeared in the journals most widely cited in research articles, and the 49 articles themselves were the most widely cited articles in these journals. These were articles that helped lead to the widespread popularity of treatments such as the use of hormone-replacement therapy for menopausal women, vitamin E to reduce the risk of heart disease, coronary stents to ward off heart attacks, and daily low-dose aspirin to control blood pressure and prevent heart attacks and strokes. Ioannidis was putting his contentions to the test not against run-of-the-mill research, or even merely well-accepted research, but against the absolute tip of the research pyramid. Of the 49 articles, 45 claimed to have uncovered effective interventions. Thirty-four of these claims had been retested, and 14 of these, or 41 percent, had been convincingly shown to be wrong or significantly exaggerated.
posted by Xezlec at 7:38 PM on October 18, 2010


41% is not 90%, and "exaggerated" doesn't necessarily mean "ineffective." And Ioannidis is a fan (from what I can tell) of the Cochrane Collaboration, which does highly rigorous meta-analyses of the research to develop evidence-based conclusions about what works and what doesn't, using not just one RCT, but many. Even if 41% of the trials *are* wrong, Cochrane reviews should eliminate most of the bad ones for poor methodology before inclusion and sum the rest to get a conclusion that is as right as is possible in this flawed human world.

So don't rule out evidence-based medicine just yet. It's extremely annoying that that whole article failed to even *mention* Cochrane reviews. Of course, we hate them here in the U.S. because they're used by evil agents of socialized medicine like the UK's NICE ;-) -- but this is no reason to claim that there's no evidence base for medicine that is real science.
posted by Maias at 7:55 PM on October 18, 2010 [2 favorites]
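
The "summing" described above is usually an inverse-variance pooled estimate: each trial's effect is weighted by the reciprocal of its variance, so large, precise trials dominate. A minimal fixed-effect sketch in Python (the effect sizes and standard errors are invented for illustration; a real Cochrane review also grades study quality and checks heterogeneity before pooling):

    import math

    # Hypothetical per-trial effect estimates and standard errors
    effects = [0.30, 0.10, 0.25, -0.05]
    std_errs = [0.10, 0.05, 0.15, 0.20]

    # Fixed-effect (inverse-variance) pooling: weight_i = 1 / se_i^2
    weights = [1 / se ** 2 for se in std_errs]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))

    print(f"pooled effect = {pooled:.2f} +/- {1.96 * pooled_se:.2f} (95% CI)")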


the papers had appeared in the journals most widely cited in research articles, and the 49 articles themselves were the most widely cited articles in these journals.

That's just it though: those aren't actually measures of what informs doctors' decisions. They're secondary markers, which is one of the things Ioannidis would consider an indicator of a flawed study (and I would agree with him). A paper being cited by other researchers doesn't necessarily mean that practicing doctors follow its conclusions. Heck, it doesn't even mean the citations were positive. No doubt at least some of the citations came from the studies that went on to show that the original studies were wrong or significantly exaggerated.

A much better study would take the list of flawed studies, take a random sample from it, and then test whether doctors actually followed the recommendations of the flawed studies. But that would be a much harder study to do.
posted by jedicus at 8:09 PM on October 18, 2010 [1 favorite]


Oh boy oh boy, just wait until the anti-vacc and homeopathic people get hold of this article!

"TEHY JUST PROVED TAHT MEDICINE IS 90%% FALSE!!! THE STUDDIES THAT SUPORT MEDICINE ARE LIES, AND ITS ONLY THHE COROPARATE BACKKING FROM THE DRUG COMPANIES THAT LET'S THE MEDDICAL ESTABLISHMINT KEEP DECIEVING TEH PULBIC!!!!!"

And then we get millions of people refusing vaccinations, herd immunity collapses completely, and we get a nice wave of plagues, for which the medical establishment and science is blamed.

Good times.
posted by happyroach at 8:48 PM on October 18, 2010 [2 favorites]


I've been seeing neurologists regularly for the past, oh, decade and if there's one thing I've learned from my experience as a patient it's how much about the human brain we don't know. If my medication is based entirely on trial and error and no one knows why specific anti-convulsants work - or why some work for some patients but not others with the exact same disease - it doesn't surprise me at all to hear that medicine in general isn't perfect.

That doesn't mean it's not good or that we shouldn't be using it as a tool to improve our health, but it's not perfect and absolutely should not be regarded as flawless. I think that's the main point that the article is trying to get across: not that we should abandon modern medicine, but that we shouldn't be placing blind faith into it.
posted by sonika at 6:07 AM on October 19, 2010


kalessin wrote "medication and treatment of being fat"

Just because there is no pill to fix it doesn't mean medical science isn't good at understanding the issues. There are a lot of treatments available for obesity but the ones that work the best (controlled diet, increased physical activity) are not what the public wants. People want a surgery or a pill and telling them otherwise makes them unhappy, because work is hard and losing weight is hard work. The only really effective therapy for obesity is prevention. Once you become obese, it is nearly impossible to lose the weight permanently. There are also complications inherent in any obesity study, because in animal trials diet is 100% controlled by the researchers, but human studies are messy because humans don't stay compliant with study guidelines.

Everyone who is obese is likely obese for slightly different reasons than everyone else. Any good treatment for obesity is more likely to be a personally-tailored plan rather than the one-size-fits-all approach we can take with something like a broken bone.

If there was a simple solution to obesity, I wouldn't have a job right now.
posted by caution live frogs at 7:01 AM on October 19, 2010


Great post and discussion here. When I saw this last night I wanted to compose a detailed reply, but by the time I read the article and discussion most of what I wanted to say had already been said, especially by jedicus. I just want to add that many of the erroneous results are the result of data-snooping; this is a particular problem with the sorts of large population-based studies that often get a lot of media attention. Another problem is that many people who look at medical research do not understand Bayes' Theorem and the fact that it often leads to counter-intuitive results; the breast cancer example in my link is exactly the sort of thing I am thinking of.
posted by TedW at 8:34 AM on October 19, 2010 [1 favorite]
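
The counter-intuitive Bayes result mentioned above is easy to reproduce: for a rare condition, even a fairly accurate screening test produces mostly false positives. A minimal sketch with illustrative round numbers (assumed here for the example, not taken from TedW's link):

    # Assumed screening numbers: 1% prevalence, 80% sensitivity, 10% false-positive rate
    prevalence = 0.01
    sensitivity = 0.80
    false_pos = 0.10

    # Bayes' theorem: P(disease | positive) = P(positive | disease) * P(disease) / P(positive)
    p_positive = sensitivity * prevalence + false_pos * (1 - prevalence)
    print(f"{sensitivity * prevalence / p_positive:.1%}")  # ~7.5%: most positives are false alarms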


it will be a very long time indeed before we have solid evidence for what to do in every circumstance.

Yes, especially when the "system" is set up to much more easily spend billions on unproven therapies than millions to do a well designed randomized trial to see if it works. See Xezlec's comment for examples.
posted by Mental Wimp at 8:37 AM on October 19, 2010


Ok, enough jibber jabber and outrage. Please just post a link to the studies that are correct, thank you.


Oh, and also KlangKlangston: Is it possible to be a worse person without eating babies?

This is my new favorite expression.
posted by mecran01 at 7:03 AM on October 23, 2010 [1 favorite]

