a huge influx of low-quality meta-analyses
December 12, 2022 9:52 PM

Meta-Analyses Are The Gold Standard For Evidence, But What’s The Value Of Gold These Days? “Meta-analyses are a top-tier form of scientific evidence, assuming they’re conducted and reviewed skillfully. New research casts major doubts on that assumption in the exercise science and sports nutrition literature….The presently reviewed paper assessed the 20 most-cited meta-analyses in the field of strength and conditioning. After critically appraising these meta-analyses, the researchers found that 85% of them contained at least one statistical error.”

Inspired by the recent article in Sports Medicine (With Great Power Comes Great Responsibility: Common Errors in Meta-Analyses and Meta-Regressions in Strength & Conditioning Research), Dr. Eric Trexler (pro natural bodybuilder and sports nutrition researcher) takes a look at the hierarchy of evidence and the reliability of meta-analyses in the field of sports science in MASS Research Review. Content includes an explanation of the most common types of statistical errors seen in meta-analyses and tips for laypeople trying to spot such errors.

For those who prefer to consume this content audiovisually, it is discussed in depth in the most recent episode of the Stronger By Science podcast.
posted by bq (9 comments total) 20 users marked this as a favorite
 
Thanks, bq, that was gratifyingly easy to read, and I'll add it to the syllabus of Crap Detecting 201.

When I was in [eco evo] grad school, we had a weekly journal club discussing important papers in prestigious journals. It was a rare week when the collective didn't find an error that should have been caught by the review / editorial process. But that was because we took the exercise seriously and collectively devoted more time to the review process than the [busy, senior, unpaid] referees. One of Trexler's many good points is that some errors are more egregious than others. So some of our finds could well have been performative/competitive nit-picking.

Trisha Greenhalgh's How to Read a Paper: The Basics of Evidence-based Medicine and Healthcare advises starting any paper at the Materials & Methods section. Any serious failing there saves you from having to read the rest, no matter how persuasive the title or how significant the results. Trouble is that most of the professional biologists I worked with were poorly trained in stats and tended to let that part of M&M go through on the nod.

One reason we need meta-analysis is that too many research studies, and the resultant papers, are under-powered because a finite amount of money is spread too thinly. 20 years ago we had a brill idea to measure the comparative immunological response of chickens to viral, bacterial and protozoan challenge. But when we costed it out against a Dept of Agriculture grant ceiling of IR£500,000 = €635K over 3 years, our vision was over-ambitious and we spent the next three years on Campylobacter infections only. Even that cut-down study was inconclusive. What we "should" have done was withdraw and give all "our" money to the bovine mastitis group down the road . . . or vice versa.
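[To make the under-powered point concrete, here is a minimal sketch, using only the Python standard library, of the usual normal-approximation power calculation for a two-sided two-sample comparison. The effect size and group sizes are made-up illustrative numbers, not figures from the chicken study.]

```python
from math import sqrt
from statistics import NormalDist

def two_sample_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test for a
    standardized effect size d with n_per_group subjects per arm.
    Power = P(reject H0 | true effect d), via the normal approximation."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)   # critical value, e.g. 1.96
    noncentrality = d * sqrt(n_per_group / 2)      # expected z under the alternative
    return 1 - NormalDist().cdf(z_crit - noncentrality)

# A "medium" effect (d = 0.5) with 20 animals per group yields
# power of only ~35% -- a classic under-powered design.
print(round(two_sample_power(0.5, 20), 2))
print(round(two_sample_power(0.5, 64), 2))  # ~4x the n gets power near 80%
```

This is exactly the gap meta-analysis is meant to close: many small studies, each with low power on its own, pooled into one estimate with adequate precision.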
posted by BobTheScientist at 1:22 AM on December 13, 2022 [11 favorites]


Trexler and Stronger by Science previously
posted by kaelynski at 5:28 AM on December 13, 2022


What’s The Value Of Gold These Days?

The old saw is, one ounce of gold for one good suit.

This depends, of course, on your definition of a good suit. As with all good things, YMMV, but for ca. $US 1,750, you can get a good suit, if not a full bore Savile Row. (Interesting video on current state of bespoke suit making.)

Hey, he asked.
posted by BWA at 5:56 AM on December 13, 2022 [1 favorite]


GIGO + Sturgeon's law provide strong intuitive backing for this finding.
posted by FeralIntellectual at 6:05 AM on December 13, 2022


A number of years ago, conducting a meta-analysis (MA) became much easier with better search tools and specialized MA data managers. The result was an absolute flood of crap from "researchers" who realized that doing an MA required no resources other than menial student labor sifting through abstracts. If there are > 10 input papers, reviewers and readers aren't going to go through all of them to find the errors those students make. The low-tier journals these authors target don't have adequate statistical reviewers on staff to detect methodological and interpretive errors. Multiple times I've had to correct MA papers sent to me for review because they mischaracterized my own studies.
posted by a robot made out of meat at 8:51 AM on December 13, 2022 [2 favorites]


I wonder how much of this is still connected to rampant p-fishing and data dredging.

Remember, it wasn't that long ago that sport science had its own statistical dust-up over widely accepted analytical methods that turned out to be garbage.
posted by yellowcandy at 9:19 AM on December 13, 2022


Epidemiologist here. This is an interesting and important look at meta-analyses, and I want to make two points:
1) The gold standard in evidence is not the meta-analysis but the randomized controlled trial, OR a meta-analysis of randomized controlled trials. The quality of evidence from an MA is only as good as the quality of evidence of the studies included. If the studies are of lower-quality evidence, e.g. cohort studies or case-control studies, then putting them all together doesn't automatically make them better; it just increases the sample size and hence the power to draw conclusions.
2) There is a consensus statement, PRISMA (https://www.prisma-statement.org/), a 27-item evidence-based checklist of reporting elements for systematic reviews and meta-analyses, last updated in 2020. Journals, editors, and reviewers should require authors to submit this checklist with the manuscript, and many high-quality journals do. The checklist contains a section on 'risk of bias' in included studies, which, if the authors had worked through it diligently, might have caught some of the errors related to incorrectly reported standard errors and other issues such as correlation from individual studies. It surprises me that the authors of this piece didn't reference the PRISMA checklist, because it is pretty ubiquitous at this point, but also that they didn't really get into the individual-study sources of bias as a component of the evidence quality of an MA. Still, there's no accounting for poor statistical analysis; this comes down to the quality and depth of review by journals. Peer review is deeply, deeply flawed, and this is a pretty good example of it. It's also a good example of how citations as a metric do not equate with quality.
2a) MA protocols should also be registered in a public database PRIOR to any data collection/analysis (e.g. PROSPERO), and editors and journals should require this of authors. The MA protocol should be reviewed along with the manuscript to make sure that the authors adhered to their stated a priori outcomes and objectives and did not go "fishing" for results, and/or that any incidental findings during the analysis are presented as post hoc analyses. This is well adhered to for randomized controlled trials, but less so for MA. ch-ch-ch-changes....
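[Point 1 above can be illustrated with a minimal sketch of standard inverse-variance (fixed-effect) pooling, the core arithmetic of a meta-analysis. The study effects and standard errors below are invented for illustration; the point is that pooling shrinks the standard error without improving the quality of the inputs.]

```python
from math import sqrt

def pool_fixed_effect(effects, std_errors):
    """Inverse-variance (fixed-effect) pooling: each study is weighted
    by 1/SE^2, so precise studies count more. Returns the pooled
    estimate and its standard error, which is always smaller than
    any single study's SE -- garbage in still means garbage out."""
    weights = [1 / se**2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = sqrt(1 / sum(weights))
    return pooled, pooled_se

effects = [0.30, 0.55, 0.42]   # hypothetical per-study effect sizes
ses = [0.20, 0.25, 0.15]       # hypothetical per-study standard errors
est, se = pool_fixed_effect(effects, ses)
print(round(est, 3), round(se, 3))
```

Note the classic error mentioned above: feeding a study's standard deviation in where its standard error belongs inflates the apparent uncertainty of that study and silently distorts the weights.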
posted by rene_billingsworth at 10:32 AM on December 13, 2022 [10 favorites]


And just another sub-point: the question of this article should be rephrased from "how flawed are good MAs?" to "how flawed are popular MAs?"
posted by rene_billingsworth at 10:40 AM on December 13, 2022


electrobeanplated
posted by snuffleupagus at 11:15 AM on December 13, 2022




This thread has been archived and is closed to new comments