I could have told you that
October 25, 2019 8:06 AM

Predict science to improve science "Last year, a huge group of researchers collaborated to try to replicate the results of some very famous social science research. They determined that only 62% of the studies found similar results when they were repeated. But the researchers found something else intriguing: other scientists were astonishingly good at guessing which of the results would replicate. Does that mean we can just ask scientists for their hunch on what research is robust? It's a lot more complicated than that, but predictions could have a useful role to play in science (paywalled, but see this), and new projects are springing up to make use of them."
posted by dhruva (21 comments total) 18 users marked this as a favorite
 
While my experience in this area is limited, it seems like this is painfully obvious. Once something is published, there's such a strong bias against retraction or amendment, and that process moves so slowly, that it is highly likely that people who work in the field would know which studies are crap. Bad methodology, bad statistics, massaging your results to get something interesting out of them, etc. are often pretty obvious. The real question is: why isn't this being caught in peer review?
posted by ssg at 8:18 AM on October 25, 2019 [2 favorites]


Because that's not what peer review is? Having been a scientist, I think the general public understands published scientific papers to be some sort of established truth when they're not: they're really part of a conversation. It's where you firm up and polish your research and arguments for publication so that other interested scientists can look them over and pick them apart. Peer review is mostly there so there aren't people submitting total, unedited crap and wasting people's time. It isn't the real conversation, the real picking apart of ideas...

Most papers don't end up being all that great or all that useful. But then you have some that do stand out and provide very solid advancement in human knowledge. But you only really know that in retrospect, after many people have had a chance to look over it, write their own papers, and reply. Bad papers rarely get retracted. Mostly, they get ignored.

"Peer review" is really only a quick check to make sure it's worth putting under people's noses so they can really dig into things and have a discussion. So the record of published scientific works isn't a record of the best knowledge we have, it's a record of the conversation leading to that knowledge.

So it is actually a very interesting question, how we know what we know, since the real record of scientific knowledge is not just the papers, but also the knowledge and opinions of them held by working scientists.
posted by Zalzidrax at 8:34 AM on October 25, 2019 [24 favorites]


Interesting result! Has anyone replicated it?
posted by agentofselection at 9:10 AM on October 25, 2019 [5 favorites]


I think the general public understands published scientific papers to be some sort of established truth when they're not: they're really part of a conversation.

I think this depends very much on the field and the journal and all kinds of other factors. When you get into fields where studies can have a real effect on people's lives (like changing how diseases are treated), then that's really not how things should be (and yet it very much can be how things are).
posted by ssg at 9:24 AM on October 25, 2019 [1 favorite]


The recent post about open math problems has some quotes along the lines of "Erdős says this is probably true, but probably hard to prove", and someone replies "Yeah, but you don't need Erdős to tell us that, it's pretty obvious (to me as a new assistant professor of math in 2011)"

Anyway, this idea is interesting, and maybe not that surprising, but I also wonder what it's good for. There's the idea of intellectual debt that was discussed here recently: the problems you get into when you start getting answers that you don't understand, and then using them to do things in the real world. Maybe you use equally opaque methods to get another answer that builds on the first. The ignorance accumulates, and because nobody understood why the answers seemed to work before, nobody can tell when they will stop working either. So all of a sudden stuff falls apart with no warning and people can die, etc.

Here we have a similar issue, except the neural networks are biological wetware rather than models in silico. You're getting answers without a detailed understanding of what's going on, corroborating evidence, theoretical basis, or any of the other things that generally go with good scientific results. And that's fine as a sport or casual pastime, but potentially very dangerous if you try to do stuff based on it, like treat a disease as ssg mentions. There's a bunch of stuff in my field that I'm pretty sure is true but can't easily prove, and I save that for chats over coffee, not for broadcasting to the world. Because even though my opinion is informed, it's still just an opinion, and the whole reason I do science is so that my claims have support outside of my own head.

Science is ultimately about understanding the world. Let everyone else play with fire they don't fully understand, but let's stick to science being science, and not a bunch of clever guesses.
posted by SaltySalticid at 9:37 AM on October 25, 2019


But, in another vein, what the Science article is talking about isn't really anything different from how I've experienced people doing science. It's just suggesting formalizing what (I thought) scientists were mostly already doing as standard operating procedure. You go to conferences, you talk to your peer group, you ask if this seems like a true fact, you discuss how you might get evidence to support it, etc. And then if the odds seem good, you go out and do the work. Nobody just says "hey here's a random thought, let's go spend a bunch of time and money researching it!" So maybe my doom and gloom above is misplaced, but I can't shake the feeling that turning this natural process into a "platform" for mass-collected predictions is going to erode some of our standards and decrease our mean level of understanding over time.
posted by SaltySalticid at 9:45 AM on October 25, 2019 [1 favorite]


From the paper:
For example, in this context, highly cited faculty performed no better than other faculty, and Ph.D. students did best.
Would anybody who's inside the system care to venture a guess as to why PhD students would be better predictors than professors?
posted by clawsoon at 9:45 AM on October 25, 2019 [4 favorites]


What are the chances that this study won't be replicated and thus break the irony meter?
posted by clawsoon at 9:46 AM on October 25, 2019 [3 favorites]


why PhD students would be better predictors than professors?
Cynically, they are almost as educated and much less encumbered by politics. Even on a subconscious level, it's hard to think poorly of work done by your buddies. Students don't know as much of the political lay of the land, lineages, camps, schools-of-thought, and whose egos they may tread upon.
posted by SaltySalticid at 9:48 AM on October 25, 2019 [10 favorites]


"why PhD students would be better predictors than professors?"
And maybe something to do with Arthur C. Clarke's remark about the differences that set in with Seniority?
posted by aleph at 9:58 AM on October 25, 2019 [1 favorite]


Zalzidrax: Having been a scientist, I think the general public understands published scientific papers to be some sort of established truth when they're not: they're really part of a conversation.

This is definitely a change of understanding that came as I got more familiar with scientists. The basis for knowledge in the sciences is different than in other groups, but the habit that scientists have of keeping criticisms of published work to those in-the-know resembles that of pretty much every group that relies on its public prestige. Preachers don't publicly criticize other preachers, doctors and teachers don't publicly criticize other doctors and teachers, CEOs don't publicly criticize other CEOs.

It became clearest to me when I emailed one scientist about another scientist's work, suggesting that there might be an interesting connection. The second scientist very politely said that the first scientist's work wasn't taken seriously by most scientists in the field. No word of that to any of the journalists who wrote about the first scientist's study, though - or at least none that I remember in the popular press articles about it.
posted by clawsoon at 10:00 AM on October 25, 2019 [4 favorites]


PhD students just spent a year setting up method A to see if they can use it for question B. No? Question A’? No? Replication for question A?... weak results? Hmmmmm.

The psychological reasons above explain why the full profs don’t remember this as vividly.
posted by clew at 10:02 AM on October 25, 2019 [1 favorite]


I predict that attempts to use vaguely trendy, pseudo-quantitative wisdom of crowds methods to improve reporting will be overhyped and end up underperforming.

I also predict that the observation that Ph.D. students did better than professors will not cleanly replicate* but will be remembered as fact in future MeFi threads.

That aside: I actually really like this! And if I have time to post again will say nice things later.


* To wit: It's a single relatively small study limited to economics departments and, if I'm reading it right, one that shows no "dose response": full professors are in between associate and assistant professors in their accuracy.
posted by mark k at 12:25 PM on October 25, 2019 [2 favorites]


The second scientist very politely said that the first scientist's work wasn't taken seriously by most scientists in the field. No word of that to any of the journalists who wrote about the first scientist's study, though - or at least none that I remember in the popular press articles about it.

This epitomizes my reason for looking profoundly askance at the "I Trust Science!!!" rhetoric that is so often adopted by earnest non-scientists. A scientific community is just another club of people who are nervous about hurting the other members' feelings by speaking truth out of season.
posted by Not A Thing at 1:14 PM on October 25, 2019 [4 favorites]


but the habit that scientists have of keeping criticisms of published work to those in-the-know

I'm not familiar with this habit. I see criticism being openly shared in the exact same types of venues where the original work was shared all of the time. The "conversation" referred to above is quite frequently a debate. It's true that bad work frequently gets ignored, but it's not because scientists are being cagey about it: It's because they have better things to criticize.

No word of that to any of the journalists who wrote about the first scientist's study, though

The reason that this usually happens is that the journalist is under pressure to write an article about eXciTINg reSUlTs quickly. They rely too much on the original researcher or the university press release and don't do much checking. In my field, we usually find out about the latest bullshit to hit the popular science press just like everyone else. And then we complain that they couldn't find someone else to talk to, who could have provided a much-needed sanity check.

Some of us participate in public forums (like this one!), where we respond to public discussions about the research, but our voices get drowned out. Some of us do get contacted, and give our opinion. You don't want to be too rude or personal, but I have certainly seen other research described as incredible, questionable, speculative, etc. Many of us have blogs or social media where we share our opinions, too!

Basically, scientists have very little control over popular science reporting.
posted by Kutsuwamushi at 1:18 PM on October 25, 2019 [7 favorites]


Kutsuwamushi: I see criticism being openly shared in the exact same types of venues where the original work was shared all of the time.

To me that means "in scientific journals", but I know just barely enough to suspect that you mean something else.
posted by clawsoon at 3:35 PM on October 25, 2019 [1 favorite]


why PhD students would be better predictors than professors?

Maybe they were more likely to have the time and inclination to read everything carefully and put effort into participating?
posted by straight at 4:11 PM on October 25, 2019 [3 favorites]


What are the chances that this study won't be replicated and thus break the irony meter?

Scientists predict with a high degree of certainty that this study's results won't replicate.
posted by Just this guy, y'know at 4:16 PM on October 25, 2019 [2 favorites]


clawsoon, it seems like you believed the second scientist rather than the first -- why?
posted by clew at 5:00 PM on October 25, 2019


clew: clawsoon, it seems like you believed the second scientist rather than the first -- why?

I'm on the fence about this particular one, as it happens. The second scientist dismissed the first one because, "He fails to convince anyone that this kind of event was a consistent enough selective pressure to warrant an adaptation," but I had just been watching a bunch of boxing while reading the first scientist's work on emergency room admissions and it seemed like a plausible enough selective pressure to me.
posted by clawsoon at 5:23 PM on October 25, 2019


To me that means "in scientific journals", but I know just barely enough to suspect that you mean something else.

Yes, it mostly means in scientific journals, but there are other avenues of publication too. Conferences, for example. I've also seen criticisms of research in popularizations written for a general audience, but since the purpose of popularizations is usually to educate the audience about how something works rather than to debunk ideas they might never have even heard of, it doesn't come up as often.

If you have access to the original work published in these venues, you have access to criticism of it. I mean, yes, you probably don't have access to our conversations with each other unless they're on public forums, which means that you don't have access to every criticism. But that's not because we're hiding our criticisms from the public to make ourselves look good. It's because ... people in the same line of work talk to each other about their work and don't write up reports on everything they say to each other?

It seems to me that you're taking how science is communicated in the popular press as the result of scientists being cagey about the fact that sometimes bad research is done. That hurts, because all of the scientists I know have problems with how the popular press reports on science and would love it if journalists presented more than a one-sided view of research. There are all sorts of reasons they don't, but it's not because we're ... just ... circling the wagons, or something. For the most part it's because we're not the ones writing the articles and we're not asked.
posted by Kutsuwamushi at 1:23 PM on October 27, 2019 [4 favorites]



