Science is hollow, and he's got a pin
December 10, 2010 3:37 PM   Subscribe

Has something gone wrong with the scientific method? That's the big question Jonah Lehrer (pr-e-vi-ous-ly) raises in his new New Yorker piece on the Decline Effect. (Sub required; check the summary here, or pdf here.) Dave Bry at The Awl uses Lehrer's revelations to start an extended riff on how much science one really needs; Lehrer himself goes into more detail about why he wrote the article in a lengthy blog post at Wired, and as for me --- well, I think I'm just going to spend a few days being a little less certain that we can prove very much about how a few extra X chromosomes affect corporate bottom lines, whether you can tell dick about the nature of liberalism or conservatism by where and how people glance at things, or even what the hell is going on with that damn burger.
posted by Diablevert (48 comments total) 31 users marked this as a favorite
All the easy questions have been answered... the problem is, some of the answers are wrong.
posted by oneswellfoop at 3:40 PM on December 10, 2010

This is just the start of James Blish's Cities in Flight.
posted by sciurus at 3:57 PM on December 10, 2010 [1 favorite]

One thing he didn't really address is that methods get better over time, which means it's harder to find effects as your analysis improves, I would think.

And, yet again, we get a "science is all wrong" article that doesn't address the attempts to address the problem, like, for example, Cochrane reviews—as I've written previously.
posted by Maias at 4:03 PM on December 10, 2010 [5 favorites]

I think this supports, at least in some areas, the need of a place to publish null results. You may have studies that don't find support for your hypothesis, but in the beginning there's very little chance of those getting published. When someone does a similar study (not knowing there were earlier ones) that does find support, it is publishable. After that point, other research showing no effects might be more likely to find publication outlets because they run counter to the finding that brought all the buzz. This isn't to say that it always happens this way, but I could imagine that some changes might result from that sequence of events.
posted by bizzyb at 4:07 PM on December 10, 2010 [3 favorites]

In short, nothing has gone wrong with the scientific method. But people go wrong when they think that, at the end of some pre-defined experiment, the conclusion has been reached and the method has run its course.

People must act on the dicta of science within a limited amount of time. Lehrer discusses, among other things, Ioannidis' results that Maias alludes to, which focused on major, double-blind studies. It's all well and good to take the position that the mills of Newton grind slowly but they grind exceeding small; in the meantime, people are out there being prescribed the drugs and the therapies and basing social policies on current science as it is presented to them by scientists. In that sense, the idea that our biases have blinded us to the flaws in the way research is presented and selected for public consumption --- the ways in which new truths enter into the discourse --- is, I think, legitimately troubling.
posted by Diablevert at 4:14 PM on December 10, 2010 [1 favorite]

Look, you have the hard sciences, basically physics and math, and then you have an enormous amount of research that has adopted the nomenclature of the hard sciences, without actually adopting what makes the nomenclature work.

Then add to that the desire to have results, any results, add to that the fact that science reporting functions as part of the entertainment industry, and add to that an underlying scientism, that is, a belief that science can help people to make meaningful life choices, and you get a soggy mess. But all human knowledge is like that, with the possible exception of the hard sciences.
posted by eeeeeez at 4:18 PM on December 10, 2010 [8 favorites]

This is a pretty well-known and unsurprising effect in many fields. For instance, consider the electrical conductivity of cell membranes. Original techniques involved using a needle-like electrode which tore tiny holes, increasing the conductance to dissolved, charge-carrying ions. If you chart the experimentally obtained values of this conductance over time, they smoothly descend to a "converged" value in recent times, where techniques have gotten much better at isolating and measuring conductivity.

But really the problem goes deeper. To use the membrane conductivity example again, note that the conductivity of the membrane never changed. Nature itself is not known to change in such ways. Instead, what changed were the conditions of the experiment and, vitally, what we defined as conductance. Our models only contained an idea of resistance via charged particles crossing the membrane, not via them slipping through a hole; our experimental techniques lacked the ability to measure anything that would account for that deficiency of the model.

It's important to remember that all scientific papers are opinion pieces. Hopefully they're the best damn researched opinion pieces ever, but they're also designed to take risks and make big claims. Though I haven't read the source material on this Conductance Effect, I would bet that every author who published a conductance value using the needle-like electrodes included a paragraph saying that there was likely a lot of leakage and that the published value was inaccurate because of it. At the end of the day, however, we condense the knowledge we obtain and it's easy to lose those warnings.

So that all said, I expect that this "weakening of the scientific method" is better attributed to answers that probably should have been interpreted as "x is estimated as y, but is probably a lot smaller unless you can really ignore the MYRIAD of biochemistry that we just ignored" were interpreted instead as just "x = y".

Knowing accurately when the things you're saying are in the realm of the first statement is important and difficult. Really difficult. I'm willing to say that the fact that accurate, meaningful statistics is still a research topic directly means that the scientific method as we know it is still evolving, growing, improving. Then, even if you know when you're making that kind of statement, conveying it in a world of journalism and policy...

I think it's fair to expect that in most scientific journalism where it'd be meaningful to say something like "the size of the measured effect is smaller than they say", it probably is.
posted by tel at 4:30 PM on December 10, 2010 [12 favorites]

There are hundreds of thousands of studies carried out each year. Even if every one of them were done correctly, some would statistically have unusual, anomalous results. Unusual results are, by and large, more interesting than usual results.

So once something is published and attracts interest, people need to go back and replicate the study. Is the problem that this isn't happening? That people assume once a study is published no one needs to do that study again?
posted by justkevin at 4:40 PM on December 10, 2010

people are out there being prescribed the drugs and the therapies and basing social policies on current science as it is presented to them by scientists.

Also basing therapeutic decisions on what their neighbors say, what Jenny McCarthy says, what their homeopathic reflexologist says, and basing social policies on what politicians say. These information sources are far from ideal, and mostly based on anecdotes. I'll take science with a probability of error over any of them.
posted by kuujjuarapik at 4:52 PM on December 10, 2010 [5 favorites]

Look, you have the hard sciences, basically physics and math, and then you have an enormous amount of research that has adopted the nomenclature of the hard sciences

This is not how science works. Biology has not "adopted the nomenclature" of physics or math. Chemistry has not "adopted the nomenclature" of physics or math.
posted by Sidhedevil at 4:58 PM on December 10, 2010 [8 favorites]

Question #1: Does this mean I don’t have to believe in climate change?

Me: I’m afraid not.

posted by Artw at 4:59 PM on December 10, 2010

Sorry, kuuj, tough to quote properly on my phone - but in response I would say, I don't think the idea that there are dumber-assed things out there to base a decision on than science meaningfully alters the point at issue, which is, how flawed is the science?
posted by Diablevert at 5:00 PM on December 10, 2010

I thought this was an interesting article, but if it were an article about how experimental findings are often replicated successfully over long periods of time, do you think it would get published in The New Yorker?
posted by snofoam at 5:02 PM on December 10, 2010 [7 favorites]

I'm reading Cities in Flight for the first time this week.

So, if this leads to antigravity and eternal lifespans, hoorah.
posted by BungaDunga at 5:07 PM on December 10, 2010

This post by Cosma Shalizi touches upon this. (In fact, judging from the bit of the New Yorker piece I can see, it appears to be a very similar point, just couched in probabilistic terms.)

The answer is pretty clearly that p-values aren't enough; we need theorists. As has been said a few times before, 1 out of 20 tests of a truly null effect will come up statistically significant at the .05 level by chance. Moreover, vastly cheaper computers and software mean that calculating and re-calculating slightly different variants of a regression until the magic value of .05 is reached is much more feasible than it was twenty or thirty years ago. (I of course was never guilty of anything resembling this in either undergrad or grad school, cough cough.)

So rather than taking datasets in varying states of pristineness and dropping them into your favorite stats package until something significant falls out, we need to be thinking more carefully about what relationships are likely a priori, and what different results actually imply. p<.05 should be the last step, not the first one.
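To put numbers on the re-running-variants problem, here's a quick simulation (mine, a toy sketch, not from the linked post): twenty null-effect analyses per "paper", counting how often at least one clears p < .05 by luck alone:

```python
import random
from statistics import NormalDist

def null_pvalue(rng, n=30):
    """Two-sided z-test p-value for the mean of n draws from a truly
    null N(0, 1): any "effect" it finds is pure sampling noise."""
    sample_mean = sum(rng.gauss(0, 1) for _ in range(n)) / n
    z = sample_mean * n ** 0.5
    return 2 * (1 - NormalDist().cdf(abs(z)))

rng = random.Random(42)
papers = 1000
# Twenty slightly different variants of the analysis per "paper":
lucky = sum(any(null_pvalue(rng) < 0.05 for _ in range(20))
            for _ in range(papers))
print(lucky / papers)   # about 1 - 0.95**20, i.e. roughly 0.64
```

Roughly two papers in three find "something", even though there is nothing there to find.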
posted by PMdixon at 5:09 PM on December 10, 2010 [4 favorites]

I am not a scientist. I have no background whatsoever in science. But having read this New Yorker piece a few days ago, I don't see any comments here that address what is being said. To say that science always needs to be redone, that you need lots of this or that for a study, etc. etc., is all fine and to the point. But that, it seems, does not really deal with the issue raised in the article.
posted by Postroad at 5:14 PM on December 10, 2010 [2 favorites]

Look, you have the hard sciences, basically physics and math, and then you have an enormous amount of research that has adopted the nomenclature of the hard sciences, without actually adopting what makes the nomenclature work.

You must live in an alternate universe where the following phenomena were never discovered:

Plate tectonics
The columnar organization of the cerebral cortex, especially visual cortex (mapped in exquisite detail at this point)
DNA transcription
Speciation and how it happens (allopatry vs. sympatry)
Neural plasticity--reorganization of neural maps, generation of neurons in the adult
Spatial location mapped in grid cells and place cells in the hippocampus (probably the most amazing finding to date in systems neuroscience)
Etc. etc. etc. etc. etc. fucking etc.
posted by IjonTichy at 5:15 PM on December 10, 2010 [5 favorites]

Diablevert, I'm a huge supporter of following and trusting most science, so I think you've got a great point.

How flawed is the science?

Here's the general structure of a valid scientific experiment in, say, pharmaceutical testing: 30 patients exhibiting symptoms of The Condition were selected from the local hospital in accordance with willingness to participate, ability to survive possible harm under any potential side effects, inclusion in some age group, gender (probably male), and finally convenience to the hospital that the Principal Investigator is associated with. Of the thirty, half were given a slightly modified treatment while the other half were given the usual treatment. Neither the doctors nor the patients are informed of which treatment is used (but it is kind of obvious to the doctors because it involves an entirely different procedure from the usual one).

Later analysis shows that some of the subjects given the new treatment recovered faster. Some performed just the same.

Recommendation: the patients who improved faster suggest that this study has great promise to deliver a more appropriate treatment for The Condition. Further funding is requested so that the study may be repeated with 50 patients this time. 50 was chosen because after this pilot study we believe that the effect we think we might have observed can be confirmed "significant" by the somewhat arbitrary criterion that, if we're right about the size of the effect and we were to do this study 100 times, only say 5 of those times would the study not see anything.


I'm being slightly facetious, but many of those details are not so farfetched. (1) Multihospital trials are VERY DIFFICULT to run due to information sharing, politics, and money. (2) The treatment and controls are pretty vague because there's a LOT of variability within the treatment of some number of patients in a hospital. (3) The subjects involved are both too specific (for convenience) and too varied (because it's impossible to avoid). (4) A lot of money changes hands at the behest of these studies, and researchers are always searching for more money to continue. This conflation of money and results means that the results are not solely motivated to be True (whatever that means). (5) My favorite: previous studies and results rarely take into account the myriad of prior information, because the statistics are too hard.

So how flawed is it? Pretty damn flawed. Miles and miles and miles above other options for validating drugs and perfectly capable of finding useful treatments... but still pretty damn flawed.
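For what it's worth, even the proposed 50-patient follow-up is close to a coin flip if the effect is only moderate. A quick simulation (all numbers assumed for illustration: two arms of 25 patients, unit outcome variance, a decent-sized true effect):

```python
import random
from statistics import NormalDist

def trial_power(effect=0.5, n=25, alpha=0.05, sims=2000, seed=7):
    """Fraction of simulated two-arm trials (n patients per arm, unit
    outcome variance) in which a z-test on the difference of means
    reaches p < alpha, given that the true effect really is `effect`."""
    rng = random.Random(seed)
    crit = NormalDist().inv_cdf(1 - alpha / 2)
    hits = 0
    for _ in range(sims):
        control = sum(rng.gauss(0, 1) for _ in range(n)) / n
        treated = sum(rng.gauss(effect, 1) for _ in range(n)) / n
        z = (treated - control) / (2 / n) ** 0.5
        hits += abs(z) > crit
    return hits / sims

print(trial_power())   # ~0.4: the trial misses a real, moderate
                       # effect more often than it finds it
```

So even done perfectly, the study as described will "fail" most of the time for a real drug, which is part of why single small trials are so unreliable in both directions.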
posted by tel at 5:24 PM on December 10, 2010 [1 favorite]

It's hard to really explain things from the ground up; it's far easier to write down some statistics.

Since a living has to be made, they get published ...
posted by vertriebskonzept at 5:30 PM on December 10, 2010 [1 favorite]

Thanks for posting this, very interesting. I just downloaded some of Palmer's and Ioannidis' work online. It seems to be open access or on faculty web sites:

Ioannidis -

Tatsioni -

Palmer -

Palmer -'00.ARES_Quasireplication.pdf

Also, there was an article on Ioannidis in the Atlantic recently.
posted by carter at 6:12 PM on December 10, 2010 [2 favorites]

Anyway IANAS but the scientific method seems to be relatively fine. It's the humans that are the problem. Although perhaps it is a mistake to think of the two as being separate - in other words you will always have fallible humans and equipment, therefore results will never be strictly replicable.

The sociologist Harry Collins has written about this in relation to tacit knowledge, e.g. in Harry M. Collins, "The TEA Set: Tacit Knowledge and Scientific Networks", Science Studies 4: 165-186, 1974.
posted by carter at 6:21 PM on December 10, 2010 [1 favorite]

I read this one when it was first published, and one of the possible causes given was that studies that don't produce interesting results tend not to get published. Pharmaceutical companies aren't interested in confirmations; they want new medicines to exploit. Thus there is pressure to produce findings.

Also, if there were some unknown flaw in the experiments that caused them to break down as they were repeated by more people, how would that look any different than this? Wouldn't it be better to assume that is the case than to imply some fracture in basic causation?

Finally, how many of these studies use subjective human experience as some aspect of their yardsticks? Is there "something wrong with the scientific method" in fields other than biology?
posted by JHarris at 6:22 PM on December 10, 2010 [1 favorite]

Tel - see, that's what's interesting to me. The link PMdixon put in upthread is a good alternative explication of the same problem. One way to address it in part might be to change the admittedly arbitrary threshold at which something counts as "statistically significant". Let's say, for the sake of argument, that the consensus became .01 instead of .05.

In one sense, you'd think raising the bar of publishability to such a degree would result in a lot fewer papers passing the threshold. Could the Condition Treatment Experiment you mention above meet such a threshold, with its 30 first round, 50 second round sample set? If you had to find and test a phenomenon with 200 or 1,000 subjects in order to potentially produce a publishable result, how many fewer experiments would get done? (There are plenty of rare diseases where you'd have a tough time digging up that many patients at all, and even in the social sciences, it's a lot tougher to get 900 hungover undergrads to sit in separate rooms and hold the experimenter's coffee cup while playing Prisoner's Dilemma than it is to find 90. For one thing, it's thousands more in grant money to lure them in.) Could the scientific establishment as we know it today even exist? Could we afford as many scientists?
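For a rough sense of the numbers, here's the standard normal-approximation sample-size formula for a two-arm comparison of means (a back-of-the-envelope sketch; real power calculations get messier):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Patients per arm needed to detect a standardized effect size d
    in a two-sample comparison of means (normal approximation)."""
    z = NormalDist().inv_cdf
    return ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2 / d ** 2)

# A "medium" effect (d = 0.5) at the usual threshold vs. a stricter one:
print(n_per_group(0.5, alpha=0.05))   # 63 per arm
print(n_per_group(0.5, alpha=0.01))   # 94 per arm
print(n_per_group(0.2, alpha=0.01))   # 584 per arm: small effects get brutal fast
```

So tightening .05 to .01 costs you roughly 50% more subjects for a medium effect, and small effects blow the budget entirely.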

Or conversely, would changing the publication threshold have little effect on what's published at all? The paper that PMdixon cites posits, after all, a null set universe in which no one ever actually discovers a real phenomenon, but the journal stacks remain fat and glossy as ever.
posted by Diablevert at 6:55 PM on December 10, 2010

But people go wrong when they think that, at the end of some pre-defined experiment, the conclusion has been reached and the method has run its course.

I think that this is a good and interesting point and, at least in the US, has something to do with the way we teach science in school, or at least in elementary school. When I was a kid, I had sort of a hazy notion that the scientific method was a worksheet on which you filled out specific items such as the hypothesis and results rather than viewing it as a process for finding out information.

I'm now studying to be an elementary school teacher and I've just taken a class on teaching science (with a FANTASTIC professor) and that kind of thinking (that the "scientific method" is this big obscure procedure that involves a prefabricated worksheet) is the kind of thing I want to combat in my own teaching, but it can be surprisingly difficult to do. Something I've learned is that many kids and, I suspect, adults as well, would much rather have a cut and dried process and concrete steps rather than a really useful framework for looking at the world. Trying to explain to people that the scientific method is a logical tool for structuring experiments and observations and that it is used to find information and data and to keep asking questions is really, really hard, and this is not helped by many of the external demands on teachers as well as the fact that since in many schools science is not tested in the same way that math and reading are it kind of gets shafted in the curriculum. This is really unfortunate for everyone because a good science education at the elementary level helps create thoughtful and intelligent citizens later.
posted by Mrs. Pterodactyl at 7:05 PM on December 10, 2010 [2 favorites]

Trying to explain to people that the scientific method is a logical tool for structuring experiments and observations and that it is used to find information and data and to keep asking questions is really, really hard,

If you think this is hard with elementary school kids, try teaching doc students ;)

Another problem is that they often see science as a thing/fact that can be memorized and implemented with checksheets, and not as a process/skill/practice that unfolds over time. But as a practice, it can take years or a lifetime to learn, and so teaching it is perhaps understandably hard.
posted by carter at 7:31 PM on December 10, 2010 [1 favorite]

How flawed is the science?

It's a huge leap to go from that to the question "How flawed is science?"

Because I think the problem here is not the concept of the scientific method, it's the fact that it is carried out by human beings. In the blog post Lehrer makes hay out of the theories of symmetry and sexual success in biology. There seemed to be consensus about it, then it fell apart.

To me that shows that the scientific method is working exactly as it should. No thought system or philosophical construct is going to make human beings free of bias and the perils of groupthink. That's just asking humans not to be human. Hopefully, science overcomes those errors, but it doesn't happen overnight.

Copernicus, Newton, Einstein, all great luminaries in scientific thought. One thing they all had in common was being dreadfully wrong about some major things in their field. That doesn't matter because we didn't take their thinking as holy writ, we applied experiment and critique to everything they did.

And we progressed. It's painful that people have to suffer from flawed medical science, but the fact is that it's better than no medical science at all.
posted by lumpenprole at 8:31 PM on December 10, 2010 [1 favorite]

I agree with lumpenprole, with a big caveat. The fact that findings are being questioned and that basic concepts like "statistical significance" or the objectivity of a lot of scientific research are facing doubt is evidence that the scientific method is at work. But that doesn't really invalidate the points that the article is making.

The word science is being used in the thread in at least three different ways. One is to refer to the method. The second is to talk about a form of knowledge. The third is to refer to a complex social system. You can't completely separate one from any of the others. It's not as if you could get rid of the pesky humans, and have science go happily on its way producing the purest of knowledge. Science is a human enterprise down to its core, and cannot exist without individual, institutional, and cultural constraints operating on it.
posted by mariokrat at 8:58 PM on December 10, 2010 [4 favorites]

Over-simplified version:
Has something gone wrong with the scientific method?
Are some people stupid?
posted by ovvl at 9:08 PM on December 10, 2010 [4 favorites]

Science can't be "flawed" because it's just the process of looking into stuff really rigorously. Studies, people, theories, results, conclusions and media reporting thereof can be flawed. Which isn't that big a deal on the time scale of "science", which is pretty long. It's only a big deal on the timescale of collective human memory (which I theorize is about 30 years, btw).

Also, more data, people; a study with an n of 14 is not really all that useful.
posted by fshgrl at 10:26 PM on December 10, 2010

Over the past however many years, we have all decided that we want crap and we want it cheap, right now! We've embraced the McDonaldsification of damn near everything. Food, clothing, cars, houses. How many of us believe that furniture that we've bought new will be in our families generations from now? Or twenty years from now?

Why would you expect our science to be different? Carefully develop your methodology for measuring what you're trying to measure? That takes time! If you want grants and/or venture capital you need results quick, and you can worry about whether they're real or just an artifact later. Of course the results of our scientific work are less and less replicable over time. The number of years our kitchen tables are capable of holding a plate of food thirty inches off the floor is going down too. Does this surprise anyone?

And all things being equal, I'd love to be doing really good science. Or making really high end furniture.

And I welcome anyone interested in paying me to do so to send me a Me-Fi mail.
posted by Kid Charlemagne at 10:28 PM on December 10, 2010

Diablevert —

I personally don't think that just changing the p value threshold will really fix the problem. It will make it more rare, but there's something endemically wrong with statistical practice in my opinion.

I don't think science is all that wrong. It's a little bit wrong in that I think the interpretation and mathematical engines behind most randomized-study statistics are a little misleading and not as useful as they could be.

I think the marketing of it all is what's really, really wrong. I think the business of it all is what's really, really, really wrong.

Here's a perspective I like a lot. An alternative method of statistics called Bayesian statistics defines probability to mean "belief". You think of quantities like the speed of light or the effectiveness of a drug as not single values but ranges of values each with a certain amount of belief attached. The speed of light is pretty well known, so you put nearly all of your belief right at 299,792,458 m/s. The effectiveness of a drug you don't know so much about so you spread your belief all around everywhere pretty near evenly.

Now let's say you've done a very well researched study. Even with hundreds of subjects, I think that the complexity of the human body is likely not to give you a lot of firm evidence (unless you control for many, many things). So now maybe the effectiveness of this drug is spread around slightly less, bunched slightly more on the positive side. Great, what now?

How do you communicate that in a public journal: "We're maybe just a little bit more likely to believe that this drug has a non-zero, beneficial effect. Look, here's a picture of my belief distribution."?

How do you perform an economic decision over this belief?


Actually, that one is well known. You decide what the cost to you would be given every value of that drug. Multiply it by your belief of the drug effectiveness being that value and look for the greatest expected return.
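A toy version of that decision calculation (the grid of candidate effects, the study result, and the payoff numbers are all invented for illustration, not taken from any real trial):

```python
import math

# Candidate values for the drug's true effect; belief starts spread evenly.
effects = [i / 10 for i in range(-5, 11)]      # -0.5 through 1.0
prior = [1 / len(effects)] * len(effects)

def update(belief, observed, sigma):
    """Bayes' rule: reweight each candidate effect by how likely the
    observed study estimate is under a normal measurement-error model."""
    likelihood = [math.exp(-(observed - e) ** 2 / (2 * sigma ** 2))
                  for e in effects]
    posterior = [b * l for b, l in zip(belief, likelihood)]
    total = sum(posterior)
    return [p / total for p in posterior]

# One noisy study that estimated an effect of 0.3, with lots of uncertainty:
posterior = update(prior, observed=0.3, sigma=0.4)

# Economic decision: an invented payoff per unit of true effect, minus cost.
payoff = lambda e: 100 * e - 10
expected_return = sum(p * payoff(e) for p, e in zip(posterior, effects))
```

The posterior is still smeared across a wide range of effects — exactly the "picture of my belief distribution" problem — but the expected-return sum gives you a single number to act on anyway.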


I think the answer really is a change in the scientific method. I think we need to recognize that in arenas outside of "hard" sciences, models are very much less useful, but they are also still necessary to learn new things. I'm a big fan of the work of Andrew Gelman who does fantastic statistical work to analyze also-very-very-complex political motions. One thing you'll find him repeating a lot is to iterate and examine models very seriously. Each one is definitely wrong, but good ones will teach you something.

This sophisticated manner of thinking about science is a good way to Improve the Method, I believe. It's a cultural shift, though. It'll take a long while.


Finally, it's always worth noting the fabled file drawer effect, where the many, many failed attempts, which may be statistically useful as null-effect examples, are just shoved into a drawer somewhere, never to be seen. We, as people, are interested in the surprising, but it's easy to forget that the more banal things you cut away, the more surprising the things that remain become.
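That drawer can, by itself, manufacture a decline effect. A toy simulation (all numbers assumed): every study estimates the same small true effect, but only the "significant" estimates get published, so the published literature overstates the effect and later, bigger replications drift back toward truth:

```python
import random
from statistics import NormalDist, mean

def published_mean(true_effect=0.1, n=30, studies=20000, seed=1):
    """Every study estimates the same small true effect with sampling
    noise, but only estimates passing a two-sided 5% test get
    "published"; the rest go in the drawer."""
    rng = random.Random(seed)
    se = 1 / n ** 0.5                          # standard error of one study
    crit = NormalDist().inv_cdf(0.975) * se    # significance cutoff
    estimates = [rng.gauss(true_effect, se) for _ in range(studies)]
    return mean(est for est in estimates if abs(est) > crit)

print(published_mean())   # several times the true effect of 0.1
```

Nothing about the underlying phenomenon changed; the selection filter alone inflates the early published estimates.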
posted by tel at 10:44 PM on December 10, 2010 [1 favorite]

But that, it seems, does not really deal with the issue raised in the article.

I think part of it is that the article has the same flaws it is describing in its scientific examples. It's selecting certain examples to prove a point. At no point does the article address instances in which repeated experiments show more pronounced effects over time. Science is expected to have situations in which results are not repeatable, and the control for that is built into the scientific process. That's why we can see the effect in the first place: because people were replicating the experiments. Selectivity in choosing papers for publication probably hampers science, but there's nothing wrong with science itself.
posted by snofoam at 11:04 PM on December 10, 2010

There's another piece here too (above: tl;dr):

Science has to stay open-ended. Since we've never seen nor can see the entire universe, there's always a chance that something important (e.g. dark matter) has been missed all along, something that'll change everything.

Which is why, as The World Famous so succinctly puts it, 'science (and, indeed, reality itself) doesn't have final answers or conclusions.' Because someone finally realized it's all too big for us to ever get our heads wrapped around it all.
posted by Twang at 11:39 PM on December 10, 2010

science (and, indeed, reality itself) doesn't have final answers or conclusions.

That's right, it doesn't. Because science is a method. We have answers and conclusions. We arrive at many of them through science. If you can propose a better method of arriving at conclusions, I'd love to hear it.
posted by lumpenprole at 2:04 AM on December 11, 2010

Hmm... I think in formulating this post I went with a glib lead which has shaped the discussion in an unfortunate way. I regret that.

Because there are two ways to look at "the scientific method." One is the abstract, or ideal, application of these techniques, in which knowledge is continually tested and retested, in which we never really trust a conclusion until it has been proven many, many times. And the other is "the scientific method" as it is currently, actually, practically used by scientists today to discover new knowledge. And in that sense what the article's suggesting -- that hidden flaws and biases are leading us to accept many new pieces of "knowledge" which are in fact false, and which are not being re-tested effectively and disproved in an efficient manner -- is the important and interesting point being raised.

It was my error to have phrased this so badly in the first place. But having raised the distinction, I don't think that it's enough to say, ideally the method works, or eventually the method works. In Ioannidis' famous paper, he reviewed 49 of the most widely cited studies in the most highly regarded journals --- a third had been discovered to be flawed, but more importantly, eleven had never been retested at all. This seems to imply that whatever the scientific method would have ideally or eventually revealed in that instance, the scientific method as it is actually being used often isn't good enough to catch the fact that drug X doesn't do dick for heart disease for over a decade, despite the fact that drug X is being prescribed to millions of people in the interim.

Is this acceptable? Having realized that flaws and incentives exist which tend to favor publication and result in such a plethora of false positives, how can the system and incentives be changed? One wonders --- my little thought experiment above was but a half-assed rule-of-thumb switch, and I don't know dick about statistics. But even thinking about something like that, about how one small change to make criteria for publication more stringent would work, suggests to me that better, more reliable science would cost a hell of a lot more, and maybe that would affect the number of people able to become scientists (which, as I understand from numerous grad school threads, is a tough enough gig to get as it is).
posted by Diablevert at 4:43 AM on December 11, 2010

The word science is being used in the thread in at least three different ways. One is to refer to the method. The second is to talk about a form of knowledge. The third is to refer to a complex social system.

I think that Mariokrat's observation is a good reason why many of these discussions can go round in circles. Particularly, there is a folk belief that science = 'fact' (which is something that scientists would argue with), science = 'best guess supported by empirical evidence' (probably something that scientists are happier with), science = 'method/work practice', and also science = 'socially situated work practice' (pressure to publish for grants and tenure, etc., which seems to be the focus of at least some of the article).
posted by carter at 4:52 AM on December 11, 2010

Oh, and one reason that the 'science as method' definition is often misunderstood is that most of the time we are not provided with any insights into the nuts-and-bolts methods in scientific papers. All we see (and what the authors are mainly rewarded for) are the results (= facts).
posted by carter at 4:55 AM on December 11, 2010

the scientific method as it is actually being used often isn't good enough to catch the fact that drug X doesn't do dick for heart disease for over a decade, despite the fact that drug x is being prescribed to millions of people in the interim.

In this case the problem is also partly that that the current configuration of interests and stakeholders in the production of 'scientific facts' (="results of clinical trials of multibillion dollar R&D gambles by pharma" in this case) is deeply flawed. The societal context drives the way that science is done. But that is another thread I suspect ;)
posted by carter at 5:16 AM on December 11, 2010

To echo and add to some other comments above, the current scientific enterprise often does not follow the ideal scientific method. Since scientists (whether in academia or industry) generally do not get credit for re-checking or verifying others' work, and due to increasing pressure to publish novel results, the verification work of peer review doesn't occur as frequently or as assiduously as it should. Resource and other constraints mean that many experiments, such as drug trials, are not conducted with sufficiently large or diverse samples, or over sufficient time periods. Also, I'm told that many scientists do not themselves understand statistical data analysis sufficiently well, and use inapplicable tests or draw incorrect conclusions (as a more classically trained mathematician, I don't even know enough statistics myself, but this is what my statistician friends tell me).

I think, though, that because of the lack of good understanding of the scientific method (the ideal) and the scientific enterprise (what I'm calling the as-it's-actually-applied method) among the general public, articles like Lehrer's, which follow typical science reporting in drawing sensationalized grand conclusions from more careful original statements, are not helping to address the problem. ("It’s as if our facts are losing their truth." Really? Only if you considered the result of one unverified experiment with a too-small sample size to be a fact in the first place.)
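The too-small-sample point can be made concrete. Here is a toy simulation (my own sketch, assuming a simple two-group design with a small true effect; none of these numbers come from the article) of the "winner's curse" that feeds the decline effect: when only statistically significant results get published, published effect sizes systematically overstate the truth, so later, larger replications appear to show the effect "declining."

```python
# Toy simulation: many under-powered studies of the same small true effect,
# with a crude significance filter standing in for publication bias.
import random
import statistics

random.seed(42)
TRUE_EFFECT = 0.2   # small true standardized effect (assumed)
N = 20              # subjects per group in an under-powered study (assumed)
RUNS = 5000

def observed_effect():
    """Difference in group means, in standard-deviation units."""
    treat = [random.gauss(TRUE_EFFECT, 1) for _ in range(N)]
    ctrl = [random.gauss(0, 1) for _ in range(N)]
    return statistics.mean(treat) - statistics.mean(ctrl)

# Crude filter: "publish" only if |z| > 1.96, with se = sqrt(2/N) for the
# difference of two group means.
se = (2 / N) ** 0.5
results = [observed_effect() for _ in range(RUNS)]
published = [d for d in results if abs(d) / se > 1.96]

print(f"true effect:              {TRUE_EFFECT}")
print(f"mean of all studies:      {statistics.mean(results):.2f}")    # ~0.20
print(f"mean of 'published' ones: {statistics.mean(published):.2f}")  # well above 0.20
```

Nothing mysterious is happening to the underlying effect; the filter alone produces the inflated-then-declining pattern.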

To fix the current problems in the scientific enterprise, we'd need funding sources (governments, industry) to recognize that most of (good) science is in the details, not in novel results. Often, continued funding is conditioned on producing novel results on a regular basis. This rewards sloppy (and sometimes outright fraudulent) science rather than good science, unfortunately. Whereas, as lots of people have already mentioned, the most important science is often the careful, detailed work to test and re-test and either build up confidence in a theory or model, or find the errors that will lead to its revision. As the work of Ioannidis and others shows, high-quality journals need to publish all results (including negative results and "yep, I checked and that experiment worked the same for me" results), not just novel ones. Publishing pressure for tenure and career advancement in academia also rewards sloppy science rather than good science, and basically penalizes scientists for doing the verification and careful peer review work necessary for the scientific method to work well (and I imagine that the promotion and bonus structures for scientists in industry produce a similar incentive structure at many research labs). I worry that this article will only encourage funding agencies to be more directed toward a sort of "accountability" questionably defined as producing a quota of publishable (under current publishing schemes) results.
posted by eviemath at 6:32 AM on December 11, 2010 [2 favorites]

It's probably true that Lehrer's article is a bit sensational and simplistic; however, I don't think that means it's not helpful. Here we enter the realm of public policy, of politics, of culture --- and in those realms a controversy is generally necessary to get people's attention and shift public attitudes to a new stance. Jeremiads, j'accuses, jests, japes and jingles --- that's what's needed to get people to argue about something long enough to come to a new consensus.

I dunno, though...I do wonder. At least one of the researchers working on these issues declined to speak with Lehrer for the record. There's a certain strain of "not before the children" in some of the reactions to this piece. In one respect, I get that; we live in a world where Jenny McCarthy gets more play and more respect than scientists in certain quarters. I can understand the impulse not to publicly engage in a debate that could potentially impair the credibility of one's work. But. If there's a widespread problem with heralded and exciting results not holding up --- and other such results not being tested at all --- then oughtn't something to change? Cosma Shalizi's post, cited by PMDixon upthread, posits a null set universe in which scientists aren't discovering anything at all, and it doesn't look or behave all that differently from our own. I dunno, it strikes me as interesting, and I worry that there's too much invested in the status quo - not just by the evil drug companies, but by scientists themselves - to change things.
posted by Diablevert at 8:57 AM on December 11, 2010

Mrs. Pterodactyl: "I think that this is a good and interesting point and, at least in the US, has something to do with the way we teach science in school, or at least in elementary school."

Don't worry; by the time you get to high school you realize you're graded by the formula so if the data doesn't match, you better go in and massage the numbers. SCIENCE!
posted by pwnguin at 12:11 PM on December 11, 2010 [1 favorite]

I dunno, it strikes me as interesting, and I worry that there's too much invested in the status quo - not just by the evil drug companies, but by scientists themselves - to change things.


Seriously, though, the even bigger problem is, what would you change and who do you think would accept it?

Let me give you my back story - for my little impurity assay, the FDA wants us to demonstrate accuracy, precision, linearity, specificity, range and quantitation limit. This is a good thing (TM). The thing is, my little assay is complex and not well behaved, so for some of these requirements we can't use the approaches that work for other assays, and we (and by we I mean the entire industry and the various regulatory agencies around the world) have empirically hammered out some workarounds to deal with this.

Anyhow, I've been doing this for about 15 years now. About seven years ago I got frustrated by my lack of understanding and started digging up the theory behind what I'm doing. What I found was that all that kinetics stuff that was being worked out between 1900 and 1930 by people like Michaelis and Menten, Scatchard, Sutherland, Einstein - that crowd - it's almost like those guys were trying to understand the foundations of reality and not just making shit up! When I plugged in a few rough approximations for what I thought my conditions were, I got curves that explained things pretty well. As a reward I got a lackluster annual review and was informed by a department director that it had been voiced that I was "too theoretical".
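For readers who haven't met it, the kinetics being alluded to boils down to rate laws like Michaelis-Menten. A minimal sketch with made-up parameter values (nothing here reflects the commenter's actual assay or any real system):

```python
# Michaelis-Menten rate law: v = Vmax * [S] / (Km + [S]).
# VMAX and KM below are arbitrary illustrative values.

def michaelis_menten(s, vmax, km):
    """Reaction velocity at substrate concentration s."""
    return vmax * s / (km + s)

VMAX, KM = 100.0, 5.0  # hypothetical units, chosen for illustration

# At [S] = Km the rate is exactly half of Vmax, by construction:
print(michaelis_menten(KM, VMAX, KM))     # 50.0
# At high substrate the rate saturates toward Vmax:
print(michaelis_menten(500.0, VMAX, KM))  # ~99
```

The appeal of a model like this is exactly what the commenter describes: two fitted constants can explain a whole family of oddly shaped curves that otherwise look like "weird crap."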

So here's how I see the problem: Imagine I blow off the stuff on my overloaded to-do list to teach myself differential equations, linear algebra and R so that I know everything I need to to put all the pieces I've gathered up into a single explanation of all the weird crap that we've observed over the years (and I don't get fired for failing to get the stuff done on my overloaded to-do list).

OK, now imagine that I convince a bunch of low-level management types with PhDs in biochemistry (but probably no more mathematics than I have) to blow off the stuff on their overloaded to-do lists to wade through five years' worth of scientific reasoning and obscure mathematics cranked out by some clown with just a bachelor's degree that isn't even in biochemistry, and that despite the fact that it's a bunch of complex math they can just barely follow, they are so inspired that they take my little opus up to some senior middle-management type (and they don't get fired for failing to get the stuff done on their overloaded to-do lists).

Then imagine that senior middle-management type deciding to take this before the FDA and tell them that we've decided to do this other thing that some clown with a bachelor's degree worked out in his spare time and totally blow off the industry standard that has evolved over decades (and him not getting fired after the FDA delays our project(s) for months while they make us explain, and then explain again, all this stuff I've worked out).

Next imagine the FDA (the same FDA I describe at the bottom of this comment) not swatting us like bugs or massively derailing our timeline, such that we have to charge a zillion dollars a pill to break even on this thing, knowing that whole swaths of FDA bureaucrats will be crucified if a newly approved drug has some kind of horrible issue after the FDA allowed one of those evil drug companies to deviate from industry standards on an impurity assay (and them not getting fired - do you see a trend here yet?).

Finally, imagine the public responding rationally when a new drug causes horrible adverse events in a small population with a previously unknown genetic variant, instead of going absolutely ape shit when they learn that the FDA didn't make one of those evil drug companies comply with the industry standard testing protocol, even if that protocol had nothing to do with the adverse event and was not well grounded scientifically.

The scientists - the people doing science aren't invested in the status quo - they're mired in it.
posted by Kid Charlemagne at 2:49 PM on December 11, 2010 [2 favorites]

Actually, the journalist has some interesting insights into the scientific method, but the framing of this article is kinda tawdry. It reminds me of those guys who published that recent book, "Darwin Wuz Rong!!"
posted by ovvl at 4:06 PM on December 11, 2010

Kid Charlemagne: " Imagine I blow off the stuff on my overloaded to-do list to teach myself differential equations, linear algebra and R ..."

I think you've made a compelling case for more math and stats classes in the undergraduate life sciences. And that you're probably ready for a PhD yourself :P
posted by pwnguin at 4:09 PM on December 11, 2010

One difference between Lehrer & Stephen Jay Gould is that Gould loved science & other scientists; he wasn't debunking, he was critiquing from within, demonstrating the never-ending process by which science makes adjustments as a consensual pool of knowledge. Another difference, which is more interesting, is that Gould was an historian, working with the aid of hindsight. This lets Gould tell stories about how bias in science gets made without freaking out. Lehrer is looking at biases that might have life & death consequences for people living right now. The things he's saying are useful - if you want to do good science, or be a critical consumer of scientific information, it helps to understand the culture of science and how the process works. But the alarmism permeating this article is really unhelpful. Science requires skepticism.

Personally, I'm hooked on having access to an academic library system so I can easily download the scientific reports behind the science news. Invariably the scientists themselves are much more candid about the contingency of their findings than the journalists. Also, if you read real science, you understand that for every paper published in Scientific American there are a whack of other papers published in science journals that cast doubt on the big-news results. It's great! A damn excellent system with built-in processes of critique.
posted by aunt_winnifred at 4:32 PM on December 11, 2010

Applying the scientific method to a small data set is part of the problem.

Doing a medical study on 50 people and drawing a conclusion about how the result would affect 50,000 people is a hard sell.

How many here would accept the results of a poll with 50 people as representative of the opinions of a small city?

Chemistry, for example, is a poll typically done on about 1,000,000,000,000,000,000,000,000 molecules. The bulk result is relatively reliable if careful observations are made. The number of factors that need to be controlled is much smaller.

Another problem with applying science to medicine is the variability in the human condition, genes, diet, history of each person in the study.

Back to chemistry: all molecules of a given structure are effectively identical.

I think the problem is applying science with too many free variables, small sample size, and monetary pressure to get the result you want.
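The arithmetic behind the poll analogy is easy to check. A quick sketch (my numbers, not the commenter's) of the worst-case 95% margin of error for an estimated proportion:

```python
# Worst-case (p = 0.5) margin of error for a proportion: z * sqrt(p(1-p)/n).
# Sample sizes below are illustrative.
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for an estimated proportion p from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"n = 50:   +/- {margin_of_error(50):.0%}")    # ~ +/- 14 points
print(f"n = 1000: +/- {margin_of_error(1000):.0%}")  # ~ +/- 3 points
```

A 50-person poll can't distinguish 43% from 57%, which is roughly the predicament of a 50-subject drug trial looking for a modest effect.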
posted by dibblda at 4:42 PM on December 11, 2010

in the meantime, people are out there being prescribed the drugs and the therapies and basing social policies on current science as it is presented to them by scientists

Big pharma pill pushers aren't scientists, and neither are the sucker ass MDs who prescribe drugs without understanding their mechanism of action. The fact that such people are ascribed the mantle of scientist is the major thing wrong with science at this juncture.
posted by solipsophistocracy at 1:06 AM on December 12, 2010 [1 favorite]

That's why all those papers on IgG2 disulfide isoforms came out of MIT Cal Tech Washington University U of I Duke Amgen.
posted by Kid Charlemagne at 5:07 PM on December 12, 2010
