Causes Are Hard
January 5, 2012 12:10 PM

Trials and Errors. Jonah Lehrer's latest piece in Wired is a sort of sequel to his earlier article in the New Yorker on the decline effect (previously). Where that article focused on the institutional factors interfering with the accumulation of truth, this one focuses on the philosophical issues of causation and correlation in modern science. [Via]
posted by homunculus (22 comments total) 24 users marked this as a favorite
 
I don't know, I like the thought that Lehrer is putting into the scientific method, but it just seems like he doesn't quite get it. Everything worked according to plan: Pfizer started from the observed inverse correlation between HDL levels and heart attacks, and developed a drug (torcetrapib) to raise HDL and see whether that lowered heart attacks in patients. They then got to the trial stage (i.e. where you actually test causation), and it didn't lower them. So the drug didn't make it to market. That's exactly how it's supposed to happen. We use apparent correlations to design experiments (in the case of new drugs, "trials") to test causation. If we just relied on correlation, torcetrapib would be on the market right now. It's not.

We use apparent correlations, hopefully well-tested ones, to identify areas for more rigorous experiments. It's a pretty efficient way to do things. I'm not sure what the alternative would be, and Lehrer certainly doesn't suggest one.
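
Here's a toy sketch of that point (entirely made-up numbers, nothing to do with Pfizer's actual data): a hidden factor drives both a biomarker and an outcome, so the two correlate strongly in observational data, but setting the biomarker directly -- which is what a drug and a trial do -- changes nothing.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Observational world: a hidden factor drives both the biomarker and the outcome.
hidden = rng.normal(size=n)
biomarker = hidden + rng.normal(scale=0.5, size=n)
outcome = hidden + rng.normal(scale=0.5, size=n)
print("observational correlation:", np.corrcoef(biomarker, outcome)[0, 1])   # ~0.8

# "Trial" world: we set the biomarker ourselves, independently of the hidden factor.
biomarker_set = rng.normal(size=n)
outcome_trial = hidden + rng.normal(scale=0.5, size=n)
print("correlation under intervention:", np.corrcoef(biomarker_set, outcome_trial)[0, 1])   # ~0
```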
posted by one_bean at 12:38 PM on January 5, 2012 [3 favorites]


The alternative would be to go straight to clinical trials with only a hypothesis; probably not a good idea.
posted by ZenMasterThis at 12:41 PM on January 5, 2012


Who cares about causes? We just need to figure out how to stop the effects. [sips Big-Gulp]
posted by blargerz at 12:46 PM on January 5, 2012


one_bean: Read the two paragraphs starting from "But here’s the bad news: The reliance on correlations has entered an age of diminishing returns."
posted by pascal at 1:22 PM on January 5, 2012 [1 favorite]


Did you know that the current CEO of Pfizer is actually a trial lawyer?

There was an interesting article a while back about how Pfizer is totally screwed. They shut down their research divisions after years of not finding anything useful, and the patents on their important drugs are expiring soon.
posted by delmoi at 1:22 PM on January 5, 2012


one_bean: Read the two paragraphs starting from "But here’s the bad news: The reliance on correlations has entered an age of diminishing returns."

Okay, I read it again. The first paragraph, about complicated pathways, is pretty well addressed with modern statistical techniques (e.g. structural equation modeling). I don't know exactly what the medical field uses to do this, but I assume that when Lehrer uses the word "correlations" he's not talking about univariate linear regression. Although he does strongly imply that's how people showed the relationship between smoking and lung cancer, which actually took multiple decades of multivariate analyses, because the tobacco companies fought it every step of the way.
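
For instance, here's a minimal sketch (synthetic data, nothing to do with any real smoking study) of what multivariate adjustment buys you over a single raw correlation: a variable that merely rides along with the true cause looks "associated" on its own, and its coefficient collapses once the real cause is in the model.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 50_000
smoking = rng.normal(size=n)                  # the real cause
coffee = 0.7 * smoking + rng.normal(size=n)   # correlated with the cause, no effect itself
risk = 2.0 * smoking + rng.normal(size=n)

# Univariate: coffee alone looks strongly "associated" with the outcome.
print(sm.OLS(risk, sm.add_constant(coffee)).fit().params)

# Multivariate: adjust for smoking and coffee's coefficient drops to ~0.
X = sm.add_constant(np.column_stack([coffee, smoking]))
print(sm.OLS(risk, X).fit().params)
```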

The second paragraph is actually about diminishing returns, which has nothing to do with the scientific method (except the problems with significance testing and publishing that Lehrer already addressed in his previous article), and more to do with the fact that things are complicated. It's not like there's some better process out there for dealing with complexity. The scientific method remains the gold standard. But yeah, people are pretty unhealthy these days and, as blargerz suggested, we should probably be focusing on policies that address the causes of those problems. But guess what the best way to identify those policies is -- statistics!
posted by one_bean at 1:37 PM on January 5, 2012 [2 favorites]


Yeah, I'm afraid I read this a while back and I'm quite sure he's missing a lot of the equation as to what constitutes failure, why and how money is being spent, and a number of other factors. It was an interesting read, but mainly because he's wrong in an interesting way.
posted by BlackLeotardFront at 1:42 PM on January 5, 2012


"The scientific method remains the gold standard."

Perhaps I should read the article again, because I missed the part where Lehrer said that it wasn't.

I don't believe he suggests anything of the sort, just that the economics attached to following the scientific method are changing in a way that is slowing progress.
posted by pascal at 1:53 PM on January 5, 2012 [1 favorite]


I don't believe he suggests anything of the sort, just that the economics attached to following the scientific method are changing in a way that is slowing progress.

Yeah, ditto. Lehrer's thesis is not "the scientific method is incorrect." It's "the scientific method is not enough." He appears to suggest that a number of the problems we're attempting to solve may be irreducibly complex, and that the human mind is too hardwired for easy answers to solve them.

One_bean, you suggest that difficulties "about complicated pathways" are "pretty well-addressed with modern statistical techniques," yet, Lehrer points out, "One study, for instance, analyzed 432 different claims of genetic links for various health risks that vary between men and women. Only one of these claims proved to be consistently replicable." He is suggesting that "modern statistical techniques" are producing all kinds of apparently "significant" results which don't bear up over time. A test that gives you 431 false positives over 432 trials is not such a great test.
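
A back-of-envelope sketch (a made-up setup, with every true effect set to zero) of how you can get numbers like that from the 5 percent threshold alone: run 432 two-group comparisons where nothing is really going on and you'll get a couple dozen "significant" hits, almost none of which repeat when you run the whole thing again.

```python
import numpy as np

rng = np.random.default_rng(2)
n_claims, n_per_group = 432, 50

def significant_hits(rng):
    # 432 two-group comparisons in which the true effect is exactly zero
    a = rng.normal(size=(n_claims, n_per_group))
    b = rng.normal(size=(n_claims, n_per_group))
    se = np.sqrt(a.var(1, ddof=1) / n_per_group + b.var(1, ddof=1) / n_per_group)
    t = (a.mean(1) - b.mean(1)) / se
    return np.abs(t) > 1.98          # roughly the two-sided 5% cutoff

first, second = significant_hits(rng), significant_hits(rng)
print("'significant' claims in study 1:", first.sum())              # ~20 of 432
print("claims that replicate in study 2:", (first & second).sum())  # ~1
```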

I dunno if there's a way to improve the scientific method, but if we're spending about $3.9 billion in R&D for every single new effective drug we find, perhaps it is time to think about it.

It's an interesting point, and something I wonder about lately. I wonder if the next century may not be one of the discovery of limits, both in terms of the level of natural resource extraction a planet can sustain and in terms of our own minds. Take Kahneman's work on cognitive biases and their effect on economic decision-making --- it's not at all clear to me that possessing the knowledge that such biases exist does anything to solve them, or even whether it should, necessarily. Neurology may yet show us the limit of free will; will that put a kink in the long arc of the universe? Perhaps...
posted by Diablevert at 2:27 PM on January 5, 2012 [1 favorite]


That article caused me to die a little inside, and that's a fact.

Lehrer has a frustratingly superficial understanding of Hume (who wouldn't have known an error bar if one had hit him in the face but did endorse a collection of rules for discovering causal relations). He adds to that a complete ignorance of the history of statistics: for example, that Bayes answered the essential problem in Hume's Enquiry within a year of its publication. He evidently knows nothing about current statistical methods: no mentions of causal Bayes nets, structural equations, or data-mining techniques in an article about discovering causal relations in complex systems? Perhaps excusable, but then he doesn't even understand the well-worn statistics that he thinks are au courant. I mean, take this gem:

This test defines a “significant” result as any data point that would be produced by chance less than 5 percent of the time.

No! Wrong! That is not how p-values work!
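
For the record, a p-value is the probability, assuming the null hypothesis is true, of getting a test statistic at least as extreme as the one you observed -- not "any data point that would be produced by chance less than 5 percent of the time." A quick sketch, with made-up measurements and a plain permutation test:

```python
import numpy as np

rng = np.random.default_rng(3)
treated = np.array([5.1, 4.8, 6.0, 5.5, 5.9, 6.2])
control = np.array([4.2, 4.9, 4.4, 5.0, 4.6, 4.3])
observed = treated.mean() - control.mean()

pooled = np.concatenate([treated, control])
extreme, n_perm = 0, 100_000
for _ in range(n_perm):
    rng.shuffle(pooled)              # relabel the groups under the null of "no difference"
    extreme += abs(pooled[:6].mean() - pooled[6:].mean()) >= abs(observed)

print("p-value:", extreme / n_perm)  # fraction of relabelings at least as extreme as the data
```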

Would that the train wreck ended there, but it doesn't. He writes:

And now we don’t really know what matters, since raising HDL levels with torcetrapib doesn’t seem to help.

Apparently, he didn't even read his own article. The problem wasn't that raising HDL doesn't help (at least in most cases); the problem was that the drug had a nasty side effect as well. The drug case illustrates the exact opposite of what Lehrer claims: we didn't understand enough of the individual pieces of the causal mechanism until we did the clinical trial. Having done the clinical trial, we understand a little more of the mechanism, and specifically, we know to look out for drugs that raise HDL while simultaneously raising blood pressure.

My conclusion: Science isn't failing us, journalists like Lehrer are.
posted by Jonathan Livengood at 2:28 PM on January 5, 2012 [7 favorites]


Only one of these claims proved to be consistently replicable." He is suggesting that "modern statistical techniques" are producing all kinds of apparently "significant" results which don't bear up over time. A test that gives you 431 false positives over 432 trials is not such a great test.

And, again, that's a problem with publishing and traditional hypothesis testing, which is what he addressed better in a previous article.

Perhaps I should read the article again, because I missed the part where Lehrer said that it [the scientific method] wasn't [the gold standard]

It's kind of in the title, "Why Science is Failing Us." But I don't want to get into a pedantic argument about editorial control, so I'll let it go.

Lehrer has identified a problem: human health is really complex, and the low-hanging fruit in improving human health has already been picked, so the additional improvements are really complicated and expensive. That is a "yeah, no shit" article that would never be published. He then tries to shoehorn into it this belabored, inadequate critique of hypothesis testing that is not only outdated but wrong. But because nobody gets taught enough statistics, and because he quotes Hume and references one of the founding fathers of the discipline, it comes off as a critique of the ways science isn't working. But it's not that science isn't working (in fact, if you look at this Pfizer story, the method worked exactly as it should have); it's just that we're working on things that are really complicated, so it's expensive and slow going.
posted by one_bean at 2:42 PM on January 5, 2012 [4 favorites]


Lehrer has a frustratingly superficial understanding of Hume

and of science, and industry, and writing, and vowels, and oxygen...


"Because scientists understood the individual steps of the cholesterol pathway at such a precise level, they assumed they also understood how it worked as a whole."

No one assumed this. They hoped it, they planned for it, they speculated on it, they acted methodically. None of this has anything to do with causation.

His suspicion is that we are close to knowing everything that's easy to know, and that the rest is harder to learn. Hence "The reliance on correlations has entered an age of diminishing returns.... First, all of the easy causes have been found"

The "problem of science" is actually the problem of science journalism. Because he-- a science minded, smart guy, has an understanding that "HDL is good and LDL is bad", he sees confounding evidence for this as a problem with science-- not with his own superficial understanding of it.

Scientists are also guilty of this at times. In his previous atrocity, "The Decline Effect," he gave examples of rigorous scientific conclusions not holding up over time; but in fact the problem is that the subsequent scientists started from a layman's understanding of that same science. A psychiatry example: a 1971 psychiatrist finds that "antidepressants double the rate of mania." WOW! A 1999 psychiatrist discovers that it's not true, and that in fact evidence over the decades finds a smaller incidence of antidepressant-induced mania. The thing is, the "antidepressants" of 1971 were completely different chemicals from those of 1999, and the way we diagnosed mania was different. Everything was different. No actual psychiatric researcher would see this as a decline effect. Lehrer would, as would many clinical psychiatrists.

"We think we understand how something works, how all those shards of fact fit together. But we don’t."

Ignorance of the science is not the same as a problem with science.
posted by TheLastPsychiatrist at 2:58 PM on January 5, 2012 [5 favorites]


I'm experiencing rapidly diminishing marginal utility with each new Lehrer article I read. Perhaps all the low hanging sensible points are gone and modern Lehrerism is in decline because it can't cope with complexity.
posted by Philosopher's Beard at 3:30 PM on January 5, 2012 [10 favorites]


posted by Philosopher's Beard at 6:30 PM

Eponysterical to +/-3σ!
posted by ZenMasterThis at 3:33 PM on January 5, 2012


I wasn't sure if this warranted also posting to the front page, but here's an article, an excerpt from an upcoming book, "To Know, but Not Understand" by David Weinberger. Its thesis is that with the accumulation of more and more data, science is getting to the point where human minds can no longer grasp the complex models necessary to explain phenomena.
posted by Apocryphon at 3:55 PM on January 5, 2012


[he] has an understanding that "HDL is good and LDL is bad", he sees confounding evidence for this as a problem with science-- not with his own superficial understanding of it.

I don't think Lehrer sees the problem as the fact that the drug didn't work. He sees the problem as the fact that despite having studied the cholesterol production line for nearly 100 years, having spent $1.1 billion on this drug, and having got it to Phase III FDA testing, the drug still didn't work.

in fact the problem is that the subsequent scientists started from a layman's understanding of that same science. A psychiatry example: a 1971 psychiatrist finds that "antidepressants double the rate of mania." WOW! A 1999 psychiatrist discovers that it's not true, and that in fact evidence over the decades finds a smaller incidence of antidepressant induced mania. The thing is, the "antidepressants" in 1971 were completely different chemicals than those in 1999; the way we diagnosed mania is different. Everything is different. No actual psychiatric researcher would see this as a decline effect.

Let me see if I understand you. Subsequent scientists --- professionals conducting rigorous peer-reviewed research which meets the publication standards of major journals in their field --- have a layman's understanding of previous work in their specialty? Such that they fail to understand that the definition of a key term of their art has so changed in the past couple of decades that previous work done on the subject was in effect dealing with an entirely distinct phenomenon? Secondly, the definition of a basic term of art has so changed that observations as to its prevalence in the past are entirely irrelevant to the present? But this is not problematic in any way as regards our understanding or practice of science, beyond its propensity to confuse poor benighted science journalists?
posted by Diablevert at 4:56 PM on January 5, 2012


Does the decline effect apply to the decline effect?
posted by L.P. Hatecraft at 6:53 PM on January 5, 2012


Subsequent scientists --- professionals conducting rigorous peer-reviewed research which meets the publication standards of major journals in their field --- have a layman's understanding of previous work in their specialty? Such that they fail to understand that the definition of a key term of their art has so changed in the past couple of decades that previous work done on the subject was in effect dealing with an entirely distinct phenomenon?

While I'm not sure I agree about the specific example given regarding psychiatric medications, more generally, yes, this is a frequent occurrence among people rightly regarded as experts in their fields. It's a selective effect, though -- some things are never forgotten -- but I have no idea what makes some of these things stick and some not.

The mode of scientific discourse changes -- while I can still find identifiable elements of form and content in papers from the 50's through the early 90's, often they refer to concepts that are completely foreign to me; in other cases 'false friends' may be serious problems. Is a 'suppressor T cell' from the early 80's the same as a 'regulatory T cell'? No, and one important factor is that suppressor T cells don't exist, even though they were imputed to have many of the same properties as a regulatory T cell, which I will bet this American Dollar is a real thing.
posted by monocyte at 7:09 PM on January 5, 2012 [3 favorites]


I don't know anything about Lehrer, but I've seen something similar over my own career in biochemistry/biotechnology. It seems like the old one-factor-at-a-time (OFAAT) approach doesn't work for studying complex networks, yet studying all factors in all combinations (the full factorial approach) is overwhelmingly complicated, slow, and expensive. Shortcuts like fractional factorial studies are alluring but often fail because in many cases we don't even know which (non-obvious) factors are important, except in hindsight.
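
A rough sketch of the combinatorics behind that (two-level factors, counting runs only): one-factor-at-a-time grows roughly linearly with the number of factors, a full factorial doubles with every factor you add, and even a half-fraction only buys back one of those doublings.

```python
from itertools import product

for k in (3, 5, 10, 20):
    ofaat = k + 1                 # a baseline run plus one run per factor
    full = 2 ** k                 # every combination of k two-level factors
    half_fraction = 2 ** (k - 1)  # a common fractional-factorial shortcut
    print(f"{k:2d} factors: OFAAT ~{ofaat:>3} runs, "
          f"full factorial {full:>9}, half-fraction {half_fraction:>9}")

# The 8 runs of a full factorial over 3 two-level factors:
print(list(product([0, 1], repeat=3)))
```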

I've worked on an enzyme that "ran uphill" in situ, yet behaved normally in isolation. No violation of thermodynamics here - inside this enzyme's native organelle, mass action drove the "uphill" reaction because the overall reaction catalyzed by the organelle was "downhill". (Like the way a siphon makes water flow uphill briefly in an aqueduct.) I've worked at a biotech company that had a promising product crash and burn in late clinical trials because of an unexpected immune response in a few patients. With hindsight it was possible to connect the dots, but nobody predicted which dots were involved.
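
For the "uphill" enzyme, the mass-action point is just the standard free-energy relation, dG = dG_standard + RT ln Q: a step that's unfavorable at standard-state concentrations still runs forward if the rest of the pathway keeps its product scarce. A back-of-envelope sketch with invented numbers:

```python
import math

R, T = 8.314, 310.0            # gas constant (J/mol/K), roughly body temperature
dG_standard = 10_000.0         # J/mol: "uphill" by the standard-state numbers

for Q in (1.0, 0.1, 0.001):    # Q = [product] / [substrate]
    dG = dG_standard + R * T * math.log(Q)
    print(f"[P]/[S] = {Q:>6}: dG = {dG / 1000:+.1f} kJ/mol")
```

Only the last case comes out negative, i.e. the step runs forward only because the downstream reactions keep draining the product away.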

OFAAT has served us well for a few centuries, but as science tackles more complex systems it may not be productive to study everything in isolation. I think it will always be the best way to learn about an existing thing (e.g., an enzyme) in great detail, but sometimes things are more interesting in the way they interact with other things. And we've already learned great detail about lots of things - the easy work has largely been done already even though it didn't feel easy at the time (ask any scientist about their grad school days).

And if you want to design something to affect other things (e.g., a drug) you need to consider All The Things. This is enormously hard, and it's why we do clinical trials in the end - nobody can predict (yet) how things will interact in a staggeringly complex network like a living human. We make our best educated guesses but in the end it's trial and error, and most experimental drugs fail.

I'm not smart enough to come up with the answer, but I wonder what overarching method will supplant OFAAT for scientific research. I suspect vast amounts of computing power will be involved but how, exactly, is beyond me. Heck, we can't even agree how to depict networks visually let alone tweak them to our desire.
posted by Quietgal at 9:47 AM on January 6, 2012 [5 favorites]


Favorite part: "Although this account felt true, the brain wasn’t seeking the literal truth—it just wanted a plausible story that didn’t contradict observation"

Holy self-referential goble-dee-guck.
posted by TheShadowKnows at 4:48 PM on January 6, 2012 [1 favorite]


"Instead, we live in a world in which everything is knotted together, an impregnable tangle of causes and effects. Even when a system is dissected into its basic parts, those parts are still influenced by a whirligig of forces we can’t understand or haven’t considered or don’t think matter. Hamlet was right: There really are more things in heaven and Earth than are dreamt of in our philosophy".

I'm a little surprised at the hate for Lehrer here. It almost seems as if some are protesting too much... you don't have to deny the principles of the scientific method to see that there are, at the very least, many and accumulating difficulties and contradictions in research without clear or easy solutions. Many of the arguments sound, to a complete layman like me, like semantic/pedantic differences. I think Lehrer's overall argument can't just be blasted away. He doesn't have to be exactly correct about the details of Hume's philosophy or the subtleties of p-value statistics for his overall point to remain valid. As Quietgal observes, the problems that Lehrer is identifying are features of the system itself.
posted by blue shadows at 12:28 AM on January 7, 2012


Great stuff, Quietgal. I've been chewing on this article for days now, mainly because it seems so defeatist. Yes, the absolute reductionist days are gone, or fading fast into triviality; no more sapphire bullets -- one gene, one disease; one pathway, one cure -- are going to emerge in the near future. That does not, however, spell doom for the scientific method, or for research as a whole. What we have now are reams of data, accumulating ever faster and more cheaply, with more networked and nuanced findings waiting to be found within. The problem is that the toolset and mindset needed to go in and pull out causes and effects haven't been formulated and codified yet. There is no equivalent of Koch's postulates for complex traits or diseases. That, I'll hazard, is why we see so many GWAS studies that never really pan out: there aren't enough means to recapitulate a complex system in isolation from all the other confounding factors and prove that the load-bearing nodes are your real culprits.

But there should be, and soon. However, we're still trying to get our heads around the umpteenth genome, or interactome, or whatever -ome just came out last week. This is distracting and frustrating... but also tremendous and exciting. What basic research is doing now is developing the metrics necessary to assign the weak but significant interactors a definite function. More data is helpful, always, but much of it comes as a huge, unannotated text file, or a spiderweb which can't, initially, bring you any closer to causality.

I used to joke about this, but there comes a time when you really just need to embrace the complexity and use it toward your conclusions.

On that note, some have modeled that the robustness of a system is not necessarily a function of its specific topology, but rather of its sheer complexity. As we presume ourselves to be robust, thoroughgoing organisms, the finding that we are biologically more intertwined and complex than a reductionist might hope should not be a cause to shut down a research division, but a challenge to redesign our methodologies and causality metrics so that we can begin to understand and define ourselves with complexity always in mind.
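
On that last point, here's a toy sketch (random graphs, nothing biological; the node and edge counts are invented) of robustness tracking sheer density of interactions rather than any particular wiring: knock out the same fraction of nodes in sparse and dense random networks and compare how much of a connected core survives.

```python
import random
import networkx as nx

random.seed(4)
n_nodes, knocked_out = 1000, 300

for avg_degree in (2, 4, 8, 16):
    G = nx.erdos_renyi_graph(n_nodes, avg_degree / n_nodes, seed=4)
    G.remove_nodes_from(random.sample(list(G.nodes), knocked_out))   # random knockouts
    giant = max((len(c) for c in nx.connected_components(G)), default=0)
    print(f"average degree {avg_degree:2d}: largest surviving component "
          f"{giant} of {n_nodes - knocked_out} remaining nodes")
```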
posted by Cold Lurkey at 9:33 PM on January 12, 2012 [1 favorite]



