Additive-noise methods
January 12, 2015 7:42 PM   Subscribe

How to tell correlation from causation - "The basic intuition behind the method demonstrated by Prof. Joris Mooij of the University of Amsterdam and his co-authors is surprisingly simple: if one event influences another, then the random noise in the causing event will be reflected in the affected event."
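Here's a minimal toy sketch of that intuition in Python (my own construction for illustration, not the authors' code): simulate Y as a nonlinear function of X plus independent noise, fit a flexible regression in both directions, and check whether the residuals still carry information about the input. Under an additive-noise model, only the true causal direction should leave residuals that look independent of the input.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Toy additive-noise model: X causes Y, noise independent of X.
x = rng.uniform(0.1, 2.0, 5000)
y = x**3 + rng.normal(0, 0.2, size=x.size)

def residual_dependence(inp, out, degree=5):
    """Fit a polynomial regression of `out` on `inp`, then measure how
    strongly the residual magnitude still depends on the input.
    Near zero is what an additive-noise model predicts."""
    resid = out - np.polyval(np.polyfit(inp, out, degree), inp)
    rho, _ = spearmanr(np.abs(resid), inp)
    return abs(rho)

forward = residual_dependence(x, y)   # regress effect on cause
backward = residual_dependence(y, x)  # regress cause on effect

print(f"X -> Y residual dependence: {forward:.3f}")   # should be near 0
print(f"Y -> X residual dependence: {backward:.3f}")  # should be clearly larger
print("inferred direction:", "X -> Y" if forward < backward else "Y -> X")
```

(A crude Spearman proxy stands in here for the proper independence measures the paper actually evaluates, but the asymmetry it exploits is the same.)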
posted by kliuless (25 comments total) 60 users marked this as a favorite
 
this is pretty cool. I like what statistics can do with all the noise that we try to schlep into a single datapoint.
posted by rebent at 7:57 PM on January 12, 2015 [1 favorite]


i'm going to have to read the last two links later because i'm tired, so maybe it's in one of those, but how do you tell which event is the "originator" of the noise? or are they saying that just the presence of similar noise indicates the possibility of a link?
posted by ArgentCorvid at 8:06 PM on January 12, 2015 [1 favorite]


I wonder if you could take all kinds of disparate data sets, run this additive-noise method on every possible pair, and come away with some lucky hits (since it's only ~80% effective) on potential cause/effect pairs you hadn't even considered before? Could be an interesting tool for pathology if that were possible.
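Something like this, maybe (an entirely hypothetical sketch; `screen_pairs` and the 0.2 asymmetry cutoff are made up, and the dependence score is the same toy proxy from the post above, not anything from the paper):

```python
import itertools
import numpy as np
from scipy.stats import spearmanr

def residual_dependence(inp, out, degree=5):
    """How much the residuals of a polynomial fit still depend on the input."""
    resid = out - np.polyval(np.polyfit(inp, out, degree), inp)
    return abs(spearmanr(np.abs(resid), inp)[0])

def screen_pairs(data, gap=0.2):
    """`data` maps variable names to equal-length 1-D arrays. Keep pairs
    where one regression direction looks clearly more 'additive'."""
    hits = []
    for a, b in itertools.combinations(data, 2):
        fwd = residual_dependence(data[a], data[b])
        bwd = residual_dependence(data[b], data[a])
        if abs(fwd - bwd) > gap:  # big asymmetry = candidate causal pair
            cause, effect = (a, b) if fwd < bwd else (b, a)
            hits.append((cause, effect, abs(fwd - bwd)))
    return sorted(hits, key=lambda h: -h[2])  # strongest candidates first
```

With an ~80% hit rate you'd still want every candidate pair vetted by hand, but as a hypothesis generator it seems plausible.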
posted by jason_steakums at 8:17 PM on January 12, 2015 [2 favorites]


The breakthrough is that in certain cases you can tell which is cause & which is effect by comparing the noise in both.
posted by scalefree at 8:20 PM on January 12, 2015 [1 favorite]


The approach is very clever, and yet it seems so obvious once you hear it. When so much of statistics is about cutting through noise, maybe there was a sort of collective mental block around using that noise for something useful. It almost feels like a paradigm shift, like there might be other secrets hidden in the stuff we used to throw away, other techniques that could leverage all of the incidental trails it leaves behind. Statistics for me is something that begins around traditional data analysis and ends with a bit of quantum mechanics, so these sorts of devilish and practical tricks are totally alien to me. Are there other, similar manipulations that can be applied to large data sets like these that yield unexpected insight?
posted by WCWedin at 8:26 PM on January 12, 2015 [2 favorites]


Determining how noise correlates with a signal is arguably the crux of practical signal processing and control systems in engineering, and has been for decades -- it's interesting to see this type of theory applied to other sciences. I'd be surprised if this was completely new: it's a logical approach to noise management. Similar statistical methods are used in particle and astrophysics, especially when attempting to detect extremely minute signals (e.g. isolating a distant gamma burst or determining if a neutron has hit your detector).

I'd love to see this retroactively applied to all major social studies research from the past decade. I suspect it would reveal fascinating patterns in secondary effects.
posted by Darmok and Jalad at Tanagra at 8:29 PM on January 12, 2015 [3 favorites]


ArgentCorvid: one would expect noise in the cause to correlate with the effect, but not vice versa. You test for both directions and look for cases where it only goes one way.
posted by idiopath at 8:50 PM on January 12, 2015 [5 favorites]


Am I wrong in thinking that this feels like a cross-sectional extension of Granger causality?
posted by ROU_Xenophobe at 9:12 PM on January 12, 2015 [2 favorites]


The second link states everything more carefully. I'm not clear on how you can tell noise in the cause from noise in the effect, but it looks like it involves some subtle statistics-fu or signal-processing-fu, simple as the general principle is to describe. It also looks like it's imperfect (it worked on 80% of the datasets they threw at it; I assume that means 20% of the time it said that windmills cause the wind?) and only works on certain kinds of datasets.

With all those cautionary notes in place, it's still pretty damn cool.
posted by edheil at 9:18 PM on January 12, 2015


This doesn't sound like they're untangling correlation vs. causation, but just figuring out the direction of causation (and presumably assuming that one variable is causing the other). The principal problem in going from correlation to causation (the reason you can never empirically prove causation) isn't direction, it's the problem of spuriousness. Outside of a randomized experiment you can never be sure there isn't a third confounding variable. If there is a third confounding variable, presumably its noise is reflected in everything it affects and thus the noise in everything it affects would be correlated. That is, it seems likely this method would show "causation" between two variables that were spuriously related.

Am I missing something?

I guess it's nice to have a way to determine direction of causation when that's in doubt, but this doesn't sound to me like the scientific breakthrough that will finally eliminate the eternal bickering over causation.
posted by If only I had a penguin... at 9:20 PM on January 12, 2015 [4 favorites]


If only I had a penguin... --
It’s worth pointing out that this applies only in the very simple situation in which one variable causes the other. But of course there are plenty of much more complex scenarios where this method will not be so fruitful.
From the second link.

The headline giveth; the end of the article taketh away.
posted by edheil at 9:25 PM on January 12, 2015 [6 favorites]


Am I wrong in thinking that this feels like a cross-sectional extension of Granger causality?

Not one bit. Anybody with even one semester of econometrics knows that your missing variables are going to be reflected in the error term; it stands to reason that noise from a causative event would influence the caused event. I'd have to read the paper in more detail, but it sounds like they just applied Granger-style lags to ɛ, which is clever but hardly ground-breaking.
posted by fifthrider at 10:04 PM on January 12, 2015


If only I had a penguin..., the test looks for whether there is a directional difference in the noise: whether noise from X shows up in Y in ways that noise from Y does not show up in X.

Presumably if X and Y are both caused independently by Z, then you'll find neither an X --> Y noise asymmetry nor a Y --> X asymmetry.
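A quick toy simulation of what I'd expect in that case (my own construction, nothing from the paper): when Z drives both, neither regression direction should leave clean, input-independent residuals.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

# Confounded toy case: Z drives both X and Y; no direct X-Y link.
z = rng.uniform(0.1, 2.0, 5000)
x = z + rng.normal(0, 0.1, size=5000)
y = z**3 + rng.normal(0, 0.1, size=5000)

def residual_dependence(inp, out, degree=5):
    resid = out - np.polyval(np.polyfit(inp, out, degree), inp)
    return abs(spearmanr(np.abs(resid), inp)[0])

# Expectation: both numbers clearly nonzero, i.e. no usable asymmetry.
print("X -> Y residual dependence:", residual_dependence(x, y))
print("Y -> X residual dependence:", residual_dependence(y, x))
```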
posted by straight at 10:06 PM on January 12, 2015 [1 favorite]


If there is a third confounding variable, presumably its noise is reflected in everything it affects and thus the noise in everything it affects would be correlated. That is, it seems likely this method would show "causation" between two variables that were spuriously related.

You've hit that on the head. It's entirely possible that the "noise" isn't random at all, but rather the result of an omitted variable, in which case this approach would probably do exactly what you describe.
posted by fifthrider at 10:06 PM on January 12, 2015


On second reading of the Quartz piece:

Still, this method isn’t a silver bullet. Like any statistical test, it doesn’t work 100% of the time. And it can only handle the most basic cause-and-effect scenarios. In a three-event situation—like the correlation of ice cream consumption with drowning deaths because they both depend on hot weather—this technique falters.

So, yeah; this doesn't really unpick the causality problem at all unless you're completely sure you haven't omitted any variables, which is essentially impossible. Talk about a 'spherical cow' solution.
posted by fifthrider at 10:10 PM on January 12, 2015


So to determine if it's causation or correlation, you first determine if it's causation?
posted by blue_beetle at 10:16 PM on January 12, 2015 [1 favorite]


So to determine if it's causation or correlation, you first determine if it's causation?

You jest, of course, but the prevailing method relies on a much simpler intuition than this one: namely, that if the previous observations of one variable seem to affect the current state of another, then the causation runs in that direction. Rather than demanding that there be absolutely no missing variables, this test just assumes that we live in a universe where present events can't alter the past.
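For contrast, that prevailing approach is a couple of lines with standard Granger machinery (statsmodels here; the toy series is mine and has nothing to do with the additive-noise paper):

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(2)

# Toy series in which x leads and y follows one step behind.
n = 500
x = rng.normal(size=n)
y = 0.8 * np.concatenate([[0.0], x[:-1]]) + rng.normal(0, 0.5, size=n)

# Asks: do lagged values of the second column (x) help predict the
# first column (y) beyond y's own lags? Small p-values say yes.
grangercausalitytests(np.column_stack([y, x]), maxlag=2)
```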

tl;dr: time machines wouldn't just screw up history, they'd make it so we'd never be able to be sure about correlation and causation!
posted by fifthrider at 10:26 PM on January 12, 2015 [1 favorite]


It seems to me that the crux of the matter is this:

As far as we know, there are no theoretical results on the choice of that threshold that would lead to a consistent way to test whether p(x, y) satisfies an ANM X → Y. We circumvent this problem by assuming a priori that p(x, y) either satisfies an ANM X → Y, or an ANM Y → X, but not both.

The basic idea is that if Y is a non-linear function of X, then if we regress Y on X the errors will be independent of X, but if we regress X on Y, the errors will not be independent of Y. As others have pointed out, though, if Z causes both, then the errors will (I think) not be independent in either direction. As it is, they assume X → Y or Y → X and just pick the direction where the errors are more independent of the input. But if they had a proper threshold, they could potentially conclude that neither direction crossed the threshold, and thus that there was probably some Z causing both -- though I imagine it will be very hard to figure out a threshold that works across all kinds of data and relations. On the other hand, one of their simulated test data sets, sim-c, does include a confounder Z and works just as well as the others, although it still appears to be the case that X and Y are also directly causally connected, as well as both being influenced by Z.
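In code, the gap between their forced choice and a properly thresholded test might look like this (same toy dependence proxy as upthread; the 0.1 threshold is an arbitrary placeholder, which is exactly the problem they flag):

```python
import numpy as np
from scipy.stats import spearmanr

def residual_dependence(inp, out, degree=5):
    resid = out - np.polyval(np.polyfit(inp, out, degree), inp)
    return abs(spearmanr(np.abs(resid), inp)[0])

def infer_direction(x, y, threshold=0.1):
    fwd = residual_dependence(x, y)  # errors of regressing Y on X
    bwd = residual_dependence(y, x)  # errors of regressing X on Y
    forced = "X -> Y" if fwd < bwd else "Y -> X"  # the paper's a-priori assumption
    if fwd > threshold and bwd > threshold:
        guarded = "no ANM fits either way -- maybe a confounder Z"
    else:
        guarded = forced
    return forced, guarded
```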

Another nit-pick, if one wants to go in for these things: Figure 9 (reprinted in the medium.com writeup) has the main results, I think, and the one that really matters is the blue bar, which is the real-world data. And here they somewhat shoot themselves in the foot by trying so many independence measures -- some of those blue bars are well above the 50% line, and others aren't. But of course they're all correlated with each other, so the fact that they are mostly all above doesn't really add anything (as the authors discuss at the end, with the Bonferroni correction stuff). So at least with the real-world data this is only arguably doing better than chance, although with the simulated data it seems to work much better (as is usually the case). All of that's not a put-down, just to say that it's a neat idea with lots of room for improvement still.
posted by chortly at 12:04 AM on January 13, 2015 [3 favorites]


Well, I guess this just about wraps it up for that chump David Hume.
posted by Segundus at 2:29 AM on January 13, 2015 [3 favorites]


"if one event influences another, then the random noise in the causing event will be reflected in the affected event."

Not always, though.

The bigger problem social scientists face is endogeneity. It's not clear how this method addresses it.
posted by MisantropicPainforest at 4:46 AM on January 13, 2015


FTFA:
"But the key insight is that random fluctuation in traffic will affect John’s commute time, whereas random fluctuation in John’s commute time won’t affect the traffic. By detecting the residue of traffic fluctuation in John’s commute time, we could show that traffic causes his commute time to change, and not the other way around."

So they're using a very generous and expansive definition of causality?
posted by MisantropicPainforest at 4:48 AM on January 13, 2015


Well, I guess this just about wraps it up for that chump David Hume.

Not in the slightest. This is just a fancier way of empirically determining that A causes B, and it really doesn't touch Hume's point that that's not a judgement that is determined according to rational principles; rather, it's an association based on repeated observations.
posted by thelonius at 4:52 AM on January 13, 2015


Not in the slightest. This is just a fancier way of empirically determining that A causes B, and it really doesn't touch Hume's point that that's not a judgement that is determined according to rational principles; rather, it's an association based on repeated observations.

it has always seemed to me that "strong AI" research has foundered on this particular metaphysical shoal...
posted by ennui.bz at 5:49 AM on January 13, 2015


Um, perhaps I'm missing something, but for time-varying data, if you are certain that there are only two possibilities (A->B or B->A) and no additional variables (e.g. C->A and C->B), then can't you just look at which variable leads or lags in time?
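e.g., a crude cross-correlation check along these lines (toy data, all numbers mine):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy check of the lead/lag intuition: y is x delayed by 5 steps.
n = 1000
x = rng.normal(size=n)
y = np.concatenate([np.zeros(5), x[:-5]]) + rng.normal(0, 0.3, size=n)

# Cross-correlate and find the shift where the two series align best.
xc = np.correlate(y - y.mean(), x - x.mean(), mode="full")
lag = np.argmax(xc) - (n - 1)
print("estimated lag:", lag)  # positive => x leads y, suggesting x -> y
```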
posted by jpdoane at 2:28 PM on January 16, 2015

