February 12, 2002 6:01 PM

Here's a damning indictment of the (mis)use of regression analysis in the social sciences.

[Y]ou may have fallen for a pernicious form of junk science: the use of mathematical models with no demonstrated predictive capability to draw policy conclusions. These studies are superficially impressive. Written by reputable social scientists from prestigious institutions, they often appear in peer reviewed scientific journals. Filled with complex statistical calculations, they give precise numerical "facts" that can be used as debaters' points in policy arguments. But these "facts" are will o' the wisps. Before the ink is dry on one study, another appears with completely different "facts." Despite their scientific appearance, these models do not meet the fundamental criterion for a useful mathematical model: the ability to make predictions that are better than random chance.
posted by electro (11 comments total)
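The quoted criterion, predictions better than random chance, is straightforward to check in practice. A minimal sketch in Python (synthetic data and made-up variable names, purely for illustration): fit a regression on half the data, then see whether it predicts the held-out half better than simply guessing the mean.

import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=(n, 3))                  # three hypothetical predictors
y = 0.5 * x[:, 0] + rng.normal(size=n)       # outcome driven by one predictor plus noise

train, test = slice(0, n // 2), slice(n // 2, n)
X_train = np.column_stack([np.ones(n // 2), x[train]])
X_test = np.column_stack([np.ones(n // 2), x[test]])

# Ordinary least squares fit on the training half only
beta, *_ = np.linalg.lstsq(X_train, y[train], rcond=None)
pred = X_test @ beta

mse_model = np.mean((y[test] - pred) ** 2)
mse_baseline = np.mean((y[test] - y[train].mean()) ** 2)   # "no model": always guess the mean
print(f"out-of-sample MSE -- model: {mse_model:.3f}, mean-only baseline: {mse_baseline:.3f}")

A model that cannot beat the mean-only baseline out of sample is exactly the kind of "fact" generator the article is complaining about.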
 
Time to roll out the good old Frankfurt School of Critical Theory. Banzai!
posted by cx at 6:25 PM on February 12, 2002


I believe Mark Twain beat him to this critique.
posted by srboisvert at 6:58 PM on February 12, 2002


The problem is not the tool, it is the application. Such models do have predictive power when constructed correctly. Most social scientists trying to play real scientists, unfortunately, don't have the background to do this correctly. But saying that all mathematical models in the social sciences are flawed is, well, flawed. It's just as much junk logic as the things he cites are junk science.
posted by louie at 7:30 PM on February 12, 2002


It's amazing how seductive and contagious pseudo-scientific factoids are when they seem to deliver a truth about policy.

On a tangent, I'm reminded of the recent book Arming America, which claimed that gun ownership and the gun culture were relatively unknown to early America and did not really spread until the time of the Civil War. This provocative thesis was widely embraced, as you can imagine. But now a (pro-gun-control) law professor with a background in quantitative analysis has said that the work was shoddy, or even fraudulent [via Arts & Letters Daily].
posted by Zurishaddai at 7:55 PM on February 12, 2002


I agree with louie in that the models can be accurate if applied correctly. The problem is that all of this research is being done under grants from people who want the results to come out a particular way.

Sorry to put a conservative spin on this, but Dr. Thomas Sowell points out many of these false statistics in several of his books. One of my favorites is that scientists always seem to find that all the known oil in the world will be exhausted in 20 years. What they fail to consider is the fact that:

a) technologies for locating oil reserves will improve over that time period.

b) it's not economical for any company to look for oil beyond a 20 year world supply.

But even though these facts (and others) have been pointed out to explain why only 20 years' worth of known oil exists at any given time, scientists keep publishing reports with big scary headlines saying that the world is about to run out of oil.

Whatever group will profit the most from that conclusion takes the report and waves it as high as it can, shouting that the world oil supply is about to run out and the only solution is . . .

As they say in police work, follow the money trail. Was it surprising that Microsoft was able to find researchers who had concluded that Microsoft's practices were good for the tech industry?
posted by billman at 8:15 PM on February 12, 2002


In the good graduate programs, aspiring social scientists are indeed taught not to confuse correlation with causality. They're taught that models are only valid if the underlying statistical assumptions are met. And they're taught that no amount of fancy statistical technique will fix what was bungled in the research design phase.

And then they go out into the real world, and find that there's absolutely no reward for being conservative about one's statistics. Policymakers don't want the hedges; they want answers. Journals don't publish you when you're honest about potential flaws in the design.

In short, the only thing you gain in being meticulous in your statistics is a reputation as a meticulous researcher. That counts for something, but in the publish-or-perish world of academia, it doesn't count for much.

And so bit by bit, researchers tend to cut corners, until they're down to circling any correlation where p < .05, and claiming causality. It's bad science, but it often gets the good grants.
posted by Chanther at 8:42 PM on February 12, 2002
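Chanther's last point, circling any correlation where p < .05 and claiming causality, is easy to demonstrate with pure noise. A hedged sketch in Python (entirely synthetic data, no real study implied): screen 40 predictors that have nothing to do with the outcome and count how many clear the p < .05 bar anyway.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_subjects, n_predictors = 100, 40
outcome = rng.normal(size=n_subjects)
predictors = rng.normal(size=(n_subjects, n_predictors))   # all pure noise

hits = 0
for j in range(n_predictors):
    r, p = stats.pearsonr(predictors[:, j], outcome)       # correlation and its p-value
    if p < 0.05:
        hits += 1
print(f"{hits} of {n_predictors} noise predictors are 'significant' at p < .05")

With a .05 threshold you expect roughly 40 x 0.05 = 2 false positives on average, each one a publishable-looking "finding" if nobody asks about the other 38 tests.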


Chanther: I wouldn't even give the graduate programs that much credit. My roommate is a political science grad student at quite probably the best program in the country *cough*Hah-vahd*cough*. He can't count his way out of a paper bag. Love the guy to death, but he's math-challenged. And his classmates think of him as the methodological giant in the class. No matter how many times you say 'correlation is not causality' to folks who haven't taken a hard science course since junior year of high school, it isn't going to sink in at the level it does for people with real science backgrounds. It happens that at some programs (Michigan, for example) departments are actively recruiting people who were hard scientists first (physicists and mathematicians mainly) and then political scientists later. Those people are starting to do incredible work in a number of sub-fields of political science. In 20-30 years, the field will be much more rigorous as a result of their work. But until then it will still suffer from people who were soft and then tried to be hard instead of the other way around.
posted by louie at 10:37 PM on February 12, 2002


louie: math and method are separate kinds of knowledge, so it is quite possible to be mathematically challenged and still be methodologically sound. Method is about the design of the research, the logic of analysis, the way the question is framed. The math often comes later; it's used to answer the questions that the research is designed to answer. A good scientist (social or otherwise) should have both.
The people coming from the hard sciences might have the math down, but that is not sufficient to make them good scientists. They are often the ones who miss the forest for the trees. Often they don't have the breadth of knowledge you need to have. I don't think that social science researchers with physics and math backgrounds are going to be the magic pill that solves all problems. It's good graduate training that is the answer.
posted by rsinha at 11:27 PM on February 12, 2002


what's with the [y]ou???
posted by monkeyJuice at 1:45 AM on February 13, 2002


MonkeyJuice - the [Y]ou is an editing convention. It means that the Y was not capitalized in the original. It's a heads up to a reader that the quote they're reading starts in the middle of a sentence from the original text.

Louie - yeah, I agree the methods training out there in many (if not most) graduate schools is crap. I tried to qualify by saying "the good graduate programs" - but I'll have to agree that they're the exception rather than the norm, even at top-flight universities.

[plug] As for your roommate, tell him to cross register into the S-030 and S-052 sequence at (of all places) the Harvard school of education. They're the best methods courses at the university, and so popular now that they're only about half filled with Ed school students. He'd find the actual statistical methods taught to be elementary, I'd guess - but the training they give in how to conduct methodologically sound social science research is the best anywhere. [/plug]
posted by Chanther at 5:21 AM on February 13, 2002


And so bit by bit, researchers tend to cut corners, until they're down to circling any correlation where p < .05, and claiming causality. It's bad science, but it often gets the good grants.

i read someplace that p < .05 is just an arbitrary significance threshold that fisher(?) settled on for something to count as statistically significant, so i'm not even sure p < .05 counts. bayesian analysis (http://mathworld.wolfram.com/BayesianAnalysis.html) might provide an alternative because it allows you to update your "degrees of belief" (if you believe in such things :) but it presents its own difficulties when assigning the prior. although it seems some inroads are being made using maximum entropy methods.

what's interesting to me though is what happens when shannon's definition of entropy is generalized(!) to nonextensive cases?

posted by kliuless at 7:34 AM on February 13, 2002
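Two editorial sketches of the ideas kliuless gestures at, both with invented numbers. First, the "updating degrees of belief" step, using the textbook Beta-Binomial conjugate pair; the choice of prior (a, b) is exactly the awkward step the comment points to.

# Prior Beta(a, b); after k successes in n trials the posterior is Beta(a + k, b + n - k)
a, b = 2.0, 2.0          # a mildly informative prior centred on 0.5 (assumed, not from the thread)
k, n = 7, 10             # hypothetical data: 7 successes in 10 trials

a_post, b_post = a + k, b + (n - k)
print(f"prior mean {a / (a + b):.2f} -> posterior mean {a_post / (a_post + b_post):.2f}")

Second, the "nonextensive" generalization of Shannon entropy usually refers to Tsallis entropy, S_q = (1 - sum_i p_i^q) / (q - 1), which recovers Shannon's -sum_i p_i ln p_i as q -> 1. A quick numerical check:

import numpy as np

def shannon(p):
    p = np.asarray(p, dtype=float)
    return -np.sum(p * np.log(p))

def tsallis(p, q):
    p = np.asarray(p, dtype=float)
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

p = [0.5, 0.3, 0.2]
for q in (2.0, 1.1, 1.01, 1.001):
    print(f"q = {q}: S_q = {tsallis(p, q):.4f}")
print(f"Shannon: {shannon(p):.4f}")   # Tsallis values approach this as q -> 1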




This thread has been archived and is closed to new comments