“race scientists” and neo-Nazis
February 27, 2022 1:58 PM   Subscribe

 
I’d be a lot more excited about this if it weren’t common knowledge that

1) Fisher was a total asshole
2) Pearson and Galton, like most of their peers, applied statistics to genetics in pursuit of the same sort of husbandry we see in livestock, and
3) Nautilus was notorious for stiffing writers.

From an (alleged) scientist: don’t put people on pedestals. It just makes a huge mess when they inevitably fall off. Aside from a handful of pioneers (mostly women), the majority of scientists are at least as flawed as any random citizen of their times.
posted by apathy at 3:08 PM on February 27, 2022 [8 favorites]


I knew there was a reason I preferred the Spearman correlation coefficient - googles Charles Spearman...
posted by piyushnz at 3:40 PM on February 27, 2022


Here, as is often the case in applied math, it's tremendously important to distinguish statistics as a topic of mathematical knowledge from statistics as a collection of cultural beliefs and practices. This distinction must be emphasized, over and over again, because pseudoscientists, political operatives, tech industry utopians, and academics with regressive ideologies all tend to play a bait-and-switch game. They love to argue that because the former sense of "statistics" includes many unassailable truths in the form of powerful theorems, the latter sense of "statistics" must likewise be not a matter of belief but of self-evident fact. This rhetorical trick works alarmingly well on most people, especially since the people pulling this maneuver are often quite statistically naive themselves and simply take on faith that the math says whatever is convenient for their beliefs. Charles Murray is a particularly egregious example, having spent the last 28 years resolutely ignoring calls that he look up what "multicollinearity" is and why it might be a problem for many, many of his claims.

The theorems on which statistics depends (and, more broadly, the foundations of probability theory on which those theorems rest) are, like all theorems, ideologically neutral. However, being both abstract and highly technical, those theorems also usually say much more precise things, appropriate to much narrower contexts, than most people who crunch numbers for a living realize. As such, the intuitive, language- and belief-mediated way most people tend to think about statistics is a minefield laden with exactly the kind of prejudice and narrow-mindedness this article calls attention to. Having a statistical technique for dealing with binary categories does not make those categories a valid interpretation of the world, for example. Number crunchers therefore owe it to themselves and to those whose data they handle to really reflect on all the baggage that surrounds their community's "best statistical practices." A lot of baggage that appears at first glance to be a pragmatic simplification to facilitate computation reveals itself, upon closer inspection, to be motivated by dangerous assumptions about how the world works (and how those beliefs about the world should be instantiated in the abstract world of mathematics).
posted by belarius at 3:43 PM on February 27, 2022 [27 favorites]
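[The multicollinearity point above can be made concrete with a small simulation -- a sketch of my own, not from the article or the comment. When two predictors are nearly collinear, the variance inflation factor (VIF) for their coefficients explodes, which means the individual coefficient estimates, and any story told about them, are extremely unstable even when the overall fit looks good.]

```python
import random

def corr(xs, ys):
    """Pearson correlation coefficient, computed by hand."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(0)
n = 1000
x1 = [random.gauss(0, 1) for _ in range(n)]    # first predictor
x2 = [a + random.gauss(0, 0.1) for a in x1]    # nearly a copy of x1

r = corr(x1, x2)
# Variance inflation factor for either predictor in a two-predictor
# regression: VIF = 1 / (1 - r^2). As |r| -> 1 it blows up, meaning the
# model cannot tell the two predictors' effects apart, however good the
# combined fit may look.
vif = 1 / (1 - r ** 2)
print(f"r = {r:.3f}, VIF = {vif:.0f}")
```

[With noise this small the correlation comes out around 0.99 and the VIF in the dozens to hundreds: the coefficients on x1 and x2 individually are close to meaningless, which is the problem Murray's critics keep pointing at.]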


R has a lot to answer for.
posted by Jessica Savitch's Coke Spoon at 3:45 PM on February 27, 2022 [3 favorites]


OTOH, we can view this as one more weapon in the war on frequentist statistics.
posted by kaibutsu at 4:01 PM on February 27, 2022 [5 favorites]


"lies, damned lies, and collections of cultural beliefs and practices" has a much better meter to it.
posted by NoThisIsPatrick at 4:28 PM on February 27, 2022 [11 favorites]


I tell myself I’ve plumbed the depths of such awful stuff, but when I read an article like this I always seem to encounter a new low I hadn’t anticipated:
Pearson had extreme, racist political views, and eugenics provided a language to argue for those positions. In 1900, he gave an address called “National Life from the Standpoint of Science,” in which he said, “My view—and I think it may be called the scientific view of a nation—is that of an organized whole, kept up to a high pitch of internal efficiency by insuring that its numbers are substantially recruited from the better stocks … and kept up to a high pitch of external efficiency by contest, chiefly by way of war with inferior races.” According to Pearson, conflict between races was inevitable and desirable because it helped weed out the bad stock. As he put it, “History shows me one way, and one way only, in which a high state of civilization has been produced, namely the struggle of race with race, and the survival of the physically and mentally fitter race.”
We can see the roots of White Supremacy's fears of 'replacement' here. White Supremacists are terrified of replacement because their most fundamental motive and raison d'être is an active campaign of genocide and replacement.
posted by jamjam at 6:03 PM on February 27, 2022 [7 favorites]


This is fascinating. I had no idea of this, despite being a data scientist and using these tools on a day to day basis. Believe it or not, I have never once seen their racism mentioned in a statistics textbook - even the ones that mention how the term "regression" came from looking at human heights over generations.

As for whether the statistical tools created by these men - which we continue to apply today - have been influenced by their evil philosophy: of course they have. As a previous comment said, there's a difference between a mathematical truth and a best-practice, and the entire field of null hypothesis testing falls into the latter category. I'd say there's a good case to be made for throwing it out entirely and not teaching it again except as a historical cautionary tale.
posted by splitpeasoup at 6:20 PM on February 27, 2022 [3 favorites]


It always surprises me when younger folks have been given the impression that eugenics was some kind of fringe thing and not the horrifyingly mainstream thing it was (and could be again).
posted by The Underpants Monster at 9:35 PM on February 27, 2022 [6 favorites]


As well as being willfully wrong [Mefi Llama-Lime 2012] about the association between smoking and lung cancer(*), Fisher was wrong about the efficacy of sterilisation [or death camps, indeed] in improving any racial stock. His colleague and rival JBS "Red" Haldane wrote a classic paper showing that a) most genetic defects are recessive; b) by definition they are rare; and c) Hardy-Weinberg showed that in such cases almost all the defective genes reside in 'carriers', where a 'good' gene masks, or compensates for, its defective partner. Haldane went on to show that eliminating all the double-defective gene cases would have negligible effect on the prevalence of the gene in the population, and that it would take hundreds of generations to appreciably reduce its frequency. We are less than 100 generations from Caesar Augustus and Pontius Pilate. It's hard being perfect.
(*) Fisher died of colon cancer, which is also associated with smoking.
posted by BobTheScientist at 3:20 AM on February 28, 2022 [1 favorite]
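[Haldane's arithmetic is easy to check. Under complete selection against the recessive homozygote (no affected individual reproduces) and random mating, the textbook recurrence for the allele frequency q is q' = q / (1 + q), which gives q_t = q0 / (1 + t*q0) after t generations. A quick sketch -- the starting frequency is my illustrative choice:]

```python
# Allele frequency of a recessive gene under *complete* selection against
# affected (aa) homozygotes, assuming random mating and Hardy-Weinberg
# proportions each generation. Standard recurrence: q' = q / (1 + q).
def next_generation(q: float) -> float:
    return q / (1 + q)

q = q0 = 0.01          # a rare recessive: about 1 affected birth in 10,000
for _ in range(100):   # a hundred generations of ruthless "selection"
    q = next_generation(q)

# Closed form agrees: q0 / (1 + 100 * q0) = 0.005.
print(f"after 100 generations: q = {q:.4f}")
```

[A hundred generations of total "elimination" only halves the frequency of a rare recessive allele, because most copies hide in unaffected carriers -- which was exactly Haldane's point.]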


The article does a good job of laying out the racist and eugenicist views of the three target figures. There's a lot more to say (in a bad way) about Fisher's social and political views. But the (causal?) connection between frequentist methods (which ones, exactly?) and racist-eugenicist attitudes does not seem to me to be very well defended. One reason that it doesn't look very plausible to me is that other late-19th and early-20th century frequentists were definitely not racist eugenicists. For example, Jerzy Neyman and Hans Reichenbach. Here's Daniel Lakens (a better authority than me) going through the details for Neyman.

In terms of philosophy of statistics, the claim of a real connection between statistical methods and political agendas also doesn't make much sense. The idea is alleged to be that the racists wanted their "results" to be (or to be seen as) objective. But anyone who actually worked with frequentist methods would know that they are shot through with values and judgment calls -- including but not limited to how much you care about different kinds of error, what kind of error matters in practical terms, how you connect your (real) scientific theory to the narrow statistical hypotheses that you want to test in individual experiments, and how you go about picking your statistical models in the first place. On the other side of the statistical debate, suppose we accept (dubious though it is) that Bayesian methods are not objective or even trying to appear objective. I suppose this is because the Bayesians have to dream up a "subjective" prior probability and be explicit about it. But then, are we supposed to believe that Harold Jeffreys wanted physics to be seen as less objective, since he developed Bayesian statistical methods for application in physics at around the same time that Fisher was working in genetics? What should we make of the fact that Bruno De Finetti (1906-1985) -- perhaps the person most associated with the personalist Bayesian approach to probability -- was an outright fascist in his youth before moving to Christian socialism and later in life to a more radical left-wing position? Did his shifting political views somehow cause him to adopt a statistical framework that emphasized updating degrees of belief on one's evidence?

It seems to me that a fair test of the thesis would have to look at all four of the main possible boxes: frequentist & racist; frequentist & not-racist; Bayesian & racist; and Bayesian & not-racist. Exactly how the counts are done and what statistics we ought to use to assess the contingency table seems ... fraught ... in this case. But my conjecture is that if one were to do the work honestly and make predictions beforehand -- including being clear about the proposed analysis -- one wouldn't find much to support the article's thesis.
posted by Jonathan Livengood at 6:39 AM on February 28, 2022 [5 favorites]


But anyone who actually worked with frequentist methods would know that they are shot through with values and judgment calls --

Well of course those of us who have studied statistics know that, but the point of frequentist statistics seems to be to obscure those assumptions for the general audience. Other statistical schools make the assumptions explicit, sometimes mathematically explicit.

It's a pretty regular strategy of white supremacists to hide, and pretend that they are something else. Using frequentist statistics also allows a researcher to obscure their assumptions, which is convenient for supremacists.
posted by eustatic at 8:41 AM on February 28, 2022 [1 favorite]


It always surprises me when younger folks have been given the impression that eugenics was some kind of fringe thing and not the horrifyingly mainstream thing it was (and could be again).

The circuit breaker for me, back in my last semester of university (aged 23, in 2012), was when I learned that the journal now known as the Annals of Human Genetics was previously named the Annals of Eugenics.

Prior beliefs just dissolved from that...
posted by JoeXIII007 at 9:38 AM on February 28, 2022 [3 favorites]


It's a pretty regular strategy of white supremacists to hide, and pretend that they are something else. Using frequentist statistics also allows a researcher to obscure their assumptions, which is convenient for supremacists.

I definitely agree with the first part. But I'm not convinced that Bayesian statistics are intrinsically any more resistant to obscurantism. Mendacious (or sometimes just badly-trained) Bayesians might appeal to "conventionally accepted" reference priors without really justifying them; or they might use conjugate priors for the explicitly stated reason that they're mathematically more tractable, but with bad intentions behind the scenes; or they might fail to do any number of posterior predictive checks with their models -- things that non-practitioners probably don't know to be worried about; or they might handwave the likelihood term; or they might misrepresent the various merger-of-opinion theorems when assuring their audience that the result is reasonably "objective"; or ...

If a statistician wants to lie with and/or obscure their true motives behind some sophisticated mathematical machinery, they can do it with Bayesian stats, frequentist stats, likelihoodist stats, or whatever else you like. Especially when dealing with laypeople, the imposing mathematics itself can be used as a cover.

At least, that is my current conjecture.
posted by Jonathan Livengood at 9:43 AM on February 28, 2022 [3 favorites]
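[To make the prior-choice point concrete, here's a minimal Beta-Binomial sketch -- the data and the three candidate priors are my illustrative choices, not anything from the thread. With only a handful of observations, the "conventional" prior you reach for visibly moves the posterior, and that choice is exactly the kind of judgment call that can be buried in a methods section.]

```python
# Conjugate Beta-Binomial updating: prior Beta(a, b) plus k successes in
# n trials gives posterior Beta(a + k, b + n - k), whose mean is
# (a + k) / (a + b + n). With small n, different "standard" priors give
# noticeably different answers.
def posterior_mean(a: float, b: float, k: int, n: int) -> float:
    return (a + k) / (a + b + n)

k, n = 7, 10  # observed: 7 successes in 10 trials

priors = {
    "uniform Beta(1, 1)":      (1.0, 1.0),
    "Jeffreys Beta(1/2, 1/2)": (0.5, 0.5),
    "skeptical Beta(20, 20)":  (20.0, 20.0),
}
for name, (a, b) in priors.items():
    print(f"{name}: posterior mean = {posterior_mean(a, b, k, n):.3f}")
```

[Here the three "respectable" priors put the posterior mean at roughly 0.67, 0.68, and 0.54 respectively for the same data -- none of them is wrong, but reporting only one of them, without justification, is the same kind of buried judgment call as an unstated frequentist modeling choice.]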


Well of course those of us who have studied statistics know that, but the point of frequentist statistics seems to be to obscure those assumptions for the general audience.

Grandma Underpants always said, "Figures don't lie, but liars sure can figure."
posted by The Underpants Monster at 10:40 AM on February 28, 2022 [3 favorites]


For a concrete and very current instance of the still-ongoing Bayesian / Frequentist wars, Aubrey Clayton (the author of this piece) just had an op-ed in the Times about child vaccination. The entire last two years have been a great illustration of just how badly frequentist reasoning can go when you need to make life-and-death decisions quickly under massive uncertainty.
posted by chortly at 1:31 PM on March 2, 2022


Steven Hayes, the co-creator of Acceptance and Commitment Therapy (prev), recently delivered a lecture that speaks to these topics and more, arguing for a reconceptualization of how we use statistical models in psychological research: The End of Normal.
posted by soonertbone at 10:28 AM on March 4, 2022



