A general theory of individuality
May 5, 2010 10:05 AM   Subscribe

We Need a General Theory of Individuality : "One of the unspoken secrets in basic scientific research, from anthropology to zoology (with intervening stops at physiology, political science, psychology, psychiatry, and sociology) is that, nearly always, individuals turn out to be different from one another, and that—to an extent rarely admitted and virtually never pursued—scientific generalizations tend to hush up those differences"
posted by dhruva (75 comments total) 16 users marked this as a favorite
 
Sounds like a great way to bring Randroids out of the woodwork.
posted by mccarty.tim at 10:10 AM on May 5, 2010


I think the main problem here is domain error. You can have a general theory of uniqueness (in other words a general theory that describes the range and distribution of variation, and the factors that lead to increase or decrease in variety). Of course the individual unique object cannot itself be the subject of this knowledge (that is not what science is for), but we can use our general knowledge of how this uniqueness works to inform judgments about the individual case.
posted by idiopath at 10:15 AM on May 5, 2010 [1 favorite]


needs the Special Snowflake tag.
posted by hippybear at 10:18 AM on May 5, 2010 [3 favorites]


scientific generalizations tend to hush up those differences

Unspoken secret? Rarely admitted and never pursued? Hush up?

Scientific generalizations are supposed to find commonalities across individual instances. That's the point.
posted by DU at 10:20 AM on May 5, 2010 [22 favorites]


What a gigantic goddamn dishonest strawman of an article. What's next, an article about how scientists are all emotionless robots who need to allow for the possibilities of life and love in their personal lives?
posted by Pope Guilty at 10:21 AM on May 5, 2010 [6 favorites]


I agree with DU. It's so obvious as to be a non-issue.
posted by Brent Parker at 10:23 AM on May 5, 2010


photons too! cf. people as particles :P
posted by kliuless at 10:23 AM on May 5, 2010


What's next, an article about how scientists are all emotionless robots who need to allow for the possibilities of life and love in their personal lives?

... or at least an acknowledgment of Intelligent Design.
posted by philip-random at 10:25 AM on May 5, 2010


See, these scientists think like this, these scientists think like this.
posted by iamkimiam at 10:25 AM on May 5, 2010 [1 favorite]




The article just keeps walking around and not looking at the glaringly obvious: it's nearly impossible to create down-to-the-particle duplicates in organisms of staggering complexity and in the neighborhood of a trillion trillion atoms. A stray ion, an errant methylation, a random hormonal flux, any one of these alters development.

Non-uniqueness is hard for anything much bigger than a virus.
posted by adipocere at 10:28 AM on May 5, 2010 [1 favorite]


I am very much in favor of this line of thinking. When I was on campus a few years ago, I did a psychological test that was intended to measure perceptions of value -- how much money one thought a coffee cup was worth in different situations. What the test had no ability to measure was that I walked in, decided the test was abysmally silly (and was in kind of a bad mood anyway due to, actually, not having had any coffee that morning) and decided to make an outlier. Talking about it later, I ran across a friend of mine who did the test because he wanted to buy something from a vending machine with no dollar acceptor, and tried to optimize his chances of getting a 75 cent reward without getting a dollar bill (he missed, in the end, and had to ask for his reward in change, which he considered 'losing' the 'game').

In those particular cases, and in many others, the biological structures (if such things exist) that assess value were entirely dormant; it was my individual personality (such as it is) at work -- which is why a lot of the behavioral 'research' I read in the paper strikes me as meaningless averages of behaviors with wildly different individual causes. Biologists and sociobiologists might oppose all this on principle, but there's no need -- a theory of individuality wouldn't invalidate good research, it would just provide another means by which to test the increasingly outlandish claims that we see in the science section of the local paper.

On preview, I agree with you folks saying that this is all quite obvious, but I don't think it's obvious to everyone. Having a systematic way to talk about the fact that people are actually unique might rein in some of the excesses of contemporary pop research, and it certainly doesn't hurt anybody.
posted by Valet at 10:34 AM on May 5, 2010 [2 favorites]


Sounds to me like the Front Page blurb pretty much covers it. There's not really any need to talk about hushing up, because like DU says, duh.

What's next on the agenda?
posted by Caduceus at 10:39 AM on May 5, 2010


More seriously, I feel like the author is taking nerdview here, to an extreme. It's not like scientists are literally ignoring variability, but rather, they need a starting point, so we can actually talk about things generally, as well as in specifics. It'd be like if we stopped using words because we couldn't agree on exactly what each one means. Or if I said, "no, this is not a set of mugs, because see, each one is different." We pick the level of specificity or generality we need based on the idea we're trying to convey.

Also, I think somebody somewhere said something about identity once. Maybe even published a paper.
posted by iamkimiam at 10:40 AM on May 5, 2010 [7 favorites]


From the article: good doctors know that individual Homo sapiens may, for example, develop tuberculosis without fever, or idiosyncratic unresponsiveness or hyper-responsiveness to certain drugs. That is why The New England Journal of Medicine and most medical-specialty journals devote considerable space to individual case reports, something rarely found in other sciences

Doctors are not scientists. They are practitioners of medicine. They are more like car mechanics than physicists.
posted by Pastabagel at 10:52 AM on May 5, 2010 [2 favorites]


In the US, our whole social system, culture, and the basic mechanics of our "enterprise" system elevate individuality to the highest possible status. Individuality is the Soma we slip into our kids' oatmeal, the ideological carrot we dangle in front of folk to keep them running, competing, and clawing at one another's backs. The pursuit of "individual liberty" and "individual accomplishment" and your own "individual home" and "individual car" and "individual-serving yogurt treats" and "Your YOUnique YOUness" has left us with a country of opera singers, everyone a-chirping "Me, me, me, me."

But I guess we could use positivist reasoning and a cute tagline to say it's a neglected idea if you want to, Mr. Barash.
posted by ford and the prefects at 10:55 AM on May 5, 2010 [14 favorites]


He's saying that scientists would do well to learn from doctors' methodologies.
posted by polymodus at 10:55 AM on May 5, 2010


they need a starting point, so we can actually talk about things generally, as well as in specifics

That's true -- but each different kind of starting point will create different results. Assume that people are basically similar, you'll get one kind of result. Assume that the genders are basically similar to themselves and different from each other, you'll get a different one. Understanding -- and testing, and discussing -- the starting point is really important. I agree with you that the conspiracy stuff is pretty dumb. But oh my God do we need a paragraph about individual variation in every story about how babies prefer faces of their own race.

It'd be like if we stopped using words because we couldn't agree on exactly what each one means.

I think the point (of the article, as well as of actual language) is not to stop using words, but to use them with the understanding that we can't agree on exactly what they mean, and to examine the ways in which they accrue meaning. That was the revolution in literary criticism in the 60s-90s, and what we learned about how power affects language, I think, was really valuable. We didn't regress to grunting and pointing as a result, either.
posted by Valet at 10:59 AM on May 5, 2010 [1 favorite]


I think the point (of the article, as well as of actual language) is not to stop using words, but to use them with the understanding that we can't agree on exactly what they mean, and to examine the ways in which they accrue meaning. That was the revolution in literary criticism in the 60s-90s, and what we learned about how power affects language, I think, was really valuable. We didn't regress to grunting and pointing as a result, either.

I would argue that the end result of postmodernism is in fact grunting and pointing by a different name.
posted by Pope Guilty at 11:00 AM on May 5, 2010 [1 favorite]


PG: If you took it to its logical extreme, perhaps it would. But nobody's saying 'insist on perfect individuality and cease all scientific research that creates averages' -- the article essentially wants to pose the question about what we can know about groups considering what we know about differences within groups. I'm not getting why that question seems objectionable. Obvious, maybe, but not objectionable.
posted by Valet at 11:04 AM on May 5, 2010


The author is asking a pretty basic and profound question. Why does genetic individuality/diversity exist? What is its role in evolution?

I had a high-school science teacher who once asserted/believed that (essentially due to globalization), humans would converge towards the same skin color. I knew this was false but didn't have a good explanation why.

The author is not saying that scientists should stop making generalizations, etc.; that's a wrong reading of the article.
posted by polymodus at 11:05 AM on May 5, 2010


I would argue that the end result of postmodernism is in fact grunting and pointing by a different name.

Well, I'm a lot more careful about use of the word "Lady" now, and it's mostly served me well.
posted by philip-random at 11:08 AM on May 5, 2010 [2 favorites]


Chaos theory covers it.
posted by jimmythefish at 11:57 AM on May 5, 2010


Doctors are not scientists. They are practitioners of medicine. They are more like car mechanics than physicists.

I think this view, the familiar 19th century view that theoretical physics is the model of what science "really" is, is an unnecessarily reductive and demonstrably false view of science. I actually think a good argument can be made that both doctors and car mechanics are working scientists of a sort, albeit scientists who specialize in applied techniques based on highly sophisticated technical knowledge and skills (that have been gathered over time from the practice of their respective traditions). What irks me is the scientistic and ahistorical microphilia implied in the suggestion that only theoretical physicists really deal with the universal laws of what there is, and that everything else (biology, for instance) is somehow merely trading in superficial knowledge.
posted by HP LaserJet P10006 at 12:02 PM on May 5, 2010 [2 favorites]


Regarding the FPP: I think philosophy has been struggling with this question, the question of the ontological status of the individual, for several thousand years (for a modern example see for instance this justly famous book, first published in 1959), and it is to philosophy that one must turn to adequately address the question.
posted by HP LaserJet P10006 at 12:10 PM on May 5, 2010 [1 favorite]


Yeah, but see, here's the thing:

High school basic physics class. We're all doing various probability exercises and noting the results. One of them is this little hand-held pachinko thingie with lots of little ball bearings in it. Turn it over, let the bearings roll to the top, right it, and they all fall plink-plink down the staggered pegs to land in the slots at the bottom in a lovely little bell curve, the physical demonstration of which was the point of the exercise.

Only I flip mine and right it and plink-plink-plink, I get an unmistakable reverse bell curve: bearings stacked high on either end, falling down to a couple of middle slots with only one or two bearings each.

Um, I said, calling the teacher over. What do I do with this?

He looked at it, looked at me with an eyebrow cocked, picked it up, flipped it over, righted it, plink-plink. There was a lovely normal bell curve just as was expected.

There, he said. Fixed it for you.
posted by kipmanley at 12:13 PM on May 5, 2010 [5 favorites]
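
The pachinko toy kipmanley describes is a Galton board: each bearing takes an independent left/right bounce at every peg row, so the slot counts come out binomial, i.e. roughly bell-shaped. A minimal sketch of the demonstration in Python (numpy assumed; the bearing and peg counts are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

n_bearings = 200   # ball bearings in the toy
n_peg_rows = 12    # rows of staggered pegs

# Each bearing bounces left (0) or right (1) at every peg row;
# its final slot is just its count of rightward bounces.
slots = rng.integers(0, 2, size=(n_bearings, n_peg_rows)).sum(axis=1)

# Tallying bearings per slot approximates a binomial distribution --
# the bell curve the exercise was meant to demonstrate.
counts = np.bincount(slots, minlength=n_peg_rows + 1)
for slot, c in enumerate(counts):
    print(f"slot {slot:2d}: {'*' * c}")
```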


I actually think a good argument can be made that both doctors and car mechanics are working scientists of a sort, albeit scientists who specialize in applied techniques based on highly sophisticated and technical knowledge skills (that have been gathered over time from the practice of their respective traditions).

No. There's a reason physicians and others refer to themselves as 'practitioners' instead of 'theorists'. There's a clear separation between professional practitioners and researchers in the medical field. Practitioners don't really function as scientists - they rely on the science to make diagnoses. All good science has an element of the theoretical - it's a necessary step in the scientific method. But any use of the scientific method in making a diagnosis is generally off-the-cuff, and almost always within the framework of an accepted practice and knowledge base.

While they all have a solid foundation in the sciences and are all scientists in their own right (usually biologists), their practice doesn't directly lend itself to frequent contribution to the academy. They're too busy practicing. If anything interesting or unusual comes up, the scientific method can aid in a diagnosis, but the science is ultimately performed by researchers.
posted by jimmythefish at 12:18 PM on May 5, 2010


I agree that there is a case to be made for a more comprehensive study of "individuality". Why is the biosphere speciated in such diversity? Why are individuals of a species not identical or very nearly so? Why hasn't evolution created an ideal creature, or even seemed to be converging on such an ideal? Is it constant environmental change preventing the convergence to an ideal?
We know that random changes to DNA over millions of years lead to evolution and speciation, but why the broad inter- and intra-species divergence even in somewhat constant environments? Maybe this has already been studied or is obvious, but I don't recall a solid theory that explains bio-diversity. I understand how species diverge genetically, but I don't understand why things don't converge over long periods of time to a small, finite set of "best".

Sorry to ask so many questions, but I don't recall this being explained very well in any of the classes I took.

On preview: Chaos theory doesn't explain it. Chaos theory provides a model of interesting behaviors in boundary conditions which bio-diversity would seem to (maybe) fit. But why would chaos theory apply and not some other theory? In other words, what's the forcing function that forces the chaotic boundary (assuming bio-diversity can be modeled by chaos theory)?

On second preview regarding bell curves: If evolution can selectively add or remove pachinko pins for the "best" performing balls, it would seem that you would eventually get nearly all the balls falling in nearly the exact same spot. But what you see in reality is that there is a widening rather than a narrowing. That seems counter-intuitive to me.

Now I admit I'm assuming that intra-species difference follows from the same reason that inter-species diversity exists, that may be mistaken, but I still think that determining what drives things towards divergence rather than convergence in biology could be useful (if it hasn't already been done and I missed that day of class).
posted by forforf at 12:22 PM on May 5, 2010 [1 favorite]
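
One standard answer to forforf's widening-versus-narrowing question is mutation-selection balance: stabilizing selection narrows the trait distribution every generation while mutation widens it, so the population settles at an equilibrium spread rather than collapsing to a single "best" point. A toy simulation of that idea in Python (numpy assumed; every parameter value here is an arbitrary choice for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

pop_size = 1000
pop = rng.normal(0.0, 1.0, size=pop_size)  # trait values
mutation_sd = 0.3                          # the widening force per generation
optimum = 0.0

for _ in range(200):
    # Stabilizing selection: survival probability falls off with
    # distance from the optimum (the narrowing force).
    fitness = np.exp(-0.5 * (pop - optimum) ** 2)
    survivors = pop[rng.random(pop_size) < fitness]
    # Offspring inherit a survivor's trait, plus mutation.
    pop = rng.choice(survivors, size=pop_size) + rng.normal(0.0, mutation_sd, pop_size)

# The spread settles at a mutation-selection balance; it never shrinks to zero.
print(f"equilibrium trait SD: {pop.std():.2f}")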


Doctors are not scientists. They are practitioners of medicine. They are more like car mechanics than physicists.

My wife, in med school, would be surprised to hear this.

The day-to-day practice of medicine is chock full of empirical methods. In fact, oftentimes, doctors will refer to treating patients "empirically" given certain conditions.

That's not to mention all the research that doctors do and participate in on an ongoing basis.
posted by device55 at 12:23 PM on May 5, 2010


No. There's a reason physicians and others refer to themselves as 'practitioners' instead of 'theorists'.

I understand the historical reasons why the division between practice/application on the one hand, and theory/research/experimentation on the other, exists, but I think the division is far less important than it's made out to be when one is attempting to gauge the nature and scope of scientific practice. The truth of what science is is less narrow than just "theory," and certainly less narrow than just "theories in physics."
posted by HP LaserJet P10006 at 12:24 PM on May 5, 2010


But any use of the scientific method in making a diagnosis is generally off-the-cuff, and almost always within the framework of an accepted practice and knowledge base.

A diagnosis is a hypothesis based upon observations (of symptoms), and the treatment is a test of that hypothesis.

That's science.
posted by device55 at 12:26 PM on May 5, 2010


The day-to-day practice of medicine is chock full of empirical methods. In fact, oftentimes, doctors will refer to treating patients "empirically" given certain conditions.

I work in tech support, and the same is true of my work, but I'm not calling myself a scientist.
posted by Pope Guilty at 12:27 PM on May 5, 2010


I work in tech support, and the same is true of my work, but I'm not calling myself a scientist.

But "computer scientists" do, and maybe nurses don't. But the discussion from my POV is not really about fixing the word "scientist" so that it fulfills everyone's own personal tastes, but rather looking at the history of how we view science the way we do: often we forget the ways in which technology and medical knowledge are intertwined in this history. The view that theoretical physics is the model of what science is, is actually a recent development, and to my mind it's unnecessarily narrow.
posted by HP LaserJet P10006 at 12:34 PM on May 5, 2010


Look, all this "this is obvious" obviousness misses part of the point of the methodologies of the social sciences - they may not account for your specific individuality, but within the statistical margins they can reconcile your personality with others' behaviors in a way that is statistically valuable and prescriptive, independently of your or others' motivations. Indeed, these tests actually don't care about our motivations, and so the belief that you somehow gamed the system ("I made an outlier") misses the point, since the game isn't at the level of motives but rather at the level of behaviors, and so all you did - theoretically - was to help define the data set at the highest standard deviations. And one of the things you'll learn when you do these sorts of statistics is to avoid something called the individuation fallacy (there are a number of names for this, depending on field and methodological influence) - the belief that the results tell us anything about individuals, rather than about the collective figure that is the average individual and its figural cousins, the average+1 standard deviation individual, +2 SD, and so on.

Not that I agree with this methodologically; I'm just saying that the "these tests don't count my snowflake-like uniqueness and everyone knows it" objection is, indeed, already known, and accounted for through the math.

The better response is to argue that, on its own merits, the math is simply insufficient to the task and that the conceit of taking a series of discrete data points, making of them a set, and looking for commonalities (the production of the average individual) is, mathematically speaking, far too simplistic, and that a more complex thinking of individuals is necessary for successful modeling. Check out R. Keith Sawyer's Social Emergence for a more thorough discussion of the problems here and the importance of using network theory, complexity theory, and agent-based modeling to address the insufficiencies of the model.

Or, to offer a quick and common example: Imagine a concert in a closed concert hall. There's good seats, mediocre seats, a Mezzanine, some side balconies, an upper balcony. Etc. Performance ends, and it's good, very good. Will it get a standing ovation? Well traditional social science would provide us with something like the likelihood of a standing ovation based on a relatively limited set of variables based around what the average person might do, in its most simplistic form: if quality_of_performance > average_minimum_quality_for_a_standing_ovation, then we might get a standing O. But the quality varies from person to person, so we could predict using traditional social sciences, based on past research, that for the average individual attending this type of concert, the minimum threshold is an X, so if the quality of the performance is > or = to X, then we should expect a standing O. We might even use some sort of regression to determine likelihood across more specific demographic qualities - the average white male, the average black female, the average child under 13, the average conservative voter who attends concerts, whatever. In all of these instances, we're still dealing with the "Average Individual" conceit, we're just refining and multiplying those individuals based on different identity metrics.

But there is a complication, one obvious to anyone who has ever been in this concert scenario: standing Os don't usually just appear at once, they usually build through a series of waves of different groups standing. We could model this using the average individual conceit by making a more complicated formula: if quality > minimum O threshold, then stand and if # of people standing > minimum number for sufficient standing O peer pressure, then stand. In each case we could try to determine the average numbers for Average Individuals in different identity categories. And the two number sets probably interrelate - it may be that there is a minimal quality threshold for triggering the peer pressure with a sufficient number of people - in other words, if you think it sucked, it may take a lot more people standing to make you feel compelled to stand. If you thought it was the. awesome. incarnate. then it may take just a few others who appear to agree with you to get you on your feet.

But we're still running into a basic problem, namely that the agents in this scenario aren't average - not because they aren't regular joes and janes and whatevers, but because they are all differentiated from each other by something called positionality - they're all in different seats, and that means they're all relating to each other in space. The individuals in the best seats, the ones closest to the stage, don't really see the people behind them, so it doesn't matter how many folks jump to their feet in the upper balcony - it won't influence the folks in the front rows much at all. At the same time, what these front row peeps do (stand/not stand) has a very high degree of influence because people all over the concert hall can see the front row's standing ovation or its lack thereof. So we would say that these front-row folks have high influence (over others' perceptions/behaviors) but little knowledge (of others' perceptions/behaviors). And we could subsequently try to consider the role that positionality plays section by section, and row by row within a section, and only then would we begin to get a model of what that standing O actually looks like, mathematically. But the starting point for doing this is the dismissal of what is the kernel of the more conventional social science model - the belief that we can and should aggregate individual data contributions and take and refine their means in order to produce an average individual about which we can make predictions. The network model treats the social outing as a system of positionally related (and thus constituted) agents, rather than as individuals statistically constituted by their proximity to the average of all of those individuals combined. The network model is way more difficult to do.

But once you think about it, it can be hard to have a lot of confidence in the more conventional social science model, because the same issues of positionality, influence, peer knowledge, agent-decision trees, system adaptations, and so on, all appear in most areas being studied. So, point being, it's not the "But I'm a snowflake" objection that calls these social science projects into question, but the insufficiency of the method to account for how any individual, snowflake like or not, relates to the actions of other individuals and the perceptions and knowledges those actions create.

That being said, I'm still much more of a poststructuralist in inclination. Just sayin'.
posted by hank_14 at 12:34 PM on May 5, 2010 [7 favorites]
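
A rough sketch of the positionality model hank_14 describes, as an agent-based simulation in Python (numpy assumed): each seat gets its own quality threshold, and peer pressure propagates only from the rows a person can see in front of them. This is one possible reading for illustration, not hank_14's actual model; the grid size, threshold range, and pressure weight are all invented:

```python
import numpy as np

rng = np.random.default_rng(2)

rows, cols = 20, 30    # seating grid; row 0 is the front row
quality = 0.7          # tonight's performance, on a 0-1 scale
# Each person has their own minimum quality for standing.
thresholds = rng.uniform(0.4, 1.0, size=(rows, cols))
standing = quality > thresholds    # first wave: quality alone

# Peer pressure propagates only from rows a person can see in front,
# so front rows have high influence but little knowledge of the hall.
for _ in range(10):
    share_visible = np.array(
        [standing[:r].mean() if r > 0 else 0.0 for r in range(rows)]
    )
    # Seeing others stand lowers the effective bar for standing.
    pressure = 0.3 * share_visible[:, None]
    standing = standing | (quality + pressure > thresholds)

print(f"share standing after peer pressure: {standing.mean():.0%}")
```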


I work in tech support, and the same is true of my work, but I'm not calling myself a scientist.

Of course you don't. You don't have 8 years or so of science training and education to back up that claim.

Day-to-day work in medicine may well be described as 'applied science', but so is the lab work done for any research project.
posted by device55 at 12:34 PM on May 5, 2010


Btw, kipmanley, I'm stealing that story. Awesome.
posted by hank_14 at 12:40 PM on May 5, 2010


I work in tech support, and the same is true of my work, but I'm not calling myself a scientist.

But "computer scientists" do, and maybe nurses don't.


Computer scientists are not analogous to doctors.


Day-to-day work in medicine may well be described as 'applied science', but so is the lab work done for any research project.

And how many doctors are involved in research projects as opposed to simply applying that research?
posted by Pope Guilty at 12:55 PM on May 5, 2010


Thanks, dhruva, I'm really glad I got the chance to read that article.
posted by jamjam at 12:56 PM on May 5, 2010


In psychology, a common statistical approach is analysis of variance. Whatever the "cells" that define your design (within- and between-subject factors), for each cell you have summary data points for each of a number of subjects. The analysis then looks at whether the experimental factors account for more of the variance in the data than would be expected due to chance alone. It might compare multiple levels of a factor to see which differ, but that's often it. The variance not accounted for by the factors is termed error variance.

Much research is conducted in situations where individual differences dwarf any and all experimental manipulations, and statistical techniques are designed to cope with this in order to determine whether the investigated effects are present. Note that for an effect to be present, it only needs to rise above chance (or for the more stringent, have a certain average effect size) -- but this can easily be due to only half of your subjects. By no means would that invalidate an effect, but it should inform the interpretation.

The point of the article is that we should be asking why we see the ubiquitous individual differences we so often set aside to do our stats. It's not clear to me that there is a general theory of individuality to be found, but I'm sure there is a lot to be scientifically gained from paying closer attention to systematic individual differences and perhaps finding reasons for those differences.
posted by parudox at 1:18 PM on May 5, 2010 [1 favorite]
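
To make parudox's point concrete: when individual differences dwarf the manipulation, an analysis that dumps them into error variance can miss an effect that a within-subject analysis finds easily. A minimal illustration in Python (numpy and scipy assumed; the effect size and variance numbers are made up):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

n_subjects = 30
# Individual differences dwarf the manipulation: each subject carries
# a large personal offset, while the experimental effect is small.
subject_offset = rng.normal(0.0, 10.0, size=n_subjects)
effect = 2.0
cond_a = subject_offset + rng.normal(0.0, 1.0, n_subjects)
cond_b = subject_offset + effect + rng.normal(0.0, 1.0, n_subjects)

# Treating the conditions as independent groups leaves the individual
# differences in the error variance and drowns the effect...
f, p = stats.f_oneway(cond_a, cond_b)
print(f"between-subjects ANOVA: F = {f:.2f}, p = {p:.3f}")

# ...while a within-subject (paired) analysis removes that variance
# and recovers the effect easily.
t, p = stats.ttest_rel(cond_a, cond_b)
print(f"within-subject test:    t = {t:.2f}, p = {p:.2g}")
```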


It is more useful for me to know the temperature of my bathwater than the fact that one of the individual water molecules is, specifically, doing a miniature foxtrot, although the latter fact would not be without some interest.
posted by Wolfdog at 1:27 PM on May 5, 2010


A diagnosis is a hypothesis based upon observations (of symptoms), and the treatment is a test of that hypothesis.

That's science.


You're confusing the application of science with being a scientist. Anyone can use and apply science and the scientific method in their job. We all do it from time to time. But, we're not all scientists. A scientist conducts research, collects data, performs experiments etc. for the sake of contributing to the academy - to the body of knowledge in their respective field. A practitioner does not do this. Doctors do not do this. They practice medicine to keep their patients healthy. It's possible that their work aids the science from time to time, sure, and they certainly use the science, but science is not the primary endeavour.
posted by jimmythefish at 1:33 PM on May 5, 2010 [1 favorite]


A scientist conducts research, collects data, performs experiments etc. for the sake of contributing to the academy - to the body of knowledge in their respective field. A practitioner does not do this. Doctors do not do this.

This is just silly. Someone working in R&D at DuPont, with a PhD in Chemistry, is not running lab tests "for the sake of contributing to the academy," but to capture data on certain pharmaceutical combinations, etc. And yet no one is denying such a researcher, employed by industry, is doing science. And yet an MD who conducts tests on a patient, collects data from the tests she sends into the lab or the scans she runs on a patient, and performs medical procedures on a patient, all for the sake of contributing to the health of the patient, is somehow excluded from "science"?
posted by HP LaserJet P10006 at 1:50 PM on May 5, 2010 [1 favorite]


Your argument is based almost entirely in semantics.
posted by jimmythefish at 1:53 PM on May 5, 2010 [1 favorite]


Yes, we're all different.
posted by anigbrowl at 1:58 PM on May 5, 2010


Weird. Given that the author of the original article is a psychologist, I'm rather surprised that he seems to be unaware of the fact that psychologists have an entire subdiscipline devoted to the measurement and analysis of individual differences. It's not a new thing either: it's been going on since at least the 1940s. Whole classes of statistical techniques (e.g., factor analysis) were developed to allow scientists to precisely state what they mean when they say that "well, everyone's kinda similar but sorta different".

Any time you suspect that there are individual differences in your data, it is (or at least should be) standard practice to use hierarchical models to analyse the data, which allow you to make statements about *both* the general characteristics of the group, and the specific characteristics of each individual. It's actually not very hard to do. And in practice, it's generally trivially easy to code up models that automatically identify smartarses who think they're being clever by "inducing an outlier", using methods that do this thing called "outlier detection". After those people have been culled from the analysis, you can get down to the interesting and serious business of trying to provide proper descriptions of how the honest participants vary in their thinking.

What bugs me is that it seems like the author of the article didn't actually bother to look at standard practice in his own bloody discipline. We *do* study individual differences: it's a cool and interesting topic, but it's not a deep mystery, because we already have extremely powerful tools for studying it.
posted by mixing at 2:10 PM on May 5, 2010 [2 favorites]
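
A minimal sketch of the hierarchical idea mixing mentions: estimate each individual *and* the group at once by shrinking noisy per-subject means toward the group mean. This toy version (Python, numpy assumed) cheats by assuming the group-level parameters are known, which a real hierarchical model would itself estimate; all the numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(4)

n_subjects, n_trials = 40, 5
group_mean, group_sd = 50.0, 8.0   # how individuals vary around the group
noise_sd = 12.0                    # trial-to-trial measurement noise

true_skill = rng.normal(group_mean, group_sd, size=n_subjects)
data = true_skill[:, None] + rng.normal(0.0, noise_sd, size=(n_subjects, n_trials))
raw_means = data.mean(axis=1)

# Partial pooling: shrink each subject's noisy mean toward the group mean,
# weighted by how reliable the individual estimate is.
# (Assumes group_mean/group_sd are known; real models estimate them too.)
var_within = noise_sd**2 / n_trials
shrinkage = group_sd**2 / (group_sd**2 + var_within)
pooled = group_mean + shrinkage * (raw_means - group_mean)

def rmse(est):
    return np.sqrt(np.mean((est - true_skill) ** 2))

print(f"RMSE, per-subject means: {rmse(raw_means):.2f}")
print(f"RMSE, partially pooled:  {rmse(pooled):.2f}")
```

The pooled estimates are typically closer to the truth: the group tells you something about each individual, and the individuals tell you something about the group.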


Your argument is based almost entirely in semantics.

It may sound that way, but it's also based on some modest understanding of how the history of science is messier than we often assume: for instance, here's a book I read many years ago, which carefully unpacks certain popular misconceptions about what science is (the history of science being more complicated than is often realized).
posted by HP LaserJet P10006 at 2:12 PM on May 5, 2010


This brain had a name; "His name was Abby..."
posted by ovvl at 4:59 PM on May 5, 2010


The research group that I'm in explicitly studies this issue. One major issue that doesn't appear to have been raised so far is the presence of homeostasis. The vast majority of variability in organisms is hidden because layer upon layer of regulation produces relatively stable behavior. At least in our system, when you go looking to answer the question, "how much variability is there at the lower, mechanistic levels of cells, and how does it produce the observed variability in behavior?" the answer is startling: there's huge amounts of variability in say, the density of sodium ion channels, but the nervous system behaves almost the same from one animal to the next. It's completely different from how we build machines.

Organisms that diverge wildly from normal behavior simply die. However, that hidden variability can be unmasked when the organism is subjected to unusual conditions. So, when you see organisms operating under "nominal" conditions, they look pretty similar. Really stress them out though, and their individual differences will emerge. There will be winners and losers, with all that signifies in terms of evolution.

This principle is so important that it seems to me at least (and certainly to many scientists before me) that it's literally impossible to explain the workings of organisms without understanding the role of variability and homeostasis in producing behavior. Without the homeostasis, literally nothing would work. It is the bubble gum that turns a bunch of broken parts into a working machine.
posted by Humanzee at 5:04 PM on May 5, 2010 [3 favorites]
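
A cartoon of Humanzee's point: give each "cell" a wildly different channel density, then let a negative-feedback rule tune a compensating gain until activity hits a target. The variability is still there in the parameters, but it is hidden in the behavior. Everything here (the feedback rule, the numbers) is an invented illustration in Python with numpy, not Humanzee's actual system:

```python
import numpy as np

rng = np.random.default_rng(5)

n_cells = 100
# Huge hidden variability: channel densities span a five-fold range.
density = rng.uniform(1.0, 5.0, size=n_cells)
target = 1.0               # the activity level every cell "wants"
gain = np.ones(n_cells)

# Crude homeostatic regulation: each cell nudges its gain up or down
# until its activity (density * gain) sits near the target.
for _ in range(200):
    activity = density * gain
    gain += 0.05 * (target - activity)    # negative feedback

activity = density * gain
print(f"density range:  {density.min():.2f} to {density.max():.2f}")
print(f"activity range: {activity.min():.3f} to {activity.max():.3f}")
```

Despite several-fold variation in the underlying parameter, the regulated output is nearly identical across cells; stress the system (break the feedback) and the hidden differences re-emerge.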


We address individual differences all the time. That's what a lot of psychology and, increasingly in recent years, cognitive neuroscience are all about.
posted by solipsophistocracy at 5:04 PM on May 5, 2010


Humanzee: "This principle is so important, that it seems to me at least (and certainly to many scientists before me) that it's literally impossible to explain the workings of organisms without understanding the role of variability and homeostasis in producing behavior."

According to some, maintaining homeostasis and reproduction can be thought of as nearly the only things living organisms ever do. Nearly every act and process has some part in maintaining homeostasis or reproducing (though in pathological cases this breaks down of course). The nerve is a tool for maintaining homeostasis by sensing perturbations and changes in the environment (in order to compensate for them and maintain homeostasis), and the nervous system of hominids is an extremely complex and specialized mechanism for extending sensing and memory to effective prediction and planning, with the side effect of creativity.
posted by idiopath at 5:25 PM on May 5, 2010


It seems obvious. There is a reason for diversity (individuality). Describing this reason as a 'motivation' confuses the issue.

DNA is very complicated, with lots of flaws, flukes, and variations built in to the format. One of them might later be proved to be an 'intentional flaw' buried in the perfect piece of work, one which makes it thrive... But using the word 'perfect' confuses the issue.

Who knows what will survive? DNA multiplies the options.

When the meteorite hits, the bell-curve shifts over.
Most bell-curve charts represent only one axis.
posted by ovvl at 5:31 PM on May 5, 2010


Yeah, I know, they represent two axes. I meant to say that they don't represent three or four...
posted by ovvl at 5:34 PM on May 5, 2010


it's not the "But I'm a snowflake" objection that calls these social science projects into question, but the insufficiency of the method to account for how any individual, snowflake like or not, relates to the actions of other individuals and the perceptions and knowledges those actions create.

See, now, this makes perfect sense to me -- partially because I must have missed the creation of the "snowflake" meme. It doesn't matter, at least it hopefully doesn't matter from a scientific point of view, whether or not you believe in the individual as a transcendentally ineffable 'universe in a bag of skin'. It just matters that if you slap one twin, and then slap the other, two entirely different things will happen, because they have different experiences, slightly different calcium ion balances, the second twin could guess what was coming, etc.

The question is whether or not we can predict, like we can with the ball bearings or the other physics metaphors people are bringing in, the standing ovation. And we can't, and individual differences and the interactions between those slightly different individuals might be one good reason why. This seems not like giving up on science, but practicing honest science. The low road would be something like this:

it's generally trivially easy to code up models that automatically identify smartarses who think they're being clever by "inducing an outlier", using methods that do this thing called "outlier detection".

Which sounds to me for all the world like a scientist admitting that they use mathematical models to throw out data that doesn't agree with their mathematical models. This is especially disgusting in the social sciences, where you don't and can't have any proof (except here, anecdotally, because I told you) of the motivations of your test subjects. Assuming that all people with one statistical profile are 'outliers' is super shoddy.
posted by Valet at 7:28 PM on May 5, 2010


The author is asking a pretty basic and profound question. Why does genetic individuality/diversity exist? What is its role in evolution?

It is a basic and profound question, but it was already answered by Darwin 150 years ago. Evolution doesn't happen unless there is variation in a population. There is so much diversity on earth because life has been around a very long time; 4 billion years of evolution gets you lots of weird divergence.

The study of individuals is important though. Many people argue that one of the reasons for the financial collapse was a neglect of the importance of individual events, or "black swans".

I think asking if doctors are scientists isn't the relevant question here. The more interesting one is whether medical case studies about individual patients count as good science. Many would argue they do not. But if you are a patient with some freakish condition, this is cold comfort.
posted by afu at 8:03 PM on May 5, 2010


I think asking if doctors are scientists isn't the relevant question here. The more interesting one is whether medical case studies about individual patients count as good science.

To paraphrase a professor in my department: how many pigs do you need to observe in flight to believe that pigs can fly?
posted by parudox at 9:46 PM on May 5, 2010


Valet: "It just matters that if you slap one twin, and then slap the other, two entirely different things will happen"

By that logic me and me ten seconds from now are completely different and incommensurable entities.
posted by idiopath at 11:01 PM on May 5, 2010


Which sounds to me for all the world like a scientist admitting that they use mathematical models to throw out data that doesn't agree with their mathematical models. This is especially disgusting in the social sciences, where you don't and can't have any proof (except here, anecdotally, because I told you) of the motivations of your test subjects. Assuming that all people with one statistical profile are 'outliers' is super shoddy.

Put bluntly, that's ridiculous. Firstly, no model in any science has ever been able to capture every datum produced in every relevant experiment, so the standard that you're proposing would require us to discard every hard-won insight about the structure of the world that any science has come up with.

Secondly, I kind of suspect that you're confusing outlier detection with bad data analysis. If you're analysing the data properly, you *aren't* allowed to discard the data. What you can do, however, is introduce a latent "labelling" scheme, where some participants' data are fit using a principled cognitive model, while outlier data are assumed to have been generated in an arbitrary fashion. You can then compare two theories, one that uses the latent labelling, and a second one that doesn't (introducing the obvious controls for differences in model complexity, which are required in order to make a fair comparison). If the first model is statistically superior, you have evidence that it is sensible to label some subset of the data as random, and then not analyse it further (by definition, random data lack any interesting structure that you can model).

Now, that's a fairly complex form of outlier detection, and it's a little unfortunate that in practice people resort to simpler but less principled methods, but nevertheless, it's about as far from "throwing out data that doesn't agree with the theory" as you can get, because the theory is still required to absorb the noisy data via the part of the model that fits the outlier data. If some other scientist can develop a theory that actually explains those data that my theory labels as "random", then their model will be preferred by any sensible model selection method. However, if the data have in fact been produced by someone behaving badly in an experiment, then the only model that will actually succeed in beating my model that labels that participant as "random" will be the correct model that labels them as a "smartarse". However, since we don't have any good modelling tools to predict the behaviour of smartarses, "random" will work just fine as a first approximation.

That's more or less what good outlier detection does: it tells you which data make sense (from the perspective of your theory) and which ones don't. Then, when you compare two different theories, one part of comparing the two of them is asking how many people's behaviour each can explain successfully.

I hope that helps clear up your confusion about how data analysis in the social sciences works. Or have I misunderstood? Perhaps you were just trying to insult us without offering any evidence?
posted by mixing at 11:03 PM on May 5, 2010 [1 favorite]


parudox: I only need to see one. Conversely, many thousands of others have seen pigs fly UFOs, and I still don't buy that.

Our culture is overly obsessed with the use of nouns to describe what people do. Some doctors do science; I work with a couple who I would feel decidedly comfortable calling scientists, because they are heavily involved in the creation of new knowledge. Some doctors only apply knowledge, without generating new knowledge. Some doctors mostly just apply knowledge, but sometimes contribute new knowledge (e.g., medical case studies).
posted by McBearclaw at 11:08 PM on May 5, 2010


idiopath: I'm not saying 'incommensurate' or even 'completely different'. Just 'different'. We often expect behaviorists in the social sciences to at least attempt to control for the impact of age on respondents, time of day, whatever. Why not encourage them, and those in related fields, to more energetically examine the impact of individual differences within groups? Think of it as the advance between the period when we studied 'silverback gorillas' as a group, and when we learned that individual troops have different behaviors and cultures.

mixing: I find your analysis incredibly confusing on a sentence level, but not significantly different from my previous understanding of outlier detection. What's the difference between 'throwing out' data and 'not analysing it further' (except for my unnecessarily brusque tone)? How can it be that 'the theory is still required to absorb the noisy data via the part of the model that fits the outlier data' if you're saying that the model doesn't fit the outlier data, and that for that reason the data will not be analysed 'further'?
And what is the point of what outlier detection does, which is that "it tells you which data make sense (from the perspective of your theory) and which ones don't" when in fact the point of your experiment is to tell you which theories make sense from the perspective of your data? Shouldn't a result with one strong trend and a measurable, reproducible set of outliers indicate the need to study the outliers specifically, rather than set them aside? Shouldn't a result with a trend that includes a measurable, unreproducible set of outliers include those outliers in the final analysis as crucial indicators of experimental error?

Also, do you consider your view to be a consensus view? I'm seeing a lot of real doubt in the community about outlier detection that is 'automatic', as you put it, and not referred to or justified at some level by experimental design -- i.e. you need to know in some reasonable way that I am a smartarse before it's kosher to analyze your data without me in it. This would be something that people in the physical sciences would do as a matter of course.
posted by Valet at 11:36 PM on May 5, 2010


You don't need to prove that the data point is from an asshole trying to poison your dataset. All you need to prove is that the data point does not meaningfully correlate with the other data (there are a number of reasons this could be the case, and perhaps a future study can even show a meaningful pattern in the outliers that the current one cannot, if you keep the raw data - in other words, if you don't "throw it out"). And outliers are not just data points not confirming a theory - they are data points that don't fit well with the rest of the dataset, and would be the same set of outliers no matter which theory you were investigating.

My point about the future me was that you cannot statistically predict my future behavior all that more meaningfully than you can predict my individual behavior. But that doesn't matter because you can predict the aggregate behavior of the group I belong to. Or if you are only studying me (for whatever bizarre reason) you cannot predict any individual reaction but will probably be able to eventually predict my aggregate reactions over repeated trials.
posted by idiopath at 12:29 AM on May 6, 2010 [1 favorite]


idiopath: "you cannot statistically predict my future behavior all that more meaningfully"

Sorry, I meant "you cannot predict my behavior at a specific time any more meaningfully"
posted by idiopath at 12:45 AM on May 6, 2010


I guess I'm saying that the point of the article seems to be that it does matter that you cannot statistically predict future behavior, or individual behavior. That fact should be a topic of study in its own right, and it is true that a great many studies start from the assumption that groups are similar, as well as stable over time.

I follow now that you both seem to have interpreted "throw it out" as "delete and pretend that the data never existed". I meant, from the start, "to disregard data statistically". I had thought that people who actually delete raw data get strung up, or some sciency version of such with a pulley and a counterweight.

In general, though, I think I'm highly motivated to make this argument by politics -- what it means, when one studies human behavior, to intentionally cull difference from experimental results. These data are people doing things: to say 'not meaningfully correlate' and 'does not fit well' is one thing in the abstract, but quite another when your experimental result gets out there in the world with 5% or whatever of your sample going unanalyzed. Because although that 5% might be mostly comprised of huge jerkwater tools like me, it could also be a racial or ethnic minority that hasn't been controlled for, or an ideological group, or a bunch of free thinkers, or some important mutation forward. I would so much rather see experimental data hit the press as something like "among 95% of babies, two thirds prefer to look at same-race faces" than what we do get, which is much less equivocal. The kids who don't care about faces and won't look at the screen, or who otherwise don't 'fit in', are important. They're even important when you're specifically studying racism. That's a subjective political value, though, not entirely a 'your science is crap-ass' kind of thing, so I hope you don't feel offended that we disagree.
posted by Valet at 1:03 AM on May 6, 2010


The real question is: what do we do with the n% of data that isn't predicted by our theory? Valet seems to be disturbed by the tendency to paper over those discrepancies, which I am somewhat sympathetic to, but:

(a) very often that is a tendency of science journalism, which by nature oversimplifies results that are a lot more nuanced in the actual journal paper; and

(b) if we have no explanation of that data, we really can't do much other than note that some people did not act in accord with the majority action. I agree that it's good to do that -- and, arguably, this should be done more -- but it's complete rubbish to suggest, as the original article does, that individual differences aren't noticed or taken into account at all. As mixing and idiopath point out, they are noticed all the time; some people's entire careers revolve around studying individual differences, and there are very well-studied statistical models designed to do exactly that. And mixing even describes a technique that is useful for moving from "these people aren't explainable by our theory" to "these people aren't explainable by theory A, but are by theory B" in a principled way.

My hunch is that individual smartarses will probably never make it into some theory, because there are a thousand ways of being a smartarse, and it's just not very interesting psychologically to try to explain that (unless you're studying smartarses), any more than it is interesting for a physicist to explain the behaviour of every water molecule. But if there are substantial subgroups that have interestingly divergent behaviour, that is something to explain; mixing has described one way of working toward a theory of that -- and this is something scientists already do.
posted by forza at 1:20 AM on May 6, 2010 [1 favorite]


One more thing:

We often expect behaviorists in the social sciences to at least attempt to control for the impact of age on respondents, time of day, whatever. Why not encourage them, and those in related fields, to more energetically examine the impact of individual differences within groups?

I'm a little confused by what you think noticing individual differences is, if not for controlling for factors like age, etc. Okay, time of day is an external factor, not an individual difference, but controlling for (or evaluating the impact of) factors like age, SES, ethnicity, group identification, IQ, gender, etc, etc, just is trying to take individual differences into account. And, again, people do this all the time.

So what are you saying we should do that we aren't doing? Are you saying we should somehow try to come up with a theory that explains every single weird datapoint, even though -- as is implicit in your example -- they are probably caused by idiosyncratic factors like not having had enough coffee that morning or wanting to be a smartarse or wanting to impress the research assistant who is cute? I fail to see how trying to explain those people, when by nature we don't have the information to do so, is at all scientific. Or would be interesting even if we could do so. I am honestly confused about what you think we should be doing that we aren't already doing.

tl;dr: As far as I can tell, the interesting interpretation of "study individual differences" is something we already do: we try to identify the sub-group factors that might make individuals act as they do. You might plausibly argue we should do it more, but it is rubbish to suggest that we don't do it at all.
posted by forza at 1:35 AM on May 6, 2010


Valet: Yeah, my comment was a bit dense. Sorry, it's been a long day and I found the original article to be very insulting, since the author (as a psych professor) should be well aware that there is a massive psychological literature on exactly this topic. Even so, I shouldn't have been that aggressively technical in my comments. With that in mind, I'll have a go at explaining how individual differences modelling usually works without the technical details. Apologies if this turns into a tl;dr situation, or if I'm being tedious.

Suppose I have a theory about how people play chess (to pick an example that I'd have no idea how to model in real life!). There's a very large number of strategies that people actually do follow, and an even larger number of things that they might follow but never actually do (e.g., human chess players don't play the same style of game as Deep Blue). Let's denote the set of "strategies" that my theory incorporates by S. Next, notice that when I watch people play chess, I can't directly observe a strategy: I can only observe the set of moves they make. Let's imagine that I watch a lot of players, and let X(i) denote the moves made by the i-th person in my "chess study". One (very silly) thing I could do is try to lump everybody together, and see whether the "average" chess game belongs to my set of strategies S. An alternative approach would be to try to match each person onto one of the strategies that my theory predicts (I'll skip over how that matching works, because it's technical and not really relevant). Let S(i) refer to the strategy that best matches the behaviour of the i-th person when I use this approach.

Okay. The first thing I described ("lump everyone together") is often called the "averaging" approach, and the second thing I described ("analyse everyone separately") is sometimes called the "individual fitting" approach. Both of these are used a lot in the literature for pragmatic reasons, but neither of them is actually a very effective method. The averaging method ignores the fact that people are different from one another, and the individual fitting approach ignores the fact that people are often similar to one another. As a result, both of them produce catastrophic errors when you try to make predictions about new people. The upshot of all this is that psychologists have developed a whole collection of models that allow for both similarities and differences to be expressed. And some of those models really do date back to the 1940s, which makes it seem so bizarre to me that the original article didn't mention them. That was my main point originally.

The question of outliers relates to something that I've glossed over in the story so far. Specifically, what should I do if some of the people in my study aren't using any of the strategies in my predicted set S? Nothing in the description that I've given so far allows me to test this, but it's a horrible thing to get wrong, because it leads you to think that you know something about a person (e.g., that person i plays chess using method S(i)) when you actually don't know that. That's where a good outlier detection method comes in handy (though a lot of the bad ones don't help all that much, to be honest). What it lets me do is ask the question: is it more plausible to believe that the data X(i) were generated by someone playing chess using method S(i), or that the data were generated by some other process that I don't know how to describe? Since (by definition) I don't know how to describe these other possibilities, I instead try to test a slightly weaker question, which is: is it more plausible that X(i) came from S(i), or that X(i) are random? The idea here is that if my theory of person i is actually worse than a theory that person i is random, then I need to admit that I don't have any good theory of person i. That's what I meant to say by talking about "latent assignments", earlier. When analysing the data, I try to determine which of the participants in my data set I can actually describe sensibly using my theory.

In an ideal world, my theory should be so awesome that I can explain the behaviour of every single person I encounter. However, in the real world, that's about as plausible as a theory of water flow that correctly predicts the exact trajectory of every H2O molecule (h/t forza), so I just have to admit that my theory has some failures. Eventually, someone else will produce a theory that explains a greater percentage of chess players' behaviour, and that theory will supplant mine. That's progress! But even so, it's probably the case that there will always be some proportion of people whose behaviour you can't explain, even if your theory really is spectacularly good. Sometimes, people are weird, and sometimes they are jerks, and sometimes they behave in random ways. The more random people's behaviour actually looks, the more (literally) impossible it becomes to build a theory that says anything deep about them. That's (again, literally) the mathematical definition of randomness: it can't be simplified in any way.

At this point, I hope it's clear why "outliers" aren't the same thing as "throwing away data": it's because you're actually obligated to report outliers. Lots of outliers means that either (a) your theory isn't very good, or (b) the people in your study really were being nasty to you. Sometimes (b) is actually true -- sometimes, people are jerks. But it's not typical. Usually, it means you're missing something. In effect, by making your "outliers" accessible to other researchers, you're inviting them to try to improve on your effort, and come up with a better theory. Basically, outlier detection is an open admission that you can't explain all of the data. (That being said, what I've described is a bit of an idealisation... it's a sad truth that there's a fair amount of jerkiness among scientists too).

Hope that explains what I'm trying to say. It's a horrible oversimplification: somewhere, Thurstone is rolling over in his grave at my failure to build a model that properly accommodates the complete correlational structure of individual behaviour, but I hope he'll forgive me. This post is too long already.
posted by mixing at 2:41 AM on May 6, 2010 [1 favorite]


And what is the point of outlier detection -- which "tells you which data make sense (from the perspective of your theory) and which ones don't" -- when in fact the point of your experiment is to tell you which theories make sense from the perspective of your data?

Oh, and just to give the obvious response here, since it's really very important: the two are closely related. If only 20% of people make sense from the perspective of your theory, then it's a bad theory. If you can find a theory that makes sense of 95% of people, it's a better theory. I guess I wasn't explicit before, but it's exactly this method of assessing the data with respect to the theory that lets you reverse the process and determine which theories make sense with respect to the data. This is actually the single most critical feature of Bayesian data analysis, which is one fairly commonly used statistical framework.
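
For anyone who wants to see that "reversal" made concrete, here's a minimal Bayesian model comparison in Python. The coin, the two theories, the priors, and the data are all invented; the mechanics (Bayes' rule) are the standard ones:

```python
import math

# Two toy theories of a coin: "fair" (P(H) = 0.5) and "biased"
# (P(H) = 0.8). Data and priors are invented for illustration.
data = "HHTHHHHTHH"
p_heads = {"fair": 0.5, "biased": 0.8}
prior = {"fair": 0.5, "biased": 0.5}

def likelihood(theory):
    """P(data | theory): probability of the observed flips."""
    p = p_heads[theory]
    return math.prod(p if flip == "H" else 1 - p for flip in data)

# Bayes' rule: P(theory | data) is proportional to
# P(data | theory) * P(theory).
unnorm = {t: likelihood(t) * prior[t] for t in prior}
total = sum(unnorm.values())
posterior = {t: v / total for t, v in unnorm.items()}
print(posterior)  # the data favour "biased" by a wide margin
```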
posted by mixing at 2:56 AM on May 6, 2010


Okay, now even I know I'm being tedious, but I should also point out that there are subtle differences between the way I'm defining outliers and the way that idiopath is. Both play an important role in data analysis, but they're used for slightly different purposes. For once, I'll keep my mouth shut on the details, but I do just want to be clear that I don't disagree with idiopath's point about the importance of detecting whether "this data point really belongs with the other ones".
posted by mixing at 3:12 AM on May 6, 2010


Ahh. I am learning a great deal from all of this, idiopath and forza and especially mixing -- and I agree with all of you that the original article would have been boundlessly enriched by the kind of education you're giving me. I am starting to understand the strong language and snark I saw at the head of the thread.

I get, too, that since some of you seem to be practicing scientists, you resent the idea that someone may join your study with non-study-related goals. I offered it up as more of a limit case than some kind of 'I beat your study' type thing (if anybody has the illusion that I consider myself an exemplary person, let it here be shattered), and I wanted to offer a case where non-compliance gave an outlier (my example) and one where it gave an expected result (my friend, who wanted a certain amount of change). I'm not sure how the techniques mixing cites would handle a set of moves X(i) that matches a strategy in the set S but actually comes from an undefined strategy T, where the other move sets generated by T are being set aside as outliers (i.e. 35 people use strategy T, but only the 10 whose T-driven play happens to resemble S are included in the data set that undergoes analysis) -- erroneously reinforcing confidence in the predictive power of S. The comment on homeostasis above makes it seem like this could happen more often than one might guess. That's just my curiosity, though.
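
That worry is easy to put in simulation form -- here's a toy sketch, with every number invented: everyone actually plays an unknown strategy T, but only the people whose games happen to look S-ish survive the screening, so the retained data flatter S:

```python
import random

random.seed(1)  # deterministic toy run

def play_T():
    """The unknown strategy T: opens with e4 about half the time."""
    return ["e4" if random.random() < 0.5 else "d4" for _ in range(10)]

def resembles_S(moves):
    """A crude 'fits strategy S' screen: S predicts mostly e4 openings."""
    return moves.count("e4") >= 7

population = [play_T() for _ in range(35)]  # everyone is really using T
kept = [m for m in population if resembles_S(m)]

def e4_rate(games):
    return sum(g.count("e4") for g in games) / (10 * len(games))

print(f"true e4 rate under T: {e4_rate(population):.2f}")  # about 0.50
if kept:
    print(f"e4 rate among the {len(kept)} kept: {e4_rate(kept):.2f}")
# The rate among 'kept' is inflated: a conclusion drawn only from the
# retained players would wrongly flatter S over T.
```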

My argument, and honestly the reason I saw something to appreciate in the article, is about this question:

So what are you saying we should do that we aren't doing? Are you saying we should somehow try to come up with a theory that explains every single weird datapoint, even though -- as is implicit in your example -- they are probably caused by idiosyncratic factors . . .

YES. I am saying that even though that 'theory' is impossible to produce practically with 100% accuracy, and even though it is an immensely challenging problem both theoretically and with regards to experimental design, this is the actual gold standard -- not to look for reasonable ways to identify outliers, but to look for better, more intricate, more robust, more diverse patterns. I know that this is something that's inherent to science, but even in this extremely well-considered conversation, I see the group of you using words like 'idiosyncratic', 'random', 'weird', and 'not very interesting psychologically' -- indicating at some level that you feel that your outliers shouldn't count. This is fine when you're looking at earthquakes or seashells, but these outliers are people -- what do you say to the woman who married a man with a hugely asymmetrical face? Sorry, ma'am, you're weird? We've taken you out of our dataset because we lack an appropriate model for your behavior? You are not psychologically interesting? Perhaps it has to do with the preference in the academy for papers that say 'I know why' over those that say 'I don't know why'. I'm sympathetic to the practical limits behavioral researchers encounter, but I'm less sympathetic to researchers who think it's sufficient to study 95% of all people and repeatedly leave out the rest.

Mostly, though, I agree that what I'm getting through the popular/semi-popular science media is very different from what you are all actually doing. Your patience explaining all this makes me increasingly aware that it's not necessarily scientific structures that I mistrust, but the vast majority of writers who start articles with the sentence "A new study shows that."
posted by Valet at 6:02 AM on May 6, 2010


Thanks, Valet, for your classy and well-reasoned response. You raise a really interesting point, I think.

I am saying that even though that 'theory' is impossible to produce practically with 100% accuracy, and even though it is an immensely challenging problem both theoretically and with regards to experimental design, this is the actual gold standard -- not to look for reasonable ways to identify outliers, but to look for better, more intricate, more robust, more diverse patterns.

I find myself in the odd position of both agreeing with you strongly, and disagreeing quite profoundly at the same time.

I agree: On some level, I think that this is precisely my conception of what science ought to strive for. I probably wouldn't be a scientist if I thought it wasn't in principle possible to explain every last possible thing on earth (even if part of the explanation consisted of "and here is where random quantum mechanical fluctuations take care of the rest"). And that includes all of the idiosyncrasies of every individual's behaviour.

I also definitely don't think that people who are weird outliers are "not worth anything" or bad in any way as people... I'm quite a weirdo myself, and have nothing but love for the weirdos, speaking personally. Phrases like "not very interesting psychologically" are simply meant in a narrow sense that if I am studying X in a study, and people do not-X for a highly idiosyncratic reason, then it's not really in the bounds of what I am capable of theorising or hypothesising about.

Which leads me to...

I disagree. I think that anytime you pursue a theory that explains everything, you're in danger of overfitting. In the history of science, a common error is to go so far in trying to explain everything that you start finding patterns where there are none, or making theories so incredibly complex that they are useless (and probably wrong). A classic example of this is Ptolemaic astronomy, which tried to account for small errors in its predictions of planetary motion by adding ever more epicycles to the predicted orbits, until the theory was ludicrously complicated.

The essential problem is that the only theory that could possibly explain everything would have to be as complex as the thing to be explained; in which case it is not an explanation at all, since by nature an explanation simplifies. Which is a long way of saying what somebody up there said at the beginning of the thread: the purpose of science is generalisation. I think there is a very good reason for that.
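
The epicycle trap is easy to demonstrate numerically. Here's a toy sketch: fit some invented noisy data with a simple model and with an absurdly flexible one, and see which predicts new observations better. Nothing here is specific to astronomy -- numpy's standard polynomial fit stands in for "adding epicycles":

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented data: a simple linear law plus noise.
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + rng.normal(0, 0.2, size=x_train.shape)
x_test = np.linspace(0, 1, 100)
y_test = 2 * x_test + rng.normal(0, 0.2, size=x_test.shape)

for degree in (1, 9):  # degree 9 plays the role of "adding epicycles"
    fit = np.polynomial.Polynomial.fit(x_train, y_train, degree)
    test_err = np.mean((fit(x_test) - y_test) ** 2)
    print(f"degree {degree}: test error {test_err:.3f}")
# The degree-9 curve passes through every training point, yet it
# typically predicts new observations worse than the straight line.
```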
posted by forza at 8:07 AM on May 6, 2010


I guess I took the article a completely different way. I didn't think the article was trying to say HOW individuals are different, but WHY. I'll try to do a better job of explaining this, so it's clearer why 'evolution' is only a partial answer to the problem. For example, take a behavior like 'breathing': every single person does it, and the variability around the behavior is very low. Another behavior is sleeping. Pretty much every single person sleeps, but there's quite a bit more variability around that behavior than around breathing. Why should sleeping be more variable than breathing? Maybe breathing is more important to survival than sleeping? That could be one theory, and if so then you should be able to develop a model showing that behaviors more directly related to survival are less variable. So eating should be less variable than singing. This seems to make sense, but I don't know if it's been rigorously pursued and quantified with a hierarchy of behavior variability.

Does there exist a model that can predict the variability of a behavior before a sample group is measured? Something that says psychological tests that are x levels removed on the hierarchy of needs from survival should show statistical deviations that highly correlate to f(x). I think that was the point the article was trying to make (at least how I read it).
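
To be concrete about what "predict the variability before you measure" might look like, here's a toy sketch of such a model. The functional form, the constants, and the level assignments are all pure invention -- the point is only that the model commits to predictions in advance, which data could then confirm or refute:

```python
# A toy reading of the proposal: variability grows with a behavior's
# "distance" x from survival needs. Everything here is invented.
def predicted_cv(x, base=0.05, growth=0.6):
    """Hypothetical f(x): coefficient of variation at hierarchy level x."""
    return base * (1 + growth) ** x

behaviors = {"breathing": 0, "eating": 1, "sleeping": 2, "singing": 4}
for name, level in behaviors.items():
    print(f"{name}: predicted CV ~ {predicted_cv(level):.2f}")
# Testing the model would mean comparing predictions like these against
# measured variability for each behavior -- made before the sample
# group is measured.
```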
posted by forforf at 8:19 AM on May 6, 2010


forza: I get that, that makes sense. I hadn't thought of the matter in terms of elegance: there is something good about a theory that abstracts some portion of a complex system into a manageable, usable chunk -- I just hope that the chunk is inclusive of as many people as possible.

I will remember this bit about Ptolemaic astronomy, too -- it gets to what has always made me (rightly or wrongly!) scratch my head and eventually skip articles about superstring theory. Well, that and everything sounding like a Star Trek episode, except without the Crusher-Picard flirtation.
posted by Valet at 8:46 AM on May 6, 2010


Very interesting post and comments.

I am a media guy, not a scientist, but what strikes me the most about this topic is its timeliness. Until recently, a huge part of any kind of science has been classification, ontology. The priority is always to find some kind of order.

Now we have explored most of the general structure of scientific domains, so we have maps. And with computers, the Web, tagging and search, we have reached an unprecedented stage in the history of knowledge: unlimited memory space, unlimited classification, unlimited findability.

In brief: it's time to study individuality because now we can.

As for the benefits of studying individuality, I believe that we have barely scratched the surface. I have not decided yet if this splendid graph by Michell Zappa is a joke or the underlying truth.
posted by bru at 8:49 AM on May 6, 2010


forforf: You might be right. It could be that the author is calling for a theory of the origin of individual variation that holds across sciences. If so, it's probably fair to say that it doesn't exist at the moment. Of course, I'd assume that "step 1" for constructing that theory would be to develop a formal language for talking about the form that individual variation can take. Which is what psychological models of individual differences are trying to do. That is, to answer the "why" question, it would help to start by answering the "what" and the "how" questions.

That being said, I'm pretty skeptical that any truly domain-general theory is possible, other than in the most abstract terms. As a simple example, I ran an experiment (looking at what kinds of "not so wild guesses" people make during learning) a while back in which I deliberately left out a key bit of information, because I wanted to see what default assumptions people used to fill in the gaps. When I analysed the data, it turned out that there was a really interesting pattern of individual differences in how people solved the problem. To cut a long story short, I suspect that each person in the study had a different set of previous experiences that made it make sense to them to rely on a different assumption when solving the problem I gave them. But nothing in the problem had any ties to evolution, physiology, etc., so it's hard to see how you could develop a general-purpose-but-detailed theory that provides a causal explanation for both my data and (say) the variation among panda-paw sizes. You probably could devise a mathematical system that would generate patterns of variation similar to both my data and the panda data, but if that's the kind of theory that the author wants... well, that's starting to look awfully similar to the "describe the form of individual variation" models that we've been building for the last 70 years or so.
posted by mixing at 2:07 PM on May 6, 2010


Are articles like this what people mean when they talk about "intellectual wankery"?

I tried to read the whole thing, but it didn't seem to be making a very coherent point. Maybe I'm just a simple individual.
posted by Soupisgoodfood at 10:04 AM on May 8, 2010


This is fine when you're looking at earthquakes or seashells, but these outliers are people

That is ridiculous. The outliers aren't people, and psychometric instruments are not designed to measure people. No scientist is out there to distill the essence of what it is to be a human and slap it on a regression line. We're trying to discover patterns of behavior. Some patterns are part of a larger whole that we know is beyond our empirical grasp, and they make for aberrations in the patterns we can see. Instead of just throwing up our hands and saying "oh, well, if we can't explain everything, we certainly shouldn't try to explain this," we just find the patterns we can, and build on them so that we can incrementally examine more of the world around us in a disciplined way. We do not, however, boil human beings down to datapoints. That does not jibe with the respect most scientists have for complexity in any way.
posted by solipsophistocracy at 7:20 PM on May 8, 2010 [1 favorite]

