The Dunning-Kruger Effect is Autocorrelation
April 16, 2022 9:46 AM

The Dunning-Kruger Effect is Autocorrelation. As previously discussed on MetaFilter, the Dunning-Kruger effect is probably not real. In this blog post, economist Blair Fix explains how it is a statistical artifact.

Be sure to check out Blair's other interesting posts such as A Case Study of Fossil-Fuel Depletion where he performs a detailed analysis of oil well depletion in Alberta or the post Redistributing Income Through Hierarchy where he uses social hierarchy as a means for exploring income distributions. Blair belongs to an emerging school of economists who argue for a complete replacement of neoclassical theory with ideas based on biophysical principles and thermodynamics, which are discussed in the freely downloadable book Rethinking Economic Growth Theory From a Biophysical Perspective.
posted by FuturisticDragon (84 comments total) 43 users marked this as a favorite
 
So Dunning and Kruger thought they were really good at statistics, is what you're saying?
posted by clawsoon at 9:59 AM on April 16, 2022 [82 favorites]


Human behavior is hard to measure; there are so so so many inputs into each person and each moment. It may be that it's been actually impossible to measure an effect, but the effect still exists and affects human behavior and decision making.

Which is to say, failing to prove a thing is not the same as disproving it.
posted by amtho at 10:00 AM on April 16, 2022 [4 favorites]


Compared to most scientific papers, theirs is very funny, clawsoon. In fact, this is the end of their concluding remarks:

"Although we feel we have done a competent job in making a strong case for this analysis, studying it empirically, and drawing out relevant implications, our thesis leaves us with one haunting worry that we cannot vanquish. That worry is that this article may contain faulty logic, methodological errors, or poor communication. Let us assure our readers that to the extent this article is imperfect, it is not a sin we have committed knowingly."

It's very dry humor.
posted by MengerSponge at 10:10 AM on April 16, 2022 [48 favorites]


Can we still call Bitcoins Dunning Krugerrands though? Because that gag is just too on the nose to give up.
posted by St. Oops at 10:11 AM on April 16, 2022 [64 favorites]


Yes, St. Oops. I say yes.
posted by amtho at 10:20 AM on April 16, 2022 [5 favorites]


Jokes aside this is a really stunning takedown. Is it correct? I think so, but then I'm easily confounded by statistical arguments.

The moment I saw the Dunning-Kruger chart (Figure 2) I was like "huh, that's hinky" -- both the complication of the binning in quartiles and the straight-line x=x graph. Why publish that at all? Blair Fix explains why: to set up the comparison to the meaningful data. But according to Fix that's what sets up the autocorrelation, the implicit y-x. I think the analysis is sound, and his thought experiment with random data input sure is damning. But then how did this false result stand unchallenged for 17 years?

I've long thought that any paper that relies heavily on statistical inference needs to have a fully accredited statistician involved. The whole replication crisis is a serious problem, particularly in psychology. A lot of the simplest mistakes could be solved by having an honest, slightly adversarial statistician evaluating the work as it's done.

I'm troubled that such a basic mistake would exist in such a famous paper. Maybe the fame is related to the snarky, easily understood explanation of the Dunning-Kruger effect.
posted by Nelson at 10:27 AM on April 16, 2022 [10 favorites]


The best counter argument I saw in the comments (hackernews) is that the fact remains that NO correlation between self-assessment and actual skill is still interesting; you’d expect a positive correlation where the most uninformed understand that they are so. Still, that doesn’t pack the punch of an inverse correlation.

Can we still call Bitcoins Dunning Krugerrands though?

Literally my first thought too.
posted by condour75 at 10:47 AM on April 16, 2022 [7 favorites]


I accept the statistical review and conclusions. Nevertheless, there is still a fundamental reality that has to be grappled with. Namely, that ignorant people are often convinced that they are knowledgeable, and don’t have the understanding of how much they don’t know.

I mean, that’s a cold, hard reality I’ve encountered dozens of times in my teaching, so I know it exists. In almost every class I’ve ever taught, over thousands of students, I have had at least one student who “has a theory” about how some incredibly complex phenomenon works, and their very confident explanation shows they don’t even have the most rudimentary understanding of the subject matter.

If the Dunning-Kruger effect can’t be demonstrated statistically as a population-wide effect, then perhaps the effect among some subset of the population — which is real — is offset by people in another subset of the population who may not be well informed on a subject and may incorrectly underestimate their knowledge or competence in the subject matter. So that overall, the effect is not demonstrable.

But for some people, in some subject areas, it’s absolutely real that some of the most ignorant in a subject can have an overinflated sense of self-confidence in it.
posted by darkstar at 10:56 AM on April 16, 2022 [28 favorites]


Statistics be damned… I still believe it is possible for someone to be stupid enough to not know how stupid they are. The correlation being not knowing you’re stupid means thinking you’re smart. Having met a few people who do think this way is enough empirical evidence for me. The Dunning-Kruger Effect lives, at least in my mind…
posted by njohnson23 at 10:56 AM on April 16, 2022 [9 favorites]


Statistics be damned… I still believe it is possible for someone to be stupid enough to not know how stupid they are. The correlation being not knowing you’re stupid means thinking you’re smart.

but this does scale up. I've encountered some very smart people who've wandered into realms in which they don't really know what they're talking about. Yet they keep talking. aka Engineer's Disease. I think Jordan Peterson is a prime example.
posted by philip-random at 11:10 AM on April 16, 2022 [28 favorites]


On further reflection, I see my error in reasoning.

The Dunning-Kruger Effect suggests that there is a correlation between ignorance in a topic and an inflated sense of competence in it. However, no such correlation is shown to exist.

That makes perfect sense if you consider that there is a statistical distribution of self-confidence among ignorant (in a particular subject) people. That is, there is just as much likelihood that an ignorant person is overconfident as they might be underconfident, or accurately confident. Call it a normal distribution of confidence among ignorant people.

What that means, then, is that for any group of people who are ignorant in a topic, there will be a tail of the distribution curve in the “overconfident” region. And these are the folks that tend to stand out in our experience. The very ignorant yet very overconfident people irritate us more than the very knowledgeable yet underconfident folks, so we notice them more.

It’s those people that make us feel like there is a Dunning-Kruger Effect in the world. But there isn’t, really. It’s just that supremely overconfident idiots really stand out to us as a phenomenon, and we try to draw patterns around those experiences.
posted by darkstar at 11:10 AM on April 16, 2022 [45 favorites]


I mean when people use ideas like Engineer's disease and also just concepts like "low information voters" those are in practice ways of accusing (ad homineming) other people of metacognitive deficit, which is basically the same as the Dunning Kruger effect. So the problem is how you resolve that discrepancy, if the new statistics are saying this doesn't happen.
posted by polymodus at 11:14 AM on April 16, 2022 [1 favorite]


It’s true that random data produce a seeming D-K effect, but think what that means. Random data produce pure guesses about competency. Suppose instead that everyone just gave themselves a 50% rating. Then the horizontal competency line would intersect the diagonal line in the center. Is that meaningless? Well no. It is an actual D-K effect! Similarly, with random self-assessment, the lower performers would assess too high overall and the higher performers would assess too low. Does this disprove the D-K effect? No, it demonstrates it!

But in the original data, the competency line did trend upward slightly. So in fact, people with lower skills did rate themselves somewhat lower and people with higher skills did rate themselves slightly higher, overall. Just at a lower slope than justified and with a higher origin. So as amusing as this article is, and it had me convinced for a moment, in reflection I just don’t buy the argument.

My conclusion is that, on the whole, people aren’t very good at self assessment and generally rate themselves, in the Lake Wobegon way, as slightly above average with some tendency for higher performers to modestly rate themselves a bit higher. And in that case you get exactly the D-K effect. My take away isn’t that the D-K effect is wrong, but that it isn’t deep or unexpected. Rather, it’s exactly what you would expect under broad and reasonable assumptions.
posted by sjswitzer at 11:24 AM on April 16, 2022 [17 favorites]
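
A minimal sketch of the random-data thought experiment under discussion, with made-up numbers rather than the blog's or the paper's actual data: self-assessments drawn completely at random, binned by quartile of actual score, reproduce the classic D-K gap.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Actual percentile rank, and a self-assessment that is pure guesswork,
# statistically independent of actual skill.
actual = rng.uniform(0, 100, n)
perceived = rng.uniform(0, 100, n)

# Bin by quartile of ACTUAL score, as in the original study's figure.
quartile = np.digitize(actual, np.percentile(actual, [25, 50, 75]))
for q in range(4):
    m = quartile == q
    print(f"Q{q + 1}: actual mean {actual[m].mean():5.1f}, "
          f"perceived mean {perceived[m].mean():5.1f}")
```

Each quartile's mean self-assessment comes out near 50 while the actual means run roughly 12.5, 37.5, 62.5 and 87.5, so the bottom quartile appears to overestimate by about 37 points and the top quartile to underestimate by the same amount, even though the guesses carry no information at all. That is the shape of the argument in the comment above.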


The best counter argument I saw in the comments (hackernews) is that the fact remains that NO correlation between self-assessment and actual skill is still interesting; you’d expect a positive correlation where the most uninformed understand that they are so.

The graph from the newer paper that bins by educational level (if you think that’s an equivalent way to set it up, which I’m not sure about) shows about equal average error in every bin which wouldn’t support this.

But it does seem that, if one hypothetically did this experiment and did get back totally random data for the self-assessment as in the constructed example:

a.) yes it would stand to reason that the gap between self-assessment and actual assessment would be larger for lower values of actual assessment

b.) one still would have implicitly found that people are bad at self-assessment

Unless I’m missing something?
posted by atoxyl at 11:27 AM on April 16, 2022 [1 favorite]


In other words, it’s not so much wrong as essentially tautological.
posted by sjswitzer at 11:29 AM on April 16, 2022 [5 favorites]


This analysis is really interesting, but I would say that while it at least demonstrates that some of the effect is due to autocorrelation, it may be that there is some residual "real" effect. It would be interesting to repeat the analysis using a statistical procedure called "bootstrapping" that can be done in such a way to account for the autocorrelation. I'm not saying the effect is real, but we do have the statistical tools to test this.
posted by piyushnz at 11:31 AM on April 16, 2022 [1 favorite]


I find this curiously meaningful:
(snip)
The blood-dimmed tide is loosed, and everywhere
The ceremony of innocence is drowned;
The best lack all conviction, while the worst
Are full of *passionate intensity*.

Just me
posted by aleph at 11:32 AM on April 16, 2022 [6 favorites]


I've encountered some very smart people who've wandered into realms in which they don't really know what they're talking about. Yet they keep talking. aka Engineer's Disease. I think Jordan Peterson is a prime example.

The book Range dives into this a lot when talking about people and teams predicting outcomes. The more specialized knowledge one acquired, the more they would try to fit every problem into that framework.

If your deep specialization is authoritarian governments in the 80s, you’re seeing coup attempts at your grandson’s soccer after-party.

Teams doing competitive predictions made up of non-experts with broad knowledge outperformed the specialists by a huge margin. Been a few years since I read the book, but I loved it. I was putting together a team of amateurs to predict movie box offices and us all having different careers and specialties was an interesting time.
posted by OnTheLastCastle at 11:35 AM on April 16, 2022 [11 favorites]


Maybe I’m confusing myself but it seems like a more to-the-point explanation of the problem with the original is that it’s setting up a rather specific, dubious expectation, and knocking it down like it means something. Any correlation weaker than the perfect one they drew in would have shown a widening gap, right?
posted by atoxyl at 11:38 AM on April 16, 2022 [2 favorites]


I'll take my analysis one step further. Suppose people were actually really good at self assessment, say at around 90% accuracy, and have no upward bias whatsoever. You'd still have variance up and down. You'd find the D-K effect! And the effect is real because indeed on average lower performers rate themselves too high and higher performers rate themselves too low. It's just that it's a deeply trivial result in that with a moment's thought it's entirely expected.
posted by sjswitzer at 11:41 AM on April 16, 2022 [5 favorites]
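
One way to make that concrete, as a sketch rather than a claim about the original data: give everyone a noisy but unbiased read on their own standing, convert both the true ability and the self-estimate to percentile ranks, and bin by actual quartile. The noise level below is arbitrary.

```python
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(1)
n = 10_000

skill = rng.normal(0, 1, n)              # latent ability
signal = skill + rng.normal(0, 0.5, n)   # good-but-imperfect self-knowledge, no bias

actual_pct = 100 * (rankdata(skill) - 0.5) / n      # true percentile rank
perceived_pct = 100 * (rankdata(signal) - 0.5) / n  # self-placed percentile rank

quartile = np.digitize(actual_pct, [25, 50, 75])
for q in range(4):
    m = quartile == q
    print(f"Q{q + 1}: actual {actual_pct[m].mean():5.1f}, "
          f"perceived {perceived_pct[m].mean():5.1f}")
```

The bottom quartile's mean self-placement lands well above 12.5 and the top quartile's well below 87.5, with no bias anywhere in the setup: the bounded percentile scale plus ordinary estimation error is enough to produce the gap.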


>I accept the statistical review and conclusions. Nevertheless, there is still a fundamental reality that has to be grappled with. Namely, that ignorant people are often convinced that they are knowledgeable, and don’t have the understanding of how much they don’t know.

Note that the linked article does provide a possible solution to this:
Figure 11 does show an interesting pattern. Moving from left to right, the spread in self-assessment error tends to decrease with more education. In other words, professors are generally better at assessing their ability than are freshmen.
***IF*** this holds up then this is actually a kind of "better" Dunning-Krueger. That is to say, less educated people are just worse at assessing their own skill level. They are not particularly optimistic or pessimistic in their assessment (on average), but they are just worse.

That helps explain why 10% or 20% of those "ignorant people" you mention are really convinced that they are much, much smarter than they actually are.

However - especially as a teacher, parent, mentor, trainer, etc - please don't forget that (assuming the more ignorant people are just worse at assessing their own skill level) this leaves another 10-20% who are convinced that they are much dumber than they really are.

These people are likely to give up too soon, not even try doing things they are perfectly capable of, etc.

The same effect also neatly explains "Engineer's disease" and Nobel Prize winners getting things outside their specialty very, very wrong while completely convinced that they are right. Outside of their specialty these people are "the ignorant" and so worse than average in estimating their own ability in that area. Some people will underestimate their ability but some will overestimate it, too, and these are the ones we tend to notice.

Having written a full dissertation about this brilliant new insight, however: Based on the data the author presents I am not very convinced that it really exists.

* The author doesn't give an actual statistical analysis of a significant difference in spread. It is merely mentioned in an offhand comment.

* There is no particular noticeable difference in spread going from freshman to senior - and I would expect a noticeable difference. Why does the change only, suddenly, appear with grad students?

* The sample sizes of grad students and professors are markedly smaller than for undergrads. This will, all on its own, create a spread that looks smaller - possibly completely explaining the observation.

Finally - and on a completely different tack - if you look at the "green bubbles" pointed out in Figure 11, there is a noticeable downward trend to them (Dunning-Krueger strikes back!!!1!!!!1!). There is clearly nothing there that rises to the level of statistical significance, so I am not going to state a claim. But it would be interesting to have a study similar to this but powered highly enough to tease out an effect of the size seen there, to see if it really exists or not.
posted by flug at 11:44 AM on April 16, 2022 [14 favorites]


On the whole, in general, for most people, practically everyone ---

Maybe the problem is trying to address critical issues by treating populations as homogenous. Real change, positive and negative, comes from small groups or even individuals, enabled by a wider culture.

We need to address people -- each other -- as individuals, not homogenous automata.
posted by amtho at 11:49 AM on April 16, 2022 [5 favorites]


METAFILTER: seeing coup attempts at your grandson's soccer after-party.
posted by philip-random at 11:52 AM on April 16, 2022 [6 favorites]


The debunking of the Dunning-Kruger effect doesn't overthrow the basic anecdotal observation that some people are too ignorant to know how ignorant they are. All the debunking says is that the inverse correlation, where the least competent are almost always the most confident and the most competent are almost always the least confident, is not real.

Instead of a world where most people fit into two boxes (incompetent people unaware of their incompetence, competent people too timid to know their own competence), there are four boxes (incompetent people who think they are competent, incompetent people who think they are incompetent, competent people who think they are competent, & competent people who don't think they are competent). There are still incompetent people who have enough self-knowledge to know their own incompetence. However, on average, when we're talking about humanity as a whole, the take-away is that humans do a very bad job of measuring their own competence.

Sometimes, some humans do correctly assess their own competence, but some of that may be due less to self-awareness than being a lucky guesser.
posted by jonp72 at 11:54 AM on April 16, 2022 [6 favorites]


My own experience with D-K isn't simply that people who are lacking don't only think that they know more than they do, but also that if you start to try to talk to them about their errors, they refuse to listen because they already know. It's not just ignorance, it's like they can't mentally process that they aren't smart enough to already know this and that they need to revise their worldview. And I'm not talking explaining physics or something. Basic stuff, like how baking doesn't mean you dump everything into one bowl, but you have a wet side and a dry side and mix them later and that's why my cookies work and yours don't. "Oh, my mom always did it this way, I learned from her, mine should be better, I'll try again." *uses same bad method*

I mean, I've a bit of a history with really dense boyfriends and sometimes it was just remarkable how stubbornly they would cling to false world views because they couldn't understand how they could be wrong.
posted by hippybear at 12:03 PM on April 16, 2022 [6 favorites]


some people are too ignorant to know how ignorant they are

I would offer that "too ignorant to know how ignorant they are" may be the defining trait of human consciousness. Even the most Socratic-minded among us are vastly unaware of most of what we're vastly unaware of; trying to fit square pegs in round holes is how human thinking works, and learning is probably more the result of random chance, trial and error, and positive/negative reinforcement than of insight.

The Dunning Kruger effect may have more to do with socialization and one's relative confidence about their place in society. The privileged (or those who aspire to be/identify with the privileged) are more likely to assert their half-baked ideas (like I just did) than everyone else.

But then again, I was an English major, so what do I know?
posted by Saxon Kane at 12:09 PM on April 16, 2022 [3 favorites]


well, it's true in coding. redo a narrow study at that. mainly because the difference between knowing a feature-correct approach and a feature-correct, maintainable, extensible, secure approach is 10-12 years of hard knocks.
posted by j_curiouser at 12:10 PM on April 16, 2022 [2 favorites]


> But in the original data, the competency line did trend upward slightly.

Yes, I noticed that right away back with the original D-K graphs. People really can, on average, place themselves as better or worse on the scale - maybe not perfectly, but they are clearly trending in the right direction.

Compare that to Figure 10 in the linked article - that is a truly random sampling. It's a straight line at 50% with just a bit of variation due to the noise of randomness.

By comparison, the Dunning-Krueger graph is offset well above 50% - because "everyone is above average" as the saying goes. And there is the clear upward trend that you observe.

It's just that the trendline is not moving up at a 45 degree angle, which is the expectation they set up by the comparison graph.

> Suppose people were actually really good at self assessment, say at around 90% accuracy, and have no upward bias whatsoever.

It's actually worse than this, because of well known biases in answering survey questions and such. Say you have a survey where the expected answer is "10" and the range given for your answer is 0-100.

People will, on average, choose answers way above 10 here, simply because there is so much space to the right of 10 and so little space to the left.

Change the range - say 0-20 instead - and now your answers won't show the same upward bias.

This will happen in reverse if the range is 0-100 and the expected answer is 90 or 95. People will, on average, choose a lower number because there is more "space" to the left.

End result of this in context is that people on the "ignorant" end of the scale will certainly trend towards averaging up, and people on the "skilled" end of the scale will trend towards averaging down.

So when you look at the Dunning-Krueger figure, you are seeing to a great degree what you would expect given the format: There will be an upward trend to the skill estimates. But because of survey/response design that upward trend will be somewhat reduced compared to the 45 degree angle you might naively expect.

What is less expected and more interesting in the data:

- People can indeed estimate their ability levels, at least in a relative sense.

- There is a strong tendency to believe that you are above average (people place themselves at the 65th percentile overall when of course they should be at the 50th).

- There really is a huge discrepancy shown in the lowest quartile. Their "true" place should be around the 12.5th percentile, yet they are placing themselves nearly at the 60th percentile!

Even if you realize that some of that is explainable - survey design as explained above, plus the "everyone is above average" phenomenon - the discrepancy is still really huge.

If they were coming in at 30%, or 40% - OK, believable. But almost 60%?

That's where D-K got a lot of its oomph and even with the debunk it's not quite all explained away.

(Though what remains seems to be due mostly to how much people really do believe they are above average (and by "people" I mean U.S. college students - do other populations all have this same bias?). And any remaining effect beyond that might well be explained by the fact that less skilled people are simply worse at estimating their own skill level - just 'worse' though, not with any particular bias towards estimating it high or low.)
posted by flug at 12:18 PM on April 16, 2022 [1 favorite]


If the conditions are rephrased to state that black and white thinkers are incompetent at difficult decision making from sweeping generalizations (such as everything either absolutely true or false, right or wrong), when compared to moderate thinkers who see things as shades of gray, it becomes more apparent that black and white thinkers would have higher confidence in their ability. Absolute thinking simplifies assumptions, exaggerating the conclusion. This is more in line with Bertrand Russell's observation: “The whole problem with the world is that fools and fanatics are always so certain of themselves, and wise people so full of doubts.”
posted by Brian B. at 12:27 PM on April 16, 2022


Interestingly, the wiki article makes no mention of the problem with the Dunning Kruger Effect.
posted by storybored at 12:50 PM on April 16, 2022 [1 favorite]


Interestingly, the wiki article makes no mention of the problem with the Dunning Kruger Effect.

You could change that.
posted by clawsoon at 12:54 PM on April 16, 2022 [1 favorite]


Has anyone done a similar study with a sample group pulled from winners of the Darwin Award?
posted by njohnson23 at 1:22 PM on April 16, 2022


Figure 11 does show an interesting pattern. Moving from left to right, the spread in self-assessment error tends to decrease with more education. In other words, professors are generally better at assessing their ability than are freshmen.

The problem with this: consider the extreme case of someone with an Einstein-level knowledge covering 100% of the tested subject area. It is not possible for that person to overestimate themselves, but someone at the bottom of the scale can both overestimate and underestimate themselves.

Even if the spread at the bottom averages out to zero, the spread at the top end has to average out as a negative value (possibly a small difference, but it's still going to exist).

So the D-K effect can both be real and also reflect a completely random allocation of the possible over/under estimations.
posted by Lanark at 1:58 PM on April 16, 2022 [1 favorite]


The Wikipedia article does have a section on criticism, including some of the articles criticizing the statistics that are discussed in the blog post we're talking about here. It's not very clear or forceful though.

Down in the comments on the blog post they talk about editing the article.
My guess, though, is that given how popular the Dunning-Kruger effect has become, any Wikipedia edits may be quickly undone. But I guess it's worth a try.
I have this reaction pretty much any time I think of making a minor change to Wikipedia. Absolutely not worth my time arguing and re-applying an edit.
posted by Nelson at 2:00 PM on April 16, 2022 [1 favorite]


any time I think of making a minor change to Wikipedia. Absolutely not worth my time arguing and re-applying an edit.

I feel like there is a parallel Dunning-Kruger-Jimmy-Wales effect waiting to be researched here. e.g. people with limited knowledge in a given domain are more likely to edit the Wikipedia page for it!
posted by Lanark at 2:11 PM on April 16, 2022 [4 favorites]


Is there maybe some drift between the actual (and discredited) Dunning-Kruger effect and its popular understanding? Kind of like with Occam's Razor, 'do not multiply causes' becoming 'the simplest explanation is usually true.'

There's definitely such a thing as being overconfident in a new skill or area because you don't yet know enough or have not yet encountered its constraints, flaws, limits or risk. 'A little knowledge is a dangerous thing,' etc. Does anyone want to get on a jumbo jet with a barely qualified pilot on the strength of the paper? Or feel as comfortable as with one who has 1000 hours on the type?

Technical diving mishaps are also a good example.
posted by snuffleupagus at 2:18 PM on April 16, 2022 [1 favorite]


My guess, though, is that given how popular the Dunning-Kruger effect has become, any Wikipedia edits may be quickly undone. But I guess it's worth a try.

I've found that adding a short (1 or 2 sentences), clear, separate paragraph with references usually stays in. I've made edits to the Human Genetic Variation, Race and Genetics, and Capitalism articles that are still there, and if I'd expect any articles to be heavily policed and argued about it'd be those ones.

The edit I made that didn't survive was to the Flugumýri Arson article, which was deleted because I quoted too much from a possibly copyrighted/possibly not source.
posted by clawsoon at 3:02 PM on April 16, 2022 [2 favorites]


Getting Darwin Award winners to answer a survey question would be quite the feat, considering the criteria for qualifying for the award.
posted by biogeo at 3:17 PM on April 16, 2022 [9 favorites]


The thing about the idea that people who are bad at something are less likely to know how bad they are at it, is it's not a new idea. Plato wrote about it 2400 years ago in the Meno, and before Dunning and Kruger came along, people just called it Meno's Paradox. What Dunning and Kruger purported to add to this very old idea is a specific statistical assessment, turning this intuitive perception that most of us have into a scientific observation. The "Dunning-Kruger effect" isn't just that people without skill lack the means to recognize their own low skill, but that being low skill causes one to inflate one's self-estimate of skill, and being high skill causes the opposite. This causal relationship is inferred from the flawed statistical analysis that they did. The analysis of the flaw presented in this blog post (which is not original to this piece, and has been previously reported in the scientific literature) shows that that causal inference is not justified, because the correlation is spurious by construction.

(Incidentally, the author describing this as "autocorrelation" slightly puts me on edge. In my field, autocorrelation is a perfectly cromulent analysis technique examining how a signal correlates with a time-lagged version of itself, which isn't the same thing at all. I've literally never heard the word used the way it's used in this blog post, to mean correlating a (non-time-dependent) variable with itself, or a linear function of itself.)
posted by biogeo at 3:34 PM on April 16, 2022 [13 favorites]
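
For contrast, here's what the time-series sense of the term described above looks like in practice: correlating a signal with a lagged copy of itself. Illustrative numbers only.

```python
import numpy as np

rng = np.random.default_rng(2)

# A noisy periodic signal: each value is related to its own recent past.
t = np.arange(500)
x = np.sin(2 * np.pi * t / 50) + 0.3 * rng.normal(size=t.size)

def autocorr(x, lag):
    """Pearson correlation between x[t] and x[t - lag]."""
    return np.corrcoef(x[lag:], x[:-lag])[0, 1]

for lag in (1, 10, 25, 50):
    print(f"lag {lag:3d}: r = {autocorr(x, lag):+.2f}")
```

Correlation is strong at lag 1, strongly negative near half a period (lag 25), and strong again near a full period (lag 50): the signal is compared with a time-shifted version of itself, which is a different operation from correlating y - x against x.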


Thanks, biogeo. That helps me contain my compulsion to link 50 videos on various diving and aviation accidents.

It still makes intuitive sense to me that there's a point in the learning curve at which the scope of what you don't know dawns on you, along with the complexity of the interrelation of factors you didn't have the sophistication to appreciate before, and the risks that proceed from that. And that beginners tend to overestimate their capabilities and underestimate risks -- although I don't know that it follows that experts underestimate their capabilities. Often not. Complacency can lurk there too.
Still, at some point you stop having the 'why can't we just...' thoughts and start being surprised that things work as well as they do and worry about what hasn't been accounted for (generally and in your own work).

But of course you keep working past that, and come into a different kind of confidence grounded in practice. (Which can still be misled, of course. Or ossified. The whole beginner's mind thing lurks there.)
posted by snuffleupagus at 3:40 PM on April 16, 2022 [1 favorite]


Yeah, I think most of us have the experience as we get increasingly skilled at something of realizing all the ways things can go wrong that we didn't even know enough to worry about when we were less skilled. The analytical incorrectness of the Dunning-Kruger effect doesn't negate this experience that most of us have had, it just means that they weren't measuring it.

If you're a pedant (like me), and still want to have a handy phrase for describing this phenomenon, I feel like most of the time when people say "Dunning-Kruger," you could substitute in "Meno's paradox" and convey the same idea while avoiding the incorrect analysis.

And I'm genuinely looking forward to an expert in ancient philosophy explaining why that's not an accurate way of applying the concept of Meno's paradox.
posted by biogeo at 3:46 PM on April 16, 2022 [5 favorites]


I feel like most of the time when people say "Dunning-Kruger," you could substitute in "Meno's paradox" and

Those of us with Anabaptist backgrounds would be wondering when the founder of the Mennonites got into philosophizing.
posted by clawsoon at 4:05 PM on April 16, 2022


“In other words, it’s not so much wrong as essentially tautological”

My ears are burning!
posted by wittgenstein at 5:10 PM on April 16, 2022 [5 favorites]


Like sjswitzer, I'm not convinced by this analysis.

First off, "the Dunning-Kruger effect is just autocorrelation" would only debunk the original paper if their conclusion was "the difference between perceived ability and actual ability is correlated with actual ability". As I understand it, the actual claim is something like "the difference between perceived ability and actual ability is correlated with actual ability in a particular way".

Second, it seems like I can apply the logic of this debunking to any linear model. Suppose I collect a bunch of data and fit a linear model showing `y = a + bx + e` (where e is drawn from some error distribution with suitably small variance). I go to write my paper: "y is correlated with x!". A reviewer writes back: "You're making a claim that y - x depends on x. Why, that's just autocorrelation!". The reviewer is correct: if `y = a + bx + e`, then by definition `y - x = a + (b - 1)x + e`. Did the author of this blog debunk linear regression?

Notably, this article linked from the blog post does not mention autocorrelation. Instead, it claims to show that the Dunning-Kruger Effect can be explained by a combination of something called the "better than average effect" and regression to the mean.
posted by jomato at 7:00 PM on April 16, 2022 [2 favorites]


MetaFilter: their very confident explanation shows they don’t even have the most rudimentary understanding of the subject matter.
posted by kirkaracha at 7:01 PM on April 16, 2022 [6 favorites]


Even the most Socratic-minded among us are vastly unaware of most of what we're vastly unaware of
There are known knowns — there are things we know we know. We also know there are known unknowns — that is to say, we know there are some things we do not know. But there are also unknown unknowns, the ones we don’t know we don’t know.

Donald "Mr. Socrates" Rumsfeld
posted by kirkaracha at 7:05 PM on April 16, 2022 [3 favorites]


y = a + bx + e...Did the author of this blog debunk linear regression?

just keep your power gloves off my point slope formula
posted by snuffleupagus at 7:16 PM on April 16, 2022


jomato, I think you're missing a step. If you collect data and fit a model y = a + bx +e, and you ask the question "are x and y correlated?" that's equivalent to asking "is b different from zero?" But if you have a transformed variable z = y - x, and ask "are x and z correlated?" that's equivalent to asking "is b-1 different from zero?" These questions can have different answers.

One of the counterintuitive things about correlation is that it's not transitive. That is, if x correlates with y, and y correlates with z, it doesn't necessarily mean that x correlates with z. So yes, it's definitely true that if you take random variables x and y and form z = y - x, you'll find that z and x correlate almost trivially, unless y happens to correlate with x in just the right way (with a regression slope of one).
posted by biogeo at 8:31 PM on April 16, 2022 [3 favorites]
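
A toy check of that last point, with fabricated data: x and y independent, yet z = y - x comes out strongly (negatively) correlated with x just by construction.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=100_000)
y = rng.normal(size=100_000)  # generated independently of x
z = y - x

print(f"corr(x, y) = {np.corrcoef(x, y)[0, 1]:+.3f}")  # ~ 0.000
print(f"corr(x, z) = {np.corrcoef(x, z)[0, 1]:+.3f}")  # ~ -0.707 by construction
```

For independent variables with equal variance, the expected value of corr(x, y - x) is -1/sqrt(2), about -0.71, even though x tells you nothing about y.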


All I know is that I know nothing and I really hope everyone else knows they don't either
posted by I'm always feeling, Blue at 8:33 PM on April 16, 2022 [6 favorites]


Absolutely, flug - there should be people who underestimate their competency. I would posit that women and minorities are over-represented in that category.

There's also the DK of the people asking the question; competency is very multivariate and blind to "interdisciplinary-ism."

Take "engineer's disease" - someone might have technical knowledge of a process, but not understand the sociological context of how the process has to be achieved by a disparate team of actual people with possibly very different motivations.
posted by porpoise at 10:10 PM on April 16, 2022


"Lies, damned lies, and statistics."

Sometimes the statistics are asking the wrong question in wrong ways.

DK is a very real phenomenon, but it needs caveats and it is still worth investigating to figure out why/ how some people overestimate (and underestimate!) their abilities/ profess(!)/ manifest-as-if-their abilities beyond their internally/ externally adjudged abilities.
posted by porpoise at 10:14 PM on April 16, 2022


Others have mentioned one of the best comments on hackernews, but I don't think that comment has been given its full due here. Essentially the critique says that if self assessments were random, then the mechanical procedure of the DK paper would basically give you what it got. From that the critique concludes that DK is a "statistical artifact". But that is not the right way to look at it. Specifically, unpack the statement "self assessments were random". That's a mathematical statement. But what does it imply? It implies that, for example, the very low skill person would on average overestimate themselves and a high skill person would, on average, underestimate themselves. Think about the extreme examples of the person with the lowest score and the highest score to convince yourself of that. But that statement is quite consistent with the hypothesis of DK. So, in fact, the random data generating process that shows DK is just a "statistical artifact" is in fact quite close to what DK is meant to show.
posted by carid at 10:48 PM on April 16, 2022 [3 favorites]


There are still supremely incompetent people who are overconfident. If you know people like that, you are not being challenged by Science. The only thing being claimed is there's not a tendency for incompetent people to be that way.

If you were using "Dunning-Kruger Syndrome" as an insult to mean "so stupid you don't know you're stupid" that is already a step removed from the paper anyway. So keep doing it if you enjoy it.

That being said, the thing about autocorrelation is it describes a real effect. It's just not very interesting, in a scientific sense. If you are a very short person you really do end up working at companies where people are taller than you. In the same way, really incompetent people as a group must tend to overrate their relative aptitude, because they can't underrate it!
posted by mark k at 10:55 PM on April 16, 2022 [2 favorites]


Absolutely, flug - there should be people who underestimate their competency. I would posit that women and minorities are over-represented in that category.

I believe one example is 'imposter syndrome'. It's an internal experience of believing that you are not as competent as others perceive you to be - you feel as though at any moment you are going to be found out as a fraud, that you don't belong where you are, and you only got there through luck. It was initially thought to apply mainly to high achieving women, but now is recognised as applying more widely.

It can also be self-perpetuating; you put in massive amounts of overwork to try and conceal your perceived incompetence, and thus you can only have succeeded because of that extreme effort and/or luck. And thus even small failures are always down to you not working hard enough, and of course that you are incompetent in the first place, leading to a permanent state of anxiety. It can become nearly impossible to accept any level of success or praise as earned, and is one of the on-ramps to full blown depression.

Also; isn't mansplaining a gender-based version of Dunning-Kruger?
posted by Absolutely No You-Know-What at 12:09 AM on April 17, 2022 [3 favorites]


It can also be self-perpetuating; you put in massive amounts of overwork to try and conceal your perceived incompetence, and thus you can only have succeeded because of that extreme effort and/or luck. And thus even small failures are always down to you not working hard enough, and of course that you are incompetent in the first place, leading to a permanent state of anxiety. It can become nearly impossible to accept any level of success or praise as earned, and is one of the on-ramps to full blown depression.


I cannot tell you how unsettling it is to see one of the pillars of one’s self identity so clearly and neatly laid bare on the internet like this.
posted by darkstar at 12:37 AM on April 17, 2022 [5 favorites]


(Incidentally, the author describing this as "autocorrelation" slightly puts me on edge....

I had the same reaction, and more than just slightly – autocorrelation is absolutely not the statistical equivalent of saying 5=5, it's a much more interesting and useful concept/tool than that. The fact that the author made this claim early on made me read the rest of the article skeptically, and nothing I read further on convinced me the author knew what he was talking about. In fact, I began to get suspicious that the article was a joke intended to illustrate the DK effect ("I can throw around some words from statistics and talk about y-x vs. x and it sounds like I'm smart, but really, not so much"). The repeated rhetorical device of "seems fine, until we realize" also annoyed me, since in each case the thing we were supposed to suddenly "realize" with the author's help seemed trivially obvious from the outset and also to not be as problematic as the author thought. I found the first part of biogeo's comment much more compelling than anything in the article itself.

I'm with sjswitzer and jomato on this. If everyone was perfectly self-aware and could identify their own skill level completely accurately, DK's original plot would have looked like two curves matching perfectly, both sloping upward the same way. If, on the other hand, everyone was perfectly terrible at self-awareness and gave a completely random answer for their own skill level, the plot would have had a horizontal line for "perceived ability" (like in Figure 9 in the article). The article claims this would be a spurious statistical artifact showing massive effect when the raw data had no effect, but it actually would be a clear demonstration that people are terrible at estimating their own skill and their answers are no better than random.

The original DK plot is neither of these extremes: DK's "perceived ability" line does slope upward, but not as steeply as the "actual test score" line. For the bottom quartile, "perceived ability" is much higher than "actual test score" while for the top quartile, it is lower but not by a huge amount. As jomato points out, one of the linked articles describes this as a combination of the "better than average effect" and regression to the mean. The way I look at this, that's a neat explanation of the DK effect, not any sort of debunking of it. In general, people estimate their own ability imperfectly, and, as one might expect, those with the highest ability are more likely to underestimate while those with the lowest are more likely to overestimate. Meanwhile, people have a bias for thinking of themselves as better than they are. In the case of high ability, these two effects counteract each other to some degree, leading to estimates that aren't terribly inaccurate. In the case of low ability, these effects both act in the same direction, leading to wild overestimates of competence.
posted by judgement day at 6:25 AM on April 17, 2022 [10 favorites]


This is wonky statistics nit-picking, but is "autocorrelation" really the right word? In my experience, "autocorrelation" was used in time series to describe relationships between points in the same series. For example, you would use autocorrelation to determine whether a good month of sales was likely to be followed by another good month, or by a compensating bad month.

If you made this mistake in a class on regression, I would expect the teacher to explain that the regression statistics (confidence limits, etc.) require that the independent variable and the dependent variable be independent.
posted by SemiSalt at 6:31 AM on April 17, 2022 [1 favorite]


Do they say anywhere how they motivate those in their studies to self-assess reliably? I will self-assess differently on a job interview, when in therapy, when reviewing my answers on a multiple choice exam that penalizes "guessing."
posted by Obscure Reference at 7:12 AM on April 17, 2022


The original DK plot is neither of these extremes: DK's "perceived ability" line does slope upward, but not as steeply as the "actual test score" line.

The actual test score's line is constructed to have slope 1. It's comparing the quartile of actual scores to the percentile of actual scores.

In my experience, "autocorrelation" was used in time series to describe relationships between points in the same series.

That works because y_{t+1}={y_t + some relatively minor \delta_t}, and y_t appears on both sides like the article describes.

There are other kinds of autocorrelation. Most obviously there's spatial autocorrelation, like voters who live near each other having correlated error terms because of all the things about them that you didn't measure and because homophily. Likewise, we do multilevel models in large part because we're worried about correlated errors among the observations from a given classroom/hospital/state.

I mean I kinda gave up when the article described an ordinal variable as categorical but still.
posted by GCU Sweet and Full of Grace at 7:31 AM on April 17, 2022 [2 favorites]


although autocorrelation is frequently applied to temporal data, i don't think it is limited to that case. height above sea level is spatially autocorrelated - locations close to each other have similar heights. you could probably apply the same ideas to any sort of distance metric, like the quartile bins in the link - in that example, people who have similar objective performance will have similar self-assessment scores. (on preview: what GCU Sweet and Full of Grace said).

the sin (or at least one of them) the author seems to be committing is that they're only talking about the case where (using temporal autocorrelation terms) lag = 0. i guess it makes sense in that they're talking about correlating each quartile bin with itself (hence "auto" correlation) but is not really a standard use. calling it autocorrelation is kinda gilding the lily since the main point is the tautological use of objective performance.
posted by logicpunk at 7:48 AM on April 17, 2022 [1 favorite]


I agree with points made above:
* if self-assessment were randomly distributed (on average, people think they are at 50th percentile rank)
* if skill were randomly distributed (from 0% to 100% percentile rank)

Then it follows that:
* every group (of sufficient size) based on skill level would, on average rate themselves 50%

Thus:
* the top 10% of people (who would average 95% percentile skill) would, on average, underestimate their skill by about 45% because they are at 95% but claim 50%.
* the bottom 10% of people (who would average 5% percentile skill) would, on average, overestimate their skill by about 45%, because they are at 5% but claim 50%.

In this hypothetical example, isn't that precisely the DK effect?

[Edit: had skill and self-assessment swapped in first 2 lines, fixed]
posted by soylent00FF00 at 9:26 AM on April 17, 2022 [2 favorites]


Is it maybe helpful to consider the range of possible survey results, and assess the D-K graph's result relative to that?

If we had the maximum perverse outcome, the self assessment line would start high and end low. If the test takers were perfect in their self evaluation, the result would be the x=x line. If the data were random, or the self assessments were all the same, the result would have been a horizontal line.

What we have is a medium-ish divergence in slope from x=x. So, people were right in part, but not completely accurate. It shows sort of a West by North West direction of travel.

If anything, this shows that D-K isn't proven by the graph, but the possibility remains open. Proponents of D-K would have to show that people who did poorly on the test tended to over-estimate their scores out of a tendency toward the perverse result, rather than just a tendency toward the random.

... I think?
posted by Richard Daly at 9:27 AM on April 17, 2022 [3 favorites]


That's an interesting perspective, Richard Daly.

If people's self-assessments were perfectly random (ignoring bias), the self-assessment line would be horizontal. If the line slopes upward, then self-assessment is better than a guess. Had the line sloped downward, people's self-assessment would be worse than a guess. That would be very interesting ("perverse") indeed, but it's not what the data show.

Almost any divergence from perfect self-assessment would show some D-K effect, and indeed self-assessment is far from perfect.

Again, I conclude that the D-K effect is real but tautological. High performers do rate themselves worse than they score and poorer performers do rate themselves higher than they score. This doesn't reflect any foible of the human psyche, but instead falls inevitably out of the mathematics.

(Nth-ing annoyance with the use of "autocorrelation" here. In my experience autocorrelation is used to detect periodicity in data as described previously here. Wikipedia notes, however, that "Different fields of study define autocorrelation differently, and not all of these definitions are equivalent. In some fields, the term is used interchangeably with autocovariance," and I suspect that's the meaning intended here.)
posted by sjswitzer at 10:15 AM on April 17, 2022 [1 favorite]


I agree that it seems to me like the author is shoehorning this into being a problem about autocorrelation, with the result that their critique as written would exclude a lot of valid analyses also. ("Spurious results caused by autocorrelation" seems to be a theme of theirs, so perhaps that's why.) The author talks about how the implied comparison is "(perceived - actual) score vs. actual score," but to me, that is not at all obvious. The comparison that is implied to me is whether the slope of the "perceived score vs. actual score" graph is less than one. While DK's methods or data may well have been insufficient to show this, that's a totally acceptable question to pose using linear regression -- people test for significant differences in slope from a given expectation all the time -- so I don't understand why the author is saying this is intrinsically misguided.

I also agree that the comparison to "random data" is not actually showing a null effect -- rather, it's showing an effect that would be very strong and surprising if it were true (that people's perception of themselves is totally uncorrelated to their actual ability). It seems to me that the null hypothesis here should be that people's perception of their abilities can't be distinguished from their actual ability (i.e., the y=x line).

I still think I agree that the way DK did this seems problematic -- but it's less because of autocorrelation between means and more because of issues with the variance. First, the DK graph doesn't show the spread in the data at all. Just presenting the means at each quartile makes the measurements look much more confident than they are. If you applied standard linear regression to these summaries instead of the actual data, you'd essentially be laundering uncertainty, so your results would have an anti-conservative bias. Secondly, because of DK's data transformation into quartiles (and possibly also because of the task itself -- I don't know enough to comment there) you should expect a lot of heteroskedasticity. In other words, we wouldn't necessarily expect the variance to be the same across percentiles, in this case because of floor and ceiling effects. That's a violation of the assumptions of (standard) linear regression, so you'd have to use a fancier method that takes this into account. The averaging and the heteroskedasticity together could appear to show a DK effect when the actual issue might be, for example, a global better-than-average bias with a ceiling effect.
posted by en forme de poire at 10:38 AM on April 17, 2022 [10 favorites]
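
A rough illustration of the point about regressing on quartile means, using invented data with a weak person-level relationship and a lot of spread:

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(4)
n = 1_000

# Weak true relationship between actual score and self-estimate, lots of spread.
actual = rng.uniform(0, 100, n)
perceived = 50 + 0.2 * (actual - 50) + rng.normal(0, 20, n)

raw = linregress(actual, perceived)
print(f"raw data:       r^2 = {raw.rvalue**2:.3f}")

# Collapse to four quartile means, as in the classic D-K figure.
q = np.digitize(actual, np.percentile(actual, [25, 50, 75]))
xm = np.array([actual[q == i].mean() for i in range(4)])
ym = np.array([perceived[q == i].mean() for i in range(4)])
binned = linregress(xm, ym)
print(f"quartile means: r^2 = {binned.rvalue**2:.3f}")
```

The four bin means trace a nearly perfect line (r² close to 1) even though the person-level relationship explains under ten percent of the variance. The averaging hides the spread before the reader, or a naive regression, ever sees it.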


Still not sure how they ended up on "academic year" as a measure of skill. Aren't results on the test the actual measure of skill, and the academic year just a proxy of expected skill? I guess I'm still not getting why you wouldn't want the axes to be Actual Score vs Self-Predicted Score, and see what the correlation was between them.
posted by klangklangston at 10:46 AM on April 17, 2022 [1 favorite]


I don’t do statistics but something has been bothering me. Binning results in quartiles seems to me unmotivated at best and contrived at worst. Why not just plot performance against self-assessment and then do regression analysis? That would sidestep the “autocorrelation” issue entirely. Can someone who does stats enlighten me?
posted by sjswitzer at 11:46 AM on April 17, 2022 [1 favorite]


Hmm, it seems en forme de poire has already asked and answered that.
posted by sjswitzer at 12:51 PM on April 17, 2022


I began to get suspicious that the article was a joke intended to illustrate the DK effect
If this turns out to be true, that's a pretty good joke.
posted by jomato at 1:27 PM on April 17, 2022


So, if the author's article does not clear things up, is it a case of Dunning-Krugeresque nominative determinism?
posted by Calvin and the Duplicators at 8:58 PM on April 17, 2022


And I'm genuinely looking forward to an expert in ancient philosophy explaining why that's not an accurate way of applying the concept of Meno's paradox.

Not an expert, but I was puzzled about what Meno's paradox has to do with this. Apparently people sometimes associate it with the "only thing I know is that I know nothing", which comes from Plato's Apology and not Meno. But Meno's paradox is kind of a flip side of this: if you know the answer to a question, then you will not learn anything new by asking further questions; but if you don't, then you don't even know what to ask or how to recognize if the answer is correct. So I suppose it is related! Never realized that, despite thinking I know my Meno etc. Cool!

Anyway, the Gell-Mann amnesia effect is the name for auto-dunning-kruger.
posted by Pyrogenesis at 10:41 PM on April 17, 2022 [1 favorite]


In this hypothetical example, isn't that precisely the DK effect?

Kind of but kind of not. The actual claim in the original paper is that incompetent people overrate their level of skill because they are incompetent. If what's happening is that at all skill levels people think they're average, or perhaps just that people are universally bad estimators of their skills, then the DK interpretation of their data is wrong.
posted by mark k at 10:59 PM on April 17, 2022 [1 favorite]


The actual claim in the original paper

Does anyone have a link to the original paper? I checked Wikipedia and it's not abundantly clear which is the "right" one to read...
posted by soylent00FF00 at 11:09 AM on April 18, 2022




Kruger, J., & Dunning, D. (1999), "Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments," Journal of Personality and Social Psychology.

(Note that the paper title pretty much nails the controversy: is it difficulty recognizing one's own incompetence leading to the inflated self assessment, or is self-assessment universally hard, and the incompetent inflate because that's the only direction they can err?)
posted by biogeo at 11:18 AM on April 18, 2022 [2 favorites]


@biogeo, thanks, that's the article I was thinking of, here is a more accessible URL ;-)

The paper is really quite weak in a number of ways - across the 4 studies, the lowest quartile subject group has about N=15 subjects in it, which is not very comforting. Study 4 does a 2x2x4 ANOVA on only N=140 subjects - ugh.

The first 2 studies mostly establish the effect but are purely observational/correlational.

The paper also addresses the main complaint of TFA:

Despite the inevitability of the regression effect, we believe that the overestimation we observed was more psychological than artifactual. For one, if regression alone were to blame for our results, then the magnitude of miscalibration among the bottom quartile would be comparable with that of the top quartile. A glance at Figure 1 quickly disabuses one of this notion. Still, we believe this issue warrants empirical attention, which we devote in Studies 3 and 4.

These later studies use a more experimental/causal design, in which they try to change the subject's level of competence in rating, either by showing them work from other students, or giving them training to improve their own competence at the skill.

Their results seem to support this claim, e.g. that with better metacognitive skills, the lowest quartile improves their self-ratings substantially.
posted by soylent00FF00 at 11:30 AM on April 18, 2022 [2 favorites]


I think the fact that ability assessment is purely relative is part of the effect there, though. Like, if you were to ask me, "Are you good at math?" I might say "Yeah, I think I'm fairly good." But if you ask me the same question in a room full of professors of mathematics and theoretical physics, I'd probably say "No, not really." It's not that my metacognitive ability to self-assess has changed, just my understanding of the population I'm being compared to. So in their study, if I remember correctly (haven't reread it just now), since they're using quantiles to assess performance against the population, you can't disambiguate an improvement in metacognition per se from an improvement in the estimate of the distribution in the population.
posted by biogeo at 11:55 AM on April 18, 2022 [1 favorite]


Out of curiosity, I went to see if Andrew Gelman had written anything about the Dunning-Kruger paper, and sure enough he collects some of the criticisms in this post from last fall. I like his final takeway (which I think echoes some posters' sentiments above):
One relevant point, I guess, is that even if the observed effect is entirely a product of regression to the mean, it’s still a meaningful thing to know that people with low abilities in these settings are overestimating how well they’ll do. That is, even if this is an unavoidable consequence of measurement error and population variation, it’s still happening.
posted by jomato at 3:42 PM on April 18, 2022 [3 favorites]


Nice point @jomato - though I'm wondering, is "regression to the mean" even the right term here? My understanding is it's more about repeated measurements of the same variable e.g.

"a concept that refers to the fact that if one sample of a random variable is extreme, the next sampling of the same random variable is likely to be closer to its mean." (Wikipedia)

Clearly, the D-K effect is not this, since measurement 1 is "Your skill" and measurement 2 is "Your estimation of your own skill"

If you go back to the original meaning from Galton, it's a little less clear, as he actually measured "height of parents" vs. "height of their children" - which is also not exactly the same variable.

Thanks for a fun discussion everybody, I think I'm 50% "real phenomenon", 50% "statistical artifact", and 50% "both" :-)
posted by soylent00FF00 at 4:16 PM on April 18, 2022 [2 favorites]


I think they’re saying it’s regression to the mean in the sense that if you get a really extreme test score, it’s likely that on further retesting you’d get something more moderate — in other words, you’d expect almost the same graph from just giving two different tests of the same material. I think this is a good point that makes sense, but I think it probably wouldn’t explain this effect entirely because of the offset.
posted by en forme de poire at 6:28 AM on April 19, 2022 [1 favorite]
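
A quick sketch of that test-retest idea with simulated data (the noise levels are arbitrary): two equally noisy tests of the same ability, with people binned by their first score.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 10_000

ability = rng.normal(0, 1, n)
test1 = ability + rng.normal(0, 0.7, n)  # two equally noisy measurements
test2 = ability + rng.normal(0, 0.7, n)  # of the same underlying ability

q = np.digitize(test1, np.quantile(test1, [0.25, 0.5, 0.75]))
for i in range(4):
    m = q == i
    print(f"Q{i + 1} by test1: test1 mean {test1[m].mean():+.2f}, "
          f"test2 mean {test2[m].mean():+.2f}")
```

Each quartile's second-test mean sits closer to the overall average than its first-test mean, so a D-K-shaped gap appears with no self-assessment in the picture at all. Though, as noted above, regression to the mean alone wouldn't produce the overall better-than-average offset.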


I got to wondering what the right way to do the statistical analysis is. I thought it might be tricky, but maybe not. We have two data sets. Using the definitions in the debunking article, there are x = actual test scores, and y = predictions of test scores. Since y is a predictor of x, we can set up an ordinary regression x = my + b (m = slope, b = additive constant).

If there was no error and no DK effect, then all the predictions would be correct, x = y everywhere, and the result would be a straight line. The slope m = 1. (This assumes that the x and y are on the same scale, say a test score between 0 and 100. If not, some adjustment is required.)

If there is a DK effect, but no random error, the result would be a straight line that would be lower for small y (actual less than predicted) and higher for big y (actual greater than predicted).

So to test for DK effect, do the regression x = my + b, and using Stat 101 techniques, construct a confidence interval for m. If the CI includes 1, then you have to accept the null hypothesis of no DK. If the CI if completely larger than 1, then the data supports DK.

The simplicity of this approach raises the question of why D&K used the over-complex method of analysis that led them into error. Perhaps, this easy approach didn't support the DK effect and they went hunting for "more discriminating" method. That would be a no-no.

I have a couple more comments. 1) The truth or falsehood of the DK hypothesis should not depend on a single rather cheesy experiment. That would require multiple tests using different methodologies. 2) Lord knows, there are plenty of examples on Twitter of people spewing bullshit that they don't know is bullshit. They are a lot easier to identify than examples of knowledgeable people undervaluing their opinions. (Krugman's low opinion of economists' predictions may be an example.) 3) DK may be real, but perhaps "incompetence" is the wrong word to use.
posted by SemiSalt at 7:13 AM on April 20, 2022
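
For what it's worth, here is a sketch of the Stat-101 test described above, run on invented data (the numbers are placeholders, not anything from the study):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n = 200

# Invented data on a common 0-100 scale: self-predictions y that are
# informative but compressed toward the middle of the scale.
x = rng.uniform(0, 100, n)                                        # actual scores
y = np.clip(55 + 0.5 * (x - 50) + rng.normal(0, 12, n), 0, 100)   # predictions

# Regression x = m*y + b, with a 95% confidence interval for the slope m.
res = stats.linregress(y, x)
tcrit = stats.t.ppf(0.975, df=n - 2)
lo, hi = res.slope - tcrit * res.stderr, res.slope + tcrit * res.stderr
print(f"m = {res.slope:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

An interval sitting entirely above 1 would indicate that the predictions are compressed relative to actual scores, the pattern the D-K figures display; an interval containing 1 would not support that reading.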


> Blair belongs to an emerging school of economists who argue for a complete replacement of neoclassical theory with ideas based on biophysical principles and thermodynamics, which are discussed in the freely downloadable book Rethinking Economic Growth Theory From a Biophysical Perspective.

How is this not a reinvention of Systems Thinking? Classic economics says there are no limits to the growth of capital -- when Donella Meadows' famous book is The Limits to Growth.
posted by k3ninho at 10:46 AM on April 21, 2022


Or just cybernetics.

wiki: The Human Use of Human Beings: Cybernetics and Society (Norbert Wiener). Full PDFs are out there.
posted by snuffleupagus at 12:11 PM on April 21, 2022 [1 favorite]


Judging by this list of books deemed relevant to the field, it seems that biophysical economics openly acknowledges systems thinking as a source of ideas. The Limits to Growth, World Dynamics by Jay Forrester, and The Limits to Growth Revisited are all included there.
posted by skoosh at 5:07 AM on April 22, 2022


Bizarre to omit Wiener, Ralph Parkman etc. Or Wallerstein, on world-systems. A good dose of Latour wouldn't hurt either. List gives me weird eco-authoritarian vibes. Does this orientation assume a spherical everything and ignore qualitative concerns?
posted by snuffleupagus at 6:27 AM on April 22, 2022

