"There's not only an emotional quality, but there's a selfish quality."
September 3, 2014 6:05 AM

 
Related.
posted by Fizz at 6:38 AM on September 3, 2014 [3 favorites]


In a landmark 1999 study, Rolf Reber and Norbert Schwarz determined that people believed the statement “Osorno is in Chile” when it appeared as blue text on a white background but not when it was barely visible as yellow text on a white background. (Osorno really is a city in Chile, FYI.)

Somewhat hidden amid the most recent letter from my CEO were the instructions not to send him a PowerPoint with yellow text on a white background. To get this in the letter I'd imagine a few things happened, most obviously - someone sent him a PowerPoint with yellow text on a white background. Slightly less obvious: the distraction caused by such visual stimulus occupied more thought than whatever was being presented. While a glaringly obvious no-no from my perspective, the CEO wanted to make sure that no one in a 130K+ employee company made that mistake again.
posted by Nanukthedog at 6:39 AM on September 3, 2014


Important note: I think it was Tuesday night that Colbert pointed out 'vlog' has made it into the official Scrabble dictionary, but 'truthiness' has not.
posted by Nanukthedog at 6:41 AM on September 3, 2014


But what if it appears in white text on a blue background?!
posted by tofu_crouton at 7:25 AM on September 3, 2014 [4 favorites]


The less effort it takes to process a factual claim, the more accurate it seems.

That sounds about right to me.
posted by arcticseal at 7:27 AM on September 3, 2014 [19 favorites]


The most dangerous thing society teaches boys and men, especially white boys and men, is that their emotions are objective logic and reason and that anyone who disagrees is being irrational.

-Tumblr user jean-luc-gohard
posted by FirstMateKate at 7:27 AM on September 3, 2014 [20 favorites]


If only the author had used the same skills she used to gather the links inside the article to solve the bigger, more pressing issue mentioned in the first paragraph - her inability to "really grasp the difference between a moped and a Segway."

Young writers, this is why you need editors. That line, which I assume was meant to give a more 'of the people' feeling before diving into scientific theories, ends up either hamstringing the strength of her argument before it begins or indicating that it's some sort of meta-humor satire piece. She obviously did do some research, and while it does read a bit like a college writing assignment, there are reasonably solid points in there.
posted by chambers at 7:37 AM on September 3, 2014


jean-luc-gohard

That is an amazing handle.
posted by Sangermaine at 7:45 AM on September 3, 2014 [4 favorites]


"News flash! 'Conservatives' are much stupider than us liberals! (because only an idiot would disagree with us facts!)"
posted by rebent at 8:21 AM on September 3, 2014 [1 favorite]


"News flash! 'Conservatives' are much stupider than us liberals! (because only an idiot would disagree with us facts!)"

The last paragraph of TFA explicitly cautions against just that leap-to-conclusions move from the research being discussed. You will also notice no other commenters here making that claim, let alone making it unironically.

The argument is not that conservatives are dumb, it is that conservative arguments primarily appeal to cognitive biases; the article even suggests that sometimes those biases act as useful shortcuts. However, they don't always do so, and so an argument that appeals primarily to such biases but falls apart when examined in terms of evidence or reason is probably not true.

For example, the argument that everyone here is mocking conservatives as "stupid" fits rather neatly with, say, the outgroup homogeneity bias or perhaps with the group attribution error if you are speaking from anecdotal prior experience. But since no one here has demonstrably made the assumption you are satirizing, the argument falls apart in the face of the facts and stands revealed as an untrustworthy artifact of cognitive bias.
posted by kewb at 9:00 AM on September 3, 2014 [10 favorites]


a throw-away comment at the end of the article that immediately follows "the overlap between GOP tenets and low-hanging cognitive fruit" does not refute her bias against intuitive thinking and for the liberal definition of what is Good
posted by rebent at 9:08 AM on September 3, 2014 [1 favorite]


The brain only has so much processing power. So human brains/minds tend toward making simplifications and lazy generalizations as a form of performance optimization. If we had to fully process every piece of information we encountered in life from scratch with fully-engaged conscious critical thinking at all times, we'd be paralyzed and incapable of making decisions. We just wouldn't have the brain bandwidth to function.

This is unfortunately not a cultural or ethnic thing, but a human biology thing, I assume. We probably do need simplifying assumptions and other cognitive performance optimizations to facilitate daily operational thinking. However, it makes all the difference in the world whether our simplifying assumptions/provisional beliefs are valid/useful ones in the specific circumstances in which they're brought to bear. We'll probably never defeat biological efficiency's tendency to make us more prone to accepting ideas that are easier to process, but now that we know about it, we can try to work around it.

What's needed might be a sort of reverse Occam's Razor test: Is X really the model that best fits the facts or is it just tempting to think so because X is intuitive and easy to grasp?
posted by saulgoodman at 9:15 AM on September 3, 2014 [2 favorites]


a throw-away comment at the end of the article that immediately follows "the overlap between GOP tenets and low-hanging cognitive fruit" does not refute her bias against intuitive thinking and for the liberal definition of what is Good

Again, the argument here is less "intuitive thinking is always bad" and more "intuitive thinking without any mechanism of external verification is a very poor basis for social policy positions." Frankly, there are good reasons to be biased against intuitive thinking as the sole or even the primary basis for most positions. For one thing, it is very often proven wrong by externally verifiable facts.
posted by kewb at 9:18 AM on September 3, 2014


(Also, politicizing this along party lines seems dumb to me. I've known plenty of self-identifying liberals who believe there must be simple solutions to problems that more likely require complex ones in exactly the same way (hell, I'd argue some Marxist ideas about property are one example of this effect). But I think it is fair to say that not all participants in our political process are equally likely to appeal to these biases to advance their positions. As rebent's comment suggests, some people seem to identify particular thinking styles with their conceptions of political identity, but I see this particular cognitive bias at work well beyond the usual party political context. Even leftist academic departments probably aren't immune to the effect of this bias. We all have the same biological need to make unexamined, simplifying assumptions.)
posted by saulgoodman at 9:24 AM on September 3, 2014 [3 favorites]


Well, you're right Kewb, but this article really is full of dog whistles. Saying it's a matter of "selfishness" rather than "self-preservation" for example. Or even the basic framing around "truthiness", a word that a liberal invented to mock conservatives.
posted by rebent at 9:36 AM on September 3, 2014 [2 favorites]


Or even the basic framing around "truthiness", a word that a liberal invented to mock conservatives.

That word was invented to mock dishonest politics. If you get defensive about it you might want to ask yourself why.
posted by srboisvert at 10:29 AM on September 3, 2014 [3 favorites]


>"a word that a liberal invented to mock conservatives"

I'd say it's a word invented to mock a certain specific type of conservative. There really wasn't any way of knowing, back when Colbert coined the word, that the "truthiness" wing (characterized by Bush II, then Palin and the Truthers* and on to the Tea Party) was going to be the ascendant wing of the GOP, and not, say, the Chamber of Commerce wing. The version of Mitt Romney that was running around when this video came out was raising taxes, building universal health care in his state, (moderately) pro-choice, and a member of the GOP. Up until Palin got picked as McCain's running mate, the 2008 field was significantly less "truthy" than either Bush II or the current GOP. Her assumption (which you seem to have accepted) that "truthiness = all conservatives" is wrong, or at least ahistorical.

Also, if you want an example of liberal truthiness, just go to Whole Foods, or read about GMOs.

And holy shit Colbert looks young in those videos.


*dibs on "Palin and the Truthers" as a band name

p.s. The "Conservative=Truthy/Dumb" meme really bugs me, because it has the effect of reducing any change or growth in the GOP. I rarely agree with the DEMs on the solutions to our national problems, but I don't agree with the GOP on what those problems ARE. And the problems that the GOP gets up in arms about (immigrants, culture wars, "socialism") seem pretty influenced by factless, feely, truthy reasoning. It's just not healthy to have one half of our political system set down in stone as the place where the truthy people go, because then that's where truthiness feels at home.
posted by DGStieber at 10:47 AM on September 3, 2014 [3 favorites]


This morning on facebook, my cousin had re-posted a picture that purported to be the cop who shot Michael Brown. It took moments of googlage to find out it was a fake. It's pure, manufactured, truthiness bait. My cousin deleted it and thanked me. I occasionally see similar liberal junk. The appeal of truthiness affects the left and right. As much as I hate conspiracism, I think there's a big machine manufacturing lies to feed teaparty/ fundamentalist/ far-right rage and mistrust. Well, duh, Fox News. But it's working well to move the US to the right.

Why don't factcheck & straight dope have fb feeds?
posted by theora55 at 10:50 AM on September 3, 2014 [6 favorites]


But what if it appears in white text on a blue background?!

WordPerfect.
posted by The Tensor at 2:24 PM on September 3, 2014 [1 favorite]


The less effort it takes to process a factual claim, the more accurate it seems.

That sounds about right to me.


Early life experiences left me with trust issues that manifest as skepticism and cynicism. Whenever something conforms *too perfectly* to my world view, I have to fact-check it. And most of the time, those are the ones that are, at best, over-simplified misrepresentations. At worst, complete hogwash. And it makes me crazy that people fall for them. I'm sure I miss some, but I try really hard to examine everything critically.

I really wish that were the baseline everyone took: if it seems too on the nose, maybe it is. At least check into it.

I'm not sure about actual numbers, but I can say that my Facebook feed is full of plenty of liberals who fall into this trap. They like and share without two seconds of thought. It makes me so frustrated, because how can my "team" claim the moral high ground when they are also spewing wrong info? So yeah, anecdote isn't data, but it's a mistake to attribute this solely to conservative thinking.
posted by [insert clever name here] at 5:39 PM on September 3, 2014 [1 favorite]


the problems that the GOP gets up in arms about (immigrants, culture wars, "socialism") seem pretty influenced by factless, feely, truthy reasoning.

Ugh. I really hate this article, because it conflates two entirely separate hypotheses and puts them in one basket. The social intuitionist theory is not confined to conservatives, but it's presented here as though it were, through the focus on "harm" as a cognitive bias.

I'm persuaded by the Haidt moral foundations theory, which posits six innate moral foundations that most cultures base their moral theory around: care, fairness, freedom, loyalty, hierarchy/authority, and purity. And social research has shown that those who self-identify as liberal tend to focus more strongly on, say, whether someone is hurt than whether loyalty is maintained. Thus, when they argue that conservatives should look at statistics about how many people are hurt, for example, they are arguing that their own innate moral focus should be preserved, and that people who don't have that innate moral focus should change themselves. Instead of saying that conservatives are "selfish", it could just as easily be said that they are "loyal to family" - but that's not negative priming, and it doesn't fit the author's bias, so it's not used. To say that conservatives are selfish because they don't focus as much on the broad harm/care category would be as valid as saying liberals are traitors because they don't focus as much on the loyalty axis. These are shitty words for good people who have different, and equally valid, fundamental moralities.
posted by corb at 9:59 AM on September 4, 2014


Haidt's Moral Foundations work itself seems to have some significant methodological flaws, at least according to this working paper presented earlier this year. Some relevant excerpts:
Beyond their cognitive demands, the Moral Foundation items are enmeshed with cultural precepts and rhetorical cues. Consider, for example, the items listed above. Questions about the requirements for society, the demands of equality, or the virtue of chastity unavoidably make reference to particular belief systems with specifically articulated tenets about such ideas. Such rhetorical cues unavoidably raise problems of endogenous and confounding variables. If moral foundations are really innate, then their measures must be exogenous to any other cultural or political phenomena they purport to explain. Otherwise they are functionally indistinguishable from simply being alternative measures of culturally determined values. Yet, by relying on particular concepts like justice, equality or invoking particular standards of behavior like chastity, the moral foundation items invoke concepts and rhetorical signals of other belief systems.
...
In short, the multivariate equations demonstrate that religious sentiments are the primary sources of the reported ideological differences across the Moral Foundations scale. In particular, it is the greater religiosity of self-identified conservatives that explains their differential scores on the Authority, Ingroup, and Purity scales. Although ideology remains a significant but small predictor of the Harm and Fairness scales, it no longer relates to the Moral Foundations emphasized in the Haidt et al research. The fact that the inclusion of one scalar item that gauges largely religious sentiments can eliminate the robust relationship between ideology and Haidt’s central findings should cause great concern about the overall validity of Haidt’s original hypothesis test.
...

The experimental items demonstrate that the Moral Foundations Items are highly sensitive to idiomatic references and often work contrary to expectations from Haidt's theories. For example, in the framing experiment on the Harm foundation, conservatives are three times less likely to disagree with the statement that "Handouts, like X, keep poor people dependent on others and should be stopped" when the handout comes from "church soup kitchens" rather than Food Stamps. Liberals are four times more likely to disagree with the statement that "Everyone should have a say on controversial social issues, especially X, because that's only fair" when the reference is "religious groups" while conservatives are four times more likely when the reference is "minority groups." If moral foundations really inform ideological priors, then such large differences in responses should not exist.


Similar evidentiary problems occur with other moral frames as well. For example, Moral Foundations Theory suggests that conservatives "rely" more on intuitions of loyalty than liberals, yet we find no ideological differences in the high levels of agreement with the statement "People who leak secret government information undermine (our president's/President Obama's) fight against terrorism and deserve severe punishment" regardless of the reference. Moral Foundations theory also suggests that conservatives are more concerned with issues of purity, yet we find this depends on what is considered pure. Liberals and moderates, for example, were more likely to agree with the statement "Things made pure by our Creator should not be contaminated by humans" when the thing in question is "the environment." Conservatives, on the other hand, are more in agreement when the environmental cue is left out. Finally, Moral Foundations Theory suggests that conservatives' judgments are more attuned to concerns with authority, yet we find liberals and moderates are twice as likely to agree with the statement "While some may disagree with a particular President/President Obama, he's our leader and we have to follow him" than conservatives.

As noted above, one explanation for why Haidt finds such a strong relationship between the Moral Foundations survey items and ideology is because the survey items invoke rhetorical cues that signal appropriate responses to ideologues themselves. In other words, when faced with the difficult dilemmas posed in the Moral Foundations questionnaires, respondents may be using particular terms in the questions as ideological heuristics to guide their responses. This alternative hypothesis is based on the observation from other scholars (e.g., Everett 2013, Farwell and Weiner 2000, Lakoff 2010, Treier and Hillygus 2009,) that certain concepts or terms which are prominent in moral foundations theory are commonly understood as having ideological connotations.
Now, this is just a working paper, so Haidt's original work clearly outranks it in terms of academic weight, but the methodology of this paper's survey seems far superior to Haidt's very unrepresentative sample of people who are drawn to his website, and the conclusions are very compelling.
posted by tonycpsu at 10:47 AM on September 4, 2014
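
A minimal sketch of the kind of check the excerpted paper describes - regressing a Moral Foundations score on ideology with and without a religiosity control, to see whether the ideology effect survives. The column names, the data file, and the use of statsmodels here are illustrative assumptions, not anything taken from the paper itself.

    # Hypothetical check: does ideology still predict a Moral Foundations score
    # once a religiosity measure is added as a control?  Column names and the
    # CSV file are placeholders, not the paper's actual data.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("mfq_survey.csv")  # hypothetical survey responses

    base = smf.ols("authority ~ ideology", data=df).fit()
    controlled = smf.ols("authority ~ ideology + religiosity", data=df).fit()

    # If the ideology coefficient shrinks toward zero (and loses significance)
    # once religiosity is included, religiosity rather than ideology is doing
    # the explanatory work -- the pattern the excerpt reports.
    print("ideology coefficient, no control:  ", base.params["ideology"])
    print("ideology coefficient, with control:", controlled.params["ideology"])
    print("p-value with control:", controlled.pvalues["ideology"])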


It's an interesting counterpoint - reading it now, actually. But I think - honestly, like /many/ papers/ideas - it suffers from its preconceived notions. For example, it suggests that religiosity drives ideology, rather than that innate characteristics drive religiosity - I think I've seen some interesting twin studies around the latter, though I can't recall them specifically. For example, people with low religiosity may have low tolerance for authority and vice versa. It also says that the ideas suffer from being unable to differentiate between an innate moral drive towards authority and a psychological proclivity for authoritarianism, for example, which is a weird thing to want differentiated. There's a lot of "we can't disprove this", which is a real complaint to make, but does not necessarily mean that Haidt is wrong.

The idea that framing drives effect is a given and applies to literally every question in social science ever - as the authors note in their working paper, it is "unavoidable."

One thing that both the working paper and Haidt fail to account for, though, is the extent to which political understanding may affect even such questions. For example, they talk about classical conservatism and classical liberalism, as opposed to their modern examples, but fail to note the reference point. It is quite possible for conservatives to want to preserve the status quo that they see - a status quo that no longer exists or has not existed for some time - just as it is quite possible for liberals to want to preserve what they functionally see as a change. There's not a lot of room for perception in this working paper's ideas about ideology.
posted by corb at 12:56 PM on September 4, 2014


corb: The idea that framing drives effect is a given and applies to literally every question in social science ever - as the authors note in their working paper, it is "unavoidable."

That, in and of itself, is enough for me to doubt Haidt's central holdings -- that is, if you can get different results simply by juking with the questions to frame and prime your way to the results you want, then the traits aren't really tied to one's political ideology at all, they're just tied to how you asked the questions.

And the fact that Haidt's data was collected merely by gathering surveys from people who chose to go to his website also really casts doubt on his research methods -- that's like a basic Thing You Don't Do in social science research. Don't get me wrong -- getting truly representative samples is really hard -- but merely collecting surveys at your own website rather than from a randomized sample of the population seems like a really poor basis for such a study, to the point that I'm surprised Haidt's work has been so well-received. (You can oversample and undersample to try to correct for these errors, but it's far better to try to get a representative sample in the first place, so it's notable that the authors of the working paper were able to do that, while Haidt wasn't.)
posted by tonycpsu at 1:25 PM on September 4, 2014
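
For a sense of what correcting a skewed sample after the fact can look like, here is a toy post-stratification weighting sketch - one common technique alongside over- and undersampling. The age groups and population shares below are invented for illustration and come from neither study.

    # Toy post-stratification: reweight an unrepresentative (e.g. opt-in web)
    # sample so each group counts in proportion to its share of the population.
    # Groups and population shares are made up for illustration.
    from collections import Counter

    sample = ["18-29", "18-29", "18-29", "30-49", "50+"]            # respondent ages
    population_share = {"18-29": 0.20, "30-49": 0.35, "50+": 0.45}  # census-style targets

    counts = Counter(sample)
    n = len(sample)
    # weight = population share / sample share; overrepresented groups get weight < 1
    weights = {g: population_share[g] / (counts[g] / n) for g in counts}

    for group, w in sorted(weights.items()):
        print(group, round(w, 2))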


Honestly - and I say this as someone who has somewhat of an (albeit minor) academic background in social science - I am extremely skeptical of the idea that you can get a representative sample with a set as small as 1000. It's really, really hard to get a truly representative sample. I'll definitely grant the working paper's premise that Haidt's work self-selects for literacy and a certain level of technical competence, but I'm not sure that their sample - or really most samples - is much better. That kind of thing is useful when you are trying to get really simplistic answers, more difficult when you're trying to get complex ones. That's why I think the oversampling - the ridiculous amount of oversampling, 300,000 worth - works for him.

The honest to god dirty secret of social science is that you can always prime your sample, and usually are to some degree. You can filter it out a lot, but not perfectly.
posted by corb at 1:40 PM on September 4, 2014


The rules that govern what's a "good enough" sample size for a statistically significant conclusion are rather complex, but 1000 is good enough for a 95% confidence level and roughly a +/- 3% margin of error for the population of all U.S. adults. That's pretty damned good, and, regardless of the conclusions, I'd be much more willing to trust a truly randomized sample of that size over a much larger sample of online responses to a website survey.
posted by tonycpsu at 2:02 PM on September 4, 2014
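
As a back-of-the-envelope check on that +/- 3% figure, the standard worst-case margin of error for a simple random sample is z * sqrt(p * (1 - p) / n). A quick sketch using nothing but that textbook formula:

    # Worst-case margin of error for a simple random sample of n = 1000
    # at a 95% confidence level (z = 1.96; p = 0.5 maximizes p * (1 - p)).
    import math

    n = 1000
    z = 1.96
    p = 0.5

    margin = z * math.sqrt(p * (1 - p) / n)
    print(f"margin of error: +/- {margin:.1%}")  # roughly +/- 3.1%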




This thread has been archived and is closed to new comments