Changing minds on minority rights with a single conversation, revisited
April 8, 2016 1:04 PM
Previously on MeFi, a pair of then-graduate students, David Broockman and Joshua Kalla, uncovered that some highly publicized research, which claimed to show that brief conversations with gay canvassers could cause lasting changes in people's opinions on gay rights, was in fact fraudulent, based on fabricated data. Whether there was any grain of truth to that paper's claims, however, remained an open question. Recently, the same team that uncovered the fraud has published their own study, showing that canvassing really can be effective at durably increasing support for transgender rights.
Unlike what was reported in the fraudulent paper, the effect did not depend on whether the canvassers were cis or trans. The experimenters attribute their success instead to a canvassing technique designed to build empathy with trans people by encouraging subjects to reflect on their own experiences and to connect them to the experiences a trans person might face. Interestingly, this technique was very effective for trans rights but appeared not to be effective with respect to abortion (again, contrary to claims in the fraudulent work by LaCour).
The paper, and the data it is built on, are freely available. Science Magazine has published an open-access perspective on the science, as well as an unfortunately non-open-access note on the history of the scandal Broockman and Kalla uncovered.
Maybe every non-cynicism circuit in my brain has long since shorted out, but this is totally going to get 1/100th the airtime and coverage the "haha see minds can't be changed it was all a lie!" story did. That's totally salacious, this isn't.
posted by emptythought at 1:28 PM on April 8, 2016 [5 favorites]
I can absolutely understand why folk don't relish the idea of having to undertake hundreds of lengthy conversations with hostile strangers in the hope of mildly raising their estimation of one's right to exist. It's an exhausting labour that I don't have to undergo to be accepted, and it carries risks which I've never had to face.
Still. Anything that fosters compassion between human beings is a de facto Good Thing, I think. I suppose the takeaway is that open, honest dialogue fosters empathy. Which makes the world a better place, if ever so fractionally. So I'll endeavour to engage in it more.
posted by RokkitNite at 1:32 PM on April 8, 2016 [10 favorites]
I was just about to come and make a post that wouldn't have been nearly this nice. I'm really curious what our canvasser mefites think. I'm really glad that these guys have now gotten a nice paper in Science and well-deserved fame out of this, and that the story can be about them rather than LaCour and Green.
Reading their takedown, Irregularities in LaCour and Green, together with LaCour's Response to Irregularities in LaCour and Green, was really fucking impressive for me with my passable fluency in R. These guys are clearly devastatingly brilliant and were incredibly brave to put their careers on the line to uncertain end to fight it out with the bastard in a pitched battle. If you speak the language it's written in, reading both papers is like watching a Tolkien-esque wizard battle fought with statistics rather than ephemeral will, and it gave me a deep respect for just how dangerous LaCour was to everyone around him, being as clever as he clearly is. Honestly, the only case LaCour could make in his R rant is that his fraud was more sophisticated than Broockman, Kalla, and Aronow portray - but he did a terrifyingly effective job in a lot of it. The asshole was clearly intelligent, motivated, and deeply angry about having been so effectively caught.
These guys were faced with a hard right choice and an easy wrong one, and they chose to risk all of their considerable potential by fighting with the precious little power afforded to grad students in academia. These guys are fucking heroes.
posted by Blasdelb at 1:37 PM on April 8, 2016 [29 favorites]
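For readers curious what that kind of forensic work looks like in practice: one widely reported red flag was that the retracted paper's "survey" responses were statistically indistinguishable from an existing public opinion dataset, with follow-up waves that looked like the baseline plus pure noise. The sketch below is not the authors' code (theirs was written in R); it is a minimal Python illustration of that style of check, with simulated data and invented variable names.

```python
# Minimal, illustrative sketch (not the authors' actual code) of the kind of
# forensic check described above: compare a study's baseline responses against
# a public reference dataset, and check whether wave-to-wave "changes" look
# like nothing but clean Gaussian noise. All data here is simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Pretend public reference survey: a 0-100 feeling thermometer.
reference = rng.choice(np.arange(0, 101, 5), size=5000)

# Pretend "suspicious" study: baseline drawn straight from the reference...
suspicious_baseline = rng.choice(reference, size=2000, replace=True)
# ...and a follow-up wave that is just the baseline plus Gaussian noise.
suspicious_followup = suspicious_baseline + rng.normal(0, 3, size=2000)

# Red flag 1: the study baseline is statistically indistinguishable from a
# dataset it should have no relationship to (a high p-value here).
ks_stat, ks_p = stats.ks_2samp(suspicious_baseline, reference)
print(f"KS test vs. reference: D={ks_stat:.3f}, p={ks_p:.3f}")

# Red flag 2: real panel changes are lumpy and respondent-specific; changes
# that look like textbook Gaussian noise are consistent with a copied baseline
# plus simulated noise rather than real people changing their minds.
changes = suspicious_followup - suspicious_baseline
print("normality test on wave-to-wave changes:", stats.normaltest(changes))
```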
I thought this was also a neat aside from the perspective:
Analogic perspective taking is not the most prominent method in the prejudice reduction literature. Activists at the LA LGBT Center developed the intervention by testing different persuasion techniques over more than 13,000 canvassing conversations (8). The current study's success speaks to the promise of a social science that takes the hypotheses of experienced practitioners seriously.
posted by en forme de poire at 1:52 PM on April 8, 2016 [8 favorites]
For those interested in the history of the story from Science, here is a copy of the story - "Ironic coda to fraudulent study of bias."
posted by redct at 1:59 PM on April 8, 2016 [1 favorite]
These guys are clearly devastatingly brilliant and were incredibly brave to put their careers on the line to uncertain end to fight it out with the bastard in a pitched battle.
Oh god. What exactly did they risk? They had every incentive to do what they did and in the way that they did it.
posted by MisantropicPainforest at 2:02 PM on April 8, 2016
I can absolutely understand why folk don't relish the idea of having to undertake hundreds of lengthy conversations with hostile strangers in the hope of mildly raising their estimation of one's right to exist. It's an exhausting labour that I don't have to undergo to be accepted, and it carries risks which I've never had to face.
Well, when you put it that way it sounds terrible. But I've been reading up on decision-making, belief formation, and how people change their minds recently and basically every study says "People don't change their minds, and if you give them facts they hold onto their belief even harder."
It makes intuitive sense that personal contact with a member of the oppressed group will tilt sympathies, but this is the first study I've seen where that hypothesis was actually put into practice.
posted by Anonymous at 2:06 PM on April 8, 2016
"Oh god. What exactly did they risk? They had every incentive to do what they did and in the way that they did it."LaCour put a lot of effort into covering his tracks, and while the statistical case they made was absolutely correct, LaCour did end up being able to make it look pretty convincingly ambiguous. The risk was that LaCour would somehow successfully weasel his way out of it on his own, or someone really powerful like Donald Green - LaCour's adviser who was implicated as being at least ridiculously negligent - would bury it and them by backing him up before any traction got started. Suddenly they would be the jealous assholes trying to ruin a promising career.
While they knew they were right, they certainly had no reasonable guarantee of success. The safer path would have been to do their own stuff right and just wait for someone else to find the fraud, assuming it continued to escalate. Others probably did exactly that, but they took the right path instead, and I'm really glad they're being rewarded for it.
posted by Blasdelb at 2:47 PM on April 8, 2016 [10 favorites]
It makes intuitive sense that personal contact with a member of the oppressed group will tilt sympathies, but this is the first study I've seen where that hypothesis was actually put into practice.
To clarify, this result actually speaks against that, because it appeared not to make a difference whether the canvasser was a member of the oppressed group; what mattered appears to have been getting the subject to intentionally take the perspective of a member of that group. But the paper also didn't test "contact vs. placebo," only "(contact + perspective-taking) vs. (no-contact + perspective-taking) vs. placebo," so I'd imagine it's still possible it has some effect, just not an effect beyond perspective-taking.
posted by en forme de poire at 2:59 PM on April 8, 2016 [7 favorites]
(And of course that's with respect to a brief canvassing interaction, it's certainly still plausible that extended contact with members of a minority has an effect.)
posted by en forme de poire at 3:00 PM on April 8, 2016
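To make the arm comparison en forme de poire describes concrete, here is a toy sketch of how such arms would line up against each other. The data, effect sizes, and arm labels are all invented; this is just generic difference-in-means arithmetic, not the paper's actual estimator.

```python
# Toy sketch (simulated data, invented effect sizes) of the arm comparison
# described above: two perspective-taking arms that differ only in canvasser
# identity, plus a placebo arm.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 300  # respondents per arm, arbitrary

# Simulated post-canvass attitude scores (higher = more supportive).
placebo = rng.normal(50, 15, n)
perspective_ingroup = rng.normal(55, 15, n)   # canvasser from the minority group
perspective_outgroup = rng.normal(55, 15, n)  # canvasser not from the group

def effect(arm, control):
    """Difference in means plus a Welch t-test: the simplest arm comparison."""
    diff = arm.mean() - control.mean()
    _, p = stats.ttest_ind(arm, control, equal_var=False)
    return diff, p

for name, arm in [("in-group canvasser", perspective_ingroup),
                  ("out-group canvasser", perspective_outgroup)]:
    diff, p = effect(arm, placebo)
    print(f"{name}: effect vs. placebo = {diff:+.1f} points (p = {p:.3f})")

# The point of the comment above: both perspective-taking arms beat placebo by
# about the same margin, so canvasser identity adds little beyond the
# perspective-taking exercise itself.
diff, p = effect(perspective_ingroup, perspective_outgroup)
print(f"in-group vs. out-group: {diff:+.1f} points (p = {p:.3f})")
```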
Oh god. What exactly did they risk? They had every incentive to do what they did and in the way that they did it.
In addition to what Blasdelb has already said, becoming associated with work that's been visibly discredited is not without its own career risks. If nothing else, some people will automatically doubt your results because the last person who said what you're saying right now was a lying liar.
posted by aperturescientist at 3:08 PM on April 8, 2016 [4 favorites]
Don Green was not LaCour's advisor. He was Broockman's advisor when Broockman was an undergraduate, and he also hired Aronow. The idea that Broockman, Kalla, and Aronow did this against the wishes of Don Green is laughable.
posted by MisantropicPainforest at 3:19 PM on April 8, 2016 [1 favorite]
Whoops, I misremembered which names were which from the last thread; I should have written Lynn Vavreck. But both advisers absolutely should have seen this coming and would have had all sorts of selfish reasons not to want this to come out.
posted by Blasdelb at 3:28 PM on April 8, 2016 [2 favorites]
Oh god. What exactly did they risk? They had every incentive to do what they did and in the way that they did it.
Broockman wasn’t sure what to do. He started to bring up his concerns with other friends and advisers (about a dozen of them), and they mostly told him one of two things: Either there was a reasonable explanation for the anomalies, in which case bringing attention to them would risk harming Green and especially the less established LaCour unnecessarily; or something really was fishy, in which case it still wouldn’t be in Broockman’s interest to risk being seen as challenging LaCour’s work. There was almost no encouragement for him to probe the hints of weirdness he’d uncovered. [...]
from The Case of the Amazing Gay-Marriage Data
On December 17, 2014, Broockman found himself a bit tipsy with someone he trusted: Neil Malhotra, a professor at Stanford’s business school. [...] A few drinks in, Broockman shared his concerns about LaCour’s data. Malhotra recalled his response: “As someone in your early career stage, you don’t want to do this,” he told Broockman. “You don’t want to go public with this. Even if you have uncontroversial proof, you still shouldn’t do it. Because there’s just not much reward to someone doing this.” If Broockman thought there was wrongdoing behind the irregularities he’d discovered, Malhotra said, it would be a better bet for him to pass his concerns on to someone like Uri Simonsohn, a University of Pennsylvania researcher who already had established an identity as a debunker [...]
[T]he moment your name is associated with the questioning of someone else’s work, you could be in trouble. If the target is someone above you, like Green, you’re seen as envious, as shamelessly trying to take down a big name. If the target is someone at your level, you’re throwing elbows in an unseemly manner. In either case, you may end up having one of your papers reviewed by the target of your inquiries (or one of their friends) at some point — in theory, peer reviewers are “blinded” to the identity of the author or authors of a paper they’re reviewing, but between earlier versions of papers floating around the internet and the fact that everyone knows what everyone else is working on, the reality is quite different. Moreover, the very few plum jobs and big grants don’t go to people who investigate other researchers’ work — they go to those who stake out their own research areas.
posted by Panthalassa at 3:37 PM on April 8, 2016 [13 favorites]
Sounds to me like an external validity mess (as with many social science RCTs). You can persistently change people's beliefs:
- on some issues (but not all)
- for people who demonstrate a willingness to take a survey and participate in a 10-minute conversation intended to challenge their beliefs
- when other groups are not actively using the same approach for an opposing stance
But I don't really see how the canvassing strategy that worked in the trans rights case is helpful for real-world voting scenarios. How can you tell in advance if it will work for a specific issue? Will you be able to reach enough people to make a difference? How is it affected by ideological competition?
posted by cichlid ceilidh at 3:40 PM on April 8, 2016
How is it affected by ideological competition?
I think they address this by looking at exposure to attack ads -- that's part of how they assessed the durability of the change. Unless that's not what you mean?
posted by en forme de poire at 5:27 PM on April 8, 2016 [1 favorite]
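Roughly, the durability check described above amounts to re-surveying people later, randomizing who sees an attack ad first, and asking whether the canvassing effect survives the ad. A toy sketch with invented numbers:

```python
# Toy sketch (all numbers invented) of the durability check: a later survey
# wave where some respondents are randomly shown an attack ad first, so the
# canvass-vs-placebo gap can be compared with and without the ad.
import numpy as np

rng = np.random.default_rng(2)
n = 400

canvassed = rng.integers(0, 2, n).astype(bool)  # original treatment assignment
saw_ad = rng.integers(0, 2, n).astype(bool)     # randomized ad exposure later on

# Simulated later-wave attitudes: a persistent canvassing effect (+5 points,
# invented) and an attack-ad effect that shifts everyone down (-4, invented).
attitude = rng.normal(50, 15, n) + 5 * canvassed - 4 * saw_ad

for exposed in (False, True):
    group = saw_ad == exposed
    gap = attitude[group & canvassed].mean() - attitude[group & ~canvassed].mean()
    label = "saw attack ad" if exposed else "no attack ad "
    print(f"{label}: canvass effect = {gap:+.1f} points")

# If the gap is similar in both rows, the canvassing effect is durable in the
# face of the ad, even though the ad moves everyone's absolute level.
```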
I meant in the case where an anti-trans rights group also went ahead and had similar face-to-face conversations with people. The authors are drawing a distinction between the sort of canvassing the LGBT Center is trying and traditional mailers/ads, so it seems odd to look at whether its impacts are resilient to anti-trans attack ads.
I worked for one of development economics' "randomistas" in college and it's really pretty troubling the way their results are used and talked about (by the media, policy makers, and the researchers themselves). See, e.g., how Kalla and Broockman account for the non-impact of the abortion experiment. It all feels like a house of niche trials, cobbled together with speculative glue and a smattering of p-hacking.
Kalla and Broockman's transparency in terms of their pre-analysis plans and release of code and data is really heartening—certainly much better than what I was seeing in the econ realm not long ago. But it all feels like it's for naught when combined with fuzzy justifications and a lack of forthrightness about what the results actually say. Caveats are boring?
posted by cichlid ceilidh at 9:46 PM on April 8, 2016
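On the p-hacking point, a quick simulation shows why pre-analysis plans matter: if a study measures many outcomes with no true effect anywhere and reports whichever comes out best, "significant" results turn up far more often than the nominal 5% rate. Illustrative numbers only:

```python
# Quick simulation of the p-hacking worry raised above: measure many outcomes
# with no true effect anywhere, keep only the best p-value, and "significant"
# findings appear far more often than the nominal 5% rate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_sims, n_outcomes, n_per_arm = 2000, 10, 200

false_positive_runs = 0
for _ in range(n_sims):
    best_p = 1.0
    for _ in range(n_outcomes):
        treat = rng.normal(0, 1, n_per_arm)    # no real effect in any outcome
        control = rng.normal(0, 1, n_per_arm)
        _, p = stats.ttest_ind(treat, control)
        best_p = min(best_p, p)
    false_positive_runs += best_p < 0.05

print(f"runs with at least one 'significant' outcome: {false_positive_runs / n_sims:.0%}")
# With 10 outcomes this comes out around 40%, which is exactly the failure mode
# a pre-registered primary outcome is meant to rule out.
```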
"I can absolutely understand why folk don't relish the idea of having to undertake hundreds of lengthy conversations with hostile strangers in the hope of mildly raising their estimation of one's right to exist. It's an exhausting labour that I don't have to undergo to be accepted, and it carries risks which I've never had to face."
That was one of the reasons why when I was doing work on a couple similar campaigns, I referred to being straight and cis as superpowers that I was trying to use for good. It was a lot easier for me to shrug off hateful personal attacks aimed at a misconception of who I was than it was for my LGBT coworkers to deal with people who hated them.
I am so fucking chuffed to see this research come out and validate the canvassing method we used. The perspective-taking stuff can be really strong, and the way I used to describe the conversations I'd have was that I wasn't trying to change someone's mind so much as get them to tell me that they already agreed with me. The goal was always to get people to connect emotional values that they wanted to live by with support for broader civil rights protections for LGBT people, especially since (in general) changes of view are a lot more durable when people think it was their idea to come to the conclusion you want them to reach.
posted by klangklangston at 2:38 PM on April 9, 2016 [2 favorites]
I am so fucking chuffed to see this research come out and validate the canvassing method we used.
I wouldn't go that far. Of the 68,000 people they initially contacted for the survey, only about 400 were surveyed after the treatment was administered. The external validity of this experiment is almost nil.
posted by MisantropicPainforest at 4:15 PM on April 9, 2016
But I don't really see how the canvassing strategy that worked in the trans rights case is helpful for real-world voting scenarios.
Trans rights are very much a real-world voting scenario.
posted by Dysk at 4:34 AM on April 10, 2016 [2 favorites]
Trans rights are very much a real-world voting scenario.
Trans rights are very much a real-world issue.
That doesn't mean anything for the viability of this canvassing strategy. The researchers conducted a pre-selection through a survey to reduce what would otherwise have been a *hugely* expensive approach. This allowed them to whittle down the pool of people to be canvassed to ones more likely to open the door and listen. In the process they also, well, whittled down the pool of people impacted.
Outside of the context of a political science research project, you're going to want to impact a lot of people, otherwise it won't swing general opinion much. So you're not going to be able to do the cherry-picking of canvassed people, like they did here, if you want any sizable population-level effect. That brings the cost of canvassing back up.
And then you have the external validity issue, where the study does not tell us anything about how effective the canvassing strategy is in other geographies, for people who did not respond to the survey, at other points in time, when there are opposing initiatives using the same technique this study endorses for opposing ideologies, or when those people who were once willing to have a 20 minute conversation with a canvasser no longer are.
The 538 reporter brings some of this up with Broockman, whose response is basically that the treatment and control groups are biased in the same way, since they were properly randomized. This makes the results internally valid, but the sample is still biased and the results are not necessarily externally valid.
It's just bizarre (but common) that Broockman and Kalla went out of their way to conduct a rigorous analysis (pre-analysis plans, formal analysis, release of code and data) and then they basically throw it all away with this nonsense.
Notably, while this study shows that the canvassing strategy used had a positive impact on the recipients' opinions on trans rights, it did not compare the LGBT Center's approach to other possible ones (for the same issue and population).
Maybe there exists one that is cheaper to implement, that only works for progressive causes, that is resistant to the short-term effects of attack ads, *and* that is more impactful at the population level, both for entrenched issues (abortion) and for ones most people are unfamiliar with (trans rights).
That would be great!
As a final note, it's worth looking at these two posts by Chris Blattman where he ponders RCTs, internal and external validity, and meta-analysis (1, 2). See also the comments for dissenting views. I don't agree with the idea that we should be aiming for lots of cheap, underpowered studies and doing a meta-analysis of their results, but I agree that social science researchers "invest very little as a profession in how to scientifically and reliably maximize out of sample relevance."
posted by cichlid ceilidh at 7:02 AM on April 10, 2016
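The internal-versus-external-validity distinction here is easy to see in a toy simulation: randomization recovers the average effect among the people who agreed to participate, but if willingness to open the door correlates with persuadability, that number can differ from the population-wide effect. Everything below is invented for illustration.

```python
# Toy illustration (invented numbers) of the internal-vs-external validity
# point above. Randomization inside the recruited sample recovers that
# sample's average effect, but door-openers are not a random draw from the
# population, so the population-wide effect can be different.
import numpy as np

rng = np.random.default_rng(4)
pop = 100_000

persuadability = rng.normal(5, 3, pop)  # each person's true treatment effect
# Assume willingness to answer the survey / open the door rises with
# persuadability (a deliberately pessimistic assumption for illustration).
opens_the_door = rng.random(pop) < 1 / (1 + np.exp(-(persuadability - 7)))

sample = np.where(opens_the_door)[0]
treated = rng.random(sample.size) < 0.5  # randomization within the sample

# Outcome = noisy baseline + individual effect if treated.
outcome = rng.normal(50, 10, sample.size) + persuadability[sample] * treated

estimate = outcome[treated].mean() - outcome[~treated].mean()
print(f"estimated effect in the recruited sample: {estimate:.1f}")
print(f"true average effect among participants:   {persuadability[sample].mean():.1f}")
print(f"true average effect in the population:    {persuadability.mean():.1f}")
# The first two numbers agree (internal validity); the third is noticeably
# smaller (external validity), because the sample over-represents persuadable
# people.
```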
"Outside of the context of a political science research project, you're going to want to impact a lot of people, otherwise it won't swing general opinion much. So you're not going to be able to do the cherry-picking of canvassed people, like they did here, if you want any sizable population-level effect. That brings the cost of canvassing back up."
Yeah, you are. For a real voting issue, you're going to take your VAN and filter out people who don't vote or who are unlikely to vote. Then you're going to filter out people who consistently vote against you, inferred from partisan primary participation and other contact records. Then you're going to look at your record of contact and see who has been canvassed before by you (or by one of the other databases that you buy that has a similar partisan distribution), or who has given money. Then, optionally, you're going to do a phone canvass using similar scripts, which will help you weed out people who already agree or who are strongly opposed. Then you're going to go face-to-face with weak supporters, because boosting their turnout is going to gain you the biggest advantage — you're going to keep touching base with them to make sure that they have a way to get to the polls, etc. The second group to go after is people who have been receptive to canvassing before but are weak opposition or neutral. With either of those, you shift the script to focus more on soft persuasion, because you want them to either agree with you or stay home, so you can't piss them off enough to motivate them to vote against you.
But yes, any modern ballot campaign has a pretty good idea of who they need to reach, and the biggest knock against persuasion canvassing in general has been that the persuasion hasn't been durable. People will regress pretty quickly on followup interviews, so what's exciting about this is that they demonstrated a persistence of change of opinion.
In working on similar projects for marriage and for the Fair Education Act, we had hundreds of thousands of conversations, which were supposed to contribute to the LaCour dataset. I was dubious of some of it because I saw the way the data was collected, which was pretty noisy, and he got much better results than we did. But this study is pretty much in line with what we saw, and really does represent a practical approach for real-world campaigning. (It does also support what we saw in a lot of message testing — people don't know what "transgender" means, so their opinions were more mutable on that than on marriage.)
posted by klangklangston at 1:50 PM on April 10, 2016 [3 favorites]
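For readers who haven't worked a campaign, the funnel klangklangston describes is essentially a series of filters over a voter file. The sketch below uses a hypothetical record layout (the field names are invented, not VAN's actual schema) just to show the shape of the triage.

```python
# Rough sketch of the targeting funnel described above, using a hypothetical
# voter-file record (field names invented here, not VAN's actual schema).
from dataclasses import dataclass

@dataclass
class Voter:
    name: str
    turnout_score: float        # modeled likelihood of voting, 0-1
    support_score: float        # 0 = reliably against us, 1 = reliably with us
    prior_canvass_result: str   # "support", "undecided", "oppose", "no contact"

def canvass_tier(v: Voter) -> str:
    """Triage a voter into the rough tiers described in the comment above."""
    if v.turnout_score < 0.3:
        return "skip: unlikely to vote"
    if v.support_score < 0.2:
        return "skip: consistent opposition (don't motivate them)"
    if v.prior_canvass_result == "support":
        return "tier 1: weak supporter - face-to-face, focus on turnout"
    if v.prior_canvass_result in ("undecided", "no contact"):
        return "tier 2: persuadable - soft persuasion script"
    return "tier 3: weak opposition - persuade, or at least don't anger"

# Example with a made-up record:
print(canvass_tier(Voter("J. Doe", turnout_score=0.8, support_score=0.5,
                         prior_canvass_result="undecided")))
```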
Thanks for the insight, klangklangston.
I'll own up to not being familiar with the persuasion literature. And I clearly was not considering this as part of a constellation of simultaneous approaches. My experience is in the realm of things where the goal is very much scaling a treatment up to a larger population (e.g., deciding you should spend millions on bed nets for a whole country after trying it in a few villages).
people don't know what "transgender" means, so their opinions were more mutable on that than on marriage.
Do you think there would have been a durable impact from just explaining the term, without the rest of the up to 20 minute dialogue? I'm also curious about the presence of an impact from the attack ads, since there was a short term regression (even if it did not persist). Is that worrisome when there may be many such ads in the period immediately before a vote, when we really care about people's stance?
posted by cichlid ceilidh at 3:57 PM on April 10, 2016 [2 favorites]
"Do you think there would have been a durable impact from just explaining the term, without the rest of the up to 20 minute dialogue? I'm also curious about the presence of an impact from the attack ads, since there was a short term regression (even if it did not persist). Is that worrisome when there may be many such ads in the period immediately before a vote, when we really care about people's stance?"
Not really. We did a fair amount of message testing around something that we (luckily) avoided seeing on the ballot — AB 1266 — and found that without any explanation, people didn't have a firm opinion at all, and the most frequent question was just what "transgender" meant. People then tended to support equal access to education for transgender students after that was explained, but their support basically lasted only until they heard the opposition messaging (rapists in your bathrooms/teen boy peepers). We do know that with stuff like marriage, people's opinions tend to harden in the direct lead-up to the vote, so they're actually more resistant to change when they're being bombarded — the effect that the last-minute ads tend to have is on turnout, not on opinions.
posted by klangklangston at 10:00 AM on April 11, 2016 [2 favorites]
This thread has been archived and is closed to new comments