Supreme Court opinions successfully modeled as Facebook like button
November 17, 2011 7:35 AM

Roger Guimera Manrique and Marta Sales-Pardo have shown that "U.S. Supreme Court justice votes are more predictable than one would expect from an ideal court composed of perfectly independent justices."

"For our predictions, we use models and methods that have been developed to uncover hidden associations between actors in complex social networks. We show that these methods are more accurate at predicting justices' votes than forecasts made by legal experts and by algorithms that take into consideration the content of the cases."
posted by jeffburdges (45 comments total) 4 users marked this as a favorite
 
Could we replace a justice of the U.S. Supreme Court by an algorithm that does not know anything about the law or the case at hand, but has access to the remaining justices' votes and to the voting record of the court?

10 DO NOT ASK ANY QUESTIONS
20 VOTE WITH TONY
30 GOTO 10

posted by goethean at 7:40 AM on November 17, 2011 [16 favorites]


We find that U.S. Supreme Court justice votes are more predictable than one would expect from an ideal court composed of perfectly independent justices.

I don't really understand this claim in an academic paper. Is it not ideal for a Supreme Court judge to consistently and predictably interpret the law? Judges can still be independent while consistently disagreeing with each other.
posted by East Manitoba Regional Junior Kabaddi Champion '94 at 7:41 AM on November 17, 2011 [2 favorites]


They make it sound ominous, but I thought it was well-known, and desired, that the judges discuss the case with each other and try to win each other over.
posted by DU at 7:42 AM on November 17, 2011 [2 favorites]


I think they're using "ideal" in the mathematical sense of "easy to analyze", as opposed to the real-people sense of "best".
posted by madcaptenor at 8:07 AM on November 17, 2011


Is it not ideal for a Supreme Court judge to consistently and predictably interpret the law? Judges can still be independent while consistently disagreeing with each other.

Right, the problem is that for some reason this study expects each individual judge to have decisions that are statistically independent of every other judge's decisions, which is a completely absurd expectation. It's very reasonable to expect there to be differences in values among the judges and that judges who are most similar to each other would have similar votes. The "ideal court" where all the relationships between judge decisions vary wildly from case to case would make far less sense in terms of the thought processes that go into making these kinds of decisions.
posted by burnmp3s at 8:07 AM on November 17, 2011 [4 favorites]


I don't really understand this claim in an academic paper. Is it not ideal for a Supreme Court judge to consistently and predictably interpret the law? Judges can still be independent while consistently disagreeing with each other.

The thing is that many judges claim that they're mere "umpires" who simply call the shots as they see them, as contrasted with those "other" judges who allow themselves to be swayed by inappropriate motivations. This claim has much less juice when it's obvious that every person brings their own frame of reference to the table, irrespective of claims of greater fidelity to the Constitution.
posted by Sticherbeast at 8:12 AM on November 17, 2011 [1 favorite]


So... what they're suggesting is that SCOTUS decisions are influenced by social interaction, communication, and human patterns of thought?

OUTRAGE!
posted by IAmBroom at 8:13 AM on November 17, 2011


I think they're using "ideal" in the mathematical sense of "easy to analyze", as opposed to the real-people sense of "best".

OK, but how are they using "independent"? I still think two independent judges can predictably disagree.
posted by East Manitoba Regional Junior Kabaddi Champion '94 at 8:13 AM on November 17, 2011


It's disheartening how discussions devolve into nit-picking about process these days. The actual subject takes a back seat amid wrangling over "false equivalency" and "cause and effect" and "bias". Those are important, to be sure, but the issues still need to be discussed.

FACT: Basic corporate behavior is fucking over the mass of this country, has been for the last three decades at least, and has been helped and is being helped by elected representatives who flat out refuse to represent the majority of the electorate. And yet a huge number of people will yell "unAmerican" at anyone who dares to complain.

FACT: We have a chief justice who said he was to be "an umpire who simply calls balls and strikes" and that "precedent is the most important thing", who has quickly and consistently run in the opposite direction. The main conservative justices on this court have absolutely no problems with speaking at conservative functions or making blatantly political statements. Citizens United is a tortured, twisted, activist and highly indefensible decision that is harmful to this country.

My point is this: Fuck nit-picking. If someone is punching you in the face, his motivation is secondary. Removing his fist from your face is the primary concern. Anything else is working on the splinter while missing the log.
posted by Benny Andajetz at 8:17 AM on November 17, 2011 [13 favorites]


OK, but how are they using "independent"? I still think two independent judges can predictably disagree.

If they're independent, by definition, they won't predictably agree or disagree. Their "ideal" court is created by taking the data from an actual court and rearranging the "yes" and "no" votes within each case randomly. So you end up with the same number of cases at 9-0, 8-1, ... as you do in the actual court, but the relationships between the different judges are destroyed.

"Independent" here means that if I know how A voted, that tells me nothing about how B voted. If you tell me that A voted "yes", that makes it neither more likely nor less likely that B will vote "yes". So nobody seriously thinks that would be a good model of voting; in practice A and B's votes will be correlated because they're both connected to some outside signal, namely the facts of the case.

I think non-mathematical readers might be interpreting "independent" as just meaning that the two judges don't talk to each other. But mathematically, "independent" is much stronger; it means that the judges flip coins to decide how to vote.
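A toy sketch in code (mine, not from the paper) of what that within-case reshuffling does:

```python
import random

# A toy voting record: rows are cases, columns are five "justices".
# Justices 0 and 1 always vote together here; the shuffle destroys that.
votes = [
    [1, 1, 0, 0, 1],
    [0, 0, 1, 1, 0],
    [1, 1, 1, 0, 0],
    [0, 0, 0, 1, 1],
]

def shuffle_within_cases(record, rng):
    """Build the 'ideal court' null model: reshuffle each case's votes
    among the justices, preserving that case's vote split."""
    return [rng.sample(case, len(case)) for case in record]

rng = random.Random(0)
null = shuffle_within_cases(votes, rng)

# Each case keeps its split (a 3-2 case stays 3-2)...
assert [sum(c) for c in null] == [sum(c) for c in votes]
# ...but which justice cast which vote is now random, so any
# justice-to-justice correlation in the original record is gone.
```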
posted by madcaptenor at 8:33 AM on November 17, 2011 [2 favorites]


From the PLoS paper:

"Under these assumptions, the ruling on a case can be modeled as a binomial process where each justice has the same probability q of agreeing with the petitioner, so that “easy cases” (those with q ~ 0 or 1) result in unanimous decisions, whereas “hard cases” (q ~ 0.5) generally result in divided votes. The defining characteristic of an ideal court is that justices' votes are uncorrelated, that is, the fact that two justices agree (or disagree) on one case carries no information about their potential agreement on another case."

Their definition of ideal is that the presented cases are either "easy" (where the solution is obvious and you end up with almost unanimous decisions) or "hard" (where the cases are complicated and you get split decisions). They model this by having each judge flip a biased coin (with a probability q of getting heads), and s/he votes yes if it's heads. The amount of bias on the coin depends on how easy the case is.

This definition is certainly ideal in the mathematical sense, and equally certainly not-ideal in the real world.
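For the curious, here's a minimal simulation of that coin-flip model; the q values and case counts are made up for illustration:

```python
import random

def ideal_vote(q, n_justices=9, rng=random):
    """Each justice independently agrees with the petitioner with
    probability q -- one biased coin flip per justice. Returns the
    number of 'yes' votes."""
    return sum(1 for _ in range(n_justices) if rng.random() < q)

rng = random.Random(42)

# "Easy" cases (q near 1) come out almost always unanimous...
easy = [ideal_vote(0.99, rng=rng) for _ in range(1000)]
unanimous = sum(1 for yes in easy if yes in (0, 9))

# ...while "hard" cases (q ~ 0.5) almost always split.
hard = [ideal_vote(0.5, rng=rng) for _ in range(1000)]
divided = sum(1 for yes in hard if 0 < yes < 9)
```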
posted by bessel functions seem unnecessarily complicated at 8:33 AM on November 17, 2011 [2 favorites]


If your wife is a lobbyist who directly benefits from your decisions, that might have something to do with the impartiality.
posted by benzenedream at 8:35 AM on November 17, 2011 [3 favorites]


It would be interesting to compare versions of the Court across the years. The culture of the Court has shifted dramatically through the years.
posted by Sticherbeast at 8:36 AM on November 17, 2011


They should be predictable. The Constitution should be the forecast.
posted by BurnChao at 8:39 AM on November 17, 2011


If your wife is a lobbyist who directly benefits from your decisions, that might have something to do with the impartiality.

I understand that might look bad to some, but it would only be actually bad if Thomas' wife traveled back in time to ensure he was consistent in opposing measures like ObamaCare throughout his time on the Court--because he has been. There was never a chance he would say any part of this law was constitutional. Whereas your Scalias and Alitos etc. will play along and take a law and apply it if there's not some challenge to its constitutionality, Thomas will ask that question himself and dissent almost every time.

Besides, if he recused himself based on his wife's indirect support, Kagan must certainly recuse based on her own indirect support, and so it's one for one and we're looking at seven* instead of nine lawyers telling us what our own Constitution means.

*Of course it's really just one lawyer these days.
posted by resurrexit at 8:49 AM on November 17, 2011


They should be predictable. The Constitution should be the forecast.

The Constitution is a pretty general document when you really think about it, and worse, the Justices often don't agree on what it means to follow the Constitution. On the tougher issues, it's not so simple to say that the Constitution forecasts this or that, because different people will have different views as to what the Constitution even means re: this or that, especially when applied to situations not conceived of at the time of the Constitution or its Amendments' writing.

There's also the matter of precedent in the Supreme Court. Technically, the Supreme Court could overrule its own judge-made law on a dime, but they don't, because everyone (save maybe Thomas) respects precedent. So, they also have to navigate a fair amount of that cruft. There will always be distinctions and similarities between cases, making this an adventure. It's not always clear when to apply, extend, limit, refuse to apply, or to expressly overrule an established bit of case law.
posted by Sticherbeast at 8:49 AM on November 17, 2011 [1 favorite]


Is it not ideal for a Supreme Court judge to consistently and predictably interpret the law?

Ann Coulter might "consistently and predictably interpret the law", but that's not a good thing, no.

You don't want judges to interpret on an ideologically consistent basis; you want them to judge the merits of the case, and the algorithm suggests that the merits of the case have a lot less to do with the judgments than is ideal.
posted by -harlequin- at 8:49 AM on November 17, 2011 [1 favorite]


You don't want judges to interpret on an ideologically consistent basis; you want them to judge the merits of the case...

But basing a decision on the merits of the case, whatever that means to your ideal judge on that particular day, is your ideology, so I guess at least your ideal judge would be consistent in being inconsistent (assuming the merits of one case warranted your interpreting a law differently than in another case applying the same law)?
posted by resurrexit at 9:00 AM on November 17, 2011


Okay, this is one of those "non-political-science people doing simple political science, re-finding well-established results that they think are unique or interesting" situations.

The idea that justices care deeply about the merits of the case and about precedent is called the legal model. Nobody but lawyers believes it, or at least pretends to believe it.

Everyone who's not a lawyer understands that the justices rule on the basis of their policy preferences; this is called the attitudinal model. It won. Period. The remaining disputes from people who actually analyze this stuff are about how strategic and long-term-view their actions are.

What they've done here is not news. Not remotely news, unless we've somehow slipped back to the 1970s. The best you can say about what they're doing is that it's marginally interesting that they're analyzing it in terms of network analysis and blocs instead of the more usual technique of estimating an ideal point in some N-space from their votes.
posted by ROU_Xenophobe at 9:00 AM on November 17, 2011 [2 favorites]


Now I understand that

We find that U.S. Supreme Court justice votes are more predictable than one would expect from an ideal court composed of perfectly independent justices

can be paraphrased as

We find that U.S. Supreme Court justices can be better modeled as weighted random number generators than as non-weighted random number generators

the statement makes a lot more sense. But it doesn't appear very insightful.
posted by East Manitoba Regional Junior Kabaddi Champion '94 at 9:00 AM on November 17, 2011 [2 favorites]


10 DO NOT ASK ANY QUESTIONS
20 VOTE WITH TONY
30 GOTO 10


10 STAY IGNORANT REGARDING THE SUPREME COURT
20 GOTO 10
posted by gyc at 9:04 AM on November 17, 2011 [4 favorites]


Everyone who's not a lawyer understands that the justices rule on the basis of their policy preferences; this is called the attitudinal model. It won. Period. The remaining disputes from people who actually analyze this stuff are about how strategic and long-term-view their actions are.

This sounds fascinating. If I wanted to read more about it where should I start? (If it matters, I have JSTOR and don't mind equations, but I know fuck-all about political science.)
posted by nebulawindphone at 9:13 AM on November 17, 2011


Everyone who's not a lawyer understands that the justices rule on the basis of their policy preferences; this is called the attitudinal model. It won. Period.

I've heard many, many real life non-lawyers approvingly deliver some version of the "judges should be like umpires" spiel.

Almost no lawyers actually think the Justices are like neutral oracles who simply read the law from the Constitution. That theory of jurisprudence died sometime in the early 20th century. Someone like Scalia, who holds a view similar to this, is more like a fundamentalist than a traditionalist.

There has, on the other hand, been quite a bit of interesting literature about the song and dance of choosing Justices, avoiding candidates who have strongly stated political beliefs, even though everyone knows that all Justices have some sort of political belief system or another.

10 DO NOT ASK ANY QUESTIONS
20 VOTE WITH TONY
30 GOTO 10

10 STAY IGNORANT REGARDING THE SUPREME COURT
20 GOTO 10


"Thomas always votes with Scalia" is at various points a myth or a specious exaggeration. Check the Voting Alignment section on his Wikipedia page. Depending on how you calculate it, the numbers vary, with one metric putting Scalia/Thomas below Breyer/Ginsburg as far as twin voting is concerned, whereas another metric states that while Thomas and Scalia may vote together 91% of the time, Breyer and Ginsburg vote(d) together 90% of the time. A whopping 1% difference.

You don't have to particularly like Justice Thomas, but he's not an idiot, and he's not a Scalia clone. Thomas makes Scalia look practically reasonable in comparison.
posted by Sticherbeast at 9:16 AM on November 17, 2011 [1 favorite]


This sounds fascinating. If I wanted to read more about it where should I start? (If it matters, I have JSTOR and don't mind equations, but I know fuck-all about political science.)

With Segal and Spaeth.

Or a blog length summary thereof.

Or get hands on and dive into the raw data yourself.
posted by T.D. Strange at 9:21 AM on November 17, 2011 [2 favorites]


I don't follow this lit at all, but was going to recommend that Segal & Spaeth book as well as probably a good intro. But I've not read it.

If you have jstor already and don't want to part with your lucre, just fire up google scholar and search for segal and spaeth, or for "quinn martin supreme court", or even just "attitudinal model supreme court"
posted by ROU_Xenophobe at 9:25 AM on November 17, 2011


The point is not so much that it is surprising that there are social network effects, it's that these are a better predictor of outcome than expert opinion or considering the content of the cases. Whether this tells us something about judges or about the ability of social networks to distill information is open to question.
posted by Nothing at 9:39 AM on November 17, 2011


How Judges Think by Richard Posner is more current, comprehensive, and informed than Segal & Spaeth, though the latter is still good.
posted by anigbrowl at 10:14 AM on November 17, 2011 [1 favorite]


But mathematically, "independent" is much stronger; it means that the judges flip coins to decide how to vote.

This is quite wrong. The authors' definition of independence is put forth directly in the paper:

The defining characteristic of an ideal court is that justices' votes are uncorrelated, that is, the fact that two justices agree (or disagree) on one case carries no information about their potential agreement on another case...

If the court is less than ideal and some of its justices cast votes with a consistent bias, the decisions of individual justices become more predictable because, given the vote of eight justices on a case, one can use the track record of the court to classify the case into a certain “block”; then the track record of the ninth justice enables one to assess what is her most likely decision for cases in that block. In other words, bias introduces correlations between justices' voting patterns, which in turn result in increased predictability.


It's all in the paper, folks. Just give it a read.
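If it helps, here's a hypothetical sketch of the block idea from that second quote; the block labels and track record below are invented, not taken from the paper:

```python
from collections import Counter

# Classify each past case by how the other eight justices split
# (its "block"), then predict the ninth justice's vote from her
# track record within that block.
history = [
    # (block the case falls into, how justice 9 voted)
    ("bloc-A-majority", "yes"), ("bloc-A-majority", "yes"),
    ("bloc-A-majority", "no"),
    ("bloc-B-majority", "no"), ("bloc-B-majority", "no"),
]

def predict_ninth(block):
    """Most likely vote for justice 9, given the case's block."""
    track_record = [vote for b, vote in history if b == block]
    return Counter(track_record).most_common(1)[0][0]
```

If the justices' votes really were uncorrelated, the track record within a block would carry no signal and this predictor would do no better than chance.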
posted by Blazecock Pileon at 10:33 AM on November 17, 2011


I had a colleague and mentor who occasionally had lunch with Justice Thomas.

I presume no-one ordered a Coke?
posted by Capt. Renault at 11:03 AM on November 17, 2011 [1 favorite]


Shocked, shocked.
posted by zomg at 11:07 AM on November 17, 2011


Blazecock, uncorrelated is a fancy way to say random. Note in Figure 1 how the ideal court control data was obtained by randomly reshuffling the votes for each case. The source data includes classifications by area of law, which are ignored, as are any complex cases with more than a single dissent or which address more than a single question of law.
posted by anigbrowl at 11:14 AM on November 17, 2011


Anigbrowl: Uncorrelated isn't just a fancy way to say random. One can generate correlated random variables without much difficulty. In the real world, for instance, pick a random name out of the phone book and ask about that person's gender and height: both random, but definitely correlated.
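A quick dependency-free sketch of the same point, with a shared Gaussian signal standing in for whatever common driver links the two variables:

```python
import random

# Two random variables that share a common driver are random
# yet correlated -- knowing one tells you something about the
# other without either being deterministic.
rng = random.Random(1)
xs, ys = [], []
for _ in range(10000):
    shared = rng.gauss(0, 1)             # common signal
    xs.append(shared + rng.gauss(0, 1))  # A's private noise
    ys.append(shared + rng.gauss(0, 1))  # B's private noise

def corr(a, b):
    """Pearson correlation, computed by hand to avoid dependencies."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n
    va = sum((x - ma) ** 2 for x in a) / n
    vb = sum((y - mb) ** 2 for y in b) / n
    return cov / (va * vb) ** 0.5

# The theoretical correlation here is 0.5, far from zero,
# even though every draw is perfectly random.
```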

Not to say that the paper is at all useful. It uses a nice technique from the study of clustering in complex networks, namely stochastic block models, to observe something patently obvious and surely observed at length and in more depth in the field-specific literature. It's a cute exercise, but the only thing that struck me as at all surprising given their definitions was that they observe that today's court is less "predictable" than previous eras. It's not entirely clear to me, though, if this is an artifact of their selection process.

Perhaps, though, there is some merit in the approach of thinking of rulings on cases as something like binary measurements from a high-dimensional value-space. You can then find a maximum-likelihood "closeness" between judges. It does feel more principled than either simple correlations or coding-based arbitrary axes, even if it doesn't really deliver richer results in its current form. But remember, PLoS ONE is a journal that explicitly emphasizes technical correctness over impact or importance.
posted by Schismatic at 11:28 AM on November 17, 2011


You don't want judges to interpret on an ideologically consistent basis; you want them to judge the merits of the case, and the algorithm suggests that the merits of the case have a lot less to do with the judgments than is ideal.

That's not what the study is saying at all. The study is showing that when the judges disagree (which is often), they usually disagree in ways that are not completely random when compared to previous decisions. In the study's "ideal" court, any differences between judges would be completely random, which would make absolutely no sense. Let's say that in one case, one judge decided that it was fair to make an interpretation about the intent behind a law, and another judge thought that it was more important to adhere to the letter of the law. For future cases in the ideal version, there can be no inherent difference between these two judges where one more loosely interprets a law and one more strictly adheres to a law, because at that point their decisions are no longer independent from one another.
posted by burnmp3s at 11:42 AM on November 17, 2011


Blazecock, uncorrelated is a fancy way to say random.

No, it really isn't. Independence has nothing to do with the randomness of tossing a fair coin. Independence doesn't say the judges are tossing a coin to make decisions. All independence says is that one toss of a coin, whether fair or biased, doesn't affect the next toss. It says nothing about how much (or little) information is in any one event. Again, this is in the paper.
posted by Blazecock Pileon at 11:48 AM on November 17, 2011


So is their use of random shuffling to obtain an ideal court for comparison to the real thing.
posted by anigbrowl at 12:03 PM on November 17, 2011 [1 favorite]


You don't have to particularly like Justice Thomas, but he's not an idiot, and he's not a Scalia clone. Thomas makes Scalia look practically reasonable in comparison.

I think that is exactly Thomas' function in a coordinated conservative strategy. Also, whether Thomas is an 'idiot' depends on your definition of that word, but from what I've seen of him he's a deeply foolish and unserious thinker. People are much too prepared to respect the intelligence of powerful people who throw up polysyllabic justifications of their actions, even if those actions show profound disrespect for the social role they are supposed to hold.
posted by zipadee at 12:12 PM on November 17, 2011


Independence doesn't say the judges are tossing a coin to make decisions. All independence says, is that one toss of a coin, whether fair or biased, doesn't affect the next toss.

Then explain how two judges can disagree on one case in a way that would give you absolutely no information about the likelihood of them disagreeing on another case. For example, Judge A gives principle X for a decision, whereas Judge B disagrees with the importance of principle X. If principle X comes up in a new but similar case, isn't it reasonable to think that Judge A is going to be consistent and still agree with principle X, whereas Judge B will continue to reject it?
posted by burnmp3s at 12:33 PM on November 17, 2011 [2 favorites]


I'm afraid anyone criticizing the notions of ideal and independent has completely failed reading comprehension on the quotes in the post, much less the linked article.

"We show [methods that uncover hidden associations between actors in complex social networks] are more accurate at predicting justices' votes than forecasts made by legal experts and by algorithms that take into consideration the content of the cases."

In other words, the justices' own established legal opinions as interpreted by legal scholars cannot predict their votes nearly as well as the authors' social network models for hidden associations. An algorithmic analysis of case content fared almost as well as the legal experts, but again vastly undershot the social model.

I haven't read the paper closely enough to determine if it addresses whether justices influencing one another fully accounts for their deviations from legal experts' predictions.
posted by jeffburdges at 1:00 PM on November 17, 2011


Actually, the algorithmic method outperformed the legal experts, as you would know if you were familiar with the widely cited paper they use for comparison, which also offers an explanation for why this might be so.
posted by anigbrowl at 1:28 PM on November 17, 2011


The bltimes.com article claimed 67.9% for legal experts vs. 66.7% for the algorithm, which differs widely from your citation, suggesting their dataset or algorithm differed.
posted by jeffburdges at 1:57 PM on November 17, 2011


The difference is whether one is talking about the votes of justices or the overall decision of the court. This paper shows that experts are right 59.1% of the time for court decisions, and 67.9% of the time for individual judges (Tables 1 & 2).
posted by chortly at 2:17 PM on November 17, 2011 [1 favorite]


I went by the citation in the PLOS paper, which is in footnote 2. The narrow gap cited in the article is from the paper in footnote 5 of the PLOS paper and deals with forecasting votes of individual justices, not the whole court.

Why are you even talking about the newspaper article when you can just reference the primary source material directly? Newspapers are notorious for the poor quality of their science reporting and highlighting a statistic that isn't appropriate for comparison as they did here is a good example of why.
posted by anigbrowl at 2:19 PM on November 17, 2011


In other words, the justices' own established legal opinions as interpreted by legal scholars cannot predict their votes nearly as well as the authors' social network models for hidden associations.

All that means is that the social model captures the unique differences between the judges better than the expert predictions or the case content analysis. The same way that a Netflix-style prediction engine could predict what kind of films a person might like better than other methods. The "ideal" case imagines some sort of situation where such a prediction model would not work at all because no such consistent differences exist.
posted by burnmp3s at 2:49 PM on November 17, 2011


I haven't read the paper but I'd like to point out that PLoS ONE is a very unusual journal because it exists to publish correct-but-maybe-not-relevant-or-useful results. Same place that published the recent study of the neuroscience of how baseball players manage to hit the ball, for example.

It seems to me kind of obvious that judges would influence each other's votes; what else would they do when they're working on those opinions? This is just an interesting method for proving that they do so.
posted by miyabo at 2:51 PM on November 17, 2011


The key thing for the Netflix challenge, which they didn't go out of their way to highlight, is that when trying to guess someone's rating of a movie, you're off an average of 1 star (out of 5) if you just guess the overall (mean) rating of that movie. At the start of their challenge, their in-house, not unsophisticated model using millions of individual ratings was able to improve that by 0.1 stars. The entire challenge was about improving on that by another 0.1 stars. So even with tons of data, predicting these sorts of things is really hard if you don't cheat.
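To make that baseline concrete, here's a toy computation with invented ratings: predict each rating as the movie's mean, then score the error.

```python
from collections import defaultdict

# Invented toy ratings: (user, movie) -> stars out of 5.
ratings = {
    ("u1", "m1"): 5, ("u2", "m1"): 3, ("u3", "m1"): 4,
    ("u1", "m2"): 2, ("u2", "m2"): 4, ("u3", "m2"): 3,
}

# The baseline predictor: each movie's mean rating.
by_movie = defaultdict(list)
for (user, movie), stars in ratings.items():
    by_movie[movie].append(stars)
movie_mean = {m: sum(v) / len(v) for m, v in by_movie.items()}

# Root-mean-square error of predicting every rating as its
# movie's mean -- the figure the Netflix Prize tried to shave.
sq_errs = [(stars - movie_mean[m]) ** 2
           for (u, m), stars in ratings.items()]
rmse = (sum(sq_errs) / len(sq_errs)) ** 0.5
```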
posted by chortly at 3:37 PM on November 17, 2011




This thread has been archived and is closed to new comments