The man who sued his trolls
August 29, 2018 9:52 PM   Subscribe

This man has decided, in the wake of Charlottesville, to sue his trolls. After sharing video of the deadly neo-Nazi rally in Charlottesville, Brennan Gilmore had his life upended by online tormentors. Now he's testing whether you can bring them to justice.

When law enforcement won’t help, citizens always have another option: hit the bad guys in the wallet. But it’s not that easy. The forensic work of identifying anonymous online perpetrators is expensive, and many attorneys won’t take the cases because the individual trolls usually don’t have much money to cough up even if you can find them.

A fascinating look at whether the legal system can be used to attack trolls.
posted by Homo neanderthalensis (26 comments total) 28 users marked this as a favorite
 
Good article. Depressing as hell but it’s interesting the various tactics different people are trying.
posted by not_the_water at 10:35 PM on August 29, 2018 [1 favorite]


This is such a tough problem. You can only get so far outing people to their employers, which, as far as I know, is the only effective deterrent.

Still wondering if the internet will turn out to be a good idea in the long run.
posted by Tell Me No Lies at 10:52 PM on August 29, 2018 [2 favorites]


Still wondering if the internet will turn out to be a good idea in the long run.

Well, if it doesn't there might be some relief in knowing the run won't likely be all that long.
posted by gusottertrout at 11:37 PM on August 29, 2018 [6 favorites]


Still wondering if the internet will turn out to be a good idea in the long run.

I'm currently reading Crash Override, Zoe Quinn's book about being the canary in the coal mine for this stuff. Her position is, even after everything she's been through, the internet was still a good idea. I figure she's got more reasons than most to want to throw the whole internet in the bin, but a lot of her book is about how the internet gave her opportunities to grow into the person she became that she never would have had otherwise.
posted by Merus at 12:15 AM on August 30, 2018 [6 favorites]


during an online hate storm, individual members of the mob might send only one or two threatening messages each—not enough to constitute a pattern, as the laws require, says Danielle Citron, a University of Maryland law professor and the author of Hate Crimes in Cyberspace.

In the UK, the Crown Prosecution Service now treats the existence of an online mob as relevant to determining the harm that an individual message does and therefore to whether to prosecute: "[Prosecutors must take account of] the circumstances of and the harm caused to the victim, including whether ... whether this was part of a coordinated attack (“virtual mobbing”)". I do think there are freedom of speech arguments against some of the prosecutions that get brought in the UK, but in general I think the UK approach is not bad compared to the patchwork quilt that seems to exist in most US states. Of course, it barely matters when the joys of the internet mean that people from outside the jurisdiction can send as many death and rape threats as people who are in the UK and could in principle be jailed for their contribution. It is an area where you need some global coordination of standards to be effective.
posted by Aravis76 at 12:43 AM on August 30, 2018 [8 favorites]


I do think there are freedom of speech arguments against some of the prosecutions that get brought in the UK, but in general I think the UK approach is not bad compared to the patchwork quilt that seems to exist in most US states.

A benefit of centralised CPS authority, really (although this skips over the Northern Ireland and Scottish legal systems). That said, the state by state differences in willingness to prosecute are just replicated across the EU jurisdictions.
posted by jaduncan at 2:22 AM on August 30, 2018 [1 favorite]


Sadly the CPS has become so results-driven that it will not prosecute any case where the evidence is not watertight. A side effect of reduced budgets is that the CPS has effectively become judge and jury, making decisions about what should or should not be brought to court.
In the case of trolls they may be willing to make a case in order to test the waters, but so many good cases fall by the wayside.
posted by RandomInconsistencies at 2:39 AM on August 30, 2018 [2 favorites]


I actually disagree that it's a results driven culture thing. I think it's a constrained budget thing. If the CPS were given an unlimited budget I'm sure they'd choose to prosecute a lot more. As it is, the criminal bar is already heavily underpaid and the CPS have to choose their battles carefully. Fundamentally that isn't their choice, though, and you can rest assured people inside CPS are also frequently a little heartbroken about letting things go.
posted by jaduncan at 2:54 AM on August 30, 2018 [6 favorites]


So, in the US it's most often plea deals that are used to avoid going to trial. A DA will tell a defendant he will be charged with as many iterations of crimes as his actions might have produced, often to ludicrous (if technically allowable) levels, and then offer a minimal sentence in exchange for a plea to one minor charge.

Sure, there are cases which prosecutors won't try because of lack of evidence, but even on the most flimsy of evidence, they go for the plea deal here in the US. And they're allowed to lie to the accused about evidence they may have, witnesses, etc. It's truly shameful.
posted by hippybear at 3:18 AM on August 30, 2018 [1 favorite]


And really, I don't intend that to be a derail. It's just an observation about this discussion about the CPS and what I think might be a difference with US legal proceedings.

Personally, I hope they figure out how to win against the online abuse. If that kind of abusive mob reaction were happening in meatspace in someone's life, it would not be tolerated. That it happens online behind the smokescreen of free speech is a ridiculous finding in US law. Aren't there (perhaps even federal) laws against inciting others to violence in a public square, or something? I'm pretty sure I've heard that.

Oh yes, there it is.
posted by hippybear at 3:24 AM on August 30, 2018 [9 favorites]


Of course, it barely matters when the joys of the internet mean that people from outside the jurisdiction can send as many death and rape threats as people who are in the UK and could in principle be jailed for their contribution.

If I'm reading this right (I may not be, this sentence is confusing my morning brain!) you're saying that the issue is that people outside the UK effectively don't face consequences? But... if their victim is in the UK, that shouldn't matter should it? People get extradited for crimes in other countries committed over the Internet. I see no reason why this shouldn't (in theory) apply here as well?
posted by Dysk at 3:35 AM on August 30, 2018


My impression is that we (in the EU generally, I mean) don't have arrangements in place to extradite people from the US for speech offences; in general, the US government and US courts have declined to cooperate with EU jurisdictions on these issues, because of First Amendment concerns. It's analogous to EU countries' occasional refusal to extradite people to the US in situations where the offence they've committed would involve the death penalty under US law; in both instances, the domestic court refuses extradition where that would involve sanctioning what (under their law) would be a human rights violation by a foreign power. The First Amendment is therefore a bit of a wrecking ball, globally, to any attempts to curb hate speech online.
posted by Aravis76 at 3:47 AM on August 30, 2018 [8 favorites]


Okay, but that should only indemnify people in the US specifically, not anyone outside of the UK/EU?

(The US as the biggest barrier to meaningful action on a global issue? Never heard that one before... /s)
posted by Dysk at 4:06 AM on August 30, 2018 [1 favorite]


Aren't there (perhaps even federal) laws against inciting others to violence in a public square, or something?

To state things more clearly, I think I'm suggesting that "violence" should include emotional violence within its scope. Especially in this time of cultural reckoning and recognition on so many levels.
posted by hippybear at 4:11 AM on August 30, 2018 [2 favorites]


Okay, but that should only indemnify people in the US specifically, not anyone outside of the UK/EU?


It's less a question of indemnification, I think, than one of bilateral extradition arrangements between the relevant jurisdictions. I don't think that Russia, for example, is going to be extraditing their trolls over here any time soon (in fact, I gather their constitution forbids extradition full-stop).

And of course, even if the rest of the world did have a systematic network of extradition treaties to capture most forms of what we in Europe would consider to be illegal speech (hate speech, harassment etc), the absence of the US would still be a huge deal, since so much of the hate speech (and funding for hate speech) online originates from US-based groups and organisations that shelter, and fundraise, under the auspices of the First Amendment. The murderer of Jo Cox is a case in point: he got not only his magazine subscriptions but his advice on weapons manufacture from the US-based National Alliance. And that was back in the nineties. I can only imagine what the reach of these groups is now.
posted by Aravis76 at 5:21 AM on August 30, 2018 [1 favorite]


God be with him. I don’t know his chances of success, but this is a huge problem and I admire his willingness to dive once more into the fray.
posted by corb at 5:32 AM on August 30, 2018 [4 favorites]


I'd be concerned about "emotional violence" being quickly detourned / adopted in bad faith by the right wing, in the same way "religious freedom" was.

It's probably a more successful tack to show "incitement" from these online behaviors, and to start defining these online forums as part of the public square, because I think there would be multiple benefits to establishing that line of reasoning.

"structural violence" would be a more productive line of reasoning, because it could be backed by sociology. But I think veteran lawyers like Michelle Alexander are in despair about the precedents set against using sociology in the US courts
posted by eustatic at 6:36 AM on August 30, 2018 [8 favorites]


Wait a minute - death threats and direct harassment are illegal in the US too, aren't they (at least nominally)? Surely they're not classed as protected speech, and can't be dismissed as speech offences in the same way as hate speech?
posted by Dysk at 7:07 AM on August 30, 2018 [3 favorites]


Dysk: yes, death threats and harassment are illegal in the US — but AIUI because of the First Amendment right to freedom of speech, they have to be very explicit before they cross the line that justifies conviction: not so much "die in a fire, f@gg*t" as "I am going to visit your house at 3am tonight with my homies and a rope and hang you from the tree in your front yard because you're a * and all * need to die". In other words, there needs to be a very specific threat of concrete violence against a specific individual: the sort of speech that, if uttered in the UK in a face-to-face situation, could get the speaker arrested for common assault (because it puts the recipient in immediate fear of physical attack).

If the utterer of threats is on another continent it is very difficult to meet that requirement.
posted by cstross at 7:18 AM on August 30, 2018 [17 favorites]


Thank you for explanations, Aravis76 and cstross!
posted by Dysk at 7:25 AM on August 30, 2018 [1 favorite]


The main reason the internet was able to destroy print journalism so quickly is that it isn't required to employ (and pay) editors. Nothing in a print newspaper or magazine gets there without someone besides the writer looking at it and approving it first, for fear of libel lawsuits.
What we really need is a two-tier system where sites with over a certain number of users/posters, like a thousand a day or a million a week, etc (actual volume tbd) are held to the same journalistic standards as traditional media wrt slander, libel, fraud, harassment, and hate speech.
If this has the unintended consequence of utterly destroying Facebook, Reddit, Twitter, et al, I will dance naked in the streets and my moves will be epic.
posted by sexyrobot at 8:01 AM on August 30, 2018 [3 favorites]


Yeah it seems like the way to deal with this is to make the platforms liable for what they enable.
posted by schadenfrau at 8:23 AM on August 30, 2018 [3 favorites]


Going all the way to strict liability for platforms would be tossing the baby out with the bathwater, I think; it would effectively eliminate discussion forums that moderate based on exceptions rather than in advance. The only way to manage liability would be to have someone screen every comment / message / whatever before letting it go through. Even at a Metafilter-like scale I think that would be impractical.

But there seems to be some room between strict liability and zero liability. To make an analogy to other legal realms, we have limited-liability corporate structures, but those liability protections are not absolute: the law recognizes there are circumstances where criminal behavior creates a need to pierce the veil and go directly after the individual.

What's needed is a way to pierce the Section 230 veil and go after platforms when there's evidence that the operators of the platform knew or should have known that it was being used for harassment, and failed to take action. Legitimate platforms (including, I would imagine, Metafilter) would be quick to take action against a user if notified that they were harassing others using the platform; once that notification is made, a failure to act should eliminate the platform's ability to claim immunity under Sec. 230.

Instead, we have (from what I can tell) created a system where there's actually a disincentive for platforms to do any sort of moderation, because there's a perception that doing some moderation (e.g. against severe cases) weakens the content-agnostic-communications-provider defense. That's a perverse outcome, but it's entirely within our ability to change, either judicially or legislatively, the latter obviously being preferable if our government weren't such a dumpster fire.
posted by Kadin2048 at 8:35 AM on August 30, 2018 [8 favorites]


For anyone wishing to understand, in rough outline, how the US got to where it is with the state of the law and protections for speech, I recommend Anthony Lewis's book "Make No Law", which starts as a story about NYT v. Sullivan and then proceeds to cover the development of free speech law thereafter. Some new issues have come along since it was written (in particular, the Internet was in its infancy then and coordinated campaigns of social media harassment had not yet become a thing), but it's helpful in understanding why the bar for speech protection is set so high and what some of the potential consequences of lowering it might be.

eustatic: I'd be concerned about "emotional violence" being quickly detourned / adopted in bad faith by the right wing, in the same way "religious freedom" was.

And I think you have very good reason to be concerned about that. As the Sullivan case showed, the risk is not only from government suppression of speech as a policy but also from the strategic use of laws against libel, harassment, etc., by private parties wishing to silence criticism or opposing political views.

Or to put it another way: when you are engaged against people willing to use any weapon available to harrass their opponents, it's important to be careful not to turn the law into another such weapon.
posted by Nerd of the North at 10:48 AM on August 30, 2018 [4 favorites]


Instead, we have (from what I can tell) created a system where there's actually a disincentive for platforms to do any sort of moderation, because there's a perception that doing some moderation (e.g. against severe cases) weakens the content-agnostic-communications-provider defense.

Except that's a self-perpetuating myth. The first-line screeners in places like the Philippines filter out the worst shit in exchange for not much money and PTSD. There's compliance with anti-Nazi laws in France and Germany. So at one level the labour of moderation is invisibly outsourced, but at another level that labour is visibly outsourced to users, either through block/mute, or by reporting mechanisms that are weakly and inconsistently enforced and mostly depend upon celebrity amplification to be effective. So we've ended up with a system that's extremely good at keeping out female nipples but is about as useful against harassment and threats as the "door close" button on a lift.
posted by holgate at 11:19 AM on August 30, 2018 [6 favorites]


This has been on my mind since the NYTimes article citing a study that showed small-town Germans who had access to Facebook were more likely to act out against ‘immigrants’ than those who did not.

The simplest solution is to ditch the absolute anonymity of so much of the Internet. It doesn't need to be that every comment is accompanied by the commenter's name and address, but that info should be available in the event law enforcement needs to follow up on a threat, with penalties for falsifying it strong enough to be a real deterrent.

Yeah, pipe dream, but the totally open free-for-all nature of so many comments on the web just isn't working, not really.

Our baser impulses are wicked strong.
posted by From Bklyn at 7:55 AM on August 31, 2018 [1 favorite]




This thread has been archived and is closed to new comments