Reporting, Reviewing, and Responding to Harassment on Twitter
May 15, 2015 9:21 PM

Reporting, Reviewing, and Responding to Harassment on Twitter [via mefi projects] For three weeks last November, Women, Action, and the Media (WAM!) accepted harassment reports that they escalated to Twitter, collecting data on the experience of harassment and the process of reporting it. A team of academics published a comprehensive report on what they found, with a focus on the people reporting and receiving harassment, the kinds of harassment that were reported, Twitter's response to harassment reports, the process of reviewing harassment reports, and challenges for reporting processes.

The title link goes to WAM!'s advocacy page, with infographics, etc. The full report can be found on arXiv (pdf here):
Matias, J. N., Johnson, A., Boesel, W. E., Keegan, B., Friedman, J., & DeTar, C. (2015). Reporting, Reviewing, and Responding to Harassment on Twitter. Women, Action, and the Media. May 13, 2015.
The report has been covered by:
- The Huffington Post
- FastCompany
- The Washington Post
posted by Little Dawn (11 comments total) 30 users marked this as a favorite
 
This is fucking great.
posted by NoraReed at 11:18 PM on May 15, 2015


The infographic is good, though the overall situation is really bad.
posted by Dip Flash at 2:20 AM on May 16, 2015


Their recommendations don't include perma-banning, which is something I see a lot of harassed women asking for. GamerGate's RogueStar just creates a new account every time he gets banned and goes right back to harassing people. Twitter needs something that blocks IPs.
posted by Peevish at 4:50 AM on May 16, 2015


... GamerGate's RogueStar just creates a new account every time he gets banned and goes right back to harassing people. Twitter needs something that blocks IPs.

I haven't gotten that far into the report yet, but it does address this. The researchers refer to this practice as "chaining". A bit from page 10:
... Key to this process is signaling to the target of harassment that a new account is related to one that was suspended. This is often achieved through use of a similar account name, recognizable profile details, or explicit boasting of continued presence despite suspensions. This signaling is a critical point of intimidation—but also opens a potential point of intervention. A person being harassed through chaining needs to be able to link reports together, and they need Twitter to catch, prevent, and take action on that signaling.
(emphasis in the original)
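
To make that "signaling" point concrete, here's a rough sketch of how a review tool might link a new account to previously suspended ones by name similarity. This is my own toy example, not anything from the report or from Twitter's actual systems; the account names and threshold are made up:

    # Toy example: flag a new account whose name closely resembles a
    # previously suspended account, one of the "chaining" signals the
    # report describes. Account names and threshold are hypothetical.
    from difflib import SequenceMatcher

    SUSPENDED_NAMES = ["harasser_99", "harasser99", "h4rasser_99"]

    def similarity(a, b):
        # Ratio in [0, 1]; 1.0 means the strings are identical.
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def likely_chained(new_name, threshold=0.8):
        # Return the suspended names that closely match the new name.
        return [s for s in SUSPENDED_NAMES
                if similarity(new_name, s) >= threshold]

    print(likely_chained("harasser_99x"))
    # ['harasser_99', 'harasser99', 'h4rasser_99']

Real evaders vary their names in cleverer ways, of course, but the signaling the report describes is exactly the kind of pattern a tool like this could surface for a reviewer.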
posted by nangar at 5:31 AM on May 16, 2015


Twitter needs something that blocks IPs.

Having been on the system administration side of trying to defend against what the report calls "chaining", I've found that IP-based blocks and other technical measures are only effective in the short term. The sort of person who is persistent enough to create a new account each time one is banned is also persistent enough to get around these kinds of technical measures.
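
To sketch why these blocks are so easy to defeat (a toy illustration of the general problem, not anything resembling Twitter's real infrastructure; the addresses are from reserved documentation ranges):

    # Toy illustration: a naive per-IP block. The moment the harasser
    # switches to a VPN, proxy, or mobile connection, the check passes
    # again.
    BLOCKED_IPS = {"203.0.113.7"}

    def allow_signup(ip):
        # Only blocks exact addresses we've already seen.
        return ip not in BLOCKED_IPS

    print(allow_signup("203.0.113.7"))    # False: known address, blocked
    print(allow_signup("198.51.100.23"))  # True: same person, new VPN exit

And blocking wider ranges just trades false negatives for false positives: you end up locking out everyone else behind the same carrier or university NAT.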

What I wish would happen is earlier and more effective involvement of law enforcement. I feel like there's still a position taken by law enforcement that online channels are somehow less real than postal mail, telephone, and in-person interactions. So the very same harassment that could get a serious response via any of the latter channels is guaranteed not to be taken seriously when it happens online.

I know that geography and sheer volume make it challenging, but I also feel that prosecuting some high-profile examples would help reduce the volume when harassers start to see that yes, you can end up facing criminal charges for going after people on Twitter. So I was a bit surprised to see the report only recommending this as a next step for harassment crossing multiple platforms, but perhaps that's just being realistic at this point?
posted by FishBike at 5:57 AM on May 16, 2015 [7 favorites]


Thanks for posting! I was (and still am) in the middle of my PhD qualifying exams when the report came out, so I haven't been able to be as responsive to journalists or online conversations as I'd like. But I'm honored to see it on the blue, so I'm taking a moment to respond.

For anyone interested in following this issue further, here are some recent academic works that didn't make it into our report:
  • The Virtues of Moderation, by James Grimmelmann, which sets out a description of the different things you can do in what he calls "the regulation of online communities." As a legal scholar, he doesn't look so much at questions of effectiveness, but he does offer some interesting case studies.

  • What's a Flag For? Social media reporting tools and the vocabulary of complaint, by Tarleton Gillespie and Kate Crawford, asks what we mean by "flag it and move on." Their answer is that flags hide what's really going on and prevent us from having public deliberation about these issues (maybe a bad thing, maybe not).

  • Reading the Comments: Likers, Haters, and Manipulators at the Bottom of the Web by Joseph Reagle is another new book that offers a nice overview of research on commenting systems.

  • Justin Cheng's quantitative work on upvoting and downvoting offers nice empirical evidence on the effect of those systems on an online community as a whole.

  • The Work of Sustaining Order in Wikipedia: The Banning of a Vandal (pdf), by Stuart Geiger and David Ribes, which looks at the role of bots in content moderation on Wikipedia (Stuart has gone on to do his PhD on these and other kinds of bots as well).

  • The problem of online harassment, brought to widespread attention by Julian Dibbell's 1993 Village Voice article A Rape in Cyberspace, is reportedly what prompted Larry Lessig to go into cyberlaw, resulting in his book Code and Other Laws of Cyberspace (now in its second edition, Code v2).

  • For those of you interested in legal questions, the legal scholar Danielle Citron has written a book, Hate Crimes in Cyberspace, which focuses on ways that current laws can be enforced more actively and proposes new legal responses (I blogged Danielle Citron's MIT talk with Brianna Wu here).

  • Whitney Phillips has a new book, This Is Why We Can't Have Nice Things: Mapping the Relationship Between Online Trolling and Mainstream Culture, that builds on her PhD studying trolls and trolling culture.

  • One of the things we focused on in our report was the work of the people who respond to harassment reports. Here at MetaFilter, I have a deep respect for the work of the mods and all they do so that we can have nice things. Often when people (even scholars) talk about "governance" systems or "content moderation," they talk about all sorts of systems for handling online harassment, when what they are also doing is specifying the work that other people (and their bots) will have to do.

In my PhD at the MIT Center for Civic Media, I'm looking at the emotional and relational work that sustains our online communities. I've researched gratitude, appreciation, and acknowledgment, as well as issues of gender representation online.

In my work at Microsoft Research this summer, I'll (hopefully) be looking at the work of moderation, trying to understand what it costs people, what it gains them, and how we can better support the people who invest so much energy (often unpaid) into holding together our conversations and interactions online.

As I said, I would love to discuss the report with MeFites, but I have to dash and finish my qualifying exams. Thanks for posting! I'm on the hook with a publisher for a journalistic article on this topic in the near future and will make sure to post it at the bottom of this thread when it comes out.
posted by honest knave at 7:09 AM on May 16, 2015 [28 favorites]


> One of the things we focused on in our report was the work of the people who respond to harassment reports ... Often when people (even scholars) talk about "governance" systems or "content moderation," they talk about all sorts of systems for handling online harassment, when what they are also doing is specifying the work that other people (and their bots) will have to do.

I really appreciate the attention your report gives to mental health and emotional support issues for both harassees and responders.

Thanks to you, honest knave, and all your coworkers, researchers and volunteers, for doing this project.
posted by nangar at 9:53 AM on May 16, 2015 [2 favorites]


Frankly, if someone is able to create new profiles to evade bans and continue harassing, I don't think permabanning is the answer.

That's the point when law enforcement needs to become involved, and they need to take it seriously.
posted by TheWhiteSkull at 10:15 AM on May 16, 2015


RogueStar avoids this particular problem by living in third world countries.

He's nothing if not dedicated to the art of harassment.
posted by Yowser at 10:42 AM on May 16, 2015 [1 favorite]


Frankly, if someone is able to create new profiles to evade bans and continue harassing, I don't think permabanning is the answer.

That's the point when law enforcement needs to become involved, and they need to take it seriously.

Generally, people who are sufficiently dedicated to create new profiles are also dedicated enough to use proxies and VPNs in order to mix up their IP addresses. If they are delivering messages that are actually illegal (death threats, etc.), then law enforcement should certainly get involved.
posted by theorique at 1:22 PM on May 16, 2015


Fascinating project.

Re: needing the URL for a deleted tweet, that's a bit weird. I'm surprised that people reviewing claims of abuse at Twitter don't have the ability to see (and/or search for) all deleted tweets from their workplace (but not from anywhere else).

Has Twitter set up a proactive monitoring system? It's not hard for me to imagine one, e.g. "here's our Potential Abuse page. This section contains nothing but tweets at another account with both sexist language and verbs typically associated with threats. Look over what's here and take action as appropriate. And don't forget the free workplace counseling, our directory of feminist groups looking for volunteers, and the biweekly brain bleach."
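
A crude sketch of what that triage rule might look like, with the word lists abbreviated and entirely hypothetical (a real system would need context-aware classification, not bare keyword matching):

    import re

    # Hypothetical, heavily abbreviated word lists, for illustration only.
    SEXIST_TERMS = {"bitch", "slut"}
    THREAT_VERBS = {"kill", "hurt", "find"}

    def needs_review(tweet):
        # Flag tweets aimed at another account that combine sexist
        # language with a verb typically associated with threats.
        words = set(re.findall(r"[a-z']+", tweet.lower()))
        directed = "@" in tweet
        return bool(directed and (words & SEXIST_TERMS)
                    and (words & THREAT_VERBS))

    print(needs_review("@someone I will find you, bitch"))      # True
    print(needs_review("can't find my keys anywhere, so sad"))  # False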
posted by johnofjack at 5:24 PM on May 17, 2015



