As a collective of human beings, it could choose to be better.
February 5, 2015 6:14 AM   Subscribe

"We suck at dealing with abuse and trolls on the platform and we've sucked at it for years. It's no secret and the rest of the world talks about it every day. We lose core user after core user by not addressing simple trolling issues that they face every day. I'm frankly ashamed of how poorly we've dealt with this issue during my tenure as CEO. It's absurd. There's no excuse for it. I take full responsibility for not being more aggressive on this front. It's nobody else's fault but mine, and it's embarrassing."
Twitter CEO Dick Costolo addresses the platform's persistent harassment issues in an internal memo.
posted by almostmanda (111 comments total) 24 users marked this as a favorite
 
Not exactly revolutionary to suggest it, but the latest episode of This American Life is one of a couple of podcasts/media outlets that have been doing really excellent reporting on trolling and the negative aspects of Internet culture over the past few years.
posted by jsplit at 6:18 AM on February 5, 2015 [9 favorites]


This is potentially very big news. Like, it could potentially change the way that the internet (or at least the Twitter, and maybe Reddit, portions of it) looks.
posted by Going To Maine at 6:18 AM on February 5, 2015


Sending a followup taking all-caps PERSONAL responsibility is nice, but following through on the "We're going to start kicking these people off right and left and making sure that when they issue their ridiculous attacks, nobody hears them" is what matters.
posted by thelonius at 6:23 AM on February 5, 2015 [11 favorites]


I'm putting this in the "I'll believe it when I see it" column, but if Twitter can implement this effectively (i.e. the harassers can't find a way to game the system), then this is a BFD. But I don't believe for a second that Reddit--which still allows upskirt/creepshot subs--is anywhere close to putting something like this in place. They'll hem and haw and make some noise, but at least in the near future I think they'll just fall back on the freeze peaches excuse.
posted by zombieflanders at 6:25 AM on February 5, 2015 [19 favorites]


If their CEO gets it, that's a start. These initiatives do need to come from the top. I wish I could see something similar from the CEO of Wal-Mart (i.e., "the way we treat everyone who works for and with us is a disgrace"), but I'm not holding my breath.
posted by orange swan at 6:26 AM on February 5, 2015 [2 favorites]


This leak coming out just before Twitter's earnings call is no coincidence. It's further possible Costolo feels he can take full responsibility because he's got a sweet golden parachute if he gets fired.
posted by Doktor Zed at 6:33 AM on February 5, 2015 [1 favorite]


Dear CEO of twitter,

Charge a dollar per account and the trolls will be diminished greatly. Require a confirmed email address and credit card and they will nearly completely disappear.
posted by cjorgensen at 6:35 AM on February 5, 2015 [65 favorites]


and maybe Reddit

That's one I'll believe when I see it.
posted by Artw at 6:39 AM on February 5, 2015 [2 favorites]


Require a confirmed email address and credit card and they will nearly completely disappear.

Along with lots of Black and Hispanic people, and poor people of all colours, who don't own credit cards at the same rate as wealthy and White people. Hmmm, of course, that applies to Metafilter too...
posted by alasdair at 6:40 AM on February 5, 2015 [93 favorites]


After Wheeler's about-face yesterday, it's not been a good couple of days for cynics.
That said, perhaps they're just taking insincerity to new heights (or depths).
posted by fullerine at 6:42 AM on February 5, 2015


Honestly, what can Twitter do to cut down on this? As long as the only requirement for signing up is an email address, there's not a lot that can be done, right? Banned accounts will simply spawn new accounts.
posted by Brandon Blatcher at 6:49 AM on February 5, 2015 [1 favorite]


I think it's important to find other ways of addressing this than just kicking people off after they start to be a problem. Two difficult-but-not-impossible directions: 1) Give people something else to do, some way to direct their vitriol, when they're frustrated, lonely, bored, full of anger; 2) Make signing up also include some kind of quick-but-deep educational component, so that people (kids) understand the damage that thoughtless meanness can do.

This second thing will probably seem pointless to a lot of people -- I can imagine thoughts of "Everyone already knows right from wrong" and "if they're going to be bullies, nothing we can say is going to change that", but that's not really true. People don't know how much damage they can do, and how easy it is, unless they are taught that. Maybe you had sensitive, intelligent parents or teachers who taught you that, but I can guarantee you that most people have not. It's the kind of lesson that needs an emotional component, too, so a simple book or instruction will not do it.

If Twitter or Facebook or whoever has resources to go deeply into this problem, they are ideally positioned to find a way to make it work.
posted by amtho at 6:54 AM on February 5, 2015 [4 favorites]


I was thinking about this a lot at the height of GG - it seems like the organised efforts at least would have behaviour patterns that would be easy to pick out and subject to greater scrutiny - some mass sign-up of new accounts that suddenly want to butt into other people's conversations would be very suspect, for instance.

Of course, once it got rolling, GG became less about signing up new accounts and more about taking over old accounts, but that's a pattern that can be watched for too.
posted by Artw at 6:55 AM on February 5, 2015


Props to Costolo for one of the best management mea culpas I've read, even though it was posted internally. I was starting to believe that tech CEOs would be blind to trolling as long as it didn't affect them or their bottom line.

I'm interested to see what concrete steps they decide to take. Two of the things that make Twitter what it is also make it conducive to trolling: anonymity / lightweight accounts and "flatness." The ability to spin up an account and start Tweeting allows for all those amusing joke accounts and bots, and more importantly lets people in oppressive situations (e.g. gay people who are unable to come out) express themselves freely with a separate identity. By flatness I mean that everyone on Twitter can talk with everyone else in the same way, as opposed to Facebook which gates interaction with the friend graph.

Of course I can appreciate these things as someone fortunate enough not to be a target. They're the same aspects that make the service unusable for a lot of people, and I don't think "don't use it then" is an acceptable answer.
posted by skymt at 6:55 AM on February 5, 2015 [3 favorites]


Harassment and bad actors are a widespread problem, but the real uphill battle Twitter has in this case is that it has effectively endorsed the behavior for years by doing basically nothing about it. If they had addressed the issue aggressively from the outset, a culture of general shittiness wouldn't have become this bad.

If Dick isn't full of shit (he probably is) then this is a pretty big opportunity to show that a trolling problem of this magnitude can be overcome. I wouldn't hold your breath.
posted by cellphone at 6:56 AM on February 5, 2015 [4 favorites]


poor people of all colours, who don't own credit cards at the same rate as wealthy and White people.

Using a phone number that must be verified by SMS or automated voice call would probably be more useful. You need something that most users already have one of, but that would be more trouble than it's worth to get 1,000 of.
posted by the jam at 6:58 AM on February 5, 2015 [23 favorites]


Honestly, what can Twitter do to cut down on this? As long as the only requirement for signing up is an email address, there's not a lot that can be done, right?

New blocking settings:
* no contact from accounts newer than X days
* no contact from accounts that have sent fewer than X tweets
* no contact from accounts that have sent fewer than X (unblocked, unreported) DMs
* no contact from accounts that have been reported or blocked by X accounts that I follow
* no contact from accounts that have fewer than X mutual follows

Any or all of these could be used in combination to make it much, much harder to create and abuse sock puppet accounts for harassment purposes.
posted by jedicus at 6:59 AM on February 5, 2015 [106 favorites]
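
To make a list like jedicus's concrete, here's a minimal sketch in Python of how a few per-user thresholds could gate incoming mentions. Every field, name, and threshold here is invented for illustration; nothing reflects Twitter's actual data model or API.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Account:
        # Hypothetical sender attributes.
        age_days: int
        tweet_count: int
        reports_from_my_follows: int
        mutual_follows: int

    @dataclass
    class FilterSettings:
        # Per-user thresholds; None means "this filter is switched off".
        min_age_days: Optional[int] = 7
        min_tweets: Optional[int] = 50
        max_reports_from_my_follows: Optional[int] = 2
        min_mutual_follows: Optional[int] = None

    def allow_contact(sender: Account, s: FilterSettings) -> bool:
        """Return True if a mention/DM from `sender` should reach this user."""
        if s.min_age_days is not None and sender.age_days < s.min_age_days:
            return False
        if s.min_tweets is not None and sender.tweet_count < s.min_tweets:
            return False
        if (s.max_reports_from_my_follows is not None
                and sender.reports_from_my_follows > s.max_reports_from_my_follows):
            return False
        if (s.min_mutual_follows is not None
                and sender.mutual_follows < s.min_mutual_follows):
            return False
        return True

    # A day-old sock puppet with no history gets filtered out by the defaults.
    sock = Account(age_days=1, tweet_count=3, reports_from_my_follows=0, mutual_follows=0)
    print(allow_contact(sock, FilterSettings()))  # False

Each threshold can be switched off per user, which is the "toolbox of filtering options" idea rather than a single one-size-fits-all default.
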


Here's an idea: Aggressively ban people and add a waiting period for new accounts before they can tweet at other users. A day, two days, just enough that you can't just create a new account and pick up where you left off.
posted by Lazlo Hollyfeld at 7:00 AM on February 5, 2015 [4 favorites]


New blocking settings:
* no contact from accounts newer than X days
* no contact from accounts that have sent fewer than X tweets
* no contact from accounts that have sent fewer than X (unblocked, unreported) DMs
* no contact from accounts that have been reported or blocked by X accounts that I follow
* no contact from accounts that have fewer than X mutual follows


Huh. Those controls should be baked in. Looking at that list, it's shocking those aren't standard tools on social media sites. Thanks for laying out some excellent ideas for upgrades.
posted by Brandon Blatcher at 7:08 AM on February 5, 2015 [20 favorites]


my idea is that net communication just doesn't scale well - if you have 100 users, you can control this kind of problem easily - 100k users, like metafilter, requires some hard work, but it's still doable

millions of users? - it's pretty much hopeless, unless you have stiff user defined controls that will effectively turn the service into a bunch of small places where only a few approved people can comment
posted by pyramid termite at 7:09 AM on February 5, 2015 [1 favorite]


alasdair brings up a valuable point. Twitter has had an immense impact in unusual places, and raising the bar through first-world details would be a shame for the future of inclusive interwebz.
posted by infini at 7:14 AM on February 5, 2015 [6 favorites]


Reading the FPP, before they even got to the part where they said it was Twitter, I knew it was Twitter.
posted by Drinky Die at 7:14 AM on February 5, 2015 [1 favorite]


alasdair: "mmm, of course, that applies to Metafilter too..."

It doesn't, actually. The mods have stated many times that if $5 is a hardship for you, they can work something out with you.

I would agree that maybe a statement to that effect on the signup page would be helpful, but the policy does exist.
posted by Chrysostom at 7:15 AM on February 5, 2015 [5 favorites]


Another option would be to incorporate ~freebsdgirl's GG autoblocker and allow for sharing of block lists.
posted by Cash4Lead at 7:15 AM on February 5, 2015


Require a confirmed email address and credit card and they will nearly completely disappear.
Using a phone number that must be verified by SMS or automated voice call would probably be more useful.


While I believe Twitter needs to do more to police bad actors, trolls, etc. (especially after GamerGate), you need to give Twitter a little leeway/credit. They essentially own a communication medium, not a website, and the solutions that work for individual sites don't scale. Requiring real identities makes it impossible for people living under oppressive regimes to use Twitter as a protest channel, like the Arab Spring. Require a phone number so users can only have one account and you prevent novelty accounts or secondary accounts focused on a certain topic, which are another part of Twitter's value.
posted by yerfatma at 7:16 AM on February 5, 2015 [4 favorites]


Honestly, what can Twitter do to cut down on this? As long as the only requirement for signing up is an email address, there's not a lot that can be done, right? Banned accounts will simply spawn new accounts.

Can Twitter pull the MAC address of the device used to connect, either through its own software or through the browser? Or otherwise determine the identity of the device? Then it could ban the device, or ban it for some period.

Still not perfect. Someone could spoof their MAC on a PC, or swap in a new network card, or root their phone and spoof the MAC. But it would be a greater pain in the ass to deal with than just creating a new throwaway email account.

It would also mean that basically all publicly accessible PCs would be banned more or less permanently as GG doofuses burn through them.
posted by ROU_Xenophobe at 7:17 AM on February 5, 2015


MAC addresses have been ruined by marketers and were never reliable to begin with.
posted by yerfatma at 7:18 AM on February 5, 2015


Shunting angry people off to the side doesn't address the anger, it just hides it while it festers and pressurizes.
posted by amtho at 7:18 AM on February 5, 2015 [1 favorite]


a) it's just as likely that twitter loses "core users" to the inanity that is twitter.

b) this seems self-correcting, in that when twitter becomes just a place for trolls something better will come along

c) he signed the letter : Dick
posted by OHenryPacey at 7:21 AM on February 5, 2015 [1 favorite]


Shunting angry people off to the side doesn't address the anger, it just hides it while it festers and pressurizes.

Then what's the alternative? I don't necessarily see where this is Twitter's problem to address. To a large extent, harassers gonna harass. It's what they do, and a lot of them don't even do it out of anger. Racism and misogyny and all manner of -phobias are a societal and cultural problem, to a large part baked into the societies and cultures that use Twitter. In the end, I think shunting the angry people off to the side will have a far more positive impact on discourse than negative, and many of the targets of harassment have said as much over and over again.
posted by zombieflanders at 7:26 AM on February 5, 2015 [14 favorites]


A local newspaper has recently required users to log in with their Facebook account to comment. Every comment now has an association with that user's account. It has all but stopped the trolling. (Maybe trolls don't know how to set up a fake FB account?)
posted by Gungho at 7:29 AM on February 5, 2015 [1 favorite]


There are numerous stories of people who were racist at one point (sometimes in childhood), then changed later in life (sometimes through a transformative event, but often through simple basic education or just meeting someone of a different race).

As for the alternative: it's not as simple as "disable accounts", but see my earlier comment in this thread.
posted by amtho at 7:30 AM on February 5, 2015


Shame they had to be faced with haemorrhaging users before they got off their arses in any meaningful way. Like, will it take a Zoe Quinn actually getting killed, maybe?

> Shunting angry people off to the side doesn't address the anger, it just hides it
> while it festers and pressurizes.

It's preferable to allowing a forum where the same hate speech and incitement to violence that would get you locked up in many real-life countries can be disseminated anonymously with no fear of censure.
posted by GallonOfAlan at 7:31 AM on February 5, 2015 [2 favorites]


I welcome this news because it seems that, in the face of years of abuse Twitter users have been posting, the Twitter abuse reporting tools and responses from support staff have been awful and have largely ignored people complaining about abuse (I recently saw a response from Twitter staff where they didn't trust screenshots as evidence -- but they're Twitter support and could easily pull up even protected/deleted tweets from someone's account to confirm they really posted something).

This sounds like Twitter is going to stop ignoring the reports and stick to the terms of service they've always had, but never enforced. This is good, and hopefully shuts down a lot of the awful nonsense that happens there daily.
posted by mathowie at 7:32 AM on February 5, 2015 [2 favorites]


It can be done, it just costs money.
posted by fullerine at 7:34 AM on February 5, 2015


I got blocked by someone by mistake. Seems like a pretty buff button.
posted by Trochanter at 7:35 AM on February 5, 2015


Shunting angry people off to the side doesn't address the anger, it just hides it while it festers and pressurizes.

It is not my job as a Twitter user to be some random person's target practice for their anger issues. That's what therapy is for.
posted by emjaybee at 7:37 AM on February 5, 2015 [47 favorites]


It can be done, it just costs money.

Explain how. Please account for differences in tastes, cultural differences and language differences.
posted by yerfatma at 7:39 AM on February 5, 2015


There are numerous stories of people who were racist at one point (sometimes in childhood), then changed later in life (sometimes through a transformative event, but often through simple basic education or just meeting someone of a different race).

As for the alternative: it's not as simple as "disable accounts", but see my earlier comment in this thread.


All of that is a far larger responsibility than can be expected of a for-profit social network. Like I said above, these are deeper issues that need to be addressed at a far higher level and wider scale than Twitter. It would be great if they could address them, and maybe some day they can, but realistically the approach Costolo proposes (including disabling accounts) is a decision with greater upsides than downsides for them and for targets of harassment at this point in time.
posted by zombieflanders at 7:44 AM on February 5, 2015 [1 favorite]


I'm not sure how I feel about all this blocking & banning as a solution. While it is very hard -- even fatally hard at times -- to be a target, there has to be some upside to the harassment being carried out in what is basically broad daylight. Much easier for victims of abuse & trolling to find [massive] public support also on Twitter and to carry on their own fightback, if they so desire. I prefer to have hate speech in the public square and ridiculed, argued against and dealt with head on, rather than hiding it away. But then, I have never been the victim of an online attack, so my mileage must certainly vary.
posted by chavenet at 7:44 AM on February 5, 2015 [1 favorite]


Somewhat related:

A Group Is Its Own Worst Enemy

Even though Twitter is not really a "community" in the sense of a group with some unified identity around the platform (e.g., MetaFilter or even Reddit), it seems to fall into a similar trap that pulls most communities down. Enforcing social norms can undermine the group, but not enforcing them can do the same.
posted by Jacks Dented Yugo at 7:49 AM on February 5, 2015 [5 favorites]


to be a target, there has to be some upside to the harassment being carried out in what is basically broad daylight.

Please do check out the multiple threads on street harassment here on MeFi. It too happens in broad daylight. There are no upsides when there are no consequences for the harassers. You can call it out all you want, but if no one believes you, no one listens, and no one actually does anything, then there's a huge DOWNSIDE to it being in broad daylight: not only are you harassed, you are ALSO shown that your existence and experiences do not matter. That fucking sucks. Just, y'know, speaking as a woman who has lived it. And written about it here. And argued against and dealt with it head on.

Having concrete consequences for harassment is a good thing.
posted by fraula at 7:50 AM on February 5, 2015 [35 favorites]


Having concrete consequences for harassment is a good thing.

Yeah, this is the part of Twitter's responses in the past that surprised me the most. To people being skeptical about this, we're not talking about edge-case behavior, we're talking about prolonged directed attacks where say one person is running ten twitter accounts sending death and rape threats every day to a single target user. It's totally beyond the pale, and abuse on the order of stuff that you'd think would end with you in a local jail for sending such abusive threats. In the past, Twitter has done little to nothing about these, and it sounds like they're finally going to act on them, which is a good thing for anyone that uses the service.
posted by mathowie at 7:54 AM on February 5, 2015 [23 favorites]


to be a target, there has to be some upside to the harassment being carried out in what is basically broad daylight.

Being able to use anonymous accounts is not broad daylight at all. There's no upside to being harassed, just shades of "well, this example isn't as fucked up and threatening as others".
posted by Brandon Blatcher at 7:55 AM on February 5, 2015


You can't use ridicule and shame with anonymous and unaccountable users. That's the entire problem. There is no reputation to lose.
posted by almostmanda at 7:55 AM on February 5, 2015 [2 favorites]


Will Twitter be able to stop all abuse and invective? No, and it probably wouldn't be a good thing if they could. It would require clamping down too hard on the platform and there would be too much collateral speech shut down.

But there is a lot Twitter can do. They can act on the simple egregious cases that they have not been acting on.

Is there a simple technical or UX solution to the problem? No. There's no one-time fix for this. There will be an ongoing battle between Twitter and the more technically sophisticated and committed trolls. The analogies that come to mind are denial of service attacks and spam. These are problems that can be fought, and that must be fought on an ongoing basis. The tools on both sides will grow more sophisticated over time. You will never be able to eliminate the problem but by fighting the good fight you do succeed in limiting the damage.

This is really Twitter's responsibility, and it's terrible they haven't done anything about it to date. I hope they take it as seriously as Dick Costolo says they will.
posted by alms at 8:00 AM on February 5, 2015 [2 favorites]


If anything, I'd argue that some of the harassing users gain reputation from harassing people on Twitter among their peers. It's not hard for them to claim the identity in other spaces and use that to egg each other on. It's a nasty dynamic.

I was invited as a guest speaker for a panel at an online conference recently that drew some of this online harassment. Let me tell you how much I deeply appreciated that our moderator from the conference took the time to screen the comments and, after saying that he'd received some harassment, made sure that we the panelists didn't have to respond or interact with that harassment. It was a really novel experience and a gratifying one, compared to other similar events I've worked with in the past.

I've been waffling about getting involved with Twitter for some time, and an anti-harassment policy--if they bother to give it some teeth this time--would be a big mark in favor of doing so for me. I've heard way too many horror stories to get involved right now, though.
posted by sciatrix at 8:01 AM on February 5, 2015 [8 favorites]


Does it need to be pointed out that being put in the role of the target/sponge/"escape valve" for angry, often rape-threatening, often violent, often doxxing men is something women have been putting up with for far too long? I guess so, so I will point it out. That attitude needs to die in a hot hot fire.

I don't give a fuck if a man "explodes" in some way because he got blocked after sending me hate-tweets. Or because I won't sleep with him. Or flirt with him. Or put up with him.

It is not on me. I am not his hate-sponge. I am not responsible for anything he does. If he does something even worse, it is STILL not my fault. It is all on him. If he goes his whole life a bitter hateful unloved mess, it is on HIM, not me. Not any woman.

Women are not responsible for the bad actions of men trying to hurt/use/rape/abuse them. Ever. No matter what we do. Not responsible. In any way.

Some excellent ways of blocking hate-tweeters have been mentioned upthread. I like the ones where you can set your filters to "no comments to me if you've only had the account for 5 minutes" and so on. Who does that hurt? No one. It's trivially easy for good actors to establish themselves in a Twitter account and then be able to talk to me and lots of other people after a short period. It only makes life harder for people with anon hate accounts.
posted by emjaybee at 8:07 AM on February 5, 2015 [78 favorites]


I think all the mentioned hurdles are excellent ideas, and I can think of ways to circumvent most of them in about ten seconds (a 4chan thread with logins to pre-established burner Twitter accounts that are already through the gate, even bots that can set them up, etc.). The rest will never happen, because either there are legitimate counterarguments about why they won't work (or would seriously undermine what might be considered the best uses of real-time communication), or they would be damaging enough to growth potential that money will outstrip decency (given how much pressure TWTR gets to increase the user base, I can't see them proactively doing anything that might diminish that).

It could probably take the edge off 10-30% of the low-hanging fruit (idiots), but not the bottom decile, who I suspect are responsible for the worst of what has been happening to the people targeted by the GG horde.

Twitter isn't a community, it's a technology. There was a ban the other day, as a result of a MeTa thread, of a user (whom I'm not old school enough to know) presented basically as 'they've been a dick for ten years and we finally gave up on them'. That's community policing and consideration. That's what people are hoping for. I'm fine with Twitter banning text strings ("I hope you get raped," etc.), but then you have leet, and languages, and Unicode, and then we end up at some equivalent of Facebook's breastfeeding-photo bans (even allowing for the gender bias issues that likely lead to such things, which could be mitigated somewhat by more diversity in who writes the rules). You don't solve community issues with technology, and I don't see anything about Twitter that is easily going to create community standards. Lots of people here have been involved with intentional communities at some point in their life; what's the largest self-policing community you've been in that seemed to actually mitigate hateful thinking? A couple thousand, maybe?

But if we are talking tech, maybe trying to communicate in 140 characters is part of the problem?
posted by 99_ at 8:26 AM on February 5, 2015 [2 favorites]


There's only one way to stop this, and it's to make Twitter accounts cost maybe ten bucks a year, payable by a credit card. So the charge will weed out the opportunistic haters, and the credit card will provide a trail.
posted by GallonOfAlan at 8:36 AM on February 5, 2015


Yeah, the cynic in me says that they had a strip torn off them by Google. This is all because of pressure from Google and the new partnership, which will make tweets more visible in their search feed.
posted by jimmythefish at 8:40 AM on February 5, 2015 [2 favorites]


New blocking settings:
* no contact from accounts newer than X days
* no contact from accounts that have sent fewer than X tweets
* no contact from accounts that have sent fewer than X (unblocked, unreported) DMs
* no contact from accounts that have been reported or blocked by X accounts that I follow
* no contact from accounts that have fewer than X mutual follows


Problem with the first is I don't mind new people signing up to chat with me. We all have to start somewhere, but I get it, it's a hurdle, so in two days you can tweet at people that have the first option on.

Second would just mean you'd see a bunch of inane tweets before they actually got around to the trolling. It would be like when Metafilter required two comments before posting to the front page. You got "Thanks for posting this," then "Interesting article," then SPAM POST!

Third, if too many people turned this on, you'd be making those who don't into your filter.

Fourth, yes please.

Fifth, then how do you ever find new people?

The only other issue I have with this is that these things can be gamed as well. If the trolls don't like someone, they report everything they say as abusive or just block them, and then suddenly normal, real, good people are taken out by these filters.
posted by cjorgensen at 8:43 AM on February 5, 2015 [1 favorite]


It bothers me a bit that it's the Lindy West article -- and specifically the one troll who impersonated her dead father -- that got Costolo to finally take notice. Not GG's months of harassment; not @femfreq's horrendous never-ending screenshots of one week of death and rape threats. But dead fathers? OH NO WE CAN'T HAVE THAT.

Whatever works, I guess; but seriously, it's not like there was a shortage of "hey Twitter you've got an abuse problem" evidence before that.
posted by We had a deal, Kyle at 8:46 AM on February 5, 2015 [17 favorites]


There's only one way to stop this, and it's to make Twitter accounts cost maybe ten bucks a year, payable by a credit card. So the charge will weed out the opportunistic haters, and the credit card will provide a trail.

As pointed out this would silence a large portion of the users and also open many up to governmental intrusions.

Hell, I would be happy if it was a voluntary fee, and you got a verified badge for doing it. Give me a "real person" or "information on file" icon that says I am not one of the trolls. Then let me ignore those accounts that aren't.
posted by cjorgensen at 8:46 AM on February 5, 2015 [1 favorite]


Interestingly, the Cracked.com podcast that dropped this week is largely about this problem, and the hosts do get into the question of 'how to ban while remaining open to free speech' and 'how to inculcate an online community that helps enforce norms and mores' etc, etc.
posted by eclectist at 8:48 AM on February 5, 2015


(a 4chan thread with logins to pre-established burner twitter accounts that are already through the gate, even bots that can set them up, etc).

That's just it though - with barriers in place that have to be overcome, those accounts have a cost, and so they have a value.

Once things have value, people are less willing to squander them.
posted by Pogo_Fuzzybutt at 8:50 AM on February 5, 2015 [3 favorites]


It would be a false dichotomy to suggest that the choices are either a) disabling accounts that are generating bullying and hate speech, or b) completely ignoring the problem and allowing these kinds of problems to continue unabated. You can take steps to actually alleviate the underlying problems without presenting sacrificial victims.
posted by amtho at 9:00 AM on February 5, 2015 [1 favorite]


Well, some variety of A is going to be the answer - ignoring trolls harder basically doesn't work. The question is how they do it in a practical manner and without breaking what is good about Twitter.
posted by Artw at 9:03 AM on February 5, 2015


then how do you ever find new people?


By "mutual follow" I mean X accounts that are both followed and follow the account in question. Maybe there's a better Twitter-term for it. I don't really use Twitter.

I'm not suggesting that all of these be on by default. I'm saying they can be part of a toolbox of filtering options.
posted by jedicus at 9:04 AM on February 5, 2015 [2 favorites]


Props to Costolo for one of the best management mea culpas I've read, even though it was posted internally. I was starting to believe that tech CEOs would be blind to trolling as long as it didn't affect them or their bottom line.

To be fair, he says in the memo that it's costing them users, so you can keep believing that tech CEOs are blind to trolling as long as it doesn't affect them or their bottom line.

Still great if they actually go through with it, of course, but I'll believe it when I see it.
posted by Dysk at 9:05 AM on February 5, 2015


Gungho: A local newspaper has recently required users to log in with their Facebook account to comment. Every comment now has an association with that user's account. It has all but stopped the trolling. (Maybe trolls don't know how to set up a fake FB account?)

I don't know. Some people are willing to say anything through their facebook account, even if they have their real name associated with it (everyone has that one relative or friend who keeps putting bigoted or crazy stuff in their feed). Meanwhile, some people (like myself) are unwilling to associate their real name online with any kind of controversy at all. I generally won't comment politically online under my real name, for instance, as I would be too worried it would interfere with my future employability.

For whatever it's worth, my newspaper here has done it as well, and now the comment section on articles is pretty much always empty.
posted by Mitrovarr at 9:07 AM on February 5, 2015


How about disabling accounts, but also somehow addressing the problems _behind_ the behavior, so that people don't just create new harassing accounts, and so that the number of people engaging in this behavior decreases, and so that people behind the problems don't just move their rage into new forms?
posted by amtho at 9:09 AM on February 5, 2015


amtho: How about disabling accounts, but also somehow addressing the problems _behind_ the behavior, so that people don't just create new harassing accounts, and so that the number of people engaging in this behavior decreases, and so that people behind the problems don't just move their rage into new forms?

Hey, instead of just using technical solutions, why don't they just fix our society? That's a feasible goal for Twitter!
posted by Mitrovarr at 9:15 AM on February 5, 2015 [31 favorites]


One way of changing society is by changing what society, on average, finds acceptable. If trolling/harassment/rape threats become less acceptable, along with the people who make them, then that is part of the process of changing people themselves. Maybe not today's trolls, but tomorrow's potential trolls.

It's not going to change a lot of the current individuals at a soul-deep level, but it will encourage/promote a trend away from those things being acceptable, which, if you are talking about an entire human culture, is about as much as any one entity can reasonably hope to do.
posted by emjaybee at 9:21 AM on February 5, 2015 [12 favorites]


I used to think that anonymity was essential for the internet. Now, I'm not so sure. Yes, it gives you protection when stating unpopular positions, but on aggregate, it seems to inspire just so much negativity.

Read any local newspaper's comments section. They are absolutely vile. Even the most positive story can be dragged down by anonymous negative commenters, people who spout racism, people who lie, people who seem to want to bring others down. It's really sad.
posted by RalphSlate at 9:22 AM on February 5, 2015 [2 favorites]


zombieflanders: But I don't believe for a second that Reddit--which still allows upskirt/creepshot subs--is anywhere close to putting something like this in place. They'll hem and haw and make some noise, but at least in the near future I think they'll just fall back on the freeze peaches excuse.
How did Reddit get dragged into this? [Caveat: the linked page in the FPP is blocked by my workplace, so if the Twitter CEO discusses Reddit, I get it.]
posted by IAmBroom at 9:23 AM on February 5, 2015


Relevant piece from Imani Gandy, Reality Check - " #TwitterFail: Twitter’s Refusal to Handle Online Stalkers, Abusers, and Haters" (trigger warning for hate speech). Snippet: "If you winced when you read that list of slurs, imagine having them lobbed at you nearly every day for two years."
posted by joseph conrad is fully awesome at 9:23 AM on February 5, 2015 [3 favorites]


The rape and death threats aren't happening because of uncontrollable anger that has no outlet. They are a deliberate and calculated attempt to silence a specific set of people. The idea that the people behind them have X amount of anger and it has to be channeled somewhere is wrong. I have anger too, and I manage it, because society expects me to manage it. We shouldn't be making excuses for adults who are otherwise mentally well engaging in this stuff.
posted by almostmanda at 9:29 AM on February 5, 2015 [21 favorites]




One way of changing society is by changing what society, on average, finds acceptable. If trolling/harassment/rape threats become less acceptable, along with the people who make them, then that is part of the process of changing people themselves.

I'd like to think that - but look at overt racism. Even as unacceptable as it is to publicly express, some people - well-paid, educated people - will still publicly post racist shit under their own names.

GIFT not required.
posted by Pogo_Fuzzybutt at 9:32 AM on February 5, 2015


FYI, MAC addresses are both settable via software and inaccessible to browsers, so they aren't a useful way to stop griefers. There are different unique identifiers on phones that apps could use, but I suspect most trolling is fine through the browser interface or alternative apps using the API.
posted by Candleman at 9:35 AM on February 5, 2015 [1 favorite]


I would still argue, Pogo_Fuzzybutt, that we are better off than we were 50 years ago in many ways, but still have a long way to go. I don't think anyone would say there was no difference between now and those days, do you? So there is some progress made. Just not enough yet. It's a long arc to bend. And too many people think the work is already done because that flavor of injustice is invisible to them.

All I personally expect from Twitter is to make a real effort to help people being harassed, not to solve sexism/racism/injustice.
posted by emjaybee at 9:44 AM on February 5, 2015 [5 favorites]


Google's Gmail is the king of spam blocking. Much props to the guy mentioned above who worked on that program and is now tackling a different problem....

Cultural change happens across generations, rarely within them. So the hand-wringing stuff like "can't we all just _______" isn't going to effect change in the near term.

It's a fine line between free speech and a threat. Especially in the USA. "I hope someone does X and Y to you" vs "I am going to do X and Y to you."

The only solution I see to any of this is drawing a hard line for a service/business then hiring sharp people, like the fellow above, to implement a solution. How to do it is beyond my ken.
posted by CrowGoat at 9:53 AM on February 5, 2015 [1 favorite]


FYI, MAC addresses are both settable via software and inaccessible to browsers

And they don't propagate through routers? Uniquely identifying human individuals via network addresses is non-trivial to the point of not being really possible in a reliable way.
posted by GuyZero at 9:54 AM on February 5, 2015


It seems that the same type of simple Bayesian filter that works so well for spam could also work extremely well for a large percentage of abuse. And Twitter presumably has a huge training corpus from all the "report abuse" button clicks.

Some basic abuse classification would quickly isolate abuse driven accounts, and perhaps give feedback to individuals that are usually OK that a one-off angry DM is a bad idea, in addition to letting the targets of abuse easily ignore the most egregiously obvious abuse tweets. Automated abuse classification, combined with jedicus' great suggestions, would go a really really long way to providing the feedback to trolls that what they're doing is not socially acceptable nor Twitter acceptable.

And if somebody really wants to be able to be contacted by brand-new accounts, provide the option to turn off the newb-troll protections on communication, but they should perhaps be turned on by default to limit the ability of trolls to harass anyone.
posted by Llama-Lime at 9:54 AM on February 5, 2015 [1 favorite]
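
For what it's worth, the spam-filter analogy is easy to prototype. Here's a toy sketch using scikit-learn's naive Bayes classifier; the labelled tweets are invented placeholders, and a real training corpus would come from report-abuse clicks, as suggested above.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Invented placeholder training data; a real corpus would be built from
    # tweets flagged (or not) via the "report abuse" button.
    tweets = [
        "i hope you die, nobody wants you here",
        "watch your back, you'll regret this",
        "great article, thanks for sharing!",
        "congrats on the new job!",
    ]
    labels = ["abuse", "abuse", "ok", "ok"]

    # Bag-of-words counts feeding a multinomial naive Bayes classifier,
    # the same basic recipe as classic email spam filters.
    model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
    model.fit(tweets, labels)

    print(model.predict(["nobody wants you here, watch your back"]))  # likely 'abuse'
    print(model.predict(["thanks, great write-up"]))                  # likely 'ok'

A real deployment would need far more data, hardening against adversarial spelling tricks, and human review, but the point stands that this is a known, tractable class of problem.
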


Hey, instead of just using technical solutions, why don't they just fix our society? That's a feasible goal for Twitter!
Surely one should be able to acknowledge that Twitter building in affordances for third party harassment reporting and streamlining police engagement when appropriate would have noticeable network effects, no?
posted by whittaker at 9:55 AM on February 5, 2015 [1 favorite]


whittaker: Surely one should be able to acknowledge that Twitter building in affordances for third party harassment reporting and streamlining police engagement when appropriate would have noticeable network effects, no?

There's a difference between saying that Twitter changing might have a positive effect on society and suggesting that Twitter should deal with harassment and trolling by fixing the root causes within society.
posted by Mitrovarr at 9:59 AM on February 5, 2015 [2 favorites]


I think my point is that technological changes do cause social changes (writ both small and large).

This odd tendency we have to treat the internet as a separate causality domain from human behaviour in general often contributes to excusing online toxicity in the first place.
posted by whittaker at 10:02 AM on February 5, 2015 [1 favorite]


You could suggest Twitter could work on its own culture, I guess, but that culture is somewhat nebulous and pretty much against this kind of thing*. The biggest example of a harassment campaign we are talking about, GG, very much comes from outside Twitter, with the chans and Reddit being its staging posts.

* There's the odd hashtag Twitter-rage weirdness that is questionable, though that tends to be a response to specific events and more "organic" rather than organized.
posted by Artw at 10:03 AM on February 5, 2015 [2 favorites]


There are people out there who do this type of harassing not because they are angry or hold some sort of grudge or need to vent.

They do it because they find it entertaining. It's their version of 'fun'. It's about power. There are people who have little conscience. They are sociopathic.

You don't stop this sort of person by talking or some sort of compassion based solution that looks at the root of the issue.

They do this sort of thing because they have the tools readily available to do so, it's easy, and there are no consequences.

Some people just need to have the tool taken away or made less inviting so that other people don't have to be subjected to it.

The root of the issue is that some people in this world are really, really crappy to other people because they just don't care. They're not even capable of really caring.

So yeah, it would be wonderful to help fix the roots that can be fixed, for the people who do vent anger and rage and the like. But the solution also needs to have in mind the person who doesn't give a flying fig about other people beyond what they socially have to pretend to.
posted by Jalliah at 10:06 AM on February 5, 2015 [4 favorites]


Surely one should be able to acknowledge that Twitter building in affordances for third party harassment reporting and streamlining police engagement when appropriate would have noticeable network effects, no?

Well, sure, but I think the point to be argued is whether Twitter grew so much because of its flexibility in allowing shitty people to be really shitty, or in spite of it. I would say the network effects you seek would send Twitter down a slow drain (disclosure: I'd be happy to see that happen).

Some times you build a tool that just maximizes the worst aspect of human interaction. And you don't fix that tool by arguing it can be converted to semi-auto. You just ban the tool.
posted by 99_ at 10:08 AM on February 5, 2015


Well, sure, but I think the point to be argued is if Twitter grew so much because of its flexibility to allow shitty people to be really shitty or in spite of it.
All true, but Twitter is a demonstrably different organism now than during its initial growth phase with different needs and goals. We no longer require the sleep, soft foods, and milk quantities we did during our infancy that were necessary to development. In fact to strictly adhere to them as adults would be suboptimal.
posted by whittaker at 10:14 AM on February 5, 2015


Shunting angry people off to the side doesn't address the anger, it just hides it while it festers and pressurizes.
I hear this kind of argument a lot about online harassment, but AFAIK the science doesn't really support it. When people are forced to cool off and not do anything about their anger, it does eventually go away. And when they express their anger, it just builds on itself, and they become more likely to do something violent later on.

Some of this harassment – and even more overt violence – happens not because the perpetrators were offended, but because they had been allowed to harass earlier and enjoyed it.
posted by aw_yiss at 10:27 AM on February 5, 2015 [22 favorites]


It's the "someone should have given Elliot Rodgers a handjob" argument. No, nobody should have given Elliot Rodgers a handjob, someone should have identified Elliot Rodgers as a threat and put him out if reach of other people and guns.
posted by Artw at 10:36 AM on February 5, 2015 [8 favorites]


There's a difference between saying that Twitter changing might have a positive effect on society and suggesting that Twitter should deal with harassment and trolling by fixing the root causes within society.

Twitter, Facebook, and the like are very powerful tools. More powerful than some would like, maybe, but certainly powerful enough to try some things that might make a positive difference.

I'm certainly not suggesting that they don't do the obvious things to make things better. Hateful speech and bullying shouldn't be normal.

However, there may be some additional idea or mechanism that's not yet been tried, or imagined, that would help even more. Something really new, and probably difficult, that only a powerful platform could provide.
posted by amtho at 10:46 AM on February 5, 2015


*Someone* should address the root causes in society and even in psychology. We're becoming so closely packed in, physically and socially, that ignoring things like this isn't a viable option. And if these kinds of companies don't do it, who will?
posted by amtho at 10:48 AM on February 5, 2015


You are mistaken in assuming that no one is looking into this.
posted by almostmanda at 10:50 AM on February 5, 2015 [9 favorites]


*Someone* should address the root causes in society and even in psychology. We're becoming so closely packed in, physically and socially, that ignoring things like this isn't a viable option. And if these kinds of companies don't do it, who will?

Broadly, the root cause in society and psychology is patriarchal male entitlement made manifest. It's not very complicated at all, these honest-to-god terrorists are just overemotional, hysterical dudes lashing out because they don't want to live in a world that might consider women's voices as equal to men's in weight and credibility. As Lindy West's aforementioned troll showed in spades, they're infuriated by the idea that some of us might be happy with our lives while they're so incredibly miserable with theirs. Or they think women are living too far out from under the thumb of gendered violence, so they want to put the proverbial fear of god into us by any means necessary.

The less we obey, the more we refuse to shut up, the angrier they get. They think we're too big for our britches and it makes them very, very mad. That's really it, no soul searching required; it's depressingly simple. So I'm not terribly interested in figuring out the very deepest reason why these garbage people can't handle the fact that women are human beings, because the root cause has no material bearing on whether or not we can put social and legal ramifications into place to make them stop harassing women out of every on- and off-line space they want to reserve exclusively for vicious misogynists and their enablers (read: literally the whole world).

I guess cookie-cutter sexism turned raw and violent might seem different, more obvious, or more difficult to ignore now that social media platforms have come into existence, but the behavior isn't new. In the harasser's minds, the only real solution is a return to the gender roles of yore, a time when women knew their (weaker, lesser, irrational) place. In my mind, the only real solution is feminism. IBTP!
posted by divined by radio at 11:20 AM on February 5, 2015 [27 favorites]


Yeah, in the GG case in particular if you want to find someone "doing something" about the culture you really just have to look at the people targeted.
posted by Artw at 11:30 AM on February 5, 2015 [3 favorites]


Require a phone number so users can only have one account and you prevent novelty accounts or secondary accounts

No, just allow multiple accounts. Then if one account gets banned, all the others tied to the same phone number die too.
posted by ryanrs at 12:28 PM on February 5, 2015 [5 favorites]
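
A tiny sketch of that cascading ban, assuming (purely hypothetically) that every account is registered against a verified phone number; none of these names or structures are anything Twitter has described.

    from collections import defaultdict

    # Hypothetical registry: phone number -> handles, and handle -> phone number.
    accounts_by_phone = defaultdict(set)
    phone_by_account = {}
    banned = set()

    def register(handle, phone):
        accounts_by_phone[phone].add(handle)
        phone_by_account[handle] = phone

    def ban(handle):
        """Banning one account takes down every account tied to the same number."""
        phone = phone_by_account[handle]
        banned.update(accounts_by_phone[phone])

    register("cupcake_reviews", "+15551234567")   # harmless secondary account
    register("troll_account_42", "+15551234567")  # same person's harassment account
    ban("troll_account_42")
    print(banned)  # both handles die together

The design choice is that multiple accounts stay allowed (novelty and topic accounts survive), but misbehaving with any one of them puts all of them at risk.
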


Yeah, that's not a bad idea. Metafilter lets you have a few sockpuppets if you're well behaved with them. A site I moderate on allows "gimmick" accounts provided you mostly use them for jokes, occasionally semi-anonymous questions, and stuff like that. But if you become a banned user, the gimmick accounts usually go too.

I've seen firsthand the results of a lot of attempts to limit spam and trolling. Some work pretty well, some don't. My all-time favorite is that our forum software (not Metafilter, this other site I'm on) has the ability to "silence" users. They can still post, but no one can see WHAT they post. It's so great. It's rarely used.

Actually, back in the dawn of the internet I was designing community software and we came up with a pretty crucial concept. If you ban someone, they come back. If you make it so that the site just *doesn't work* for undesirable accounts, they shrug and leave. One constant on the internet is that shit doesn't work right. There's always someone who can't log in, or can't post, or whatever, and can't figure out why. If you make the site just degrade into unusability for trolls, I think they might be more likely to just go away.

Like... add a random posting delay between like 10 and 30 seconds for them. Make the site just slooooooow. Make every other page request get dropped. Make it so images or CSS don't load. You know, stuff that happens to EVERYONE sometimes, make it always happen to them.
posted by RustyBrooks at 12:43 PM on February 5, 2015 [6 favorites]
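
As a rough sketch of the "make the site just not work for them" idea (every name here is invented for illustration; this is not any real Twitter mechanism), a decorator that makes requests from flagged accounts slow and flaky:

    import random
    import time

    FLAGGED_ACCOUNTS = {"troll_account_42"}  # hypothetical set of degraded users

    def degrade(handler):
        """Wrap a request handler so flagged users get a sluggish, unreliable site."""
        def wrapped(request):
            if request.get("user") in FLAGGED_ACCOUNTS:
                time.sleep(random.uniform(10, 30))      # random 10-30 second delay
                if random.random() < 0.5:
                    return {"status": 503, "body": ""}  # half the requests just "fail"
            return handler(request)
        return wrapped

    @degrade
    def post_tweet(request):
        return {"status": 200, "body": "tweet posted"}

    print(post_tweet({"user": "normal_user", "text": "hello"}))  # unaffected

Because intermittent slowness and dropped requests happen to everyone sometimes, the degraded user has no clean signal that they've been singled out.
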


On the other hand, the phone number thing means that at some point Twitter will be hacked and the phone number of every account will be publicly posted. Hrm.
posted by ryanrs at 1:21 PM on February 5, 2015 [1 favorite]


Also, who wants the company to have a list of users' phone numbers to sell for advertising or other purposes?
posted by amtho at 1:25 PM on February 5, 2015


One thing that might help is, don't tell a blocked account that it's been blocked. Let Deez___Nuts6969 continue tweeting, unaware that his foolishness is going straight into a trashcan. Sure, he'll figure it out eventually, and that's where other things will come in, but until then he gets his ya-yas out and no one has to hear it.
posted by Legomancer at 1:26 PM on February 5, 2015 [2 favorites]


Legomancer, that sounds good in theory, but in practice results in you seeing one-sided conversations where your friends and followers are arguing with the jackass who is harassing you, so you end up seeing it anyways.
posted by almostmanda at 1:29 PM on February 5, 2015


Also:

A) it'd be somewhat frustrating if Deez___Nuts6969 has been behaving abusively and it looks like nothing has happened to their account.
B) Deez___Nuts6969 probably cares a good deal less about hellbans if they have Deez___Nuts7000 to 7100 to play with also.
posted by Artw at 1:41 PM on February 5, 2015 [2 favorites]




Require a phone number so users can only have one account and you prevent novelty accounts or secondary accounts that are focused on a certain topic which is another value to Twitter.

1) My account about ME! (No flags, no response)
2) My account about CUPCAKES! (No flags, no response)
3) My account about Nazi Hitler Vagina Raping and Poo! (10^7 flags, all accounts BANNED FOR LIFE!)


Also, who wants the company to have a list of users' phone numbers to sell for advertising or other purposes?


Every 2 red flags=your phone # sold to 1 telemarketer. "Hi! This is Rachel from cardholder services!"

Like... add a random posting delay between like 10 and 30 seconds for them. Make the site just slooooooow. Make every other page request get dropped. Make it so images or CSS don't load. You know, stuff that happens to EVERYONE sometimes, make it always happen to them.

1)Such page. Very machine language. SO unicode.
2)Why is every picture Tubgirl?
3)Where are all these pop-up ads coming from? TRY OUR NEW SERVICES!
4)Why is my hard drive suddenly making so much noise?
5)Great. A Twitter toolbar, just what I needed. Fuck this! Oh look, another one.
6)Ding-dong! Hi, we're from the FBI. It seems you've been torrenting a LOT of kiddie porn. Didn't know about it? Sorry, but we're STILL going to have to take your computer.
7)Ding-dong! Hi, we're from the FBI. It seems you've been torrenting a SPIDER-MAN 3. Didn't know about it? Sorry, but we're STILL going to have to take you to Guantanamo Bay forever and ever.
posted by sexyrobot at 2:19 PM on February 5, 2015


It sounds a bit like how in Malaysia, before you can legally obtain a SIM card, you need to register your national identity card to it. This means there is no such thing as an anonymous phone call, because all SIM cards are tied to a national registry. I suspect this was done to cripple organized crime.

Internationally I'm not sure how identity verification could work - maybe something like how Blizzard does it. Blizzard verifies users via credit cards (this was years ago, but even if you wanted to use anonymous scratch cards to recharge your WoW credit, you still needed to verify your account with a credit card). Identity verification is critical in their business for account recovery: if you can't "prove" who you are, they can't recover your account for you if you get hacked, and having a credit card seems to be a good standard.
posted by xdvesper at 2:28 PM on February 5, 2015


sexyrobot, your plan to solve online harassment is difficult to distinguish from online harassment
posted by ryanrs at 3:09 PM on February 5, 2015 [2 favorites]


Lol :D ...well, what's good for the goose is good with the shoe on the other foot ;)
posted by sexyrobot at 3:14 PM on February 5, 2015


That shoe is diseased.
posted by Artw at 3:19 PM on February 5, 2015 [3 favorites]


Re: lesser degrees of anonymity as a fix for harassment - I agree it's part of the solution, but taking a look over at Facebook, I think it's clear it's not the entire solution.
posted by Artw at 3:22 PM on February 5, 2015


This might not work for the bad apples who devise premeditated and carefully crafted hate, but for the average person online, creating a buffer for your posts before they go live is a good idea. I often find myself drafting a Mefi comment and then copying it to my notepad instead of posting it. I come back to it later and decide if it would have added anything to the conversation or not (usually not).

Twitter isn't a synchronous conversation anyway (such as a chat). E.g., users could have to wait an hour or 6 hours or 24 hours for their post to go live, depending on their content and previous use record. A lot of commercial media isn't disseminated live anyway. The wait time would not stop the patient sociopaths, as I said, but it would stop the impulsive users.

I think when telepathy is invented, many will immediately keel over with aneurysms from experiencing directly how vile some people's thoughts are.
posted by bad grammar at 4:12 PM on February 5, 2015


I didn't describe what I was suggesting well enough. What I mean is, Twitter determines, after a number of complaints, that Deez___Nuts6969 has been abusive. Instead of killing the account and making the guy start up the next one, you throw him into a mode where, as far as he's concerned, he's still tweeting away, but nothing he writes is actually going to anyone. The only account that sees anything in his timeline is his. It only works if he's logged in, and it's more trouble than it's worth (and once the word gets out this is happening, they'll be on the alert for it), but I for one would be curious to see how long it takes these assholes to figure out they've been canned in this way.
posted by Legomancer at 4:24 PM on February 5, 2015
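
A minimal sketch of that visibility rule (hypothetical, not an actual Twitter feature): tweets from a silenced account stay visible to their author and to no one else.

    SHADOWBANNED = {"Deez___Nuts6969"}  # hypothetical set of silenced accounts

    def visible_timeline(tweets, viewer):
        """Drop tweets from shadowbanned authors unless the viewer *is* the author."""
        return [t for t in tweets
                if t["author"] not in SHADOWBANNED or t["author"] == viewer]

    tweets = [
        {"author": "Deez___Nuts6969", "text": "you'll regret this"},
        {"author": "friendly_user", "text": "nice weather today"},
    ]

    print(visible_timeline(tweets, viewer="Deez___Nuts6969"))  # author still sees both
    print(visible_timeline(tweets, viewer="anyone_else"))      # everyone else sees one
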


Twitter isn't a synchronous conversation anyway (such as a chat). E.g., users could have to wait an hour or 6 hours or 24 hours for their post to go live, depending on their content and previous use record. A lot of commercial media isn't disseminated live anyway. The wait time would not stop the patient sociopaths, as I said, but it would stop the impulsive users.

Twitter pretty much IS chat for most people, and an artificial lag would be unacceptable for them.
posted by Artw at 4:26 PM on February 5, 2015 [1 favorite]


It only works if he's logged in

This pretty much falls down at that, since they most likely are hopping between accounts rather than waiting on some reply. It might make their routine less efficient, as sometimes they'll hop onto dead accounts and not know it, but as you say, that goes away as soon as they become aware that this is a thing that can happen.

Hellbans are basically a fun small-time board idea that assumes a different paradigm of trolling.
posted by Artw at 4:31 PM on February 5, 2015


Technically savvy users are really hard to stop, but I'm sure twitter has a very large number of people working on uniquely identifying readers and posters. This department is called "advertising", and I'm sure twitter and their advertisers know who you are within 10 clicks of browsing.
posted by benzenedream at 9:28 PM on February 5, 2015 [3 favorites]


Entrepreneur Jason Calacanis told CNBC's "Halftime Report" on Friday that the company will create a new revenue source called "Verified Twitter," he said, "where anyone would be able to verify their account for $1 a year or something like that."

Twitter declined to comment.

posted by infini at 10:37 AM on February 10, 2015


Interesting considering Twitter currently offers a service for Verified accounts where they can choose to only see mentions from other Verified accounts. So William Shatner will get notified if Lady Gaga tweets him but not Joe Schmoe.
posted by smackfu at 11:47 AM on February 10, 2015


There goes the entire premise of Twiplomacy...
posted by infini at 12:39 PM on February 10, 2015




This thread has been archived and is closed to new comments