The Black Feminists Who Saw the Alt-Right Threat Coming
April 23, 2019 7:59 PM   Subscribe

Before Gamergate, before the 2016 election, they launched a campaign against Twitter trolls masquerading as women of color. If only more people had paid attention. In 2014, Shafiqah Hudson noticed an odd hashtag purporting to be from black feminists arguing against Father's Day. But the language these accounts were using read to her as a parody of AAVE, and some of the photos were of people she knew didn't use Twitter. This sent her and I’Nasah Crockett down a racist rabbit hole that ended at 4chan, right before Gamergate.

the corrective rise of #YourSlipIsShowing is the story of a community that, mostly ignored by institutions, chose to fight back with the limited tools available. But despite all their efforts, Hudson, Crockett, and the other black feminists watched as the very same 4chan boards that had birthed #EndFathersDay spawned a new misogynistic harassment campaign mere months later: Gamergate. They watched as women like Zelda Williams and Zoë Quinn were aggressively bullied by accounts using many of the same tactics deployed during #EndFathersDay. And eventually, they watched as the 2016 election campaign unfolded and the very same forces that had been antagonizing them for years rebranded themselves as the alt-right.
posted by Homo neanderthalensis (57 comments total) 89 users marked this as a favorite
 
This piece shows how deep the rot is at Twitter. They have known since 2014 that brigading is something that happens - and they refuse to do anything about it.
posted by NoxAeternum at 8:02 PM on April 23 [23 favorites]


I argue that twitter is doing something ‘about it.’ They’re encouraging and supporting brigading because twitter makes money from that behavior.

Twitter is working as designed. They just get to make some mouth noises about how it’s hard to moderate and free speech and the algorithm is color blind.

Twitter will never make an effort to stop harassment but they will happily ban folks who speak out.
posted by bilabial at 8:24 PM on April 23 [53 favorites]


I heart these awesome and brave women. Instead of being driven off Twitter due to harassment, they identified the source of the harassment.
Crockett, meanwhile, started Googling around, and soon enough, she found the source from which #EndFathersDay had sprung: the original 4chan post outlining the fake hashtag as part of a crusade by men’s rights activists, pickup artists, and miscellaneous misogynists hoping to capitalize on previously existing rifts in the online feminist movement related to race and class. On 4chan threads, users said things like “I’ve had hundreds of nigs chimp out at me over this [fake tweet]. This turned out way better than expected :)” and “the more you do it the less effective it is going to be when we launch a proper attack. making them question each other is great but i want to make them hate each other.” Using the tag #YourSlipIsShowing, Crockett posted screenshots from 4chan to Twitter as evidence of the premeditation behind the hoax.
posted by spamandkimchi at 8:59 PM on April 23 [13 favorites]


Twitter is working as designed.

Design sucks, too.
posted by Revvy at 9:24 PM on April 23 [7 favorites]


Social media was a mistake.
posted by RakDaddy at 10:44 PM on April 23 [31 favorites]


Being able to avoid social media yourself (which many can't, if it's part of their revenue model) doesn't fix the fact that the many people who won't avoid it are still being influenced by false narratives.
posted by bootlegpop at 1:45 AM on April 24 [16 favorites]


You can live an amazing life without Twitter or Facebook.

Maybe you can, but there are a lot of people - many of whom are marginalized - who can't, because social media gives them the community that they can't find elsewhere, and they need our protection. To quote a line from a favorite game of mine: "To ignore the plight of those one might conceivably save is not wisdom—it is indolence."
posted by NoxAeternum at 1:56 AM on April 24 [56 favorites]


* Tumblr and Twitter were where she encountered a network of black women who were genuinely interested in ideas about race and culture in a way she felt her university was not.
* Crockett knew that Tumblr posts from so-called social justice warriors would often find their way onto 4chan, where trolls would coordinate brigading campaigns ...
* There is only one mention of Facebook. That's tangential. The article doesn't really compare whether some platforms have managed to do better than others (or not?). It tells us about an overlooked moment of history. It's interesting because we're still dealing with the same problems.

If burning your social media accounts is the right solution for you and your contacts, great. I approve of making conscious decisions.

Sometimes, when technology is built by big capital, they build things that you want. And you don't have the capital to replace it, to fix the parts you don't want on your own.
posted by sourcejedi at 2:01 AM on April 24 [7 favorites]


Twitter could definitely do more about direct harassment, but I'm not sure how they could meaningfully deal with the kind of false-flag attacks that the article is mostly about, or at least not in a way that would (a) also work for slightly more sophisticated attacks (e.g. using more credible language, not using easily recognisable stolen photographs, not organised on a public website, not focussed on a particular hashtag-linked campaign) and (b) be scalable to all of the many environments around the world (in many different languages and cultures) where they operate.

Actually I suspect the trolls have moved on to the more sophisticated approach. You can see it on reddit if you're unfortunate or unwise enough to stumble on a sub that the trolls consider to be worth trolling: there'll be posts from newish accounts claiming to be alts (because often you do need an alt account to post something remotely controversial on reddit without being doxxed) which seem to have been carefully put together in just the right way to inflame existing divisions and get people fighting, but since they're on-topic and polite and mostly plausible they're very difficult to moderate out without a lot of false positives.

Facebook is different, because to participate in Facebook you need to have more of an actual network of acquaintances and if you're too obnoxious you'll just end up talking to people who already agree with you. Which is a problem, but not the same problem. I don't think these kinds of attacks are really possible there, or if they are they'd be much more work. But I don't know how much Twitter could really do without becoming a lot more like Facebook - more exclusionary, more insular, more privacy-intrusive - with all of the problems that would entail.
posted by A Thousand Baited Hooks at 3:00 AM on April 24 [7 favorites]


So you're saying we can't counteract these more sophisticated campaigns because they disguise themselves better, then that the new, more sophisticated campaigns are recognisable because [description]. That sounds to me like you absolutely can do something here - delete the offending content, ban the offending users. The same thing twitter could've done, and still could do. That there theoretically could be more sophisticated problems in the future that can't be dealt with in the same way is no reason not to do the straightforward thing in dealing with the problems we have now.
posted by Dysk at 3:11 AM on April 24 [11 favorites]


Take care to distinguish from (caricature) "Social media is a waste of time anyway. If you don't pay attention to it, then it can't hurt you." Because it's a common argument from conservative politics. It is wrong and in bad faith. You can take care of yourself however you need to. That doesn't disappear the alt-right, foreign influence campaigns from Russia, and the votes cast for Trump, Brexit, or Orbán.

Or, to read the article: cable news channels picking up the fake "feminist" accounts, as easy outrage fodder and to promote their owners' politics.
posted by sourcejedi at 3:15 AM on April 24 [8 favorites]


I'm not sure why I still have Twitter at this point. Even though my feed is curated and I use lists to organize my interests and streamline the content I consume, I'm usually anxious when I do log on there these days (and I'm now only checking in once every week or two).

Ugh. Great post but the ugh is directed at how awful humans are. Just ugh.
posted by Fizz at 3:31 AM on April 24 [6 favorites]


So you're saying we can't counteract these more sophisticated campaigns because they disguise themselves better, then that the new, more sophisticated campaigns are recognisable because [description]. That sounds to me like you absolutely can du something here - delete the offending content, ban the offending users

"but since they're on-topic and polite and mostly plausible they're very difficult to moderate out without a lot of false positives."

(Just to be clear - I'm not trying to defend Twitter. I think it's basically unsalvageable.)
posted by A Thousand Baited Hooks at 3:47 AM on April 24 [3 favorites]


I'd be willing to take more false positives over what we have now. That's also allowing for a lot of advantages to the trolls - going much stricter on new accounts, for example, makes it harder for them to pull off. And even then, sophisticated false-positive inducing behaviour on reddit does nothing to excuse twitter not dealing with much more straightforward attacks on their platform.

I don't think this problem is as hard to deal with as it's often suggested. It's likely expensive in terms of employee hours, but that's a different thing. Being willing to go "no, fuck these people and fuck 'innocent' people who agree with them" would go a long way over wringing hands about how to get the trolls while making sure to leave the genuine reactionaries/racists/Trumpers/whatever alone and with a good enough impression of your site that they're not leaving in disgust at your policies. That is the problem with Twitter as I see it. They could implement much better moderation without it being too complicated, they'd just have to alienate a good portion of what they no doubt see as valuable users and fellow travellers. Be willing to fuck them all off, and it all becomes much easier.
posted by Dysk at 4:20 AM on April 24 [7 favorites]


there are a lot of people - many of whom are marginalized - who can't, because social media gives them the community that they can't find elsewhere

No it doesn't. The Internet gave them the communities they couldn't find elsewhere. Social media supplanted those communities, and replaced them with toxic noise.
posted by Cardinal Fang at 4:42 AM on April 24 [28 favorites]


Current (2019) Leader of the US Political House-Of-Cards recently met with Jack Dorsey (co-founder of Twitter), and their meeting seemed cosy, so perhaps that's why certain users feel more at home on Twitter - it's by design, and this won't change unless the current company leadership changes.

Source: https://www.bbc.co.uk/news/world-us-canada-48033037
posted by Faintdreams at 4:50 AM on April 24 [1 favorite]


The Internet gave them the communities they couldn't find elsewhere. Social media supplanted those communities, and replaced them with toxic noise.

What's true for you isn't true for everyone. A lot of us still get great use out of social media. If you stay off open groups and have a not batshit friends list, Facebook can be great. Inherently open platforms like twitter rely on being able to fly under the radar to a greater extent, but that is not all social media.
posted by Dysk at 5:10 AM on April 24 [21 favorites]


They have known since 2014 that brigading is something that happens

We've known since the 90s. Usenet existed, after all - and all this same shit happened there, too.

I signed up for twitter in 09, I think. Quit using it, because it lacked features that Outlook Express had in 1998 for managing abuse.

And the "trending" was as gameable as any local radio station internet poll.

The groper in chief was complaining about lost followers and not any of these solvable problems...

We are ruled by morons.
posted by Pogo_Fuzzybutt at 5:31 AM on April 24 [6 favorites]


If you find yourself welling up with an acute case of "What is Twitter even for?", please note that TFA has a pretty nice section on how much the online black feminist community means to the women behind #YourSlipIsShowing.

Great piece, with plenty more and better to chew on than another round of arguing whether social media should even exist.
posted by DirtyOldTown at 5:45 AM on April 24 [39 favorites]


Ban the Nazis, @jack.

That said, father's day is a pretty lame "BUT WHEN IS INTERNATIONAL MEN'S DAY" kind of holiday, and I am made really uncomfortable by it each year.
posted by rum-soaked space hobo at 6:04 AM on April 24 [1 favorite]


I'd be willing to take more false positives over what we have now. That's also allowing for a lot of advantages to the trolls - going much stricter on new accounts, for example, makes it harder for them to pull off. And even then, sophisticated false-positive inducing behaviour on reddit does nothing to excuse twitter not dealing with much more straightforward attacks on their platform.

I think this is really underestimating how tricky it can be to identify sufficiently skillful trolls, especially concern trolls who are prepared to be more subtle than the idiots in the linked article.

For example (warning: various kinds of offensive nonsense), take this, or this, or this, or even this. How do we work out who gets banned for each of these? (If the answer depends on identifying the true trolls according to how much we think we disagree with their politics, how do we (a) make sure that the rules are always enforced by people who agree with us, and (b) translate that approach to the rest of the world?) And how do we do so without giving trolls another weapon to use against their victims?
posted by A Thousand Baited Hooks at 6:20 AM on April 24 [3 favorites]


I was at RSA this year and listened to the people at Facebook and twitter talk about detecting and mitigating these kinds of actions and I was unimpressed with their approaches. They have all decided that “content” is a weak signal and that social network analysis is a better signal. Really “more useful end user tools that allow me to filter and curate how the sort algorithm serves up social network interactions” is how this is solved. But twitter and Facebook don’t want to have an open sort algorithm.

The reason is that they are really invested in maintaining the status quo where the sort algorithm is their IP (it isn’t, sort will eventually become a commodity like everything else). They are only addressing this as an edge case problem with bad users exploiting their sort algorithm, they do not see it as a core platform problem with how they believe they are the benevolent providers of a sort algorithm and we should all feel grateful for that. Let me have a choice with how your sort methods interacts with me.

To do that requires twitter and Facebook (and every company with sort algorithms) to get over themselves a little bit and start building tools for humans alongside the tools they are building for lining the pockets of VC.
posted by nikaspark at 6:24 AM on April 24 [8 favorites]


Twitter's problem reminds me of when Google started getting gamed by SEO. Google's web indexing innovations were fantastic: automatically curating web content, and assessing the worthiness of a web page by counting the number of incoming links. Once people figured out how to exploit that, though, it was a race to the bottom that still hasn't ended. Twitter is pretending that there's some magic way to distinguish good content from bad. There isn't! If the incentive is great enough – and we've just learned how great that incentive might be – the people gaming the system will put in any amount of effort to undermine the system. They're not going to find a technological fix; we're all screwed.
posted by Joe in Australia at 6:44 AM on April 24 [5 favorites]


I’m a principal security architect for a large platform company and if it helps at all Joe in Australia, I am raising massive hell about this stuff.
posted by nikaspark at 7:11 AM on April 24 [6 favorites]


I think this is really underestimating how tricky it can be to identify sufficiently skillful trolls, especially concern trolls who are prepared to be more subtle than the idiots in the linked article.

Which is great and all, but doesn't address the fact that no attempt is being made to do anything about even the ones that are obvious. Which is most of them. "Oh but it can be difficult" feels like a non-sequitur in the face of the easy shit not being touched.
posted by Dysk at 7:23 AM on April 24 [12 favorites]


(And if all future trolling becomes the super-subtle stuff in response to action on the rest, that's a net good in itself: the subtle stuff is subtle by virtue of being less immediately objectionable or offensive. If all internet bullshit were less immediately objectionable or offensive, that would be a huge improvement over now.)
posted by Dysk at 7:24 AM on April 24 [13 favorites]


The article says Twitter updated their fake accounts rules, which actually surprised me. When I last tried reporting something like this, I went to the "Impersonation" section, but it lists:
  • An account is pretending to be me or someone I know.
  • An account is pretending to be or represent my company, brand, or organization.
It doesn't list "this person is pretending to be of an ethnicity or ideology they don't understand."

So apparently it's under "suspicious or spam" instead of "impersonation" and I never thought to look there.
posted by RobotHero at 7:41 AM on April 24 [1 favorite]


That said, father's day is a pretty lame "BUT WHEN IS INTERNATIONAL MEN'S DAY" kind of holiday, and I am made really uncomfortable by it each year.

Someone once pointed out that Father's Day is nine months before Mother's Day, which was quite telling.
posted by acb at 7:52 AM on April 24 [1 favorite]


Someone once pointed out that Father's Day is nine months before Mother's Day, which was quite telling.

Except that... it's not? (Isn't it 11 months before Mother's Day?)
posted by clawsoon at 7:58 AM on April 24 [2 favorites]


Mother's day isn't the same date everywhere in the world.
posted by Dysk at 8:00 AM on April 24 [1 favorite]


And isn't 11 months before more intuitively understood as one month after?
posted by Dysk at 8:00 AM on April 24 [4 favorites]


Just from my perspective, when 4chan did #droptheb, they appropriated debates and language that had been floating around LGBTQ communities for decades. Documenting how the visual memes originally appeared on 4chan and that the trolls had minimal social connections prior to #droptheb activism was necessary for sussing out the bad-faith actors in that episode.
posted by GenderNullPointerException at 8:12 AM on April 24 [5 favorites]


Which is great and all, but doesn't address the fact that no attempt is being made to do anything about even the ones that are obvious. Which is most of them.

Is it most of them? How would we know?

"Oh but it can be difficult" feels like a non-sequitur in the face of the easy shit not being touched.

I had hoped it was reasonably clear that I was talking about this general kind of attack and not these specific accounts, which are pretty obvious now they've been pointed out. But sure, it would be great if Twitter took these ones down, along with any others that are as obvious.

(And if all future trolling becomes the super-subtle stuff in response to action on the rest, that's a net good in itself: the subtle stuff is subtle by virtue of being less immediately objectionable or offensive. If all internet bullshit were less immediately objectionable or offensive, that would be a huge improvement over now.)

You mean like the more sophisticated Russian psy-ops stuff? Um... maybe.
posted by A Thousand Baited Hooks at 8:22 AM on April 24


because often you do need an alt account to post something remotely controversial on reddit without being doxxed)

This, right here, is why I think we can safely say our social media giants give zero fucks about safety.

Imagine if doxxing got you a permaban. Imagine if engaging in a false information campaign got you a ban on all of your alts.
posted by corb at 8:24 AM on April 24 [11 favorites]


I had to read the article in chunks, it was that infuriating.

Black women have pointed out for years that they were the test run for Gamergate, and it's good to see that publicized in mainstream media.

Twitter is wonderful, but it's despite Jack and the other leadership, not because of them.
posted by JawnBigboote at 10:14 AM on April 24 [7 favorites]


which are pretty obvious now they've been pointed out.

Pretty obvious to us, but if the point is more fodder for right-wing "look at these crazy feminists" shtick, not obvious enough. How obvious a fake is depends on the expectations of the viewer.

One defining example for me: back in the day when I was arguing on a forum, someone objected to his stance being described as the "anti-feminist" position. He agreed with feminists as long as they were what he saw as "reasonable." Then, to illustrate what he thought we would all agree was a feminist being unreasonable, he linked to what was, obviously to me, a fake blog. And I responded that now I think he's anti-feminist, because he thinks so little of feminists that he couldn't tell it was a fake.

The fake blog was using an avatar that looked like it would show up on a stock photo site if you searched for "angry black woman." It had also posted a "quote" from Andrea Dworkin saying "All heterosexual intercourse is rape," with the comment "Right on!"
posted by RobotHero at 10:16 AM on April 24 [6 favorites]


This is at least the third big article or discussion/think piece I've seen about this. At the time, even on reddit there was /r/ShitRedditSays and a good-sized community pointing out the patterns in this stuff.

What's a common thread with all of these is how good they are at making it a middle-school-principal sort of "well, you can't have a fight without two fighters" discussion, instead of "these people are acting poorly". These guys are BRILLIANT at accusing the other side, and making it stick, of persecuting them, having a vendetta, hating free speech, being dishonest about their reasons for accusing them of this, and it being part of some bigger outside fight where this is a "gotcha". They successfully got the peanut gallery to label the people trying to shine light on this as trolls/brigaders/etc essentially everywhere, until they had built up enough mass that the dam broke and Gamergate happened.

Looking back on this and trying to decide where it started, and where I first saw these nerds coalescing into something worse, is really hard to put a finger on, but it wouldn't be 2010, it would be more like 2005. And the entire time all we heard was "just ignore them, giving them attention makes it worse, you're letting them win by obsessing over them like this and playing their game"

"Don't feed the trolls!"

It feels awful to say "toldja so", but honestly it's more like, why didn't you listen to all the different groups of marginalized people saying this was Bad Actually and not just like, annoying?
posted by emptythought at 10:41 AM on April 24 [12 favorites]


I’m a principal security architect for a large platform company and if it helps at all Joe in Australia, I am raising massive hell about this stuff.

Not joe, but this actually does make me feel a little better
posted by schadenfrau at 10:55 AM on April 24 [4 favorites]


These guys are BRILLIANT at accusing, and making it stick, the other side of persecuting them, having a vendetta, hating free speech, being dishonest about their reasons for accusing them of this, and it being part of some bigger outside fight where this is a "gotcha".

The problem is that we've created a culture primed for this. There was one part of this disorganized "argument" against NZ ISPs blocking 4/8chan that stood out for me:
Censorship also calls attention to what’s being censored, since people will probably be curious about what was blocked—but in this case without a good reason steeped in company policy. And when an internet provider blocks a website full of hate, users of that site can cry censorship, politicizing and potentially strengthening their community. “Blocking like this allows people to say, ‘Our speech is being censored,’ and therefore you are riling up a community who can go elsewhere on the web, and then they’re connecting and congregating around a ‘We’ve had our voices silenced’ line. And then there becomes kind of a victim narrative here that can act as a recruiting mechanism,” said Claire Wardle, a TED fellow and executive director of First Draft, an organization that helps journalists and researchers find and study disinformation.

These users would have a point. As disturbing as parts of 4chan and 8chan are, plenty of corners on those sites aren’t used for hate. 4chan has thriving message boards of anime fans, video gamers, pornography, and advice topics. Blocking an entire website silences those groups, too, that may not agree with the hate groups on 4chan but would oppose blocking of the site. This is the case with any kind of blunt blocking—more people will be affected than are at fault.
We've created a culture where people on a board infamous for conducting wide scale harassment campaigns can cry 'censorship' when people respond to that abuse - and be taken seriously. This is part of the problem, and it needs to change.
posted by NoxAeternum at 11:10 AM on April 24 [6 favorites]


The social/political hackers of social media (and thus our culture) took advantage of the institutional, systemic, and cultural racism that already exists to do an end run around thoughtfulness, critical thinking, real mindfulness, (and whatever else you want to call it), which is basically our only defense against propaganda and manipulation.

I'm an activist, and I've been watching this unfurl (with dismay, for sure) since I started stopping being childlike, so let's call it the late 1970s. I, personally, think that we'll stay vulnerable to this kind of attack, propaganda, and manipulation, because it relies on, at minimum, our inherent (and largely unacknowledged) bias and prejudice, as well as how we've structured our socioeconomic meritocracy. I think we'll have to dismantle capitalism's power over us as well as get a lot better at not being so reflexively prejudiced before we'll be able to stop ourselves from buying into non-subtle, partly AI-generated propaganda. Let alone being able to be on guard for the more subtle, well-crafted manipulations.

I think that it's possible. In activist circles, there are folks who are self-directed, critical thinkers, who keep their ethics pretty consistent and non-Balkanized (so, e.g., they are freely open-minded, supportive of other people, sharing, kind, a little or a lot socialist, and don't have worrying hypocrisies, like being all these things but also white and TERFy). It's possible to become this kind of person. But you really have to want it. I'm not sure if social policy can bridge that gap. In my experience, getting "woke" only happens with hard work and immersion and humility, and I know for a fact that a lot of us are short on any or all of these resources.
posted by kalessin at 11:30 AM on April 24 [6 favorites]


Just last week one of my local newspapers ran a column on the pushback against the State's Attorney in Cook County, Kim Foxx, over the Jussie Smollett hoaxing thing, which has attracted every Trumper in America for the opportunity to attack a black woman (who may or may not have fucked up). They published letters to the editor with names and locations, and I thought to myself "I wonder who these people really are?" and fired up google. The first name-and-location combo returned an obituary from the same week. The second name, an obituary from two years ago. Other results were just your usual name-SEO, classmates-style spam results, but I did contact the newspaper with an email pointing out the two results and admitting it could just be a coincidence, but that I thought they should know.

I learned to question letters to the editor, the earliest of social media outlets, back in the late nineties while in grad school because the local paper ran a letter from "a concerned citizen" who opposed the city's proposed ban on animal acts in circuses.

That 'concerned citizen' was an animal learning researcher and full professor at the university, in my department. He had two capuchin monkeys in a ridiculously small cage in his lab, and their stimulation was a radio playing CBC Radio 2, left on overnight. I'm sure it was just an accidental oversight that he left all his credentials off the letter.
posted by srboisvert at 11:57 AM on April 24 [21 favorites]


I'm not sure how they could meaningfully deal with the kind of false-flag attacks that the article is mostly about, or at least not in a way that would (a) also work for slightly more sophisticated attacks (e.g. using more credible language, not using easily recognisable stolen photographs, not organised on a public website, not focussed on a particular hashtag-linked campaign) and (b) be scalable to all of the many environments around the world (in many different languages and cultures) where they operate.

This is admittedly an emerging area where there's a sort of ongoing cat-and-mouse game, but if you're the operator of the social network there is a lot you can do. Having your users catch a troll brigade is, or ought to be anyway, frankly embarrassing.

There are a lot of things you can look for: multiple accounts claiming to be different people in different locations posting from the same IP address (the platforms all log IPs, they're not publicly shown); multiple accounts retweeting/amplifying each other that were all created at the same time; accounts that are supposedly located in different places but only post at certain times of day that don't match their supposed locations (because they're all being posted by people working 9-5 at the troll farm in Russia or wherever); accounts whose followers/followees form a basically closed social graph, i.e. they're circle-jerking each other without many external connections... that's just the first-level stuff. I don't know much about Twitter internally, but Facebook has a whole AI/ML research division that supposedly is working on ways to model authentic vs. inauthentic user behavior, down to stuff like word-frequency analysis, time between original posts and re-share/amplification, etc.
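To make the first two signals concrete, here's a toy sketch. This isn't anything Twitter or Facebook actually runs; the field names (`id`, `ip`, `created_at`) and thresholds are made up for illustration, and real systems would work over vastly more data and signals:

```python
from collections import defaultdict
from datetime import datetime, timedelta

def flag_coordinated(accounts, shared_threshold=3,
                     creation_window=timedelta(hours=1)):
    """Return the IDs of accounts that trip two naive coordination heuristics.

    Signal 1: several distinct 'people' all posting from one IP address.
    Signal 2: a cluster of accounts all created within a short time window.
    """
    flagged = set()

    # Signal 1: group accounts by IP; flag any IP shared by
    # `shared_threshold` or more accounts.
    by_ip = defaultdict(list)
    for acct in accounts:
        by_ip[acct["ip"]].append(acct["id"])
    for ids in by_ip.values():
        if len(ids) >= shared_threshold:
            flagged.update(ids)

    # Signal 2: sort by creation time and flag runs of accounts
    # created within `creation_window` of the previous one.
    ordered = sorted(accounts, key=lambda a: a["created_at"])
    run = [ordered[0]] if ordered else []
    for acct in ordered[1:]:
        if acct["created_at"] - run[-1]["created_at"] <= creation_window:
            run.append(acct)
        else:
            if len(run) >= shared_threshold:
                flagged.update(a["id"] for a in run)
            run = [acct]
    if len(run) >= shared_threshold:
        flagged.update(a["id"] for a in run)

    return flagged
```

The thresholds are exactly the knob the thread keeps circling: loosen them and you catch more brigades but sweep up more false positives (shared NATs, sign-up surges after a news event); tighten them and the sock puppets slip through.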

There is a ton that network operators can do to crack down on coordinated propaganda campaigns and brigading, which is really just an amateur-hour version of same. That they aren't doing anything about it is because they don't want to or don't care to, not because they can't.
posted by Kadin2048 at 12:28 PM on April 24 [14 favorites]


Maybe I just don't use Twitter enough, but I thought it was kind of the point that it doesn't have the same kind of model of authentic/inauthentic communication as a place like Facebook - that if you want to set up an account that periodically tweets historical climate data or run some kind of alternate reality game or have your own little insular community with few connections to the outside world, you could do that as long as you didn't engage in particular kinds of spamming or harassment.

The kinds of techniques you're describing make sense for a social network that's meant to be built on more clearly defined kinds of interpersonal communication between individuals, like Facebook. No doubt Twitter could use them to deal with coordinated propaganda campaigns and brigading as well. But for the very specific kinds of attacks I was talking about - where someone is adopting an identity and behaving in ways designed to discredit that identity - it's a much harder problem unless Twitter is going to turn itself into Facebook and expect its users to be their authentic selves, which is what I've been saying all along.
posted by A Thousand Baited Hooks at 5:27 PM on April 24


But for the very specific kinds of attacks I was talking about - where someone is adopting an identity and behaving in ways designed to discredit that identity - it's a much harder problem unless Twitter is going to turn itself into Facebook and expect its users to be their authentic selves, which is what I've been saying all along.

Except that it isn't, because - as the OP points out - most times the contempt shines through their act. These are not sophisticated attackers trained to be able to sublimate their identities to effectively portray their opponents.
posted by NoxAeternum at 9:22 AM on April 25 [3 favorites]




Except that it isn't, because - as the OP points out - most times the contempt shines through their act. These are not sophisticated attackers trained to be able to sublimate their identities to effectively portray their opponents.

Nope! See this comment.
posted by A Thousand Baited Hooks at 3:58 PM on April 25


By now everyone knows that Twitter won't crack down on Nazis algorithmically, like they do ISIS, because it would ensnare Republicans. That, and the fact that Facebook gave up on fact-checking by mainstream news organizations [rather than the "Daily Caller"-related "checkers" they have now] because of pressure from Republican legislators. Right now, I guess it is a Nazi-friendly world. Just think of how the women in this group were probably told to just ignore Nazis (and they will go away) not very long ago. What should our next step be?
posted by RuvaBlue at 10:59 PM on April 25 [6 favorites]


I think working with legislators like AOC is likely to be most effective. For long term change, work with young progressives.
posted by kalessin at 6:41 AM on April 26 [1 favorite]


Jack Dorsey to Rep Ilhan Omar: "Go fuck yourself":
Omar pressed Dorsey to explain why Twitter didn’t remove Trump’s tweet outright, according to a person familiar with the conversation who spoke on condition of anonymity because the call was private. Dorsey said that the president’s tweet didn’t violate the company’s rules, a second person from Twitter confirmed.

Dorsey also pointed to the fact that the tweet and video already had been viewed and shared far beyond the site, one of the sources said. But the Twitter executive did tell Omar that the tech giant needed to do a better job generally in removing hate and harassment from the site, according to the two people familiar with the call.
posted by NoxAeternum at 11:07 AM on April 26 [3 favorites]


The Algemeiner: How Much Did YouTube and PayPal Make From Owen Benjamin’s Jew-Hatred?
It happens every day, even on weekends.

Sometime in the late afternoon or early evening West Coast time, several thousand people log onto YouTube’s livestream service to watch a failed comedian rant about the Jews from his backyard in Gig Harbor, Washington. Sometimes he rants for less than an hour, but before he swore off drinking a few weeks ago, he would rant for more than three hours at a stretch.

As the comedian, who goes by the name of Owen Benjamin, rants about the evil, hate-worthy Jews who are responsible for all the problems of the world, his supporters make comments on YouTube’s live chat stream that vilify and demonize Jews. They type things like “With Jews You Lose,” “Jews became their own god,” “The bible is a jew scam,” “MUZZIES ARE PWNS OF JEWS,” and “BOLSHEVIK JEWS ZIONIST JEWS SAME CONTROL FREAKS.” [...]
According to the report, YouTube was making around $12,000 a month, and Benjamin was making around $30,000.
posted by Joe in Australia at 5:08 AM on April 27 [6 favorites]


That right there is literal blood money.
posted by kalessin at 6:30 AM on April 27 [7 favorites]


Yes, and it's frighteningly profitable. I listened to a bit of his rants. He describes himself as a comedian, but I think that's a way to deflect criticism: he's not at all funny, not even in a racist-joke way. If he's telling the truth about his earnings - tens of thousands a month from poorly produced stream-of-consciousness videos of Benjamin talking into a camera - then there's a scary number of people who want to promote racism, and are willing to pay to make that happen.
posted by Joe in Australia at 8:24 AM on April 27 [2 favorites]


Jack Dorsey to Rep Ilhan Omar: "Go fuck yourself":

I think Dorsey is so bone stupid he doesn't even know he's a Nazi (and I also think this does not, in any way, shape, or form, absolve his dumb ass of being a goddamn Nazi), but we don't need to make him seem even more terrible by making it look like that's a direct quote.

Like I get people are being snarky and flippant about the gist of Dorsey's response, but direct, aggressive, dismissive language like that directed towards a muslim woman who is a MOC would actually represent another level of escalation that would be genuinely frightening to a lot of people.

I can already hear the Proper Leftists sneering at me for missing the point, but no: I am saying that direct aggressive language is something you learn to watch for as a sign that things are about to get very specifically more dangerous. So please don't cry wolf about that shit, especially if you wouldn't be one of the targets.

Shit's already bad enough. We don't need any noise in the already terrifying signal.
posted by schadenfrau at 9:03 AM on April 27 [6 favorites]


Social media was a mistake.

It's potentially great. This place is social media, isn't it? It acts a lot like social media (user accounts, posts, comments on posts, chat channels, gift exchanges, etc.) but with no-nonsense moderation to keep things decent, a $5 PayPal charge to discourage anonymous drive-by idiots, and even IRL meetings.

Twitter and Facebook would be more honest and friendly if they at least worked out some kind of account trust rating based on things like whether you have met people in real life. People could arrange formal and informal meetups to increase their trust levels without forcing anyone to expose their real identity to absolutely everyone else. Then let everyone filter their Twitter and Facebook activity by trust level. If no one in my IRL network has met a certain person, maybe don't show me their stuff, or at least flag it as potentially fake.
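For what that filter might look like: a minimal Python sketch, assuming the platform recorded some hypothetical "met in real life" attestation between accounts. The hop limit, the attestation map, and all names here are invented for illustration:

```python
from collections import deque

def trust_level(me, target, met_irl, max_hops=2):
    """Return the number of met-in-real-life hops from `me` to `target`,
    or None if `target` is unreachable within `max_hops`.

    `met_irl` maps each user to the set of users they have verifiably
    met in person (a hypothetical attestation the platform would keep).
    Implemented as a plain breadth-first search over that graph.
    """
    if me == target:
        return 0
    seen = {me}
    queue = deque([(me, 0)])
    while queue:
        user, hops = queue.popleft()
        if hops >= max_hops:
            continue
        for friend in met_irl.get(user, set()):
            if friend == target:
                return hops + 1
            if friend not in seen:
                seen.add(friend)
                queue.append((friend, hops + 1))
    return None

def filter_feed(me, posts, met_irl):
    """Annotate posts whose authors no one in my IRL network has met."""
    annotated = []
    for author, text in posts:
        level = trust_level(me, author, met_irl)
        label = "ok" if level is not None else "potentially fake"
        annotated.append((author, text, label))
    return annotated
```

Flagging rather than hiding (the "at least flag it" option above) seems like the safer default, since new-but-real people would otherwise be invisible until someone vouched for them.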
posted by pracowity at 8:04 AM on April 28


This place is social media, isn't it?

That's absurdly reductive.
posted by tobascodagama at 9:34 AM on April 28 [1 favorite]


The problem isn't social media, but the people running it:
Listening to him speak at TED felt like witnessing the end of something: the end of the techno-utopian period when social-media architects could speak eagerly about democracy and openness, without also mentioning the potential for enabling authoritarianism. Perhaps, also, it is the end of the mythos of the young, brilliant, iconoclastic tech founder. As Dorsey pivoted from non-answer to non-answer, it was hard not to wonder whether, despite his appearance of media-savvy calm, he wasn’t in over his head. Since the 2016 election, it has grown increasingly clear that allowing young, mostly male technologists to build largely unregulated, proprietary, international networks might have been a large-scale, high-stakes error in judgment.
Diversity in tech is a matter of survival now.
posted by NoxAeternum at 2:28 PM on April 28 [5 favorites]


Shit's already bad enough. We don't need any noise in the already terrifying signal.

For what it's worth, check the actual URL: splinternews.com/jack-dorsey-told-ilhan-omar-to-go-fuck-herself-1834316485

It appears that was, word for word, the original headline, and wasn't editorialized. I agree with you here, but I don't think NoxAeternum was participating.

Now my annoyance with news sites participating in this sort of "summary" exaggeration/editorializing... ugh that's its own issue.
posted by emptythought at 3:07 PM on April 30 [2 favorites]




This thread has been archived and is closed to new comments