"It was built on our watch and it needs to burn on our watch"
April 1, 2019 7:26 PM Subscribe
"Twitter never built in a way to deal with harassment because none of the people designing it had ever been harassed, so it didn't come up. [...] If your reply is that we didn’t design and build these things to be used this way, then all I can say is that you’ve done a shit job of designing them, because that is what they’re being used for. These monsters are yours, regardless of what your intentions might have been." A book extract published in Buzzfeed News from Mike Monteiro.
This is a righteous, enjoyable screed that proposes nothing novel or unique. "Support non-white male perspectives", while nice, isn't a new message.
posted by Going To Maine at 7:49 PM on April 1, 2019 [2 favorites]
Like, the more radical take is to discuss how Jack follows Stefan Molyneux, and how these entrepreneurs and innocent white folks have allowed themselves to slide from a place of naivety to villainy.
posted by Going To Maine at 7:54 PM on April 1, 2019 [5 favorites]
I’ll keep giving them the benefit of the doubt and say they did it with the best of intentions
Okay, but why? Why would you do that?
Sure, people - especially White men - make it to adulthood without personally being the target of harassment/racism/sexism/homophobia/etc. But they know it exists. They know it happens to other people. If these guys weren’t designing to deal with bad behaviour it isn’t because they thought everyone on the internet was just super friendly and nice.
posted by Secret Sparrow at 7:59 PM on April 1, 2019 [17 favorites]
There are lots of smart people thinking about how to fix the broken-ness of the Internet described here.
Tarleton Gillespie's book Custodians of the Internet is an academic but accessible introduction to the problems in the area. Zeynep Tufekci and Kara Swisher have been two of the best Silicon Valley critics for some time. Swisher's interview with Mark Zuckerberg is one of the best pieces of tech journalism around and Tufekci's writings on things like YouTube and radicalization are absolutely on point. (Both are also great Twitter personalities.)
Monteiro's critique of our modern obsession with (and perhaps perversion of) "design" also has longstanding academic roots in a few fields, most notably science and technology studies and design theory. Do Artifacts Have Politics? may be the most foundational writing in this area. It was written in 1980 but remains remarkably relevant. The first paragraph:
In controversies about technology and society, there is no idea more provocative than the notion that technical things have political qualities. At issue is the claim that the machines, structures, and systems of modern material culture can be accurately judged not only for their contributions of efficiency and productivity, not merely for their positive and negative environmental side effects, but also for the ways in which they can embody specific forms of power and authority. Since ideas of this kind have a persistent and troubling presence, they deserve explicit attention.
Sure, the academic prose is a bit flowery, but make it a bit more vernacular and it could totally be a modern-day techlash article fresh from someone's Medium blog.
posted by redct at 8:06 PM on April 1, 2019 [33 favorites]
Sure, people - especially White men - make it to adulthood without personally being the target of harassment/racism/sexism/homophobia/etc. But they know it exists. They know it happens to other people.
I suddenly just realized that this is exactly what The Witch Elm by Tana French is about, and I'm going to have to re-read with that lens. (No spoiler, this is apparent in the first chapter, but I speed-read the book to get to the bottom of the mystery.)
posted by rogerroger at 8:35 PM on April 1, 2019 [3 favorites]
"It was built on our watch and it needs to burn on our watch"
the hymn of the fires of mount doom
posted by clockzero at 8:42 PM on April 1, 2019 [3 favorites]
And I think this might be the least true. Of course Twitter can be fixed, it's just that the people in control of it would have to cede that control in ways they're unwilling to do. Twitter doesn't get fixed because it doesn't want to, not because it's difficult.
I'm fully on board team Twitter Can't Be Fixed, because the key hook of Twitter is that you can talk to strangers and celebrities like they're real people. If you want to fix Twitter, you have to give people the tools to defend their communities, and that would preclude the kind of 'everyone is here' organisation that people want from Twitter and missed on Mastodon.
That said, even allowing people to self-select and be in the community that suits them doesn't actually solve the problem. I was on Mastodon when Wil Wheaton joined, and then was forced off, and at that moment I received enlightenment. The format of microblogging itself encourages the kind of hostility people decry, because it's easy and profitable to make a big slam post and be morally righteous, secure in the knowledge that you're unlikely to face the consequences of your actions. Compare that to MetaFilter: if I say a dumb thing in the comments, both it and the responses are permanently associated. That encourages me to post things I'm willing to stand by, and encourages responders not to swing for the fences, lest they themselves say something that will be a black mark against them.
Not that comment threads are perfect, but they're fixable, in a way that I don't think microblogging is.
posted by Merus at 8:45 PM on April 1, 2019 [16 favorites]
> Not that comment threads are perfect, but they're fixable, in a way that I don't think microblogging is.
Twitter in its current form sort of has the worst of both worlds. It presents replies as a comment thread that the original poster has no control over. If you post something pleasant on Twitter and I reply with something offensive that doesn't actually violate the Twitter rules, my reply is attached to your post forever.
Sure, you can block me, but when other people click on your original tweet they'll also see my stupid comment, and potentially engage with that and tag you, and there's nothing you can do about it.
posted by smelendez at 9:46 PM on April 1, 2019 [11 favorites]
I'm fully on board team Twitter Can't Be Fixed, because the key hook of Twitter is that you can talk to strangers and celebrities like they're real people, and if you want to fix Twitter, you have to give people the tools to defend their communities, which would preclude that kind of 'everyone is here' organisation that people want from Twitter and people missed from Mastodon.
Yeah, at this point I firmly believe that any kind of public social media with a very large membership is doomed to become a cesspool. IMO, the future of social media looks more like belonging to a bunch of Slacks or private forums. Individual communities with relatively small memberships, moderation tooling, and the minimum shared infrastructure to make it possible to scale the backend. Absolutely no functionality for public posting or virality. Individual communities may still be horrible, but the damage is contained.
posted by a device for making your enemy change his mind at 9:48 PM on April 1, 2019 [4 favorites]
Yeah, at this point I firmly believe that any kind of public social media with a very large membership is doomed to become a cesspool.
Dreamwidth (which isn't large) may not go that route, because the founders are LJ veterans who are very, very clear on the difference between "intense discussion" and "direct harassment," and because anti-harassment tools are built into the platform: you can block people from commenting on your journal, or you can allow them but their comments are screened until you make them visible, so there are multiple ways to prevent dogpiling. You still get people rallying their supporters on their own journals, but they can't send them after someone else. And DW is run by people who are willing to ban those who try to get around those blocks.
Pillowfort is new, and has some very interesting features, including both a no-nazi no-bigotry no-harassment policy (I'm paraphrasing) and some tech features that are different from other sites. But the founders seem to think that (1) if they describe in detail exactly the kind of behavior that isn't allowed, it won't happen, and (2) they have adequately described (and forbidden) all the ways people can be truly vicious to each other, so the site won't turn into a cesspool.
I'm not sure I think that "large membership will equal cesspool," but I definitely think that "large open to everyone membership will equal cesspool." The only way to prevent that is to make it very clear: "we have a culture here; we intend to keep a particular style of communication and a certain focus to our discussions; if you don't agree with that approach, you're not welcome." And it needs mods that are quick to delete and not slow to ban.
I mean. I hear that ravelry continues to be friendly, other than the "crocheted socks" debates.
posted by ErisLordFreedom at 10:21 PM on April 1, 2019 [13 favorites]
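The Dreamwidth mechanics described above (outright blocks, plus comments screened until the journal owner approves them) reduce to a small data model. A rough sketch in Python; the names and structure are guesses for illustration, not Dreamwidth's actual schema:

    from dataclasses import dataclass, field

    @dataclass
    class Journal:
        owner: str
        blocked: set[str] = field(default_factory=set)    # cannot comment at all
        screened: set[str] = field(default_factory=set)   # comments held for review
        comments: list[tuple[str, str, bool]] = field(default_factory=list)
        # each entry is (author, text, visible)

        def post_comment(self, author: str, text: str) -> None:
            if author in self.blocked:
                return                                # dropped silently
            visible = author not in self.screened     # screened users start hidden
            self.comments.append((author, text, visible))

        def unscreen(self, index: int) -> None:
            author, text, _ = self.comments[index]    # owner approves a held comment
            self.comments[index] = (author, text, True)

The design choice worth noticing is that moderation power sits with the journal owner, at the scale of one journal, rather than with a site-wide algorithm.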
Every time someone mentions how hard it would be to identify abuse correctly, I point out that YouTube is able to identify content in the case of copyrighted music. They don't block it. They flag it and "demonetize" it. And presumably send the ad revenue to another copyright holder, or keep it for themselves.
When it is about revenue, tech giants are very much able to build automated systems to detect content they don't want.
I can't believe that detecting copyright music in the background of videos is more difficult than detecting abuse and harassment in text form. It's clearly not.
My first attempt would be: any @ mention to someone you never talked to before or just recently followed, combined with a blacklist of words. They could do this in a week. Within six months they could have a neural network, trained on the large existing body of harassing tweets, to replace the blacklist.
Twitter doesn't fix its issues because it's not in Twitter's interest to do so. They have the President of the United States on their platform announcing policy and threatening world leaders. They have their audience. Why would they change anything? Changing anything costs money.
posted by cotterpin at 11:19 PM on April 1, 2019 [19 favorites]
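To make cotterpin's first pass concrete, here is a minimal sketch of that heuristic in Python. Everything in it is a hypothetical stand-in: the conversation history, follow timestamps, and word list are invented for illustration, not anything Twitter actually exposes.

    from datetime import datetime, timedelta

    # Hypothetical stand-ins for platform data: who has replied to whom
    # before, and when each follow relationship was created.
    conversation_history: set[tuple[str, str]] = set()   # (sender, recipient)
    follow_times: dict[tuple[str, str], datetime] = {}   # (follower, followee) -> time

    BLACKLIST = {"die", "kys"}          # placeholder word list
    RECENT_FOLLOW = timedelta(days=2)   # "just recently followed"

    def looks_like_harassment(sender: str, recipient: str,
                              text: str, now: datetime) -> bool:
        """Flag an @-mention from a stranger or brand-new follower
        whose text also contains a blacklisted word."""
        talked_before = (sender, recipient) in conversation_history
        followed_at = follow_times.get((sender, recipient))
        recent_follower = followed_at is not None and now - followed_at < RECENT_FOLLOW
        stranger = not talked_before or recent_follower
        has_flagged_word = any(w in text.lower().split() for w in BLACKLIST)
        return stranger and has_flagged_word

    # Example: a brand-new follower @-mentions someone with a flagged word.
    now = datetime(2019, 4, 2, 12, 0)
    follow_times[("troll", "target")] = now - timedelta(hours=1)
    print(looks_like_harassment("troll", "target", "@target just die", now))  # True

As Mitheral points out further down, a filter like this is only honest as a first pass that routes tweets to human reviewers; on its own it would drown in both false positives and coded language.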
But the founders seem to think that (1) if they describe in detail exactly the kind of behavior that isn't allowed, it won't happen, and (2) they have adequately described (and forbidden) all the ways people can be truly vicious to each other, so the site won't turn into a cesspool.
Assuming that this characterisation is correct, the approach seems doomed to fail.
Firstly, it’s going to invite an enormous amount of rules-lawyering about whether something technically was or wasn’t covered by the Infallible Behaviour Guide.
Secondly, it stands or falls on whether the founders were more imaginative and creative during the period of time while they were drawing up the rules, vs their entire userbase, who are given unlimited time. Which seems... unlikely.
Of course, they can always add or remove rules, exceptions, etc etc. But that tends to undermine the perception of authority invested in the rules, and over time the rules will start to be perceived as 1) arbitrary and 2) ineffective.
It would be better for them to reformulate the no-Nazis-etc mission statement to something more goal oriented (“we want a community that is X, Y and Z, but also A, B and C and will judge on a case-by-case basis against these principles”).
Mechanistic, proscriptive policies invite gaming, lack nuance (think of twitter’s frequent arbitrary suspensions that fail to consider intent, and can’t distinguish between hyperbole on the one hand and barely concealed threats on the other), and - because the focus is on process, not results - often lead to pretty poor outcomes.
posted by chappell, ambrose at 12:04 AM on April 2, 2019
The only way to prevent that is to make it very clear: "we have a culture here; we intend to keep a particular style of communication and a certain focus to our discussions; if you don't agree with that approach, you're not welcome." And it needs mods that are quick to delete and not slow to ban.
Indeed, harassment is a Bad Thing and such moderation as noted above generally yields quality discourse. But, some moderators confuse disagreement with cultural contravention and turn a forum into an echo chamber of nothing but 'Amen brother', 'Yes and', and 'The folks on the other side suck, amirite?' posts where one may only disagree by asserting that others in the thread are insufficiently onboard with the hive mind position. This is little more than rhetorical comfort food. Better than vicious attacks and verbal grenade-throwing? Unquestionably. Useful for reevaluating my thoughts on a topic? Not so much.
posted by zaixfeep at 12:15 AM on April 2, 2019
Everytime someone mentions how hard it would be to identify abuse correctly, I point out that Youtube is able to identify content in the case of copyrighted music.
Detecting copyrighted music and automated moderation aren’t really the same class of software development problem. They are probably about as far apart in degree as multiplication and second-order differential equations.
Or another example would be visual processing — for a while, a standard way to test AI performance was image classification. Now you can make an image classifier in a few hours that will (with a few hundred dollars of machine time for training) outperform average humans. But we don’t have general visual processing that can come remotely close to human performance. Automated moderation is more like the general visual processing type of problem; detecting copyrighted music is more like the image classification class of problem.
posted by lastobelus at 12:34 AM on April 2, 2019 [17 favorites]
They're in denial about what happens to those more vulnerable and marginalized than themselves. I can't say why, I haven't figured out that bit yet but the sneering putdowns imply an inability to accept that this kind of thing happens because it would shake their worldview in some way.
Last week was my birthday, and I received a greeting by email from someone I've known since grade school (40+ years, though not closely since college since we moved to different continents). He's become a VC type person on the West Coast since then, having flipped a couple of startups and whatnot.
He wrote to ask me why I had disappeared from Twitter. I wrote back, assuming that he was aware of the controversies around the Twitter experience for WoC, that my account had been suspended after the harassment following my TEDTalk video. And, that after a year of silence, I'd just registered a new real name account that he could follow.
His reply honestly shocked me to the core; here it is, cut and pasted verbatim:
What TEDVideo? And what do you mean chased off? You must have done something terrible to get suspended.
So I snarked back saying Yeah, I was a WoC with an opinion on Twitter, and we're something terrible, and his reply:
Who’s the “we” you are referring to?
- Women of color with assholes, oops I mean opinions
- women with no color with opinions
- just women in general
I’m so confused, why can’t you just be a person with something to say
So I archived that email and carried on with my life. But it gave me food for thought that one could be actively involved in the West Coast tech industry and be completely unaware that Twitter (and social media) were not the friendliest, most welcoming places for those outside the bubble. It has to be deep denial, unless you're telling me it's still possible for VCs to be completely unaware of what goes on, OR your attitude to our issues is such that, whether you know it to be true or not, your first impulse is to blame the victim, put down and sneer at the issue, and generally act like an asshole at the age of 53?
These are the people paying for this shit to be built. Go fuck yourself is the answer to any complaints that disturb their filter bubbled worldview.
posted by infini at 12:41 AM on April 2, 2019 [50 favorites]
Here's another sample chapter from the book, about ride-sharing and venture capital.
The key takeaway for me was this passage:
Once Uber’s goal moved from providing a car-sharing service to using a car-sharing service to make themselves and their investors rich, the delicate balance between drivers, riders, and Uber was destroyed. Only one of those parties was going to benefit from Uber’s future success.
posted by cheshyre at 4:03 AM on April 2, 2019 [2 favorites]
They're in denial about what happens to those more vulnerable and marginalized than themselves. I can't say why, I haven't figured out that bit yet...
Well, money is a helluva drug.
posted by jeremias at 4:09 AM on April 2, 2019 [3 favorites]
They're in denial about what happens to those more vulnerable and marginalized than themselves. I can't say why, I haven't figured out that bit yet...
These are all platforms designed explicitly for extracting value from human interaction, and passing it on to investors and shareholders. They may be described as social media platforms; search platforms; ride sharing platforms; etc. But in the end they're all really about adding users, and/or collecting and mining their data, and/or pushing ads, and/or shaving value from online interactions, etc. One reason for this is that this is how they have been defined by the stock market and their shareholders.
If you wanted to design a platform that actually supported meaningful human interaction, it might look different. But this is what we have. The prime purpose of all these existing platforms is to extract value by giving people a set of tools that appears to be social in purpose. But this social veneer covers a highly designed and addictive set of interfaces and morally agnostic algorithms, which encourage you to interact with the site in order to generate user clicks.
If you combine these goals with the existing toxic privileged punch-up/kick-down culture of the tech industry, then this is what you end up with - an organizational culture that perpetuates itself by recruiting like-minded people. They probably have a good laugh at this over their coffee.
posted by carter at 4:46 AM on April 2, 2019 [8 favorites]
The takeaway about YouTube and copyright enforcement is not technical - it's business-model related. YouTube is *willing* to expend significant effort on copyright enforcement, because it is a requirement for their continued existence. They do not seem to have the same motivation where harassment is concerned.
posted by scolbath at 4:50 AM on April 2, 2019 [6 favorites]
The second reason is that Twitter is too hard to fix.
Twitter (and to a similar degree, Facebook) is too hard to fix because "too hard," in Silicon Valley terms, equates with "can't be fully automated" or "the fix includes people." Twitter can fix itself tomorrow, but that would mean using people as the fix instead of some nifty new algorithm.
posted by Thorzdad at 4:51 AM on April 2, 2019 [17 favorites]
Ravelry continues to be friendly because the white dudes on it are far outnumbered. There just aren't that many knitting nazis.
posted by rikschell at 4:54 AM on April 2, 2019 [12 favorites]
Twitter can fix itself tomorrow, but that would mean using people as the fix instead of some nifty new algorithm.
I mostly agree with this, but I bet they could make an algorithm that quickly learned to identify death threats, and if all death threats were immediately followed with a warning, and a second warning got a 2-week suspension, and a third incident after return got an account ban... inside of two months, there'd be pretty much no death threats on twitter.
Of course, that would also end all the "joking" death threats, like "oh, just fuck off and die, asshole." But I suspect this is a price many of us are willing to pay.
Repeat for various other forms of threat and harassment. ID'ing jokes/ironic/in-game uses of a threat is more difficult, but I suspect it wouldn't take much to make serious strides in those directions. (Including, "target clicks a thing that says 'threats from X person are okay,' which can be revoked at any time.")
Twitter likes being the platform where the president can literally make death threats against foreign politicians. They don't want to enforce their existing terms of service against anyone who draws in users. The problem isn't "what can an algorithm do?" It's "what can an algorithm do that wouldn't touch white men with the verified sticker on their accounts?"
posted by ErisLordFreedom at 7:42 AM on April 2, 2019 [10 favorites]
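The enforcement ladder itself is the easy part; the classifier is the open research problem. A sketch of just the ladder in Python, with a deliberately crude placeholder where the imagined death-threat classifier would go (the function and thresholds are assumptions, not anything Twitter has described):

    from datetime import datetime, timedelta

    SUSPENSION = timedelta(weeks=2)
    strikes: dict[str, int] = {}              # user -> confirmed threat count
    suspended_until: dict[str, datetime] = {}
    banned: set[str] = set()

    def is_death_threat(text: str) -> bool:
        # Placeholder for the hypothetical learned classifier; a keyword
        # check stands in here purely so the sketch runs.
        return "die" in text.lower() or "kill you" in text.lower()

    def enforce(user: str, tweet: str, now: datetime) -> str:
        """First detected threat: warning. Second: two-week suspension.
        Third: account ban."""
        if user in banned:
            return "banned"
        if not is_death_threat(tweet):
            return "ok"
        strikes[user] = strikes.get(user, 0) + 1
        if strikes[user] == 1:
            return "warning"
        if strikes[user] == 2:
            suspended_until[user] = now + SUSPENSION
            return "suspended two weeks"
        banned.add(user)
        return "banned"

None of this is mechanically hard, which is the comment's point: the missing ingredient is the willingness to apply it to accounts that drive engagement.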
Ravelry continues to be friendly because the white dudes on it are far outnumbered. There just aren't that many knitting nazis.
You may want to talk to knitters of color about their experiences. Ravelry remains a friendly enough place for me, a middle aged white woman... but 9 out of 10 users I stumble across look exactly like me, and I can't help but notice the lack of non-white faces in the curated areas. My friends of color who knit don't use Ravelry.
In short, the problem is not limited to white men.
posted by palomar at 10:03 AM on April 2, 2019 [5 favorites]
... some moderators confuse disagreement with cultural contravention and turn a forum into an echo chamber of nothing but 'Amen brother', 'Yes and', and 'The folks on the other side suck, amirite?' posts where one may only disagree by asserting that others in the thread are insufficiently onboard with the hive mind position. This is little more than rhetorical comfort food. Better than vicious attacks and verbal grenade-throwing? Unquestionably. Useful for reevaluating my thoughts on a topic? Not so much.
Undoubtedly this is/has been true in some instances. But I'd suggest that one of the fundamental mistakes made at the start of the social media age—carried over from listservs and usenet, certainly—is precisely the idea that agonistic debate is the optimal method for arriving at enlightenment or truth, and that therefore conflict is necessarily a characteristic of intellectual robustness while the inhibition of conflict necessarily results in intellectual stagnation. In fact, I've rarely reevaluated my ideas on any topic through debate, and when I have changed my mind about something it's usually because I've gone off and thought about it on my own for a while. The insistence on modeling every online exchange as dialogue—but without the boundaries and commitments to some degree of shared norms that exist in face-to-face dialogue—is largely to blame for a technology and a medium that's become just another way to brutalize each other, IMO.
(And in the best of circumstances this is because the people who designed these systems failed to envision the ways they would be used to brutalize. In the worst of circumstances it's because they don't care, wish to profit from it, or actively wish to brutalize.)
Twitter doesn't fix its issues because it's not in Twitter's interest to do so. They have the President of the United States on their platform announcing policy and threatening world leaders.
I rely on twitter partly because it's a way for me to learn things I might never know otherwise and partly because it's my equivalent of Candy Crush or something. I'm complicit. But make no mistake—as a technological space, twitter is the Colosseum for the bloody gladiatorial games of our declining empire.
posted by octobersurprise at 10:10 AM on April 2, 2019 [4 favorites]
Oh, sure, Ravelry is no paragon of diversity, but it’s not like Reddit where you’re likely to fall into a hole of really toxic people. We can always do better. My point was more that if your social media site is centered around something that doesn’t interest Nazis, you won’t find many Nazis there. That doesn’t mean there won’t be some other kinds of unhealthy gatekeeping.
posted by rikschell at 10:21 AM on April 2, 2019 [1 favorite]
Probably related, from Bloomberg, "YouTube Executives Ignored Warnings, Letting Toxic Videos Run Rampant":
The conundrum isn’t just that videos questioning the moon landing or the efficacy of vaccines are on YouTube. The massive “library,” generated by users with little editorial oversight, is bound to have untrue nonsense. Instead, YouTube’s problem is that it allows the nonsense to flourish. And, in some cases, through its powerful artificial intelligence system, it even provides the fuel that lets it spread.
Wojcicki and her deputies know this. In recent years, scores of people inside YouTube and Google, its owner, raised concerns about the mass of false, incendiary and toxic content that the world’s largest video site surfaced and spread. One employee wanted to flag troubling videos, which fell just short of the hate speech rules, and stop recommending them to viewers. Another wanted to track these videos in a spreadsheet to chart their popularity. A third, fretful of the spread of “alt-right” video bloggers, created an internal vertical that showed just how popular they were. Each time they got the same basic response: Don’t rock the boat.
I agree that these platforms (FB, Twitter, YouTube, probably also Reddit) were kind of broken from the start based on their origin stories. But I've been thinking that it wasn't necessarily a huge problem so long as they remained fairly small and niche. I would contend that the thing that turned these things from "a" problem into "the" problem with the internet today is these platforms' monomaniacal focus on growth, like some kind of metastasized tumor. They all have entire departments devoted to increasing growth via any and all means. There's a whole Silicon Valley culture of "growth-hacking" devoted specifically to all the different ways, ranging from sketchy to nefarious, of increasing growth for your app or platform or whatever. In fact, I've heard that in that SV culture, the idea of "organic growth" (i.e.: ordinary, non-hacked growth) is considered rather déclassé, as if you weren't even trying to, y'know, dominate the world. Once these platforms became the behemoths they are now, I fear that we may be past some point of no return.
posted by mhum at 10:44 AM on April 2, 2019 [3 favorites]
unless you're telling me it's still possible for VCs to be completely unaware of what goes on
Theranos much? I mean, I agree with your whole comment infini. It's a shitty situation, and there's no excuse for it. But we have at least one use case of VC money being able to remain pretty blissfully ignorant of the product it's funding. Just so long as it can expect a wild ROI.
posted by allkindsoftime at 12:32 PM on April 2, 2019 [1 favorite]
And I've been musing on mhum's comment... it's VC money that drives that crazy growth-at-any-cost spree, isn't it? Look at Uber.
posted by infini at 12:40 PM on April 2, 2019
it's VC money that drives that crazy growth-at-any-cost spree, isn't it
Yes, imho, it is. Moreover, I think this VC influence happens even in indirect ways via incubators like Y Combinator which mostly seem oriented towards getting companies ready to grab for that VC money. I seem to recall some campaign that Chicago's tech circles put on to contrast their culture with Silicon Valley where the big point of comparison was that Chicago's tech startups were far less VC-oriented and more self-funded and/or bootstrapped. In my recollection, certain SV types responded rather defensively to the mere suggestion that there might be a non-VC way to do tech startups (although, to be fair, they also have been picking up on some not so subtle shade towards VC funding as well).
posted by mhum at 3:04 PM on April 2, 2019 [2 favorites]
If your reply is that we didn’t design and build these things to be used this way, then all I can say is that you’ve done a shit job of designing them, because that is what they’re being used for.
A few years ago, when "smart home" technology began its ascent at CES, I made a point of going to demos about "smart doorknobs" that tracked the comings and goings of every person in the house. A lot of these products were designed, the PR people would tell me, so "You could remotely approve your babysitter or your housecleaner to come in*, but you didn't have to give them a key, and they could only come and go at certain hours."
Then I'd ask, "So this extends to family members too? One person's got the app that manages the doorknob and they can monitor who enters and exits the house based on the alerts?"
The PR person would nod, pleased that I understood it. "That's right! Now you can see when your kids come home, nobody has to worry about losing their keys because they'll have an app on their phone to let them in."
And then I would say, "So what would stop an abusive family member from tracking a spouse or child's entries and exits, and using that as an excuse to harm them?"
Always, the reaction was the same: A moment of awkward silence, a flash of pure anger from the white man I was talking to (it was always a white man), then the attempt at a grin and " ... We didn't design the technology to be used that way."
Nobody ever designs the technology to be used "that way," and thus nobody can be held responsible when it is used "that way."
* An awful lot of people in the smart-doorknob/smart lock segment seemed really outraged by the notion that the people they paid to do their domestic work should be allowed the freedom of coming and going as they pleased. One product I saw demo'd had a feature wherein domestic labor would be expected to download the app to their own phone, wait for a time-sensitive PIN that expired within a certain minute limit, then use that app+PIN combination to be allowed into the house. They'd then have to confirm when they were allowed to leave and get a second PIN for that.
The smart home segment is a classist wet dream.
posted by sobell at 3:37 PM on April 2, 2019 [16 favorites]
Merus: "The format of microblogging itself encourages the kind of hostility people decry, because it's easy and profitable to make a big slam post and be morally righteous, secure in the knowledge that you're unlikely to receive the consequences of your actions."
I don't really condone the late actions of the titular characters in Jay and Silent Bob Strike Back but I sure can appreciate where they are standing.
cotterpin: "I can't believe that detecting copyright music in the background of videos is more difficult than detecting abuse and harassment in text form. It's clearly not. "
It's a lot harder. Detecting copyright infringement is a simple pattern match (one that still generates some false positives). Pattern matching for harassment immediately runs up against the Scunthorpe problem. And of course the most visible, worst part of the problem is being perpetrated by the deplorables, who are already so far into the weeds using coded language that the mainstream can hardly understand them.
Pattern matching as a tool to fight harassment is only viable as a first pass filter to winnow things for humans to look at and even then it's only useful as a small part of identifying harassment.
posted by Mitheral at 6:57 PM on April 2, 2019 [1 favorite]
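For anyone who hasn't run into it, the Scunthorpe problem takes about ten lines of Python to demonstrate. A toy example (the word list is illustrative only):

    import re

    BLACKLIST = ["cunt"]

    def naive_match(text: str) -> bool:
        # Raw substring matching: flags the town of Scunthorpe
        # because of the embedded string. Classic false positive.
        return any(word in text.lower() for word in BLACKLIST)

    def word_boundary_match(text: str) -> bool:
        # Whole-word matching fixes that false positive, but now misses
        # trivial evasions ("c u n t") and coded language entirely.
        return any(re.search(rf"\b{re.escape(word)}\b", text.lower())
                   for word in BLACKLIST)

    print(naive_match("Greetings from Scunthorpe"))          # True  (wrong)
    print(word_boundary_match("Greetings from Scunthorpe"))  # False (right)
    print(word_boundary_match("c u n t"))                    # False (evaded)

Every patch trades one failure mode for another, which is why pattern matching ends up as a winnowing step for human moderators rather than a solution.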
Just the other day a trans woman was suspended for correcting somebody who was deliberately misgendering her, because his nazi friends mass-reported her and Twitter's automated processes suck.
But Twitter can do a lot better, which you can experience for yourself by setting your location to Germany, where they have to do better by law.
Jack loves nazis, however, so unless he's forced to, Twitter won't change to make itself safer for non-nazis.
posted by MartinWisse at 1:32 AM on April 3, 2019 [3 favorites]
« Older When we all fall asleep, where do we go? | "there’s actually a waiting list of dogs" Newer »
This thread has been archived and is closed to new comments