The admins turn away so they can have deniability
April 6, 2022 11:34 AM

The vast majority of large-scale social platforms have an explicit policy of ignoring the harms or destructive actions that someone commits on any platform other than their own. When people have deliberately targeted others for abuse, spread harmful propaganda, or even bilked people out of money or opportunities, it's very common for a company to say, "That happened on another platform, and we only judge users by what happens on our own platform." That's a mistake, and it's one that is frequently exploited by some of the worst actors on the Internet.
posted by qi (49 comments total) 17 users marked this as a favorite
 
Content-based tracking across the internet … what could go wrong?
posted by haptic_avenger at 12:08 PM on April 6, 2022 [5 favorites]


So, do we want to go to some sort of US or global ‘social credit’ score ala PRC? Maybe roll out the ‘Make America Great Firewall’ to protect us?

Sure, I’m in. /s
posted by sudogeek at 12:17 PM on April 6, 2022 [1 favorite]


Maybe better to respond substantively to the article rather than just drive-by shitposting?
posted by biogeo at 12:27 PM on April 6, 2022 [35 favorites]


This is problematic both technically and ethically. I'm not a fan.

They seem to invest a lot of importance in IP addresses, when those can be spoofed or disguised using a VPN. And they give some lip service to the idea that different people can use the same username, but then never come back to that to suggest a solution for perfectly identifying someone.

If they did have a perfect solution, I'd still likely find it overreach. What would perfect identification require? Enforced retina scans or fingerprints? If they don't, then they are inevitably going to punish the wrong person sometimes.

They seem oblivious to the potential abuse of power in what they're suggesting. Admins are not impartial; they have their own biases and favor culture and context that they understand. Admins who police a single site already wield quite enough power, and often in fact use it inconsistently and unfairly (including here!). For an individual to feel that they are entitled to judge the entirety of someone's online presence is just gross.
posted by Flock of Cynthiabirds at 12:28 PM on April 6, 2022 [7 favorites]


I have a lot of time for Anil Dash's point of view on a lot of things. He's absolutely dead-on right that the cumulative harm of a person's behaviour can damage a community regardless of whether any individual contribution they make reaches a threshold of problematicness. A community, after all, is greater than the sum of its parts, as are people's contributions to it, negative or positive.

And I think the point about the IP address was that it was specific circumstantial evidence which would be fine to link someone to a public place where there is likely a conflict of interest, but not something that you could link to what was likely, but not for certain, their home. Which I think comes down to the fact that community-building decisions have to be made on the balance of probabilities, not beyond reasonable doubt.

And unlike in a court of law, previous behaviour is admissible evidence for that.
posted by ambrosen at 12:35 PM on April 6, 2022 [4 favorites]


There's an awful lot of "we should do this," and absolutely no "this is how to do this." Even if someone points out someone else's behavior on another board, there's no guarantee that the admins have access to the messages over there, or that they can be sure the messages are from the same user. (I've been misidentified on other boards, but it's been stuff like, "Are you the Spike Glee who wrote 'Mustaches of Asia?'" and I can simply reply "What?")
Another issue with infracting people for off-site behavior is that it looks capricious to people not in the know. I'm on another board where, as a rule, they don't penalize off-site drama, but they've occasionally made exceptions for particularly egregious behavior. Almost every time, they've had to explain why they did it, because there's no way to tell just from looking at what's on their board.
posted by Spike Glee at 12:36 PM on April 6, 2022


With no disrespect to Anil, the Matrix team has already built and shipped this vision, at least between Matrix instances. We use it at Mozilla on our Matrix-based chat services today, as in right now. I've gotta get my blog spun back up, so forgive me for this long copypasta, but when we decided to go with Matrix to replace IRC, we had this absolutely in mind:
One of the strongest selling points of Matrix is the combination of powerful moderation and safety tooling that hosting organizations can operate with robust tools for personal self-defense available in parallel. Critically, these aren’t half-assed tools that have been grafted on as an afterthought; they’re first-class features, robust enough that we can not only deploy them with confidence, but can reasonably be held accountable by our colleagues and community for their use. In short, we can now have safe, accountable infrastructure that complements, rather than comes at the cost of, individual user agency.

That’s not the best thing, though, and I’m here to tell you about my favorite Matrix feature that nobody knows about: Federated auto-updating blocklist sharing.

If you decide you trust somebody else’s decisions, at some other organization – their judgment calls about who is and is not welcome there – those decisions can be immediately and automatically reflected in your own. When a site you trust drops the hammer on some bad actor, that ban can be adopted almost immediately by your site and your community as well. You don’t have to have ever seen that person or have whatever got them banned hit you in the eyes. You don’t even need to know they exist. All you need to do is decide you trust that other site’s judgment, and magically someone who is persona non grata on their site is precisely that on yours.

Another way to say that is: among people or communities who trust each other in these decisions, an act of self-defense becomes, seamlessly and invisibly, an act of collective defense. No more everyone needing to fight their own fights alone forever, no more getting isolated and picked off one at a time, weakest first; shields-up means shields-up for everyone. Effective, practical defensive solidarity; it’s the most important new idea I’ve seen in social software in years. Every federated system out there should build out its own version, and it’s very clear to me, at least, that this is going to be table stakes for a federated future very soon.
This future is already here and, sure, it's not widely deployed yet. But it works amazingly well and every federated service - every service where expressions of mutual trust are possible - should have it.
posted by mhoye at 12:38 PM on April 6, 2022 [17 favorites]
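
For the curious, the shared ban lists mhoye describes are expressed in Matrix as moderation-policy events: state events of type m.policy.rule.user whose content carries an entity (a user ID, possibly with glob wildcards), a recommendation such as m.ban, and a reason. Below is a minimal Python sketch of how a subscribing community might consume such a list; the field names follow the spec, but the sample events and the helper function are purely illustrative, not any real client library.

# A rough sketch of consuming a shared ban list, modelled on Matrix
# moderation-policy events (type m.policy.rule.user). Illustrative only.
from fnmatch import fnmatch

# Policy events as they might arrive from a policy room you have chosen to trust.
trusted_policies = [
    {"type": "m.policy.rule.user",
     "content": {"entity": "@spammer:badhost.example",
                 "recommendation": "m.ban",
                 "reason": "spam"}},
    {"type": "m.policy.rule.user",
     "content": {"entity": "@troll*:anotherhost.example",
                 "recommendation": "m.ban",
                 "reason": "harassment"}},
]

def ban_recommended(user_id, policies=trusted_policies):
    """Return True if a policy list you trust recommends banning this user."""
    for event in policies:
        if event.get("type") != "m.policy.rule.user":
            continue
        rule = event.get("content", {})
        # The entity may be a glob, e.g. "@troll*:anotherhost.example".
        if rule.get("recommendation") == "m.ban" and fnmatch(user_id, rule.get("entity", "")):
            return True
    return False

print(ban_recommended("@troll42:anotherhost.example"))  # True

In practice a moderation bot (such as Mjolnir) watches the trusted policy room and applies the recommended bans as they arrive, which is what makes the list "auto-updating" rather than something a human has to re-import.
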


What I get from this is that the very real problems social media causes demand the existence of a global HR department.
In that case social media needs to go away.
posted by thatwhichfalls at 12:38 PM on April 6, 2022 [2 favorites]


Social media isn't the problem. People are the problem. Abolish people.
posted by Faint of Butt at 12:43 PM on April 6, 2022 [13 favorites]


This future is already here and, sure, it's not widely deployed yet. But it works amazingly well and every federated service - every service where expressions of mutual trust are possible - should have it.
Absolutely. Matrix is a good example of this, and I was drawing on my own experiences working on Glitch over the last half decade as well. I didn't mean to imply that this doesn't exist (those who wanted to take shots while apparently being unable to understand the piece may be dismayed to find out that many of these ideas have even been put into practice on this very site), but rather that it's not yet standard practice.

I don't ascribe any importance or significance to IP addresses; they merely served as a relatively simple example that I could use to illustrate a larger point. I'd welcome more feedback from people who've run successful communities over time. I can't say I have a lot of patience for the "just because that person is hateful on another site doesn't mean they should be blocked over here", but absolutely it's incumbent upon a community's leaders to clearly explain any policy decision, not just ones that involve information gathered elsewhere.
posted by anildash at 12:54 PM on April 6, 2022 [13 favorites]


where expressions of mutual trust are possible

This seems particularly important to emphasize as being part of this proposed equation. If someone's being a jerk on Twitter, I may not want to let them into my Discord, even if they haven't been a jerk to me personally. That's not "a global HR department", "a global social credit score", or any of the other hyperbole that's been preemptively deployed. That's "you can't firewall behavior & expect that nobody should care".

To look at a historical point, much of the early Gamergate counter-response was only possible because Zoe Quinn was able to drop channer chat logs & dox threads and go "here's what they were planning, in the open". What, were other sites supposed to go "Sure they're saying they're going to do all these things, and here's people doing all these things, but how dare you bring in this off-site evidence"? Same with "NotYourShield", "AbolishFathersDay", and a whole slew of organized disruption over the years.

The standard when I block someone on Twitter isn't "Do I have an ironclad case that this person is an irredeemable troll", it's "am I 51% sure this person is going to be more of an annoyance to me than they are likely to say anything of value for me". Nobody has a right to my eyes, or to my spaces.
posted by CrystalDave at 12:55 PM on April 6, 2022 [21 favorites]


I've been dealing with the IRL version of this problem for ten years. I'm on the board of an event around which a community has formed; there are other, similar events with similar (often overlapping) communities. And inevitably, there are bad actors in these communities. I've got four active "people of concern" cases right now.

Our process is report-driven: someone needs to send us a first-person account of bad behavior for us to act. We don't distinguish between bad behavior at the event vs outside the event. It gets tricky for us because we treat all our deliberations and decisions as completely confidential, so no one (other than the affected parties) knows anything about them. But a similar event, with a largely overlapping community, has a policy that's the reverse of ours: they proactively send their ban list to other event organizers. For us, that's not a first-person account, so we can't act on it, but we can't pretend we didn't see it either. The idea of federated ban lists has been floating around for a while, and my board rejects it, but I know others would embrace it. We've discussed numerous variations. As Dash says, this is hard.
posted by adamrice at 12:59 PM on April 6, 2022 [5 favorites]


Social media isn't the problem. People are the problem. Abolish people.

I wonder how much all of this mirrors the early days of the automobile. At first, there were no speed limits, no stop signs, no rules of the road. I'm sure it was crazy fun but then people started getting hurt and killed. So ... rules and enforcement. None of which would have been necessary if people hadn't driven like fools. But people are fools.
posted by philip-random at 1:00 PM on April 6, 2022 [3 favorites]


But if all you have is a username, and you're not federated with the site where they're being offensive, what inferences are you willing to take as an admin? And it's a big step to go from blocking someone so you can't see them to kicking them off of your site.
posted by Spike Glee at 1:09 PM on April 6, 2022 [1 favorite]


If MetaFilter comes after me for all of my reddit trolling, I'm going to want my $5 back.
posted by Hatashran at 1:18 PM on April 6, 2022 [8 favorites]


Besides other issues with IP addresses, they get recycled (particularly with IPv4 addresses, where there are only 4 billion of them to go around). When I sign up for an Internet provider, they give me an IP address. I don't get to pick it. If a bad actor has created a bad reputation at the IP address, I inherit it. There already are issues with this, as there are spam blacklists maintained by IP address, so just hope that the IP address issued to you by your Internet provider wasn't used by a previous spammer.

And then there is the issue that with NAT, many people share the same IP address within an organization. So now if one person on the network acts badly, everyone at the same organization gets painted with it. One person at a school is bad, so nobody at the school can post.

Doing integrity ratings by IP address is a bad idea.
posted by Xoc at 1:22 PM on April 6, 2022 [4 favorites]
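
For background on the spam blacklists mentioned above: most of them are DNS-based blocklists (DNSBLs), queried by reversing an IPv4 address's octets under the list's DNS zone; any answer means "listed," NXDOMAIN means "not listed." A rough standard-library Python sketch, purely illustrative (zen.spamhaus.org is a real public DNSBL and 127.0.0.2 its conventional "always listed" test address, though results can vary with your resolver):

# Illustrative DNSBL lookup: the mechanism behind IP-based spam blacklists.
import socket

def is_listed(ip, dnsbl="zen.spamhaus.org"):
    """Return True if `ip` currently appears on the given DNSBL."""
    reversed_octets = ".".join(reversed(ip.split(".")))
    try:
        # Any A record under the DNSBL zone means the address is listed.
        socket.gethostbyname(f"{reversed_octets}.{dnsbl}")
        return True
    except socket.gaierror:
        # NXDOMAIN (or lookup failure) is treated here as "not listed".
        return False

print(is_listed("127.0.0.2"))  # the conventional test address, expected to be listed

Note that the verdict attaches to whoever holds the address today, not to the person who earned the listing, which is exactly the recycling and NAT problem Xoc describes.
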


Hatashran: If MetaFilter comes after me for all of my reddit trolling, I'm going to want my $5 back.

This was presumably a joke, but when the norms vary between communities, you can't apply penalties universally.

That is, MeFites are held to higher standards than Redditors, so a ban there might not be applicable here -- or it might involve behavior that's waaay worse.
posted by wenestvedt at 1:25 PM on April 6, 2022 [1 favorite]


If MetaFilter comes after me for all of my reddit trolling, I'm going to want my $5 back.

Maybe I'm just responding to a joke (or lightly trollish) comment, but the way I read this article, it didn't seem to be saying "let's sleuth out all of our users and proactively ban anyone who's been bad elsewhere!" but more like "when you have a moderation decision to make, it's good to use people's past behavior elsewhere to inform that decision, to the extent that you can find it and make sense of it"
posted by aubilenon at 1:41 PM on April 6, 2022 [17 favorites]


But if all you have is a username, and you're not federated with the site where they're being offensive, what inferences are you willing to take as an admin? And it's a big step to go from blocking someone so you can't see them to kicking them off of your site.

Going back to the original article, I think this is very applicable:
For example, if we can see that someone is usually reasonable and thoughtful on other platforms, but has been transgressing within the context of our community, we can assume that they're either having a bad day or were prompted by something out of the ordinary in their experience that pushed them into not being their best selves. Many times, simply nudging them with a reminder of how much more constructive they are elsewhere can be more than enough to bring them back into being more positive community members. I've personally had a surprisingly large number of people respond with an apology and a near-instant change in attitude when given this kind of prompt.

There are, of course, the less heart-warming examples. Sometimes you see someone transgressing in a community, take a look at what they're doing elsewhere, and are completely horrified at what you find. In those contexts, it's easier to make a proactive decision to ban (or limit the privileges of) a user, because you can have higher confidence that they're not simply having a bad day.
What inferences am I willing to take? The ones I'm convinced by. I'm not interested in looking for objective standards, that's what leads to what's described in the article as "narrowly walking within the lines of the worst things each platform permits, and spreading out the ill intentions in a way that gives them plausible deniability on each individual platform, while collectively enabling truly awful outcomes for their targets."
posted by CrystalDave at 1:53 PM on April 6, 2022 [11 favorites]


aubilenon: Agreed. And as CrystalDave noted upthread, there's often clear evidence that malicious actors are planning organized social media campaigns in full public view. Even if there's no way to tie the planners to the people carrying out the behavior, knowing that there's going to be an attempt to stir the shit on your platform through some astroturfed hashtag and burner accounts really should put the community teams on high alert for people starting to do that. The real reason Twitter et al. don't bother with this is because that astroturfing and burner-account stuff is beneficial to their internal metrics. There's no real incentive to shut down any sort of campaign like that before it gains traction, because they benefit from the behavior.
posted by SansPoint at 1:56 PM on April 6, 2022 [5 favorites]


I can't say I have a lot of patience for the "just because that person is hateful on another site doesn't mean they should be blocked over here", but absolutely it's incumbent upon a community's leaders to clearly explain any policy decision, not just ones that involve information gathered elsewhere.

For what it's worth, my experience is that this is an opinion that community members only hold before they've seen community leadership that is sustained and effective.

The first time you need to excommunicate a toxic participant from a community that has learned to tolerate toxicity, it is an ordeal. But the second and third time, the long justifications and difficult conversations we had to have about that first guy happened again, with all the same people, and they were basically all "is this ... like that other guy?"

"Yes. Would you like to see the details?" "No, that's... that's too bad, I didn't know." "Yes." Offenders four and five got responses like, oh yeah, that guy, fuck that guy. Once you've had a taste of a non-toxic working environment, a non-toxic community, you are absolutely not going back to not having that, and my experience is that people are willing to grant a tremendous amount of deference to people who can help secure them that environment.
posted by mhoye at 2:12 PM on April 6, 2022 [23 favorites]


all i've heard on this site is how much of a stressful experience it is to moderate this website - now, on top of this, we're either going to put blind faith in other sites and their judgments on people, or we're going to spend even more time checking out people on other websites and exhausting ourselves further

we really need to be clear on what we're asking from people - basically, the idea here is to delegate our judgments to people who may not have even heard of our site, by letting their bans influence our own - or policing the whole damned net, which just can't be done

sure, in rare cases, this might be the way to handle a problem user - but as an ongoing practice it's just too much to do, i think
posted by pyramid termite at 2:43 PM on April 6, 2022 [2 favorites]


And it's a big step to go from blocking someone so you can't see them to kicking them off of your site.

One of the things we've learned - the hard way and at enormous cost - is that the blast radius of toxic participants in communities is huge and mostly invisible; for every person you can see who disengages or leaves because of that bad actor, there are dozens or hundreds of smart, competent people who will show up wanting to help, take one look into the trash fire burning in your forums, issue trackers or mailing lists, and just keep on walking. If you're a community leader who's blocked or muted that person, you're just painting over black mold. All you've accomplished, as a community manager, is to make the toxicity that is eating away at your community invisible to you.

As a community manager or leader, leading by example isn't good enough; blocking or muting people isn't good enough. When you see something you have to deal with it, because the worst thing that you walk past is what your community will become.
posted by mhoye at 2:57 PM on April 6, 2022 [17 favorites]


I guess I’m just confused by why this would be helpful. I don’t understand what monitoring user/member behaviors elsewhere would add. It seems like behavior bad enough to get banned should be self-evident. If it’s such an edge case that you need to spend the time and resources to cross-reference my Mumsnet account, I’m not sure what the point is. On the flip side, that kind of monitoring is just creepy and verges on doxxing. If trust is the goal, this would just create a sense of mistrust.
posted by haptic_avenger at 2:59 PM on April 6, 2022 [1 favorite]


I wonder how much all of this mirrors the early days of the automobile. At first, there were no speed limits, no stop signs, no rules of the road. I'm sure it was crazy fun but then people started getting hurt and killed. So ... rules and enforcement. None of which would have been necessary if people hadn't driven like fools. But people are fools.

This metaphor doesn't really work this way, but it works like a charm if you turn it the other way around. People did drive like fools, but those people were rich so rather than constraining or banning that behaviour we made a bunch of rules that took much of the existing, safe public commons away from pedestrians, paved it flat and handed it over to dangerous polluters.
posted by mhoye at 3:02 PM on April 6, 2022 [7 favorites]


don’t understand what monitoring user/member behaviors elsewhere would add. It seems like behavior bad enough to get banned should be self-evident. If it’s such an edge case that ...

The problem is there's a ton of edge cases, and it's hard to tell if someone's butting up against the community's boundaries of acceptable behavior because they're honestly unaware or having a bad day but still basically acting in good faith, or because they precisely understand the boundaries and are deliberately stirring up as much shit as they know they can get away with.

The article's pretty clear in its opinion that community management is as much as possible about turning people who are out of line back in the right direction, and anything that helps you determine whether someone is worth engaging with in this regard, or is mainly there to waste your time and raise your hackles, can save a lot of time and frustration.
posted by aubilenon at 3:13 PM on April 6, 2022 [4 favorites]


I wonder how much all of this mirrors the early days of the automobile.

I realize that was an off-hand comment, but: Wow. I mean, sure, I've seen the attempted overthrow of the United States government, I've seen Twitter threads posting all sorts of made-up stuff trying to justify horrors, I've seen the changes in social media over the years (and sociology grad students have called me up to ask me about my role in social media in the '90s), but this phrase sent a chill right down my spine.

Thinking about how much better our world would be if we'd managed to quash the automobile's takeover of our public spaces a century ago, rather than letting it dominate our lives, is pretty much the best argument I can imagine for undermining the expanding role of these companies in our society, right now, before they get any larger.

Thank you. That comparison is chilling, and eye-opening.
posted by straw at 3:21 PM on April 6, 2022 [9 favorites]


I don't get a lot of the negative comments. Communities might do better if you hired more people to moderate and gave them resources to consider out-of-community posts when investigating a complaint. This seems reasonable to me. Human moderators using their judgment, something that (properly resourced) can actually work (I say as I comment on MetaFilter).

Spend more money on moderators. There's the question of whether there's enough resources, sure. But it's like complaining that your staff of one person can't possibly keep the office building clean, printer paper in stock, toner cartridges filled, handle expense reports and also manage the reception desk. The obvious response is, why do we only have one staff person in the office building? But most communities try to manage by saying "Clean up after yourselves, people!" in an increasingly annoyed tone of voice.
This is an approach that, plain and simple, requires more investment in people and resources than the default model. This costs more money. It is worth it, because it yields a sustainably better result.
That's it. That's the pitch.

"Who pays?" is a pretty serious practical problem but that doesn't seem to be the objection here?

They seem to invest a lot of importance in IP addresses

There's a short paragraph that's sort of a thought experiment with IP addresses, to assess the impact of publicizing them, so ... maybe not? They absolutely aren't proposing some technical framework where IPs are logged and then used to evaluate users.

I guess I’m just confused by why this would be helpful. I don’t understand what monitoring user/ member behaviors elsewhere would add. It seems like behavior bad enough to get banned should be self-evident.

My impression as an observer is that people are very seldom banned for a single comment; it's the pattern of behavior that bans them. If banning a user is self-evidently justified after 20 comments on the site you moderate, it might have also been self-evident after 5 comments if you took into account behavior on other sites.

That obviously saves the other community members some time, and moderators get back at least some of the time they invested in the process.
posted by mark k at 3:51 PM on April 6, 2022 [10 favorites]


I’m a bit boggled at some of the seemingly extreme viewpoints in this thread? Say, for example, that you have just started dating someone. On the one hand, it would certainly be extreme of you to hire a private investigator to fully investigate the person’s background, sure. No argument there. But some of you seem to be arguing that the only other alternative is the equally extreme position of ignoring anything that you might happen to learn about the person in any way other than direct experience? “Oh gee, 20 people have directly told me about how buddy has acted abusively to all his past partners and I happened to overhear a conversation that, from contextual details, seemed quite likely to be about him that described some pyramid scheme con that likely-him was running; but I’m going to ignore all that and form my own opinion solely based on how he treats me” is not the realistic and healthy position some of you seem to think it is.
posted by eviemath at 5:19 PM on April 6, 2022 [10 favorites]


Doing integrity ratings by IP address is a bad idea.
This would be an important objection to raise in a thread where someone were advocating that.
And it's a big step to go from blocking someone so you can't see them to kicking them off of your site.
From the linked article:
The goal of building a community policy that considers a person's broader presence online is not just so you can ban people more quickly. (Although it does enable that for people who are clearly just being awful!) Instead, you'll very often find community members who are misguided but redeemable. ...
What we end up with is a process where, most of the time, people can be brought back into the fold. Folks who started out as annoyances or even real problems in a community can turn into productive community members, and sometimes even community leaders. Others around them in the community can see that the inclination of the community is not punitive, but constructive.
I'll refrain from quoting the entire piece again here, but it's probably worthwhile to see if the exact issue being raised was specifically addressed in the source document.
posted by anildash at 5:47 PM on April 6, 2022 [5 favorites]


I think this is a terrible idea.

If you ran afoul of moderation on Trump's Truth Social platform and they banned you, should they be able to keep you from participating in other social media?

If you’d raised a stink about the way Facebook was allowing right wing lies to run amok during the 2016 election campaign and they banned you, should that follow you and compromise your reputation on other social media?
posted by jamjam at 6:02 PM on April 6, 2022 [1 favorite]


I would subscribe to reputation ratings by a WokeAI that figured out everyone's sock puppets and rated their synoptic online behavior. I would want to be able to give different factors different weightings and then have some kind of transparent overlay on my monitor display this info unobtrusively as I go about the internet. I can certainly see downsides. But maybe those are tradeoffs we need to make to live so close together online, and maybe also not so different from those we make to live together already.
posted by hypnogogue at 6:45 PM on April 6, 2022 [1 favorite]


I was in a situation in a forum where mods were on Facebook promising the reps of a company they would 'deal with me' when I raised hell about them sheltering and excusing a serial predator.

That was a good sign for me that the forum was not a space I wanted to be. They went ahead and banned me under flimsy pretences, including that I brought up the mod behaviour.

Observing patterns of behaviour is important.
posted by geek anachronism at 7:25 PM on April 6, 2022 [3 favorites]


If you ran afoul of moderation on Trump's Truth Social platform and they banned you, should they be able to keep you from participating in other social media?

Who here is arguing for blind acceptance of all reports? One of the whole points of such a system is that you have people evaluating the reports and the external behavior, and making an educated decision about how that behavior should be treated. (And to be fair, if you're going to other sites to metaphorically shit in the punch bowl, that should count against you, even if it's an ideologically opposed space.)
posted by NoxAeternum at 8:18 PM on April 6, 2022 [2 favorites]


RPG.net (the original "we ban Trump supporters" website) banned someone for working for ICE. Here's the explanation. Do you think that this was a good use of offsite information for a ban? (I'm undecided. I'd like to hear what other people think.)
posted by Spike Glee at 9:47 PM on April 6, 2022


We banned [MEMBER] for being a member of ICE, which is a hate group. Before he was banned from Tangency, he regularly justified and valorized ICE's actions. Since that ban, he has continued to wave his ICE flag elsewhere on the internet.
If you have questions about why we consider ICE a hate group, Amnesty International has some information, as does the Southern Poverty Law Center.


That sounds plenty on-site to me; the off-site behavior at that point seems to be "Yup, continued the bannable behavior," which mostly sounds like confirmation that the ban shouldn't have been reverted.
posted by CrystalDave at 10:17 PM on April 6, 2022 [2 favorites]


Also, they once sanctioned someone for a "group attack" on Human Resources departments because, in a thread about sexual harassment, they said HR exists to protect the company, not the people. So. They make some dodgy decisions too.
posted by geek anachronism at 1:56 AM on April 7, 2022 [2 favorites]


> I’m a bit boggled at some of the seemingly extreme viewpoints in this thread? Say, for example, that you have just started dating someone.

But that's the problem with this plan: having someone join your community on the internet is nothing like dating someone. If it was, then you'd be right! If we were carefully filtering through an entire world of people to select an extremely limited few (usually just one person!) to share everything in our lives with, then absolutely yes these rules would be great to determine "who gets to date you?"

But instead this is more like "who gets to drive on your neighborhood's roads?"

Someone up above made the analogy to roads and traffic rules. I think the point of that analogy was missed on this thread: to me what's crucial there is the federal government stepped in to impose a single set of rules on everyone. You don't leave the creation and application of traffic rules on the road to, like, volunteer neighborhood associations of gated communities!

It would be a completely shit system, not to mention a wildly untenable one, if each separate neighborhood had a separate set of volunteer traffic cops trying to determine access to their own separate neighborhood roads based on the decisions of other unpaid-volunteer neighborhood traffic associations.

If a driver is that unsafe, the actual existing legal apparatus of a representatively and democratically elected body should be the ones revoking their license to drive everywhere. Same should go for the internet.

I think Anil Dash is looking at this problem all wrong. He's still got the Silicon Valley mindset that the internet is and always should be the wild frontier outside the reach of regular laws. And with the internet still being in its infancy, his view is a popular one. It seems unthinkable to all of us to ask the police, say, to take reports of internet harassment seriously. But that's changing. And while our laws have a loooooong way to go before they truly address the scope of harm caused by propaganda and misinformation and hate speech on the internet, we're getting there. Slowly.

And that's what will (and should!) solve the problem Dash is worrying about. Regular laws. If we jump in to give moderators the tools to track IP addresses by reputation before the laws can catch up to do what they're supposed to, we will be diverting our course into a truly damaging and nonsensical path.
posted by MiraK at 3:52 AM on April 7, 2022 [2 favorites]


MiraK, I don’t at all agree that community norms should be set and enforced by law. I think the law must be way more permissive about being rude or mean to people, axe-grinding, self-promotion, and a ton of other things than most communities want to be. Even if our laws expanded and were enforced, it would still be desirable to have communities that didn’t permit the maximum amount of assholishness allowed by law.
posted by aubilenon at 6:48 AM on April 7, 2022 [3 favorites]


We can't have it both ways, though. We can't say the harm being caused is serious enough that we need to track IPs and attach reputations to them, with all that this implies and all the consequences that will come attached, and also at the same time claim the harm isn't serious enough that the law needs to be involved.

Back when people were tarred and feathered for violating community norms in harmful but legal ways, at least the tar would wash off in a few days or a couple of weeks, and the person was able to learn and grow and start over in a new town. Now we want to tar and feather not even the person but their IP address for all eternity, and share that information across all communities on the internet. If the harm this person caused actually merits this punishment, then it is better handled by the law which is at least supposed to be accountable to the people.
posted by MiraK at 6:56 AM on April 7, 2022 [2 favorites]


And actually, Dash bases his argument on these examples:

> When people have deliberately targeted others for abuse, spread harmful propaganda, or even bilked people out of money or opportunities

All of which ought to be handled by the legal system. It makes zero sense to be empowering unpaid volunteer moderators who are accountable to nobody and offer no transparency in their processes to deal with literal criminals and abusers on their own.
posted by MiraK at 7:06 AM on April 7, 2022


Judging from MetaFilter and the rest of the internet, the only form of moderation that really works is "Hire a sufficient number of moderators who share my values and give them the power to make judgment calls."

Twitter's not going to spend that much money on moderators, and if they did, the moderators would not share my values.

Do MetaFilter's moderators take into account offsite user behavior? They can and they have, but not because of some specific policy or shared blocklist.
posted by straight at 10:15 AM on April 7, 2022 [1 favorite]


Twitter has several orders of magnitude more money than MetaFilter, but also several orders of magnitude more users. I've never seen anyone try to figure out how those differences add up. Does Twitter actually have enough money (real cash that they can spend) to hire enough moderators to do MetaFilter-scale moderating?
posted by straight at 10:33 AM on April 7, 2022


This discussion speaks to the larger issue of digital identity, or more accurately identity in an age where information technology is everywhere.

The internet birthed many new ways of communicating and interacting, including many that enabled people to interact anonymously or pseudonymously. This facilitated some great things, like breaking down repressive social mores and allowing people to express themselves without fear of reprisal. Pseudonyms allowed people to build digital lives in many more social domains than we may previously have been able to, while still allowing for reputation and social capital.

Anonymous/pseudonymous action is a crucial protection against the powerful, and in a digital world "the powerful" includes not only holders of wealth or institutional power but anyone with access to digital tools to harass/brigade/threaten/etc. I.e. basically everyone. And despite the internet offering new methods of communication, it's not fundamentally some "different" area of life where abuse shouldn't matter just because you're sometimes hidden behind a (thick or thin) veil of anonymity.

The fear that we're going to go too far and build systems of censorship and oppression is not misplaced -- just look at China. Internet communication methods are often subject to markedly authoritarian or draconian methods of governance (how many bans require a jury trial of one's peers?) simply because they can be, but also because we've gotten used to them working certain ways, and because existing powerful interest want them to work that way.

This status quo is far from ideal in terms of either facilitating freedom or reducing harm. The past 25 years have been a long series of lessons about how unrestricted speech can serve anti-social aims, while at the same time showing what happens when our unrestricted speech never goes away and can be forever dredged up from Google or the Wayback Machine.

Ultimately the question is, how can we keep the benefits of anonymity (protection from abuse of power; facilitating experimentation and free-er speech) while still building communities in the public interest? As others have said, it's not an all-or-nothing proposal -- the cool part is, you can build almost any kind of identity, reputation, and governance systems in software. Just because they're not available yet doesn't mean we shouldn't think about them, try them out, and see what works. We just have to keep in mind why we care about anonymity. The Matrix reputation solution (and cross-site reputation generally) seems like a good attempt, although it's never clear what it might look like scaled up to the whole internet, or all the ways that system itself might be subject to abuse.
posted by ropeladder at 11:13 AM on April 7, 2022


Twitter's not going to spend that much money on moderators, and if they did, the moderators would not share my values.

Like any editorial policy, of which online moderation is absolutely an analog. And like griping about Harper's not wanting to run your writing, you're always allowed to publish it yourself: GYOFB. If a blog is destined to be too isolated, get involved in federated services similar to the semi-realtime interaction of Twitter/FB/etc., where you might still not get access to the entire audience you think you're entitled to, but you probably won't be as disconnected as you would be by not having anything, or by holding up a dusty corner of the blogosphere. If you do find yourself banished after exhausting all possibilities of community, then some soul-searching is in order.
posted by rhizome at 1:07 PM on April 7, 2022


And to be sure, those are only the free (or very low cost) options.
posted by rhizome at 1:07 PM on April 7, 2022


Judging from MetaFilter and the rest of the internet, the only form of moderation that really works is "Hire a sufficient number of moderators who share my values and give them the power to make judgment calls."

Actually I think the key to moderation is transparency and consistency. I can disagree with community norms, but as long as I know them and feel they are consistently applied, then that’s good moderation. The “Matrix” approach seems to undermine consistency and transparency.
posted by haptic_avenger at 2:25 PM on April 7, 2022 [1 favorite]


But instead this is more like "who gets to drive on your neighborhood's roads?"

That is a mischaracterization of most internet communities, which are more frequently voluntary associations of sub-groups of people who aren’t required or don’t need to participate in that specific online community. Eg. the ability to post in a given sub-Reddit is not a basic human right, the way travel on public thoroughfares is.
posted by eviemath at 4:31 AM on April 8, 2022 [2 favorites]


Let me give a different analogy. I have a former co-worker who got himself fired for creating a hostile learning environment for students and former workplace colleagues after falling down one of those online pipelines to right-wing extremism. Although his behavior crossed the line for workplace conduct, it has not (yet?) risen to the level of criminal harassment against any one particular individual (he spreads it out just enough among different people and random strangers or public figures). Should other employers give him a chance, because he technically has a particular set of qualifications and hasn’t done anything there yet? That happened before the pandemic, and so of course he has also adopted many of the pandemic-related conspiracy theories. Turns out that he has also been causing problems at a couple of local businesses of the professional sort, where customers are clients and the business can refuse to take on new customers for certain reasons (such as lack of capacity to take on new clients) - we’re talking important services, but not a grocery store, and other options for businesses in the same category are available locally. So now he’s also not allowed at those businesses - which, probably not coincidentally, happen to be women-owned. Should similar businesses that he might be seeking to become a client of not take his past behavior into account when considering whether to take him on as a client, or at least take some precautions, like warning him that he can’t pull the same crap and keeping a more careful eye on him than they would other random new clients?

Notice here that even within existing law and government-set regulations, we have different standards in different contexts. So why should all online communities from sub-Reddits or Metafilter or Ravelry or similar be expected to have the same policies and standards as something like Twitter or Facebook as a whole, that are large enough that they should probably be nationalized as infrastructure that has reached the size of de facto public service? Or are you failing to clarify context and that is not in fact what you are arguing, MiraK?
posted by eviemath at 4:56 AM on April 8, 2022 [2 favorites]



