Facebook is Removing Posts by Singaporean Activists
July 9, 2016 6:00 AM

When can Facebook take down your post?
Five days ago (July 4), Ms Teo Soh Lung – an activist and former ISA detainee – had her post “Police Terror” removed “for violating community standards”, according to freelance journalist Kirsten Han. Shortly after (July 7), Ms Han herself received a notice that her post – in support of Ms Teo and a reproduction of “Police Terror” – had been taken down for the same reason; the takedown also came with a 24-hour ban on her posting. In late June (June 23), blogger Andrew Loh had his post about violations of Cooling-Off Day rules taken down by Facebook – but in his case, Facebook restored the post and issued him an apology just three days later (June 26).

A copy of the original banned post by Teo Soh Lung can be found here. [Warning: TR Emeritus is an "anti-government" site that is quite lax in fact-checking.] As The Middle Ground notes, '[in] Ms Teo’s case, community standards don’t seem to have been violated; “Police Terror”, albeit provocative, is Ms Teo’s recount of her experiences with the police, namely the seizure of her electronic devices.'

Kirsten Han wonders if '[it] is possible that the post was removed because it was reported by a large number of people, triggering an automatic algorithm.' Social media in Singapore has become increasingly political in recent years, with large numbers of pro-government and anti-government individuals constantly engaging in verbal sparring.

So far, it seems that Facebook has been silent on the matter.
posted by destrius (57 comments total) 13 users marked this as a favorite
 
There have been a couple of reports I've seen of Facebook removing "Black Lives Matter" statements from people in the US as well.

In the article they mention a theory that some critical mass of people reporting a post triggers an algorithm that removes it; that sounds really plausible.
posted by EmpressCallipygos at 6:13 AM on July 9, 2016 [6 favorites]


Assuming the mirror at TR Emeritus is accurate, the post is obviously not in violation of any reasonable TOS. I'd go with the automatic algorithm, very likely gamed by professionals in the employ of the Singaporean government to monitor and manipulate social media.
posted by Bringer Tom at 6:14 AM on July 9, 2016 [3 favorites]


Never ascribe to malice that which is adequately explained by the soulless grinding of a dystopian, faceless corporate machine churning through terabytes of personal data every second.
posted by Behemoth at 6:18 AM on July 9, 2016 [106 favorites]


this is old, but i stumbled across it just yesterday: Disneyland With The Death Penalty.
posted by andrewcooke at 6:22 AM on July 9, 2016 [4 favorites]


In my experience, the only thing that never gets removed is ACTUAL hate speech -- misogyny, racism, anti-semitism, etc.
posted by jeather at 6:22 AM on July 9, 2016 [38 favorites]


andrewcooke: this is old, but i stumbled across it just yesterday: Disneyland With The Death Penalty.

A bit of a derail... it's indeed pretty old, and Singapore has changed a lot since then. I kind of dislike how Gibson's article is one of the first things (liberal) people remember when they think about Singapore; I find it a bit too superficial and just-so, written by somebody who only spent a short time in the more touristy areas and didn't manage to get into the underbelly of the complicated city he was in.
posted by destrius at 6:37 AM on July 9, 2016 [14 favorites]


Never ascribe to malice ...

The malice is just an emergent phenomenon
posted by kleinsteradikaleminderheit at 7:05 AM on July 9, 2016 [27 favorites]


Never ascribe to malice what can be adequately explained by pursuit of profit.

"the only thing that never gets removed is ACTUAL hate speech -- misogyny, racism, anti-semitism, etc."
The Haters are a very lucrative market.

That's just one of the reasons why I see NO value for Facebook in my life.
posted by oneswellfoop at 7:15 AM on July 9, 2016 [6 favorites]


I kind of dislike how Gibson's article is one of the first things (liberal) people remember when they think about Singapore; I find it a bit too superficial and just-so, written by somebody who only spent a short time in the more touristy areas and didn't manage to get into the underbelly of the complicated city he was in.

The title "Disneyland With The Death Penalty" is also kind of weird, since, like, Disneyland already has the death penalty.
posted by threeants at 7:23 AM on July 9, 2016 [5 favorites]


My Facebook account serves as a placeholder for my wife to be "Facebook married" to me. I don't even know the password and only log in once every month or so when she lets me have access to her laptop.

This arrangement is mostly owing to the fact that Facebook is an incredibly hostile algorithm-driven place that has reduced the friction of connecting competing and adversarial social networks to the point where the platform now serves as a form of weaponized antagonism. My particular mental state can't deal with that well, ergo I'm not on there anymore.

Given my own experiences with being reported, having content deleted and my account put on hold, it does not surprise me at all that this is happening to other activists as well.
posted by Annika Cicada at 7:35 AM on July 9, 2016 [3 favorites]


FWIW, I've reported abuse on Facebook a few times, and the clear-cut cases are almost always taken down within an hour.

I've rarely seen them *reluctant* to remove content.
posted by schmod at 7:44 AM on July 9, 2016 [1 favorite]


I just use facebook to look at people's faces.
posted by srboisvert at 7:46 AM on July 9, 2016 [9 favorites]


MetaFilter: just an emergent phenomenon
posted by Fizz at 7:46 AM on July 9, 2016 [1 favorite]


Dr. Emily Laidlaw, a law professor at the University of Calgary, is doing some interesting research on corporate social responsibility and how much of an obligation platforms like Facebook have to support what you might generously describe as free speech rights (keeping in mind that they are not a government actor, so technically can't violate your free speech rights) as well as their obligations to support the opposite right to not be exposed to hate speech, defamation, and harassment. Because of the reach of platforms like Facebook, Twitter, etc, she argues that they should have an actual obligation to strike an appropriate balance on those issues, and not just to do whatever they want under contract law.

The difficulty she has in doing her research -- the difficulty that anyone has in doing this kind of research -- is that it's so difficult to figure out what Facebook is actually doing, both because they won't say and because it doesn't seem to be consistent from issue to issue, place to place, and time to time. So, it's equally possible that the posts were removed by an overzealous algorithm or by an overzealous/biased human being, and that either one of those may or may not have been under the influence of the local government at the time.
posted by jacquilynne at 8:08 AM on July 9, 2016 [5 favorites]


In my experience, the only thing that never gets removed is ACTUAL hate speech

Oh that's easy, just unfollow your extended family and most of your friends
posted by RobotVoodooPower at 8:09 AM on July 9, 2016 [12 favorites]


I've rarely seen them *reluctant* to remove content.

There have been multiple attempts to persuade Facebook that pages promoting the medieval blood libel are, perhaps, not within the TOS. Facebook tends not to agree.
posted by thomas j wise at 8:12 AM on July 9, 2016 [3 favorites]


I suspect a lot of the reason Facebook is inconsistent is that whoever they've farmed their editorial work out to is a roomful of people somewhere, making decisions very quickly on very little guidance.

That doesn't explain why they leave all of the gun-sale groups up despite it being against their own policies, but that's a derail for another time...
posted by fifteen schnitzengruben is my limit at 8:21 AM on July 9, 2016 [1 favorite]


This happens all the time in authoritarian states. Bots or humans report activity they don't like, and the post gets taken down. Facebook doesn't care.
posted by k8t at 8:58 AM on July 9, 2016


Mass reporting of posts is a technique used both by people who have legitimate reasons to want something genuinely bad removed and by people who are trying to get something taken down that shouldn't be. So it's not exactly a situation where a one-size-fits-all approach is appropriate.
posted by jacquilynne at 9:24 AM on July 9, 2016 [2 favorites]


FWIW, I've reported abuse on Facebook a few times, and the clear-cut cases are almost always taken down within an hour.

If by clear-cut you mean pages like "let's all kill schmod whose address is 123 Metafilter Way" (or pages that say maybe pictures of breastfeeding are okay), sure. If by clear-cut you mean pages like "Jews kill Christians, Hitler was right" or "All [bad word for any marginalised group] are pedos and deserve to be shot" then no. I've tried a number of times.
posted by jeather at 9:29 AM on July 9, 2016 [2 favorites]


Part of the problem is that Facebook really does have a notion of "community standards". It's inconsistently applied (imo) and pretty hard to define in a globally accessible medium. But "community standards" has to allow content that you may not like if enough folks do like it. I think that's one reason hateful stuff that isn't directly threatening or very specific is hard to get taken down. I may not like it but there's literally a community of folks who do like that stuff.
posted by R343L at 9:35 AM on July 9, 2016


Also: consider an image meme with a quote from, say, Rush Limbaugh or some other popular conservative talk radio person saying something really awful. That is speech quite literally taken from someone who is quite popular in the real world. Should Facebook take that down? What if the words express the same content but aren't a quote from someone (relatively) famous? It's actually pretty hard to be consistent or fair or tolerant and be open to everyone (yes, even racists).

We might argue for different standards but I think it's a genuinely hard problem. It's one reason that communities which give users the ability, and the encouragement, to genuinely "police" each other work well. But Facebook isn't one community but many many many overlapping ones.
posted by R343L at 9:41 AM on July 9, 2016 [2 favorites]


But "community standards" has to allow content that you may not like if enough folks do like it.

It doesn't. They might choose to do this, but they don't HAVE to.

Of course that isn't even how they define community standards. They claim that they remove hate speech: "Organisations and people dedicated to promoting hatred against these protected groups are not allowed a presence on Facebook. As with all of our standards, we rely on our community to report this content to us."
posted by jeather at 9:45 AM on July 9, 2016 [3 favorites]


I see anti-Jewish stuff posted all the time on FB, and I report it when I see it. Not once has my report been successful; "just wait until we wipe all of the filthy Jews out" is apparently okie dokie. I am starting to suspect that whoever is reviewing these reports is not just in a dim basement, but maybe in a country where it's taken for granted that these things are OK to say?
posted by 1adam12 at 10:05 AM on July 9, 2016 [2 favorites]


That's why I said it was inconsistent. Because stuff that is ostensibly against their policies stays up, but their policy is kind of incoherent, and I'm sure most of the time it's "algorithms" along with occasionally poorly paid humans double-checking. It's easy for me to see someone (possibly not even a North American) who, when reviewing content, doesn't realize that it's against the specific hate speech rule because it looks like "normal" content. That is, they don't recognize it as hateful because they see it elsewhere (either in their lives or on the site), or don't recognize it at all because maybe the reviewer is from SE Asia. Or maybe the reviewer does recognize it as awful but shrugs, notes that a lot of people like this kind of crap, and (inconsistently) decides it's something the "community" accepts.

I'm not saying this is a good outcome. I'm just saying that I believe someone when they say they have no trouble getting content removed, and I believe folks who say they can't get content removed. Facebook's own rules are incoherent and so of course they won't be applied consistently.
posted by R343L at 10:16 AM on July 9, 2016 [1 favorite]


Situations like this one are why I find it rather disturbing that so much activism has shifted to commercial platforms such as Facebook or Twitter. On the one hand, such platforms are very easy to use for those without a great deal of time or other resources, while also reaching a wide audience. On the other, for-profit companies may have a monetary incentive to remove content that challenges prevailing norms, especially if it is posted by marginalized groups.
posted by Excommunicated Cardinal at 10:18 AM on July 9, 2016 [9 favorites]


The problem with Facebook is not policies, it's their business model where success is predicated on people building larger and larger social networks and staying "captured" on their platform.

This is achieved by removing barriers to communication and increasing each user's reach. You can set your privacy setting to be super restrictive but heaven forbid you comment on a news post (not even on Facebook!) that is tied into FB authentication, because you are now public whether you like it or not.

It's a lot to describe, but basically Facebook as a base technology is akin to shaking a jar of bees to "gin up" the potential for social networks to grow. Facebook requires your account to be leaky, and that leakiness is what allows groups of people who in real life have enough sense not to associate with each other to get basically crammed into each other's faces and encouraged to fight. Which leads to groups organizing and rallying and mass reporting and basically using the cover of free speech to be really really shitty and abusive to people they disagree with or are bigoted towards.

So community standards ain't gonna solve that problem, because it's a necessary feature of the platform's main function, which is to exponentially grow social networks.
posted by Annika Cicada at 10:26 AM on July 9, 2016 [1 favorite]


17 years ago I worked for excite.com on the "communities" team, which was for all intents and purposes one of the early prototypes of "user generated social networks". My job was to review reports of content that violated policies.

I could only do that job for about 3 months, because there were only so many FBI reports I could file for child pornography before I started becoming way too negatively affected by it. Thankfully they recognized that I had talents with building front ends that integrated with multimedia video, so they moved me to the video chat team.

We had about 20 content reviewers for about 1 million active users. Our burn rate was about 3-6 months.

I can't imagine what scale you have to achieve to match what Facebook needs to do that for a billion users. 20 thousand people minus the churn every 6 months, so maybe 30 thousand people a year, roughly?

That's. A huge problem to solve.

The easier thing IMO is to give people better ability to control the leakiness of their content and put a tighter limit to how much interaction is encouraged in order to grow social networks. Basically a little more friction on the platform would go a long way I think.
posted by Annika Cicada at 10:46 AM on July 9, 2016 [8 favorites]


Yeah, smart educated humans as reviewers just don't scale. Which is why their process doesn't even work now (I doubt they pay for many human reviewers - their employee base worldwide is less than 10k). But even so, there is a lot of egregious content I've seen where on the one hand it's "why the hell is that still up?!", but on the other it would be perfectly legal to publish on your own, and while it's certainly within FB's rights not to host it, there's something weird about the idea of FB being a platform everyone uses but that (in theory) enforces one set of values about acceptable speech. The theoretical standard happens to be one I value in my online communities, but those values sadly aren't universal. And there's no guarantee they wouldn't restrict speech I'd be less okay with having hidden. It would feel less weird to me if Facebook were less popular / "the" place for folks. It comes back to it not really being "one place".
posted by R343L at 11:06 AM on July 9, 2016


(To be clear, I am not defending hate speech. I'm challenging the idea that even humans can be fair and not make mistakes applying rules on a site as big as Facebook, because "community standards" implicitly says there is one standard, which is nonsensical for a global site that wants everyone to join. I guarantee there are people who get upset that some post I think is perfectly fine stays up. Because they have different ideas about what "community standards" are.)
posted by R343L at 11:25 AM on July 9, 2016


I've noticed that activists I know who are either popular or unpopular tend to have their stuff removed and to get automatic bans far more than people who say actually worse things but don't have that many eyeballs on them. Stuff I've reported that's been actual threats has not been removed, while people get bans for stuff that's clearly not a violation - and there's no good way to protest a violation unless it's yours. Or even then. There's no review process.
posted by corb at 11:27 AM on July 9, 2016 [6 favorites]


I was a little surprised that "Disneyland With The Death Penalty" article was about Singapore. I assumed it was about Facebook.
posted by webmutant at 11:30 AM on July 9, 2016 [2 favorites]


Community values. MeFites hire mods to be the active face of our common sensibilities. It's clear that some friction exists, even here. How much more so over a billion-user front from perhaps dozens of countries? Oh, dear lipstick. Help me find the pig.

I am at that end of the spectrum that tries to keep suspicions from being the underlying current of my day. I could drift into paranoia, but I keep it tamped down a bit. I'm old, and I don't wish to die screaming. I will be outraged if I discover that the internet has finally become the government's darling, editing itself to suit the power structure, sort of like the way my browsers like to tell me, with more and better resolution, what I like, what I want to see, what the next word I was about to type would probably be.

In this thread I've learned that gangs influence the content of FB, that the common sensibility (as evinced by likes, dislikes, and complaints) can be influenced by blocs who instruct their computers to complain and register thus and so--all this to keep the public dialog from drifting too far into certain dark corners. It's a huge and convoluted society that has sprung up with the advent of the internet. We are all onions, layers upon layers, the tough ones near the surface. My comfort is a sunny porch, not a meal for my children or a saddle for my camel. How deep do we go before we get to the same smile? Can't do that. You won't like what makes me grin.

It's not hard to imagine Big Brother's hand here. Even though most generations after me see Big Brother as a vaguely applied metaphor, I believe they are aware of content framing, spin, cherry picking. I have hopes for the internet. But, if you'd like to see some wonderful prophecy, do read 1984 (by Orwell), and notice how it ends. BB doesn't tire.
posted by mule98J at 11:31 AM on July 9, 2016 [3 favorites]


I've read 1984 and this is why I am in favor of sensible government regulations.

The dystopia ended up being not so much big government as big capital.
posted by Annika Cicada at 11:38 AM on July 9, 2016 [2 favorites]


Unless it negatively impacts their bottom line (possible through activism, but perhaps unlikely), FB (and most every other multinational corporation) will profit from brutal regimes, and be complicit in the deaths of activists.

I once asked coworkers at a company that built technology that enabled the suppression of activists in brutal regimes how they felt about enabling the deaths of activists... My coworkers (asked privately) refused to even discuss the topic.

This is not unique to technology companies - oil companies, arms companies (obviously), and many (all?) sectors that could contribute to the detriment of human rights will do so if profit can be made (unless for some reason it becomes unprofitable: e.g. apartheid South Africa). While there are specific companies that could be named (feel free!) that will not value profit over human suffering, they are outliers and not the rule. Note that companies that have been 'shamed' into doing the right thing still did this because they value profit - it was potentially becoming unprofitable to be complicit in evil.
posted by el io at 11:41 AM on July 9, 2016 [3 favorites]


For all their talk of algorithms, they sure seem incapable of detecting basic DOS attacks which take the form of a group effort to simultaneously report given posts. These should be detected and flagged for skilled, non-outsourced staff to confirm whether they're legitimate.... And here's the other part: when they're found to be not legitimate, they should counter by banning the participants' accounts.

As someone who's worked on similar systems (not at FB) in the past, I'll just point out how difficult this actually is, both on the algorithms front and (as has already been pointed out) the human front. Here's what it's like on the engineering side:

What you're basically dealing with is a series of clicks, and identifying whether that series of clicks is 'real' or not. Any additional context that you want about those clicks (eg, what OTHER things did this person click on lately) is going to involve processing a stupidly large amount of other traffic, joining, and making some kind of additional assessment; this is hard at FB scale, both from an engineering and an analysis perspective. There's a bunch of good stuff you can do to fend off the most imbecilic attacks (eg, all from a handful of IP addresses, or IPs known to be in AWS datacenters). But once the imbecilic attacks stop working, the attacks get smarter, and blend in much better with actual human traffic.

One approach is to create a real-seeming sub-network of bot accounts who post on one another's walls and friend each other, which you can then use to spam-flag a post you don't like (often for pay). Have your bot accounts friend a few real people to help the network look less like an island of evil. To run the network, you can use malware-infested computers, which are basically AWS datacenters for black hats; this gives you a good distribution of IP addresses, for example.

Or maybe don't bother with the bot accounts at all, and just use malware to click from the accounts of infected users. Good luck differentiating those clicks from the human clicks... To make it all worse, the more sophisticated the attack, the more time-consuming it is for human evaluators to figure out if it's real traffic or not.

And if you're dealing with a group of actual humans coordinating to flag something, there's NO WAY for algorithms to determine that this is bad traffic; it's actually real humans, after all. So then you need systems that try to parse natural language in the post being flagged, and determine whether a post actually breaks guidelines or not. This is sort-of possible with machine learning, but far far far from perfect.
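To give a feel for what that machine-learning step amounts to, here's a toy sketch; the training examples, labels, and threshold are purely illustrative, and bear no resemblance to anything Facebook actually runs:

```python
# Toy sketch of "classify whether a flagged post breaks guidelines".
# The training data and labels are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_posts = [
    "everyone from that group is vermin",      # violates guidelines
    "go back where you came from, filth",      # violates guidelines
    "the police seized my laptop and phone",   # does not violate
    "here is my account of being questioned",  # does not violate
]
train_labels = [1, 1, 0, 0]

# Bag-of-words features plus a linear classifier: crude, but it shows the
# shape of the approach.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_posts, train_labels)

flagged_post = ["my account of the police seizing my devices"]
prob_violation = model.predict_proba(flagged_post)[0][1]
print(f"estimated probability of violation: {prob_violation:.2f}")
```

A real system has orders of magnitude more data and far better features, and it still gets the hard cases wrong.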

In short, the people running the algorithms are dealing with an incomprehensible stream of shit. And lots of humans spout shit, too, making it difficult to tell the good shit from the bad shit. So some handful of wizards do incredible work behind the scenes in this department, but tend to get worn down over time and move on to other projects, with more tangible benefits, which look better on their resume than 'Level 9 Shit-Wizard'.
posted by kaibutsu at 12:06 PM on July 9, 2016 [7 favorites]


A few months before Edward Snowden released the NSA files, I was traveling in Chile, where I saw graffiti that said, "Facebook Es El CIA." I initially shrugged it off as kooky post-Pinochet lefty paranoia, but now that graffiti is scarily starting to make sense to me.
posted by jonp72 at 12:15 PM on July 9, 2016 [1 favorite]


Part of the problem is I don't think a human ever looks at reports unless a post gets lots of reports about it. So stuff that's offensive but that not very many folks see (and why would you see it unless you're in those "communities" or someone links you to it) won't get enough reports to get automatically pulled. And bad actors mobbing a non-offensive post looks... just like lots of folks justifiably upset at an offensive post. I see posts that shouldn't be taken down all the time. It's often activists etc, which is not really surprising (because targeting and silencing), but I think people read too much into the automated message about their post's removal. The messaging they have is awful, as it implies someone looked at it. It's something like "we removed this post because it violated community standards", but of course that just means some threshold of folks reported it as such within a certain time frame. Facebook could probably help with that particular case by being more transparent about how it all works. Though I suspect it changes rapidly.
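Purely to illustrate the shape of the mechanism (the threshold, window and function names below are made up; I have no idea what Facebook actually uses):

```python
from collections import defaultdict

# Invented numbers -- the point is only the shape of the mechanism.
REPORT_THRESHOLD = 100     # reports needed before automatic action
WINDOW_SECONDS = 3600      # counted within a rolling one-hour window

_reports = defaultdict(list)  # post_id -> timestamps of reports

def handle_report(post_id, now):
    """Record one report; auto-hide the post once the threshold is crossed."""
    recent = [t for t in _reports[post_id] if now - t < WINDOW_SECONDS]
    recent.append(now)
    _reports[post_id] = recent
    if len(recent) >= REPORT_THRESHOLD:
        hide_pending_review(post_id)

def hide_pending_review(post_id):
    # In this sketch the "community standards" message goes out here,
    # even though no human has looked at anything yet.
    print(f"post {post_id} hidden: 'violates community standards'")
```

A coordinated mob and a hundred genuinely upset users look exactly the same to something this simple.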

But it's not a matter of them just having the will to do something about it or not caring. There are fundamental conflicts between their growth intentions (having everyone on the service) and being able to police behavior (or even deciding what the boundary of behavior should be.) Humans can't look at it all so they have automated systems to take reports. But the same systems that let you get an offensive post taken down can be manipulated to take down something inoffensive. This is actually hard. I think they can do more but everyone has a case where they didn't think the system was working and to conclude "they could (easily) fix it if they cared" is pretty unfair. You are applying your full human attention to particular cases where you know the full details and context. So it's obvious to you what the patterns are. They aren't necessarily obvious to someone else. Even some stuff I've had friends directly link to me and tell me is abusive in the past has looked ambiguous to me because I lacked their context.

My god I can't believe I'm defending Facebook. I'm pretty deeply ambivalent about it. I've had great conversations there but it's also got major problems. But it's not really easy.

As an example on another network: some friends on Twitter were suddenly getting "prove you control your account because you've been posting weird" messages a couple of weeks ago. Most of them were not posting anything harmful (that I saw). They were posting a lot, though, and sending multiple tweets at folks outside their follow lists (and occasionally getting blocked), which, if you look at it another way, looks just like the profile of someone who goes hunting for tweets they don't like and sends that person a lot of hateful replies. So I quickly suspected some new anti-abuse detection system not quite working as intended. That is what the big social networks face. Most of them suck at abuse prevention. And I don't think they do enough, but it's really not something they could just fix if they only wanted to.

Worse, we won't all agree on what should be taken down. "So-and-so lives at exact-address. Someone go rape her" is maybe something we agree is never acceptable. But we're also the kind of people who defend folks who post lyrics that technically threaten (say) a police officer and then get charged criminally for it, because in context it looks to us more like poetry, or at least not a serious threat. Some folks genuinely think breastfeeding photos are bad because nudity. Most "reasonable" people I know don't think that's a problem (some, I think, would probably say breastfeeding-shaming posts are actually hate speech!). We literally disagree about what is offensive, but Facebook is asking everyone to report stuff they think is offensive. If enough folks do, then maybe a human with next to no context has to decide if the reports were fair. When we see something unjust in terms of moderation we assume others will agree it's obvious. But MetaFilter is one of the best-moderated sites (with pretty firm values about acceptable speech that are much stricter than FB's), and even we have conflicts about what is acceptable or not, where different folks reading the same comment have wildly different interpretations.
posted by R343L at 1:15 PM on July 9, 2016 [3 favorites]


Yep, exactly.
There's definitely room for improvement on the tech side on these issues, but these are - for the most part - human problems, not engineering problems.

A big part of the problem is that there isn't a one-size-fits-all algorithm that is going to handle the community guidelines; any site's guidelines have to be almost entirely permissive, for exactly the reasons that R343L outlines. But this lets in piles of hate speech, and makes the internet unlivable for people from many targeted groups. What's needed is machine learning that responds to more individual expectations around acceptable content, by putting the slider(s) for the classifiers in people's own control instead of trying to find a correct site-wide setting.
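As a deliberately oversimplified sketch of the slider idea (the score, names and defaults are all hypothetical):

```python
from dataclasses import dataclass

@dataclass
class FilterSettings:
    # 1.0 = show everything, 0.0 = hide anything the classifier is unsure about.
    hate_speech_cutoff: float = 0.8   # hypothetical platform default

def should_show(post_scores: dict, settings: FilterSettings) -> bool:
    """Hide the post for this user if its classifier score exceeds their cutoff."""
    return post_scores.get("hate_speech", 0.0) < settings.hate_speech_cutoff

post = {"hate_speech": 0.55}            # score from some upstream classifier
print(should_show(post, FilterSettings()))                         # True
print(should_show(post, FilterSettings(hate_speech_cutoff=0.3)))   # False
```

Same classifier, same post, different outcome per user; nobody has to agree on a single site-wide line.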
posted by kaibutsu at 1:38 PM on July 9, 2016 [2 favorites]


Kaibutsu: detecting automation is very difficult and requires behavioral analytics services that use JavaScript to fingerprint the connection, inject JavaScript into the page loads to read the browser DOM, and run various statistical analysis techniques on the HTTP responses to detect Selenium jigs and the like. This is something that I am working on for my current employer (I'm a network security engineer), and while it is extraordinarily difficult it is not impossible.

With human-based attacks you have to use fingerprints, JavaScript and UBA (user-based analytics) to put together a hell of a lot of data points to surface an indicator of abuse, and on the backend of that system is a team of about 75 people whose primary job is to review and investigate all the notable events and research and categorize them. Once they've got a proven abuse vector, they update various backend systems that feed into the application layer, my front-line security tools and security operations logging systems.
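A very rough sketch of how those data points might get rolled up into a single notable event for that team (the signal names, weights and threshold are invented):

```python
# Invented signal names and weights; real UBA pipelines use far more inputs.
SIGNAL_WEIGHTS = {
    "headless_browser_fingerprint": 0.5,
    "reports_filed_per_minute":     0.3,
    "account_age_under_one_day":    0.2,
}

def abuse_score(signals: dict) -> float:
    """Weighted sum of normalised signals, clamped to [0, 1]."""
    total = sum(SIGNAL_WEIGHTS[name] * value
                for name, value in signals.items() if name in SIGNAL_WEIGHTS)
    return min(total, 1.0)

event = {"headless_browser_fingerprint": 1.0, "reports_filed_per_minute": 0.9}
if abuse_score(event) > 0.6:                  # arbitrary review threshold
    print("surface as a notable event for the review team")
```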

Ugh. It's a lot of complexity and integration that seems better served by better defining the goals of the business, so that the application workflows deter abuse through feature and design as opposed to behavioral analytics systems bolted on the side...
posted by Annika Cicada at 1:47 PM on July 9, 2016 [2 favorites]


There's weirdness developing around the takedown of the video of Philando Castile being shot to death by police, which was livestreamed to Facebook by his girlfriend Diamond Reynolds:
Not too long after it was posted, the video vanished. Facebook blamed it on a "technical glitch" and restored the recording an hour later. . . .
Reynolds thinks cops used her confiscated phone to go into her account and delete the video, but Facebook has reiterated that it was a "technical glitch."

That it was a mere "technical glitch" is simply not credible, so why wouldn't Facebook just go with Reynolds' claim police used her cellphone if that's what really happened?

My guess is, police did not use her cellphone to take down the video, but instead have a direct mechanism to take things down without consulting Facebook in case of an emergency, and chose to use it in this circumstance -- a mechanism which Facebook has probably supplied to governments all over the world as one of the conditions for being allowed to do business.

But they wouldn't want account-holders to know that, of course.
posted by jamjam at 2:12 PM on July 9, 2016 [3 favorites]


Content gets taken down on all kinds of community sites, often for no reason, or for simply being unpopular or being expressed by unpopular people. All kinds of places can be unfriendly to dissent. Online censorship is a tough problem, and one that is not necessarily limited to Facebook or to bad actors in government.
posted by a lungful of dragon at 3:00 PM on July 9, 2016


I actually believe "technical glitch" for the video going down (and back up) if they are denying the police did it via her account. Facebook Live is a relatively new feature. That has to be one of the most shared ones ever. That's just the kind of event that can expose a load issue or other bug you only see when a feature suddenly gets used a lot.
posted by R343L at 3:04 PM on July 9, 2016 [1 favorite]


Facebookistan is a fascinating documentary on the subject.
posted by fairmettle at 3:12 PM on July 9, 2016 [1 favorite]


Ugh. It's a lot of complexity and integration that seems better served by better defining the goals of the business, so that the application workflows deter abuse through feature and design as opposed to behavioral analytics systems bolted on the side...

The tricky part is that when you've got a big enough site, literally every button available becomes an avenue of abuse. It's demoralizing...

Thinking a bit more on it, there are two separate problems we're talking about in this thread: automation and badly behaved humans. The former is a mostly-solvable technical problem, where we get most of the would-be bad actors with well-built anomaly detection systems and then have a bunch of ongoing skirmishes with persistent jackasses. I think the big sites focus more on automation because it is kinda-solvable, and because if you don't, the site gets hella spammy, which chases out real users.

The "badly-behaved humans" problem (which odinsdream is concerned with) is where I think some real new directions are needed. The user-customized ML idea I mentioned above is mainly in response to that issue. It also occurs to me that just opening up a common API to allow users to filter comments with their own automated systems (ie, allowing mere mortals to turn on a no-more-death-threats plug-in) would be a fantastic step forward, allowing third-party development of strong defenses across platforms. Sorta like spam assassin.
posted by kaibutsu at 3:58 PM on July 9, 2016 [1 favorite]


I'm working on a paper that proposes an extension to the BGP protocol that passes along fingerprint data and reputation scores as part of a community string, with the idea that you could "route" unique individual reputation scores along with ASN and IP routes in an open-source manner. My thought is that this information could be shared up into the application layer and be user-definable, as in the front-end UI could surface a user-definable element that says "fingerprints below x score shall be invisible to me".

As to how you prevent abuse of that, I'm working it out along the lines of how BGP route integrity is currently monitored and maintained, as in route dampening, looking glasses, etc. That way all the systems that feed into the "fingerprint information base" can have reliability scores, and you can choose to only trust fingerprint scoring data from sources with high enough trust that their data is good.

My idea requires a well-defined protocol, vigilant monitoring and solid implementations, but I think the success of BGP can be used as the springboard towards a truly open and shared security model that is trustworthy.
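On the consumption side, the user-facing piece might look as simple as this sketch (the sources, trust ratings, scores and cutoff are all invented):

```python
# Invented sources, trust ratings, scores and cutoff.
SOURCE_TRUST = {"carrier_feed": 0.9, "volunteer_feed": 0.4}

def combined_reputation(scores_by_source: dict) -> float:
    """Trust-weighted average of the reputation scores reported for one fingerprint."""
    pairs = [(SOURCE_TRUST[s] * v, SOURCE_TRUST[s])
             for s, v in scores_by_source.items() if s in SOURCE_TRUST]
    weight = sum(w for _, w in pairs)
    return sum(v for v, _ in pairs) / weight if weight else 0.0

USER_CUTOFF = 0.5   # "fingerprints below x score shall be invisible to me"
fingerprint = {"carrier_feed": 0.2, "volunteer_feed": 0.8}
print(combined_reputation(fingerprint) >= USER_CUTOFF)   # False -> hidden
```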

(I don't wanna get into the mechanics of how fingerprinting and risk scoring is generated and the trust model of how that's shared because that's a huge ball of wax that's way beyond my willingness to talk tech on a Saturday hahahaha)
posted by Annika Cicada at 4:46 PM on July 9, 2016 [1 favorite]


Is there some sort of vanishing point, where the attacks become so sophisticated that malware detection/correction is not possible?

As a 98J (decades ago) I learned that jamming a receiver was as simple as overriding a signal by transmitting a stronger signal--any noise would do--or you could emulate a signal, changing it a bit, to confuse the operator of the receiver. But you jam transmitters with a hammer.

I'm looking for a connection here, but I'm not finding it. I still go with the theory that sunlight kills germs, but my resolution is fading.
posted by mule98J at 5:41 PM on July 9, 2016 [2 favorites]


In my experience, the only thing that never gets removed is ACTUAL hate speech

Oh that's easy, just unfollow your extended family and most of your friends


Or anyone with whom you went to high school...
posted by y2karl at 6:03 PM on July 9, 2016 [1 favorite]


They remove what their advertisers tell them to remove.
posted by 922257033c4a0f3cecdbd819a46d626999d1af4a at 6:57 PM on July 9, 2016


Is there some sort of vanishing point, where the attacks become so sophisticated that malware detection/correction is not possible?

There are specific circumstances where you can prove theorems along these lines, though in reality, there are so many different kinds of signals that one can consider that it's hard to draw a hard line between what is and is not distinguishable.

(For example, there's an area of graph learning called community detection, dedicated to finding highly interlinked clusters with relatively few outgoing links. Like, say, friend groups. There are theorems on how interlinked two groups can be before they become indistinguishable; if you're a spammer setting up a bunch of interlinked fake accounts, you can view this as a target number of connections to make with a group of real people to turn your cluster of spammy accounts into a sleeper cell. In reality, you're rarely looking at graph data in isolation, though.)
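A toy version of that scenario, using networkx purely for illustration:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph()
# A densely linked "real" friend group...
G.add_edges_from([("a", "b"), ("a", "c"), ("a", "d"), ("b", "c"), ("c", "d")])
# ...and a densely linked cluster of fake accounts...
G.add_edges_from([("s1", "s2"), ("s1", "s3"), ("s1", "s4"), ("s2", "s3"), ("s3", "s4")])
# ...with a single bridging friendship to look less like an island of evil.
G.add_edge("a", "s1")

# With only one bridge the two communities still separate cleanly; keep adding
# bridging edges and they eventually become indistinguishable.
print(list(greedy_modularity_communities(G)))
```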
posted by kaibutsu at 8:04 PM on July 9, 2016 [1 favorite]


My guess is, police did not use her cellphone to take down the video, but instead have a direct mechanism to take things down without consulting Facebook in case of an emergency, and chose to use it in this circumstance -- a mechanism which Facebook has probably supplied to governments all over the world as one of the conditions for being allowed to do business.

While it's possible that governments are given this ability, it's beyond my imagination that this ability would be given to local police departments. They aren't afforded that level of power/trust by the federal government. For one thing, it's hard to keep a secret among every police department in the country.

I'll give you an example of what I'm talking about: Once I was illegally detained (for a short period of time, and I fully cooperated so as not to piss them off) by the Chicago police department. They said they were going to check with the FBI to see if I was 'of interest'... I told them that they could feel free to do that, and the FBI wouldn't share anything they had about me with them (and I have a high degree of confidence, backed by some data points, that the FBI has a significant file on me). Sure enough, after a few minutes they let me go and told me that the FBI didn't tell them anything.

Now, it's possible that the local PD called the FBI and said 'we need this taken down pronto', and the FBI helped them out before FB decided that they weren't going to kowtow to the FBI anymore (in this specific incident, because it would look bad for them)... But I doubt the local PD had the power or ability to block an individual FB post.
posted by el io at 9:21 PM on July 9, 2016


I have been consistently successful in getting posts with a swastika or the N-word removed for hate speech. Anything that qualifies as hate speech which is not so blatantly obvious usually gets a pass by Facebook, in my experience. This includes straight up slurs and deliberate misspellings of slurs.
posted by krinklyfig at 2:37 AM on July 10, 2016


Late to the party, but:

German Govt Hires Ex-Stasi Agent To Patrol Facebook For ‘Xenophobic’ Comments

The legality of the so-called "task force" is dubious even under German law. One of the members of this "task force" is Anetta Kahane, who worked for the East German state security.

Akif Pirincci, a German-Turkish writer, called her "an expert in treason who wants to put citizens with unwanted political views into prison".
posted by yoyo_nyc at 7:45 AM on July 10, 2016 [1 favorite]


odinsdream: "Try finding transphobic or homophobic pages and comments.. You'll have no trouble. Now try getting them removed."

Those are exactly the kind of comments I was talking about.

I have no doubt that others have encountered difficulty, but my experience has been overwhelmingly positive compared to other online communities.

[On the other hand, it's basically impossible to get Facebook to remove dog-whistle comments, or anything that isn't 100% direct. But posts containing blatant homophobia and transphobia are generally removed quite rapidly.]
posted by schmod at 3:14 PM on July 10, 2016 [1 favorite]


I should add that I find Facebook's business to generally be SUPER ICKY. But on this one point, they've done well in my anecdotal experience.
posted by schmod at 3:15 PM on July 10, 2016


I also suspect an algorithm responding to multiple reports is at play. The case of extremely offensive content seemingly never getting removed is usually because no one actually hit "report" on it. I try to do so when I see it, and usually it gets removed soon enough. Giving them the benefit of the doubt, we should all do our part and report offensive content, and trust that it'll work.
posted by numaner at 10:53 AM on July 11, 2016


Let me be crystal clear:

I have, personally, reported a lot of offensive content, and it has not been removed. I'm glad other people have been successful, but I do not give them the benefit of the doubt or acknowledge that they are good at removing offensive content.
posted by jeather at 11:08 AM on July 11, 2016




This thread has been archived and is closed to new comments