Award-winning reporter and Rappler co-founder Maria Ressa Q&A
September 29, 2019 7:54 AM

“Facebook Broke Democracy in Many Countries around the World, Including in Mine” In a Q&A, the Manila-based journalist discusses how Silicon Valley has “forever changed” our societies — and what can be done to stop hate spreading faster than facts
posted by Mrs Potato (34 comments total) 20 users marked this as a favorite
 


I’m just going to throw it out there that 10 or so years ago there were a number of people praising Silicon Valley on the blue as being able to fix all of society’s ills. Over and over again, it was written on this very site that we no longer needed gatekeepers. There were militant anti-mainstream-media rants on the regular. As a journalist and someone who worked in digital journalism, I got tired of fighting with people and left MeFi for a long time. I don’t even know where I’m going with this, but it got tiring to hear how the gatekeepers, i.e. editors, writers, etc., were part of the problem. How’d that all work out, I wonder....
posted by photoslob at 9:33 AM on September 29, 2019 [33 favorites]


Oh this is horrific. Thanks for posting. Much admiration for Maria Ressa.
If you don’t have facts, you can’t have truth. If you don’t have truth, you don’t have trust. If you don’t have these two things, you don’t have democracy. You have no ability to have any kind of civic engagement — your society cannot hold power to account, and the voice with the largest megaphone wins, which in our country is President Duterte. And that’s exactly what played out.

... With Instant Articles, Facebook became the largest distributor of news globally, but it left behind the responsibility of the gatekeeping role. The most important part of what we do as journalists is determining facts from fiction. [Facebook] threw news into the same algorithms that were designed to appeal to the worst of human nature, the kinds of things we’d sometimes even hide from ourselves. These algorithms were designed to keep you on-site for as long as possible. You’ve seen all of these studies already — for YouTube, it pushes you further to radicalization; for Facebook, lies laced with anger and hate spread faster than facts.
posted by spamandkimchi at 9:57 AM on September 29, 2019 [7 favorites]


It is past time for the US to step in and enforce anti-trust regulations with regard to Silicon Valley. The effective monopolization of services under individual corporations gives companies undue power and puts them in the position of governance. So, after coordinated campaigns to manipulate elections or even promote genocide, we're stuck begging private companies to tweak their algorithms or improve their content moderation. But there are few to no consequences if companies fail to act or do so ineffectively, even as their properties are used to persecute individuals and groups, and to unravel truth itself. Facebook should not be the primary distributor of news. Setting aside its retail business, Amazon should not control half of cloud computing. Google should not have a stranglehold on advertising funds, or be able to punish sites for not using AMP. The security and future of our democracies should not be left in the hands of a handful of tech firms.
posted by evidenceofabsence at 10:36 AM on September 29, 2019 [9 favorites]


I remember when I first saw TheFacebook back when it was still called that, and it was still invite only for EDU email addresses at specific schools.

My first thought was "ooo, this isn't going to end well, is it?" and it's why I never, ever joined. I didn't have any idea how bad it would get.
posted by loquacious at 10:55 AM on September 29, 2019 [4 favorites]


I'm also very weary of, frustrated with, and tired of the "but I need Facebook" arguments from activists and community organizers and the like. I don't want to rehash those arguments now, because we've had them.

I'm not sure if that convenience can really be justified at all any more, especially for activism and local community building and work.
posted by loquacious at 10:58 AM on September 29, 2019 [13 favorites]


The same platforms that enabled Duterte and other horrifying authoritarians to shape public opinion also enabled a teenager to go from a solitary protest in front of the Swedish Parliament to sparking and leading a powerful global activist movement in barely a year.
posted by PhineasGage at 11:35 AM on September 29, 2019 [7 favorites]


Part of why Facebook doesn't have to care about the truth is because we've indemnified them from the repercussions of promoting falsehoods. It's long since past time that we started taking a serious look at Section 230, because the only way you're going to get these companies caring about the truth is to give them material consequences for not doing so.
posted by NoxAeternum at 11:36 AM on September 29, 2019 [2 favorites]


The same platforms that enabled Duterte and other horrifying authoritarians to shape public opinion also enabled a teenager to go from a solitary protest in front of the Swedish Parliament to sparking and leading a powerful global activist movement in barely a year.

Yes, and?

We can have the latter without the former - but that requires these platforms to actually care about the truth.
posted by NoxAeternum at 11:37 AM on September 29, 2019 [11 favorites]


Social media should be regulated like TV, radio, and newspapers, which are all media that have absolutely no problems with right-wing misinformation and propaganda.
posted by Pyry at 12:03 PM on September 29, 2019 [6 favorites]


Who exactly shall we appoint as arbiters of truth? The U.S. Federal Elections Commission is moribund. The Federal Communications Commission is actively worse. The U.S. Supreme Court is slip-sliding in a direction I am pretty certain no one here thinks is positive. Why would any other powerful body not be subject to the same risks of ineffectualness or actively moving to the dark side?
posted by PhineasGage at 12:17 PM on September 29, 2019 [1 favorite]


Sure, every gatekeeping body is a potential tyrant - but now that we know that no gatekeepers leads to a different kind of tyrant, it looks like we need to work harder to empower and control public bodies for the public good.

Which we knew we had to anyway.
posted by clew at 1:10 PM on September 29, 2019 [6 favorites]


Social media should be regulated like TV, radio, and newspapers, which are all media that have absolutely no problems with right-wing misinformation and propaganda.

You can sue Fox News for defamation, which does put some constraints on their behavior, as small as they may be. Facebook is more or less legally indemnified for allowing defamatory material to remain on their service.

Just because one platform has problems doesn't mean that the problem on another isn't worse.
posted by NoxAeternum at 2:31 PM on September 29, 2019 [1 favorite]


it got tiring to hear how the gatekeepers, i.e. editors, writers, etc., were part of the problem
To be fair, I think mass journalism has a gatekeeping problem. I'm not saying this in defense of Facebook -- in fact, maybe gatekept mass journalism is the "least bad" alternative in our current media landscape. But I think it's valuable that silenced minorities, and other voices that go against the deep pockets of mass media's advertising patrons, get a soapbox that can reach as far and wide as mass journalism alone once did.

Now, addressing the idea, from the people you were debating, that Facebook is some sort of solution to the gatekeeping problem of mass journalism... I know Facebook brands itself as a distributed network of people being spontaneous around their friends and loved ones, and it has all the trappings of this idea of a small world where everybody has a voice, but it behaves more like the anarcho-capitalist fever dream of mass journalism, with its laissez-faire approach to content farming and its two-sided attention marketplace in lieu of a proper newsroom-sales firewall.

The fact that there is no "editor" role in Facebook's headcount doesn't mean UX, design, data science and other teams aren't hard at work gatekeeping and fostering just the right content to extract the most "engagement" and sell the most ads. When someone talks about the gatekeeping problem of mass journalism and mentions the web as a kind of a partial solution to this problem, I think about blogs, podcasts and forums, not about Facebook.
posted by rufb at 3:04 PM on September 29, 2019 [4 favorites]


"Social media giant" is a funny old phrase, isn't it?
posted by Made of Star Stuff at 3:32 PM on September 29, 2019


I don’t see how breaking Facebook and YouTube into pieces via anti-trust laws solves the problem.

I don’t see how monitoring their content from outside is possible either.

I suppose MeFi is a good example of moderation at work. Applying that model to 1+ billion users all communicating at each other is hopeless.

I know there’s supposed to be a punch line about what the real answer is. I don’t know what that is. But wanting, or even needing, an answer does not mean there is one.
posted by argybarg at 5:01 PM on September 29, 2019 [4 favorites]


Sure is cool that there's just nothing we should even try with a company that's literally facilitated genocides. Probably shouldn't even sternly talk to Zuckerberg; might hurt his feelings.
posted by Uncle at 5:12 PM on September 29, 2019 [3 favorites]


If you were Mark Zuckerberg, what would you do?
posted by argybarg at 5:17 PM on September 29, 2019 [1 favorite]


In these conversations, the one thing nobody seems to mention is China.

Is it possible to use algorithms and lots of human moderation to control a huge social media environment? YES. CHINA DOES IT ALL DAY EVERY DAY. They do it for horrible authoritarian purposes, but ffs yes you CAN screen and moderate every post and have human verification of all social media content; that is exactly how the largest national internet on the planet is operated, and the companies involved still make massive profits and create tens of thousands (possibly hundreds of thousands, nobody knows) of jobs. Ask your local teenager about all the things you can't post on TikTok.

How do we have this debate and not point at China as an example of the fact that moderation is possible?
posted by saysthis at 5:58 PM on September 29, 2019 [6 favorites]


Do we know that Chinese moderation is effective? Or is it a blunt instrument with many failings and exploits and loopholes?
posted by argybarg at 6:54 PM on September 29, 2019 [1 favorite]


Do we know that Chinese moderation is effective? Or is it a blunt instrument with many failings and exploits and loopholes?


It's both. As far as I know, there have been no studies of its effectiveness at stopping harassment, promoting democracy, preventing genocide, curbing disinformation, or any other purpose we would consider worthwhile (although the case of school stabbings being kept out of the news in mainland China to prevent copycat attacks is one example unrelated to state narrative control, off the top of my head; I think there were some studies, but I wouldn't know where to find them). It isn't built for that.

What is certain is that it operates under a central authority that delivers detailed instructions every day, and that tens of thousands, if not hundreds of thousands, of censors screen posts on every network, right down to Wechat group chats (Wechat is as ubiquitous in mainland China as Facebook + WhatsApp and has 800+ million users), remove offending material within minutes, and refer egregious offenders for prosecution within 12 hours, because you can't have an account on the internet if you don't register your real name, phone number, and ID. It is also certain that keyword flags, image recognition algorithms, and other automated tools routinely screen networks for offending content and flag it for human moderation. All of these systems, except the police who come to your door to arrest you, are provided as a legally mandated cost of business by private companies.
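The machinery itself is mundane. Purely as a hypothetical sketch (the names and patterns are mine, not any company's actual code), automated keyword flagging feeding a human review queue looks roughly like this in Python:

import re
from collections import deque

# Hypothetical keyword list: real systems take updated lists from a
# central authority daily and layer ML classifiers and image
# recognition on top of plain pattern matching.
BANNED_PATTERNS = [re.compile(p, re.IGNORECASE) for p in (
    r"\bbanned_topic\b",
    r"\bforbidden_event\b",
)]

review_queue = deque()  # flagged posts awaiting a human moderator

def screen_post(user_id, text):
    """Return True if the post publishes immediately; otherwise
    withhold it and queue it for human review."""
    if any(p.search(text) for p in BANNED_PATTERNS):
        review_queue.append((user_id, text))
        return False
    return True

def review_next():
    """A human moderator decides: restore, delete, or refer the
    account for follow-up (feasible because accounts are tied to
    real names and phone numbers)."""
    if review_queue:
        user_id, text = review_queue.popleft()
        print(f"needs human review: user={user_id} text={text[:40]!r}")

print(screen_post("u1", "dinner photos"))            # True: published
print(screen_post("u2", "news about banned_topic"))  # False: withheld
review_next()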

The actual effectiveness of the system is obviously not something the Chinese government will release anytime soon, but as someone who has lived in the system, I can tell you I've watched posts in group chats vanish, I've had Weibo posts deleted seconds after posting, I've had innocuous Tantan profile text deleted minutes after updating (Tantan = mainland Tinder - even Tantan has mods!), and I've basically never seen any content on mainland China sites and networks that goes against the state narrative.

It's human moderation on a massive scale that works for their purposes. The notion that mass-scale moderation is unfeasible is false.
posted by saysthis at 7:28 PM on September 29, 2019 [8 favorites]


But moderation in service of WHAT? The Chinese government's censorship, er, moderation cannot possibly be something any of us would like to see here. Where does this fantasy come from that some governmental body would censor, er, limit right wing views and comments while amplifying progressive viewpoints?!?
posted by PhineasGage at 8:24 PM on September 29, 2019 [5 favorites]


But moderation in service of WHAT? The Chinese government's censorship, er, moderation cannot possibly be something any of us would like to see here. Where does this fantasy come from that some governmental body would censor, er, limit right wing views and comments while amplifying progressive viewpoints?!?


In service of what is a very good question.

China does so in the name of "social stability", which in practice means keeping the current government in power. Beyond simply blocking unfavorable true news, this also means eliminating unfavorable fake news, and when you consider how many people dislike the CCP and how prolific Chinese scammers are, it stands to reason that they're likely also holding back a tidal wave of disinformation.

Facebook does so with "1,000 people like Chloe moderating content for Facebook at the Phoenix site, and for 15,000 content reviewers around the world" (The Verge), so we already have 15,000 "censors" on record as employed by just one social media giant in the service of "keeping Facebook legal", which mostly means deleting obscene content, nudity, images of drugs, animal abuse, etc.

Personally, I would prefer these teams of people work to combat disinformation and bullying, especially since they already exist and are proven to work, more or less perfectly. You don't find nudity on Facebook. I am not the king of the world, and I can't make them do what I want. I think an interesting detail of the Chinese model is that they have Party-approved training courses for censors, and mandatory Party cells in management at every major company. I could see a parallel institution in American networks being something like a "First Amendment Compliance Department" involved with actively managing content to ensure maximum expressiveness while eliminating hate speech and disinformation, as well as a government bureau like the FCC and "speech courts" at the top of that apparatus, and regular public reporting of "this is what we blocked and why" a la Twitter's Removal Requests page. I don't know how it would work. I do know the journalist in the FPP is right about disinformation, but is also only telling half the story.

No discussion of disinformation and hate speech on the internet is complete without acknowledging that social media information management is real, effective in certain contexts, proven to work at scale, and ongoing even on Facebook et al. They could shut down all the disinformation tomorrow if they tried, and we could make them. We choose not to, despite knowing how Facebook directly enabled the Syrian & Rohingya refugee crises and the Cambodian election lockdown and Donald Trump's election interference and...there's a very, very long list. Why?
posted by saysthis at 9:44 PM on September 29, 2019 [2 favorites]


This is where we disagree: "They could shut down all the disinformation tomorrow if they tried." Who gets to define disinformation is the core issue.
posted by PhineasGage at 10:41 PM on September 29, 2019


How many people would it take to moderate, in real time, the comments of billions of people in hundreds of languages? According to whose standards of decency? By whose cultural norms?

Would they moderate private messages? Comments in private groups?

Would “Trump is ridiculous and his behavior is harmful” be acceptable? Would “homophobes can kiss my ass” be censored?

This is insanity you’re proposing. There’s a reason you’re the only one here who wants to take the Chinese model of censorship seriously.
posted by argybarg at 10:47 PM on September 29, 2019 [5 favorites]


I'm seeing people say "who gets to decide", so I'll make one more comment and then bow out of the thread.

From the FPP: A lie told a million times becomes a fact; that’s the reality of social media, and that’s what authoritarian-style rulers around the world are taking advantage of.

We know this is true, and we know it's dangerous. We also know, from the limited examples of Twitter/Facebook/Google efforts to fight hate speech and state-level disinformation campaigns, that active moderation, bans of bot networks, and content labeling can be effective. We also know, from China's example, that far more invasive moderation can effectively shape public discourse and opinion.

How many people would it take to moderate, in real time, the comments of billions of people in hundreds of languages?
Would they moderate private messages? Comments in private groups?


We know the answers to these questions from the example of China, and from the example of Facebook's and Twitter's own moderation teams. I make no claims about the "should", but they are possible, up to and including private messages and groups, because Wechat and QQ don't have end-to-end encryption, by design (which creeps me out).

It would take 50,000-100,000+, going by estimates I've seen of people employed as censors in China, aided by big data, algorithms, automated keyword flagging, etc. The article I linked above about Facebook moderators in the US states that they make about $28,000 a year (which is too low, given the trauma they deal with; the article goes into that), and I've seen other articles about "Facebook moderation farms" in other countries where they make much less. In China, I'd say $14,000 a year is a reasonable estimate for that salary. China has 750 million internet users and moderates in Chinese, English, Uighur, Tibetan, and dozens of other languages. Wechat actually just passed 1 billion monthly active users. The article about Wechat encryption above says Tencent's net profit in 2016 was $6.2 billion, and if we assume, on the high end, that Tencent employs, say, 30,000 censors at $14,000, we can estimate it spends $420 million USD/year on this. Tencent's 1 billion active users is about 35% of Facebook's 2.7 billion active users, and Facebook's 2018 net income was $22.1 billion. If we assume $40,000/year to hire 90,000 full-time moderators (let's give our imaginary moderators a raise), we get $3.6 billion, as sketched below.
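Running my own assumptions through the arithmetic (every input here is a guess from the paragraph above, not a reported figure):

# Back-of-envelope moderation costs; all inputs are assumptions.
tencent_censors = 30_000      # high-end guess
tencent_salary_usd = 14_000   # guessed China moderator salary/year
tencent_cost = tencent_censors * tencent_salary_usd
print(f"Tencent: ${tencent_cost / 1e6:.0f}M/year")                      # $420M
print(f"  share of 2016 net profit ($6.2B): {tencent_cost / 6.2e9:.1%}")

facebook_moderators = 90_000  # scaled up for 2.7B active users
facebook_salary_usd = 40_000  # our imaginary moderators' raise
facebook_cost = facebook_moderators * facebook_salary_usd
print(f"Facebook: ${facebook_cost / 1e9:.1f}B/year")                    # $3.6B
print(f"  share of 2018 net income ($22.1B): {facebook_cost / 22.1e9:.1%}")

On those assumptions, Tencent's bill is under 7% of its 2016 profit and Facebook's under 17% of its 2018 income.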

It's possible, and it's much cheaper than you think, and they can afford it. The United States could feasibly set up real-time moderation of all social media, affordably, in a very short period of time. Snowden revealed that the NSA did almost exactly that, minus the huge staff of moderators, in the early 2000s.

This is where we disagree: "They could shut down all the disinformation tomorrow if they tried." Who gets to define disinformation is the core issue.


According to whose standards of decency? By whose cultural norms?
Would “Trump is ridiculous and his behavior is harmful” be acceptable? Would “homophones can kiss my ass” be censored?


argybarg's questions are spot on, because right now, the decision-makers are Facebook/Twitter/Youtube, the Chinese government, other unelected and opaque bodies, or armies of stochastic terrorists, botnets, and liars that drown good information in bad. Obviously, the Chinese censorship regime is awful, but it's primarily because of what it censors, not how (although again, that lack of end-to-end encryption is creepy af). Currently, the people who decide what to censor on Facebook/Twitter/Youtube are the moderation teams at Facebook/Twitter/Youtube, and they use the same methods China uses...and I would much rather they both be accountable to the public, and a lot more proactive about fighting obvious disinformation. As clew said,

Sure, every gatekeeping body is a potential tyrant - but now that we know that no gatekeepers leads to a different kind of tyrant, it looks like we need to work harder to empower and control public bodies for the public good.

Which we knew we had to anyway.


I'm suggesting that we need better gatekeepers. While we certainly can make efforts that don't involve active real-time moderation of everything on the internet, the fact is that we already have active real-time software moderation of everything posted to Facebook, and active flagging and takedowns of thousands of posts and videos per day on Facebook/Twitter/Youtube, and they're not going to stop. So we may as well accept that we kind of have a model for what works if we don't want to end up like the Philippines (hi Trump/Brexit/Bolsonaro), but it will have to be subject to a lot of democratic oversight if we don't want to end up like China. We can do a lot better than we are.
posted by saysthis at 12:50 AM on September 30, 2019 [3 favorites]


We have built a machine that can delight you! It may also destroy you. You can connect with long lost friends! You can be manipulated at a sub-conscious level. It has photos of babies! Hillary is Satan.
posted by zerobyproxy at 7:32 AM on September 30, 2019 [1 favorite]


"Where does this fantasy come from that some governmental body would censor, er, limit right wing views and comments while amplifying progressive viewpoints?!?"

Hell, where's the fantasy come from that the governmental body wouldn't censor detailed discussion and criticism of the moderation?
posted by Selena777 at 9:34 AM on September 30, 2019 [1 favorite]


It's long since past time that we started taking a serious look at Section 230

Nothing in Section 230 prohibits Facebook from doing a much better job of moderating content. When not moderating hits their bottom line, either by consumer or govt. action, they'll change policies.

I'm sceptical govt. can enact anything that won't just make the overall problem worse, but I'll listen to ideas.
posted by COD at 3:44 PM on September 30, 2019


Nothing in Section 230 prohibits Facebook from doing a much better job of moderating content. When not moderating hits their bottom line, either by consumer or govt. action, they'll change policies.

Which is the point. Nothing in Section 230 prohibits them from doing a better job, sure - but it also doesn't require them to do better, because Section 230 grants them legal indemnification from any sort of monetary penalty for allowing defamatory material to remain on their service. If you want to actually hit Facebook's bottom line over their lack of moderation, then you're going to have to roll back Section 230's blanket immunity.
posted by NoxAeternum at 3:54 PM on September 30, 2019 [1 favorite]


"Anti-authoritarian censorship regime" is kinda an oxymoron.
posted by save alive nothing that breatheth at 6:12 PM on September 30, 2019


So, it came out today that in an internal employee meeting, Mark Zuckerberg threatened to "go to the mat" in a legal fight if Elizabeth Warren is elected and tries to break up Facebook. As Vox's Matt Yglesias noted:
If you work at Facebook and have a conscience, note that when Facebook takes criticism from conservative politicians, Zuckerberg tries to change things up to appease critics whereas when it takes criticism from progressive politicians he vows to “go to the mat” to fight them.

A good question for him is whether in addition to using his billions of dollars and vast army of lawyers to try to win political battles, would he use Facebook’s control over news distribution to try to slant election results in favor of candidates he likes better? Why not?

As Zuckerberg said, Warren winning would cost him a lot in legal fees.

Trump, by contrast, boosts profits with corporate tax cuts.

Does he have an obligation to shareholders to help spread fake news that helps him beat her?
posted by NoxAeternum at 9:29 AM on October 1, 2019 [1 favorite]


So here's a great example of how hard it is to legislate/mandate/moderate "truth."

"How an industry ‘environmental’ group helped defeat California’s plastics crackdown."
Supporters of the bills said the group’s arguments were examples of how the plastics industry worked, often in misleading ways, to kill the recycling effort last month.

[but]

[chemical company VP public affairs...] said criticism of the effort is unfounded because Novolex is a “California company,” with four manufacturing plants in the state, and shares Allen’s goal of wanting to increase recycling.
posted by PhineasGage at 3:28 PM on October 1, 2019


I'm only back to post this

The old rules prohibited all ads that contained "false" and "misleading" content and made no mention of the fact-checking program. The new rules are limited to claims that are "debunked by third-party fact checkers."

Moreover, Facebook says "political figures" are exempt from even that narrow restriction.

It's open season.


We need better gatekeepers.
posted by saysthis at 8:04 AM on October 5, 2019



