Clickbait Disinformation
December 11, 2021 12:18 PM

How Facebook and Google fund global misinformation. The tech giants are paying millions of dollars to the operators of clickbait pages, bankrolling the deterioration of information ecosystems around the world.
posted by blue shadows (10 comments total) 19 users marked this as a favorite
 
And to head off a certain tangent I saw elsewhere in discussions of this: there is no functional difference here between intentionally funding misinformation purveyors and not giving a shit who you do business with.
posted by NoxAeternum at 12:52 PM on December 11, 2021 [24 favorites]


Deleting our Facebook accounts and moving our social chats and event planning to Discord is the healthiest thing our group of friends ever did.
posted by xedrik at 1:26 PM on December 11, 2021 [4 favorites]


I'm glad that these journalists are choosing not to call Facebook by another name. Corporations renaming themselves is often itself an act of misinformation, as adopting the new name would be in this and future reporting about Facebook.

I hope stories like these help people shun these companies. There is a lot we can do to push governments across the world to finally regulate information war profiteers, but it starts with people walking away.
posted by They sucked his brains out! at 3:12 PM on December 11, 2021 [3 favorites]


How people of good conscience can continue to use Zuck-owned services is beyond me. In decades to come our grandchildren will look back astonished and puzzled that we were so laissez-faire about a corporation so clearly harmful to democracy and humanity.

I know it's considered poor taste to bring up the Third Reich when talking about present-day evil, but what is the difference between businesses that profited by collaborating with the Nazis, or by turning a blind eye to their horrors, and those who ignore the destruction caused by Zuckerberg's greed, which is there for everyone to see? At the moment the biggest difference is numbers. Give him time.

It disgusts me that my friends use these services, but that celebrities, politicians, the White House, and so many others who claim to care about the planet and its people so willingly line the man's pockets is unconscionable.
posted by dobbs at 3:16 PM on December 11, 2021 [9 favorites]


From a related article, linked from the one in the OP:
Facebook ... regularly trots out various leaders to speak to the media about the ongoing reforms. In May of 2019, it granted a series of interviews with [CTO] Schroepfer to the New York Times, which rewarded the company with a humanizing profile of a sensitive, well-intentioned executive striving to overcome the technical challenges of filtering out misinformation and hate speech from a stream of content that amounted to billions of pieces a day. These challenges are so hard that it makes Schroepfer emotional, wrote the Times: “Sometimes that brings him to tears.”
It's not like nobody warned him.

Well past time for us to put on the collective blue robe and eyebrow trick, methinks.
posted by flabdablet at 7:51 PM on December 11, 2021


How people of good conscience can continue to use Zuck-owned services is beyond me. In decades to come our grandchildren will look back astonished and puzzled that we were so laissez-faire about a corporation so clearly harmful to democracy and humanity.

I know it's considered poor taste to bring up the Third Reich when talking about present-day evil, but what is the difference between businesses that profited by collaborating with the Nazis, or by turning a blind eye to their horrors, and those who ignore the destruction caused by Zuckerberg's greed, which is there for everyone to see? At the moment the biggest difference is numbers. Give him time.


I hope you're not typing that from an IBM then. Or driving a car that runs on petrol. Or drinking Coke. Or any of these brands. Etc, etc.

You don't understand how anyone could use a Zuck product, but you doubtless engage in all kinds of consumption behaviour that is just as suspect, viewed through the same lens. Does that make you a horrible person? No, it makes you a person who has to exist in the society you're in. Maybe it's feasible for you to give up, say, all meat and every mass-produced Nestlé- or Coca-Cola-owned brand in your life. Maybe it isn't, for whatever reason, and that reason is not for me or anyone else to second-guess. Either way, you should probably extend the same courtesy to the humans around you who, for reasons you may not see or fully comprehend, are not willing or able to give up Facebook.

There is no ethical consumption under capitalism. The problem is the people who do the bad things, not everyone who has ever done business with them. Go after Facebook, not its users.
posted by Dysk at 9:04 PM on December 11, 2021 [20 favorites]


There is no ethical consumption under capitalism.

My breakfast of scrambled eggs from backyard chooks fed mostly on the kind of kitchen scraps that most households are still sending to landfills comes close, though.
posted by flabdablet at 9:27 PM on December 11, 2021 [2 favorites]


From that same related article:
In 2014, Kaplan was promoted from US policy head to global vice president for policy, and he began playing a more heavy-handed role in content moderation and decisions about how to rank posts in users’ news feeds. After Republicans started voicing claims of anti-conservative bias in 2016, his team began manually reviewing the impact of misinformation-detection models on users to ensure—among other things—that they didn’t disproportionately penalize conservatives.

All Facebook users have some 200 “traits” attached to their profile. These include various dimensions submitted by users or estimated by machine-learning models, such as race, political and religious leanings, socioeconomic class, and level of education. Kaplan’s team began using the traits to assemble custom user segments that reflected largely conservative interests: users who engaged with conservative content, groups, and pages, for example. Then they’d run special analyses to see how content-moderation decisions would affect posts from those segments, according to a former researcher whose work was subject to those reviews.

The Fairness Flow documentation, which the Responsible AI team wrote later, includes a case study on how to use the tool in such a situation. When deciding whether a misinformation model is fair with respect to political ideology, the team wrote, “fairness” does not mean the model should affect conservative and liberal users equally. If conservatives are posting a greater fraction of misinformation, as judged by public consensus, then the model should flag a greater fraction of conservative content. If liberals are posting more misinformation, it should flag their content more often too.

But members of Kaplan’s team followed exactly the opposite approach: they took “fairness” to mean that these models should not affect conservatives more than liberals. When a model did so, they would stop its deployment and demand a change. Once, they blocked a medical-misinformation detector that had noticeably reduced the reach of anti-vaccine campaigns, the former researcher told me. They told the researchers that the model could not be deployed until the team fixed this discrepancy. But that effectively made the model meaningless. “There’s no point, then,” the researcher says. A model modified in that way “would have literally no impact on the actual problem” of misinformation.
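To make concrete the difference between those two readings of "fairness", here's a toy sketch in Python. The numbers are made up purely for illustration; nothing below comes from Facebook's actual data, code, or systems.

# Two notions of "fairness" for a misinformation detector.
# All numbers are hypothetical, purely for illustration.

posts   = {"group_a": 1000, "group_b": 1000}  # posts sampled per group
misinfo = {"group_a": 200,  "group_b": 50}    # actual misinformation, as judged by public consensus

# Fairness Flow's notion: a well-calibrated detector flags misinformation
# wherever it occurs, so per-group flag counts track per-group base rates.
calibrated_flags = {g: misinfo[g] for g in posts}

# The "equal impact" notion Kaplan's team enforced: every group must be
# flagged at the same rate, which in practice means throttling the detector
# down to the lowest group's base rate.
min_rate = min(misinfo[g] / posts[g] for g in posts)
equal_impact_flags = {g: round(min_rate * posts[g]) for g in posts}

for g in posts:
    print(f"{g}: base rate {misinfo[g] / posts[g]:.0%}, "
          f"calibrated flags {calibrated_flags[g]}, "
          f"equal-impact flags {equal_impact_flags[g]}")

# group_a: base rate 20%, calibrated flags 200, equal-impact flags 50
# group_b: base rate 5%, calibrated flags 50, equal-impact flags 50

Under "equal impact", 150 of group_a's 200 pieces of misinformation go unflagged just so the flag rates match, which is exactly the sense in which the researcher above says such a model "would have literally no impact on the actual problem."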
Members of the reality-based community have understood for some time that conservative claims of bias against them are almost always bad-faith misrepresentations intended to discredit watchdogs and whistleblowers who publicly call out barefaced horseshit for what it is. In other words, these claims are themselves disinformation.

So what we see here is a senior Facebook exec actively promoting disinformation as a matter of official company policy.
posted by flabdablet at 10:12 PM on December 11, 2021 [11 favorites]


From the article linked in the OP:
Google confirmed that the behavior violated its policies and terminated all of the YouTube channels MIT Technology Review identified as spreading misinformation. “We work hard to protect viewers from clickbait or misleading content across our platforms and have invested heavily in systems that are designed to elevate authoritative information,” YouTube spokesperson Ivy Choi said.
And what of the clickbait misinformation channels not quickly found by journalists about to publish critical articles? Where is YouTube's well-promoted process for having those flagged for moderator review and rapid removal?
posted by flabdablet at 10:19 PM on December 11, 2021 [2 favorites]


And what of the clickbait misinformation channels not quickly found by journalists about to publish critical articles? Where is YouTube's well-promoted process for having those flagged for moderator review and rapid removal?

Well, YouTube does have a "Report" option under the "..." menu, where you can then flag the content for a range of issues. I agree that maybe it should be a bit more prominent.

The real question is, once a normal viewer flags content, how quickly (if at all) does YouTube/Google respond?
posted by coberh at 10:41 PM on December 11, 2021 [1 favorite]



