"Hatred is a constant underlying theme"
May 22, 2021 7:57 AM

TikTok’s recommendation algorithm is promoting homophobia and anti-trans violence: TikTok’s recommendation algorithm appears to be circulating blatantly anti-LGBTQ videos, some of which encourage targeted violence. This isn’t the first time TikTok’s opaque algorithm has been caught recommending far-right content, including accounts promoting dangerous movements.
posted by plant or animal (47 comments total) 18 users marked this as a favorite
 
I think the story here isn’t the algorithm as such - “site gives you more of what you actively say you like” is hardly surprising - it is more that the content itself is on there, and is very difficult to get taken down.

My TikTok is the exact opposite of that described: lots of supportive, inclusive, ally content, but of course that’s based on what I like. My concern is that when something hateful appears, reporting it does very little: very often the most homophobic content will be found not to contravene their “community guidelines”.
posted by bonaldi at 8:26 AM on May 22, 2021 [36 favorites]
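
For illustration, bonaldi's "more of what you actively say you like" mechanism can be captured in a toy simulation. This is a made-up model, not TikTok's actual system; the topic names, weights, and like probability are all invented, and the only point is how quickly a like-driven loop skews a feed:

```python
import random

# Toy feedback loop: the feed starts uniform over topics, but every like
# multiplies that topic's weight, skewing all future recommendations.
topics = ["pets", "crafts", "politics", "hate"]
weights = {t: 1.0 for t in topics}

def next_video():
    return random.choices(topics, weights=[weights[t] for t in topics])[0]

def register_like(topic):
    weights[topic] *= 1.5  # each like compounds the skew

# A user who likes one topic only half the time it appears still ends up
# with a feed dominated by it after a few hundred videos.
for _ in range(300):
    video = next_video()
    if video == "hate" and random.random() < 0.5:
        register_like(video)

print(weights)  # the "hate" weight dwarfs the others
```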


Clearly, the content graph knows to group together anti-lgbtq content. The recommendation engine seems to work just fine to amplify and grow that type of hate. It's not a classification issue as such. The engine could be tuned to instead de-escalate such content, but curbing hate is not a priority.
posted by tigrrrlily at 8:32 AM on May 22, 2021 [14 favorites]


I don’t see this content on TikTok, probably because the algorithm has determined that I don’t want to see it. What I do see is LGBTQ+ people in my feed saying that they are being attacked by right-wingers or that mass mis-reporting of one of their videos got the video taken down, or they’ve somehow wound up on the “wrong side” of TikTok and are being constantly harassed.

I’ve only been on for a few weeks and my first impression is that there are a lot of great people on there making great videos, and a ton of trolls and right-wingers determined to make everyone else miserable.
posted by Ampersand692 at 8:33 AM on May 22, 2021 [14 favorites]


It seems like the same logic as Facebook's: the worst people and groups have the best engagement numbers, so let's onboard as many people into those groups as possible.
posted by Ferreous at 8:35 AM on May 22, 2021 [26 favorites]


Clearly, the content graph knows to group together anti-lgbtq content

I very much doubt that the algorithm realizes the content is anti-lgbtq and puts it in that group. Most likely it's just making an enormous number of “statistically, people who like A seem to like B” type links between things, and there aren't really even distinct or discrete clusters.

I don’t mean to say this to absolve TikTok of the bad things people are doing on their platform, or even to say that I have any idea whether they’re making any kind of effort at all, just that they probably don’t really have the sort of automatic “this stuff is bad” classifier that would make it super easy for them to keep a lid on this stuff.
posted by aubilenon at 8:45 AM on May 22, 2021 [6 favorites]
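
aubilenon's "people who like A seem to like B" linking is essentially item-item collaborative filtering. A minimal sketch, with made-up users and video IDs, shows how "related" content emerges from pure co-occurrence, with no labels and no discrete clusters anywhere:

```python
from collections import defaultdict
from itertools import combinations

# Each user's like history; the system never sees what a video is "about".
likes = {
    "u1": {"a", "b", "c"},
    "u2": {"a", "b"},
    "u3": {"b", "c", "d"},
}

# Count how often each pair of videos is liked by the same user.
co_counts = defaultdict(int)
for history in likes.values():
    for x, y in combinations(sorted(history), 2):
        co_counts[(x, y)] += 1

def related(video, k=3):
    """Videos most often co-liked with `video`: the association is purely
    statistical, not any understanding of the content."""
    scores = defaultdict(int)
    for (x, y), n in co_counts.items():
        if video == x:
            scores[y] += n
        elif video == y:
            scores[x] += n
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(related("a"))  # ['b', 'c'] -- linked only because the same users liked both
```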


I very much doubt that the algorithm realizes the content is anti-lgbtq and puts it in that group.
I don't think that's the issue that tigrrrlily is raising. Of course they don't have a 'let's find fascists and group them' algorithm.
What they do have, as does FB, is an algorithm that by default ends up radicalizing any group.
One of the reasons I left FB was that I realized it had shown me so much BLM content that I was ready to go start active resistance during a COVID peak. I AM a staunch pro-BLM person, but I don't need a machine algorithm guiding me to further heights of anger.
These companies need to face the issue and build in a check on overstimulation of groups, or they're going to keep building anti-queer hate, racism, and yes, maybe push those of us with (clearly correct!) social justice values into doing something stupid.

posted by Flight Hardware, do not touch at 9:18 AM on May 22, 2021 [26 favorites]


Algorithms don't "realize" anything. But if TikTok knows to give you more anti-lgbtq content if you've told it that you like one instance of anti-lgbtq content, then there is an algorithmic handle on the unifying theme of anti-lgbtq content that a human operator could go in and use to de-emphasize that content. If they cared to.
posted by tigrrrlily at 9:26 AM on May 22, 2021 [10 favorites]
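
If tigrrrlily is right that the association graph itself is the handle, one can sketch how an operator might use it: seed with a handful of human-flagged videos, then walk the recommender's own "liked together" links outward and demote everything nearby. This is hypothetical; the video IDs and penalty values are invented, and `related` stands in for the platform's existing lookup (it could be the co-occurrence function sketched above):

```python
def demote_neighbors(related, flagged_seeds, hops=2, penalty=0.1):
    """Return a score multiplier for every video within `hops` association
    steps of a human-flagged seed; a ranking layer would multiply by it."""
    demoted = {v: penalty for v in flagged_seeds}
    frontier = set(flagged_seeds)
    for _ in range(hops):
        frontier = {n for v in frontier for n in related(v)} - demoted.keys()
        demoted.update((n, penalty) for n in frontier)
    return demoted

# Demo with a stub association graph (made-up video IDs):
graph = {"vid1": ["vid2"], "vid2": ["vid3"], "vid3": [], "cats": ["dogs"]}
print(demote_neighbors(lambda v: graph.get(v, []), {"vid1"}))
# {'vid1': 0.1, 'vid2': 0.1, 'vid3': 0.1} -- unrelated "cats" is untouched
```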


there is an algorithmic handle on the unifying theme of anti-lgbtq content that a human could go in and use to de-emphasize that content

Or, you know, delete and ban, which would be the correct approach. Have we all been conditioned to accept this bullshit to the point where all we can ask for is for it not to be actively encouraged, rather than burned with fire?

Delete the anti-LGBTQ content (and all other forms of hate) and ban the accounts originating and sharing it.
posted by Dysk at 9:31 AM on May 22, 2021 [34 favorites]


You can be 100% assured that whatever TikTok is doing is either tacitly endorsed, or specifically approved, by the Chinese PTB. You can also be 100% assured they have extremely fine-grained control over what their algorithm recommends. It's not a black box at all at the switches-and-levers level. If they wanted to kill a particular type of content, you can bet it would be gone. If they haven't killed something, they've decided it's net neutral or positive to Chinese interests.

China has some of the best image and video classifiers in existence, and they're being trained and refined with everything that's being put onto TikTok. There's metadata about everything in every video, including versions of every bit of readable text and every word that's spoken, facial recognition and identification of every visible face, and almost certainly keywords about what people are physically doing in the videos.

I mean, that's what I'd do in their situation with infinite storage and CPU available, and I'm not even remotely as paranoid as the CCP.

If it's on TikTok, it's been reviewed as not harmful to Chinese government interests. Anything else is fair game.
posted by seanmpuckett at 9:53 AM on May 22, 2021 [21 favorites]
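
What seanmpuckett is describing is, in effect, a per-video metadata indexing pipeline. Purely as a shape sketch: every extractor below is a stub returning empty results, and none of this is a real TikTok (or any vendor's) API; it only shows the plumbing such a system would need.

```python
from dataclasses import dataclass
from typing import List

def transcribe_audio(path: str) -> str:        # stub where ASR would run
    return ""

def ocr_frames(path: str) -> List[str]:        # stub for on-screen text OCR
    return []

def identify_faces(path: str) -> List[str]:    # stub for face recognition/ID
    return []

def classify_actions(path: str) -> List[str]:  # stub for "what are people doing"
    return []

@dataclass
class VideoMetadata:
    transcript: str
    on_screen_text: List[str]
    faces: List[str]
    activity_keywords: List[str]

def index_video(path: str) -> VideoMetadata:
    """Fan a video out to every extractor and bundle the results so that
    search, recommendation, and policy systems can all query them later."""
    return VideoMetadata(
        transcript=transcribe_audio(path),
        on_screen_text=ocr_frames(path),
        faces=identify_faces(path),
        activity_keywords=classify_actions(path),
    )

print(index_video("example.mp4"))
```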


Fixing these problems is relatively easy from an algorithmic point of view. The algorithms naturally cluster groups and therefore content, and it's relatively easy to inspect even thousands of clusters and remove or downrank the associated content and users. They (TikTok, FB, Google, etc.) do this all the time with pornography, criminal posts, etc.

The problem they run into with a particular subset of materials like this -- and misinformation is another famous example -- is that there is a high degree of overlap between this content and just being a conservative or a Republican. And then they hit a classic case of "bias in machine learning," where you can't delete anti-LGBTQ or misinformed content without basically deleting large swathes of Republican party members and their leaders.

So we get endless commissions and foo-fah about algorithmic challenges, when really the basic problem is just political, and is the same one Apple and others run into in China. Republicans, China, etc., are markets that are too large (from their point of view) to jettison on political grounds, but they also can't just say that they are abandoning principles they claim to support (such as inclusivity, anti-hate, privacy, truth) simply in order to preserve a major revenue market. The result is bullshit from the companies, but at least the rest of us can be clear that it's not remotely a technical problem, just a political one.
posted by chortly at 9:58 AM on May 22, 2021 [12 favorites]
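
chortly's claim that "it's relatively easy to inspect even thousands of clusters" can be made concrete. A sketch of that review loop, assuming the platform already has a learned embedding per video (random vectors stand in for them here, and the flagged cluster IDs are hypothetical reviewer output):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
video_ids = [f"vid{i}" for i in range(10_000)]
embeddings = rng.normal(size=(10_000, 64))  # stand-in for real learned embeddings

# Reviewing 200 clusters is tractable in a way that 10,000 videos is not.
kmeans = KMeans(n_clusters=200, n_init=10, random_state=0)
cluster_of = kmeans.fit_predict(embeddings)

# Surface a handful of videos per cluster for human reviewers.
for c in range(3):  # first few clusters, as an example
    sample = [v for v, k in zip(video_ids, cluster_of) if k == c][:5]
    print(c, sample)

# Reviewers flag whole clusters; ranking then multiplies those scores down.
flagged_clusters = {17, 42}  # hypothetical reviewer output
downrank = {v: 0.05 for v, k in zip(video_ids, cluster_of) if k in flagged_clusters}
```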


So outright bigotry and mainstream republicanism are difficult to sort because they look so similar. Tiktok is just one tree in this forest. It seems to me that this problem would go away if Republicans stopped using fear and hatred as political tools.
posted by adept256 at 10:16 AM on May 22, 2021 [6 favorites]


Just find + replace TikTok with YouTube, Facebook, Twitter, Instagram or Reddit (or a hundred other country-specific forms of social media) and you'll quickly realise that this isn't a problem limited to one company or one form of algorithm, but an endemic issue across the internet (or, arguably, humanity) as a whole.

Until corporations are willing to take a stand against hatred and actually enforce it on their platform, this is what will happen. Hateful people will find a space on there and will find ways to access each other, which turns into said platform recommending more hate, which spirals on and on. A lot of them are, actively or passively, working on a bottom line of "don't rock the boat with [insert conservative-leaning government here]", which means they'll tolerate hatred and even, in some cases, enforce anti-LGBTQ+ agendas themselves through their moderation, before stepping up against it.

I'll also take a moment to rant here about how FOSTA/SESTA has contributed to this by frightening corporations and online spaces into cutting down on "adult" content, which said spaces have taken to also mean queer or non-conformist content, which is slowly and silently driving LGBTQ+ users away and silencing important voices. To give a small example, a friend of mine is an artist who makes art featuring fat bodies of all genders. She's had her Instagram accounts deleted twice. Every other post of hers is automatically reported for violating community guidelines. Why? Because those posts show soft, fat tummies or legs -- not even sexual content or nudes, but just people living their lives -- which the algorithm (or certain users) classifies as "adult content" because it's too... something. Curvy? "Obscene"? The answer isn't clear. Now she's decided to stop fighting and just leave the platform. Her valuable art can't be shared any more. Just one voice gone among many, and that's not even getting to how this has impacted vulnerable sex workers from marginalised communities.

The internet is increasingly becoming an artificially cleansed, commodified space, where hatred and nastiness is allowed to fester, useful only to sell ads to depressed and oppressed people. Talk about the future that nobody wanted.
posted by fight or flight at 10:17 AM on May 22, 2021 [26 favorites]


The problem with most social media is the lack of quality moderation. To do what we do here on MF is declared too expensive to consider. So people post evil things that fly under the radar of AI moderation tools, and when you report them, most of the time the tools are too poor in quality to see what any human could, and the evil posts stay up. Meanwhile, trolls exploit the moderation tools with ease to report people talking about slurs used against them, leading to fresh victimization in the form of victims of abuse getting banned as abusers.

Everyone knows what the solution is: actual human moderation, careful and well-compensated. But the social media companies that dominate our lives roll their eyes at that in the same way that fast food companies roll their eyes at paying workers a living wage. And so here we are.
posted by DrMew at 10:27 AM on May 22, 2021 [17 favorites]


“This international platform has bad moderation that can/has led to bad outcomes” is going to be a recurring news story for our entire lives, isn’t it
posted by Going To Maine at 10:32 AM on May 22, 2021 [12 favorites]


As pointed out above, algorithms don't have context; they're ... algorithms. I don't TikTok, but YouTube gives me Simpsons videos and weird history videos that I like. I'd also point out that Tucker Carlson is not Donald Knuth.

I don't think this is an easy fix, as pointed out above. Dissenting groups have a good way of couching their language in acceptable terms or using in-group codewords (for lack of a better term). Getting rid of things is hard to do; what's also hard but more feasible is actual moderation. People are expensive, and engineers think in automation terms. Sometimes it is easier to stick to a good algorithm and hire people to review content manually. I am currently on a project that makes no fiscal sense to automate.
posted by geoff. at 11:04 AM on May 22, 2021
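
geoff.'s point about codewords is easy to demonstrate: a naive blocklist catches the literal term but not leetspeak, euphemisms, or punctuation-based signals. All the terms below are made-up placeholders:

```python
# A naive keyword filter versus coded language. Only the literal match is
# caught; the variants need the context that a human reviewer has.
BLOCKLIST = {"badword"}

posts = [
    "badword",           # literal: caught
    "b4dw0rd",           # leetspeak variant: missed
    "day of the X",      # in-group codeword: missed
    "(((dogwhistle)))",  # punctuation-based signal: missed
]

for post in posts:
    flagged = any(term in post.lower() for term in BLOCKLIST)
    print(f"{post!r:22} flagged={flagged}")
```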


💡

Oh, this is the place in the story where we let the AIs do the thinking for us, and it bites us in the ass.

I guess I thought that was still a ways off, but apparently not.
posted by ryanrs at 11:23 AM on May 22, 2021 [2 favorites]


I signed up for TikTok to save my name. I didn't want the app, I used the web. On the web there doesn't appear to be a way to follow particular hashtags to tailor one's "For You" page, so I get the raw feed of everything people post marked "#FYP" trying to get traction on TikTok.

Or got, I should say, because after two days of looking at that I deleted the bookmark and now never look at TikTok unless I'm following a link to something I'm already pretty sure is okay. There are posts of cute animals and people being nice, but what mostly seems to make it to the "For You Page" is every variety imaginable of the human animal being vicious for laughs.

Cops posting themselves violating people's rights for laughs. Bystanders posting their recordings of people getting hurt in serious accidents or attacks for laughs. People spending hours in front of a green screen and making multiple costume changes to make a one minute movie featuring a hurtful stereotype for laughs. Going through an evicted family's belongings and roasting them for laughs. I could go on. It seriously made me wonder if there was any point in the struggle for human liberation.

Also, I don't know what standard they're mixing to, but the audio on TikTok is so loud that it for sure isn't the -13 LUFS YouTube standard.

Anyway, I'm not a fan.
posted by ob1quixote at 11:34 AM on May 22, 2021 [15 favorites]
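
ob1quixote's loudness complaint is measurable. A sketch using the pyloudnorm library (a real package implementing the ITU-R BS.1770 loudness measurement); "clip.wav" is a placeholder filename, and -13 LUFS is simply the figure cited above, not a verified YouTube spec:

```python
import soundfile as sf       # pip install soundfile pyloudnorm
import pyloudnorm as pyln

data, rate = sf.read("clip.wav")             # placeholder filename
meter = pyln.Meter(rate)                     # BS.1770 loudness meter
loudness = meter.integrated_loudness(data)   # integrated loudness in LUFS

TARGET_LUFS = -13.0  # the figure cited in the comment above
print(f"measured {loudness:.1f} LUFS, gain to target: {TARGET_LUFS - loudness:+.1f} dB")

# pyloudnorm can also apply the gain directly:
normalized = pyln.normalize.loudness(data, loudness, TARGET_LUFS)
```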


Like, "oh no, our advertising AI has decided the best way to sell ads is to fuel conspiracy theories and fascism and overthrow democracy."

I didn't think AIs were smart enough yet to pull that off. But to be fair, I had the same reservations about their human collaborators.
posted by ryanrs at 11:34 AM on May 22, 2021 [3 favorites]


What DrMew said.
posted by Artifice_Eternity at 11:38 AM on May 22, 2021


These companies need to face the issue and build in a check on overstimulation of groups, or they're going to keep building anti-queer hate, racism, and yes, maybe push those of us with (clearly correct!) social justice values into doing something stupid.

AI/ML do not work for their stated purposes and are all biased. Recommendation engines do not (often) recommend things you will like, nor even things you want to buy, except by chance. Spotify's algorithm is geared toward satisfying their payola obligations.

Wait, as I type that, how come Spotify/YouTube/etc. tend to go the other way?

Facebook and TikTok will light the way to the new "Fuck your neighbor, that guy's a dick" group, but music services? They sure as shit aren't funnelling you to Oranssi Pazuzu as often as the latest Foo Fighters. So one company's algorithms push you to less popular topics, while another suggests more popular ones. It's just a choice based on the company's business model.

I have an empty, unused (for years) FB account I created for testing applications. Zero friends (or, I forget, it might be friends with my actual FB account), but suddenly, over the past 6-12 months, FB has been emailing notifications of new groups. Half the time the subject line is "Fuck Your Feelings and 4 other groups you might be interested in." "DEAD Inside and 6 other groups..." and so on. It's never the Foo Fighters option (gonna start using that one) where it suggests "Bunnies and Baseball and 4 other groups..." or "I Love My Family."

And who is designing these algorithms? The most sheltered college graduates the world has ever known, and they are True Believers. People whose only concern is how they can help Zuck (or whoever) make more money. What will drive engagement? The minimum unit of engagement is the click, do the math.
posted by rhizome at 12:09 PM on May 22, 2021 [10 favorites]
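
rhizome's observation that one service funnels you mainstream while another funnels you fringe reduces to a single scoring choice. A toy sketch, with invented relevance and popularity numbers, where one exponent decides the direction of the push:

```python
# Whether a feed drifts mainstream or fringe as a tunable knob, not physics.
candidates = {
    "foo_fighters_single":     {"relevance": 0.6, "popularity": 0.95},
    "oranssi_pazuzu_deep_cut": {"relevance": 0.7, "popularity": 0.05},
}

def score(item, alpha):
    """alpha > 0 favors popular items; alpha < 0 deliberately pushes fringe."""
    return item["relevance"] * item["popularity"] ** alpha

for alpha in (0.5, 0.0, -0.5):
    ranked = sorted(candidates, key=lambda k: score(candidates[k], alpha), reverse=True)
    print(f"alpha={alpha:+.1f}: {ranked}")
```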


Yeah, "the algorithm" isn't the problem. TikTok previously admitted it actively suppressed content by LGBTQ, disabled, and fat creators, for victim-blamey reasons. It's very hard to believe they suddenly lack the ability to do this for homophobic or transphobic content. Of course, they could just delete and ban it like I'm presuming they already do for shock stuff, if they cared enough.

Blaming the technology just seems like distraction through over-abstraction. It obscures the fact that real people are making real decisions about stuff.
posted by en forme de poire at 12:11 PM on May 22, 2021 [24 favorites]


IMO it's not AI gone awry, it's that tiktok is an op.
posted by GuyZero at 12:45 PM on May 22, 2021 [7 favorites]


It's very hard to believe they suddenly lack the ability to do this for homophobic or transphobic content.

I'm not defending TikTok, a product I don't even use, but as someone who has been in these organizations:

1. Content fatigue. After you've seen the worst of the worst, it becomes tiring.
2. Hiring people for non-revenue-generating purposes is hard to do. We can make abstract arguments against this, but it is fundamentally a lot easier to spend money on a 15-second video of some teenager getting a new Lamborghini that gets a ton of views than on hiring people to filter out bad content that, unfortunately, also probably gets a ton of views.
3. Unless you're upper management, it is really hard to drive change in an organization. No one anywhere will say they like bad content. At the same time, what we perceive as small (say, a bug in Samsung phones only in China) might be promised to stakeholders and needs to get done. I have not been in a black-and-white situation where there's something morally wrong, but I have definitely been directly told to "do what is assigned to you" and ignore grey areas.
posted by geoff. at 12:48 PM on May 22, 2021 [4 favorites]


But if TikTok knows to give you more anti-lgbtq content if you've told it that you like one instance of anti-lgbtq content, then there is an algorithmic handle on the unifying theme of anti-lgbtq content that a human operator could go in and use to de-emphasize that content.

I’m not sure that one could say confidently that the latter follows from the former in a pure ML-based recommendation system. This is not the same as saying there’s nothing they could do about it, though - it would be hard to believe that they don’t have a business interest in tools that do break down content and viewing trends in a human-readable way.
posted by atoxyl at 3:42 PM on May 22, 2021


I think TikTok's "moderation" and "oversight" systems have been broken for quite some time.

Take Jessica Rummel's situation as an example: as @kali_ma_tx, she has been calling out dangerous racism and homophobia on TikTok for a long time now. Her videos are frequently reported, and her account(s) have been banned and censored several times.

TikTok is completely opaque in its approach to moderation and banning users and content, and quite frankly, this lack of transparency is what makes it a dangerous platform.
posted by yellowcandy at 4:37 PM on May 22, 2021 [1 favorite]


TikTok is not an American company.

TikTok is a Chinese company.

According to its Wikipedia page, with citations from the Guardian, the BBC, and the Washington Post, the Chinese government removes pro-LGBTQ content from TikTok even in countries where it is legal.

This isn’t about its AI or about not wanting to disappoint Republicans. This is about a fascist country controlling the content of a platform owned by a company based inside its borders.
posted by antinomia at 4:47 PM on May 22, 2021 [25 favorites]


I feel that reducing this to a matter of TikTok being a Chinese company isn't helpful; it's not as if the same phenomenon (recommendation algorithms creating a black hole of hateful content) doesn't also happen at American social media companies.
posted by truex at 7:04 PM on May 22, 2021 [7 favorites]


truex, the issue here is in the case of TikTok control by the CCP means pleas to get changes made to the algorithm will fall on deaf ears. You can at least bring Zuckerberg in front of Congress. TikTok doesn't give a shit. In fact, TikTok is an actively malicious actor. The only way to win with TikTok is to not play (which you shouldn't be doing anyway unless you like having your data used for the national defense of a fascist government).
posted by Anonymous at 7:28 PM on May 22, 2021


It's sad that tools ostensibly for communication and sharing (TikTok, Facebook, Instagram, Twitter) have this much valid moral quandary attached to using them.
posted by tiny frying pan at 7:39 PM on May 22, 2021


(This was not really a problem with telegrams, mail, phone, or email, in my view. It's altogether new in its moral threat.)
posted by tiny frying pan at 7:41 PM on May 22, 2021 [2 favorites]


truex, the issue here is in the case of TikTok control by the CCP means pleas to get changes made to the algorithm will fall on deaf ears. You can at least bring Zuckerberg in front of Congress.

Did any substantive changes come out of Zuckerberg's appearance before Congress?

TikTok doesn't give a shit. In fact, TikTok is an actively malicious actor. The only way to win with TikTok is to not play ...

All of this applies to private corporations in the United States as well.

My concern is about jumping immediately to a place where the primary blame for this problem is assigned to China rather than understood as part of an industry-wide problem with algorithmic content moderation.
posted by truex at 8:00 PM on May 22, 2021 [9 favorites]


According to their wikipedia page, with citations from the Guardian, the BBC, and the Washington Post, the Chinese government removes pro-LGBTQ content from TikTok even in countries where it is legal.

While I'm willing, perhaps, to believe that the Chinese government has in various instances "removed pro-LGBTQ content", if you're implying that TikTok is scrubbed of queer content, that's simply inaccurate. There's tons and tons of it, though it's very much in the same Approved Material™ vein that all content on TikTok tends to embody. That's not really what this is about, though, which is: actively homophobic and transphobic content. The Chinese government thing is a weird derail.
posted by dusty potato at 8:12 PM on May 22, 2021 [6 favorites]


Ooh, I took a break from LGBTQ TikTok to read this post! Yeah, I agree with truex and dusty potato - this is an industry-wide problem, and the Chinese government thing is definitely a derail... this isn't Douyin. The algorithm definitely feels more prominent on TikTok than on other platforms - I have to actively like and follow things that I want to see more of. (I also made it to brick TikTok today; it feels like I unlocked some achievement; pretty pleased with myself... I thought CaulkTok was the pinnacle, but now I'm thinking it's BrickTok.) I'm also interested in how widespread this issue is on TikTok. It is dominated by users between 17 and 24 years old, and I think Gen Z is pretty alright.
posted by catcafe at 9:07 PM on May 22, 2021 [2 favorites]


I don't think companies like TikTok or YouTube are neutral, or that this content grows just because it drives engagement. When you take those videos down, well-funded conservative groups fight legal battles to put them back up.

I think most of us here remember the "Facebook is biased against conservatives" legal battles, and IIRC YouTube gets the same legal challenges when it tries to moderate. They're accused of playing sides because one side has way more hate and misinformation. I think years of these kinds of legal fights are what's leading to the situation we're in now. TikTok is a Chinese company, but the videos in English are on the American side, and companies being unable to moderate because they'll be attacked by interest groups for "bias" is what creates the ecosystem English TikTok exists within. This outcome did not just happen; it was created.
posted by subdee at 9:13 PM on May 22, 2021


IIRC YouTube actually has MORE filtering of left-wing terms, like Black Lives Matter, than right-wing terms like White Lives Matter. They were caught back in April blocking advertisers from targeting the first but not the second group.

https://arstechnica.com/tech-policy/2021/04/youtubes-policies-are-blocking-ads-on-black-lives-matter-videos/

None of this is just algorithms; as was mentioned upthread, all the decisions about what to moderate, what not to moderate, what to promote, and what to hide/make unsearchable are political. And there is organized money keeping the hate speech up.
posted by subdee at 9:29 PM on May 22, 2021 [3 favorites]


Harassment of LGBTQ+ creators, and of people who speak up on their behalf, is most intense in the spaces where a lot of young people are, incidentally. This is the new phase of the culture war: go anywhere LGBTQ+ people are building a community and disrupt it, and teach young people ideas that will make them more likely to support conservative policies when they are older.
posted by subdee at 9:41 PM on May 22, 2021 [2 favorites]


Everyone knows what the solution is:

No. The solution is to wipe out corporate-owned social media entirely. It is beyond repair.

The reality, unfortunately, is that that is not going to happen unless and until we can come up with community-owned-and-driven alternatives good enough to gain mass traction. Hopeless though that seems right now, we must not stop trying. Anything we do in the meantime regarding existing social media is not a solution; it is only mitigation.
posted by Cardinal Fang at 12:50 AM on May 23, 2021 [3 favorites]


You can at least bring Zuckerberg in front of Congress.

I guess that's better if you're American? If you're not, that's no closer to meaningful accountability than we get with TikTok, especially not with the US's bizarre veneration of a reductio ad absurdum interpretation of free speech.
posted by Dysk at 1:13 AM on May 23, 2021 [6 favorites]


Sorry, I didn't mean to sound overly reductive on the topic, I was typing on my phone right before bed so I was trying to stick to one point instead of doing a long post, and what I kept seeing in the thread was people not knowing that there is an influence outside of US companies and US culture wars at play in this particular case.

My views on social media in general: data collection for advertising purposes should be banned, full stop. Ad-funded social media should be banned, full stop. This, imho, is the only thing that will stop people profiting from, and therefore encouraging, the outrage spiral that is deeply polarizing the US and leading to actual genocides in other countries. I live my principles: I do not have a Facebook account or an Instagram account. We have a household WhatsApp account because our neighborhood uses it, but that is set up on a separate phone that never leaves the house and is used exclusively for WhatsApp for that one group of people I can't move off the service. I have a Twitter account that I don't use, which exists only to point people to my Mastodon account. My household runs a Mastodon server for ourselves and our friends.

Social media is a beautiful thing; ad-funded social media is a poison that will destroy society if we don't get rid of it. As far as China goes, I wish it were easier to criticize the Chinese government without being seen to participate in the anti-Asian culture war that the fascist-leaning elements in the US are spurring in order to better manipulate people. The activities of the Chinese government should be rightfully condemned; the Chinese people should not. But I want to be clear that I don't advocate banning specifically TikTok, and especially not because it's a Chinese company. We should ban unnecessary or ad-driven data collection by any company, because enabling surveillance violates human rights. And this would take care of TikTok, the Facebook family, Twitter, the Google family, and whatever else is going to pop up.
posted by antinomia at 1:18 AM on May 23, 2021 [8 favorites]


On the eve of the monetized web, I worked for a website that created content for seniors, linked to a seniors' advocacy group that also sold insurance - the Canadian AARP. We had early web metrics, and we had extraordinary editorial freedom although very little budget. So we got out there with a whole bunch of stuff, from protecting yourself from scams, to don't get an STD in your seniors' retirement home (a huge issue! People who can't get pregnant in communal living!), to dream vacations, gardening, etc. We had forums and a chat room.

Our most popular offering was a single page. At one point we had, in Canada, almost a million views a day. It was The Joke Of The Day. It was a text joke, delivered from a database of jokes. We tried really, really hard to drive engagement from that page to big issues. Nope. Joke of the day.

In the same period of time, our magazine ran a cover of a biracial family - grandparents, parents, kids. In Canada. No big deal!! But it was. We were deluged with complaints that it wasn’t Canadian. It was shocking. We happily banned people.

We also had to shut down our politics forum over the 2000 Federal Election (anyone remember Stockwell Day?). We also had a maintenance day in our chat room one time, with a pop-up banner that said “WE ARE LOGGING EVERYTHING TODAY” etc., and all the private chats were sexting. All of them. Hundreds of Canadian seniors sexting. And they really were those people; we used to run in-person events.

Anyways...this was pre-algorithm. We spent a good chunk of time trying to get seniors to do things other than jokes or sex or political fighting (while not being like, humour or sex negative.) We didn’t have too many advertisers so that wasn’t yet a thing. It was an extremely uphill battle.

If human beings would stop looking at this shit, algorithms would stop serving it. Yes, there are forces pushing us to radicalize, to be too tired out to demand better. For sure. But if there’s something the pandemic has reinforced for me, it’s that whatever I thought was an “online” problem is actually that my neighbour is prejudiced against Asian people! In Scarborough! That’s nuts! How did I miss this?

Of course did either of us learn one iota of Asian history in school? No.

But we both saw the same garbage on Facebook. One of us reported it and one of us believed it. Yes, Facebook holds responsibility - I miss editorial responsibility - but the cesspool is there waiting. All the time.

My kids and I spend time on YouTube. I sit with them even though I find a lot of it like nails on a chalkboard. I point out that influencer lifestyles are stressful and bad for them (there’s entire families that go increasingly radical for views, meaner and meaner jokes, weirder stunt charity, etc.) I talk to them about the ads and the value of their own time and attention and to give the good, moderate voices (or the squirrel mazes) space.

But I also raise them as best I can to be thoughtful and kind, and we attend bystander workshops (well my teen and I) and stuff. It is their kindness and awareness that eventually causes them to not watch the cruel stuff. Mostly; they are human children.

It’s a hell of a lot of work. But man, this is the work.
posted by warriorqueen at 6:02 AM on May 23, 2021 [15 favorites]


My concern is about jumping immediately to a place where the primary blame for this problem is assigned to China

OK. I found this series by YouTuber SmarterEveryDay illuminating. They didn't do interviews with TikTok, unfortunately.

The fact that NATO is involved in moderating Facebook does make me feel like the influence of the CPC is a non-zero matter.
posted by eustatic at 7:18 AM on May 23, 2021


Hiring people for non-revenue generating purposes is hard to do.

Except that they already did exactly this to screen out content by queer, fat, and disabled creators...?
posted by en forme de poire at 8:10 AM on May 23, 2021 [2 favorites]


Ad-funded social media should be banned full stop

In SmarterEveryDay's interview with Facebook staff (I forget when), he first insinuates that all social media efforts to influence people are bad, but the Facebook employee responds, "but that's what advertising does".

Fascinating exchange.
posted by eustatic at 9:33 AM on May 23, 2021 [1 favorite]


If that's what advertising does, what is Facebook selling? There would appear to be a category error where Facebook equates suggestions to buy a product to...what? I can't think of any apples-to-apples connections (yes, even in an "if ur not paying ur the product" context) to any product, service, or brand advertising, and I doubt they could either. Like I said, True Believers who exist only to further a CEO's prognostications.
posted by rhizome at 11:55 AM on May 23, 2021


With due respect, there seems to be a certain amount of naivete in the presumptions underlying this discussion.

It is dominated by users between 17 - 24 years old, and I think Gen Z is pretty alright.

It is very interesting to me as a former moderator on Reddit who gave up on that site because of the amount of bigotry and pure bullshit being platformed there these days. TikTok, though, frankly seems like the worst to me, but I've also seen some pretty egregious hatred being promoted on Instagram in particular.

What I've noticed on Reddit and Instagram is that these brigades of far-right trolls at this point realize the sort of sheer misanthropy they are promoting is a tough sell on most social media, and so they have to operate almost entirely in bad faith. Just as one example I'm talking about things like posts with the #grime hashtag on IG being used to slyly promote UKIP videos ('grime' as in the London hip hop hybrid). Reddit is notable for being practically purpose-built for bad faith trolling and brigading.

I think most of us here remember the "Facebook is biased against conservatives" legal battles and IIRC youtube gets the same legal challenges when they try to moderate. They're accused of playing sides bc one side has way more hate and misinformation.

This is a good point, but the other thing that stands out as distinguishing the far-right troll brigades is that they have ca$$$$h on their side. Of course on Reddit, aside from all the cryptocurrency Ponzi scheme promotion going on these days, you also have a very significant amount of disinformation being spread about issues like the climate crisis, and the disinfo almost entirely encourages an industry-friendly frame of reference.

I feel like it's also a bit disingenuous and even somewhat simplistic to blame most of these issues on 'human nature' or the algorithm. Just as one last Reddit example (sorry), I've gotten into arguments with some of the more reasonable users there about how when you search for terms like "divorce" on their site search engine, the top hits are typically the 'men's rights' subreddits -- is this simply an algorithm issue? What could the admins do to address it?

Since I lived in China for the years when they were cracking down the hardest on access to the global internet, I feel like my perspective on the issue of CPC involvement here would be too cynical. I'll just say again I think it's naive at best to marginalize concerns about their role. Remember when Xi targeted that blogger for her fan fiction? They are not a neutral party by any means.
posted by viborg at 5:29 PM on May 23, 2021 [6 favorites]


> I didn't want the app, I used the web. On the web there doesn't appear to be a way to follow particular hashtags to tailor one's "For You" page, so I get the raw feed of everything people post marked "#FYP" trying to get traction on TikTok. Or got, I should say, because after two days of looking at that I deleted the bookmark and now never look at TikTok unless I'm following a link to something I'm already pretty sure is okay. There are posts of cute animals and people being nice, but what mostly seems to make it to the "For You Page" is every variety imaginable of the human animal being vicious for laughs.

Yeah, your "following" tab is how you get a truly tailored feed. The "for you" page (FYP) is just the algorithm doing its thing, but it seems to definitely NOT be just a raw feed of what people post marked #FYP. I have no idea what data seeds a user's very first views of the FYP before they start liking/following/commenting, but I have literally never been shown ANY of the vicious content you describe.

I'm neither trying to encourage you to use TikTok nor defending TikTok's content moderation, which is obviously biased, just providing another piece of anecdata.
posted by desuetude at 10:32 AM on May 24, 2021


At least it doesn't seem to recommend anti-LGBTQ hate as the default, but only once users actively seek it out. This is in contrast with, for example, YouTube, where simply watching videos of someone playing games means the algorithm will (or at least used to) present you with content straight from the alt-right pipeline.
posted by ymgve at 1:23 PM on May 24, 2021 [2 favorites]




This thread has been archived and is closed to new comments