Death By Social Media
October 1, 2022 5:42 PM   Subscribe

In a first, a London coroner's inquest has ruled that social media platforms such as Instagram bore material blame for the death of 14-year-old Molly Russell, who died by suicide in 2017 after falling into a negative feedback loop in which the platforms pushed self-harm media to her. (CW: suicide, self-harm) [alt link]

The inquest revealed how services like Instagram and Pinterest fed Molly content on depression, self-harm, and suicide - some of it so severe that professionals who work with at-risk individuals were shaken by it.

While the inquest itself imposes no financial penalties on companies like Meta, it is being pointed to as evidence both in efforts to regulate social media in the UK and in wrongful-death lawsuits in the US.

(As always, to those struggling with their own mental health and depression: you are not alone. Resources such as the 988 Suicide & Crisis Lifeline in the US (dial 988) or Samaritans in the UK (116 123) are available for people to reach counselors to talk to.)
posted by NoxAeternum (24 comments total) 17 users marked this as a favorite
 
Fuck those companies for their social engineering bullshit
posted by Issithe at 6:48 PM on October 1, 2022 [18 favorites]


ContraPoints' Incels video had a segment on online communities that serve primarily as ways for people to inflict harm on themselves and others, and on how addictive they can be.

Doesn't surprise me at all that social media companies "accidentally" profit off of this phenomenon.
posted by Reyturner at 8:11 PM on October 1, 2022 [7 favorites]


When I was researching assisted dying techniques for a novel I was writing, I found some of these communities.

Some of the forum boards were truly disturbing. Also all kinds of overlaps with communities of people who didn't want to be human anymore. Vampires and werewolves, and not the dress up variety.

I wasn't really aware of Instagram at the time. I shudder to think.
posted by Zumbador at 8:36 PM on October 1, 2022 [3 favorites]


This is very sad and extremely terrible. I'm ever hopeful that by the time my kids are old enough to be interested in social media it'll be a thing that "old people" do and they'll find some better way to fill their time. I feel like we are almost there.
posted by Toddles at 8:37 PM on October 1, 2022 [3 favorites]


Eeeek. Oh my.

I read here: archive
posted by PistachioRoux at 9:32 PM on October 1, 2022 [2 favorites]


My kid uses Pinterest for drawing references and window shopping clothes. When I set the account up, I had to go through and actively ‘do not show me things like this’ for a lot of weight loss and critical beauty (is there a term for this? Not here’s some cool eyeshadow but how to fix what’s horrifically wrong with your normal face stuff) that just rolled up in her feed. A couple of curious clicks from a blueberry smoothie recipe and you’re surrounded by pro-ana and self harm guides.
posted by dorothyisunderwood at 9:44 PM on October 1, 2022 [24 favorites]


Mod note: Thanks, PistachioRoux; I've added that to the post.
posted by taz (staff) at 10:57 PM on October 1, 2022


It seems like part of the problem is that we allow a company like Facebook to change its name, so that upper management and the owner can more easily evade public accountability. The NYTimes and other media outlets use the rename — and we start to get accustomed to the change, as well — which helps put more distance between the criminals and victims. At the end of the day, we just don't hold these ad executives criminally or even financially responsible on any real level. Whatever meager laws exist to regulate Facebook and other ad companies, such as they are, don't really get enforced. And if they somehow are, it takes years to get to enforcement, which is ignored or negotiated away. Some crocodile tears might be shed by some talking head, but ultimately the checks keep getting cashed. It's pretty depressing.
posted by They sucked his brains out! at 11:04 PM on October 1, 2022 [6 favorites]


It seems like part of the problem is that we allow a company like Facebook to change its name, so that upper management and the owner can more easily evade public accountability.

The issue isn't letting Facebook rename itself after Zuckerberg's white whale, but that there is a fundamental belief that internet service providers aren't responsible for the content their users provide, even as they put a heavy thumb on the scale of how that content gets served back to users. Social media platforms turn a blind eye to communities on their platforms that cause genuine harm, and that inaction is defended as necessary to "protect free expression."
posted by NoxAeternum at 2:37 AM on October 2, 2022 [11 favorites]


Yes. It seems the pseudo-free-speech defence that they’re a neutral service just hosting user-generated content can’t survive the reality that their business explicitly relies on engagement. As soon as you start algorithmically pushing harmful content to children, you can’t wash your hands of your platform.

I’m not convinced about large chunks of the proposed online harm bill, but if companies aren’t going to take responsibility for children being crushed between the grinding mills of AI-recommendation, ultimately government will step up.
posted by Hartster at 3:35 AM on October 2, 2022 [9 favorites]


This is so sad. I am going to closely monitor my child’s social media, but it seems like really limited access to devices is the only way to make sure that this stuff doesn’t get compulsive. I hate having to invade his privacy, but seems like the only way to parent “these days.”
posted by haptic_avenger at 5:13 AM on October 2, 2022 [5 favorites]


As soon as you start algorithmically pushing harmful content to children, you can’t wash your hands of your platform.

OK. How does anybody know they're pushing any content to children on the internet?

I’m not convinced about large chunks of the proposed online harm bill, but if companies aren’t going to take responsibility for children being crushed between the grinding mills of AI-recommendation, ultimately government will step up.

And I'm sure the government solution will work well.

The suicide is beyond sad. But the idea that social media is to blame misses the mark. This poor girl was so depressed, yet her own parents, who don't seem disconnected or deficient, couldn't see how bad it was. And it was bad. Meta gave up 16,000 pages from her Instagram account. From a 14-year-old.

Sure, awful content could be restricted from accounts for underage people. Until they figure out how to set up an account for themselves. What kid would do that?
posted by 2N2222 at 1:05 PM on October 2, 2022 [1 favorite]


The suicide is beyond sad. But the idea that social media is to blame misses the mark.

One man-in-tech's opinion, but twenty years from now, I think we're going to look back at arguments like this the same way we look at insistences that cigarettes can't be blamed for causing cancer.
posted by Tom Hanks Cannot Be Trusted at 1:17 PM on October 2, 2022 [22 favorites]


Worse, I worry that kids will simply be taught "and then the Internet destroyed society" and it will just be accepted knowledge, just like I was taught "the First World War was a pointless waste of life". And some kids will wonder "why did the people then think that bad stuff was OK?" and the teacher will wave their hands and say something non-committal and the kids will just think "people then were dumb".
posted by one more day at 2:13 PM on October 2, 2022 [5 favorites]


OK. How does anybody know they're pushing any content to children on the internet?

This argument misses the point completely - the problem isn't that "objectionable content" is being served to minors, but that content about self-harm - content that troubled even mental health professionals who work with at-risk individuals - is served up algorithmically to vulnerable individuals. And while that group includes minors, it is far from exclusive to them - this death would be just as much a tragedy (and social media just as culpable) if Molly had been 21, or 35, or 60.

And I'm sure the government solution will work well.

Ah, good old cynicism as a deflection from the reality that Silicon Valley routinely turns a blind eye to harm if it would mean that they would have to spend money. Government is the tool we use to make companies behave, especially when they harm vulnerable people.

The suicide is beyond sad. But the idea that social media is to blame misses the mark.

How does it miss the mark, exactly? Social media pushed material that fed her ideation, putting her into a negative feedback loop that destroyed any positive feelings she had and led her to feel that her only escape was death. And while Facebook might claim that these groups are "a cry for help" to evade responsibility, the reality is that there are groups that are focused on putting a positive spin on self-harm, and social media platforms happily serve their content to vulnerable people in the name of "engagement" while refusing to remove them in the name of "freedom of expression".

This poor girl was so depressed, yet her own parents, who don't seem disconnected or deficient, couldn't see how bad it was.

Now you're just showing your ignorance of how depression works. Because as someone who has been dealing with depression since I was a teen, I can tell you that the first "trick" depression teaches you is how to hide your true feelings - it does this not only so that the people who could help you don't even know you need help, but also to further isolate you by using their failure to help as "proof" that they don't actually care.

And it was bad. Meta gave up 16,000 pages from her Instagram account. From a 14-year-old.

And yet with all this evidence, Facebook continued to serve her more material to feed her ideation. Remind me again why they aren't at fault?

Sure, awful content could be restricted from accounts for underage people. Until they figure out how to set up an account for themselves. What kid would do that?

Once again, the focus on her age misses the point - the problem is that social media platforms argue that when it comes to things like self-harm, bigotry, and hate, they can be "neutral" - a position that has been shown to be laughably false.
posted by NoxAeternum at 2:31 PM on October 2, 2022 [23 favorites]


The problem with "neutral," here, is that it basically assumes that social media mechanics are themselves completely inert phenomena. And that's flat-out not true. Social media mechanics are technologies, and they are technologies that have been evolved to specific psychological and financial ends. They want us to feel something. And the thing they want us to feel is literally bad for us.

Let's go past the obvious. Yes, they want to addict us. Yes, they are designed to continually grab our attention. Yes, they revolve around "likes" and "notifications," and are designed to make us crave both—and to "create content" that will generate both, in between listlessly scrolling to find more.

All that is true. But why do social networks exacerbate anxiety and depression so much? Why are they all so consistently good at radicalizing their users? It goes beyond red numbers and infinitely-scrolling feeds. At the heart of all these mechanics lies a simple but non-obvious truth: social networks are designed to put us in competition with one another—and on some level, the game board is your very sense of self.

Your "character" is you, no matter what the game. Are you interesting? Are you beautiful? Are you funny? Do you have enough friends? Are you enjoying your life enough? Are you rich? Are you famous? Are you right? Are your feelings and opinions the correct ones to have? Are the things you enjoy, objectively speaking, the most correct things you can enjoy?

The more invested you get—and these games are expertly designed to make us feel invested—the less it suffices to play completely honestly, "as yourself." There is an increasing pressure to perform, to outdo, to win at all costs. But your "reward" isn't really a reward, and there are two reasons why that is:
  1. First, any chemical reaction in your brain to "being liked" is extremely fleeting, unless it's connected to some genuine intrinsic benefit (like "meaningful connection"). Social networks aren't designed to offer you that.
  2. Second, and more perniciously, the actual "reward" in the schema of the game is that you make other people feel worse, driving them to try and outdo you. This is the Möbius-strip loop that defines social networking: you doing "well" is what causes other people to "lose," usually in ways designed to make them feel specific forms of inadequacy. And they respond to those feelings by doubling down, pushing those feelings onto you instead.
It's a game without an actual winner, because nobody's accurately depicting anything. They're just lying to make other people feel bad. Projecting images that are designed, somewhat consciously but mostly due to social-network manipulation, to make other people feel more alienated and alone. Which leads to the phenomenon that defines the social-media age: literal billions of people are unified in these feelings of resentful isolation, but that unification is sundered by the fact that they've all been taught to see each other as the enemy.

The "algorithm" is designed to blindly push people towards things that generate the most engagement—in other words, the things that make people feel the most compelled to commit to something. Perversely, that often means the things that make people feel the worst, because that's what leads them to want to lash out or overcompensate in some way. "Lowest common denominator" obviously holds some advantage—hence cute animals popping up everywhere—but ultimately that's not quite enough. The most successful things are also irritating or aggravating on some level, because that's what sets the feedback loop into motion.

The algorithm doesn't know what it's pushing—but there's plenty of data on what it pushes and why. It helps users encourage one another to despair, because the more you despair, the more you need an outlet for venting that despair. Hence communities that revolve around suicidal ideation or eating disorders or explaining why slight variations in the shape of a male forehead determine whether women feel a biological need to screw a man over. The unifying trait of these communities is that they consist of people encouraging each other to keep going, confirming for one another that their worldview is right while constantly upping the ante, escalating the bleakness, ratcheting up the sense of urgency. And these communities proliferate because they do the algorithm's work for it—so the algorithm keeps recommending them to new people. New users, if you will.

You know the popular story about the "paperclips AI?" The machine that gets told to make paperclips as optimally as possible, and ends up destroying the world by turning it all into paperclips? I've seen it suggested before that publicly-owned corporations are a version of this AI: companies like Chevron can't help setting the world on fire, because they're an algorithm tooled towards maximizing profits at the expense of literally anything else. Social media algorithms work the same way: they will destroy communities and human connection and they will do their damnedest to destroy your soul too, because they don't care about human beings.

The "optimal user," to these algorithms, is the most horrific image of a person that you can imagine—the kind of person who'd annihilate a part of your mental well-being if you so much as glimpsed at them. That person might be an vapid influencer or a political radical or someone who isn't just suicidally depressed but actively addicted, in a deeply disturbing way, to suicide as a concept. Or all three! But make no mistake: this is what the algorithms are designed to produce, regardless of whether or not their creators are smart enough or willing enough to anticipate the end results. The monstrosity isn't an unintentional byproduct—it's the precise thing these sites want to generate, even if they'd rather limit it just enough to maintain plausible deniability or even a good night's sleep.

They want engagement. They want you obsessed with other people and obsessed with yourself—and they want you to hate both them and yourself. That's their idea of perpetual motion, and it leads to Bored Apes and murders and suicides and not much else. And as the economy crumbles and the social safety net falls away, and as younger generations are taught that the only way out is to hustle, their livelihood and their future starts to look exactly like social media: whether it's driving for Uber or posting on OnlyFans or living in a TikTok influencer house or, hell, trying to invent the next big social network, you're just trying to be the one person who does well enough to destroy all the others, as you cross your fingers and pray that the algorithm doesn't abruptly change and pull the rug out from under your feet.
posted by Tom Hanks Cannot Be Trusted at 5:13 PM on October 2, 2022 [17 favorites]


I've seen it suggested before that publicly-owned corporations are a version of this AI: companies like Chevron can't help setting the world on fire, because they're an algorithm tooled towards maximizing profits at the expense of literally anything else.

The ever-relevant essay Dude, you broke the future! by Charlie Stross.
However, Facebook is trying to get eyeballs on ads, as is Twitter, as is Google. To do this, they fine-tune the content they show you to make it more attractive to your eyes—and by 'attractive' I do not mean pleasant. We humans have an evolved automatic reflex to pay attention to threats and horrors as well as pleasurable stimuli: consider the way highway traffic always slows to a crawl as it is funnelled past an accident site. The algorithms that determine what to show us when we look at Facebook or Twitter take this bias into account. You might react more strongly to a public hanging in Iran than to a couple kissing: the algorithm knows, and will show you whatever makes you pay attention.
posted by MrVisible at 6:01 PM on October 2, 2022 [4 favorites]


Despite what many people will tell you, the solution to this kind of problem truly is EASY to implement.

It would be resisted by every group that benefits from the status quo -- this includes radical/reactionary politicians, the companies themselves, and even people who honestly believe that their user experience benefits from it.

We must abolish and criminalize the use of algorithmic promotion of content on all social media, and abolish and criminalize the use of engagement-goal algorithms to reorder content in non-chronological order on social media.

The internet was a bad enough place when people self-selected into echo chambers that amplified their own cultural pathologies. It became many orders of magnitude worse when it turned out that surreptitiously putting users into echo chambers they're not even aware of, and amplifying cultural pathologies sub rosa, was the most profitable means by which to extract value from users.

Force users to curate their own follows and feeds, and force social media companies to have human-curated promotional content. Clearly this does not prevent users from falling into toxic groups (I'm reminded of the old pro-ana forums, here), but it would prevent someone who LEAVES those groups from having an algorithm keep pushing its content to them wherever they may later go.
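
For contrast, here's a sketch of what I'm describing (hypothetical, obviously): when the only inputs are who you follow and when they posted, the entire "feed algorithm" fits in a few lines.

    # Hypothetical sketch of a non-algorithmic feed: only accounts the user
    # explicitly follows, newest first. Nothing is promoted, demoted, or
    # "recommended".
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Post:
        author: str
        created_at: datetime
        body: str

    def chronological_feed(posts: list[Post], follows: set[str]) -> list[Post]:
        return sorted(
            (p for p in posts if p.author in follows),
            key=lambda p: p.created_at,
            reverse=True,
        )

There is nothing here for an engagement metric to optimize, and nothing to push a user toward content they never asked for.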
posted by tclark at 6:48 PM on October 2, 2022 [10 favorites]


but I will gently suggest that people trying not to censor the internet are concerned about things like the belief that being trans is a mental illness that minors are catching off each other via social contagion.

To which I will point out that we just had a major demonstration that no, they really are not. You don't create hate offsets if you actually care about what hate and abuse do.

The challenge with trying to contain horrors is that it tends, like voter ID, to be used as a Trojan horse, to scrub "adult" content, and from there, LGBTQ+, etc... stuff.

And this here is the argument Cloudflare head Matthew Prince uses for why he just has to do business with fascists and bigots while turning his gaze away from the harm they do to marginalized and vulnerable communities and people - because if we act on that harm, it's a trip down the slippery slope to censorship and oppression. Better that they be the "price of free speech".

I'm more than a little done with every response to the point that speech can in fact harm being a synchronized dive down the slippery slope. We see the strategy here - dodge responsibility by arguing for the "greater good". And it's starting to get very old that we're expected to just shrug and wash our hands of the harms that social media is engaging in and enabling. I'm pretty sure that we can stop the legitimization of self-harm and abuse while protecting the speech of the marginalized - but it means that the people in charge need to actually give a fuck.
posted by NoxAeternum at 7:17 AM on October 3, 2022 [4 favorites]


NoxAeternum, I wish you'd give these threads about privacy/social media some room to breathe, and not always try to get every single person to agree with you 100%. I get that this is a topic you're very into, but it makes it hard to read this thread with your definitive statements about who is Morally Right and who is Not To Be Trusted.
posted by sagc at 7:26 AM on October 3, 2022 [4 favorites]


The suicide is beyond sad. But the idea that social media is to blame misses the mark.

Both KiwiFarms and 4chan/8chan have body counts, which is a prime reason why they were kicked off various hosting and cybersecurity providers. And even with those body counts, getting them deplatformed has been a process requiring concerted activism, with little to no help from government bodies - even when those bodies don't consider the threats and harassment involved to be covered under free speech.

And, of course, this doesn't even cover how social media has been directly responsible for deadly acts, ranging from assaults and murders of individuals (for example, swatting) to mass murder (encouraging shooters and other murderers), all the way up to actual genocides such as that of the Rohingya.
posted by Glegrinof the Pig-Man at 10:13 AM on October 3, 2022 [3 favorites]


haptic_avenger, above:

I am going to closely monitor my child’s social media, but it seems like really limited access to devices is the only way to make sure that this stuff doesn’t get compulsive. I hate having to invade his privacy, but seems like the only way to parent “these days.”

So, I'm not the departmental expert on children and media (we do have one and it's Not Me), but I do have to teach about it to future librarians now and then. The research so far is pretty clear: the biggest protective factor against many social-media harms is the child trusting the major responsible adult in their life.

This protective factor takes a massive dive when that responsible adult invades the child's privacy. Please think very, very hard about whether and when to do this.
posted by humbug at 11:40 AM on October 3, 2022 [9 favorites]


We must abolish and criminalize the use of algorithmic promotion of content on all social media, and abolish and criminalize the use of engagement-goal algorithms to reorder content in non-chronological order on social media.

Well, you may just get that, as the Supreme Court granted cert in Gonzalez v. Google, which is about whether algorithmic promotion is protected by Section 230.
posted by NoxAeternum at 3:50 PM on October 3, 2022


SCOTUS might actually get that one right, but it could just as well be another Alito opinion where only Evangelical Christians are allowed to modify what's displayed, censor, or ban users, citing the Council of Nicaea as precedent.
posted by tclark at 4:47 PM on October 4, 2022 [1 favorite]



