Sort By Controversial
February 4, 2019 5:54 PM   Subscribe

You guys, who haven’t heard a really bad Scissor statement yet and don’t know what it’s like – it’s easy for you to say “don’t let it manipulate you” or “we need a hard and fast policy of not letting ourselves fight over Scissor statements”. But how do you know you’re not in the wrong? How do you know there’s not an issue out there where, if you knew it, you would agree it would be better to just nuke the world and let us start over again from the sewer mutants, rather than let the sort of people who would support it continue to pollute the world with their presence?

Scott Alexander (previously) with a fictional(-ish) tale of weaponized online strife.
posted by Johnny Wallflower (61 comments total) 11 users marked this as a favorite
 
*confirms launch code*
posted by Johnny Wallflower at 6:05 PM on February 4, 2019


I'm curious who else besides me has had an argument that they couldn't stop arguing about with someone. I think I'm better now at not being drawn in, but there are still some topics where I can know someone is likely trolling and I'll still feel a need to spend an hour researching a refutation, because their trolling might capture someone else if there's no counterargument.

Someone is releasing these statements in the story, but what if it's not to be divisive like the narrator suggests, but as an attempt at inoculation?
posted by gryftir at 6:43 PM on February 4, 2019 [2 favorites]


I suspect that half of my comments that have been deleted from MetaFilter - and a significant percentage of ALL deleted comments - would qualify as Scissor Statements. That's why I respect the moderators here for their judgment AND their grace under extreme pressure.
posted by oneswellfoop at 6:44 PM on February 4, 2019 [7 favorites]


Is this something I'd have to learn to code to understand?

More seriously, I think the underlying issue here is the effort to understand online and cultural division as a matter of processing propositions differently, or of propositions with special (context-dependent?) properties, rather than as the product of the same forces that generate context, of whole histories and so forth.

This is the flaw in a lot of online "rationalism," of which SSC is a prime example: a 101-level understanding of analytical philosophy applied to complex sociocultural issues, each of which tends to have its own features and history, not some common, identifiable one as if all irreconcilable cultural differences share some potentially isolatable factor. All unhappy families are unhappy in their own way, as some Russian fellow once wrote. No doubt a plot to divide Americans.

Beyond that, it's too cute by half to deliberately embed what is likely meant as a meta-Scissor statement with the flagged-up "is it racism or not?" element of the story's narrative POV and initial conflict. It's a cheap and silly trick, designed to provoke certain reactions from certain readers while enabling others to sit back and play the dispassionate logician, secretly indulging in judgment of those so easily "triggered." "You see! Scissor statements are everywhere, and the way to avoid them is to understand that they cannot be verified, and so we should behave as if they have no truth-content, except we feel sooooo strongly about our 'side' of the proposition's projected truth or falsity." They're Rorschach tests, the story wants to tell us, merely suggestive blobs onto which we project our assumptions, which we then defend with fierce tribal rage at the out-group. Immunity means being of the grey tribe, or whatever.

Except of course the questions of what is ethical, what is historically true and likely still true or not, of what a society might owe to those within it and how to balance competing claims, in short, the question "what is just in this situation?", are an unavoidable, ever-relevant, inescapable set of problems, not a 101-level analytical philosophy poser. As a thought experiment, the story is vacuous; as a provocation, it is a clumsy one; as either or both it is not really much worth thinking about, let alone being bothered by.
posted by kewb at 6:53 PM on February 4, 2019 [30 favorites]


"We trained this program on a whole bunch of historical examples, then asked it to come up with new ones, and *gasp* they closely corresponded with the historical examples that we trained it on! It predicted the examples in its training data!! Spoooooky; there must be some secret shadowy cabal!!!"
posted by eviemath at 6:54 PM on February 4, 2019 [7 favorites]


The escalation outlined in this story, from difference of opinion to violent disagreement to attempts to personally destroy the disagreeing party, is a remarkably good outline of what happened when I started pointing out the sexism and racism in my workplace.
posted by medusa at 7:00 PM on February 4, 2019 [22 favorites]


kewb, well-put. Something in this didn't sit right with me, but I couldn't put my finger on it at first. Now that you've pointed it out, I notice the entire piece is littered with casually rendered judgements and opinions-as-facts that were breaking my flow as a reader.
posted by glonous keming at 7:01 PM on February 4, 2019 [2 favorites]


the problem with this story is that people can actually have reasons for disagreeing with each other. most of the supposed real world scissor statements obviously fall into this category.

a scissor statement as described would have to be like blue dress/gold dress but for politics, and it would have to not really divide along existing ideologies. cannot imagine anything working like that — maybe some weird abstract ethical trolley problem thing.
posted by vogon_poet at 7:09 PM on February 4, 2019 [6 favorites]


Isn't this the dude who is a huge anti-feminist and likes to link to anti-Semitic caricatures in an attempt to "own the feminists"? Also, isn't he a eugenicist? Why should I take anything he says seriously?
posted by Homo neanderthalensis at 7:15 PM on February 4, 2019 [13 favorites]


Nah. Fuck this guy.

EDIT:
Sorry, let me expand upon this statement. Fuck this guy for thinking that pointing out that questions that touch deep and historical divisions in our society are also focal points of societal strife. And fuck him for suggesting that the answer is to go live in the woods with a gun.
posted by runcibleshaw at 7:24 PM on February 4, 2019 [4 favorites]


*for thinking pointing it out is profound.
That edit window closes fast.
posted by runcibleshaw at 7:30 PM on February 4, 2019 [1 favorite]


To add on to kewb's comment, I'd say that this story highlights how among a certain strain of internet thinker (or "thinker") the question of ethics/morals/values is essentially invisible. There's this unspoken assumption that there exists one true set of values that everyone shares (or should share) and that any disagreement or discord is due to some kind of lack of information or irrationality or other cognitive deficit. There doesn't seem to be any understanding or acknowledgment that people might simply have different values and what that might imply for their worldview. I mean, I think you could argue that in 1863, the statement "black people should be enslaved" would qualify as a scissor statement under this story's framework.

Also, the best/worst part of the story is the protagonist trying to check if the Indian software developer knew the difference between the words "true" and "false". I have to wonder if the author had ever encountered an Indian software developer, particularly one who was able to qualify for a visa to work in the States.
posted by mhum at 7:33 PM on February 4, 2019 [18 favorites]


I'm curious who else besides me has had an argument that they couldn't stop arguing about with someone.

Have I ever not?
posted by atoxyl at 7:41 PM on February 4, 2019 [1 favorite]


There are a few writing Scott Alexanders out there. This one is the one born in 1984, whereas the screenwriter of The People vs. Larry Flynt was born in '63 and, rumor has it, is a pretty froopy dude.


1984 Alexander, bless his pointy head, quite likes to read his own words, which is why he uses so many of them to say so very little.
posted by SecretAgentSockpuppet at 7:44 PM on February 4, 2019 [9 favorites]


I liked this story better when it involved decadent plays, and there were suicide booths, and crazy guys plotted to kill various siblings...
posted by LeRoienJaune at 7:45 PM on February 4, 2019 [3 favorites]


To my eyes, the narrator of this story is meant to be newly, barely, and not particularly hopefully self-aware about some deeply problematic limits of human reliability (including his own), but is still notably not meant to be reliable (for example, potentially racist and unwilling to examine that regarding the firing, also potentially prone to violence barely or even accidentally held in check). That distinguishes this from the fantasy of possessing a dispassionate and yet so tragically insightful intellect.

And observing that people will fight bitterly over various values + reasoning chains is distinct from painting a value-free picture of the world. In some ways it's worse: what if you can actually be reasonably correct -- even verifiably correct -- and still be dangerous to various forms of social ties & fabric, at least without certain kinds of restraint, kinds of restraint people are prone to discard?

What kinds of practices would you have to resort to in order to keep a handle on that? How would you have to prepare for encounters with other people who haven't done that, or even can't, and may not even be reasonably correct?

Those are the kinds of questions I think the story is meant to provoke -- along with the urgency of the problem in an era when we have factors (ML or not) amplifying the problem. And the narrator's answers at the end are no more meant to be relied on than the frightened protests of cracked Lovecraft narrators who've reached the end of their tale; if anything, they're an illustration of yet another way of succumbing.

If there's a knot in the story for me to worry at, it's that, OK, ML-generated enmity basilisks are our MacGuffin here, and the narrator understands this, appreciates the magnitude of the threat, and has the capability to reconstruct similar work. Seems to me that researching ML-generated peace unicorns that have the opposite effect, a Rey's Rock to Shiri's Scissors, might be one reasonable approach. Maybe it doesn't quite have the punch of a denouement with a Lovecraftian broken narrator, though.
posted by wildblueyonder at 7:50 PM on February 4, 2019 [5 favorites]


Even if the details about the woman being Indian and the man being Jewish are meant as some sort of clever meta-commentary on controversial statements, it's still bizarrely racist writing. The idea that single statements could cause... what, civil war? full-scale societal collapse? It necessitates simplifying the human mind to the point that none of the argument really holds up, and/or gives human language nearly supernatural abilities - couched behind fiction that only obscures the ideas, likely by design.
posted by reductiondesign at 7:58 PM on February 4, 2019 [8 favorites]


Become a hermit prepper because the dress is white and gold? That's a fucking excellent idea.
posted by grumpybear69 at 8:09 PM on February 4, 2019 [1 favorite]


I took this as a science fiction story along the lines of comp.basilisk FAQ, but with words instead of images as the transmission vector.
posted by Robin Kestrel at 8:15 PM on February 4, 2019 [7 favorites]


This story also ignores the fact that humans don't need any help whatsoever creating conflicts that escalate into violence or war. We've been good at it since well before the written word. People fighting over not being able to agree on things isn't the heady scifi concept, or trenchant commentary on human nature, that the author thinks it is. He also gets the direction of causation of the conflict over controversial topics upside-down. People don't disagree because Colin Kaepernick took a knee. People already disagreed, and Kap (or any of the stories he used) was a convenient symbol for already existing hostility towards certain concepts.

A more insightful take on this idea might have pointed out that these types of controversial headlines (and it sure feels like the author just googled "controversial headlines" for this piece) aren't the cause of polarized disagreement, but rather are flashpoints for an underlying division that already exists. Clearly this author hasn't thought real hard about what he was trying to say here, other than that fighting about racism or rape is silly and bad.

As for the idea that this is an unreliable narrator and that the author is well aware that his narrator is a bigoted dipshit... well, that previously link has some real doozies from this particular author that make me think he isn't sophisticated enough to create such a narrator.

Also, this story is just a pale imitation of a dozen or so Warren Ellis stories from the aughts about memetic viruses causing the apocalypse. Check out Angel Stomp Future, or that one Global Frequency issue where the hacker makes everyone bisexual to stop an alien meme virus, for a better version of this.
posted by runcibleshaw at 8:21 PM on February 4, 2019 [9 favorites]


One-time statements and Reddit posts thankfully are unlikely to cause civil war, and a complaint that the author strains disbelief by suggesting that level of causality might be fair enough.

But I think the causal chain is something like this:

1) There are discussion frames that tribalize engagement (hence the narrator's emphasis on the kind of person who believes the other side of the argument)

2) Some also create high levels of engagement (maybe some even approach self-sustaining?)

3) There are systems and perhaps even people in society that are incentivized to discover scissor statements and amplify/spread them -- especially those that can draw in engagement to begin with

The idea that these things are happening doesn't seem controversial. If Scott Alexander's precious little phrasing* bugs you... we could always use the phrase "wedge issue."

The degree to which the tinder here is dry and flammable is a good question. I don't know if the human mind is that simple or human language has nearly supernatural abilities. I'm starting to lend credence to cognitive linguists like Lakoff, though, to the effect that it's more powerful than I'd previously suspected.

* "Scott, if your life had a face, I would punch it."
posted by wildblueyonder at 8:27 PM on February 4, 2019 [4 favorites]


The scissor-statement concept does seem to discount the possibility that some events on social media are actually newsworthy, and that peoples' beliefs are often sincerely held and maybe even worth fighting over. I expect it will be invoked by centrist pundits to trivialize public outrage over things we see online, so we'll have that to contend with.

Still, it appears that internet content is increasingly subject to a kind of natural selection through which the most controversial and divisive messages thrive and multiply while more nuanced/moderate/boring content fades into the woodwork. So while I don't love the story, I'm down with some of the themes.
posted by ducky l'orange at 8:38 PM on February 4, 2019 [5 favorites]


It would have been funny if it had stuck with the original thread and made it so that Scissor only worked on arcane technical doodads like the database software and yet still resulted in the relationship-ending fights. Trying to come up with "real" political scenarios which are "just opinions" made it super-dumb.
posted by Scattercat at 9:01 PM on February 4, 2019 [4 favorites]


Homo neanderthalensis, there's a bit more nuance; he's Jewish, but that was a clusterfuck of an essay. Did not like. I understood the point he was trying to make, but he didn't think things through, and then went at it badly. Rereading it makes me angry.

He's also written stuff I like, though, like this post, Meditations on Moloch, so... I dunno quite how to express my thoughts on him, but while I understand your take, I guess I'm at least a bit more charitable.

Also, runcibleshaw, on preview: I don't think authors are always their protagonists. There's a bit of good discussion of that in the "Cat Person Went Viral" MeFi post from last month.
posted by gryftir at 9:46 PM on February 4, 2019


the entire piece is littered with casually rendered judgements and opinions-as-facts that were breaking my flow as a reader.

This is pretty much Scott Alexander's stock-in-trade, with maximum verbosity.
posted by soundguy99 at 9:59 PM on February 4, 2019 [7 favorites]


Isn't this the dude who is a huge anti-feminist and likes to link to anti-Semitic caricatures in an attempt to "own the feminists"? Also, isn't he a eugenicist?

You may be thinking of someone else.
posted by justsomebodythatyouusedtoknow at 10:50 PM on February 4, 2019 [1 favorite]


You may be thinking of someone else.

He might be casting the overall picture of the guy in an unfavorable light - I'm not a big fan, myself, but I'd say the impression I more often have gotten from his writing is of a blindered radical centrist type than an outright redpill guy. And I also liked the Moloch essay, back in the day.

But he absolutely is the guy who wrote this thing (in which memes about "neckbeards" are compared to Nazi cartoons).
posted by atoxyl at 1:14 AM on February 5, 2019 [6 favorites]


(Christ that nerd essay is long.)
posted by atoxyl at 1:18 AM on February 5, 2019 [1 favorite]


+1 to haltingproblemsolved. I’ve never seen anyone link to his blog without the discussion turning immediately to whether and what type of bad person he is. This really puzzled me the first time I encountered his writing and was admonished for sharing it (a book review in that case). Is he really that famous? Or is this just some kind of nerdview thing?

About this story, enh. It does feel a little clumsy, and I think it would have been a better story if he'd left Putin and Kaepernick out of it (and I got legit creeped out when the narrator brought his ex-girlfriend into it). But I don't quite see how the story implies that nobody has good reasons for the things they believe. Can those of you with this perspective say more?

On a totally other note, as a statistician I am kind of charmed by the mathematical setup. It’s like trying to model the variance instead of the mean. I don’t do that much but I can see why one might want to (for non sociopathic reasons).
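To unpack that with a toy example (made-up votes, purely illustrative, nothing to do with the story's actual system): treat each statement as a set of agree/disagree responses and rank by the spread of the responses rather than their average.

```python
# Toy illustration: "sort by controversial" as modeling the variance
# of responses rather than the mean.
import statistics

# Hypothetical agree/disagree votes (1 = agree, 0 = disagree) per statement.
votes = {
    "tabs are fine": [1, 1, 1, 1, 0],             # broad agreement -> low variance
    "rewrite it in Brand X": [1, 0, 1, 0, 1, 0],  # even split -> high variance
    "never ship on Friday": [0, 0, 0, 0, 1],      # broad disagreement -> low variance
}

def controversy(vs):
    """Population variance of binary votes; maximal (0.25) at a 50/50 split."""
    return statistics.pvariance(vs)

# Ranking by the mean would surface the popular statements; ranking by the
# variance surfaces the divisive ones.
ranked = sorted(votes, key=lambda s: controversy(votes[s]), reverse=True)
print(ranked[0])  # the most evenly split, i.e. most "scissor-like", statement
```

Ranking by the mean finds what's popular; ranking by the variance finds what splits the room, which is the whole trick.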
posted by eirias at 4:47 AM on February 5, 2019 [1 favorite]


Real problem is he ripped this off from Project Itoh's Genocidal Organ (in which the gimmick is that there are specific language patterns that can trigger people to commit genocide) and made it worse.
posted by MartinWisse at 5:59 AM on February 5, 2019


So the premise is that there are statements where (1) people are very likely to be highly committed to their opinion on them and (2) what that opinion is is completely arbitrary -- there's no meaningful correlation between your opinion and your knowledge of the underlying facts, or your general moral outlook. The second half of that isn't explicitly stated, but it's implied -- the programmers on both sides of the technical disagreement are described as competent, the disagreement is surprising and unresolvable by any objective means.

That second premise is just idiotic.
posted by LizardBreath at 7:27 AM on February 5, 2019 [2 favorites]


I mean, there are lots of stories where language is a virus or ideas are directly fatal, of which I think this is a subset.

This feels sort of instinctively unsuccessful, though, as one of those stories, because, e.g., Kaepernick and Kavanaugh are on their face not arbitrary wedge issues that irrationally drive people into rages.

They are issues on which one's feelings are intimately connected to one's personal development, and which cause perfectly explicable responses.

The idea that (e.g again) women or people of color who are emotionally affected by these issues and their relationship to prior trauma are being manipulated by a superior machine intelligence is maybe the most Internet thing I have heard in a while.
posted by running order squabble fest at 7:42 AM on February 5, 2019 [6 favorites]


Also, runcibleshaw, on preview: I don't think authors are always their protagonists. There's a bit of good discussion of that in the "Cat Person Went Viral" MeFi post from last month.

I don't think so either. I just think that this particular author isn't self-aware or insightful enough to create a character that isn't just a proxy for his own opinions and observations.
posted by runcibleshaw at 7:50 AM on February 5, 2019 [1 favorite]


This feels sort of instinctively unsuccessful, though, as one of those stories, because, e.g., Kaepernick and Kavanaugh are on their face not arbitrary wedge issues that irrationally drive people into rages.

Right, and the frame story is explicitly set up to overcome that objection, in an idiotic way. That is, Alexander's implicit answer to "Disagreements about Kavanaugh aren't arbitrary and irrational" is "They seem rational to you, but they really are arbitrary. The way you know this is possible is that the disagreement about the technical, programming Scissor Statement I described must have been arbitrary and irrational -- there were equally competent and knowledgeable people on both sides, and they were completely unable to resolve it -- but it seemed rational to me. I'm therefore asking you to believe me about the existence of this sort of Scissor Statement on the basis of my personal experience."

And, of course, that part of the story is fiction, and completely unbelievable fiction as well. But without it the characterization of the real-world political issues as similarly arbitrary goes nowhere.
posted by LizardBreath at 7:53 AM on February 5, 2019 [5 favorites]


I’ve never seen anyone link to his blog without the discussion turning immediately to whether and what type of bad person he is. This really puzzled me the first time I encountered his writing and was admonished for sharing it (a book review in that case). Is he really that famous? Or is this just some kind of nerdview thing?

Innuendo Studios has been doing a series of videos about the rhetorical tricks of the right, and their latest is a pretty good dissection of the strategies that Alexander uses that aren't above board. There's a segment in the middle in particular that illustrates the problem - his "logic flow" regarding his "rationality" is inverted, with his "I am rational" stance informing his positions, rather than the reverse.
posted by NoxAeternum at 8:15 AM on February 5, 2019 [9 favorites]


Is he really that famous? Or is this just some kind of nerdview thing?

I mean, he's not like Jordan Peterson famous, so I guess it's more of a "nerdview" thing, buuuuuuut . . . y'know, lots of the Internet is nerd-friendly or nerd-adjacent (*waves* hello, MetaFilter!) And he's been around for quite a while (starting at LessWrong before firing up SSC), so it's not too surprising that, um, less-nerdy people have run into his ideas or writings. Not least because AFAICT (like a lot of right-leaning and/or "rationalist" bloggers) he's got a pretty hardcore group of vocal fans and followers, who have a tendency to jump to his defense, which tends to blow up discussions about his work even in non-nerd spaces.

If you're curious about people's objections to him, I think one of the "previouslies" here on MF (The Library of Scott Alexandria) has a lot of discussion about how and why people dislike his writings/positions/conclusions.
posted by soundguy99 at 9:19 AM on February 5, 2019 [5 favorites]


LizardBreath, my understanding is that it's a truism that the intensity of fights in science is inversely proportional to the strength of the relevant evidence.
(note, I am saying this, not having yet read the Scott Alexander essay)
posted by Baeria at 9:48 AM on February 5, 2019


It's a truism that people in science are committed to the point of passionate personal rage to factual propositions that seem self-evidently true to them but that there's no way to rationally evaluate the truth of? I'm not saying it never happens, but are you thinking of anything specific that looks much like what Alexander is suggesting?
posted by LizardBreath at 10:33 AM on February 5, 2019


I wasn't too thrilled with the way it was written (way too long, mildly confusing, a few troubling bits and trying too hard to establish his proximity to the original concept), and it didn't convince me that maximally controversial concepts/statements could be created algorithmically, but I do like the term "Scissor Statement" far more than "Wedge Issue" or even "Trigger Warning".

And having Scissor Statements, how close are we to discovering Rock Statements and Paper Statements?
posted by oneswellfoop at 10:37 AM on February 5, 2019 [4 favorites]


like a lot of right-leaning and/or "rationalist" bloggers

Thanks for the link, soundguy99. I pulled this wee bit from your response to highlight a gap between me and the conversation about Alexander that I had forgotten about until you said this. In my little subculture the opposite (that’s not the right word, but close) of “rationalist” is “empiricist.” I get the sense that that isn’t what the word means when Alexander uses it, like he’s not contrasting “reasoning from first principles vs synthesizing from data,” he’s making some kind of normative claim about smart people and dumb people. Is that right? (If that’s right, I think whoever ceded that framing to him made an error.)

To clarify, I didn’t mean nerd here as in member of Nerd Culture specifically, I meant nerdview in the sense of “speaking at the wrong discourse level for the person you’re talking to.” Like how your elevator speech about your thesis should be different depending on who’s on the elevator. Though I guess if it is nerdview those two things could overlap in this case.
posted by eirias at 11:17 AM on February 5, 2019


Yeah, the way this blog and others like it use "rationalist" is like your "empiricist" but without needing any actual evidence or tests or data because they already know everything.
posted by Scattercat at 11:54 AM on February 5, 2019 [2 favorites]


It's interesting that y'all read into the story's definition of a "scissor statement" that people disagree about them for stupid or arbitrary reasons, so you were unhappy when it presented important political issues as "scissor statements." I didn't really think the story was implying that.
posted by value of information at 12:12 PM on February 5, 2019


If anything, I find Alexander a... uh... naïve empiricist? When there were lots of studies coming out that supported genetic determinism, he became a genetic determinist. When larger, more rigorous studies came out saying that things weren't quite so deterministic as that, he changed his views (and acknowledged that he'd changed them). When studies came out saying that Head Start didn't do any good, he talked about how Head Start didn't do any good. When other studies came out saying that Head Start did good things which weren't looked for in earlier studies, he said that he had been wrong and updated his viewpoint.

I think that he often comes to wrong conclusions, but I do appreciate the way that he's willing to consider new evidence and change his mind. It's just that his evidence filter is tuned... I dunno... he trusts the studies he's read a bit too much, and takes leaps of logic from them that they aren't solid enough to support. It's like he knows but doesn't know that scientists studying human nature have moral agendas, even (especially?) when they say they don't.
posted by clawsoon at 12:40 PM on February 5, 2019 [2 favorites]


The whole premise of a "scissor statement" is that it "appears trivially true" but "somehow" people see it opposite ways. Then it applies this to things like "black people shouldn't get shat on for doing peaceful, unobtrusive protests" or "stealing children from families just because they're immigrants is wrong" and wants you to see how your knee-jerk political opinions are all just scissor statements. This is not a great premise and does not lead to any useful insights, whereas a story about an AI that makes unimportant and actually trivial arguments into death-duels would be amusing.
posted by Scattercat at 1:26 PM on February 5, 2019 [5 favorites]


It's basically another variant of the "tribalism" argument, which is a) old, worn out, and full of holes; and b) Alexander's stock in trade. He wants you to see him as the clear-eyed "rationalist" who is above the "tribal" fray - rather than as another self-interested actor.
posted by NoxAeternum at 2:16 PM on February 5, 2019 [4 favorites]


when Scott Alexander uses "rationalist" he's referring to a specific subculture. it's the same kind of entity as "the punk scene", or something like that. they have meetups, work on projects together, and often date each other and share group houses. and there's lots of scene drama and people can develop a sense of smug superiority towards outsiders.

frequently find myself annoyed by his blog but the writing about his actual area of expertise (psychiatry), and many of the book reviews, are pretty good.
posted by vogon_poet at 2:42 PM on February 5, 2019 [3 favorites]


Also he definitely is modestly famous at least among the type of person who reads or writes opinion columns and thinkpieces. Ross Douthat wrote an NYT column about this scissor statement short story recently, which is about as good as you might expect.
posted by vogon_poet at 2:46 PM on February 5, 2019 [1 favorite]


Setting aside the author for the moment: current machine-learning text generation doesn't come up with new ideas or phrases, because it uses a large volume of data to develop associations. It remixes existing content from that data based on what it has learned is related. So the protagonist's leap that there's a conspiracy, just because existing stuff was repeated back, doesn't really make sense if he knows how machine learning works.

So it would just combine a bunch of controversial posts in different ways, not come up with some new idea.
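To illustrate with a deliberately tiny sketch (a toy bigram chain, not any real system, and far simpler than anything in the story): a model like this can only ever emit word pairs it has already seen, so its "new" output is always a remix of its training data.

```python
# Minimal sketch: a bigram Markov chain can only emit word pairs that
# occur in its training data, i.e. it remixes rather than invents.
import random

corpus = [
    "tabs are better than spaces",
    "spaces are better than tabs",
]

# Build word -> possible-next-word associations from the training data.
follows = {}
for line in corpus:
    words = line.split()
    for a, b in zip(words, words[1:]):
        follows.setdefault(a, []).append(b)

def generate(start, n=4, seed=0):
    """Walk the chain from `start`, picking each next word from observed pairs."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        out.append(rng.choice(nxt))
    return " ".join(out)

# Every adjacent word pair in the output already occurs somewhere in the corpus.
print(generate("tabs"))
```

The output can look novel as a whole sentence, but every local association was already in the data, which is the point: it finds and recombines what was argued about, it doesn't dream up arguments from nowhere.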

I don't know if that's the author's failing, but being generous, lets say it's a protagonist issue.

Hell, I could have told you that accusations that a political figure committed sexual assault as a teenager, and Republican Supreme Court nominees, would be controversial, because both of those things have absolutely been discussed and are controversial on their own. That's the only prediction, other than the internal one he claims was intended as damage, that wouldn't have been discussed on Reddit and thus included in the data. And I absolutely believe that, if a machine learning algorithm were trained on Reddit, it theoretically could have found controversial tech decisions others had discussed and strung some together when asked to do so based on internal discussions.

So the protagonist... I think arguably he was always a bit unhinged in suggesting agency behind these things. And it's a first person writeup, so we don't have, say, his therapist saying, "he was actually so sweet, but he's changed," or something to indicate that this isn't usual, that he was sane to begin with instead of prone to magical thinking.

So maybe the story is equally about the tendency of people to want to explain things in terms of conspiracies, i.e. QAnon BS. To want to avoid talking about why people argue by suggesting that the argument itself is divisive.

Or maybe I'm reading too much into it.
posted by gryftir at 3:04 PM on February 5, 2019


his actual area of expertise (psychiatry)

Maybe this is a story from a patient.
posted by clawsoon at 4:29 PM on February 5, 2019


The whole premise of a "scissor statement" is that it "appears trivially true" but "somehow" people see it opposite ways. Then it applies this to things like "black people shouldn't get shat on for doing peaceful, unobtrusive protests" or "stealing children from families just because they're immigrants is wrong" and wants you to see how your knee-jerk political opinions are all just scissor statements. This is not a great premise and does not lead to any useful insights, whereas a story about an AI that makes unimportant and actually trivial arguments into death-duels would be amusing.

Yeah, I feel like I recognize the feeling that might have inspired this story.

I listen to Donald Trump and it is clear that even the stupidest person on earth can see that he is obviously a nasty, self-centered, dishonest fool. And then half the country votes for him.

This is a very disorienting feeling. I feel the temptation to question the entire foundations of epistemology.

But it doesn't take too much observation and thinking to find some better explanations for how this can happen than Alexander's weird sci-fi concept that my disagreement with Trump supporters is inexplicable and intractable because it's rooted in a weird bug in human cognition.
posted by straight at 5:32 PM on February 5, 2019 [5 favorites]


gryftir, now I desperately want Janelle Shane to train one of her weird little neural networks on argumentative Reddit threads.
posted by eirias at 5:47 PM on February 5, 2019


when Scott Alexander uses "rationalist" he's referring to a specific subculture. it's the same kind of entity as "the punk scene", or something like that. they have meetups, work on projects together, and often date each other and share group houses. and there's lots of scene drama and people can develop a sense of smug superiority towards outsiders.

More broadly, I think "rationalist" is, like "centrist" and "nice guy" (three qualities that can inhere in the same body!), a term that is used without edge by people who are self-identifying as such, and with considerable edge by people when describing others.
posted by running order squabble fest at 6:57 PM on February 5, 2019 [2 favorites]


"Nice guy," "rationalist," and "centrist" are also all qualities that many people want to ascribe to themselves but which, frankly, are all qualities that are more "in the eye of the beholder." Like, if someone feels the need to tell you they're a nice guy, it's best to get a second opinion.
posted by kewb at 3:11 AM on February 6, 2019 [2 favorites]


kewb, I don’t think that would be true in the traditional/non-normative use of the term “rationalist,” but thanks for clarifying why I find the normative use confusing and annoying!
posted by eirias at 4:11 AM on February 6, 2019


In the slatestarcodex comments, someone mentions the Monty Hall Problem, which is a whole nother category of controversial statement ("Switching is a superior strategy") which superficially looks like it could be acting as a Scissor Statement but which clearly has a right answer. Which again makes the whole concept of Scissor Statements seem like a blind alley.
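
(For what it's worth, the "clearly has a right answer" part is easy to check empirically. A quick simulation, my own sketch rather than anything from the thread:)

```python
import random

def monty_hall(trials=100_000, switch=True, seed=42):
    """Simulate the Monty Hall game; return the fraction of games won."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)   # door hiding the car
        pick = rng.randrange(3)  # contestant's first choice
        # Host opens a door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining closed door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials
```

Run it with switch=True and the win rate converges near two-thirds; with switch=False, near one-third. Switching wins whenever your first pick was wrong, which happens two times out of three.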

Given clear examples of people being passionately, divisively wrong in very different sorts of ways (slavery, the Monty Hall Problem), it seems very likely there are similar mundane explanations for whatever divisions he's tempted to ascribe to hypothetical glitches in human reasoning that allow certain statements to cause people to passionately choose a side for no reason at all.
posted by straight at 11:42 AM on February 6, 2019 [4 favorites]


It's worth remembering that a lot of the backlash against the original presentation of the Monty Hall Problem was fueled by good old sexism, as it was a woman presenting the "counterintuitive" answer.

Again, the real problem is that "tribalism" is a bad theory, and even Alexander is starting to see the fraying edges, as much as he'd like to deny it.
posted by NoxAeternum at 1:31 PM on February 6, 2019 [1 favorite]


In the slatestarcodex comments, someone mentions the Monty Hall Problem, which is a whole nother category of controversial statement ("Switching is a superior strategy") which superficially looks like it could be acting as a Scissor Statement but which clearly has a right answer.

Clearly. In fact, as some of the past discussion has indicated, it's been entirely clear to people who answered both ways that their answer was very much correct and people who believe differently are wrong-headed and maybe bad and should feel bad.

I am *not* saying that either view of the Monty Hall problem is equally defensible -- I know full well that switching is the superior strategy.

But I would say that Monty Hall is a pretty good example of how something can have a perfectly correct answer... and still function like a scissor statement. Perfectly correct doesn't mean accessibly clear (much less universally clear). The world is full of things that are counterintuitive depending on how you've trained your intuition on top of whatever hard-wired biases people might have.

there are similar mundane explanations for whatever divisions he's tempted to ascribe to hypothetical glitches in human reasoning that allow certain statements to cause people to passionately choose a side for no reason at all.

The speculation isn't that people choose a side for no reason at all. It's that the specific mechanism that causes the passionate choosing -- whether it's notorious hazards of casual reasoning about probability or temperamental bases for racism -- matters less once you assume that there are ways of mapping discussion spaces to find and classify flashpoints without having to have a conscious model of how the specific mechanisms cause people to see things clearly but differently. Which is something ML could semi-plausibly do, and probably wetware too.
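
On the "mapping discussion spaces to find flashpoints" point: you don't even need a model of the mechanism for a crude version. Reddit's own "controversial" sort, roughly as it appears in the site's open-sourced ranking code (treat the exact form as an approximation from memory), is just vote volume weighted by how evenly the votes split:

```python
def controversy(ups: int, downs: int) -> float:
    """Roughly reddit's 'controversial' score: many votes, evenly split."""
    if ups <= 0 or downs <= 0:
        return 0.0
    magnitude = ups + downs                       # how many people cared
    balance = min(ups, downs) / max(ups, downs)   # how evenly they split
    return magnitude ** balance
```

A comment at 1000 up / 1000 down dwarfs one at 1990 up / 10 down even though both drew 2000 votes. Scale that idea up and "find the scissors" is just "sort by controversial", which is presumably the joke in the story's title.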
posted by wildblueyonder at 1:37 PM on February 6, 2019


NoxAeternum: Again, the real problem is that "tribalism" is a bad theory

Like a lot of theories, "tribalism" has some value in explaining stuff that people do. I'd say it's more of an okay theory. Maybe a mediocre theory. Not bad. It can, though, like many theories that have some value in explaining stuff people do, be used badly, dismissively, for purposes of manipulation and control and gaslighting and all the other bad stuff we do with mediocre theories.

Let's say you want to talk about systemd and PHP. You'll find that tribalism has a certain explanatory power when it comes to how those discussions go. Drop "the UNIX philosophy" onto a Debian systemd mailing list and you'll see a scissor statement doing its work.
posted by clawsoon at 1:44 PM on February 6, 2019


Mention of Monty Hall reminds me of Newcomb's paradox, and of the Sleeping Beauty problem, which are similar problems involving decisions under simple probability distributions where people seem to arbitrarily end up very confident in one view or the other. the difference is they definitely don't have a clear answer yet.

these two problems are of great theological importance for the AI singularity people, and often discussed in the rationalist community. i could even imagine they might be the original inspiration for the idea of a scissor statement.
posted by vogon_poet at 1:46 PM on February 6, 2019


Mention of Monty Hall reminds me of boats that can sail up a river with no wind, or land yachts that can sail directly downwind faster than the wind. Until it was done, there were lots of people confidently proclaiming that it was impossible.
posted by clawsoon at 1:54 PM on February 6, 2019 [1 favorite]


But I would say that Monty Hall is a pretty good example of how something can have a perfectly correct answer... and still function like a scissor statement.

I think there's a significant difference between "A tricky question that causes arguments because it is presented in a way that some people will get it wrong but be sure they are correct" and the imagined "Scissor Statements" in the story, the point of which seems to be that there are supposedly some controversies where nobody is right so the best move is either to not take a side, or pick whatever side you want and make sure you ignore people on the other side because their disagreement with you is totally non-rational.
posted by straight at 4:59 PM on February 7, 2019 [2 favorites]




This thread has been archived and is closed to new comments