Google, democracy and the truth about internet search
December 4, 2016 9:25 AM   Subscribe

 
"i-s-g-o-o-g-l-e making us stupid" Apparently so.

I'm having a hard time seeing auto-complete as "a secret Nazi cell."
posted by SPrintF at 9:41 AM on December 4, 2016 [2 favorites]


Why did my Google search return nine out of 10 search results that claim Jews are evil?

Because once you strip out irrelevant punctuation and the short common word "are", which Google ignores, you just searched for "jews evil". It's not that mysterious.

ixquick (results from google, possibly less evil)
duckduckgo (popular, cute name)
searx.me (open source)
yandex (sometimes good for results others exclude)
posted by sfenders at 9:52 AM on December 4, 2016 [8 favorites]
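A minimal sketch of the query normalization sfenders describes (hypothetical; Google's real pipeline is far more elaborate): lowercase the query, strip punctuation, and drop short common "stop words" like "are", so a full question collapses to its content words. The stopword list here is invented for illustration.

```python
import re

# Invented stopword list, for illustration only.
STOPWORDS = {"are", "is", "a", "an", "the", "why", "did", "my", "that"}

def normalize(query: str) -> list[str]:
    """Lowercase, strip punctuation, drop stopwords."""
    words = re.findall(r"[a-z0-9]+", query.lower())
    return [w for w in words if w not in STOPWORDS]

print(normalize("Jews are evil?"))  # → ['jews', 'evil']
```

Under this kind of reduction, a naive question and a hostile keyword search become the same query, which is sfenders' point.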


Excuse me while I curl up in the corner and sob.
posted by medusa at 9:54 AM on December 4, 2016 [5 favorites]


Google autosuggests "Are Jews [evil]" and then returns a lot of results saying "Yup, they sure are!" and you are suggesting this is innocuous and something no one should be concerned about?
posted by jeather at 9:55 AM on December 4, 2016 [74 favorites]


Where are the people who read things like this and then decide to do something about it?
posted by amtho at 10:02 AM on December 4, 2016 [2 favorites]


Denial still ain't just a river in Egypt.
posted by saulgoodman at 10:08 AM on December 4, 2016 [1 favorite]


Sfenders, you missed the fact that Google suggested the question. It wasn't some random desire to ask if Jews are evil - although even so, one might hope that the results wouldn't be so unremittingly vile.
posted by Joe in Australia at 10:09 AM on December 4, 2016 [10 favorites]


So, what I got from that article is -- We're all living in a Black Mirror episode and don't know it.

But seriously, I knew Google and Facebook were problematic, but this article does an extraordinary job of laying out just how dangerous and insidious and fucking evil they both are. Kind of makes me want to grab the bourbon and assume a fetal position in a dark, remote corner of an uninhabited island in an uncharted sea.
posted by pjsky at 10:10 AM on December 4, 2016 [7 favorites]


Where are the people who read things like this and who are in any position to do something about it? As an average person living an average life, I don't know that there is anything I could do, or even anything a group of many like-minded average people, banded together, could do to actually change anything.

Either Google thinks this is something that needs to be changed (it is) or it doesn't (they're wrong).

If you have action suggestions that will actually make a difference, I'm keen to hear them.
posted by hippybear at 10:10 AM on December 4, 2016 [5 favorites]


you missed the fact that Google suggested the question.

Well, I didn't comment on it. It's obviously bad. Google auto-complete is bad in general, and I don't know how anyone finds it useful for anything. Phrasing your query as a question probably doesn't help lead it to good results though, since that's what people who don't understand search engines do.
posted by sfenders at 10:12 AM on December 4, 2016 [2 favorites]


Other suggested searches:

- Are Republicans stupid
- Is Christianity a cult
- Are evangelicals crazy
- Is Trump a fascist
- Is Jesus satan

And probably the most pressing:

- Are cats evil
posted by jpe at 10:12 AM on December 4, 2016 [8 favorites]


Where are the people who read things like this and then decide to do something about it?

This is probably the most important question coming out of the article. Who should do something about this? I argue that they should be in legislatures, and legislatures only (ok, maybe courts too down the line).

This is a web problem. The web is racist, very racist in parts and increasingly so. Google and Facebook are reflections of the web, and so they're also producing racist content as a result. The web is a problem so the search engines are a problem.

The question is who stops that?

Google and Facebook hate this situation, and for good reason.

Should an engineer in a cubefarm somewhere be making decisions about what to filter and when? What is or is not hate speech? What to suggest or not suggest? What if they, say, go overbroad and start filtering out those fighting hate speech too?

Companies don't want that responsibility. They shouldn't have that responsibility. They are unelected and responsive really only to their shareholders, not the public at large.

They already do this for, for example, pedophilia and copyright violations. They have clear legal mandates to do so, and so they do. They have no clear legal mandate for hate speech.

Should they invent one themselves? Who would that be accountable to? What oversight would any affected group have over that? What recourse would they have if blocked?

We as democracies tend to reserve the final say over rights to elected representatives, not private individuals or companies. I don't think somewhere in Mountain View or Menlo Park is the right place for decisions about fundamental rights. That should happen in places like the US congress.
posted by bonehead at 10:13 AM on December 4, 2016 [43 favorites]


Phrasing your query as a question probably doesn't help lead it to good results though,

It's a great way to gin up outrage, though.
posted by jpe at 10:15 AM on December 4, 2016 [4 favorites]


Maybe 15 years ago search engines only cared about keywords and it was appropriate to laugh at people who typed in whole questions, but now they look at the whole query.
posted by Joe Chip at 10:15 AM on December 4, 2016 [27 favorites]


In the interests of seeing how deep this rabbit hole goes, when I just tried this with "are white people," I discovered that Google also autocompletes that to "are white people evil." As a disturbing variation on this whole theme, when I try this with "are black people," it autocompletes to "are black people real."

We are all in serious trouble.
posted by Sonny Jim at 10:17 AM on December 4, 2016 [8 favorites]


> Where are the Google people who read things like this and who
> are in any position to do something about it?

I pasted your question, slightly modified, into the Google Search. You should try that.

The first answer I got was "Donald Trump"

The second answer was "Google head of people operations Laszlo Bock decides ..."

Give them a call ....
posted by hank at 10:17 AM on December 4, 2016 [4 favorites]


"But seriously, I knew Google and Facebook were problematic, but this article does an extraordinary job of laying out just how dangerous and insidious and fucking evil they both are."

You realize that Google's suggested searches are based on what other people are searching for? Why attack Google for the fact that people are googling for "are jews evil?"
posted by I-baLL at 10:18 AM on December 4, 2016 [13 favorites]


Looking around to check on some things I thought I remembered and found an article from 2011 on how autocomplete works, with some discussion of efforts to prevent it suggesting hate speech.
posted by dilettante at 10:20 AM on December 4, 2016 [4 favorites]


Saying that it's just the algorithm is missing the point of the article. Algorithms aren't exempt from being questioned just because they aren't sentient, or because undesirable results are an unintentional side effect of the design.

It's a long article that touches on many different aspects of this problem. But one of those points is that when algorithms become this influential, we have to consider the effects of their output. Racist inputs can lead to racist outputs. When they become targets of gaming the problem is magnified.

Like, seriously, if you read this article and think the issue is that "this is how the algorithm works" is a cogent response to the points it makes, you haven't really understood it.
posted by Kutsuwamushi at 10:21 AM on December 4, 2016 [72 favorites]


Why attack Google for the fact that people are googling for "are jews evil?"

because a glitch in Google's tech is being exploited to offer up slanted and sinister search results. I mean, I'd put the blame in Donald Trump's lap if I could but he's not the man with the algorithms.
posted by philip-random at 10:23 AM on December 4, 2016 [4 favorites]


Maybe no more than two years ago, Google had some cute commercials in which its algorithms were a man at a desk. The motif of the gags was taboos. Their philosophy, that data is amoral, is difficult to argue against.

How long ago was it that so many argued China's refusal of Google and Facebook was a measure of its suppression of speech? The hubris of western corporations, to pawn off the surveillance that social media became as a technology all countries should adopt, is impressive.

On Preview: No, it's not really a "long" article and what anyone defines as "influential" is the core contention.
posted by lazycomputerkids at 10:23 AM on December 4, 2016 [1 favorite]


"Like, seriously, if you read this article and think the issue is that "this is how the algorithm works" is a cogent response to the points it makes, you haven't really understood it."

I didn't say that. I said that the reason why "are jews evil" appears as an autocomplete is because a lot of people are googling for "are jews evil". It's like you guys think that the autocomplete is the bad thing as opposed to the fact that enough people are searching for "are jews evil" to make it one of the most popular suggestions.
posted by I-baLL at 10:24 AM on December 4, 2016 [15 favorites]


It's also entirely possible that the responsible thing for Google to do is to not add searches like that to its algorithm to be returned as a popular search.

Just because a lot of people are doing a thing doesn't mean it needs to be suggested to others to do the same thing.
posted by hippybear at 10:27 AM on December 4, 2016 [31 favorites]


How is showing what others have done a suggestion to do the same?
posted by lazycomputerkids at 10:29 AM on December 4, 2016 [1 favorite]


You realize that Google's suggested searches are based on what other people are searching for? Why attack Google for the fact that people are googling for "are jews evil?"

Well, I-baLL, because I read the article. It very clearly delineates all the ways Google and Facebook deflect responsibility. "It’s the equivalent of going into a library and asking a librarian about Judaism and being handed 10 books of hate." - Danny Sullivan, the founding editor of SearchEngineLand.com.

Is bias built into the system? Does it affect the kind of results that I was seeing? “There’s all sorts of bias about what counts as a legitimate source of information and how that’s weighted. There’s enormous commercial bias. And when you look at the personnel, they are young, white and perhaps Asian, but not black or Hispanic and they are overwhelmingly men. The worldview of young wealthy white men informs all these judgments.”

I'll stop, because there's no need to copy and paste huge sections of the article. How can it not be clear these companies are complicit in the perpetuation of horrible racism unless you just didn't bother to read the article?
posted by pjsky at 10:30 AM on December 4, 2016 [19 favorites]


The funny thing about the "it's just an algorithm, it can't be helped" excuse is that my android (Google) phone's autocorrect is perfectly happy to tone-police my text messages. I'm not prepared to accuse Google specifically of anti-Semitism (a word autocorrect enthusiastically suggests) but not de-prioritizing hate speech is a choice.
posted by klanawa at 10:30 AM on December 4, 2016 [1 favorite]


But one of those points is that when algorithms become this influential, we have to consider the effects of their output. Racist inputs can lead to racist outputs. When they become targets of gaming the problem is magnified.

What the article doesn't discuss is who would make those calls. I would be really, really unhappy if that were just Google or Facebook.

Banning automatic suggestions for articles or searches altogether is one possibility, but that choice needs to be made by someone who isn't in a PR position at a private company somewhere.
posted by bonehead at 10:32 AM on December 4, 2016 [4 favorites]


Google doesn't autocomplete everything -- they refuse to autocomplete for piracy-related terms (or refused, I don't know what they do now).

(NB I am doing all this on a separate, private, non-google browser.)

Are Christians doesn't autocomplete evil.
Are white people comes up with a lot of social justice stuff.
Are black people doesn't autocomplete evil.
Are Muslims autocompletes bad and comes up with the predictable hate stuff (and one "how to stop being islamophobic" article).
Are women autocompletes evil with the predictable results.
Are men doesn't autocomplete evil.

Google wrote the algorithm to decide what searches to autocomplete. They have their algorithm to decide what pages to promote. These algorithms were written by humans with biases, and they have predictably biased results. They could be fixed, if google cared to do so.
posted by jeather at 10:33 AM on December 4, 2016 [39 favorites]
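jeather's point, that which prefixes get suggestions is a design choice, can be sketched in a few lines: a toy autocomplete ranks logged queries by frequency, and a denylist applied before display is just one more line of code. Everything here (the query log, the denylist, borrowing the thread's "are cats" example) is invented for illustration, not Google's actual system.

```python
from collections import Counter

# Invented query log for illustration.
query_log = (["are cats evil"] * 5
             + ["are cats liquid"] * 3
             + ["are cats nocturnal"] * 2)

def suggest(prefix, log, denylist=frozenset(), k=3):
    """Rank queries matching the prefix by frequency, filtering the denylist."""
    counts = Counter(q for q in log if q.startswith(prefix))
    return [q for q, _ in counts.most_common() if q not in denylist][:k]

print(suggest("are cats", query_log))
# → ['are cats evil', 'are cats liquid', 'are cats nocturnal']
print(suggest("are cats", query_log, denylist={"are cats evil"}))
# → ['are cats liquid', 'are cats nocturnal']
```

The difference between the two calls is the whole argument: suppressing a suggestion is technically trivial, so showing it is a choice, not an inevitability.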


The 'agnostic', if you will, effect of the Google algorithm is one thing (and certainly worth discussing) but this article was the second thing to bring the analytics and political influencing firm Cambridge Analytica to my attention today - the first being this video that I saw earlier on Twitter about the 'psychographic' techniques they use to very precisely and psychologically target advertising at voters. In this video their CEO claims that they have 4-5,000 pieces of information about every individual in the US to use to model how best to influence them.

Thinking of the combined effect that this kind of targeted social engineering and the algorithmic influencing discussed elsewhere in the article is having on our public discourse and democracy is incredibly chilling. Thinking of how to address it is utterly overwhelming.

Like others I am eager to know who and how to do something about it.
posted by roolya_boolya at 10:34 AM on December 4, 2016 [17 favorites]


lazycomputerkids: How is showing what others have done a suggestion to do the same?

Are you really asking this? It's the same principle that all social media operates on. It's why Facebook shows you all the random crap your friends liked and commented on. It's peer-pressure to get you to engage with the content, and more importantly, the platform. Whether it's Google's suggested searches, or Facebook's activity feed, the entire point is to get you—you specifically—to engage, click, and do more of the same.
posted by SansPoint at 10:37 AM on December 4, 2016 [13 favorites]


@SansPoint

Rhetoric such as "are you really" and use of the second person to "explain" to me my own usage is laughable.
posted by lazycomputerkids at 10:39 AM on December 4, 2016


"Well, I-baLL, because I read the article. It very clearly delineates all the ways Google and Facebook deflect responsibility. " It’s the equivalent of going into a library and asking a librarian about Judaism and being handed 10 books of hate." - Danny Sullivan, the founding editor of SearchEngineLand.com. "

See, that's my issue with the article. It's all over the place. Danny Sullivan is obviously talking about search results; however, if we're talking about the autocomplete of "are jews evil" then it's a different story.

Hmm, actually, here's an idea: Open up google, and type in for any group:

are groupx

and see the autocomplete suggestions

then open up an incognito/privacy mode window and try the same thing.

I'm saying this because we all seem to be getting different results for different examples mentioned in this thread.
posted by I-baLL at 10:41 AM on December 4, 2016 [2 favorites]


lazycomputerkids: But it's true. All of these things are designed with the direct intention to keep their users using the product. For Google, it's searching and clicking links. For Facebook, it's posting, liking, and commenting. They wouldn't show you what everyone else is doing if it didn't work.
posted by SansPoint at 10:42 AM on December 4, 2016 [7 favorites]


And we have futurists telling us that Google is on the verge of the next technology revolution with their implementation of AI, which presumably will rely on some of the massive pile of data created by its search properties.

That AI interface is gonna be hella racist and sexist.

#microsofttwitterbot
posted by bobloblaw at 10:42 AM on December 4, 2016 [12 favorites]


Google doesn't autocomplete everything -- they refuse to autocomplete for piracy-related terms (or refused, I don't know what they do now).

Because there is law explicitly telling them not to do so. The same is true for legal obscenity like child porn.

Here's my problem: you're assuming the person making the calls has the same ethical compass you do.

What if they don't? What if, for example, they're evangelicals who assume that it's their civic duty to extinguish all mention of homosexuality? That's not a stretch in the US.

That's why companies or single individuals should never be making these choices.
posted by bonehead at 10:44 AM on December 4, 2016 [17 favorites]


@SansPoint...Qualifying something as true isn't truth. I'll repeat: What defines "influential" is the core contention of a relatively important article that repeated as much as it conveyed.
posted by lazycomputerkids at 10:44 AM on December 4, 2016


I am absolutely dreading the next four years.
posted by SisterHavana at 10:45 AM on December 4, 2016 [4 favorites]


lazycomputerkids: So you don't think people are influenced by seeing the behavior of others?
posted by SansPoint at 10:45 AM on December 4, 2016 [6 favorites]


I-baLL, I would disagree with your opinion of the article, that it is "all over the place." I thought it was extremely well written. But, ok, you think otherwise. I would disagree too that the article is only about "autocomplete." That is one component of the article. If that's the only part of the article you are interested in, I'd say you're missing the bigger picture.
posted by pjsky at 10:47 AM on December 4, 2016 [4 favorites]


Here's my problem: you're assuming the person making the calls has the same ethical compass you do.

What if they don't? What if, for example, they're evangelicals who assume that it's their civic duty to extinguish all mention of homosexuality? That's not a stretch in the US.

That's why companies or single individuals should never be making these choices.


Yes, probably they should not.

But I am saying: Google can be explicitly faulted for showing this autocomplete and giving these results. This is 100% a result of decisions the company has made, and is continuing to make.
posted by jeather at 10:47 AM on December 4, 2016 [1 favorite]


@SansPoint...conflation isn't argument...of course I comprehend mirror neurons are essential to development and learning...my response to hippybear you're stepping into is not your own.

So, again, my question: How is showing what others searched for a suggestion to do the same?
posted by lazycomputerkids at 10:48 AM on December 4, 2016


pjsky:

" That is one component of the article. If that's the only part of the article you are interested in, I'd say you're missing the bigger picture."

That's actually what I mean by it's all over the place. It is covering multiple topics but doesn't make it clear which parts of the article refer to which topic, like the Danny Sullivan quote for example.
posted by I-baLL at 10:50 AM on December 4, 2016


PageRanking algorithms and autocomplete aren't just agnostic -- they're amoral, with only one goal -- increased revenue to Google (or the relevant ranking/promoting system). Given that they must deal in content with moral implications, this is at best problematic.

For example, Google autocomplete suggests "are evangelicals crazy" and "are catholics christians"

Neither strikes me as a particularly useful place to start a conversation. But also, surely this is not protected public speech -- it is occurring on a private platform, the same way that speech is (also problematically) not protected in a shopping mall. But if you pull that thread to its logical conclusion, then we would have walled garden search engines, with separate ones for white racists. Is that better for national polity? I dunno. Isn't it necessarily true, however, that part of Google's revenue stream derives from (traffic to and from) hate sites? Isn't that a more fundamental problem?
posted by PandaMomentum at 10:50 AM on December 4, 2016 [5 favorites]


lazycomputerkids: How is showing you an advertisement not a suggestion to buy the product?
posted by SansPoint at 10:50 AM on December 4, 2016 [1 favorite]


Oh, look, an analogy...how logical?
posted by lazycomputerkids at 10:52 AM on December 4, 2016 [1 favorite]


lazycomputerkids: I give up. You tell me why it's not, rather than just automatically dismissing any attempt at my explanation, then.
posted by SansPoint at 10:53 AM on December 4, 2016 [4 favorites]


Prove a negative...uh huh
posted by lazycomputerkids at 10:53 AM on December 4, 2016


lazycomputerkids: There is quite a bit of research about how merely listing results in a particular order affects the choices of people who have never seen information on a subject previously.

One excellent article about this phenomenon is Robert Epstein's "The New Mind Control".

Another useful book-length reference about how people are affected by information (which goes well beyond the mere prioritization of information in lists, though it covers that, too) is Daniel Kahneman's Thinking Fast and Slow.
posted by mistersquid at 10:53 AM on December 4, 2016 [27 favorites]


lazycomputerkids: 'Cause what I'm seeing you say is "People cannot have their behavior influenced by computer algorithms" which is absolute bullshit and goes completely against how all these fucking companies make every last MOTHERFUCKING PENNY ON THEIR EARNINGS STATEMENTS.
posted by SansPoint at 10:54 AM on December 4, 2016 [26 favorites]


It's like you guys think that the autocomplete is the bad thing as opposed to the fact that enough people are searching for "are jews evil" to make it one of the most popular suggestions.

The conversation isn't about how awful it is that people are racist. The conversation is about how the creators of algorithms should handle the fact that people are racist, and will feed their algorithms racist inputs.

If you think that the creators of algorithms don't have any responsibility to prevent or mitigate the racist output that results from racist input, then make a good argument for that. But don't keep falling back on, "but the output is racist because the input is racist." We all know that. All you're doing is making an argument that shows you don't know what the conversation is even about.
posted by Kutsuwamushi at 10:55 AM on December 4, 2016 [58 favorites]


Mod note: lazycomputerkids, I'm not sure what's up here exactly but you're kinda hyper-engaging here in a way that's not great for conversation. You've probably made your case as much as is going to be doable at this point, so go ahead and give this thread a break so the conversation doesn't continue to be a taking-on-all-comers thing. Likewise, everyone responding please go ahead and rerail.
posted by cortex (staff) at 10:55 AM on December 4, 2016 [14 favorites]


Asking why google is racist is like asking why toilets have poop in them. The internet is a network built on the back of porn, with the original intention (when it was invented by the DoD) of coordinating the simultaneous mass murder of most of the population of the planet. The fact that we can get anything uplifting and good out of it at all is more of a bug than a feature.
posted by blue_beetle at 10:56 AM on December 4, 2016 [10 favorites]


Kutsuwamushi I would like to go on record as favoriting your comment x100,000
posted by pjsky at 10:57 AM on December 4, 2016 [1 favorite]


I will refrain from linking the nearly 4-hour-long documentary about manipulation of populations through media The Century Of The Self, but I do recommend watching it. Watch it if you have never seen it, and watch it again if you haven't watched it for a while. It's illuminating.
posted by hippybear at 10:57 AM on December 4, 2016 [12 favorites]


So much sealioning on this thread, including a great deal coming from an individual who seems to do this perennially. Flagging the hell out of it, and recommending you do the same.
posted by adamgreenfield at 10:58 AM on December 4, 2016 [9 favorites]


Flagging the hell out of it, and recommending you do the same.

You're manipulating the algorithms!

I look forward to a breathless Guardian expose about this shocking abuse.
posted by jpe at 11:02 AM on December 4, 2016


I'm not sure how much the mods like being called "algorithms", but there we are.
posted by hippybear at 11:08 AM on December 4, 2016 [10 favorites]


I think Google and Facebook are not the most important actors here. In many ways, they have provided a garden, and people are growing a very particular kind of vegetable in there. Cambridge Analytica, mentioned in the article, seem to be influential players of a rather predictable sort. They use a specific kind of prediction, based upon a specific kind of psychological model, and it proves effective at pumping up the white rage. They probably don't even intend any such thing (though they are mighty proud of themselves for helping the trumpfucker get elected).

I have a personal and professional dislike of the kind of model they use, which masquerades as science, showing off its statistically significant results, when it is actually a specific kind of ideological creation. If you know the article about WEIRD people, you know what I mean. (Summary, article)
posted by stonepharisee at 11:12 AM on December 4, 2016 [8 favorites]


The screencap of an "answer" highlighted in bold is particularly egregious to me. Maybe because, last night, one such algorithmically highlighted answer resulted in my placing a call to 911 while trying to activate an iPhone without a sim card. At any rate I would love to see the Anti-Defamation League highlighted at the top of a question like "Are Jews evil?" If that's too much of a moral stance, start with something more anodyne like "How to kill yourself" --> suicide help line.
posted by Lorin at 11:13 AM on December 4, 2016 [1 favorite]


If that's too much of a moral stance, start with something more anodyne like "How to kill yourself" --> suicide help line.

IIRC Google actually does this.
posted by jason_steakums at 11:16 AM on December 4, 2016 [1 favorite]


IIRC Google actually does this.

Not for me. Maybe in the US? Another example of how the filter bubble complicates things I guess.
posted by Lorin at 11:19 AM on December 4, 2016


This all seems about right. Reading over all the articles that I've been seeing that are more or less asking why there seems to be a wave of insular nationalism and tribalism welling up in a wide variety of locations, one answer that has to come up in terms of cultural factors is the internet. Technology tends to have an amplifying effect on human nature, the internet has always allowed members of isolated subcultures to find each other and influence those who might be influenceable more easily. Until this year, though, I don't know if I'd really reckoned with those who might find the microtargeted influence potential and decide that one of the outcomes to their advantage would be stoking bigotry and tribalism.

bonehead is right that we need a response in law that translates to responsibility in our technological systems, but there are several problems with this: on a practical level this isn't going to be an option for at least a few years, because the house of representatives has already been gerrymanderigged, and we aren't going to get it unhacked with anything less than a concerted, heroic, and lucky effort between now and 2020. So, the other option is that we have to figure out counter-operations using what's in place now. Chad and Brad and their product managers and owning class might wake up and realize that the world could burn if they're not careful, and we should probably be trying to persuade them on any point we can, but in the meanwhile, we should be learning to use the tools as well as anybody and figure out techniques for shaping how the systems respond and our own micromessaging. The information wars are here, the velocity of fascism might be something we find has exceeded what we can and should do in conventional political campaigning, and we probably need to start campaigning on this battleground now and as smartly as possible.
posted by wildblueyonder at 11:20 AM on December 4, 2016 [9 favorites]


I try this with "are black people"

For me, the following don't autocomplete:

Are whites
Are blacks
Are Christians

but Jews, Muslims, Buddhists, Hindus all do.
posted by Johnny Wallflower at 11:22 AM on December 4, 2016 [4 favorites]


we can advocate/lobby for Google to change algorithms, like Safiya Noble does
posted by mollymillions at 11:30 AM on December 4, 2016 [5 favorites]


For me, the following don't autocomplete:

Are whites
Are blacks
Are Christians


Although, interestingly enough, "are Christian" (no s) does autocomplete, as does "are Christianity," leading with "are Christianity and Catholicism the same?" That's a very evangelical/fundamentalist Protestant question...
posted by thomas j wise at 11:44 AM on December 4, 2016


The Safiya Noble link is AWESOME! Thank you mollymillions
posted by pjsky at 11:45 AM on December 4, 2016 [1 favorite]


It's the Tay Corollary. If you can make an AI you can teach it to be racist.
posted by Talez at 11:45 AM on December 4, 2016 [2 favorites]


I'm personally of the belief that the rise in popular use of the internet, information technology and the 24 hour infonews cycle has brought us to where we are today. Most people cannot reliably parse information presented to them and separate truth from fiction, noise from meaning, or even know that such activities are necessary. All this information, most of it opinion, much of it lies, and only a little of it of clear definite positive value is helping to create and maintain world views that are incredibly damaging to everyone. People are simply not able to take in this flood of information and make sense of it.

The algorithms Google uses I'm sure could certainly be improved and it is troubling that any company or companies can have this much effect or even control over large amounts of the population, but they are not the heart of the problem, just another contributing factor to it. If we can't find ways to help people better understand information and parse how info is presented, then we aren't going to have much luck with any adjustments to searches or whatever else, since it's the viewers who are the only real line of defense against the spread of misinformation and hate. That isn't to say I oppose trying to better search engines or "news" placements on Facebook or whatever else, just that we can't rely on those at the top of the chain to best look out for those at the bottom since their perspectives are not aligned financially or informationally.
posted by gusottertrout at 12:24 PM on December 4, 2016 [8 favorites]


Can confirm that in incognito mode, for me in a non-US country, "how to kill yourself" returns the suicide hotline number as the top result.

By top result I mean the same way 3x5= will return a highlighted answer at the top of the page.

There are plenty of searches where Google jumpstarts you with information. Many recipes, for example.

So no, it's not a stretch to have anti-racism resources promoted as results to relevant questions.

And I just checked various variations of "is the Holocaust a hoax" and the top results are all denial websites.
posted by Cozybee at 1:45 PM on December 4, 2016 [5 favorites]


And neither Bing nor Yahoo produce the same results.

It's a feature, not a bug.
posted by jrochest at 1:48 PM on December 4, 2016 [2 favorites]


I haven't read it yet, but I definitely will buy and read Weapons of Math Destruction, which speaks directly to this issue: the hidden biases and subjectivity in "objective" algorithms. It's a pervasive problem, especially in the financial industry but in many other sectors as well. The people and institutions who make these tools should be exposed to the light and held accountable. Claiming that an algorithm is morally neutral just because, once designed and implemented, it runs all by itself without further human input is an abdication of responsibility. And claiming, "I didn't say that, the algorithm did" is moral cowardice.
posted by mono blanco at 2:00 PM on December 4, 2016 [7 favorites]


I have been helping to build the internet for almost 25 years. This is not what I expected to create.
posted by rmd1023 at 2:16 PM on December 4, 2016 [16 favorites]


You realize that Google's suggested searches are based on what other people are searching for? Why attack Google for the fact that people are googling for "are jews evil?"

And yet they do manage to prevent any porn stars names from autocompleting, which I assume people are also searching for.
posted by dng at 2:19 PM on December 4, 2016 [15 favorites]


> Maybe no more than two years ago, Google had some cute commercials in which its algorithms were a man at a desk. The motif of the gags was taboos. Their philosophy is data is amoral and it's difficult to argue against.

Those were the opposite of Google ads. They were a series of College Humor sketches mocking Google and the ways people interact with it.
posted by ardgedee at 2:43 PM on December 4, 2016 [6 favorites]


The crux of the problem is that the internet was supposed to make us more educated, not less. When major news sites are linking to ads like "Hillary Clinton, Guilty of Molesting Children", we are in a world of trouble. When Google continually spews forth links to garbage sites ad nauseam in response to common questions, the internet is broken. When advertisers control what news, products, and events you see, something is wrong. They're only interested in seeing their own reflection, no matter how false.

I just read a NYTimes article where the comments complained about Blacks getting "freebies" and all the girls and jobs, and that ethnic college clubs support and create racism toward whites but no one will do anything about it. When is someone going to call these people out on their bullsh*t? Nothing they say is true. It's all as valid as the "war" on Christmas. Unfortunately, the internet helped them fall for it hook, line, and sinker. It's heartbreaking.
posted by xammerboy at 2:55 PM on December 4, 2016 [4 favorites]


I admit that I am only barely grasping the nuances of how algorithms work, but - if I understand correctly, the reason why those autocompletes are there is because some users created sites that made those claims and then boosted them in Google rankings. Do I understand correctly?

If so, then it strikes me that maybe the way to fight fire with fire is to make a gabillion "the Alt-right lies" sites and the like, and then boost them in Google rankings so that whenever anyone types in "alt-right" that's what will come up.
posted by EmpressCallipygos at 2:57 PM on December 4, 2016 [1 favorite]


If you've ever gone to the Trump Reddit, about a quarter of the posts are "help make this statement the number one result on Google" or "This will be the first election won through memes." They know what they're doing.
posted by xammerboy at 3:01 PM on December 4, 2016 [13 favorites]


"are jews" does not suggest "evil" for me either under my account or in incognito mode. Recent change or just location/account dependent?
posted by markr at 3:01 PM on December 4, 2016


It looks like that particular autocomplete was just removed.
posted by jeather at 3:03 PM on December 4, 2016 [5 favorites]


Oh man my head is spinning and I think it all comes down to scope.

But to start, I didn't see this anywhere so here's a link to report problems to google:

https://support.google.com/websearch/answer/6223687?hl=en&ref_topic=3285072

I'm having trouble with stating that 'Google just needs to go take care of this' and feeling like 'job done, liberal internets.' There is no definition of scope at all in terms of user, time intervals, tuning, etc.

When Joe Lieberman demanded that Google take on the permanent job of removing content he thought was dangerous, I remember my personal reply being to have him use the tools already in place to remove content instead of creating a new opaque one size fits all mechanism to keep one government or another happy. Now I see people I sympathize with asking for something similar.

So I know machine learning is a thing, and these things have to be trained. How many situations does this service need to be trained for? And what does the Venn diagram look like for responsibility here? Government, commercial interests, personal prefs?

I would rather that GoogleFacebookEtc give us a channel to help weight terms and combinations of terms, and explain how these will be used for different scopes from personal to national to global.

Let's pretend we have this channel. Scope is key here in terms of how well the problem can be solved, and for the relative satisfaction of different communities that would score the solution and attempt to influence it further. The problem might be that personalized results will be much easier than global results, and the problem will be airbrushed and left to fester rather than be lessened.

Google will be able to solve the problem for the individual with some ease, I would guess, the same way Amazon would, and this would be great for personal trigger warnings etc., but this means MeFites getting 'Jews are $something' and someone else getting 'Jews are $another', based on purchase/search/creepy-external-fact-db history. Problem 1: racists still gonna racist, so they get their evil on. Problem 2: you are less aware that 1 is happening, and didn't this siloing of information fuck us up already?

The question then becomes how far you think your prefs should influence the whole, but then we are back to square one, and I can't keep up any perfectly academic distance and I then have sadfeels. More and more they are of the pitchfork variety, and I check myself and go to the top of the comment again. No joy yet.
posted by drowsy at 3:06 PM on December 4, 2016 [5 favorites]


The Wikipedia model works fine. Flag a site or result. Have that site reviewed. Remove from Google results.
posted by xammerboy at 3:10 PM on December 4, 2016


I believe in free speech. I'm okay with Nazis giving speeches, but I am not okay with them dressing up like reporters and stating factual falsehoods as leading news stories.
posted by xammerboy at 3:13 PM on December 4, 2016 [5 favorites]


xammerboy: The Wikipedia model works fine. Flag a site or result. Have that site reviewed. Remove from Google results.

Except that it doesn't work fine. It's extremely open to abuse, as are upvote/downvote systems like Reddit's. All you need is a bunch of dedicated people to flag anything they don't like / upvote anything they do like / downvote anything they don't like, and they've completely wrecked the system.
posted by SansPoint at 3:17 PM on December 4, 2016 [9 favorites]


Can someone help me square this quote:

"...And when you look at the personnel, they are young, white and perhaps Asian, but not black or Hispanic and they are overwhelmingly men. The worldview of young wealthy white men informs all these judgments.”

with the fact that Google and Facebook are companies founded and headed by 2 (and a half) Jews? Does the author (or Frank Pasquale, who is being quoted here) believe that Larry Page, Sergey Brin and Mark Zuckerberg are all complicit in promoting the trope that "Jews are evil"?
posted by mtVessel at 3:18 PM on December 4, 2016 [3 favorites]


I wonder if the dent this stuff made in the 2016 election was enough to put it over the top...

Cambridge Analytica said its 4,000 different online ads for Trump were viewed 1.5 billion times by millions of Americans. The data science team could glean information about users to deliver pitches "based on the issues they care about," Oczkowski said.

"I can take a predictive model of a potential Trump voter that I can match to Pandora, and match it to the kind of music they listen to and target them on Snapchat based on the filters they use," Wilson said.

A report by Bloomberg/BusinessWeek said the Trump campaign used micro-targeting to deliver negative messages on Facebook -- reminding them of Clinton's comments and "super predators" seen as disparaging to African-Americans -- in an effort to depress turnout.
posted by xammerboy at 3:23 PM on December 4, 2016 [1 favorite]


I get 'are jews white/a race/a religion' fwiw.
posted by Sebmojo at 3:23 PM on December 4, 2016 [2 favorites]


I'm not getting any of these autocomplete results at all. Did Google already change something? I've tried in my standard browser, and in an incognito browser, as well as on my phone.

For "Are Jews..." I get "a race", "Christian" and "white", which are also problematic, of course, but not "evil". I do get one of those boxed special content top answers, but it goes to the Jerusalem Post and is about whether Judaism is considered a religion or race.

For "Are women..." I get "stronger than men" and "equal to men".

Actually, when I start typing "Are wom..." it autocompletes with "Are wombats nocturnal?", which is a much better question.
posted by lollusc at 3:28 PM on December 4, 2016 [4 favorites]


Yeah, Google literally just changed the auto-complete results for these words. But that still doesn't change the garbage results when you search for the questions.
posted by xammerboy at 3:31 PM on December 4, 2016 [2 favorites]


I think that asking "should Google offer 'Are Jews Evil?' as a suggested search?" and "Is it okay that the answer to that search is overwhelmingly yes?" are two very different questions.

I don't think it's really a problem that searching "Are Jews Evil?" gets you a bunch of anti-semitic sites. There are all kinds of questions that presuppose their answers, and this is not an inherently search engine related problem. Once you are asking "Are Jews Evil?" you are already 90% of the way to "of course they are." And as much as I wish there weren't so many racists and that they weren't using the Internet to spread their hate, I also don't think it's Google's job to be scrubbing them from their search results.

But offering that as a suggested search is unconscionable and Google should probably be rethinking the value of suggested searches in the first place.
posted by 256 at 3:34 PM on December 4, 2016 [6 favorites]


And people who google how to commit suicide are already down the way to killing themselves, but Google takes it upon itself to put help lines up at the top of the search results anyhow.
posted by jeather at 3:45 PM on December 4, 2016 [9 favorites]


mtVessel: There's an attitude among Valley types that technology is a neutral tool, and that all they have to do is create the tool, let people have at it, and watch the money roll in. That someone might be using the tool to promote worldviews and abuse people like them, well, hey, not their problem. (This is due to a combination of privileges involving being white men, having wealth, and being tech-savvy.) After all, the other side of the argument can use those same tools to do the same thing, right?

Problem is tech isn't neutral, and it never is. Page, Brin, and Zuckerberg don't care because they're not going to suffer its worst effects any time soon, if ever. Money goes a long way for that.
posted by SansPoint at 3:49 PM on December 4, 2016 [4 favorites]


See if I was really concerned about Nazis taking over the last thing I would think wise would be to establish powerful new methods & norms of censorship.

That sort of thing would make more sense if I was concerned with my fading authority as part of a waning priestly caste.

This is our internet. Not Google’s. Not Facebook’s. Not rightwing propagandists. And we’re the only ones who can reclaim it.

Not yours either, never was, fuck off.
posted by save alive nothing that breatheth at 4:09 PM on December 4, 2016 [7 favorites]


How about "Did the Holocaust happen?" being suggested after typing in "Did the Holo", and then returning this as the top result: Top 10 reasons why the holocaust didn't happen. - Stormfront

Two things really bother me. One is garbage results in return to factual questions. The second is the prominence of these results. When you are browsing books about the holocaust in a bookstore or the library, you may see some garbage books, but they likely do not make up the first shelf of books you look at. Moreover, the librarian will likely not shepherd you to a table discussion with Nazi extremists.
posted by xammerboy at 4:13 PM on December 4, 2016 [12 favorites]


save alive nothing that breatheth: The Nazis have as much of a right to be on the Internet as anyone else. Where it gets tricky is when the Nazis are so good at manipulating the ways people access the Internet that they can control the discourse. If you search for "Did the Holocaust happen?" as xammerboy did, you should not get Stormfront giving you 10 reasons why it didn't, because it is a solid historical fact that the Holocaust did happen. But because there's a bunch of folks on Stormfront who know how to abuse Google's algorithm, they can hijack that search and get their propaganda in front of people, which is insanely dangerous.
posted by SansPoint at 4:16 PM on December 4, 2016 [4 favorites]


It's not censorship to re-order search results in such a way that truthful results are on top. Wouldn't that make it a better product? Are Jews evil? No, here's some links to the ADL and some sites that debunk antisemitic bullshit. Breitbart is still there, somewhere on page 20. If people want to be misled they need to work for it.
posted by um at 4:20 PM on December 4, 2016 [9 favorites]


um: It depends who you ask, of course. Stormfronters have made a meme out of "disagreeing with us is censorship".
posted by SansPoint at 4:25 PM on December 4, 2016


Yeah but who gives a shit what Nazis think? The whole 'some may disagree' rebuttal falls apart when the 'some' turn out to be fascist white supremacists.
posted by um at 4:32 PM on December 4, 2016 [4 favorites]


um: I sure as fuck don't. But apparently, some people do, because that meme has got some serious legs.
posted by SansPoint at 4:38 PM on December 4, 2016


This was finally enough of a trigger for me to get around to switching to DuckDuckGo as my default search engine. I had been meaning to for a while, but just hadn't got around to it.

I just tested the "Are Jews" and "Are women" searches on it, and "are Jews" didn't autocomplete at all, and "Are women" autocompleted with "tom ford we are women not objects", which I'm okay with.

Also, "Did the holocaust happen" on DuckDuckGo has the top result as Wikipedia's "Holocaust Denial" article, followed by a bunch of sites about "Why did the holocaust happen", and the Stormfront article is, while on the front page, way down near the bottom.

Of course, switching search engines is not really a solution to the underlying problems - it just increases the gap between my experience of the internet and someone else's experience.
posted by lollusc at 4:54 PM on December 4, 2016 [9 favorites]


the reason why "are jews evil" appears as an autocomplete is because a lot of people are googling for "are jews evil".

That's not quite accurate, I suspect. It's not that a lot of people made that search query; it's that the vast majority of the people who asked a question of the form "are jews ...?", no matter how small their numbers, were asking about evil or some more commonplace but just as vile stereotype that either they'd heard about or were seeking to confirm. Any other, more benign reasons to type such a thing into a search engine come up even less often.

And then once you search for "are jews evil" there isn't much a search engine can do, if it's going to be honest to its mission, aside from giving you some links to pages that talk about how evil Jewish people are, almost all of which are going to suggest that the answer is very evil indeed, because wtf, it's not the kind of question you'd expect unless perhaps you're a lot more versed in the semantics of anti-semitism than anyone at Google is likely to be. So all they can do is remove such things manually from the auto-complete when they're complained about. Which they almost certainly should have done sooner, considering they already have a policy of removing some things, but that's the people to blame, not the algorithm or anyone gaming it.
posted by sfenders at 5:18 PM on December 4, 2016 [4 favorites]


Google's mission was to give you useful, accurate links. If you're claiming that Stormfront is useful and accurate, then I guess a bunch of results that link there are helpful. But their mission is not "follow the algorithm blindly", it is "create the best algorithm and prevent it from being gamed as much as possible", so changing their algorithm is within their mission statement.

The results page is not a neutral result that no one has responsibility for -- it is the entire point of their company.
posted by jeather at 5:30 PM on December 4, 2016 [15 favorites]


I just posted this on my Facebook page. One of my friends replied with a screenshot of a Facebook survey, that shows my post and this question:
To what extent do you think the link's title withholds key details of the story?
Not at all ○ Slightly ○ Somewhat ○ Very Much ○ Completely ○
posted by louche mustachio at 5:40 PM on December 4, 2016 [2 favorites]


This is mere minutes after me posting the link.
posted by louche mustachio at 5:41 PM on December 4, 2016


This conversation is moving me to go to all of my (Trump-voting, rust-belt white boomer) aunts and uncles and install ad-block on their computers. I won't tell them why, just 'here's a way not to get all of those annoying advertisements'. It might at least help.
posted by overhauser at 5:49 PM on December 4, 2016 [4 favorites]


You realize that Google's suggested searches are based on what other people are searching for? Why attack Google for the fact that people are googling for "are jews evil?"

Suggested searches are not a fact of life, and I bet you, like me, are old enough to remember when Google didn't have any suggestions, either as autocomplete or in the search results. They could, at their option, simply decide not to do it anymore, once they see what they are inadvertently doing.
posted by kenko at 5:55 PM on December 4, 2016 [6 favorites]


Surely, if Google was willing to bend its algorithm for "Santorum" (to the mild disappointment of so many people), then it can do something about searches on "Jews", "Muslims", etc.
posted by fredludd at 6:15 PM on December 4, 2016 [9 favorites]


To what extent do you think the link's title withholds key details of the story?
Not at all ○ Slightly ○ Somewhat ○ Very Much ○ Completely ○


Adding to what I posted above: I used to work for a market research firm that subcontracted for a political polling company. We weren't just asking straightforward questions - these were often "opinion shaping" polls, designed to test language and phrasing. Sometimes the questions were openly inflammatory, but often they were less obvious yet still manipulative. The survey question that appeared to my friend is very much that kind of question.
posted by louche mustachio at 6:24 PM on December 4, 2016


Worth a read. NYT: The Secret Agenda of a Facebook Quiz
posted by vers at 6:27 PM on December 4, 2016 [2 favorites]


Google exists to make money. Google makes money by selling search results. (This is a bit of a simplification.) Google does not exist to provide information. It provides only enough information to keep you using it, but as much as possible steers you towards links that will generate income for it. Google is an infomercial. Google is working very hard to reduce the information side of the information:marketing ratio. Their optimal business model would be if you connected to Google and they steered you to the site that was paying them the most, without actually having to consider where you want to go. Fortunately for Google, the sites that pay them are smart enough to pepper themselves so thickly with the words you might enter in your search that they're at least mildly plausible as a destination, so you don't generally open Google, type in "cute kittens", and find yourself on a site with no cats at all.

Auto-complete is a useful tool for them to send you to higher-paying sites more often and more quickly. What it amounts to is that those repulsive sites are paying Google to appear at the top. Oh, it's certainly complicated and involves algorithms and proprietary information, and Google doesn't actually want them to appear at the top - but sites that don't play by Google's rules don't appear at all, and Google is stuck because those toxic sites are playing by the rules that make Google money, and it can't change the business model under which more profit for Google means a higher page rank.

For a long time now I have observed that if I search for obscure data - say the name of a random obscure person in the city where I live - instead of pointing me towards links where the name I entered occurs, Google points me towards businesses that do networking, such as LinkedIn. These sites have paid Google so that they show up first, even though they likely do not have a listing for anyone with the name I am searching. Google spotted that I was searching for a person, and instead of taking me to the only three locations on the internet where 83-year-old Algernon O. Rhythm's name appears (nursing home fundraising newsletter, short article about veterans, and the obituary I am searching for, which was posted yesterday), I get steered towards a bunch of sales people who claim they know him but don't, and want me to sign up and share my personal data before they will actually search their own database and come up with the fact that Alg is not one of their members and they have no members with that particular name.

It's the same if I search for a small business. The number one hit will be Google's own site for the business, which ALWAYS has significant misinformation on it, followed by Manta and a dozen other sites that have short incomplete listings on the business generally three years out of date or more. The small business's own accurate website, if it exists, is way, way down, maybe on the third page.

And it's the same if I search for other sorts of information. Typically, instead of showing links regarding historical instances of dolls made out of pastry being given as gifts (for example), it steers me towards cheery sites that have directions for making gingerbread men, dozens of such sites, with the same apocryphal paragraph about how gingerbread men were invented, obviously all copy-pasted from the same original location.

Long ago I couldn't bring myself to watch TV because it was all horrible programs, sitcoms with laugh tracks and game shows. And then cable TV became available and there was PBS and The History Channel and A&E... which started by showing programs with information, but by the mid-eighties started to get more and more inaccurate and absurd and dramatic - they expunged their content in order to get the ratings.

Google has done the same thing. The content is draining rapidly. Now, if I am lucky, I find a site that has information on it at all, a single paragraph or two, or a diagram, or a figure with footnotes, on pages that are otherwise full of marketing. Misinformation sells better than accurate information - things we are afraid are true, or things that we wish were true - so that's where the links will go. How many of the people who clicked on top-ranked sites following the auto-suggestion "Are Jews evil?" did so out of horror or delight, rather than because they didn't know and were trying to find out? Part of the reason the auto-suggestion comes up is that everyone who starts to type in a question like "Are Muslims... ...allowed to eat shellfish?" (because I want to bring smoked oysters and crackers to the potluck at work) is prone to open their eyes wide and check out where the search with the word evil goes.

But the internet is where many, many people go when they need to talk about the things they are too scared or angry or embarrassed or ashamed to ask at the same time as looking a real person in the face, or hearing a real voice. It's absurd. The internet is where the gaffes you make, the indiscretions you commit, the slanders you spit out and the secrets you whisper are stitched to you, linked to you, recorded, remembered, tracked, tagged and weighed. If you had two or three beers and you asked your buddy, "C'mon, tell me, you think it's true that Jewish guys bleed every month like women do?", chances are, like all the rest of our evanescent, ephemeral words, it would be forgotten when he was sober the next morning - maybe a vague feeling that you're some kind of a naive asshole, but no witnesses anyway, no proof you said it, only hearsay that you once asked some kind of question that sounded ignorant and racist. But at the same time, your buddy would look at you when you said it; yeah, he would look you in the eyes and you might see the incredulity and revulsion: "Are you serious? You think that could possibly be true?" So the question gets typed into a search engine instead, and tagged with an IP address and a time and date stamp and filed in the NSA memory banks for all eternity.
posted by Jane the Brown at 6:48 PM on December 4, 2016 [29 favorites]


I have been helping to build the internet for almost 25 years. This is not what I expected to create.

Me too, at 22 years, and it weighs heavily on me. Thing is, I feel responsible for helping unfuck it, just when I was hoping to bail on this god-awful mess of work we've made.
posted by Annika Cicada at 6:49 PM on December 4, 2016 [5 favorites]


In the very early 90s, when search engines were barely a thing, I bought a thick magazine, which listed nothing but internet addresses, sorted by subject. The good old days.
posted by Beholder at 6:56 PM on December 4, 2016 [4 favorites]


Google's mission was to give you useful, accurate links. If you're claiming that Stormfront is useful and accurate, then I guess a bunch of results that link there are helpful.

It could be perceived as a useful result. The trivial example would be someone searching for "stormfront". Someone researching the nature of anti-semitic hate speech might search for "jews are evil" or the like and find it a useful result. It really is an important and oft-cited website in the narrow universe of sites that discuss in detail the evilness of Jews, or such is my impression.

Google aims for relevant results; I don't think it can make any pretense of limiting them to things that are in any sense accurate. Until the search engine is more intelligent than most of us, it's going to be easy to craft queries that get some decidedly inaccurate results. Are jews evil? Did hitler do 9/11? NASA spraying chemtrails? Benghazi cover-up? Illuminati tinfoil? I mean, that last one gets you a good Weird Al video, but it's hard to imagine any non-sentient search algorithm coming up with accurate and reasonable results for every possible query. It wouldn't even be what its users wanted, most of the time. They can't just link to a limited pre-approved set of sources whenever the query involves Jewishness; that would draw complaints from people who have unusual, non-obvious questions about some particular aspect of the faith, or are searching for a specific blog post they saw a year ago, or basically anything where a search engine is actually useful compared to starting with Wikipedia.
posted by sfenders at 6:56 PM on December 4, 2016 [2 favorites]


Google has no problems putting their thumb on the scale of search results when it suits them. If they refuse to do so for antisemitism, islamophobia, misogyny, homophobia, transphobia, and so on because, well, there's gold in them thar hills, what good is pagerank? What good is a search engine that traffics mainly in lies?
posted by um at 7:58 PM on December 4, 2016 [14 favorites]


overhauser: This conversation is moving me to go to all of my (Trump-voting, rust-belt white boomer) aunts and uncles and install ad-block on their computers.

You should also install Facebook Purity and follow the instructions here to set up a text filter that blocks fake news sites. In your case, maybe edit the list to remove at least some of the liberal sites but leave all the right-wing stuff blocked.
posted by Johnny Wallflower at 8:07 PM on December 4, 2016


Of course, switching search engines is not really a solution to the underlying problems - it just increases the gap between my experience of the internet and someone else's experience.

I switched to DDG years ago because I thought Google searches were becoming shitty, and I don't like them all up in my business anyway. An increased gap between my experience of the internet and others is what I'm looking for.
posted by bongo_x at 8:43 PM on December 4, 2016 [1 favorite]


It's painful to realize how loathsome your fellow human beings are.
posted by Max Power at 9:16 PM on December 4, 2016 [2 favorites]


@hippybear "Where are the people who read things like this and who are in any position to do something about it?"

Looks like they work for Google, and did something. I'm not seeing these autosuggestions either.
posted by dougfelt at 9:38 PM on December 4, 2016


I have been helping to build the internet for almost 25 years. This is not what I expected to create.

We created a network for people. What did you really expect? People are crap. People want to make money and argue.

The internet has become a money making, shouty, toxic environment. I miss the days when I just surfed the web looking for stuff. Not buying stuff and not arguing about stuff. At the same time though I would not have a job - 99% of my work sells something to someone.

I'd love to get a job just building infrastructure to share knowledge, unfortunately those jobs are few and far between.

I'd also love to get back to a world when we have to actually look someone in the eyes before telling them they are a libtard or whatever insult is flavour of the week.
posted by twistedonion at 3:21 AM on December 5, 2016 [2 favorites]


Also, I don't know if these things are geographic or not, but typing in "Jews are" gives no suggestions. "Are Jews" gives "are jews a race", "are jews white" and "are jews christian". Nothing about evil.

"did the holo" autocompleted with a stormfront article about "10 reasons the holocaust didn't happen"

Not good. So I went to the bottom of the page and clicked "make a suggestion". Helpfully, Google asked me to select the link and send a suggestion. Might help?
posted by twistedonion at 3:30 AM on December 5, 2016


Holy fuck. That Google returns Stormfront for me above Wikipedia's article on Holocaust denial in response to its own autocomplete of "did the Holocaust happen?" is completely disgusting.
posted by mediareport at 4:31 AM on December 5, 2016 [4 favorites]


I'm uncomfortable with some aspects of the thinking that Google or Facebook "owe" the world a 100% politically/ethically "correct" resultset for every search.

First, it's a moving target. Not every country acknowledges the same set of values or concerns, and it's something that evolves with time, as well. Just about no one will be 100% happy with it.

Secondly, search results have already become crappy, mostly due to extreme gaming and "search engine optimization". I do find some of the autocompletions weird, though most are useful. A PC filter would not likely make results more accurate; there would be all sorts of accidental skewing or omissions, much as porn filters have sometimes stripped out important sexual health info and breast exams.

I think that a very big problem right now is not having enough sources of good information that are vetted for accuracy. I am a fan of Wikipedia; I think they're doing a good job more often than not. I'm also a fan of the better state-run broadcasters (CBC, BBC, etc.); I think their news coverage is less biased and more in-depth than private broadcasters'. We need some way to give weight to more trustworthy sources. It would be great if there were a broadly respected standards body whose "stamp" of approval indicated that a site is truthful, accurate and ethically responsible. But again, who says what is trustworthy and accurate, and who else agrees?

Selecting results for "correctness" is a big technical problem until AI is an order of magnitude better, so the only realistic way to implement such filtering is with human management of a list of "unacceptable" terms and sites.

Anyway, if a search-engine company is going to impose a stronger ethical filtering of terms and results, I would want it to be known to be in place, and defeatable (e.g. like SafeSearch is a choice in Google). I'm an older adult, I know what evil there is online; I'm not going to the dark side because of a perverse autocomplete or some gamed results.
posted by Artful Codger at 5:42 AM on December 5, 2016 [4 favorites]


Holy fuck. That Google returns Stormfront for me above Wikipedia's article on Holocaust denial in response to its own autocomplete of "did the Holocaust happen?" is completely disgusting.

Seconding that. Exactly the same result for me on google.co.uk, typing "did the ho...": Stormfront, then Wikipedia on Holocaust denial, then expeltheparasite.com (deniers), bbc.co.uk twice, a Reddit thread about denialism, a 2004 denial thread at some forum called "CODOH", two different denier videos on YouTube, and more denialism at nodisinfo.com.

It's like the fascists twenty years ago saw John Perry Barlow's Declaration of the Independence of Cyberspace and thought "Lebensraum!"
posted by rory at 5:45 AM on December 5, 2016 [1 favorite]


But what if X really is good - and not evil? What if Y is not pure evil, but a bit more evil than good?

The problem is, we depend on other people to tell us the answer in 3 or 4 lines. The truth is a bit more complicated than that. Unless you haven't heard of this one neat trick...
posted by beesbees at 6:37 AM on December 5, 2016 [1 favorite]


Aaaand this is why the problem is so pressing. Fortunately, nobody died. This time.

N.C. man told police he went to D.C. pizzeria with gun to investigate conspiracy theory (WaPo)

A North Carolina man was arrested Sunday after he walked into a popular pizza restaurant in Northwest Washington carrying an assault rifle and fired one or more shots, D.C. police said. The man told police he had come to the restaurant to “self-investigate” a false election-related conspiracy theory involving Hillary Clinton that spread online during her presidential campaign.
posted by Johnny Wallflower at 7:28 AM on December 5, 2016 [2 favorites]


Facebook, Twitter, and Google are still failing to curb hate speech, EU says

Tech companies may face new legislation after struggling to comply with voluntary code of conduct

So this is one way to handle this problem: through negotiation with an elected legislature working from a rights-based foundation. Europe has given the companies an opportunity to do it themselves, through voluntary compliance, but with little headway, is now looking to introduce legislation to force compliance.

This approach has a lot to recommend it to me: it's driven by the right organization, elected representatives, it's being done based on a human rights-based agenda, and it's neutral across all the companies. They can't complain that some get special treatment over others. And, say, Baidu, if they want to expand out of China, will have to play by the same rules too---this fight won't have to be fought all over again.

Yes, it limits absolute free speech, but free speech is already limited (copyright violations, child porn, for example). And this is being negotiated through a public sphere, presumably with at least plurality support of the public, rather than on a case-by-case basis through individual decision-making.
posted by bonehead at 8:49 AM on December 5, 2016 [2 favorites]


Or by voluntary and unaccountable patchwork solutions based on individual complaints, done out of goodwill (which seems to be where we are now).

A framework for complaints that has accountability and responsiveness built-in seems to me to be a much better answer.

Make the companies do the filtering, but don't make (or let) them decide, based on their own caprice, what to filter.
posted by bonehead at 9:01 AM on December 5, 2016


N.C. man told police he went to D.C. pizzeria with gun to investigate conspiracy theory (WaPo)

Let me guess: he was white, right?

Just a hunch.
posted by acb at 9:04 AM on December 5, 2016 [1 favorite]


acb: The guy could hide naked in a snowbank.
posted by SansPoint at 9:05 AM on December 5, 2016 [1 favorite]


it's driven by the right organization, elected representatives,

Having seen what elected legislatures have done in this area in the past, as recently as last month for example, I'd prefer mine to stay as far away as possible from this problem. An approach less open to abuse and less inviting of disaster would work more like web ad blockers. You could subscribe to any of various lists of things to block, and people would be free to pick ones that suit their preferences. It could be made as easy as turning on "safesearch". I would be happy to subscribe to a browser plugin that gave search results from stormfront a -10 pagerank modifier in all my searches, and Fox news a -2 or whatever, run by, say, the University of Manitoba in a fully transparent way. All we need is an API from Google, the other search providers will follow, and with any luck at least one organization providing this service would turn out to be trustworthy and widely used. It's not perfect, but might be an improvement on doing nothing. Ad-blockers are rapidly becoming extremely popular, and this function is not much different in principle than what they already do.
posted by sfenders at 9:21 AM on December 5, 2016 [1 favorite]
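An illustrative sketch of the scheme sfenders describes above: client-side re-ranking of search results using a subscribed list of per-domain score modifiers, much as ad blockers subscribe to filter lists. All names, domains and weights below are hypothetical examples taken from the comment, not a real API or a real published list:

```python
from urllib.parse import urlparse

# Hypothetical subscribed modifier list, as might be published in a fully
# transparent way by some trusted third party. Negative values demote a
# domain; the specific weights echo the comment above.
RANK_MODIFIERS = {
    "stormfront.org": -10.0,
    "foxnews.com": -2.0,
}

def rerank(results, modifiers=RANK_MODIFIERS):
    """Re-order search results by base score plus any per-domain modifier.

    `results` is a list of (url, base_score) pairs, higher scores first.
    Returns a new list sorted by the adjusted score.
    """
    def adjusted(item):
        url, score = item
        # Normalize the host so "www.example.com" matches "example.com".
        domain = urlparse(url).netloc.lower().removeprefix("www.")
        return score + modifiers.get(domain, 0.0)

    return sorted(results, key=adjusted, reverse=True)
```

With a -10 modifier, a demoted site falls below an unmodified result even when its base score was higher, which is the behaviour the comment is after; the hard part, as noted, is that this only works if the search provider exposes results (and scores) through an API.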


Still requires a choice on the part of the user and/or a changed default configuration in the browser/search engines. This is a higher-information answer.

That doesn't help the low-information users who just use their systems as configured by the manufacturers.

The only way this could possibly work would be if it became as big a deal as, say, security was in the 90s. MS and Apple and Google etc. would need to start shipping hate-speech blockers on their own.
posted by bonehead at 9:30 AM on December 5, 2016 [1 favorite]


I don't think Google would want to make its own, but if some suitably reputable international agency were to get in on it (e.g. the United Nations), perhaps Google could be persuaded to make its hate-speech-indicated site list the default. They could be seen to be doing the right thing, without taking any responsibility for making the decision to block individual sites. So long as it is only a default that can be easily changed if it goes bad, opposition might be less intransigent than you'd expect for other approaches.
posted by sfenders at 10:00 AM on December 5, 2016 [1 favorite]


It's server-side too, though. Don't forget that the original concerns were about autosuggestions provided by Google. So Google (and Facebook, which has the same issue) would have to be actively involved.

I do think the EU would prefer that the companies do this themselves, but it doesn't look like the companies are moving enough for the EU's tastes.
posted by bonehead at 10:07 AM on December 5, 2016


he was white, right?

Good guess.

My first try was a typo, "Hood guess."
posted by Johnny Wallflower at 11:02 AM on December 5, 2016


Instant communication as a technology exploded so quickly that very few society-wide structures exist to provide professional guidance on it, let alone effectively and safely control its use and content. My thought is that there needs to be an objective check on the powers at play here that can set common-use guidelines, something staffed by elected, educated professionals representing minorities in proportion to the population. This check would be independent of business, independent of government, beholden only to the benefit of individual internet users. Right now, at this moment in time, who are we to look to for guidance on how to handle the problems created by instant communication? A partisan government and its FCC? A massive international business like Facebook? Some cross-section of CS experts/W3C/ICANN and psychosocial experts? Who can we trust to guide the internet?
posted by theraflu at 5:03 PM on December 5, 2016


Feeling sad out here on the edge of the internet tonight.

The first three Google autocomplete results for "People are":

...awesome
...crazy
...strange

They don't have my thought: People are terrible.
posted by limeonaire at 9:03 PM on December 5, 2016 [1 favorite]


-Google should hire Cathy O'Neil to audit their algorithm
-Facebook should hire me to audit their algorithm: "I already have a company, called ORCAA, which is set up for exactly this..."
posted by kliuless at 9:56 PM on December 5, 2016 [4 favorites]


Yes, it limits absolute free speech, but free speech is already limited (copyright violations, child porn, for example).

Aside from that, we have extensive laws about advertising and marketing speech, which is what Google's search results are.
  • Just talking to people you meet: Allowed to say almost anything. Not allowed to incite or plan crimes, but that's about it.
  • Talking to people with the intent of getting money from them: Laws restrict what you can say and how you can say it.
  • Talking to people so that someone else will pay you: Same laws apply.
It is illegal to lie to people in order to get them to pay you; that's fraud. It's a bit of a stretch to say that giving top search status to outright lies, in order to get paid by the producers of the lies, is also fraud - but it's not a big stretch. The real difficulty comes in getting a court to agree that holocaust denial/ climate change denial/ various forms of hate speech are "lies" of the sort that matters to a fraud case.

You'd have to prove that the algorithms aren't free from human-influenced bias... but since they already have a track record of adjusting some of the algorithms (porn, torrent links, suicide info), and they give better ranking to paying clients, they can't say "we don't touch the math!"
posted by ErisLordFreedom at 10:47 AM on December 6, 2016


Google has reacted: Google’s auto-search results have become slightly less offensive (Ars Technica).

I'd argue that this is actually a worst-case outcome, an ad hoc fix, without considering a systematic framework by which to really address concerns. Yet another bandaid that does nothing to make anything better the next time problems happen.
posted by bonehead at 1:14 PM on December 6, 2016 [5 favorites]


The acute problems that were uncovered and subsequently (somewhat/haphazardly) addressed by Google are scary enough. The broader problems of framing, fragmenting, and objectivity in the face of such a large corpus with less-than-angelic actors is terrifying. I'm not sure, even now, that I'm fully cognizant of the breadth and depth of this problem but it is putting some shape to some worries I've been having over the last few months.

I am anxious to spend some time with the links in this thread and then doing some serious thinking while tracing the suggestions and references inside those links. I hope I'm overreacting. I'm afraid I'm not.
posted by Fezboy! at 2:45 PM on December 6, 2016


Google has tons of humans employed in adding exception cases to search. I met a guy whose job was to maintain an exhaustive list of german bands to better tune searches. They can skew results any way they like, they just don't care as long as somebody is motivated to click on ads.
posted by benzenedream at 11:02 PM on December 7, 2016 [2 favorites]


Way up-thread someone wrote: "This is a web problem. The web is racist, very racist in parts and increasingly so. Google and Facebook are reflections of the web, and so they're also producing racist content as a result."

Am not sure "producing" is the right word here; I'd have opted for "propagating".

Anyway, what caught my attention in the article was this...
Last week Jonathan Albright, an assistant professor of communications at Elon University in North Carolina, published the first detailed research on how right-wing websites had spread their message.
The linked article is titled The #Election2016 Micro-Propaganda Machine. It sets out to explore what are the factors "1) producing the content and 2) driving the online traffic."

It's a long, long read. I skimmed some of it. It left me feeling quite depressed.

Nevermind. Onward and Upward, as they say.

And so to the second part: #Election2016: Propaganda-lytics & Weaponized Shadow Tracking

A glutton for punishment? Then sally forth to the third part: Data is the Real Post-Truth, So Here’s the Truth About Post-#Election2016 Propaganda

Like the first, these too are long, long reads. I skipped some. It left me feeling quite depressed. But I am quite resilient. So that's OK.
posted by Mister Bijou at 3:50 AM on December 9, 2016 [3 favorites]


« Older Marcel Gotlieb est mort   |   The Distribution of Users’ Computer Skills: Worse... Newer »


This thread has been archived and is closed to new comments