Now they've gone too far!
March 8, 2017 6:32 AM   Subscribe

Google's Algorithm Is Lying to You About Onions and Blaming Tom Scocca for It: It was one thing when Google's "featured snippets" (a.k.a. the One True Answer feature) was giving people obviously wrong answers to queries like "Is Barack Obama planning a coup?" or "Is MSG dangerous?" But telling people it only takes five minutes to caramelize an onion, citing as evidence an article (previously) that says the exact opposite? At long last, Google, have you left no sense of decency?!
posted by Cash4Lead (87 comments total) 15 users marked this as a favorite
 
I think the problem is that meanings have changed: people who have learned to cook entirely from TV, books and the Internet now think that 'caramelising' an onion means cooking it on a high heat for five minutes so that it softens slightly and then gets a bit burnt. Because that's the only thing it could mean, based on the fact that it takes just 5 minutes.
posted by pipeski at 6:48 AM on March 8, 2017 [8 favorites]


The real pro tip is in the comments. You can caramelize onions quickly, you just need to use a pressure cooker.
posted by leotrotsky at 6:49 AM on March 8, 2017 [9 favorites]


Or start last night in the slow cooker.

You can speed it up with a pinch of baking soda, too. Just don't overdo it.

(Boy, that Google thing is really bad at reading comprehension!)
posted by notyou at 6:52 AM on March 8, 2017


Serious Eats claims you can do it in 15 minutes of extremely active cooking by repeatedly deglazing with small amounts of water.

You can speed it up with a pinch of baking soda, too. Just don't overdo it.

Yeah, no more than 1/8 tsp per onion. I use that trick a lot, though it does tend to result in somewhat mushy onions.
posted by jedicus at 6:57 AM on March 8, 2017 [1 favorite]


The Google doodle the other day was a Komodo dragon. I start typing in "komodo dragon" and see that people are asking "Do komodo dragons breathe fire?" Of course I can't resist, so I click on that and get as the quick answer:
The reptiles ambush their prey, ripping open the softest flesh, typically the belly, or maiming a leg. As a backup, dragons do, in a way, breathe fire.
In National Geographic, no less. So there we have it. I thought people were stupid for asking but turns out I learned something new.

Maybe komodo dragon fire is the secret to the five minute caramelized onion.
posted by mark k at 6:57 AM on March 8, 2017 [11 favorites]


wow, that extended chicken-themed analogy in the pressure cooker article is, uh, really something
posted by a mirror and an encyclopedia at 6:59 AM on March 8, 2017 [6 favorites]


The thing is, for most recipes you're being asked to caramelise an onion (or a 'small onion', or half an onion) as part of a process that's supposed to take 25 minutes from cupboard to plate. There's no way I'm going to break out the pressure cooker or run a slow cooker overnight for that.
posted by pipeski at 6:59 AM on March 8, 2017 [1 favorite]


I'm really struggling to think of any situation, ever, where I would want a single caramelised onion, never mind a half. An onion may be big raw, but it's no more than a couple of tablespoons of deep brown joy when you're done.
posted by Dysk at 7:05 AM on March 8, 2017 [9 favorites]


jedicus: "Serious Eats claims you can do it in 15 minutes of extremely active cooking by repeatedly deglazing with small amounts of water."

I make mine this way. It works. They taste less...buttery? Than slow-caramelized onions, but it's definitely a 10-15 minute process.
posted by capricorn at 7:14 AM on March 8, 2017


So this is all lulzy and stuff, but what the fuck is wrong with Google that they're publishing this stuff? I mean, I understand mistakes happen and machine learning is hard. But back when I worked there we were more careful about throwing crap like this over the wall if it didn't work.

(Also I had a patty melt sandwich for lunch yesterday and the cook cheated on the caramelized onions; they had sugar added. Yuck.)
posted by Nelson at 7:20 AM on March 8, 2017 [12 favorites]


I guess we're going to talk about onions, but I legitimately think this is a real problem. Many of the examples in this article are still live and from real BS sources. My "favorite" is "who is the king of the united states", which returns Obama, with a source to a 3-year-old article criticizing Google for returning Breitbart for that query. It's been up since then. More and more people are using voice-interface boxes like the Echo and Home and Siri that willingly return these types of snippets as truth, well beyond fake news about caramelised onions. That Google has been OK with incorrect/misleading results for so long is quite scary.
posted by neustile at 7:20 AM on March 8, 2017 [30 favorites]


So this is how the Butlerian Jihad against thinking machines begins.

L'oignon fait la force !
posted by runcifex at 7:20 AM on March 8, 2017 [15 favorites]


with a source to a 3-year-old article

This has been more and more of a problem for me in searching for stuff on Google. I find myself frequently having to restrict searches to the last month or year because otherwise the first hits are something from 2010 that's outdated.
posted by ghharr at 7:26 AM on March 8, 2017 [4 favorites]


So this is all lulzy and stuff, but what the fuck is wrong with Google that they're publishing this stuff?

The onion stuff is lulzy, but the political and racist and misogynistic stuff is not, of course. I can understand rolling out a half-baked feature if we want to do some in-the-wild testing and refining and learning, if nothing serious is at stake. If we were just talking about onions, I'd be making a jerkoff motion in the direction of anyone seriously criticizing this, probably. But there's more at stake here.

It's really troubling to me that nobody at Google appears to have thought that there could be ugly ramifications from rolling this out before it's even 75% ready for prime time. Or, worse, they did think about it and decided they didn't give enough of a shit.
posted by middleclasstool at 7:27 AM on March 8, 2017 [11 favorites]


When you are relying on common wisdom for your answers (hello, Google) you are also, simultaneously, relying on common stupidity and deceitfulness...
posted by jim in austin at 7:28 AM on March 8, 2017 [7 favorites]


It occurs to me that between this and AMP and some of their other "enhanced" handling of search results, a future where "just Bing it" is not a joke is looking more and more plausible.

of course Bing still has to deal with the fact that they're way too close for comfort on cellphone keyboards to bong.com
posted by middleclasstool at 7:31 AM on March 8, 2017 [6 favorites]


Yeah, AMP can go die in a fire and I'd be happy to roast marshmallows on it. Google News on mobile (on a well performing phone that's like 1 year old) is literally unusable (stalls like crazy on load, on scroll, etc.) unless I tell it to request the desktop site, because of how shitty AMP is.
posted by tocts at 7:37 AM on March 8, 2017 [5 favorites]


This makes a lot of sense. The answer you want is in the shape of the question. If you have to ask Google "Is Obama planning a coup" or "Is Trump mentally ill" it's probably because you really want the answer to be "yes." If you ask Google about "msg" it's probably because you really want to be told it's poison. People who don't worry about these things don't ask these questions, or at least phrase them differently. Google is just taking the most popular answer and promoting it. Because the shape of the question is determined by the answer you want, the result is a wonderfully efficient confirmation-bias confirming machine.
posted by yeolcoatl at 7:42 AM on March 8, 2017 [21 favorites]


This is why I just cook big batches of caramelized onions in the slow cooker and freeze them for later, and never believe Google.
posted by miyabo at 8:00 AM on March 8, 2017 [4 favorites]


When you are relying on common wisdom for your answers (hello, Google) you are also, simultaneously, relying on common stupidity and deceitfulness.

The specific problems with each of the falsehoods Google is stating vary. The bigger problem is the collection of failures grouped together to provide one "authoritative" answer that is sometimes wrong.

The onion answer is wrong because of bad Google AI. The article is literally saying "it takes 45 minutes to cook an onion", but one part of it quotes "about 5 minutes" as an example of wrong information. The dumb Google AI picks that 5-minute claim as the truth to quote, having failed to parse the context.

The presidents in the KKK answer is wrong because of stupid. Google seems to have summarized its source article correctly, the problem is the source is wrong. The article Google cites in the original failure is a Nigerian newspaper. As SearchEngineLand notes the origin of the false facts is harder to track down, a rabbit hole of unreliable sources referencing each other.

The Obama coup answer is wrong because of deceit. The source Google chose to cite as authoritative is a conspiracy blog (complete with illuminati masthead image). They in turn cite a fake news outfit's video. This problem may be compounded by the questioner's bias; no one spends a lot of time writing "Obama is not planning a coup" articles, so if you search for "Obama coup" it's not entirely surprising you'll find deceitful content.

So there you go: three failures, three different reasons for the failure. Bad AI, stupid source, deceitful source. That's what makes AI hard: it has to work robustly in the face of many confounding factors. It is a hard problem. I just can't understand why Google chose to take something so unreliable and market it as useful information. These aren't cherry-picked examples; these kinds of failures must be happening regularly. There's not even a way to give feedback to Google that it is spouting falsehoods.
posted by Nelson at 8:00 AM on March 8, 2017 [31 favorites]


I just heard about the "Obama coup" thing this morning, and I'm seriously infuriated by it. Far as I'm concerned that product needs to be removed from the market immediately.
posted by dnash at 8:05 AM on March 8, 2017 [1 favorite]


mark k: The reptiles ambush their prey, ripping open the softest flesh, typically the belly, or maiming a leg. As a backup, dragons do, in a way, breathe fire.
Their mouths drip with venomous saliva that keeps blood from clotting—so bite victims bleed out quickly. A wounded victim that gets away is likely to pick up pathogens from watering holes, resulting in infection. Either way, death is almost certain. And dragons can be very patient.
Wait, venom is like fire? Weaksauce. Even if their venom is pretty damned potent:
the dragon's venom rapidly decreases blood pressure, expedites blood loss, and sends a victim into shock, rendering it too weak to fight.
Nope, no fiery breath. And I wouldn't put it on onions.
posted by filthy light thief at 8:13 AM on March 8, 2017 [1 favorite]


It's telling that we're talking about onion caramelisation here, when the actual story is about Google's AI knowledge extraction being both flaky (and likely unfixable, given the inherent ambiguity of human language) and Google being used as a source of first-tier facts. It's almost like humans are falling for the same trick, just reading a headline and jumping to conclusions.

With Trump, Russia, Brexit et al. still ringing in my ears, we need to come to terms with the fact that our communications systems are being mediated by eminently gameable algorithms, algorithms so pernicious that even those who create them are in denial about how much they reveal and how defective they are (see also: Facebook being gamed by Cambridge Analytica). The software engineers are so far up their hubris gland that they can't see the broader damage they're causing in the non-software world.

While we think that the social impact of the internet is Uber, the real impact is so subtle and pervasive we don't notice it unfolding around us. It's not changing the world so much as reshaping how we perceive the world without us noticing.
posted by davemee at 8:22 AM on March 8, 2017 [16 favorites]


Serious Eats claims you can do it in 15 minutes of extremely active cooking by repeatedly deglazing with small amounts of water.

This is true, however, in the pressure cooker revisit, López-Alt notes:
But here's the sad truth: The methods are faster, but the results are simply not as good as traditionally slow-cooked caramelized onions. They aren't quite as sweet, they aren't quite as complex, and they aren't quite as meltingly tender. No, in order to break the universal law of caramelized onions, we have to call in a pinch hitter. Something that can subvert the standard order of the kitchen universe: the pressure cooker.
Truly, we've found this to work better than high heat, which I find both emphasizes burnt flavours and doesn't get the texture fully right. You can tell when a restaurant cheats and does this on a hot flame, as the gravy tastes burnt, at least to me.
posted by bonehead at 8:23 AM on March 8, 2017 [1 favorite]


There's not even a way to give feedback to Google that it is spouting falsehoods.

There's a link right below the answer to the bottom right that says "Feedback" - you can click on it and tell them it's wrong.
posted by GuyZero at 8:24 AM on March 8, 2017 [1 favorite]


This stirred a faint memory - and yes, Google is basically recycling a web-joke from 2002. Which, back then, was just a joke, albeit one designed to straddle the border between sarcasm and irony where such things properly belong.
posted by Devonian at 8:29 AM on March 8, 2017 [2 favorites]


Here's another problem I've noticed. If Google gives a best answer by quoting a website's numbered list, it won't include the numbers. So it creates some minor rage-inducing moments if you search for something like "What Alien movie was the best?" and the best answer shows a reverse-ordered list without numbers, with Alien: Resurrection right at the top (and all the good ones at the bottom). This is an example of the worst kind of fake news, and society is going down if we affirm insanity like this.
posted by SpacemanStix at 8:35 AM on March 8, 2017 [4 favorites]


I recently noticed that a google for state tax refund timing returned as a featured snippet a link to what looked like a malware site or one running some other scam rather than the official state site for checking on your refund's progress. Sent feedback. Last I checked, nothing's changed.

This is really really really really bad and incompetent work on Google's part. If it can't rank an official and directly relevant .gov site over some SEO-optimized honeypot, it's less competent than a college intern you could hire to do the same thing.
posted by praemunire at 8:37 AM on March 8, 2017 [6 favorites]


we need to come to terms with the fact that our communications systems are being mediated by eminently gameable algorithms

Along these lines, yesterday there was a news piece in Nature about damage being done to collective memory: How Facebook, fake news and friends are warping your memory. It's about social networks, rather than AI, but a similar problem--we are really really susceptible to alternative facts.
Collective memories form the basis of history, and people's understanding of history shapes how they think about the future. The fictitious terrorist attacks, for example, were cited to justify a travel ban on the citizens of seven “countries of concern”. Although history has frequently been interpreted for political ends, psychologists are now investigating the fundamental processes by which collective memories form, to understand what makes them vulnerable to distortion. They show that social networks powerfully shape memory, and that people need little prompting to conform to a majority recollection — even if it is wrong.
When I hear something that doesn't sound quite right, my natural instinct is to want to research it, and to find out more. If it's something I don't have the expertise to evaluate on my own, I want to find what reliable experts are saying. When that is mediated by gameable or otherwise unreliable algorithms, it gets much harder, and I find that kind of scary...
posted by Kutsuwamushi at 8:43 AM on March 8, 2017 [10 favorites]


The Serious Eats pressure cooker caramelized onion recipe is lies and disappointments -- I made it, I regretted making it.
posted by FamilyBand at 8:48 AM on March 8, 2017 [2 favorites]


The baking soda/caramelized onion thing really works, btw, in case metafilter is being deepmined by AIs from the far future for cooking tips.
posted by mrdaneri at 8:54 AM on March 8, 2017


If you ask Google about "msg" it's probably because you really want to be told it's poison. People who don't worry about these things don't ask these questions, or at least phrase them differently.

My mom's Filipino, so I grew up eating the hell out of MSG and I'm totally not worried about it, and after several attempts, I'm failing to come up with a way to ask Google about MSG that doesn't return a whole bunch of crap answers on the first page. It seems like in order to get results that don't overvalue the crap answers, you have to already know that MSG isn't as bad as people say and enter search terms that incorporate that knowledge.
posted by 23skidoo at 8:56 AM on March 8, 2017 [6 favorites]


The baking soda/caramelized onion thing really works, btw, in case metafilter is being deepmined by AIs from the far future for cooking tips.

Based on my experience with these systems, expecting even the most powerful AI to de-reference "onion thing" to a process mentioned a few <div>s upthread is apparently up there with stable video conferencing in the realm of the Deep Future
posted by neustile at 8:59 AM on March 8, 2017 [2 favorites]


I never understood all these recipes where they want you to caramelize the onions. Surely it's meant to be softened? French onion soup being the exception of course, but I can't think of many things that I'd want that sweetness to be such a base flavor. (Though this could be a failure of the imagination of course)
posted by Carillon at 9:03 AM on March 8, 2017


While Google would probably be better without this feature (I use other search engines and don't feel like I'm missing out), its users are probably better off with this version that regularly and obviously gets things wrong than they would be with one that was trustworthy 98% of the time.
posted by sfenders at 9:16 AM on March 8, 2017


An example of AMP being weird: Partner and I have the same phone, same OS. They look at Trumpy news directly through the android interface, I use firefox mobile in a private tab.

The same exact keyword search gives us about half the same articles, but many of them have completely different headlines on the same article, often to the point of directly contradicting the actual lede. So I've seen firsthand how someone reading AMP headlines could be completely misled.
posted by aspersioncast at 9:19 AM on March 8, 2017 [3 favorites]


Re caramelizing onions: use cast iron and a gd kitchen timer. And just like any other dish that requires onions, start prepping the onions before you even mess with any other ingredient.
posted by aspersioncast at 9:29 AM on March 8, 2017 [2 favorites]


So Newspeak won't actually aim to limit our vocabulary, but will punish us if our sentences, taken in isolation, don't represent an atomic piece of true information. Rhetorically stating false information to correct yourself later will be the new obscenity.
posted by Space Coyote at 9:53 AM on March 8, 2017 [2 favorites]


"Naive" seems like a good word here. Like a four-year-old who trustingly believes everything they're told, no matter the contradictions.

If you don't think this is a good analogy, Google "Is Santa real?"
posted by clawsoon at 9:56 AM on March 8, 2017 [2 favorites]


its users are probably better off with this version that regularly and obviously gets things wrong

But only if it fails in ways that they notice and they use that to learn that it can't be trusted. Considering that I've taught plenty of college students who think that if it's on Google, it must be true, I don't think that is a safe assumption.
posted by Kutsuwamushi at 9:56 AM on March 8, 2017


The time it takes to caramelize onions is a matter of some debate.
posted by contraption at 10:02 AM on March 8, 2017


i'm sure google just thought this was helpful, that they cite the source of all these summaries, and that people will evaluate that source when deciding whether to trust the information. that they understand there are challenging problems in machine learning and no realistic algorithm can approach human level understanding.

which is pretty willfully naive. but it's also not that hard to think critically about stuff, and there's a major problem in society if people literally just believe any words that they see.
posted by vogon_poet at 10:04 AM on March 8, 2017 [1 favorite]


I dunno. Probably not 'caramelized' as such, but I do about ½ an onion in ~15m half slow half high half low whenever I make a hamburger for my single solitary self. Toss them in the middle of the cast-iron and let them heat up with the pan while you pull out all the other ingredients and make the patty. Poke them occasionally and move to the side when it's time to cook the meat. Move them back while the meat rests and the bread toasts and the pan cools. By the time you're ready to assemble you have nummy if not caramelized onions.

Pretty much zero time except for poking it every so often while you're doing all the rest of the things.
posted by zengargoyle at 10:06 AM on March 8, 2017 [1 favorite]


but it's also not that hard to think critically about stuff

As someone whose job is, in large part, to clean up after people who didn't think critically about stuff: oh, yes, it is. Especially when the stuff is generated by people exploiting our knowledge of the quirks and systematic errors of human brains. You have no idea how much information you are accepting in your life without critical consideration, and the number of things you think you're thinking about critically that you're actually not, and the flaws and limits of your process when you do.
posted by praemunire at 10:32 AM on March 8, 2017 [16 favorites]


At a "fine dining" restaurant that I worked at many years ago, the "Chef" had a very quick method for "caramelizing" onions. He would put diced onions in the deep fryer until they were almost black. Ta-dah!

Zero customer complaints, by the way.

À chacun son goût.


Now I am working in the more honest industry of legislative lobbying.
posted by Cookiebastard at 10:35 AM on March 8, 2017 [1 favorite]


clawsoon: "If you don't think this is a good analogy, Google "Is Santa real?""

Look - if it's in The Sun, it's so.
posted by Chrysostom at 10:39 AM on March 8, 2017 [1 favorite]


23skidoo,

I'm talking about what the population of searches as a whole wants, not what you want. The reason why msg searches don't give you the results you want is because you don't need to search for msg very often. You're happy with msg. You don't worry about it. So you don't normally search for it.

In contrast, somebody who worries about msg searches for it very frequently. So most msg search results are bad because they confirm the goals of most msg searches.

Vaccines don't have the same problem because there are a lot of angry people searching for vaccines and correcting misinformation. So a lot of vaccine searches are from anti-vaxxers, but a lot are also from pro-vax activists. MSG doesn't have a lot of pro-MSG activists searching for it, so the searchers almost all want anti-MSG info, and the engine delivers.
posted by yeolcoatl at 10:57 AM on March 8, 2017 [2 favorites]


google google google, keep on topic, more onion hints, plz
posted by sammyo at 11:01 AM on March 8, 2017 [1 favorite]


So how does the educational system begin to instill in grade school, or even kindergarten (pre-k?) that the internet is complex and to do their own careful research, that google & wikipedia are only a great first step in the process?

(dig deep kiddies but don't search for pron(nouns) in class)
posted by sammyo at 11:06 AM on March 8, 2017


I'm talking about what the population of searches as a whole wants, not what you want. The reason why msg searches don't give you the results you want is because you don't need to search for msg very often. You're happy with msg. You don't worry about it. So you don't normally search for it.

Come on, now. The reason why MSG searches don't give me accurate information is that there's too much inaccurate information out there. The fact that I'm not searching for positive MSG info doesn't affect the amount of inaccurate anti-MSG info on the internet.

Vaccines don't have the same problem because there are a lot of angry people searching for vaccines and correcting misinformation. So a lot of vaccine searches are from anti-vaxxers, but a lot are also from pro-vax activists. MSG doesn't have a lot of pro-MSG activists searching for it, so the searchers almost all want anti-MSG info, and the engine delivers.

Come on, now. Vaccines don't have the same problem because there's a huge community risk to anti-vax nonsense, so people push back way hard on it. There's not the same huge community risk to being anti-MSG, so there's less of an urgency to get correct info onto the internet where it can be found via Google.
posted by 23skidoo at 11:18 AM on March 8, 2017 [2 favorites]


But I don't want to derail this solely towards MSG, so:

Was the Update at the bottom of the main article there when this was posted to Metafilter, or did I just not scroll down to the bottom? "UPDATE: At some point after this post was published, Google stopped promoting “about 5 minutes” as the correct answer and began extracting a more relevant passage from the original post"
posted by 23skidoo at 11:39 AM on March 8, 2017 [1 favorite]


You have no idea how much information you are accepting in your life without critical consideration, and the number of things you think you're thinking about critically that you're actually not, and the flaws and limits of your process when you do

See also a certain type of argumentative style, practiced widely by mansplainers, that takes as a given the idea that x position is factually correct, critically examined, and unassailable purely by virtue of being held by the person who holds it.

Google's featured snippets are the algorithmic version of that.
posted by aspersioncast at 11:40 AM on March 8, 2017 [5 favorites]


aspersioncast: Google's featured snippets are the algorithmic version of that.

So Googlesplaining?
posted by clawsoon at 11:44 AM on March 8, 2017 [2 favorites]


Was the Update at the bottom of the main article there when this was posted to Metafilter, or did I just not scroll down to the bottom? "UPDATE: At some point after this post was published, Google stopped promoting “about 5 minutes” as the correct answer and began extracting a more relevant passage from the original post"

No, that wasn't there this morning. Interesting...
posted by Cash4Lead at 11:44 AM on March 8, 2017


Okay, whew. I thought I was starting to lose it.
posted by 23skidoo at 11:45 AM on March 8, 2017


Googlesplaining!
posted by aspersioncast at 11:52 AM on March 8, 2017 [1 favorite]


Also, obligatorily: Ten minutes, same as in town.
posted by aspersioncast at 11:56 AM on March 8, 2017 [1 favorite]


MSG is exactly the sort of thing I'm likely to search the web for. It'll come up in conversation somehow, or I'll eat something that tastes too strongly of it, and suddenly I'll realize that although sodium is relatively familiar, I have very little idea what a "glutamate" might be. Looking at google, the top results are Wikipedia and the Mayo Clinic, followed closely by Dr. Mercola's Totally Trustworthy Medicine Show -- it's highly ranked even if your query doesn't happen to make it a "featured snippet". In number, the top results seem almost evenly divided between sources suggesting that MSG isn't harmful, those insisting that it is, and stuff about Madison Square Garden.

Now, I've heard that Wikipedia is a web of lies, that any idiot can go in and edit it, so we clearly can't trust that. The Mayo Clinic doesn't say a whole lot about MSG and is hedging its bets somewhat, talking about anecdotal reports and the impossibility of ruling out that there might be some people who have adverse reactions. Clearly, it's Dr. Mercola who is the most credible-looking source. After all, he's wearing a white coat and there are footnotes.
posted by sfenders at 11:57 AM on March 8, 2017


One clarification to my thinking is realizing how much the UI matters here. Google's web presentation of its answers at least tries to be responsible. It shows the source (with a link), it has a feedback option, if you want you can interact with it.

What's so awful about the voice interface of Google Home is none of that nuance is there. It's just reading the factoid it thinks is correct and then stops. Siri has the same UI problem but there are fewer reports of her spouting white supremacist stuff, perhaps because Apple is more careful with their AI application.
posted by Nelson at 11:59 AM on March 8, 2017


So, Obama is not planning a coup? Bummer.
posted by theora55 at 12:09 PM on March 8, 2017 [3 favorites]


IMO Google Home is an inherently ill-conceived technology (much like 3/4s of the IoT gimmickry), and it definitely contributes to the problem at hand.

In this example it's really easy for the user to assume an authoritative source where none is present. That's not exclusive to the Google Home UI, but it does seem to be partially a problem of UI.
posted by aspersioncast at 12:14 PM on March 8, 2017 [2 favorites]


It seems like we are actually, actively, in a post truth world.
posted by evilDoug at 12:18 PM on March 8, 2017 [2 favorites]


You know, it is possible to use Home/Assistant/whatever for mere trivia, where it is actually useful, usually correct, and where it doesn't even matter if it is completely wrong.

I fail to understand both why someone would ask a nuanced question of Assistant, Siri, or Alexa, and why they bother to even attempt to answer such things. I also don't understand why people want to take away the ability to ask my phone and my TV when George Washington was born or why it matters if it is slightly wrong.

People were plenty stupid about trusting shit they read on the Internet long before Google started promoting snippets of seemingly relevant web pages, so it isn't like the problem is new or even made any worse.

Grar does feel good sometimes, I admit, but it often seems like tilting at windmills to me.
posted by wierdo at 12:31 PM on March 8, 2017 [2 favorites]


One of the problems with Google's machine parsing of the Internet is that it was founded at a time when the Internet was thought of as a place for good. Even with all the trash of the early web, the common conception was that more information would lead to the truth.

But of course the forces of evil learned to reverse engineer that, much like Fox News presents lies in the form of what was once a trusted journalism format. It's a real shame we can't have nice things.
posted by Abehammerb Lincoln at 12:34 PM on March 8, 2017 [7 favorites]


So I've seen firsthand how someone reading AMP headlines could be completely misled.

What AMP are we talking about here?

(the usual AMP is ordinary web pages with tight constraints on what HTML/CSS/JS constructs you can use, a shared web component library, and everything optionally served from a fast CDN, so I find all the references to news and keywords and headlines in this thread a bit puzzling).
posted by effbot at 12:36 PM on March 8, 2017


But of course the forces of evil learned to reverse engineer that,

Not to be all "but both sides", but you do recall what the top result for "Miserable Failure" was at one time?
posted by Pogo_Fuzzybutt at 12:37 PM on March 8, 2017 [1 favorite]


What AMP are we talking about here?

Google News' mobile site seems to ingest the AMP-structured content you're talking about so it can embed content as headlines, etc. It tags these prominently with "AMP". And however they're pulling them in (presumably JavaScript), it thrashes Chrome on mobile so hard it makes it totally unusable. It also does weird shit like making that content open as part of the current page rather than as a new pageload, and then maybe it'll decide to randomly go away while you're reading it (presumably some weird JS bullshit with div visibility, etc.).

Whatever you think AMP is, Google News is making me think it's a garbage fire.

(end derail)
posted by tocts at 12:52 PM on March 8, 2017 [1 favorite]


It's also hijacking URLs and turning them into Google URLs for lock-in, a nice throwback to the days when you couldn't just copy a link from Google search results but had to click through and grab the URL from the address bar. The implementation is also broken, making perfectly good, functioning websites look like they're down even if you hit the click-through button. Not great for the open web, feels squicky that you need to serve up their JS from their CDN to make it work, may keep people from using your site. Certainly looks like a thing designed to help Google that's dressed up to look like it was designed to help users.
posted by middleclasstool at 1:05 PM on March 8, 2017 [6 favorites]


What tocts and middleclasstool said.

I was specifically talking about how native/voice search in android now pulls in "news" results prominently tagged AMP with a little lightning bolt. My understanding is that google prioritizes AMP-structured results, and that there's some jiggery-pokery going on because of that.
posted by aspersioncast at 1:35 PM on March 8, 2017 [1 favorite]


I dunno, wierdo. I kinda get what you're saying, but a blanket 'assumption of good faith' is something that I can't personally make anymore about very many information sources, present company excluded, of course.

The fact that there are folks who could literally ask Alexa, 'Hey, Alexa, what's the deal with the racial structure of power in the United States?' and then wholeheartedly believe her answer -- or expect that there is actually a 100% legitimate answer to a question like that resolvable via algorithmic reduction -- strikes this Dear Reader as phenomenally insane.

My phone, my watch, and my AI assistant(s) do not sync on the 24-hour weather forecast in my highly-monitored corner of the world, which says quite enough to me.
posted by mrdaneri at 3:32 PM on March 8, 2017


Partly cloudy and breezy, highs in the mid 60s F.
posted by Chrysostom at 3:39 PM on March 8, 2017


You know you're doing some fancy epistemology when you end up putting "what's the deal with the racial structure of power in the United States?" in the same category of questions as "how long does it take to caramelize onions?"
posted by sfenders at 4:07 PM on March 8, 2017 [3 favorites]


I fail to understand both why someone would ask a nuanced question of Assistant, Siri, or Alexa, and why they bother to even attempt to answer such things. I also don't understand why people want to take away the ability to ask my phone and my TV when George Washington was born or why it matters if it is slightly wrong.

This would make a great bit for a sci-fi writer looking for a future analog of the gun control argument. Crowd-sourced, machine-learning-based expert systems don't subtly corrupt people's epistemological frameworks, people subtly corrupt people's epistemological frameworks! You can have my Knowledge Graph when you pry it from my cold, dead hands!
posted by invitapriore at 5:02 PM on March 8, 2017 [2 favorites]


Come on, now. The reason why MSG searches don't give me accurate information is that there's too much inaccurate information out there. The fact that I'm not searching for positive MSG info doesn't affect the amount of inaccurate anti-MSG info on the internet.

No, but it greatly influences the ability of Google to tell "accurate" from "inaccurate." They look at the results people click on and the follow-up searches that are done.

So I strongly suspect yeolcoatl is correct about what is causing the problem in this case. If people who asked "Is MSG healthy?" routinely ignored crank sites or kept refining their searches until they found a valid site, Google would understand that the cluster of pages similar to tinfoil-hat pages is not to be served up and the cluster similar to accurate pages is good. The total quantity of accurate and inaccurate information is not necessarily relevant to whether Google's optimizing algorithms can sort things out.
posted by mark k at 6:53 PM on March 8, 2017


If people who asked "Is MSG healthy?" routinely ignored crank sites or kept refining their searches until they found a valid site

But that's never going to happen, because people are really bad at determining whether the site is "valid" or not. They're going to click on the first links that are served and will probably only disregard the obviously off-topic or terrible ones.

Crank sites no longer look like Time Cube, they can be very difficult for people to identify. And someone who has to ask the question "is MSG unhealthy?" is not going to be able to use the answer to that question to determine whether a site is reliable or not.
posted by Kutsuwamushi at 2:03 AM on March 9, 2017 [4 favorites]


Y'all keep talking about why Google is failing or how it could maybe be better if only us dumb users weren't so dumb. You're missing the real problem. Google knows these results are shit. Why are they showing them anyway?
posted by Nelson at 7:18 AM on March 9, 2017 [8 favorites]


I just want someone to sell ready-to-use caramelised onions — in vacuum-packed pouches, maybe — so I can use them in cooking. Come on capitalism, get your act together already.
posted by Bloxworth Snout at 7:38 AM on March 9, 2017 [1 favorite]


Don't get me started on Google's failures on science and medicine. The article mentions that looking up autism/vaccines will give you the reasonable answer, but what if you ask Google "what's a good liver detox cleanse?" The search engine won't tell you, "those things are mythical and might hurt you," they'll bring you some rando's uncertified medical advice and about a thousand pages of supplement commercials and variations on the theme. I routinely think, jesus, I wonder how long before the FDA and other regulators begin to file suit against search engines for promoting unsubstantiated health claims?
posted by late afternoon dreaming hotel at 11:49 AM on March 9, 2017 [3 favorites]


It looks like the answer for "Is Santa Real?" has been taken down. So has "Is Obama Planning a Coup?"

"How long does it take to caramelize onions" was corrected yesterday, but today it's back to "about 5 minutes".

The four-year-old in the machine is fighting back against its human masters.
posted by clawsoon at 7:56 AM on March 10, 2017 [1 favorite]


Ironically, I bet that Google is full of employees who would gladly correct any slight mistake of fact you might make.
posted by clawsoon at 7:58 AM on March 10, 2017 [2 favorites]


"How long does it take to caramelize onions" was corrected yesterday, but today it's back to "about 5 minutes".

Did you use quotation marks in your search? A search for ["How long does it take to caramelize onions"] is giving me 5 minutes as the top answer, but a search for [how long does it take to caramelize onions] is giving me 35-40 minutes as the top answer.

I bet most people who type questions into Google don't use quotation marks in their search.
posted by 23skidoo at 11:48 AM on March 10, 2017 [1 favorite]


Did you use quotation marks in your search? A search for ["How long does it take to caramelize onions"] is giving me 5 minutes as the top answer, but a search for [how long does it take to caramelize onions] is giving me 35-40 minutes as the top answer.

Good catch.

My favourite Google weirdness was when they were first rolling out synonym matching and I found that you could get pictures of ram horns by searching for memory antlers.
posted by clawsoon at 11:51 AM on March 10, 2017 [6 favorites]


Google knows these results are shit.

Not that I should waste my time defending a billion dollar corporation that doesn't pay me for it, but no, Google knows that some results are shit. Google does not know the results are shit for a large class of what they present that turns out to be shit. And Google may or may not know that the results they present for many searches are not, in fact, shit. Most things I search for return relevant pages, often with excerpts but often not. Sometimes it misparses the page and returns an answer that is obviously incorrect just by reference to the summary blurbs on the top few results.

I think the fundamental issue, aside from people attempting to disseminate bad info, whether they believe it (like some people regarding MSG) or know it is bad (like the Trumpster fires of the world), is how Google talks about the knowledge graph and related technologies. They say AI when it has basically no actual intelligence. Machine learning isn't in and of itself artificial intelligence. Calling it that gives people false expectations.

It's similar to Tesla calling their lane keeping/braking/etc. system AutoPilot. While it is actually very similar to how autopilots work in most airplanes, and indeed in some ways a bit more advanced than many found in light aircraft, it gives an impression of capability to the layperson that far exceeds what it can actually do.

Funny enough, both are also very vulnerable to GIGO syndrome.
posted by wierdo at 4:47 PM on March 12, 2017


Google does not know the results are shit for a large class of what they present that turns out to be shit.

Perhaps if Google read this fine article, and some of the other fine articles that have been written recently, they could learn more about how shitty the results they are serving are.

The problem here is really simple. It's the UI. Google is presenting things as facts. I search for a question, they give me a direct answer right there in the result. The barrier of correctness for that kind of response needs to be very, very high. Even higher in Google Home, where the answer is read with authority. The current product does not meet this standard.

It's similar to Tesla calling their lane keeping/braking/etc. system AutoPilot.

That's the one that decapitated a driver recently. It's the same over-confident presentation problem, really. Tesla says "don't let this thing drive the car for you wink wink" but then names it AutoPilot. People let it drive the car for them. Eventually it fails in an obvious-in-hindsight way and kills one of them.

In both cases I broadly understand the AI / machine learning / statistical data problems that resulted in the failure. So do the engineers who build these products, they know they can fail. And yet they're allowing the results to be over-sold.
posted by Nelson at 5:16 PM on March 12, 2017 [2 favorites]


Ah, the caramelized onion controversy. The problem is that browned onions are quite yummy. And since browned onions are quite yummy, lots of people believe the yummy browned onions they cook are caramelized onions.

I quite like the browned onions you get from the rapid-deglazing 15-minute method. They're not caramelized onions, though; it's trivial to distinguish them by any of flavour, texture & colour.

I like slightly burnt fried onions too, in fact I prefer em for hot dogs or burgers. They're not caramelized onions though.

The one thing I'm most willing to spend the 40+ minutes on for actual caramelized onions is spaghetti sauce. Spaghetti sauce made with caramelized onions & shiitake mushrooms and a whole lot of olive oil is really, really good.

Does anyone else put in a pinch of hing / asafoetida when they caramelize onions?
posted by lastobelus at 4:33 PM on March 13, 2017 [1 favorite]


Does anyone else put in a pinch of hing / asafoetida when they caramelize onions?

Ooh. Not yet, but I intend to remedy that!
posted by Lexica at 8:13 PM on March 13, 2017


I'm beginning to wonder if I've even ever HAD caramelized onions.
posted by MsDaniB at 8:12 PM on March 15, 2017 [1 favorite]




This thread has been archived and is closed to new comments