No This Isn't About Knitting
November 2, 2017 7:18 AM

Google's AI thinks this turtle looks like a gun. The 3D-printed turtle is an example of what’s known as an “adversarial image.” In the AI world, these are pictures engineered to trick machine vision software, incorporating special patterns that make AI systems flip out. Think of them as optical illusions for computers. Humans won’t spot the difference, but to an AI it means that panda has suddenly turned into a pickup truck.
posted by Literaryhero (66 comments total) 23 users marked this as a favorite
 
I've been wondering how much of a difference it makes that our own image recognition networks are trained on video. The world keeps moving.
posted by clawsoon at 7:21 AM on November 2, 2017 [3 favorites]


Guacamole is a cute name for a cat.
posted by TheWhiteSkull at 7:32 AM on November 2, 2017 [23 favorites]


Google's AI thinks this turtle looks like a gun.

Oh man, so that's why Leon flipped out about the tortoise question.
posted by Strange Interlude at 7:34 AM on November 2, 2017 [25 favorites]


CAPTCHA challenges often ask you to pick out the traffic signs. So it must not be an easy thing for an AI to do.
posted by Obscure Reference at 7:34 AM on November 2, 2017


Kyle McDonald: "calm down about the turtle. just because cutting edge research doesn't account for adversarial examples doesn't mean industry ignores it." (worthwhile thread)
posted by gwint at 7:37 AM on November 2, 2017 [2 favorites]


Adversarial attacks like these aren’t, at present, a big danger to the public
Wait, was I supposed to be frightened rather than mildly interested?
posted by inconstant at 7:41 AM on November 2, 2017 [7 favorites]


Guacamole is a cute name for a cat.

Or you could make your guacamole from avocato.
posted by clawsoon at 7:42 AM on November 2, 2017 [11 favorites]


CAPTCHA challenges often ask you to pick out the traffic signs. So it must not be an easy thing for an AI to do.

I got a series of pictures from the "I am not a robot" check box today that asked me to identify if there was a road in the pictures. Their self driving cars must not be doing so good. (Yes, relevant xkcd that I am too lazy to look up)
posted by Literaryhero at 7:47 AM on November 2, 2017 [1 favorite]


So to be clear, because the title is a tiny bit misleading: they adjust the pattern on the turtle's back, and that's what's mistriggering the classifier. The turtle is a somewhat special case in that it has a big flat patch on its back that can be adjusted to make the recognizer misclassify it. The cat photo that gets recognized as guacamole has similarly been adjusted in ways imperceptible to humans.

The major takeaways are 1) yeah, these things can be attacked, and 2) neural nets don't recognize objects the way we recognize objects, nowhere close
posted by GuyZero at 7:51 AM on November 2, 2017


Do you remember that one scene in Robocop?
posted by qntm at 7:57 AM on November 2, 2017 [8 favorites]


I keep waiting for someone to do this sonically and make, say, a YouTube video that triggers Amazon Echo to place an order for a million staplers, without sounding like that at all to humans.
posted by solotoro at 7:59 AM on November 2, 2017 [19 favorites]


There are two important takeaways from all these 'adversarial attack' papers that have been appearing lately:

1. No one really knows how neural net systems work from a fundamental perspective. You select a pile of features, you ram them through an arbitrarily designed network, and you optimize for results. That's where the state of the art is right now. All these systems make errors, these guys have just found a clever way to cause one to occur.

2. Design of the classification task and understanding its performance is EXTREMELY important. The authors of this paper took an off the shelf classifier with a thousand essentially arbitrary classes, and a set of features designed to support that classification. The classifier by itself is only roughly 85% accurate. An error analysis might show that the next most confusable class when presented with a cat is 'guacamole', even if the cat classifier itself is 90% accurate - just because that's how neural net classifiers work. Errors can be completely random and capricious. But if your task was to distinguish solely between cats and guacamole, you would have selected different input features and possibly designed your entire classifier differently.

So of course this gets represented breathlessly as 'AI confuses turtle for gun', as though that was a problem someone had set out to solve. A properly designed turtle-gun discriminator might be near-impossible to attack adversarially.
posted by scolbath at 8:01 AM on November 2, 2017 [16 favorites]
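(For the curious, here is a rough sketch of the error analysis scolbath describes: run a stock 1000-class ImageNet classifier over a pile of cat photos and tally which label keeps turning up as the runner-up guess. The model choice, folder name, and recent tf.keras utilities are assumptions for illustration, not what anyone in this thread actually ran.)

import glob
from collections import Counter

import numpy as np
import tensorflow as tf

# Any off-the-shelf 1000-class ImageNet classifier will do for the sketch.
model = tf.keras.applications.MobileNetV2(weights="imagenet")
preprocess = tf.keras.applications.mobilenet_v2.preprocess_input
decode = tf.keras.applications.mobilenet_v2.decode_predictions

runners_up = Counter()
for path in glob.glob("cat_photos/*.jpg"):                  # hypothetical folder of cat pictures
    img = tf.keras.utils.load_img(path, target_size=(224, 224))
    x = preprocess(np.expand_dims(tf.keras.utils.img_to_array(img), 0))
    top2 = decode(model.predict(x), top=2)[0]               # [(id, name, prob), (id, name, prob)]
    runners_up[top2[1][1]] += 1                             # tally the second-ranked label

# The labels that pile up here are the "next most confusable" classes that a small
# perturbation can most easily push a cat into.
print(runners_up.most_common(5))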


One nice thing about the plasticity of the human brain and its ridiculous pattern recognition skill is that we could teach people to recognize these kinds of attacks. AI "sees" quite differently than we do, but we can train ourselves to see what they see, so long as the issues remain within the set of things humans are physically capable of seeing.
posted by wierdo at 8:04 AM on November 2, 2017 [2 favorites]


If it doesn't recognize something, does it just default to "guacamole?" Honestly, what is there in that image that is even remotely suggestive of anything within a mile of guacamole?
posted by Naberius at 8:06 AM on November 2, 2017 [2 favorites]


...it's likely possible that one could construct a yard sale sign which to human drivers appears entirely ordinary, but might appear to a self-driving car as a pedestrian which suddenly appears next to the street...
What a strange example. Your self-driving car might slow down and be extra careful in the suburban neighborhood full of yard sales. Horrors!

Now, fooling a big heavy machine into thinking it's got a clear roadway when it does not, that might be more worrisome.
posted by Western Infidels at 8:07 AM on November 2, 2017 [3 favorites]


A properly designed turtle-gun discriminator might be near-impossible to attack adversarially.

I realize you are being totally serious, and your comment is very informative, but taking this sentence out of context is basically everything I dreamed of growing up in the 80s.
posted by Literaryhero at 8:07 AM on November 2, 2017 [31 favorites]


Naberius, most of these classifiers emit a ranked list of classes, with probabilities - in fact, they frequently rank EVERY class, even though the probability of it being guacamole is 1e-20. The top 1 is reported as the 'winner'. This paper, I believe, was in fact tricking the classifier into scoring the image quite highly.
posted by scolbath at 8:08 AM on November 2, 2017 [2 favorites]


Just great. So in the future we'll also have to remove our turtles from our briefcases when going through airport security.
posted by ZenMasterThis at 8:10 AM on November 2, 2017 [9 favorites]


A properly designed turtle-gun discriminator might be near-impossible to attack adversarially.

Blastoise, I choose you!
posted by EndsOfInvention at 8:10 AM on November 2, 2017 [12 favorites]


Admittedly I have colleagues working on machine learning and so on, but as someone who doesn't directly work on that stuff, it seemed pretty clear to me that the title/focus was just a funny. I doubt that the average layperson would think Google had specifically trained a model for distinguishing turtles from guns either.
No one really knows how neural net systems work from a fundamental perspective.
See, that seems scarier to me -- well, "scarier", you get the idea -- than the prospect of humans being able to trip up neural networks.
posted by inconstant at 8:16 AM on November 2, 2017


Guacamole, broccoli, burrito, soup, cucumber, cheeseburger. Jeez you guys, just FEED the AI already. It's obviously hungry.
posted by rlk at 8:17 AM on November 2, 2017 [5 favorites]


Well, the neural networks we're comparing this to -- meaty, wet ones in our skulls -- have this problem, too; we call it pareidolia, and it's the kind of thing that causes people to see ghosts out of the corner of their eyes, see shooters on grassy knolls, and slow down while running in the halls. Making fun of computer sight processes for not recognizing things deliberately designed to mislead them, when we have the same problem, is anti-robotist!
posted by AzraelBrown at 8:26 AM on November 2, 2017 [6 favorites]


I keep waiting for someone to do this sonically and make, say, a YouTube video that triggers Amazon Echo to place an order for a million staplers, without sounding like that at all to humans.

Already a thing, sort of:
Voice-controlled assistants by Amazon, Apple and Google could be hijacked by ultrasonic audio commands that humans cannot hear, research suggests.
[...]
They were able to make smartphones dial phone numbers and visit rogue websites.
My understanding is that the voice recognition devices pay attention to ultrasonic components of human speech because, well, the microphones can pick it up and it'd be daft for the speech recognition algorithms to ignore useful data. But it turns out that playing back only the inaudible-to-humans component of the voice commands is (sometimes) enough to satisfy the algorithms that they're hearing valid speech.

Not at all my area, but it at least seems plausible that you could include some of these human-inaudible commands in the quiet bits of youtube videos or whatever.
posted by metaBugs at 8:29 AM on November 2, 2017 [5 favorites]


What we need is three precogs, er, AIs calculating independently. Then we can assume any two that agree are correct and discard the minority report as irrelevant data. So when a waymo car says that a pedestrian was really a squirrel, we'll be able to tell the widow not to be sad because the data indicates she was in fact married to a squirrel for the past two years and things are better this way.
posted by mattamatic at 8:29 AM on November 2, 2017 [7 favorites]


Plot twist! It is Dustin’s turtle!
posted by Grandysaur at 8:31 AM on November 2, 2017 [3 favorites]


I keep waiting for someone to do this sonically and make, say, a YouTube video that triggers Amazon Echo to place an order for a million staplers, without sounding like that at all to humans.

Just don't make it order paperclips or we'll all be screwed.
posted by natteringnabob at 8:32 AM on November 2, 2017 [3 favorites]


society is slowly learning to trust AI over human eyes

Ummmmmmmmmmmmmmmmmmm
posted by grumpybear69 at 8:39 AM on November 2, 2017


CAPTCHA challenges often ask you to pick out the traffic signs. So it must not be an easy thing for an AI to do.

Are CAPTCHA challenges hand-made by people? I always assumed they were whatever the CAPTCHA robot thinks it can do better than other robots and wants human verification.
posted by straight at 8:41 AM on November 2, 2017


I got a series of pictures from the "I am not a robot" check box today that asked me to identify if there was a road in the pictures. Their self driving cars must not be doing so good. (Yes, relevant xkcd that I am too lazy to look up)

FTFY.
posted by The Bellman at 8:44 AM on November 2, 2017 [1 favorite]


I agree this whole thing is anti-robotist.

As always, we sit on a perch and flatter ourselves way too much. We're intelligent by our own definition of that word.

I don't remember who said this, but we fascinate ourselves with books about "OPTICAL ILLUSIONS", when they're really, just, "BRAIN FAILURES".
posted by mysticreferee at 8:52 AM on November 2, 2017 [5 favorites]


No one really knows how neural net systems work from a fundamental perspective. You select a pile of features, you ram them through an arbitrarily designed network, and you optimize for results.

There was actually a moment of this on the Today show yesterday. Lester Holt had Tim Cook on and he (Holt) was oohing and ahhhing (as required) over the new iCandy and he said something like "So it's actually looking at my face to decide whether it's me so it can unlock?"

And there is this tiny second where you can see Tim Cook thinking "well, we trained it with a neural net, so for all we know it could be staring up your left nostril like an Egyptian embalmer and reading the chi flow from your immortal soul..."

But what he said was more like "Yes. Isn't that amazing? Something about thousands of tiny dots."
posted by The Bellman at 8:55 AM on November 2, 2017 [7 favorites]


So of course this gets represented breathlessly as 'AI confuses turtle for gun', as though that was a problem someone had set out to solve.

To be fair, a lot of people are trying to come up with human-level image classifiers, and it's what most laymen leap to when you talk about image recognition.
posted by Tell Me No Lies at 9:04 AM on November 2, 2017


A self-driving car encounters a tortoise in the road. Will it treat it as an attack?
posted by dances_with_sneetches at 9:09 AM on November 2, 2017


"Hey Siri."
gubeep! "Yo mama."
"What?"
"What can I do for you, Bob?"
"Uh... right... I need a photo of a turtle."
gubeep! "I found over 87 million photos of yo mama. Would you like to see them?"
"You mean turtles?"
"I found 6 thousand photos of turtles. Would you like to see them?"
"Show me the top ten."
gubeep!
"Oh god jesus... fuck! What the hell is wrong with you, Siri?"
gubeep!
"That's better. Read me the caption on first photo."
" 'This Mojave Desert tortoise...' "
"No good. Read me the caption on second photo."
" 'Yo mama so fat-' "
"Stop, Siri."
posted by ardgedee at 9:09 AM on November 2, 2017 [6 favorites]


We may not know how emergence works in neural networks but what we do know is that our own neural network is primed by billions of years of evolution even before it begins developing in a particular instance.

So perhaps this could be a workable approach to human-like AI visual recognition: pre-simulating the evolutionary levels of visual sensors from the simplest light-sensitive single-cell eyes all the way to our relatively complex optical peripheries.

At each level some priming of the developing proto-network would occur, which would be retained to the next level, e.g.:
- single-cell eye: rapid change from light to shade means danger. Initiate response. Gradual change from dark to light means chance to forage.
- dish eye: if darkness spreads from one point to the whole field of vision: danger, initiate response.
- higher up the scale: two points on a contrasting background - probable face...

The primed structure should basically be able to classify shapes, movements, elementary spatial information, proximity of own species or whatever we establish is genetically encoded for us.

This is how every actual human-like AI neural net would start out, after which it would be fed real-world information, in the hope that it would develop a world-view similar to ours.

What I see as a dead-end is to feed AI sensory inputs quite unlike ours (multiple 2D cameras, radar, IR, etc), expecting it to see the world like we do.
posted by Laotic at 9:11 AM on November 2, 2017 [1 favorite]


To make an adversarial image, do you need to be able to run the classifier many times? I guess what I'm wondering is how much of a threat this is to real-world systems. Can I trick a self-driving car into running through a stop light without access to the car's software?
posted by qxntpqbbbqxl at 9:17 AM on November 2, 2017


>> No one really knows how neural net systems work from a fundamental perspective.
> See, that seems scarier to me -- well, "scarier", you get the idea -- than the prospect of humans being able to trip up neural networks.

"We're not quite sure why it happens, but it's a useful effect" gets said a lot more than we might like. Remember, as a species we still don't know how gravity works.

Which isn't to say that we should run off and trust neural nets just yet, but that's more a matter of time and reliability than understanding.
posted by Tell Me No Lies at 9:19 AM on November 2, 2017


So basically these are just optical illusions for computers.
posted by Tell Me No Lies at 9:19 AM on November 2, 2017 [4 favorites]


But if your task was to distinguish solely between cats and guacamole, you would have selected different input features and possibly designed your entire classifier differently.

So of course this gets represented breathlessly as 'AI confuses turtle for gun', as though that was a problem someone had set out to solve. A properly designed turtle-gun discriminator might be near-impossible to attack adversarially.


Except for two things: (i) it's not really practical to pick all pairs x,y and design an x-y discriminator for each (nor is this likely to be what humans do when they learn to discriminate between turtles and guns), and (ii) this stuff is being extremely overbilled in a lot of venues, both popular and technical, as the be-all of "AI" and cognitive science. A lot of people within CS are (at least verbally) saying things like -- now we have solved (or are close to solving) vision/language/whatever, and asking what the point of all this other cognitive science research is in related fields. But as long as systems like this, which we don't really even understand themselves, continue to make *extremely* non-human-like errors, we haven't moved an inch towards solving human vision for most values of "solve". All we have is a new object of study. From this perspective I don't really think the reporting on this sort of case is all that breathless.
posted by advil at 9:23 AM on November 2, 2017 [2 favorites]


Your self-driving car might slow down and be extra careful in the suburban neighborhood full of yard sales. Horrors!

Or someone who wanted to snarl traffic for hours could put up lots of "Missing Dog" posters that the AI reads as stop signs. Putting them on freeway poles would be especially entertaining - for people watching from a long distance away.

That's especially troublesome, because you want the recognition program to notice bent stop signs, tilted stop signs, stop signs with a torn section or part that's missing, and so on. You can't just decide to lower the sensitivity and recognize fewer stop signs.

And what the article doesn't say is that it means you can make a picture of a rifle that the AI recognizes as a turtle. Twitter and Facebook are using AI to recognize ads with forbidden content, but that only works if the picture analysis is accurate.

I'm expecting hardcore porn that gets recognized as cat pictures, with captions that seem like terrible puns if you recognize the picture, and that read as innocuous meme fodder for the AI that doesn't.
posted by ErisLordFreedom at 9:27 AM on November 2, 2017 [1 favorite]


Kyle McDonald may be right that "the industry" is protecting against this, but showing that a different image classifier isn't fooled by a turtle that was built to fool only a specific one doesn't mean anything. This paper is about attacking classifiers where the attacker has access to the model, but it says their next paper won't have that limitation.

I read somewhere that Apple built a second face ID classifier specifically to combat attacks against the first. Maybe using multiple classifiers would drastically improve security for important deployments of these things.
posted by macrael at 10:00 AM on November 2, 2017


A properly designed turtle-gun discriminator might be near-impossible to attack adversarially.
Define "properly designed"? Last I saw, the state-of-the-art in "properly designed" deep learning for image classification requires the designers to create their own adversarial systems, and keep those systems in the loop: while the adversary is being trained to try to create perturbed images that get misclassified after as small a perturbation as possible, while the primary net is being trained to try to properly classify those images anyway.

But here's a catch: can you define "small perturbation"? The paper I read used l2 norm IIRC (so a perturbation looked like "faint white noise" to a human eye, even if it looked like "the epitome of the difference between a cat and guacamole" to an exploitable image classifier). But a rotation can be tiny in angle while being huge in l2. A horizontal reflection is a "tiny change" in some sense, so we'd better test against those too. How about small perspective skew, or other small-magnitude positional remapping? That's all I can think of offhand, but if I've missed anything then I'm vulnerable to the first person who didn't miss it.
Can I trick a self-driving car into running through a stop light without access to the car's software?
My intuition says "probably not", but people who actually know what they're talking about say
Adversarial examples generalize across models trained to perform the same task, even if those models have different architectures and were trained on a different training set. This means an attacker can train their own model, generate adversarial examples against it, and then deploy those adversarial examples against a model they do not have access to.
posted by roystgnr at 10:04 AM on November 2, 2017 [1 favorite]
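(To make the "small perturbation" bit concrete, here is a minimal one-step, FGSM-style sketch under a per-pixel, l-infinity budget. The model stands in for any differentiable Keras classifier that takes pixels in [0, 1]; the actual turtle work uses a much fancier optimization that also survives rotation, scale, and lighting changes.)

import tensorflow as tf

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Return image plus a perturbation of at most epsilon per pixel."""
    image = tf.convert_to_tensor(image)
    with tf.GradientTape() as tape:
        tape.watch(image)
        loss = tf.keras.losses.sparse_categorical_crossentropy(true_label, model(image))
    grad = tape.gradient(loss, image)
    # Step *up* the loss gradient: a tiny change per pixel can mean a huge change in output.
    return tf.clip_by_value(image + epsilon * tf.sign(grad), 0.0, 1.0)

# The keep-the-adversary-in-the-loop defense described above ("adversarial training")
# just feeds images like these back into each training batch, still labeled with the
# true class.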


"We're not quite sure why it happens, but it's a useful effect" gets said a lot more than we might like. Remember, as a species we still don't know how gravity works.
Right, but neural networks are not an effect, and machine learning isn't a natural phenomenon -- these are human-made technologies. There's a difference in scope between "hm, I don't know how electromagnetic radiation works" and "hm, I don't know how this solar-powered gizmo sorter that I designed and am trying to patent so that it can [turn into a widespread household item / be embedded within the national defense system / revolutionize the gizmo industry] works".
posted by inconstant at 10:06 AM on November 2, 2017


What happens when your Alexa can be activated by mysterious voices in the dark?
posted by emjaybee at 10:08 AM on November 2, 2017 [1 favorite]


roystgnr, I don't know of anyone who builds ML (machine learning) systems using that approach. It's probably good at this point to think about adversarial systems, but no amount of adversarial system development will wallpaper over insufficient training data, or an unrepresentative test set, or use of synthetic or proxy data in place of real-world data, or inattention to error analysis post-evaluation, or development of the entire system in isolation from its real-world deployment environment, or any of a ton of other mistakes that ML developers make on a regular basis.
posted by scolbath at 10:09 AM on November 2, 2017


Next version of the AI:

LO-CA-TING [LOGO] PROG-RAM-MING LAN-GUAGE IN-TER-FACE...

TUR-TLE FAI-LING TO RES-POND TO [FIREWEAPON] COM-MAND... CA-PA-BI-LI-TY NON-EXIS-TENT...

OB-JECT CLAS-SIFI-CA-TION: HARM-LESS PROG-RAM-MA-BLE TUR-TLE FRIEND...
posted by wwwwolf at 10:12 AM on November 2, 2017


"Box turtle with the 40 watt range..."
"Hey only what you see, pal"
posted by Flashman at 10:32 AM on November 2, 2017 [4 favorites]


I think there's worth in showing the ways in which NNs can fail, even when they aren't trained "properly."

I mean, it's not like companies are having trouble hiring people and are taking anyone who can pretend they understand deep learning long enough to get through an interview or anything.
posted by quaking fajita at 11:14 AM on November 2, 2017 [1 favorite]


Right, this is the kind of insight I am dying to see in one of these adversarial papers. WHY does changing one pixel on a cat make it guacamole? :-)
posted by scolbath at 11:39 AM on November 2, 2017


qxntpqbbbqxl: Can I trick a self-driving car into running through a stop light without access to the car's software?

roystgnr: My intuition says "probably not" ...

Depends on how many ways the car recognizes a stop light. If V2I comes into play, infrastructure and vehicles will talk to each other (the fact that it's even an "if" is because Trump's administration may end the requirement for V2V communications ... because it's a restriction on industry? Fuck your rising death tolls, the money must flow!)

It'd be harder to make a car ignore a stop light if it recognizes the light as a part of the infrastructure, and it recognizes vehicles moving around it (slowing ahead in the same lane, speeding up at the cross street). It's likely that autonomous vehicles won't only rely on visual signals.
posted by filthy light thief at 12:23 PM on November 2, 2017


And back to the main topic: Google’s AI Wizard Unveils a New Twist on Neural Networks (Tom Simonite for Wired, Nov. 1, 2017)
Late last week, [Google researcher, former University of Toronto professor, Geoff Hinton] released two research papers that he says prove out an idea he’s been mulling for almost 40 years. “It’s made a lot of intuitive sense to me for a very long time, it just hasn’t worked well,” Hinton says. “We’ve finally got something that works well.”

Hinton’s new approach, known as capsule networks, is a twist on neural networks intended to make machines better able to understand the world through images or video. In one of the papers posted last week, Hinton’s capsule networks matched the accuracy of the best previous techniques on a standard test of how well software can learn to recognize handwritten digits.

In the second, capsule networks almost halved the best previous error rate on a test that challenges software to recognize toys such as trucks and cars from different angles. Hinton has been working on his new technique with colleagues Sara Sabour and Nicholas Frosst at Google’s Toronto office.

Capsule networks aim to remedy a weakness of today’s machine-learning systems that limits their effectiveness. Image-recognition software in use today by Google and others needs a large number of example photos to learn to reliably recognize objects in all kinds of situations. That’s because the software isn’t very good at generalizing what it learns to new scenarios, for example understanding that an object is the same when seen from a new viewpoint.

To teach a computer to recognize a cat from many angles, for example, could require thousands of photos covering a variety of perspectives. Human children don’t need such explicit and extensive training to learn to recognize a household pet.

Hinton’s idea for narrowing the gulf between the best AI systems and ordinary toddlers is to build a little more knowledge of the world into computer-vision software. Capsules—small groups of crude virtual neurons—are designed to track different parts of an object, such as a cat’s nose and ears, and their relative positions in space. A network of many capsules can use that awareness to understand when a new scene is in fact a different view of something it has seen before.
posted by filthy light thief at 12:27 PM on November 2, 2017 [1 favorite]


"Siri, who or what am I?"

"You're guacamole."

"You think I'm guacamole?"

"No... you are, in fact, guacamole."

"I've used my face to unlock my iPhone X hundreds of times now, and you've never mentioned that."

"Well, you seemed nice regardless, so I never thought to mention it."
posted by Halloween Jack at 12:28 PM on November 2, 2017 [1 favorite]


Guacamole, broccoli, burrito, soup, cucumber, cheeseburger. Jeez you guys, just FEED the AI already. It's obviously hungry.

I HAVE NO MOUTH AND I MUST EAT.
posted by loquacious at 12:29 PM on November 2, 2017 [3 favorites]


roystgnr, I don' t know of anyone who builds ML (machine learning) systems using that approach.
Well, here's the seminal paper on the topic. It's only three years old, but in that time it's amassed a thousand citations (one of which I presume is the image-specific case I was recalling...) and about a hundred thousand Google hits. You might know someone using that approach without knowing they're doing so.
no amount of adversarial system development will wallpaper over insufficient training data
That's a tautology, so sure. If you don't have sufficient training data, adversarial training won't fix that. However, if you would have had sufficient training data iff perturbed data was included, and you fail to account for that, then you now have insufficient training data; it will probably come back to bite you when you hit a large enough volume of real world data, and it will definitely bite you if a real adversary ever gets to feed you data. That seems important. Tell two friends and so on.
posted by roystgnr at 12:44 PM on November 2, 2017


If it doesn't recognize something, does it just default to "guacamole?" Honestly, what is there in that image that is even remotely suggestive of anything within a mile of guacamole?

It has learned some way to transform the inputs it was given into a point in a feature space - picture an actual 3D space, although this feature space almost certainly has many more dimensions. It has also learned from the training set that a certain area of this space corresponds to the concept known as 'cat', and a different area to the concept known as 'guacamole'. This picture has ended up, after the transform, closer to the guacamole area than to any other.

The thing with machine learning is that, in general, the more impressive the things it does, the harder it is to explain how it did them.
posted by kersplunk at 1:16 PM on November 2, 2017 [1 favorite]
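(A toy, three-dimensional version of the picture kersplunk is painting. A real network learns both the transform and the class regions, the space has hundreds or thousands of dimensions, and every number below is invented.)

import numpy as np

class_centers = {                                  # learned "areas" of feature space
    "cat":       np.array([0.9, 0.1, 0.2]),
    "guacamole": np.array([0.2, 0.8, 0.3]),
}

def classify(feature_point):
    # Whichever class region the point lands closest to wins, however far away it is.
    return min(class_centers, key=lambda c: np.linalg.norm(feature_point - class_centers[c]))

normal_cat    = np.array([0.85, 0.15, 0.25])       # lands squarely in the cat region
perturbed_cat = np.array([0.45, 0.60, 0.30])       # tiny pixel changes, nudged over the boundary

print(classify(normal_cat))      # cat
print(classify(perturbed_cat))   # guacamole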


solotoro: don't you mean a million paperclips?
posted by drfu at 1:39 PM on November 2, 2017


Maybe the machine classifiers are in fact correct, and it is our internal classifiers that are wrong. I mean, sure, that putative turtle sure looks like a turtle, but maybe my internal gun/turtle classifier is wrong and it really is a gun. Perhaps our epidemic of gun violence is caused by guns coming to shore and laying hundreds of gun eggs, and we've never noticed because we've all been misclassifying them as turtles.
posted by Pyry at 2:43 PM on November 2, 2017 [2 favorites]


Guacamole, broccoli, burrito, soup, cucumber, cheeseburger.

I'm going to vote no to AI restos that serve, for example, feline-based dips.

However, a reality cooking show with AI contestants? That could be gold.
posted by bonehead at 2:52 PM on November 2, 2017


$20 says this ends with black children carrying turtles being murdered by police bots.
posted by stet at 4:22 PM on November 2, 2017 [1 favorite]


roystgnr: GANs are a different beast than what the author of the original paper was describing - unfortunately, the use of "adversarial" is somewhat overloaded in this context.

However, I think you are misunderstanding the issue with 'in/sufficient training data'. Perturbed data does not really count as training data. Robust classifiers are trained on diverse data sets - where diversity means far more than a slight rotation or a flipped pixel - it means a whole different cat. This is machine learning 101. You are trying to force the neural net to, within a certain space, generalize a problem - and it can't generalize what it hasn't been exposed to. Sure, it might now be robust to a few pixel flips, or minor rotations - the limitation of GANs, for instance, is that they only generate within a certain region of *their* training. But an undertrained model is likely full of holes left by the images it hasn't been exposed to. All you have done is raise a bunch of prior probabilities for minor variations on the image. If you train a model on insufficient data, and deploy it in the real world - you deserve the adversaries that find you.
posted by scolbath at 5:38 PM on November 2, 2017


Why does everyone here assume that it is an image of a 3D-printed turtle and not a gun? What if the AI is right, and we're wrong?

What if the AI is instead trying to sow seeds of doubt in us? What if the AI wants us to be conditioned to be afraid of turtles? What if the AI wants to train us that we want to eat kittens with chips?
posted by Nanukthedog at 6:33 PM on November 2, 2017 [2 favorites]


So I've been building training sets and evaluating retrained ImageNet image classifiers for weeks now. If you want to know how this works, here goes:

An image classifier is trained on a training set of data, which ideally is tens of thousands to millions of images. Each image has a label which is the correct (desired) classification of the image. Cat, gun, turtle, and guacamole are all examples of labels. The training data has (hopefully) hundreds to thousands of examples of each. During a single training step (of which there are thousands) it selects some random subset of the training data, say 100 images, and tries to classify each member (it gives its best guess so far based on the weights assigned to various basically-indescribable features of the image -- these are not readily human-relatable, and might as well be described as an alien intelligence). It then adjusts those weights so that the "fit" of the weights to the (correct) classification is improved. Rinse and repeat many many times. At the end you have an evaluation function which will, for any image, output a probability for each label you taught it about (cat, guacamole, gun, etc.). Typically you want the highest probability to be the correct one, but research papers often talk about a top-5 rate (that is, how frequently is the correct label one of the top 5 highest probabilities). At the end of the process you test your classifier by asking it to guess labels on a set of evaluation images and counting how many it gets right.

The best image classifiers around typically achieve an overall success rate in the mid-80% range. This means that it's extremely easy to find images that it misclassifies. Now, the question is whether the label it outputs is close to the correct one (husky vs. malamute) or not (gun vs. turtle). In my experience classifiers do not generate errors uniformly at random. Some labels it will recognize very easily (95% of the time it's correct) and some it will not (10% of the time it's correct). When it is wrong it is often very badly wrong (from a human perspective).

Another thing is the starting point in the above-described process; given that a new classifier can take weeks to train on a $30,000 computer chock full of GPUs, most people do what I've been doing and opt for the much-faster route of retraining an existing classifier that has already been through this process. The downside of that is that you don't know how well the starting classifier is suited to your particular problem domain. For example, imagine that for whatever reason (this is just an example) the classifier doesn't recognize the color red as a distinguishing feature. You're going to have a hell of a time teaching it to distinguish, say, roses from one another because color is probably key (not a botanist, I don't know if there's some other feature besides color to distinguish flowers).

The weirdest thing about all this is that you cannot typically explain why a particular model performs the way that it does. All you can do is alter the way you train it (either the input data or the amount of computation, and a few other knobs) and evaluate the outcome. You're basically feeling your way around N-dimensional space for the right weight vector for your actual problem. It's possible to overfit your model too, where it gets too wrapped up in the training data and can't generalize to broader real-world images.

At any rate, my takeaway is that there's still room for some breakthroughs in the field. The state of the art is still not good enough for many real-world problems. 85% is great for an academic paper, but not good enough for human, real-world problems.
posted by axiom at 8:31 PM on November 2, 2017 [3 favorites]
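(A compressed sketch of the retrain-an-existing-classifier route axiom describes, in Keras-style transfer learning. The folder names, label count, epochs, and base model are placeholders; a real run uses far more data and compute, plus all the other knobs he mentions.)

import tensorflow as tf

NUM_LABELS = 4   # e.g. cat, gun, turtle, guacamole

# One subfolder per label; these directory names are hypothetical.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "training_photos", image_size=(224, 224), batch_size=32)
eval_ds = tf.keras.utils.image_dataset_from_directory(
    "evaluation_photos", image_size=(224, 224), batch_size=32)

# Start from a classifier someone else already spent the GPU-weeks on, and freeze it.
base = tf.keras.applications.MobileNetV2(
    weights="imagenet", include_top=False, pooling="avg", input_shape=(224, 224, 3))
base.trainable = False

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),          # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.Dense(NUM_LABELS, activation="softmax"),    # the new label head
])

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy",
             tf.keras.metrics.SparseTopKCategoricalAccuracy(k=2, name="top_2")])

# Each training step: grab a random batch, guess labels, nudge the weights toward a better fit.
model.fit(train_ds, epochs=5)

# Then count how many evaluation images it gets right -- the success rate axiom is talking about.
print(model.evaluate(eval_ds))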


Right, but neural networks are not an effect, and machine learning isn't a natural phenomenon -- these are human-made technologies.

Well, sort of. Humans add very little to the neural network training process. Basically we stand on the top of the mountain and decide what direction to kick the ball. After that it's all up to natural laws.
posted by Tell Me No Lies at 9:25 PM on November 2, 2017


Google's AI thinks this turtle looks like a gun.

America. It's turtles, all the way down.
posted by LeLiLo at 10:40 PM on November 2, 2017 [1 favorite]


Goddamn Turtle-guns. I knew this is how it would all end.
posted by Chocomog at 5:59 AM on November 4, 2017 [1 favorite]


Ctrl-F ED-209

I am disappoint.
posted by mikelieman at 6:36 AM on November 4, 2017




This thread has been archived and is closed to new comments