Your Goo Goo Googly Eyes
March 19, 2015 3:54 PM   Subscribe

Last week, a trio of Google researchers published a paper on a new artificial intelligence system dubbed FaceNet, which they claim represents the most accurate approach yet to recognizing human faces. FaceNet achieved nearly 100-percent accuracy on a popular facial-recognition dataset called Labeled Faces in the Wild. The paper [pdf] (title song reference)
posted by sammyo (28 comments total) 7 users marked this as a favorite
 
Don't Be Evil
posted by Sys Rq at 4:20 PM on March 19, 2015 [5 favorites]


C'mon, you guys are being ridiculous. What could possibly go wrong with teaching robots how to recognize delicious, delicious human faces.
posted by sexyrobot at 4:36 PM on March 19, 2015 [24 favorites]


^^^ eponimabitconcerned
posted by me & my monkey at 4:40 PM on March 19, 2015 [27 favorites]


Who are the a-holes in the world that think this is a good idea? Or that this will lead to anything other than more social control? Notice I didn't use the words totalitarianism or a police state. I don't necessarily think those are the logical end points of facial recognition technology, but boy will they really benefit from its use.
posted by Conrad-Casserole at 4:43 PM on March 19, 2015 [8 favorites]


SKYNET FOILED BY BALACLAVA
posted by benzenedream at 4:50 PM on March 19, 2015 [1 favorite]


SKYNET FOILED BY BALACLAVA

Terminator would have been a five minute movie if a Greek grandmother played the lead.
posted by dr_dank at 5:06 PM on March 19, 2015 [4 favorites]


Who are the a-holes in the world that think this is a good idea? Or that will lead to anything other than more social control?

I don't know - maybe they will use it to target ads to me that make me smile. I'd willingly give up some of my freedom for that. Heck, I already do give up my free time searching for videos that make me smile! :)
posted by vorpal bunny at 5:10 PM on March 19, 2015


Don't Be Evil

Google motto 2004: Don't be evil
Google motto 2010: Evil is tricky to define
Google motto 2013: We make military robots

posted by a lungful of dragon at 5:12 PM on March 19, 2015 [46 favorites]


Combine this with the research into microexpressions and we get closer to the day when even your thoughts are no longer private.
posted by indubitable at 5:18 PM on March 19, 2015 [2 favorites]


In the fine print: when they expanded it to a dataset of 260 million, accuracy fell to 86%, which is a high error rate for doing this at scale. Besides, they already know where you are with pretty high certainty for most of the day because of all the data they already have from all your electronics.
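To put that in perspective, here's a back-of-the-envelope sketch. It takes the 86% figure at face value as a per-face error rate, which is a simplifying assumption on my part rather than the paper's framing:

```python
# Rough scale math: 86% accuracy over a 260-million-face dataset.
# Both numbers come from the discussion above; treating "accuracy"
# as a simple per-face error rate is an illustrative assumption.
gallery_size = 260_000_000
accuracy = 0.86

misidentified = gallery_size * (1 - accuracy)
print(f"{misidentified:,.0f}")  # 36,400,000
```

Even a seemingly small error rate turns into tens of millions of mistakes at that scale.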
posted by humanfont at 5:37 PM on March 19, 2015


Who are the a-holes in the world that think this is a good idea? Or that this will lead to anything other than more social control? Notice I didn't use the words totalitarianism or a police state. I don't necessarily think those are the logical end points of facial recognition technology, but boy will they really benefit from its use.

Firstly, this is not how these researchers think, to a large extent. You don't really get to stop mathematics--the ideas are out there and would be out there despite any one person drawing a line in the sand and deciding that deep nets are too far.

Secondly, the uses are manifold. The techniques that go into deep nets are usable for language, audio, video, and (of course) image processing. So figuring out techniques to train up facial recognition might mean real-time speech translation in the future (for an off-the-wall example).

Thirdly, as humanfont points out, this is not a real advance. They got near-100% accuracy on a dataset that they were basically playing to--if I told you in advance the questions on a final exam, you'd probably get 100% also. It's not that cut and dried of course, but the accuracy that they got on a more general dataset is still not outperforming much simpler algorithms by much (if at all).

This is very hyperbolic, press-release journalism.
posted by TypographicalError at 5:42 PM on March 19, 2015 [5 favorites]


Combine this with the research into microexpressions and we get closer to the day when even your thoughts are no longer private.

Botox suddenly becomes a more attractive idea.
posted by emjaybee at 5:42 PM on March 19, 2015 [3 favorites]


The 99.67% accuracy was just for determining whether two faces in the dataset were the same person or not.
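(For context: that kind of pairwise verification typically reduces to thresholding the distance between learned embedding vectors. A minimal sketch follows; the 128-dimensional vectors match the paper's embedding size, but the threshold value and the random "embeddings" are stand-ins for a real trained model's output.)

```python
import numpy as np

def same_person(emb_a, emb_b, threshold=1.1):
    """Verification: are two face embeddings within a distance threshold?

    Real embeddings would come from a trained network; the threshold
    here is made up for illustration.
    """
    return np.linalg.norm(emb_a - emb_b) < threshold

# Toy example with fake 128-dim embeddings.
rng = np.random.default_rng(0)
a = rng.normal(size=128)
b = a + rng.normal(scale=0.01, size=128)   # near-duplicate of a
c = rng.normal(size=128)                   # unrelated vector

print(same_person(a, b))  # True: tiny perturbation stays under threshold
print(same_person(a, c))  # False: independent vectors are far apart
```

Note that verifying "same or different" for a given pair is a much easier problem than picking one face out of hundreds of millions.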

So, hold your horses for a couple more years before you start panicking about how to avoid a jaywalking ticket in your reality and how to successfully elude the govt agents in your fantasies.
posted by TheLittlePrince at 5:45 PM on March 19, 2015


Who are the a-holes in the world that think this is a good idea? Or that will lead to anything other than more social control?

WHY ARE THESE INVENTORS INVENTING THINGS? STOP INVENTING THINGS, JERKS.
posted by Going To Maine at 6:04 PM on March 19, 2015 [8 favorites]


I was considering doing a post about this other example of Evil Google, but I'll just leave it here: Google is helping to fund the group that’s trying to kill Obamacare in the Supreme Court (trigger warning: Mark "eXile" Ames). Google (and Facebook) are safely tucked in bed with the Kochs.
posted by oneswellfoop at 6:06 PM on March 19, 2015 [2 favorites]


See, the thing is, while I share all of the above concerns... as a prosopagnosiac, I'd actually be interested in a version of Google Glass that had no ability to store photos or videos, but could do facial recognition. (It's the only feature that would make me interested in such technology. The ability to reliably identify friends and coworkers on sight? Yes, please.)
posted by Shmuel510 at 7:29 PM on March 19, 2015 [1 favorite]


Meanwhile, why is it considered OK to publish papers about software without including (or linking to) working source code for the software?
posted by enf at 8:53 PM on March 19, 2015 [3 favorites]


I think I'm close to supporting niqab, if only because it's going to be the only way to have a life private from corporate and agency cameras.
posted by five fresh fish at 10:24 PM on March 19, 2015 [1 favorite]


I was considering doing a post about this other example of Evil Google, but I'll just leave it here: Google is helping to fund the group that’s trying to kill Obamacare in the Supreme Court (trigger warning: Mark "eXile" Ames). Google (and Facebook) are safely tucked in bed with the Kochs.

I wonder if it is likely that Brin, Page and Schmidt financially support right-wing extremist groups that are aligned with their long-term tax repatriation ("tax holiday") agenda. The climate denial and anti-insurance stuff is just part of the ride — the people at the top of Google are insulated from the consequences of climate change and increased healthcare costs, no matter how that plays out, but rich people almost never want to pay any taxes.
posted by a lungful of dragon at 11:24 PM on March 19, 2015


I think I'm close to supporting niqab, if only because it's going to be the only way to have a life private from corporate and agency cameras.

Then they'll just use gait analysis or some other method instead. The move to a Panopticon state (at least in the Western nations) is too tempting for monied interests to stop the snowball at this time. If the virtual Hoovering (in both senses) of personal data by intelligence agencies didn't tip the balance in favour of privacy, then it's extremely unlikely that this "progress" will be slowed or stopped.

Whether it be claims that it is preventing terrorism or simply an additional way to direct-market to individuals, the surveillance train is sadly going to keep going. It's a bit depressing, to be fair, but barring an outright revolution in public thinking around the topic I see it as an inevitability.
posted by longbaugh at 2:44 AM on March 20, 2015 [1 favorite]


Have they fixed the problem where this doesn't work for faces that aren't "white"?
posted by infini at 5:54 AM on March 20, 2015


As far as I can tell, all three of the researchers on this were doing the same kind of work before being hired by Google. Does that make them more or less evil?
posted by smackfu at 7:14 AM on March 20, 2015


More or less.
posted by a lungful of dragon at 1:32 PM on March 20, 2015


enf: Meanwhile, why is it considered OK to publish papers about software without including (or linking to) working source code for the software?

I have been a CS academic for a couple of decades, and this continues to puzzle me. Sometimes, purported screenshots seem to be considered evidence for systems’ functionality. I daresay mathematics would be a considerably easier field, if you could close-source everything but the QED.

As for this work: pretty cool. I wonder how it compares with Facebook’s similar advances. My only worry with (high functioning) AI is that I don’t think I’ll experience it in my life time, unless some truly radical advances are made.
posted by bouvin at 5:07 PM on March 20, 2015 [1 favorite]


Thing is - apart from the facial shit, frankly, the entire course of modern AI and even internetworked communication is toward building a layer of metadata over reality. The ontological web of Sir Tim Berners-Lee isn't just about making machines talk to each other and know what they're saying on a deeper level than just text on a page; it will be used in applications like AR and AI to inform machines about the world around us. You can sort of get an idea of this with the Google Glass project, but pushing it to the next level will take vision research to push it into identification of all things quickly and easily. Of course, generic things are easier to do than specifics, so "person" is easier than "Bob Jones, 123 America Ln", etc...

If we are to have an intelligence, it needs to be informed of the world around it and it needs to learn about the world around it, and having an ontological scheme for its map of the world means we're going to build these things in the inevitable march towards AI.

Unfortunately, that means the assholes will have this ability, and will ultimately use it for various nefarious reasons. I can't recall who it was - it might have been Berners-Lee, or someone else who previously supported AI and such - but he's urging us to consider the ramifications of these things now, rather than later. I don't think it was Hawking (though I think he's wary of AI, yeah?).

In the end, though, if an AI is going to inhabit the universe, it needs to identify what's in that universe, and it needs to be able to create a string of relationships, descriptors, etc... It could inhabit a lower-dimensional, simulated world, but that's still very primitive.
posted by symbioid at 5:22 PM on March 20, 2015 [1 favorite]


What recent thread had the outline of all the different projects google is working on right now, and how with the whole military robots and independent solar powered flying access points stuff we're totally fucked?
posted by emptythought at 6:12 PM on March 20, 2015


You won't be fucked, so long as you have marketable skills and don't mind an unpaid internship in one of Google's prison robo-camps. It's like the cheerful Über guy keeps saying: You just need to relax, embrace the future, and let the hooks do their work.
posted by a lungful of dragon at 1:38 AM on March 21, 2015 [1 favorite]


Shmuel510: "See, the thing is, while I share all of the above concerns... as a prosopagnosiac, I'd actually be interested in a version of Google Glass that had no ability to store photos or videos, but could do facial recognition. (It's the only feature that would make me interested in such technology. The ability to reliably identify friends and coworkers on sight? Yes, please.)"

A lapel camera and wrist band that would vibrate out people's names in Morse would be a killer app for me.
posted by Mitheral at 4:21 PM on March 23, 2015




This thread has been archived and is closed to new comments