You look familiar. Have we met before?
June 11, 2017 3:04 PM   Subscribe

AI learns how to create human faces from scratch. Dr. Mike Tyka, a biophysicist working for Google, has been using Generative Adversarial Networks (GANs) to create images of human faces with a fair amount of success. Dr. Tyka breaks his process down for us in his blog post Work in progress: Portraits of Imaginary people.
posted by scalefree (38 comments total) 18 users marked this as a favorite
 
Is it still considered Uncanny Valley if it's made by a computer?
posted by Cat Pie Hurts at 3:19 PM on June 11, 2017 [1 favorite]


Those are the faces of the machines which will enslave us.
posted by killdevil at 3:26 PM on June 11, 2017 [7 favorites]


Are they all white? Or just mostly white?

Also, what is the mechanism behind the backgrounds? They're a constant in most of the AI imagery I've seen. They're really interesting.
posted by nevercalm at 3:26 PM on June 11, 2017




Bah; your parents managed to create your face from a couple of things they had lying around the house.
posted by ricochet biscuit at 3:35 PM on June 11, 2017 [24 favorites]


looks like the late 90s photoshop "art" everyone was uploading to their personal WWW site

also, they look like serial killers the lot of them
posted by Foci for Analysis at 3:54 PM on June 11, 2017 [3 favorites]


I think one thing that is apparent in those images is that they were drawn by something that is making no progress in its machine learning towards a concept of symmetry. Symmetry (and asymmetries of varying degree) is fundamental to our ability to categorise and remember vast numbers of individual faces. While ours may not be the only, or best, way to distinguish faces, our mental processes are clearly better than those of the AI, since we can immediately identify the images as 'bogus', and presumably the AI would not be able to do so. The process demonstrated in the images is effective in giving individual portraits a distinct feel in terms of the lighting and colour, but those jarring identikit faces, with stunning mismatches between eyes, odd creases and twists and the complete disregard for symmetry, make it clear that the AI only makes predictions based on patterns; it can't make that leap to a connection between a real, physical face and the images it's attempting to create, a connection that enables us to immediately suspect that there's something wrong when we look at them.
posted by pipeski at 4:01 PM on June 11, 2017 [4 favorites]


A few of them look South-East Asian to me, and many of the others look mixed race. Mind you, these are what the author calls "cherry-picked" examples, but to me they mostly look as though they're recovering from heavy facial surgery.
posted by Joe in Australia at 4:03 PM on June 11, 2017 [1 favorite]


Are they all white? Or just mostly white?

The difficulty here is that, in most typically employed senses of that question, there is no fact of the matter about it. After all, it's not just that they aren't people; it's also that they were not produced with any conceptual awareness of the existence of racial difference. Talking about the outputs of the process as possessing race is accordingly going to be difficult. That's not to say that questions about race aren't relevant, though.

A fairly simple question is "what is the ethnic makeup of the training set?". Looking at some of the features that the process generates, I'm pretty confident that it's not exclusively white, but I'd be very interested to find out the actual breakdown. A more difficult question is "which of these faces do we read as white and non-white, and which do we ascribe a particular non-white identity to, and why?". Of the 22 faces cherry-picked from the outputs, I reckon I read 7 as non-white. But I think it's telling that I'd be less confident about making a positive identification of racial identity than offering that negative othering sort of classification. It's difficult, and uncomfortable, and probably reveals more about my perceptual biases than about the training data or the algorithm. Certainly I don't enjoy being brought into direct confrontation with the way my mind easily classifies certain faces as "other" while struggling to give them any clearer identity.

This sort of work certainly raises a bunch of concerns about representation, and about how human biases can be hidden behind supposedly mechanised processes, but I think it would take a bit of reading to work out exactly what the points of interest and/or concern are with regard to this specific work. On the other hand, thinking about how I ascribe "whiteness" in general is revealing to me on a fairly immediate level of interaction.
posted by howfar at 4:04 PM on June 11, 2017 [10 favorites]


They all look "other" to me. They look like ghosts in a mirror, or some other horror movie image. What they do not look like is regular people of any race.
posted by mermayd at 4:17 PM on June 11, 2017 [4 favorites]


They all look like they got in bad car crashes several years ago.
posted by rhizome at 4:36 PM on June 11, 2017 [1 favorite]


Request: put these through the smiler on FaceApp
posted by rhizome at 4:42 PM on June 11, 2017 [5 favorites]


This reminds me of that Japanese horror movie where the kids cursed by the evil ghost have their faces blurred in photographs. It was like The Ring or The Grudge, but not either of those. I just can't remember.

Bethesda should get on this for their open world games. Unique NPC faces in Elder Scrolls and Fallout!
posted by adept256 at 4:48 PM on June 11, 2017 [2 favorites]


These images are sampled from (an approximation to) the distribution over images the model was trained with, so you would expect the distribution of attributes such as ethnicity, age, pose, lighting conditions, and expression to reflect the training data as well. If the data are sparse in certain combinations of attributes, then the samples will be too.
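To make that concrete, here's a toy sketch in plain Python. The labels and counts below are invented for illustration (nothing to do with the actual training set): fit the most trivial possible "model" of an attribute, its empirical frequency, and samples from it reproduce the training frequencies, sparse categories included.

```python
import random
from collections import Counter

# Hypothetical training-set labels for some attribute; the counts are
# made up. Category "C" is deliberately sparse.
training_labels = ["A"] * 800 + ["B"] * 150 + ["C"] * 50

# "Fit" a trivial model: the empirical distribution over the attribute.
counts = Counter(training_labels)
total = sum(counts.values())
probs = {k: v / total for k, v in counts.items()}

# Sampling from the fitted model mirrors the training frequencies,
# so the rare category "C" stays rare in the samples too.
random.seed(0)
samples = random.choices(list(probs), weights=list(probs.values()), k=10_000)
sample_freq = Counter(samples)
print({k: round(v / 10_000, 2) for k, v in sample_freq.items()})
```

A GAN's generator is vastly more complicated than a frequency table, but the same logic applies: it can only remix what it saw, at roughly the rates it saw it.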


I think one thing that is apparent in those images is that they were drawn by something that is making no progress in its machine learning towards a concept of symmetry.


I've done some pretty similar work recently, and I feel for the author: it's really tough getting the generator to create symmetric features. For example, glasses often look like a combination of two different styles. I don't think this is a problem with the learning, though.

The problem is that learning the distribution over combinations of features is harder than learning about individual features. Look at the sixth image from the top: most of the facial features belong to a young lady, but the model has made an unusual choice in generating graying hair on top of her eyebrows. This is the same problem as symmetry-breaking: the sample is violating a constraint on the combination of features. The model has a harder time learning the correct combinations, but note that it's not totally failing: skin tone is mostly consistent, eyebrow style is consistent, etc. The fact that it's not perfect doesn't mean that it's not able to learn, so saying that it's making "no progress ... towards a concept of symmetry" doesn't seem right to me.
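The marginals-versus-joint point can be shown with a toy example (the attributes and counts here are invented, not taken from any real face dataset): a model that learns each attribute's frequency separately, but not how they co-occur, can freely emit combinations the data never contained.

```python
import itertools

# Hypothetical attribute pairs seen in training: age and hair colour are
# correlated, so "young" never co-occurs with "gray" in the data.
training = (
    [("young", "brown")] * 45
    + [("young", "blond")] * 45
    + [("old", "gray")] * 10
)
ages = {a for a, _ in training}
hairs = {h for _, h in training}

# A model that only learns the marginals (how common each age is, how
# common each hair colour is) can combine them freely...
independent_combos = set(itertools.product(ages, hairs))

# ...which includes combinations absent from the data, e.g. the
# gray-hair-on-a-young-face failure described above.
print(sorted(independent_combos - set(training)))
```

Learning the joint distribution rules those spurious pairs out, which is exactly the harder task the generator is still fumbling.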
posted by sloafmaster at 5:20 PM on June 11, 2017 [7 favorites]


The OG GAN papers started off on faces and things like that.

Biggest empirical problem with GANs is fucked up training. DCGAN especially is incredibly finicky.
Somebody proved the gradients go approximately Cauchy, because of a Gaussian/Gaussian division during normal training for certain loss functions, and then proposed different loss functions to fix the problem, but those new loss functions suck for actually generating...
posted by hleehowon at 5:27 PM on June 11, 2017


I'm surprised that they're not all dogs.
posted by bonobothegreat at 5:39 PM on June 11, 2017 [3 favorites]


There's the "Wasserstein GAN" paper, which basically says if you just threshold the weights on the discriminator, you can kind of make a fuzzy argument that you're minimizing the Wasserstein norm with the data-generating distribution, instead of the KL divergence or whatever.
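The thresholding move itself is tiny. A sketch in plain Python, with made-up toy weights rather than anything from a real critic:

```python
def clip_critic_weights(weights, c=0.01):
    """The weight-thresholding step from the Wasserstein GAN paper,
    roughly: after each critic update, clamp every weight into [-c, c].
    The clamp crudely bounds the critic's Lipschitz constant, which is
    what the fuzzy Wasserstein-minimisation argument relies on."""
    return [max(-c, min(c, w)) for w in weights]

# Hypothetical critic weights after a gradient step:
weights = [0.5, -0.003, 0.02, -2.0]
print(clip_critic_weights(weights))  # [0.01, -0.003, 0.01, -0.01]
```

In a real implementation the clamp runs over every tensor of the critic after each optimiser step; the paper itself notes that clipping is a blunt instrument, which is part of why follow-up work replaced it.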

There's some cool recent work in training generative adversarial models using a fully Bayesian approach, MCMC on a probabilistic model instead of backpropping a neural network. It's just on arxiv and the authors haven't released the code yet but they claim training is much more robust.

If training adversarial models becomes easy that would be great -- especially if people started composing them with other techniques.
posted by vogon_poet at 5:52 PM on June 11, 2017


I've actually had a lot of success w/ futzing with initializations: initialize input with ~lognormal or pareto, as opposed to ~uniform or gaussian that seems to be the default thought
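For anyone who hasn't played with these, the stdlib can show the difference in tail behaviour between those families; this is just an illustration of the distributions, not a claim about which initialisation is right for any given net:

```python
import random

def init_vector(n, dist, seed=0):
    """Draw an n-dim input vector from one of several families. The
    heavy-tailed choices (lognormal, Pareto) occasionally throw out
    large values, unlike the uniform/Gaussian defaults."""
    rng = random.Random(seed)
    draw = {
        "uniform": lambda: rng.uniform(-1.0, 1.0),
        "gaussian": lambda: rng.gauss(0.0, 1.0),
        "lognormal": lambda: rng.lognormvariate(0.0, 1.0),
        "pareto": lambda: rng.paretovariate(2.0),
    }[dist]
    return [draw() for _ in range(n)]

heavy = init_vector(10_000, "pareto")
light = init_vector(10_000, "uniform")
print(max(heavy), max(light))  # the Pareto draws have a much fatter tail
```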

MCMC is always real difficult, and I think some nontrivial part of the niceness of the GAN is the backpropagating, which is comparatively way less of a pain in the butt
posted by hleehowon at 6:07 PM on June 11, 2017 [1 favorite]


It feels like MCMC has gotten better. Admittedly I was still a tiny child until quite recently so I don't have a real reference point for comparison. But I've been able to sample very fast from some surprisingly messy models with NUTS. It feels "plug-and-play", not the messy horrible thing you expect from its reputation. Not much worse than some of the finicky stuff you have to do to get neural networks to train.
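For reference, here's the crudest possible MCMC sampler, random-walk Metropolis, in a few lines; NUTS replaces the blind random walk with gradient-guided proposals, which is much of why it feels plug-and-play on messy models. The target here is just a standard normal, chosen so the sketch is checkable:

```python
import math
import random

def metropolis(logp, x0, steps=20_000, scale=1.0, seed=0):
    """Random-walk Metropolis: propose a Gaussian step, accept with
    probability min(1, p(proposal) / p(x)), otherwise stay put."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(steps):
        proposal = x + rng.gauss(0.0, scale)
        if rng.random() < math.exp(min(0.0, logp(proposal) - logp(x))):
            x = proposal
        samples.append(x)
    return samples

# Target: a standard normal, log p(x) = -x^2 / 2 up to a constant.
chain = metropolis(lambda x: -0.5 * x * x, x0=5.0)
kept = chain[5_000:]  # discard burn-in
print(sum(kept) / len(kept))  # should land near the target mean of 0
```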

Of course, I haven't seen the code for this paper, but I am cautiously optimistic!
posted by vogon_poet at 6:22 PM on June 11, 2017


Oh. Digital faces. *Yawn* Let me know when it involves microstitching muscles to nerves, spraying on layers of fat cells, and a healthy jolt of electricity.
posted by happyroach at 9:16 PM on June 11, 2017


Pix2Pix will generate an exacting reproduction of news anchor Lara Rense, based only on your loose line drawing.

... Dutch Public Broadcaster NPO created its own artificial intelligence system that had only been fed thousands of images of one of its anchors, Lara Rense. As reported by the popular Tumblr prostheticknowledge, the web tool is called Pix2Pix.

...

spaketh the verge
posted by sebastienbailard at 9:24 PM on June 11, 2017 [1 favorite]


Pix2Pix will generate an exacting reproduction of news anchor Lara Rense, based only on your loose line drawing.

the eyes have it
posted by OverlappingElvis at 9:59 PM on June 11, 2017 [2 favorites]


"...with a fair amount of success"

According to whom, and by what metrics?
posted by Faintdreams at 12:44 AM on June 12, 2017


Time for my customary rant:

IT'S NOT AN AI, THERE IS NO AI, STOP THE LAZY BANDWAGON JUMPING BULLSHIT PLEASE.
posted by GallonOfAlan at 2:01 AM on June 12, 2017 [2 favorites]


Self-portraits?
posted by Thella at 3:12 AM on June 12, 2017


I think one thing that is apparent in those images is that they were drawn by something that is making no progress in its machine learning towards a concept of symmetry. Symmetry (and asymmetries of varying degree) is fundamental to our ability to categorise and remember vast numbers of individual faces. While ours may not be the only, or best, way to distinguish faces, our mental processes are clearly better than those of the AI, since we can immediately identify the images as 'bogus', and presumably the AI would not be able to do so. The process demonstrated in the images is effective in giving individual portraits a distinct feel in terms of the lighting and colour, but those jarring identikit faces, with stunning mismatches between eyes, odd creases and twists and the complete disregard for symmetry, make it clear that the AI only makes predictions based on patterns; it can't make that leap to a connection between a real, physical face and the images it's attempting to create, a connection that enables us to immediately suspect that there's something wrong when we look at them.


I mean, DCGAN (which this is) exploits convolutional symmetries by doing the convolution thing. The Greek conception of symmetria is way less easy to invoke than the compressional aspects of a convnet (which can get away with 2-10 orders of magnitude fewer params than a fully connected net and have better perf...). Same deal with time invariance and RNNs. You can do autoregression with a forward MLP. It's just a giant pile of shit.

You could prolly add a pretty naive loss and still get that bilateral symmetry that you're probably thinking about when you're saying symmetry, but this is one of those "expert fiddling with this bullshit" things that actual practitioners mostly end up doing.
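For concreteness, a naive bilateral-symmetry loss of the sort described might be nothing more than the mean squared difference between an image and its left-right mirror; this is a toy sketch on lists of pixel rows, not anything from the post:

```python
def symmetry_penalty(image):
    """Mean squared difference between an image (a list of pixel rows)
    and its horizontal mirror. Adding a small multiple of this to the
    generator loss would nudge outputs toward bilateral symmetry;
    tuning that weight is exactly the expert fiddling in question."""
    total = sum(
        (px - row[-1 - i]) ** 2
        for row in image
        for i, px in enumerate(row)
    )
    n_pixels = sum(len(row) for row in image)
    return total / n_pixels

mirror_perfect = [[1, 2, 2, 1], [0, 5, 5, 0]]
lopsided = [[1, 2, 9, 1], [0, 5, 5, 3]]
print(symmetry_penalty(mirror_perfect))  # 0.0
print(symmetry_penalty(lopsided))  # 14.5
```

Of course, real faces are only approximately symmetric, so a hard penalty like this would also fight the data; that trade-off is the fiddly part.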
posted by hleehowon at 3:40 AM on June 12, 2017


Fuck, is this actually MLP GAN? Now that I read his blog post again it might just be two function applications of MLP GAN, in which the symmetry stuff is just the tiling and scaling
posted by hleehowon at 3:42 AM on June 12, 2017


> IT'S NOT AN AI, THERE IS NO AI, STOP THE LAZY BANDWAGON JUMPING BULLSHIT PLEASE.

That's because every time something in AI hits the mainstream, it gets redefined so it's no longer AI.

This work is coming out of AI labs, and is being done with AI tools. It's AI.
posted by Leon at 5:05 AM on June 12, 2017


Wow. You guys are cynical. I think these are beautiful, haunting, and wonder-inducing.
posted by es_de_bah at 5:38 AM on June 12, 2017


Anyone else picture them constantly pulsing, like a painting that's still being painted, or a poorly done color rotoscope? Always correcting to try to look more accurate.

Those are the faces of the machines which will enslave us.

Don't let them touch you; that's how the nanomites get you.
posted by leotrotsky at 5:39 AM on June 12, 2017


Leon: I think it would be more accurate to describe these as AI precursors or perhaps, for the best cases like Google Translate, Artificial Savants. The I is the defining part of the term and that's just not there for any common definition of intelligence.

That doesn't mean that there aren't useful tools now — Google Photos is a huge step up for most people even if it has no understanding of the connections it makes — or that this work isn't likely to be a key step on the path to future advances, but unless your goal is to get investors to sign up, calling the current technology AI is just setting the field up for disappointment and backlash as people realize how narrowly limited the current offerings are.
posted by adamsc at 5:55 AM on June 12, 2017 [1 favorite]


I think you're using a definition of AI that nobody in the field uses...
posted by Leon at 6:10 AM on June 12, 2017


Leon: I think it would be more accurate to describe these as AI precursors or perhaps, for the best cases like Google Translate, Artificial Savants. The I is the defining part of the term and that's just not there for any common definition of intelligence.

The issue is that it's quite possible that intelligence is just a bag of tricks like the example posted. The concept of general intelligence is the result of a kind of category mistake. Perhaps one of the brain's tricks is on itself, tricking itself into believing that there is some single "I" that chooses to act and that possesses a thing called 'intelligence', when there is no such single thing. See the Libet experiment, for example.

I understand that Buddhists also have some thoughts on this.
posted by leotrotsky at 6:12 AM on June 12, 2017 [1 favorite]


Leon: that was rather the point – some people in the field may consider high-level performance on specific narrow tasks to be part of AI but almost nobody outside has that level of nuance. That's great for marketing but it's setting users up to be disappointed when non-specialists hear the term “AI” applied to applications like Siri or Google Now which are still largely in the novelty stage.

I've seen that cycle repeated a few times with things like OCR where people hear pitches talking about high accuracy, test it on non-pristine originals or unusual content and see enough gibberish to conclude it's just marketing hype rather than asking whether there's a way to work with less than 100% accuracy or how much reasonable room for improvement there is. That can be overcome but it's still easier not to oversell things in the first place.
posted by adamsc at 6:57 AM on June 12, 2017


The important thing to think about AI is that if you can actually do it, you will also have enough of the tools to explode the idea of intelligence in the first place, and a lot of our conception about cognition besides. In that way, it's useful not like the term "renormalization" or "Parisi reweighting parameter" but like the term "phlogiston". There is no phlogiston, and the epic quests people went on to find it all of course failed. But they fundamentally helped understand the nature of oxidation, whereupon we exterminated the concept and relegated it to history.

There is no such thing as intelligence, in the way that we conceive it. There is almost certainly no such thing as consciousness, in the way that we have failed to define it. When we implement, and only when we implement, we will understand and we will throw away those previous terms. Do not forget that Watt came before Carnot.
posted by hleehowon at 7:05 AM on June 12, 2017 [2 favorites]


IT'S NOT AN AI, THERE IS NO AI, STOP THE LAZY BANDWAGON JUMPING BULLSHIT PLEASE.

if there's anything that defines AI, it's lazy bandwagon jumping bullshit.

have been in the field since 1986
posted by zippy at 8:16 AM on June 12, 2017 [6 favorites]


When we implement, and only when we implement, we will understand and we will throw away those previous terms

While I agree with the thrust of what you're saying, this stuff was pretty clear decades ago to those of us reading Nietzsche and/or Wittgenstein (and plenty of other writers in disparate philosophical traditions) with an eye to their philosophy of mind.
posted by howfar at 9:54 AM on June 12, 2017


You can do a stripped-down version of this in your browser with Pareidoloop (previously).
posted by whir at 5:04 PM on June 13, 2017




This thread has been archived and is closed to new comments