"They looked like objects in the world that were not in the world"
May 6, 2019 11:55 PM   Subscribe

Every time you look at a face, a group of neurons behind your ears goes wild with excitation. For a long time, scientists have pondered what it is, exactly, that tickles the very particular fancies of these neurons. Is it a certain eyes-nose-mouth combination that triggers their frenzy? A particular arrangement of colors? What is a face, to a neuron? In a groundbreaking Cell study, scientists found out through an unusual approach: They asked the cells themselves.
“We’ve been stuck with this problem for decades,” first author Carlos Ponce, Ph.D., a neuroscientist at Washington University School of Medicine in St. Louis, tells Inverse. Scientists studying this aspect of our visual systems want to understand how we evolved not only to see but also to recognize complex images: faces, as well as objects, places, and animals. Previously, researchers investigated this by showing subjects countless images to find out which were best at turning their neurons on — a practically impossible task, since there is an infinite number of images to show.

To do the impossible, Ponce and his team took advantage of a powerful new tool. They turned to a type of A.I. used to generate imaginary but uncannily realistic images, like DeepFakes and other creepy art. These generative adversarial networks, or GANs, evolve images based on feedback from a “discriminator” that determines what’s good and what’s not. In Ponce’s experiments, the discriminator was a monkey neuron hooked up to the GAN; the neuron burst with activity when it approved of the image it saw. As the images evolved, one thing became clear: These cells are into some weird shit.
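The closed loop described above — a generator proposes images, the neuron scores them, and the scores steer the next generation — can be sketched with a toy genetic algorithm. This is a simplified illustration, not the study's actual pipeline (the researchers used a deep generative network and real recorded firing rates); `generate_image`, `firing_rate`, and all the parameters below are invented stand-ins for the sake of the sketch.

```python
# Toy sketch of evolving a stimulus to maximize a neuron's response.
# The "neuron" here is simulated: it fires more when the image is
# close to a hidden preferred pattern. Everything is a stand-in.
import random

random.seed(0)

CODE_LEN = 16
# The simulated neuron's hidden "preference" (unknown to the algorithm).
TARGET = [random.uniform(-1, 1) for _ in range(CODE_LEN)]

def generate_image(code):
    # Stand-in for the deep generative network: identity mapping.
    return code

def firing_rate(image):
    # Stand-in for the recorded neuron: higher (less negative) when
    # the image is closer to the neuron's preferred pattern.
    return -sum((p - t) ** 2 for p, t in zip(image, TARGET))

def evolve(generations=200, pop_size=30, sigma=0.2):
    pop = [[random.uniform(-1, 1) for _ in range(CODE_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Score every image by how strongly the "neuron" responds.
        scored = sorted(pop, key=lambda c: firing_rate(generate_image(c)),
                        reverse=True)
        parents = scored[: pop_size // 3]  # keep the neuron's favorites
        # Breed the next generation by mutating the favorites.
        pop = [[g + random.gauss(0, sigma) for g in random.choice(parents)]
               for _ in range(pop_size)]
        pop[: len(parents)] = parents      # elitism: the best survive intact
    return max(pop, key=lambda c: firing_rate(generate_image(c)))

best = evolve()
```

The design mirrors the article's point: the experimenter never has to guess what the neuron wants. The neuron's own responses act as the fitness function, and the search homes in on whatever it is the cell is actually tuned to — which, in the real study, turned out to be those "dream versions" of natural objects.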
posted by Johnny Wallflower (11 comments total) 17 users marked this as a favorite
Well, those monkeys aren't wrong. That evolved image is objectively more stimulating, what with the ice castle on the cliff, the quill pen in the vodka bottle, the whipped cream cloud, the swan, the angry red imp, the boxing glove, the banana leaf, the opera mask, the crouched sad figure, and the wistful monkey face.
posted by taz at 12:36 AM on May 7, 2019 [3 favorites]

Interesting, taz. Describe in single words only the good things that come into your mind about your mother.
posted by Johnny Wallflower at 12:40 AM on May 7, 2019 [8 favorites]

/me lies on back, belly baking in the hot sun
posted by taz at 12:46 AM on May 7, 2019 [7 favorites]

Pictures of angels should look like this, not like cute babies with wings.
posted by otherchaz at 4:05 AM on May 7, 2019 [9 favorites]

Pictures of angels should look like this, not like cute babies with wings.
posted by otherchaz

How do I super-favourite this comment?
posted by Jane the Brown at 5:51 AM on May 7, 2019 [2 favorites]

pictures of angels do look like this.
posted by Fraxas at 7:11 AM on May 7, 2019 [1 favorite]

Huh? If a part of my brain is set to recognize faces, don’t you think it would be more active given something ambiguous that appears to be face-like? A real face is easy; this requires some neural effort. I see faces in the wood grain on the bathroom door. I can turn this on and off, or morph them one into another. If part of our brain is set to recognize faces, then this action must have some survival quality. It’s important. Better to err on the false positive than to miss the face that was really there. It’s all about ambiguity and the processing of ambiguity. These so-called evolved images are ambiguous, with face-like qualities. The idea that the brain prefers ambiguity is ridiculous. It just needs to be more active to resolve the ambiguity. Science. For me, all this neural network stuff is pushing science off into some other space where simple reason seems to be lacking.
posted by njohnson23 at 8:46 AM on May 7, 2019

njohnson23: I agree that suggesting that the brain prefers ambiguity is ridiculous, but I felt they were exactly right to focus on this area. If you want to make a map of the inputs that tickle the face-recognition neurons, the areas with high ambiguity - in this case images that caused higher stimulation - are exactly the place you look for the borders, the edge cases. To get a handle on how and why such a thing as face recognition even exists, you need to get into the edge cases, because without knowing where the borders are you struggle to even get a clear view of the thing you want to investigate.

I think the basic idea here is beautiful - evolving images in a feedback loop with the recognition neurons is a wonderful way to investigate just what the hell is going on in there. It's another one of those deceptively simple ideas that make me go "well of course that's how you do it". I don't think that the neural network is driving this research, more that it's just a new way of generating an input stream. The real point of this is discovering the relationship between the input images and the recognition neurons, and the GAN is a uniquely useful data source because it allows the scientists to ask "how about this one?" many times over, using the answer to help evolve the next question, homing in on the places where it breaks down.

This may not be your kind of science, but it certainly is mine.
posted by Gamecat at 11:31 AM on May 7, 2019

But when they looked closer, the team realized the images resembled real-life objects but didn’t seem quite right. This was surprising. “We’re used to thinking that these cells are used to responding to very realistic depictions of the world,” says Ponce. “But in fact, we found that these cells were responding better to — I guess if I can use poetic license — the dream versions of these natural-world pictures.”
This is kinda reading way too much into the nightmare soup lightly trained GANs always produce, no?
posted by lucidium at 1:12 PM on May 8, 2019

"They asked the cells themselves"

What were the cells attached to?

Oh, monkeys!

The following is paraphrased from the methods section:

There were 7 monkeys, all adult males, age 5 to 14. They lived in quad cages, in social housing. Their living conditions were overseen by a research ethics committee.

Their names were Ri, Gu, Ge, Jo, Y1, Vi, and B3.
First, they were trained to play a video game where they focused on a dot on the screen and got drops of water or juice as rewards.

One day they were placed under general anesthesia and microarrays of electrodes were implanted into their brains. These were attached by a small ribbon cable to a metal cube screwed onto the skull. The electrode arrays are the size of a Scrabble tile, the pins are as long as one or two toothpicks, and the cubes are the size of an iPhone charger.

Except for Monkey B3. He had an acute recording chamber, and neuronal activity was recorded using a 32 channel NeuroNexus Vector array that was inserted each recording day.

(I can't figure out what this means. Does anybody know?)

Then, to get the data, the monkeys played the video games while the electrodes were hooked up and recording.

I hope the monkeys have or had good lives and perhaps enjoyed playing the weird video games, and maybe got their favorite juice flavors for the reward drops. Thanks little macaques. Science thanks you.
posted by dum spiro spero at 9:33 PM on May 8, 2019 [1 favorite]
