"They looked like objects in the world that were not in the world"
May 6, 2019 11:55 PM
Every time you look at a face, a group of neurons behind your ears goes wild with excitation. For a long time, scientists have pondered what it is, exactly, that tickles the very particular fancies of these neurons. Is it a certain eyes-nose-mouth combination that triggers their frenzy? A particular arrangement of colors? What is a face, to a neuron? In a groundbreaking Cell study, scientists found out through an unusual approach: They asked the cells themselves.
“We’ve been stuck with this problem for decades,” first author Carlos Ponce, Ph.D., a neuroscientist at Washington University School of Medicine in St. Louis, tells Inverse. Scientists studying this aspect of our visual systems want to know how we evolved not only to see but also to recognize complex images like faces, as well as objects, places, and animals. Previously, researchers investigated this by showing subjects countless images to find out which ones best turned their neurons on, an impossible task given that there are infinitely many images to show.
To do the impossible, Ponce and his team took advantage of a powerful new tool: a type of A.I. used to generate imaginary but uncannily realistic images, like deepfakes and other creepy art. These generative adversarial networks, or GANs, evolve images based on feedback from a “discriminator” that judges what’s good and what’s not. In Ponce’s experiments, the discriminator was a monkey neuron hooked up to the GAN; the cell burst with activity whenever it approved of the image it saw. As the images evolved, one thing became clear: These cells are into some weird shit.
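The closed loop described above can be sketched as a simple evolutionary search: a generator turns latent codes into images, a scoring signal (in the experiment, a real neuron's firing rate) rates each image, and the highest-scoring codes are kept and mutated. The sketch below is a toy stand-in under loud assumptions: the "generator" is an identity function, the "neuron" is a simulated preference for a hypothetical target feature, and all names (`generate_image`, `neuron_response`, `evolve`) are invented for illustration, not taken from the study's code.

```python
import random

LATENT_DIM = 8
# Hypothetical feature the simulated "neuron" prefers (pure invention).
TARGET = [0.7] * LATENT_DIM

def generate_image(code):
    # Identity generator for the sketch; a deep generative network's
    # decoder would go here in the real setup.
    return code

def neuron_response(image):
    # Simulated firing rate: higher (closer to 0) when the image is
    # nearer the preferred feature. A real experiment would record
    # spikes from an actual neuron instead.
    return -sum((p - t) ** 2 for p, t in zip(image, TARGET))

def evolve(generations=300, pop_size=20, seed=0):
    rng = random.Random(seed)
    # Start from random latent codes.
    population = [[rng.uniform(-1, 1) for _ in range(LATENT_DIM)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Rank codes by how strongly they drive the (simulated) neuron.
        ranked = sorted(population,
                        key=lambda c: neuron_response(generate_image(c)),
                        reverse=True)
        parents = ranked[:pop_size // 2]  # keep the best-scoring half
        # Fill the rest of the population with mutated copies of parents.
        children = [[g + rng.gauss(0, 0.05) for g in rng.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=lambda c: neuron_response(generate_image(c)))

best = evolve()
```

Because the best codes survive each generation unchanged, the top score can only improve, so the search steadily drifts toward whatever the scoring signal favors, which is the essence of letting the cells "ask" for the images they like.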