# Brain hardware support for Bayesian processing spotted

July 17, 2019 6:53 AM Subscribe

Brain hardware support for Bayesian processing spotted.
“We have never seen such a concrete example of how the brain uses prior experience to modify the neural dynamics by which it generates sequences of neural activities, to correct for its own imprecision. This is the unique strength of this paper: bringing together perception, neural dynamics, and Bayesian computation into a coherent framework, supported by both theory and measurements of behavior and neural activities,” says Mate Lengyel, a professor of computational neuroscience at Cambridge University, who was not involved in the study.

Well, you *tend* to "perceive what you believe". Some people carry it (much) further than others.

posted by aleph at 7:24 AM on July 17

So this is... highly relevant to my interests. First time that metafilter pointed me to a relevant paper before my colleagues did ;) Thanks!

posted by Alex404 at 8:15 AM on July 17 [2 favorites]

Really nice paper. (Link to the original article.) This directly intersects with some of my research interests.

There's a lot to unpack here and the MIT press release doesn't really do it justice. I'm still digging through the paper myself, but one thing that I really like about it so far is that it provides a much more solid mechanistic grounding for certain analysis techniques in computational neuroscience that have become very popular in recent years but which I've had some reservations about. This idea of "neural manifolds," which conceives of neural activity being embedded in a high dimensional, geometrically complex space, is something I find aesthetically appealing but I've been suspicious of as lacking a firm mechanistic interpretation. This paper actually seems to give a plausible interpretation to the structure of these manifolds as being governed by synaptic weights and functioning to adaptively distort a neuronal ensemble's path for evolving from a starting point governed by sensory inputs to its endpoint determining the motor action, with this distortion reflecting the Bayesian prior.

posted by biogeo at 8:18 AM on July 17 [2 favorites]

**Alex404**, could be fun to journal club this paper together here!

posted by biogeo at 8:19 AM on July 17

The use of the word “belief” seems to portend a lot of potential misinterpretation. I perceive what I believe? What do you mean by perceive? Are we still at the sensory level, or are we already at the interpretive level? This is based on experience, which strikes me as almost a tautology: I perceive what I have already perceived repeatedly in the past. The surface view of these findings strikes me as obvious. The interpretation of the results strikes me as just predicating computational and mathematical models as being the actual process under observation. But it is true that if a person believes that a certain class of people are bad, then that person will perceive another person who is a member of that class as bad despite any other perceptions. Ontology versus epistemology strikes again.

posted by njohnson23 at 8:44 AM on July 17

Very cool, I'm certain I don't understand the neuroscience, but it's going in my pile of Brunswik-ian references.

posted by anthill at 10:54 AM on July 17

Thanks for posting this.

posted by OlivesAndTurkishCoffee at 9:46 PM on July 18

Yes, thanks for posting! And thanks for the link to the paper.

I'm completely outside this field so it took a bit to wrap my head around the concept of neural trajectories (which are somewhat better explained in an earlier paper from the same group), but it's an intriguing way to represent the data.

From what I understood, each neuron was represented as a dimension to create the manifold, and the state of the whole system (i.e. the firing rates of all measured neurons) at a given time was plotted as a single point in that manifold. As we crank time forward and the firing rates of the measured neurons change, the point travels around in the manifold to create the trails we see in the videos and a few of the figures. Instead of trying to cope with hundreds of dimensions, PCA was used to find the most influential factors (dimensions?) controlling or representing the state of the system.
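For any other outsiders, here's the toy picture I eventually formed of the bookkeeping; all the numbers and the tiny 5-neuron setup are invented, so take it as a sketch, not the paper's data:

```python
import numpy as np

# Hypothetical example: 5 neurons, 4 time bins of measured firing rates (Hz).
# Each COLUMN is the whole population's state at one moment -- a single
# point in a 5-dimensional space, one axis per neuron.
rates = np.array([
    [12.0, 14.5, 18.0, 16.2],   # neuron 1
    [ 3.1,  2.8,  2.5,  4.0],   # neuron 2
    [22.4, 20.1, 19.8, 25.3],   # neuron 3
    [ 0.5,  1.2,  3.4,  2.2],   # neuron 4
    [ 9.9, 11.3, 10.7,  8.8],   # neuron 5
])

# The "neural trajectory" is just this sequence of points over time:
# as the firing rates change, the point moves through the space.
trajectory = [rates[:, t] for t in range(rates.shape[1])]
print(len(trajectory), trajectory[0].shape)   # 4 points, each 5-dimensional
```

With hundreds of neurons instead of five, you can't visualize that space directly, which (as I understand it) is where the PCA step comes in.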

I think this is where my understanding falls down, because I've never used PCA. Given that the plots we see in the paper -- and which they seem to be using to project onto a single axis to give the curves they observe -- seem to have just two or three dimensions (plus time), does this mean we're only looking at data from two or three neurons over time, having identified those as the most important? Or have the hundreds of dimensions been somehow combined/clustered into the two or three we see, perhaps with weighting, so each of the three axes we're looking at is actually more like a weighted average of many more dimensions?

I'm a bit dubious about their discovery that plotting the curved neural trajectory onto a specific, single dimension produced curves fitting the distribution that they saw in their animal experiments. They say it makes sense "given our understanding of the geometry", but it's opaque to me (almost certainly because I don't understand the geometry!). Is it obvious why choosing that dimension is better than any of the others, such that a reasonable person would've predicted it? Or, having seen these data, would be able to predict which dimension to plot against for a similar future experiment? Otherwise it feels a bit like saying "we tried all these comparisons, this one looks like it correlated, so it must be correct". Which is tremendous fun, but can be a bit dodgy.

I'd be interested to read along, if you decide to do it.

Come to think of it, I'd be up for a regular interdisciplinary journal club... something we could crowbar into FanFare, maybe?

posted by metaBugs at 5:14 AM on July 19
Hey metaBugs, just to answer your questions:

Dimensionality reduction is a hot topic in computational neuroscience right now ('influential factors' indeed correspond to dimensions here), because it's allowing neuroscientists to understand high-dimensional data in intuitive and arguably meaningful ways. If you're recording from 100 (or 1000, or 10,000...) neurons in parallel, then you can represent all their activity as a 100-dimensional vector. That's hard to interpret though, so you want to map that down to a small set of significant dimensions.

Dimensionality reduction doesn't pick the e.g. 5 most significant neurons; it tries to reduce the high-dimensional, global activity to a low-dimensional set of principal components/factors/latent variables, such that given just the values of these low-dimensional variables, you can (reasonably well) reconstruct the activity of the whole population. The reason why neuroscientists also feel like this produces "meaningful" results (as opposed to just nice visualizations) is that both the number of low-dimensional variables and the way they interact are often fairly reproducible across different experiments/animals.

Principal Component Analysis (PCA) is the most basic form of dimensionality reduction. PCA is to dimensionality reduction what normal distributions are to statistical models... if you don't have a clearly better technique, PCA is probably a reasonable place to start. What they're plotting in this paper are the low-dimensional variables (in this case, principal components) extracted from the data by PCA (over time), and what they're finding is that those principal components are influenced in a systematic way by the prior/context information given to the animal.
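To make that concrete, here's a rough toy sketch (not the paper's actual analysis; the data, dimensions, and noise levels are all invented) of PCA pulling a low-dimensional trajectory out of simulated population activity:

```python
import numpy as np

# Hypothetical data: 100 neurons recorded over 500 time bins.
rng = np.random.default_rng(0)
n_neurons, n_timebins = 100, 500

# Simulate low-dimensional latent dynamics driving the population:
# 3 latent variables, randomly mixed into 100 neurons, plus small noise.
latents = np.cumsum(rng.normal(size=(3, n_timebins)), axis=1)
mixing = rng.normal(size=(n_neurons, 3))
rates = mixing @ latents + 0.1 * rng.normal(size=(n_neurons, n_timebins))

# PCA via SVD: center each neuron, then take the top singular vectors.
centered = rates - rates.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

# Each principal component is a weighted combination of ALL neurons,
# not a selection of individual neurons -- this is metaBugs' question.
top3 = U[:, :3]                    # (100 neurons) x (3 components)
trajectory = top3.T @ centered     # 3-D trajectory over time

# Fraction of total variance captured by the top 3 components.
var_explained = (S[:3] ** 2).sum() / (S ** 2).sum()
print(trajectory.shape)            # (3, 500)
```

Because the simulated activity really is driven by 3 latent variables, the top 3 components capture nearly all the variance; with real neural data you'd check the variance-explained curve to decide how many components are worth keeping.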

It's certainly possible that PCA might be a suboptimal technique here (part of my work is on "better" dimensionality-reduction techniques for neural data), but it's probably not the case that the results are deeply misleading. Dimensionality reduction techniques tend to produce broadly similar results (don't hold me to this statement), and since there's no technique that's clearly better than PCA for this data (other than my method, natch), PCA is a good basis for the claims they make in this paper.

posted by Alex404 at 9:34 AM on July 19 [2 favorites]

posted by otherchaz at 7:16 AM on July 17 [2 favorites]