We’re still at the point of collecting factlets
June 11, 2021 10:51 AM

 
Pulitzer Prize-winning Ed Yong at the Atlantic reports...
posted by pykrete jungle at 10:58 AM on June 11 [15 favorites]


In the machine learning context, two models trained on similar input and using a similar algorithm might come up with very different raw representations of the input but perform very similarly.

For example, two language models trained on similar inputs and training settings may end up with very different vector representations of words, but each model will still group semantically similar words together. Maybe Model 1 says: dog = 1, puppy = 2, chess = 1000, and checkers = 1050, while Model 2 says dog = 5000, puppy = 5001, chess = 4000, and checkers = 4050. Both are consistently grouping similar ideas together, but the raw numbers are basically arbitrary and can't be directly compared across models.
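A minimal sketch of that idea in Python, using the toy numbers above (the scalar codes are made up, standing in for embedding vectors): the raw values disagree wildly across models, but the nearest-neighbor structure agrees.

```python
# Two "models" assign arbitrary codes to words, yet both place
# semantically similar words near each other.
model_1 = {"dog": 1, "puppy": 2, "chess": 1000, "checkers": 1050}
model_2 = {"dog": 5000, "puppy": 5001, "chess": 4000, "checkers": 4050}

def nearest_neighbor(model, word):
    """Return the other word whose code is closest to `word`'s code."""
    return min((w for w in model if w != word),
               key=lambda w: abs(model[w] - model[word]))

for word in model_1:
    # The neighborhood structure is functionally consistent across models...
    assert nearest_neighbor(model_1, word) == nearest_neighbor(model_2, word)

# ...even though the raw codes themselves can't be compared directly:
assert model_1["dog"] != model_2["dog"]
```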

So I wonder if, in a similar way, what matters in brains is functional consistency rather than representational consistency. The pattern of neurons that represents the scent of a rose may change, but as long as the pattern is similar to that of other floral scents or other pleasant scents (or whatever mice think of the smell of roses), then that's all that matters. And so over time the representation space can be compressed or rearranged to take into account learning, routing around damage, whatever, without losing function.
posted by jedicus at 11:21 AM on June 11 [11 favorites]


The pattern of neurons that represents the scent of a rose may change, but as long as the pattern is similar to that of other floral scents or other pleasant scents …

I like where you’re going with this. Honestly, a “thing” isn’t a Thing in isolation. That is not how humans work. The smell of a rose is beautiful until we learn the person who has consistently brought us roses has been cheating on us. The smell and taste of chili is fantastic until we suffer a violent case of the flu the evening we gorged on it. A song is a favorite until it is playing in the background when you learn of the shocking death of a loved one.

I know there is so much we need to learn about the brain; I do firmly believe, however, that our encoding is a combination of a thing AND our internal state at the time of encountering it. That line between “I” and “Other” is so slight, in our beloved minds’ understanding of the world. Tat tvam asi. You are that.
posted by Silvery Fish at 11:40 AM on June 11 [11 favorites]


I only know a bit of neuroscience but this makes sense conceptually given the massive amount of information we learn over time. Once we're grown it's not like there's any "leftover" physical room in our brain, so the only way we can learn new things is if our existing neural networks rearrange themselves into more efficient/relevant structures to accommodate learning, which would have to result in some of the representations shifting physically. I suspect it is very similar to how neural networks work, which is why they're good at modeling this type of learning.

At a more philosophical level I like the idea of representing information/memory as a continuous dynamic shifting of connections over time, because that seems to me like the most coherent way to view consciousness and identity in a purely physical brain. "I" am not the current physical state of my brain's connections, "I" am a continuous function describing how the identity and narrative-relevant connections of my brain evolve over time.
posted by JZig at 11:42 AM on June 11 [6 favorites]


This was a really interesting read. I think about this stuff a lot. My understanding is that you can't really say things like "brains do __ because it lets the animal do ___" because that's more like the kind of user story you write when you're designing a tool that you need to have certain functions. But the brain isn't a tool that was ever designed. It's just kind of a series of chemical reactions that are happening inside a big crystal that happened to grow somewhere. To me it would be harder to believe that it's actually like a database that maintains links between things than the alternative which is that it's just some goo with electricity and things happen sometimes. Thank you for coming to my ted talk.
posted by bleep at 11:44 AM on June 11 [9 favorites]


This week, a connectome of a chunk of a human brain revealed a whole new set of wonders. One figure legend reads:
Unusual and unidentified cortical objects.

(A) Whorled axon that makes inhibitory synapses on cell soma and dendritic shafts in L5, L6 (link). (B) Soma of interneuron engulfed by the dendritic process of another interneuron (link). (C) Unidentified object filled with fibrous substance (link). (D) Whorls of loosely coiled myelin (link). (E) Egg shaped object with no associated processes (link). (F) Myelinated oval object. (G) Greyish object. (H) Interacting climbing dendrites (link). (I) Myelinated dendritic trunk in grey matter (link). (J) Myelinated dendritic trunk in white matter (link). (K) Swollen dendritic spine containing 150 intramembranous objects (segmented manually; link). (L) Unidentified object filled with small spherical objects (link). (M) Myelinated axon filled with unidentified substance (link). (N) Myelinated object filled with small debris (link). (O) Compartmentalized fibrous object (link). (P) Whorls within the soma surrounding a cell nucleus (link). (Q) Random lying whorled substance (link). (R) Conjoint whorls (link). (S) Synapse wrapped by astrocytic processes (link). (T) Myelinated object with multiple membranous rings (link).

In neuroscience speak, this reads: ?!?!?!
posted by Dashy at 12:05 PM on June 11 [7 favorites]


Wittgenstein, Philosophical Investigations:
In what sense are my sensations private? -- Well, only I can know whether I am really in pain; another person can only surmise it. -- In one way this is wrong, and in another nonsense. If we are using the word "to know" as it is normally used (and how else are we to use it?), then other people very often know when I am in pain. -- Yes, but all the same not with the certainty with which I know it myself! -- It can't be said of me at all (except perhaps as a joke) that I know I am in pain. What is it supposed to mean -- except perhaps that I am in pain?
Other people cannot be said to learn of my sensations only from my behavior -- for I cannot be said to learn of them. I have them.
[...]
One would like to say: whatever is going to seem right to me is right. And that only means that here we can't talk about 'right'.
posted by Theiform at 12:08 PM on June 11 [3 favorites]


If you place an order every week with Amazon for the same product, you may get the same product every time, in the same size box every time. But that doesn't mean the same employee is pulling the order every time.

The assumption that particular neurons will always be the ones to fire seems like an unfounded extrapolation from predictable behavior from the larger structure that is the whole brain.
posted by explosion at 12:10 PM on June 11 [6 favorites]


I have to say, the headline for this article really tickled me. "Neuroscientists have discovered a phenomenon they can't explain." In other news, water is wet, the sun rises in the east, and music was better when you were young. Neuroscience in so many ways is still in its infancy: having good explanations for things is the exception rather than the rule.
posted by biogeo at 12:16 PM on June 11 [6 favorites]


So I wonder if, in a similar way, what matters in brains is functional consistency rather than representational consistency. The pattern of neurons that represents the scent of a rose may change, but as long as the pattern is similar to that of other floral scents or other pleasant scents (or whatever mice think of the smell of roses), then that's all that matters.

My untrained reading of Wittgenstein suggests that we often may have the impression that we are having a particular sensation -- but as often as not it may be that we're just convincing ourselves that that sensation is the same as the one we experienced some time before, that it's helpful for us to have this name ("rose scent"), but that's not proof of, say, a particular grouping of neurons firing in our brain.
posted by Theiform at 12:18 PM on June 11 [2 favorites]


A metaphor I often return to about all kinds of "humans keep hitting confusion in making sense of things" areas is epicycles in orbital models.

Dig it: back when astronomy and astrology weren't particularly bright-line distinguished, the motion of the planets was of course closely observed. There are times when planets appear to cross the sky more slowly than other times, go to a standstill, or appear to move backwards. (Mercury retrograde in the area of the constellation of Leo, etc.) Which is weird especially if you put yourself in their shoes.

Some of that was due to geocentric models, trying to posit the Earth as a fixed point that the heavens rotated around and objects in the heavens traveled their cyclical paths through. But the thing was, epicycles didn't go away with the move to a heliocentric model, even after accounting for Earth orbiting round instead of staying "still."

They happened because people kept insisting that orbits were circles. A circle was a perfect geometric figure, and obviously heavenly motions would be perfect.

Then Newton et al said...what if that's not the case? And observed even closer, and suddenly epicycles went away from the realization that circles were not how things worked; instead, orbits were elliptical, governed by shared effects of the mass and gravity involved.

Obviously that kind of thing can get metaphorically extended to all sorts of things. Obviously parts of the brain do specific things--a lot of early neuroscience and anatomy shows that fixation. This is the speech center. This is the spatial processing center. Vision processing happens here. And the thing is: largely, in general, they do...except when they don't (coming to the realization that the brain "rewires" even in adulthood was a big pop sci flurry in living memory). Representational drift happening at smaller scales as observational precision improves isn't too surprising.

I wonder if they'll figure out a new understanding that does away with the epicycles while I'm still around to go "neat" over. Hope so!
posted by Drastic at 12:49 PM on June 11 [7 favorites]


Wittgenstein, Philosophical Investigations:

What is the logic of language? If I'm up on a ladder and yell, "hammer!", there is no grammar needed.

If lions could talk, we still wouldn't be able to understand them.
posted by StickyCarpet at 12:50 PM on June 11 [2 favorites]


Daniel Kahneman talks about how when you tell people the results of a social science study the typical reaction is, 'Of course, that's obvious, why would you even test that?' Pretty much regardless of what the outcome was... It's helpful to state the /opposite/ conclusion, and see if that actually sounds surprising or if you've got an equal ability to create rationalizations for both conclusions.

This really is surprising. We're talking about basic perception here: What neurons fire when you smell something? Perception /seems/ to come gradually on-line early in life, with a fast-learning tuning phase, and then settles into stability that lasts basically the rest of our lives (perhaps modulo LSD trips). We really don't seem to be re-learning how to interpret our environments on a daily basis.

Moving these representations around seems a) error-prone, and b) expensive. If I randomly permuted all the papers on my desk every few days, I would be very likely to forget where any particular thing is, and it would take a lot of work to move it all around. In fact, it makes me wonder if the movement is happening because of buggy hardware.... If any neuron can screw up at any time, then anything important should be copied to lots of locations, with redundancy to ensure that errors can be detected and corrected. So maybe I keep five copies of my data, but then move them around constantly because all of the individual copies are constantly being corrupted.
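A rough sketch of that redundancy idea (the copy count and corruption rate are arbitrary assumptions, not anything from the article): keep several copies of a datum, let any copy get corrupted at random, and recover the original by majority vote.

```python
import random

def store(value, copies=5):
    """Keep several redundant copies of the same datum."""
    return [value] * copies

def corrupt(copies, p=0.2, rng=random):
    """Each copy independently flips to garbage with probability p."""
    return [c if rng.random() > p else "garbage" for c in copies]

def recover(copies):
    """Majority vote: the most common surviving value wins."""
    return max(set(copies), key=copies.count)

rng = random.Random(0)
copies = corrupt(store("grandma"), p=0.2, rng=rng)
# Recovery succeeds as long as most copies remain intact.
assert recover(copies) == "grandma"
```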
posted by kaibutsu at 1:27 PM on June 11 [8 favorites]


But that's like saying, I don't understand this because I made a metaphor that doesn't really work. Well, maybe the problem is you're comparing two things that are actually nothing alike. The way my brain would design a tool to organize information is actually nothing like what's going on inside my brain meat while it designs that tool.
posted by bleep at 1:35 PM on June 11 [4 favorites]


Once, quietly sitting at a window looking through bushes and trees, between buildings in the dim light, I noticed something. Movement, a dog, no, my perception transformed a bit, perhaps different movement, a person, then that didn't gain focus, then a car, then a truck. Each was just as real, but some computational or sound or context change transformed it into something else. All sort of real. Now build a house with lots of optical illusions; it messes with people.
posted by sammyo at 1:35 PM on June 11 [3 favorites]


my perception transformed a bit, perhaps different movement, a person, then that didn't gain focus, then a car, then a truck

That kind of thing has always fascinated me. "There's a dog, no, wait, that's a log."

How quickly and without much notice, the aspect is retroactively rewritten. Makes me wonder how stable any of our perceptions really are.
posted by StickyCarpet at 1:41 PM on June 11


This phrase made me throw up in my mouth:
While the functions carried out by most of the vital organs in humans are unremarkable, the human brain clearly separates us from the rest of life on the planet.
A veritable masterpiece of anthropocentric monkey spanking.
posted by seanmpuckett at 1:57 PM on June 11 [7 favorites]


Makes me wonder how stable any of our perceptions really are.

My quatloos are an ever-growing stack on "not very"... by most definitions of "stable."

I remember it blowing my little mind as a kid, in some early movie theater experience, while patiently waiting the foreeeeever till the show started (so you know, probably a few minutes), looking up at a ceiling fan, and realizing it was a blur, except for snapping eyes to and away from it, which made the blades distinct and apparently frozen. Indulgent "yep, looking at fans will do that. It's tied into how we see things in motion; tires are also good for it! Does it look like it's going backwards sometimes?" from mom or dad upon whispered report of my discovery, and then I realized that, yep, it could! (Ceiling fan retrograde in Star Wars?)

Later in life my mind was blown again by learning that our visual field is largely an illusion; we get the experience of seeing as if it's all at once, but actually our eyes are constantly wobbling and jerking about, snap snap snap, in staccato saccades, getting sharp focus here and there while the periphery is much more of a blur. And from that, our brains construct what seems like a whole. That saccadic motion might seem wasteful, why not just hold the eyeball still and take in everything? What are you doing, evolution?! But as things are, the motion is integral to sight functioning as it should.

I think rather than a bug, rather than a wasteful, inefficient thing, this kind of neural structure dynamic drift is its own kind of motion that's integral to the function. We just don't know the how or why of the very motion being crucial.

Finally, a quote I picked up and stole from somewhere I've since forgotten: "the mind is the brain in motion, just as walking is the body in motion." The motion is the thing, or rather, the process.

Anyway, that's the kind of bias in my own outlook, which is why this finding wasn't a surprise at all. If they'd said, "we figured out the neurons and their structures are strongly invariant," that would have made me go "really?!" and readjust my layman worldview settings!
posted by Drastic at 1:58 PM on June 11 [5 favorites]


Both are consistently grouping similar ideas together, but the raw numbers are basically arbitrary and can't be directly compared across models.

Mathematically, that means that the topologies are the same/equivalent. Though I'm not sure if that would apply to actual human brains as well as machine learning models.
posted by eviemath at 3:34 PM on June 11 [2 favorites]


"There's a dog, no, wait, that's a log." When you're tripping on LSD and purposely go out looking for these things... it's called "Woogie Hunting". It's even more fun when it bounces again, "it's a snake, no a fallen branch, nope really a snake".

The Brain ‘Rotates’ Memories to Save Them From New Sensations - Quanta Magazine might be of interest. Once you have a high number of dimensions, the number of transforms that affect only bits and pieces of a thought/memory while leaving the vast majority of it alone, as it were, contains multitudes.

See, the mind is like one of those preschool toys where you try and fit shapes into holes, except the shapes are oddly rotatable porcupines.
posted by zengargoyle at 4:05 PM on June 11 [4 favorites]


There's a technique called "dropout" used for training deep learning models, where you randomly disable a percentage of the network links (20% or more) each training iteration -- this actually accelerates learning and improves generalization, preventing overfitting. It seems that neural structures benefit from a bit of chaos, which is how I justify continuing to enjoy alcohol.
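A minimal sketch of that technique, in the "inverted dropout" form commonly used in practice (the layer size and drop rate here are arbitrary): each unit is zeroed with probability p on a given pass, and the survivors are rescaled by 1/(1-p) so the expected activation is unchanged.

```python
import random

def dropout(activations, p=0.2, rng=random):
    """Zero each activation with probability p; rescale survivors by 1/(1-p)."""
    scale = 1.0 / (1.0 - p)
    return [a * scale if rng.random() > p else 0.0 for a in activations]

rng = random.Random(42)
out = dropout([1.0] * 1000, p=0.2, rng=rng)
dropped = sum(1 for a in out if a == 0.0)
# Roughly 20% of units are silenced on this pass; a fresh random
# subset would be silenced on the next one.
assert 150 < dropped < 250
```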
posted by RobotVoodooPower at 5:10 PM on June 11 [5 favorites]


My untrained reading of Wittgenstein suggests that we often may have the impression that we are having a particular sensation -- but as often as not it may be that we're just convincing ourselves that that sensation is the same as the one we experienced some time before

While I'm inclined towards agreement with your broader suspicion, I think Wittgenstein's argument, being linguistic in nature, probably intersects with the epistemological question of whether our sensations are the same from time to time (in philosophy speak: whether "qualia" are a useful concept), rather than addressing it directly. The Investigations, in the passage quoted is generally taken as part of a critique of the notion of a private language, and the belief (which arguably underpins much philosophy from at least the Enlightenment on), that it is intelligible to speak of a mental process of fully scrutable reference (a private and complete cognitive model of a word, its referent and the relationship between these things) that underpins the meaning of public languages. If private languages exist, then one can arguably determine the truth of at least certain phrases in English (e.g. "You are in pain") by translating them into that language and seeing if they are true there. Wittgenstein's late position, as set out in the Investigations is that language is a fundamentally public affair, that we can't know what words, as actually used, mean by measuring them against some internally intelligible language, but only by looking for the rules of the game they are being publicly used in.

What Wittgenstein thought about experience itself is, I think deliberately, opaque: my feeling is that a lot of his philosophical endeavours were directed at protecting experience from philosophy, due to a sort of ascetic mysticism, which if not religious, was at least profoundly shaped by the questions of religion.

Now that said, as you're observing, Wittgenstein's line of thought does seem to intersect with the idea that epistemological reference isn't fixed, and I think it also points toward the possibility that the thing stabilising our experiential and conceptual realm goes beyond just neurological holism, and extends into our public social and linguistic reality. We very often think of ourselves as our brains, but this seems to me to be oddly dualistic: why do the processes of matter occurring inside my skull define "me" more than those which happen outside that shell? When talking of selves, why should we so privilege the think-meat? It feels like a sort of vestigial remnant of a belief in the self as soul, rather than a serious answer to the question of what a "self" is.
posted by howfar at 5:41 PM on June 11 [4 favorites]


There's a dog, no, wait, that's a log

"It was like that song of Harry Lauder's where he's waiting for the girl and says 'This is her-r-r. No, it's a rabbut.'" - The Metropolitan Touch, PG Wodehouse
posted by howfar at 5:53 PM on June 11 [2 favorites]


my perception transformed a bit, perhaps different movement, a person, then that didn't gain focus, then a car, then a truck

I woke up one night to see a ghostly figure in the semi dark. As i stared, it slowly dissolved into nothing.
posted by Horselover Fat at 9:00 PM on June 11 [3 favorites]


Does this mark the final end of the idea of the "grandmother neuron"?
posted by clawsoon at 9:11 PM on June 11 [2 favorites]


I wonder if, in a similar way, what matters in brains is functional consistency rather than representational consistency.

Yeah, I'm with Drastic. I don't understand why anybody would ever have assumed otherwise. It's a fundamental property of information processing.

Even in information-processing systems as grossly different in detail from brains as computers are, functional consistency matters way more than representational consistency; any coding will do as long as the subsystems working on the information so coded agree on it.
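A toy illustration of "any coding will do as long as the subsystems agree" (the codes and the pleasant/unpleasant framing are invented for the example): two arbitrary, mutually incompatible codes for the same three smells each support the exact same classification behavior.

```python
# Two internally consistent but mutually incomparable codes.
code_a = {"rose": 7, "lily": 8, "sulfur": 99}
code_b = {"rose": 3000, "lily": 3001, "sulfur": 42}

def make_classifier(code):
    """Build a 'pleasant?' classifier that works entirely in its own code."""
    pleasant = {code["rose"], code["lily"]}
    return lambda smell: code[smell] in pleasant

classify_a = make_classifier(code_a)
classify_b = make_classifier(code_b)

for smell in ("rose", "lily", "sulfur"):
    # Identical input-output behavior despite disjoint internal codes.
    assert classify_a(smell) == classify_b(smell)
```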
posted by flabdablet at 10:01 PM on June 11 [1 favorite]


Should I sell my Neuralink stock?
posted by Phanx at 10:13 PM on June 11


Yeah, I'm with Drastic. I don't understand why anybody would ever have assumed otherwise. It's a fundamental property of information processing.

I don't disagree, but I think it's still important to ask whether this drift is a matter of tolerable entropy and/or is adaptive in some other way(s). The former seems not unlikely: our brains use so much energy that something which mitigates this will presumably have significant adaptivity, even if it comes at some cost. However it doesn't seem established that this drift does provide an overall energy saving, so trying to devise a way of testing the hypothesis that it does seems like a useful goal (although I suspect that the list of other things we'd need to know before we could test that properly is as long as your memory). Falsifying that hypothesis, in particular, would probably tell us something significant about how the human brain is adapted to its environment.
posted by howfar at 3:33 AM on June 12 [3 favorites]


What Wittgenstein thought about experience itself is, I think deliberately, opaque...

'Whereof one cannot speak, thereof one must be silent.'
posted by StickyCarpet at 6:47 AM on June 12


Well quite. One problem that I've had with a number of scholars of Wittgenstein (in the Anglo-American tradition, at least) is a tendency to want to wave away things like the last few propositions of the Tractatus as nonessential, when it seems pretty clear that Wittgenstein saw them as anything but. I once saw Wittgenstein's project described as like making a nautical chart of an island: the purpose isn't to let one traverse the island, but rather navigate around it. I've always felt pretty confident that this is an accurate conception.
posted by howfar at 7:16 AM on June 12 [1 favorite]


the purpose isn't to let one traverse the island, but rather navigate around it

That seems to be his mission in the Tractatus, not really so much to say anything, but to outline the boundaries of the negative space, about which nothing can be meaningfully or productively said.

He relates a poignant story, about a person in old age sitting under a now-fully-grown tree that he planted with his father in childhood: Just sit there and experience it, any words you might utter can only subtract.
posted by StickyCarpet at 8:50 AM on June 12 [1 favorite]


Speaking of "bad luck and sorrow", Wittgenstein's Tractatus, which was the only one of his many published volumes that he intentionally authored (everything else is notes taken by students during lectures, etc.), happened to debut at the same conference where Gödel’s "Incompleteness Theorems" first appeared, and Gödel completely dominated the critical dialogue. Poor Ludwig.
posted by StickyCarpet at 9:10 AM on June 12


Even in information-processing systems as grossly different in detail from brains as computers are, functional consistency matters way more than representational consistency; any coding will do as long as the subsystems working on the information so coded agree on it.

I'll continue to push back a bit, though... Yes, functional consistency is what matters most, but any given instantiation is typically going to have some amount of representational consistency.

Consider program compilation: You specify the behavior you want, and don't really care about the particulars of how it happens, and the compiler finds a reasonably good set of instructions to carry them out. But then, once compiled, the representation is pretty static, and will even stay mostly the same with small code changes and recompilation. This is because the compiler is finding something relatively optimal, and so the bytecode for any particular function comes out nearly identical every time.
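That determinism is easy to see with Python's own compiler (a quick check, not a claim about any particular optimizing compiler): compiling the same source twice yields byte-identical code, filename aside.

```python
# Compile the same function definition twice under different "filenames".
source = "def f(x):\n    return x * 2 + 1\n"

code_a = compile(source, "<a>", "exec")
code_b = compile(source, "<b>", "exec")

# The bytecode of the nested function object is identical across compiles:
# the compiler converges on the same representation every time.
assert code_a.co_consts[0].co_code == code_b.co_consts[0].co_code
```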

Likewise, if some slice of the brain is able to turn a particular set of neural outputs into 'grandma' reliably, why mess with it once it's working?

Again, but for databases: Yes, we don't care where on disk the data sits, so long as the DB query finds it. But that doesn't mean we go around reshuffling data on disk for funsies. Moving data around comes with expenses: In the DB analogy, you've got caching for efficiency, at the cost of lots of caching problems. (Cache misses! Stale data! Inconsistency!) Moving data around for no reason is wasteful and error-prone. You can also interpret the errors as failures of functional consistency, fwiw...
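A toy version of the staleness worry above (the dict-as-cache setup is invented for illustration): a cached snapshot that isn't invalidated on write quietly serves old data.

```python
# Backing store and a cache snapshot taken for fast reads.
store = {"grandma": "kind face"}
cache = dict(store)

# The underlying representation moves/changes...
store["grandma"] = "new haircut"

# ...and without invalidation, the cached copy is now stale.
assert cache["grandma"] == "kind face"
assert store["grandma"] != cache["grandma"]
```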

Maybe a way to reframe the objection, in a more hardware-independent way: What does the CAP theorem say about brains?

Due to the difficulty of maintaining functional consistency while moving things around constantly, I would expect there's some good reason that the representational drift is happening. So, I for one would love to know /why/ this happens. (With the understanding that the whys of neuroscience are generally never answered.)
posted by kaibutsu at 11:29 AM on June 12 [2 favorites]


Bless this mess.
posted by rrrrrrrrrt at 10:40 PM on June 12


kaibutsu: if there's any representational consistency to be found, I would expect it to be found in specific temporal patterns of spike trains far more often than in activation patterns for specific neurons; players may come and go but the song remains the same.

If there is neuron-specific representational consistency to be found, I would expect it to become easier to find the closer to motor neuron outputs you look for it, and possibly sensory input neurons as well. Interfaces are generally more hardware-specific than internal processing is.
posted by flabdablet at 5:39 AM on June 15


Aside: "Timothy O’Leary, a neuroscientist at the University of Cambridge" must hear a lot of smart-arse remarks about his relatives and his research techniques.
posted by illongruci at 3:51 AM on June 16

