Could a neuroscientist understand a microprocessor?
June 22, 2016 3:52 PM

Studying the 6502 chip using the tools we have available to study nematode brains and the like. The paper (PDF).
posted by bigbigdog (23 comments total) 26 users marked this as a favorite
 
I would like to turn the challenge around, taking development into account: Could a chip designer build a chip which started as a single transistor and grew from there, and was able to play ever-more-complex games as it grew without any interruption or sudden loss of function along the way?

And if they built a chip that way, could another chip designer understand it?
posted by clawsoon at 4:52 PM on June 22, 2016 [1 favorite]


That's a really interesting premise, thanks for posting.
posted by Annika Cicada at 5:12 PM on June 22, 2016 [1 favorite]


Could a chip designer build a chip which started as a single transistor and grew from there, and was able to play ever-more-complex games as it grew without any interruption or sudden loss of function along the way?

People have been studying using genetic algorithms for programming FPGAs for a while:

"A conventional, human approach to designing a circuit to discriminate between two tones would use a clock to count the intervals between the peaks of the tone sound waves - with longer intervals between the 1KHz tones than between the 10KHz tones. If no clock was available among the components, the human designer would need to create one on the chip. This is not difficult - but it would take far more than the array of 10¥10 logic components that Adrian Thompson allocated on the FPGA for his experiment.

So without a clock or the apparent wherewithal to make one, how did the genetic algorithm and FPGA solve the problem?

'I haven't the faintest idea how the circuit works,' he says."
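
(For anyone who hasn't seen one, the evolutionary loop behind that experiment boils down to something like this minimal Python sketch. The real fitness evaluation loaded each candidate bitstream onto the FPGA and scored its tone discrimination; here it's replaced by a dummy bit-matching score, and every constant is illustrative rather than taken from the paper.)

    import random

    GENOME_BITS = 1800        # Thompson's configuration bitstream was on this order
    POP_SIZE = 50
    MUTATION_RATE = 1 / GENOME_BITS

    # Stand-in for the real evaluation, which loaded each candidate bitstream
    # onto the FPGA, played it 1kHz and 10kHz tones, and scored how well the
    # output discriminated between them. Here: similarity to a random target.
    TARGET = [random.randint(0, 1) for _ in range(GENOME_BITS)]

    def fitness(genome):
        return sum(g == t for g, t in zip(genome, TARGET))

    def mutate(genome):
        # Flip each bit with a small independent probability.
        return [bit ^ (random.random() < MUTATION_RATE) for bit in genome]

    def crossover(a, b):
        cut = random.randrange(len(a))
        return a[:cut] + b[cut:]

    population = [[random.randint(0, 1) for _ in range(GENOME_BITS)]
                  for _ in range(POP_SIZE)]

    for generation in range(200):
        ranked = sorted(population, key=fitness, reverse=True)
        elite = ranked[:POP_SIZE // 5]    # the fittest fifth survive unchanged
        population = elite + [
            mutate(crossover(random.choice(elite), random.choice(elite)))
            for _ in range(POP_SIZE - len(elite))
        ]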

posted by GuyZero at 5:15 PM on June 22, 2016 [13 favorites]


I should add that sadly Thompson's research never really went anywhere as far as I can tell.
posted by GuyZero at 5:17 PM on June 22, 2016 [4 favorites]


GuyZero: People have been studying using genetic algorithms for programming FPGAs for a while

I remember that one! Now combine the randomness of evolution with the challenge of development, and you've got a nearly inscrutable system.

That said, I just finished a neurology book that's 16 years old, and I was impressed with how much we'd learned about brains even then. (The spatial organization of the auditory cortex in bats was especially interesting, but I digress.)

I've only read the abstract and the blog post so far, haven't read the paper yet, but at this point it's reminding me of the physicists' disease. One of the comments on the blog post points to this blog post, which makes a pretty good case for the idea that the biologists tackling the 6502 weren't very good at doing lesion studies. They used the tool poorly, and then decided that the tool was poor.
posted by clawsoon at 5:37 PM on June 22, 2016


Could a chip designer build a chip which started as a single transistor and grew from there, and was able to play ever-more-complex games as it grew without any interruption or sudden loss of function along the way?

I think your thought experiment may be showing the opposite of what you think it's showing. If I understand correctly, the point of the paper is "If modern neuroscience tools can't even help us understand something as simple and consistent as a microcomputer CPU, what hope do they have of helping us understand something as complicated as a brain?" To which you seem to be adding "Yeah, but brains are actually a lot more complicated and hard to understand than a CPU!", which isn't arguing against the paper at all.
posted by baf at 6:24 PM on June 22, 2016 [8 favorites]


The basic flaw of the paper is that a microprocessor is a single point of failure, as the word "central" in CPU indicates, responsible for supporting every single action of the overall organism. While there are some primitive nerve networks which seem pervasively necessary for consciousness, there is nothing similar to a CPU in the brain, and the "processing" which the 6502 does is, in brains, very much distributed among the neurons that are also performing memory and algorithmic functions.

A better analogy/experiment (and an easier one to do) would be to ask whether you could reverse engineer the structure of a working program by doing "lesion studies" on bits of its program memory, ignoring the CPU and other infrastructure as just emulating the more distributed architecture of a brain. And in that case you'd get a much more positive result, because that's essentially what hackers have been doing since the 1970s.
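
Concretely, the experiment might look like this sketch, where run_emulator and the behaviour list are hypothetical stand-ins for a real 6502 emulator harness rather than anything that exists:

    # Hypothetical harness: run_emulator(rom) runs the program and returns
    # the set of behaviours it still exhibits. Neither it nor the behaviour
    # list is real; they stand in for an actual 6502 emulator setup.
    def lesion_map(rom, behaviours, run_emulator):
        """Zero one byte of program memory at a time and record which
        behaviours break, building an address -> broken-behaviours map."""
        results = {}
        for addr in range(len(rom)):
            lesioned = bytearray(rom)
            lesioned[addr] = 0x00     # "kill" this byte (0x00 is BRK on a 6502)
            surviving = run_emulator(bytes(lesioned))
            broken = [b for b in behaviours if b not in surviving]
            if broken:
                results[addr] = broken
        return results

Addresses whose lesions all break the same behaviour would cluster into the routine implementing it, which is exactly the kind of structure recovery the paper's transistor-level lesions failed to achieve.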
posted by Bringer Tom at 6:54 PM on June 22, 2016 [2 favorites]


Yes, baf, now that I've read the paper, that's a fair criticism of my point. I was reacting to the tone of the blog post, which has more in common with the earlier "Can a Biologist Fix a Radio?" paper. That paper (linked from the blog post) has more of an "if only we did it like engineers it'd be easy" feel to it. This paper is more "let's be humble about our techniques", which is fair.

Both papers, though, point out that if you asked a person who knows how to build a radio or a processor to take a look at one they haven't seen before, they'd do a pretty good job of figuring it out. If you had someone who knew how to build brains, they'd probably also do a pretty good job at looking at a brain they hadn't seen before and figuring it out. Those people don't exist, though. Right now we're closer to the stage where a relevant comparison would be to hand a radio to Volta or a processor to Faraday and ask them to figure it out; they were both brilliant experimentalists, but they wouldn't stand a chance.

The suggestion that was (almost) made in the Jonas and Kording paper to develop techniques that allow us to understand currently-inscrutable neural nets could turn out to be helpful. And I found the quote in the Lazebnik paper, that "biochemistry disappeared in the same year as communism", intriguing. It'd be interesting to know more about the background to that.
posted by clawsoon at 6:55 PM on June 22, 2016 [1 favorite]


They are trying to understand how the processor runs a game. You can't figure that out by destroying individual transistors in the processor. That's the biological equivalent of deleting random genes in the neurons to determine how a fruit fly finds food. It is operating at the wrong level of the hierarchy.

It would make more sense to change random bits of the Donkey Kong program to understand how that program works.
posted by monotreme at 7:00 PM on June 22, 2016 [3 favorites]


This makes me wonder if anyone has done lesion studies on recurrent neural networks. Seems like those might be a much better equivalent to biological lesion studies.
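
A toy version might look like this - an untrained random recurrent net in Python/numpy, purely to show the mechanics of silencing one unit at a time and measuring the deficit:

    import numpy as np

    rng = np.random.default_rng(0)

    # A tiny Elman-style recurrent net with arbitrary random weights,
    # standing in for a trained network.
    n_in, n_hidden = 4, 32
    W_in = rng.normal(0, 0.5, (n_hidden, n_in))
    W_rec = rng.normal(0, 0.5, (n_hidden, n_hidden))

    def run(inputs, lesioned=()):
        """Run the net over the input sequence, clamping lesioned units to zero."""
        h = np.zeros(n_hidden)
        for x in inputs:
            h = np.tanh(W_in @ x + W_rec @ h)
            h[list(lesioned)] = 0.0   # the "lesion"
        return h

    inputs = [rng.normal(size=n_in) for _ in range(20)]
    baseline = run(inputs)
    for unit in range(n_hidden):
        deficit = np.linalg.norm(run(inputs, lesioned=(unit,)) - baseline)
        print(f"unit {unit:2d}: output shift {deficit:.3f}")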
posted by ymgve at 8:31 PM on June 22, 2016 [2 favorites]


They are trying to understand how the processor runs a game.

No, they are trying to understand whether investigative techniques in current use by neuroscientists to understand how the brain controls behaviour are likely to be fruitful.

The point of using Donkey Kong running on a simulated 6502 as some kind of incredibly rough analogue of a simple brain is that we already know how that system works.

It would make more sense to change random bits of the Donkey Kong program to understand how that program works.

That might be true if current neuroscience had anything even vaguely resembling a model of a "program" as a thing conceptually separable from the underlying hardware. To the best of my understanding that is nowhere near true, and might well turn out to be an inapplicable model in any case.
posted by flabdablet at 8:53 PM on June 22, 2016 [3 favorites]


From the "Biologist Repairs a Radio" paper:

...geniuses, who are so rare even outside biology...

Snerk!

Oh, next sentence:

In engineering, the scarcity of geniuses is compensated, at least in part, by a formal language [...]

Shots fired!
posted by spacewrench at 9:37 PM on June 22, 2016 [1 favorite]


flabdablet: That was the point I was trying to make. The brain is not running a program; neuroscience techniques are designed to investigate a system that does not run a program, and they are not a useful means of investigating a system that is running one.
posted by monotreme at 10:18 PM on June 22, 2016


The brain is not running a program

I don't believe we actually know that.

It may well be that the assorted modes of brain operation we currently classify as mental illnesses and/or personality disorders and quirks do indeed correspond in some conceptually useful fashion to programs.

I know from personal experience how different my world seems to be when depressed vs. not, and how quickly my own brain is capable of flipping from one mode to the other. Thinking of depressed-me and contented-me as roughly analogous to two different programs running on the same underlying hardware does not strike me as completely unreasonable.

It also seems to me that an engineered system capable of using general-purpose programmable hardware to implement processes isomorphic to those that happen inside a working brain is currently impossible for purely practical reasons: we don't know how to build it, and even if we did I'd be astonished if it could be done with any sane level of power consumption. But I don't see any in-principle reason to rule it out.
posted by flabdablet at 11:00 PM on June 22, 2016


Isn't the brain's "program" the action potential of every neural connection? Basically the brain is a collection of billions of really tiny "CPUs" each running their own tiny program.
posted by ymgve at 6:26 AM on June 23, 2016 [1 favorite]


the brain is a collection of billions of really tiny "CPUs" each running their own tiny program

That's one way to look at it.

Of course, you could make exactly the same claim about every individual transistor inside a 6502. As the paper hints, though, doing so doesn't yield much in the way of useful understanding about how a 6502 actually works.
posted by flabdablet at 7:42 AM on June 23, 2016


Isn't the brain's "program" the action potential of every neural connection?

If you actually want to get value from applying programming analogies to biology, it seems to me that pondering ribosomes, RNA and DNA as molecular analogs to a massively parallel collection of CPUs, working memory and disk drives might yield more interesting insights.
posted by flabdablet at 7:50 AM on June 23, 2016 [1 favorite]


This makes me wonder if anyone has done lesion studies on recurrent neural networks. Seems like those might be a much better equivalent to biological lesion studies.

A quick search suggests it was done in 1989 but not a lot since then: H. J. Chiel, A lesion study of a heterogeneous artificial neural network for hexapod locomotion, Neural Networks, 1989.
posted by jedicus at 8:50 AM on June 23, 2016


>ribosomes, RNA and DNA as molecular analogs to a massively parallel collection of CPUs, working memory and disk drives

@flabdablet: Nope. For one, the brain seems to operate on the data store directly. The storage system is the RAM. It's all RAM, as shown by the fact that accessing a memory almost always changes it.
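
(A toy illustration of that last point, not a model of real memory: a store whose read operation itself rewrites the trace.)

    class ReconsolidatingMemory:
        """A store where every read blends the trace with the current
        context, so recall is never a pure read. Purely illustrative."""
        def __init__(self, trace):
            self.trace = list(trace)

        def recall(self, context, blend=0.05):
            # Reading the memory nudges it toward whatever accompanied the read.
            self.trace = [(1 - blend) * t + blend * c
                          for t, c in zip(self.trace, context)]
            return self.trace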

When I was studying applied physics, I had quite a bit of (in)organic chemistry thrown in. Couple that with an interest in biology and I understand the draw to equate the current computer programming model to how biological brains work: information is referred to in ISO extensions, you can talk about speeds of operations/chemical reactions/instructions per second.

But the brain seems to work like a fuzzy neural net using functional programming with the inscrutableness of a multi-level hidden Markov model learning function operating on a genetic-algorithmically formed FPGA. It uses anything that works to do what it wants, which leads to results from functionality we just can't see, model, or generalise.

It's like those genetic algorithms which try to create the best signal processor on an FPGA, which, after n generations, create a great circuit for the fitness function. And we then try and copy/paste that to another FPGA, only to discover it doesn't work because the first circuit only worked due to very specific flaws/properties of the substrate the first FPGA was built on.

But then backed by millions of years of generations.

Point being: it is very compelling to see the brain as something we can model in software/hardware. But there are fundamental differences. One being that storage=data=operating memory=working set. Another being that the substrate, the actual material it's running on, fundamentally affects the workings. And yet another is this: we know about DNA, RNA ... but we still have little idea how all these things interact or really work. We've only recently found evidence that a person's life experiences can actually be propagated genetically (yes, we are heading into odd things like racial memory ... at the very least we do know stress changes the expression of the genome and that certain knowledge can be passed on through DNA).

I might not be explaining this correctly, but I'm trying to hint at the fact that computers are deterministic and simple. Brains are a mess that happens to work. And I think the only overlap is superficial. Not coincidental ... but any computing paradigm is still just overly simplistic when applied to the way the brain functions.
posted by MacD at 10:27 PM on June 24, 2016 [2 favorites]


I'm trying to hint at the fact that computers are deterministic and simple

So I take it you are not a user of Microsoft Windows-based PCs.

Seriously, the brain has 10^10 neurons with 10^15 interconnections. We do not know the rules by which those interconnections are formed. We know the main cables are laid genetically during early brain development to wire the 100 or so major areas to one another through the white matter, but we don't know how the fine details emerge, and there are a lot of reasons to assume those are mostly self-programmed through experience.

The thing is, there might be weird stuff going on, but there's no need for weird stuff; the rules could actually be quite simple once the pattern is known. For example, it's almost known for certain at this point that long-term memories are encoded by synapse and dendrite growth, like plugboard wiring. Neural firing seems more concerned with the decision-making process by which this growth is directed, and that relationship isn't understood at all. That doesn't mean it's complicated, though. It just means that something simple iterated billions of times will develop quite spectacularly complicated and hard-to-understand emergent properties.
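
As a toy illustration of "something simple iterated billions of times": the Hebbian update below is essentially one line, but run long enough with recurrent feedback it leaves behind self-reinforcing cliques of wired-together units that you'd never predict from reading the rule. (The rule is the textbook version and every constant is arbitrary; this is not a claim about real synapses.)

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100
    W = np.zeros((n, n))              # "synaptic" weights, all absent to start
    act = (rng.random(n) < 0.1).astype(float)

    def step(W, act, rate=0.01, decay=0.0005):
        # Next round of firing: noise plus recurrent drive through current weights.
        drive = W @ act + rng.normal(0, 1, n)
        act = (drive > np.quantile(drive, 0.9)).astype(float)   # top tenth fire
        # The simple local rule: co-active pairs strengthen, everything decays.
        W = (W + rate * np.outer(act, act)) * (1 - decay)
        np.fill_diagonal(W, 0.0)
        return W, act

    for _ in range(20_000):
        W, act = step(W, act)

    print(f"mean weight {W.mean():.4f}, max {W.max():.4f}")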
posted by Bringer Tom at 5:53 AM on June 25, 2016


Nope. For one, the brain seems to operate on the data store directly. The storage system is the RAM. It's all RAM, as the fact that accessing a memory almost always changes it shows.

It's worse than that. It's massively parallel distributed processors as well.

It's like those genetic algorithms which try to create the best signal processor on an FPGA, which, after n generations, create a great circuit for the fitness function. And we then try and copy/paste that to another FPGA, only to discover it doesn't work because the first circuit only worked due to very specific flaws/properties of the substrate the first FPGA was built on.

Quite so. The way to replicate the results of a self-trained FPGA on another FPGA is to put the second one through the same kind of lengthy, time-consuming training process you used to get the first one working; simply copying some kind of state snapshot from the first to the second won't, in general, yield a functional part. Brains appear to be similar in that regard, which is why the transhumanist idea of uploading ourselves into the cloud has always struck me as utterly fanciful.

The interesting thing about self-trained FPGA design, though, is that you can use the very same training approach on any FPGA architecture and end up with something that works, as long as the underlying hardware is comparably capable. If we're going to get strong AI via engineered brain analogs, the degree to which we can capture the precise internal architecture of any given scratch monkey might actually not matter as much as one might expect it to.

computers are deterministic and simple

...when carefully built and programmed by competent engineers who consciously and deliberately limit themselves to the use of techniques designed to create and preserve that determinism and simplicity. That takes insight, careful design, discipline and hard work, and with the rise of Web services and single-page Javascript applications it's fairly rapidly falling out of fashion.

I've seen an awful lot of "it works on my machine" failures; the combination of half-assed programming, inadequate code review and testing, and subtle timing variations between supposedly identical machines can quite easily generate results every bit as non-replicable as those in a self-trained FPGA if somewhat less potentially diverse.

any computing paradigm is just overly simplistic still when trying to apply it to the way the brain functions.

I completely agree that any functional design paradigm is hopelessly inadequate to the task, and I agree with the authors of the paper linked in the OP that the kind of investigative techniques used by neuroscientists will probably never yield an understanding of what's actually going on inside a microprocessor, much less a brain.

However, I can see no in-principle reason (merely an enormous collection of practical ones) why some engineered structure analogous to a brain could not be constructed which, if subjected to lengthy and intensive training comparable to what any of us have had since our brains began to form in utero, would end up perceiving and interacting with the rest of the world in much the way we do. We wouldn't be able to understand what it was actually doing any more than we understand what any of us is actually doing, but that needn't stop it from doing it. The guy who did the self-designing FPGAs ended up with no clue at all about how the evolved designs did what they did, but that didn't stop them didding it.

And every such structure ever built would be an individual, every bit as incapable of being backed-up into the cloud as you or me.
posted by flabdablet at 9:33 AM on June 25, 2016 [1 favorite]


the brain has 10^10 neurons with 10^15 interconnections

AMD and nVidia are both working on it. Current Radeon Pro Duo GPU has roughly 10^3.5 shaders and 10^12.5 bytes of RAM per hemisphere. Clock speed is higher than ours, too.

What's seven orders of magnitude between friends?
posted by flabdablet at 9:47 AM on June 25, 2016


What's seven orders of magnitude between friends?

Next time I'm about to stomp a cockroach I'll ask it first.
posted by Bringer Tom at 4:30 PM on June 25, 2016 [2 favorites]

