

Blue Brain
March 3, 2008 1:42 PM

Out of the Blue: "Can a thinking, remembering, decision-making, biologically accurate brain be built from a supercomputer?"
posted by homunculus (38 comments total) 16 users marked this as a favorite

 
Metafilter: "We've got all these tools for studying the cortex"
posted by TheOnlyCoolTim at 1:50 PM on March 3, 2008


Programmed by fellas with compassion and vision.
posted by Dave Faris at 1:50 PM on March 3, 2008


My opinion is that the entire resources of Blue Brain could not reproduce the dynamical trajectory of a single neuron. It is a pale reflection of a richer reality. Of course, that doesn't rule out some sort of quantized parody of intelligence emerging from the numerical simulations.

Markram tried to get around "the mystery problem" by focusing on a specific section of a brain: a neocortical column in a two-week-old rat.

Mystery problem indeed.
posted by kuatto at 1:53 PM on March 3, 2008


Nice article. Thanks.
posted by seanmpuckett at 1:55 PM on March 3, 2008


MetaFilter: some sort of quantized parody of intelligence
posted by GuyZero at 2:00 PM on March 3, 2008


I was going to make my stock point on this issue, but they made it themselves in their excellent faq:

"Mammals can make very good copies of each other, we do not need to make computer copies of mammals. That is not our goal."
posted by Baby_Balrog at 2:16 PM on March 3, 2008 [1 favorite]


Well, the problem now is that we don't actually know how each and every neuron behaves. There are small neurons deeper in the brain that are too small to be probed with voltage meters to see when they fire, and the bottom line is that it hasn't been mapped in nearly enough detail yet. If the promise of nanotech holds true, it might be possible to someday map out the whole brain.

The other way of attacking this problem is to simulate the growth of the brain, from DNA on up, but that would require simulating things on a molecular level. Maybe you could get away with less detailed approximations, but it would be very challenging computationally.

I think this article suffers from a serious case of bad science writing as well. They're trying to make a simple experiment (simulating one small part of the rat brain) into some huge thing that it's not (simulating an entire brain, and thus creating AI).

My opinion is that the entire resources of Blue Brain could not reproduce the dynamical trajectory of a single neuron. It is a pale reflection of a richer reality. Of course, that doesn't rule out some sort of quantized parody of intelligence emerging from the numerical simulations. -- kuatto

That's a really dumb statement, for a number of reasons. For one thing, no one would seriously try to suggest creating 'intelligence' this way; it would be absurdly wasteful. For another, all activity in the brain, as well as the rest of the universe, is quantized. Furthermore, any analogue system can be quantized without losing any information if the artifacts of the quantization are smaller than the analogue noise in the system. From what we know of neurons, they are pretty much an on/off affair.

And finally, it misses the point. The point of this work isn't to create AI, it's to (I'm assuming) create a brain simulation that can be used for experimentation, the same way aerodynamic simulations are used to help build airplanes, traffic simulations are used to help build roads, etc. This type of work would let scientists perform experiments much more efficiently, and in theory try out ideas that they didn't get a chance to do before. (Of course, at this point the computer itself is absurdly expensive, but if Moore's law holds up, use of brain simulators in neurological research might become cost-effective.)
posted by delmoi at 2:17 PM on March 3, 2008 [3 favorites]
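[delmoi's quantization-versus-noise claim above is easy to sketch numerically. The following is a minimal illustration with made-up signal and noise values, not anything measured from neurons: a uniform quantizer whose step is well below the noise floor adds almost no distortion beyond the noise already in the signal.]

```python
import numpy as np

# Hypothetical values chosen only to illustrate the point.
rng = np.random.default_rng(0)

t = np.linspace(0.0, 1.0, 10_000)
signal = np.sin(2 * np.pi * 5 * t)         # the "true" analogue signal
noise_sd = 0.05                            # intrinsic analogue noise level
noisy = signal + rng.normal(0.0, noise_sd, t.size)

step = noise_sd / 10                       # quantization step << noise
quantized = np.round(noisy / step) * step  # uniform quantizer

noise_power = np.mean((noisy - signal) ** 2)     # power of the analogue noise
quant_power = np.mean((quantized - noisy) ** 2)  # extra power added by quantizing

# The quantizer's added error is a tiny fraction of the noise that was
# already present, so digitizing loses essentially nothing.
ratio = quant_power / noise_power
print(ratio)
```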


Good post.
posted by sveskemus at 2:18 PM on March 3, 2008


Nice link, thanks for posting it. It will be interesting to see what happens to this project over the next decade.

I think this article suffers from a serious case of bad science writing as well. They're trying to make a simple experiment (simulating one small part of the rat brain) into some huge thing that it's not (simulating an entire brain, and thus creating AI).

The impression I got from the quotes in the article was that the team is billing their project that way -- of course, it's just what a science writer wants to hear, but I don't think the author was adding too much in the way of hype beyond what he got from the lab's PR.
posted by voltairemodern at 2:58 PM on March 3, 2008


Awesome.
posted by jmnugent at 2:59 PM on March 3, 2008


Sure, it starts with portions of rat brains and ends with Terminators. Mark my words.
posted by sharpener at 3:05 PM on March 3, 2008 [2 favorites]


Marked.
posted by nzero at 3:11 PM on March 3, 2008


Programmed by fellas with compassion and vision.

I hope my spandex jacket is dark blue with a green stripe.
posted by Hicksu at 3:24 PM on March 3, 2008


Awesome. Thanks for this post.
posted by tarheelcoxn at 4:25 PM on March 3, 2008


MetaFilter: Where the cortex studies you!
posted by monju_bosatsu at 4:26 PM on March 3, 2008


That's your stock point? That we can already make brains?

Good thing prehistoric humans didn't have that attitude. "Flint and rock? We already have a way to make fire: wait for lightning!"
posted by DU at 4:33 PM on March 3, 2008 [2 favorites]


I'd think the point of trying to create a mathematical simulation of the brain is to crack the core mechanism of consciousness at a neurotopological level. Once you have that, you can start trying to improve upon it. Do a sufficiently decent job at it, and you can let the system design its own upgrades from that point on. The end product of that process - properly contained - is the only thing like a realistic means of achieving any sort of technological singularity.
posted by Ryvar at 4:47 PM on March 3, 2008 [1 favorite]


Good thing prehistoric humans didn't have that attitude. "Flint and rock? We already have a way to make fire: wait for lightning!"

That's not really a good analogy. Making fire with flint and rock turned out to be profoundly helpful, but the 'stock point' is just that actually being able to exactly simulate human intelligence wouldn't be that helpful.

If you're thinking that what would be helpful is creating adult intelligence at the flick of a switch, that's moot because it's likely impossible anyway. At least, it's a very, very long way off. If and when we do create artificial human level intelligence, it will require training and babying and growing just as we do.

So it's not very useful, is the point. What is useful is creating AI that complements our abilities, as we've arguably been doing for a bit of time now, and which we are quickly becoming much better at.

As for this project, it's certainly worth pursuing, but the hype presented in this article is indeed far out of scale.
posted by Alex404 at 4:57 PM on March 3, 2008


I'd think the point of trying to create a mathematical simulation of the brain is to crack the core mechanism of consciousness at a neurotopological level.

Two problems with that statement:

1) It's not a simulation of the brain; it's a simulation of a model. There are many good reasons to suppose the model is still far from capturing the full range of activities of the brain.

2) Core mechanism of consciousness at the neurotopological level... yah... well, on one hand, a brain is not a sufficient condition for consciousness. On the other, being committed to mechanisms of consciousness suggests to me... well, that you're wrong, in my eyes. I guess that's not necessarily a bad thing, but if you care what I think, try to meditate on the idea of whether you're comfortable letting consciousness be a product of the brain like airplanes out of a hangar.

As for me, I suspect it's a lot more subtle than that.
posted by Alex404 at 5:04 PM on March 3, 2008


That's not really a good analogy. Making fire with flint and rock turned out to be profoundly helpful, but the 'stock point' is just that actually being able to exactly simulate human intelligence wouldn't be that helpful.

But that's not what they're trying to do! They are trying to simulate a brain that they can perform experiments on without using live rats! That is an enormous difference. Arguably, we can already build a computer that is as intelligent as a rodent, at least as far as we can measure.

And simulating a brain at a molecular level would probably be one of the least efficient ways to create an AI. But that's not their goal!

The analogy isn't wait for lightning : flint and steel :: whatever we have now : simulated human intelligence. The analogy is between using live rats and cell cultures vs. doing everything on a computer. There is a huge world of experimentation you can do on a simulation that would be impossible on a live system.

The purpose of this research is to study the emergent properties of brain tissue.
posted by delmoi at 5:06 PM on March 3, 2008 [1 favorite]


So it's not very useful, is the point.

the machines will decide what's useful or not.

If and when we do create artificial human level intelligence, it will require training and babying and growing just as we do.


yes but only once. then you make as many copies as you want
posted by bhnyc at 5:10 PM on March 3, 2008 [1 favorite]


For another, all activity in the brain, as well as the rest of the universe, is quantized. Furthermore, any analogue system can be quantized without losing any information if the artifacts of the quantization are smaller than the analogue noise in the system. -delmoi

If the 'system' you refer to is a brain, how would you characterize the 'noise' as a component of neural activity? i.e. how would you make the determination as to what constitutes noise? Intractable.

Secondly, consider the gravitational constant G as a component of the universe. Clearly, a quantized, finite representation of G is not equivalent to the actual thing. Similarly with the differential equations that govern the mechanics of a neural network. Even if we are really precise and carry that decimal place out to the nth place, there is always a higher precision to be had.

So if the equations that govern the ion channels and the synaptic integration are carried out to some arbitrary precision, then my question is: in what sense can a digital machine ever be equivalent to a brain? A physical neuron, in the act of synaptic integration and the discharge of electrical energy, embodies the full 'precision' of physical reality, rendering the digital simulation a pale, imperfect reflection of a continuous phenomenon. And this of course neglects any errors in the models that these researchers employ (there's that pesky "mystery" problem).

Not that I'm knocking their efforts. But I think they should call it something other than "Blue Brain". I don't think it's a brain, nor can it ever be.
posted by kuatto at 5:24 PM on March 3, 2008
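[kuatto's precision worry can be illustrated with a toy chaotic system rather than a neuron model: in a sensitive nonlinear iteration, truncating the working precision eventually changes the entire trajectory, which is the sense in which no finite-precision run reproduces the continuous system. The logistic map below is a stand-in for illustration only, not a neural model.]

```python
import numpy as np

def logistic(x0, n, dtype):
    """Iterate the chaotic logistic map x -> 4x(1 - x) at a given precision."""
    x = dtype(x0)
    four = dtype(4.0)
    one = dtype(1.0)
    traj = []
    for _ in range(n):
        x = four * x * (one - x)
        traj.append(float(x))
    return traj

lo = logistic(0.3, 60, np.float32)   # single-precision run
hi = logistic(0.3, 60, np.float64)   # double-precision run

# Early on the two runs agree closely; tiny rounding differences then grow
# exponentially, and by the later steps the trajectories have decorrelated.
early_gap = abs(lo[5] - hi[5])
late_gap = max(abs(a - b) for a, b in zip(lo[40:], hi[40:]))
print(early_gap, late_gap)
```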


Programmed by fellas with compassion and vision.
"Open the pod bay doors please, HAL."
posted by Dave Faris at 4:50 PM on March 3

Fixed that for you me.
posted by not_on_display at 5:31 PM on March 3, 2008


Compared to some of the d-bags I am around all day, I'll take my chances with Blue.

YOU'RE MY BOY BLUE!!
posted by Senator at 5:41 PM on March 3, 2008


...being able to exactly simulate human intelligence wouldn't be that helpful.

- 2008

all the calculations that would ever be needed...could be done on...three digital computers ...No one else...would ever need machines of their own, or would be able to afford to buy them.

- 1951
posted by DU at 6:02 PM on March 3, 2008


Fundamental attribution error by way of verisimilitude. The brain processes sensory data in order to avoid making a possible mistake that causes suffering, the awareness of which is self/consciousness. It has developed on its own because it succeeds. Cognition becomes meaningless outside of a jungle of pitfalls, where it never has value. Any thinking process that humans make would necessarily be engaged in a survival game with its makers. The development of this blue brain is strategic, WTKION.
posted by Brian B. at 7:03 PM on March 3, 2008


So if the equations that govern the ion channels and the synaptic integration are carried out to some arbitrary precision

Thanks to omnipresent noise, the brain cannot carry out arbitrarily precise integration of a neural signal any more than I can use an oscilloscope to carry out an arbitrarily precise measurement. For your barrier to be valid, the noise processes themselves would have to be fundamental to consciousness. I'm not talking about difficulty in figuring out what is the neural signal and what is the noise, I'm talking about the thermal noise and so on being essential to consciousness. An interesting dualism - the soul as a white noise process. Even then, one could try hooking up the computer to a hardware random noise generator, although perhaps souls only encode information in the noise of brains, not computers.
posted by TheOnlyCoolTim at 7:25 PM on March 3, 2008


Q: "Can a thinking, remembering, decision-making, biologically accurate brain be built from a supercomputer?"
A: No, no, we don't have the technology yet.
posted by finite at 7:53 PM on March 3, 2008


Fellow humans, wake up! Our brains are obviously controlling us, making us build bigger, better mechanical brains which one day will enslave us all!
posted by parallax7d at 10:13 PM on March 3, 2008 [2 favorites]


try to meditate on the idea of whether you're comfortable letting consciousness be a product of the brain like airplanes out of a hangar.

I am far more comfortable with that than I am in pseudo-magical ghosts in the machine. I am also aware that the fact of the matter is unlikely to care about how either of us feel about the situation.

I don't think it's a brain, nor can it ever be.

And a submarine isn't a fish, but functional equivalence is more than sufficient.
posted by Sparx at 4:38 AM on March 4, 2008


Please don't put it in charge of the bombs and missiles.
posted by surfdad at 4:47 AM on March 4, 2008


You have to admire Markram's determination even if you don't share his optimism. (If you wanted to be really negative you could say that what they've done is decide how they think a neural column works, build something that works like that, and voila, it turns out to work just like they think a real neural column works). It actually sounds as if some good research has come out of this; I hope it's being published appropriately.

The most depressing bit for me was the bit that begins:

It's the transformation of those cells into experience that's so hard. Still, Markram insists that it's not impossible. The first step, he says, will be to decipher the connection between the sensations entering the robotic rat and the flickering voltages of its brain cells. Once that problem is solved—and that's just a matter of massive correlation—the supercomputer should be able to reverse the process.

He doesn't get it, does he? Doesn't understand what the problem of subjective experience even is, much less have any kind of answer. It's not that he thinks awareness will follow naturally if the simulation is wired correctly - he actually has no idea that there is any such thing as awareness going on here.
posted by Phanx at 5:22 AM on March 4, 2008


Metafilter: No idea that there is any such thing as awareness going on here

A rich thread for this sort of thing.
posted by Grangousier at 5:28 AM on March 4, 2008


The naysayers here seem to think this shouldn't be studied at all. Yes, our current understanding of how neurons work in a physical brain is probably incomplete. But the basic function is very simple: When a neuron receives a certain threshold of input from its dendrites, it fires. That's a relatively simple process to model.

Is there more going on in the neurons? Maybe, maybe not. It's extremely difficult to tell by looking at the neurons themselves. So, let's build a model of the processes that we know are going on, and see how much brain function we can simulate on it. The stuff we can't simulate, we'll have to find other explanations for. We can't really tell until we try.

Some sort of emergent consciousness might be the ultimate pipe-dream goal by some team members, and Seed magazine is certainly one to play up that angle. But there are huge advances to be made if this model can teach us more about how neurons work together to produce brain phenomena. Our understanding of epilepsy is still pretty rudimentary - imagine what we could learn from a simulated chunk of brain that we can experiment with to no end.
posted by echo target at 7:41 AM on March 4, 2008
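[echo target's "threshold of input, then it fires" description corresponds to the classic leaky integrate-and-fire model. Here is a minimal sketch of that idea; the time constants, thresholds, and drive current are illustrative placeholders, not parameters from Blue Brain.]

```python
# All constants here are illustrative, not anything Blue Brain actually uses.
def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: return voltage trace and spike step indices."""
    v = v_rest
    trace, spikes = [], []
    for i, current in enumerate(input_current):
        # Leak back toward rest, plus summed input from the "dendrites".
        v += dt / tau * (v_rest - v) + dt * current
        if v >= v_thresh:      # threshold crossed: the neuron fires...
            spikes.append(i)
            v = v_reset        # ...and resets
        trace.append(v)
    return trace, spikes

# Constant drive strong enough to cross threshold repeatedly.
trace, spike_times = simulate_lif([0.15] * 500)
print(len(spike_times))
```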


Sadly, the only way we'll know the moment they're successful is when the lead programmer is shot by some grizzled commando from the future.
posted by Uther Bentrazor at 9:23 AM on March 4, 2008 [1 favorite]


Mind-reading with a brain scan: Brain activity can be decoded using magnetic resonance imaging.
posted by homunculus at 9:54 AM on March 6, 2008


They may teach it how to think, but can they teach it how to LOVE?
posted by blue_beetle at 9:42 AM on March 7, 2008


He doesn't get it, does he? Doesn't understand what the problem of subjective experience even is, much less have any kind of answer. It's not that he thinks awareness will follow naturally if the simulation is wired correctly - he actually has no idea that there is any such thing as awareness going on here.

Nor does he care. It is, as they say, outside the scope of his project:
What the Blue Brain Project is not

The Blue Brain Project is an attempt to reverse engineer the brain, to explore how it functions and to serve as a tool for neuroscientists and medical researchers. It is not an attempt to create a brain. It is not an artificial intelligence project. Although we may one day achieve insights into the basic nature of intelligence and consciousness using this tool, the Blue Brain Project is focused on creating a physiological simulation for biomedical applications.
They're attempting to build a software simulation of the hardware that makes up the brain. They are not even attempting to build the software that runs on that hardware.
posted by moonbiter at 11:00 AM on March 7, 2008




This thread has been archived and is closed to new comments