Computronium v0.1
September 25, 2017 2:08 PM   Subscribe

"That's interesting," said the scientist observing a memory-like persistence in an atomic switch. Flash forward to a 2-millimeter-by-2-millimeter mesh of silver nanowires connected by artificial synapses. Unlike silicon circuitry, with its geometric precision, this device is messy, like “a highly interconnected plate of noodles,” and exhibits criticality similar to other complex natural processes. (Bonus links to a related paper and an earlier LA Times piece with some nice photos.)
posted by Sparx (18 comments total) 29 users marked this as a favorite
 
As much as scientists in other fields adore outspoken, know-it-all physicists, Bak’s audacious idea — that the brain’s ordered complexity and thinking ability arise spontaneously from the disordered electrical activity of neurons — did not meet with immediate acceptance.

Such delightful shade.
posted by Slackermagee at 2:12 PM on September 25, 2017 [9 favorites]


"If we build something shaped like a brain it'll think like a brain," feels suspiciously close to "If we build our own runway like the GI's runway the planes with food will land!"
posted by leotrotsky at 2:56 PM on September 25, 2017 [38 favorites]


Wow! I didn't totally follow that but the author did a good job of making it at least fairly clear. Also appreciated the acknowledgment of the many folks involved in this project. Anyway, seems very weird and exciting.
posted by latkes at 2:57 PM on September 25, 2017 [1 favorite]


So, the "engineering end run" was long considered an at least somewhat valid line of research into intelligence. The idea being that we will build a computer that has "consciousness" and then, because we built it, we will be able to figure out how consciousness works. That idea has lost most, though by no means all, of its currency, but it's not philosophically void.

Also, while these specific ideas are newish, some very similar ideas are pretty damn old, and the article does not do a good enough job of explaining how the new ideas differ from the old ones.
posted by 256 at 3:34 PM on September 25, 2017 [7 favorites]


Yet in 2013, even with that much power, it took the machine 40 minutes to simulate just a single second’s worth of 1 percent of human brain activity.

Case in point, from the first paragraph. This is a dangerously misleading gloss of what actually happened.
posted by 256 at 3:36 PM on September 25, 2017 [5 favorites]


The other day, I read that some early mammal fossils have been found where the shape of the inside of the skull shows a large olfactory bulb. The fact that at least some brain organization has stayed the same for tens of millions of years suggests to me that there's a benefit to having the neurons well-organized rather than just throwing them randomly together.
posted by clawsoon at 4:17 PM on September 25, 2017 [7 favorites]


All the usual caveats of Science Writing for Laypeople aside, I'd be interested to know why that's misleading, 256. Isn't that pretty much what their own press release said?

(Not trying to argue - I'm not even a scientist, but this is an area I like to follow along at home as best I can).
posted by Sparx at 4:18 PM on September 25, 2017


The press release is also a little misleading. The key part is here:

The nerve cells were randomly connected and the simulation itself was not supposed to provide new insight into the brain - the purpose of the endeavor was to test the limits of the simulation technology developed in the project and the capabilities of K.

They simulated a network with only 1% as many artificial neurons as the brain has real neurons. And their artificial neurons were not really intended to be doing mind-like activity.

The problem is that the way it's presented makes you want to say: "Oh, well, if it takes them 40 minutes to do 1% for 1 second, then they could do 100% in 4000 minutes. Thus, we have the technology to create a human-intelligence AI today, so long as we run it at 1/240,000 speed." Which is hugely untrue.
posted by 256 at 4:28 PM on September 25, 2017 [4 favorites]
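
The back-of-the-envelope arithmetic in 256's comment can be made explicit. This is just the naive linear extrapolation the article invites (and which 256 is warning against); nothing below is from the actual benchmark beyond the numbers as reported.

```python
# Naive extrapolation from the K-computer benchmark: 40 wall-clock
# minutes for 1 second of activity in a network 1% the size of the
# brain.  Assuming (dubiously) that cost scales linearly with size:
sim_minutes = 40            # wall-clock minutes for the benchmark run
fraction = 0.01             # share of the brain's neuron count simulated
biological_seconds = 1      # seconds of activity simulated

full_brain_minutes = sim_minutes / fraction                 # 4000 minutes
slowdown = full_brain_minutes * 60 / biological_seconds     # in seconds
print(f"slowdown factor: {slowdown:,.0f}x")  # 240,000x, i.e. 1/240,000 speed
```

The linear-scaling assumption is itself generous: communication costs in such simulations typically grow faster than linearly with network size.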


Still, a cool idea, with some cool experiments. Just oversold, that's all.
posted by clawsoon at 4:32 PM on September 25, 2017 [1 favorite]


Another science writing flaw that I caught (emphases mine):

Paragraph 5: It can already perform simple learning and logic operations. It can clean the unwanted noise from received signals, a capability that’s important for voice recognition and similar tasks that challenge conventional computers.

Paragraph 22: Instead, the researchers exploit the fact that the network can distort an input signal in many different ways, depending on where the output is measured. This suggests possible uses for voice or image recognition, because the device should be able to clean a noisy input signal.

I haven't dug back in to the papers to see if they've actually demonstrated this noise clean-up. Perhaps they have. But the way this is written reads to my eye like the journalist is stating that something has been achieved in the early paragraphs when it's more likely that it was predicted based on other results. This isn't just a journalistic failing; I've read tons of scientific papers where the scientists themselves make giant leaps of a similar nature.

Still, a cool idea, with some cool experiments. Just oversold, that's all.

Wholeheartedly agree! Really neat experiments.
posted by Existential Dread at 4:38 PM on September 25, 2017 [3 favorites]


The idea being that we will build a computer that has "consciousness" and then, because we built it, we will be able to figure out how consciousness works.

I assume people pointed out that being able to create something and understanding it are very distinct things - how long between discovering how to make fire and discovering oxygen?

To add to the nitpickery - I very much doubt that they can say it's a power law as opposed to, say, a log-normal. See Shalizi.

The sheer fact that the behavior is chaotic is interesting though.
posted by PMdixon at 5:19 PM on September 25, 2017 [1 favorite]
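
PMdixon's point (following Shalizi) can be illustrated with a toy calculation: a wide lognormal has a log-log density slope that drifts so slowly it can pass for a straight power law over several decades. A minimal, self-contained sketch (the parameter choices are made up for illustration):

```python
import math

# For a lognormal density with parameters mu, sigma:
#   log p(x) = -log x - (log x - mu)^2 / (2 sigma^2) + const
# so the local slope of log p versus log x (the "apparent exponent") is:
def local_exponent(x, mu=0.0, sigma=3.0):
    """Local log-log slope of a lognormal density at x."""
    return -1.0 - (math.log(x) - mu) / sigma**2

for x in (1, 10, 100, 1000):
    print(x, round(local_exponent(x), 3))
# The apparent exponent drifts only from -1.0 to about -1.77 over three
# decades: straight enough on a log-log plot to be mistaken for a power
# law without a careful statistical comparison against alternatives.
```

This is exactly why likelihood-ratio tests against log-normal and stretched-exponential alternatives, not straight lines on log-log plots, are the accepted way to claim a power law.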


well the Standard Model is still incomplete so how well do we understand oxygen?
posted by aribaribovari at 5:59 PM on September 25, 2017 [2 favorites]


The research group I was in during my physics PhD was at least partially dedicated to arguing with these criticality guys. It's a simple fact that power law statistics don't imply criticality; many kinds of systems produce them. Criticality means that there is no scale in the system: you show me a picture and I can't tell you how big it is because the system is fractal (like the pictures on this page, only with noise). Whereas in the brain, a skilled neuroanatomist can in fact draw an approximate scale bar on a picture. All the different regions have names and known functions that they're involved in.

Then for my postdoc I moved into a neuroscience lab, and the real joke became clear. I spent 5 years trying to model one neuron. I came CLOSE, but not really enough to rock anyone's world. Individual neurons are tremendously complicated in ways that really matter for biology, not just nodes that add up input.

Anyway, the thing I realized about these people is that arguing with them is pointless. Few if any of them have ever actually learned the theory of criticality (which is incredibly successful at describing phase transitions in matter), so they ignore people pointing out that they're doing the theory wrong. The theory itself argues that you don't have to know much about neuroscience (or whatever other field they're slumming in: EVERYTHING is criticality to them) so they don't make any actual predictions, nor care if their model behaves in ways that don't match real nervous systems. At the same time, neuroscientists don't much care about arguments that these guys are wrong either. The typical reaction I've seen is "Wow, that's really complicated math. They must be very smart. Good on them, I'm going back to my work". It's best to view them as a scientist version of a tongue-replacement isopod: there's a small number of them out there earning a living doing stuff I find icky, but they're not hurting me and I can't stop them anyway. Best to just move on.
posted by Humanzee at 6:32 PM on September 25, 2017 [29 favorites]


"That's interesting," said the scientist observing...

Ever since I heard how Alexander Fleming said "that's funny" when he discovered penicillin, I've been clearing my throat and saying stuff like that periodically in my work, too, hoping that that's the secret Nobel incantation.

So far, it hasn't worked.
posted by gurple at 10:10 PM on September 25, 2017 [6 favorites]


So far, it hasn't worked.

It sounds about as likely to work as leaving out a bit of old bread and rind of brie cheese for the fairies.
posted by sebastienbailard at 12:21 AM on September 26, 2017 [2 favorites]


It sounds about as likely to work as leaving out a bit of old bread and rind of brie cheese for the fairies.

I don't know, that sounds like a pretty good way to discover an antibiotic.
posted by condour75 at 5:40 AM on September 26, 2017 [8 favorites]


Thanks for the OP and the resulting FPP commentary. I feel intelligent when I read it now. Especially shouting out to Humanzee for sharing.
posted by infini at 6:06 AM on September 26, 2017


The challenge is to find the right outputs and decode them and to find out how best to encode information so that the network can understand it. The way to do this is by training the device: by running a task hundreds or perhaps thousands of times, first with one type of input and then with another, and comparing which output best solves a task. “We don’t program the device but we select the best way to encode the information such that the [network behaves] in an interesting and useful manner,” Gimzewski said.
OH MY GOD, JUST CALL IT A SCREEN.
posted by maryr at 12:07 PM on September 26, 2017 [1 favorite]
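
The training procedure Gimzewski describes, running the task with different input encodings and keeping whichever one the fixed device answers best, closely resembles reservoir computing. A toy sketch of that selection loop, where the "device" is a made-up stand-in (nothing here is from the actual papers):

```python
import math
import random

random.seed(1)

def device(signal):
    """Stand-in for the nanowire mesh: a fixed, nonlinear, noisy map
    that we cannot program, only probe."""
    return [math.tanh(3 * s) + random.gauss(0, 0.05) for s in signal]

# Task: reproduce a +/-1 target pattern at the output.
target = [1.0 if i % 2 == 0 else -1.0 for i in range(20)]

# Candidate input encodings (hypothetical).  We don't touch the device;
# we "train" purely by selecting among encodings, as in the quote above.
encodings = {
    "raw":  lambda xs: xs,
    "weak": lambda xs: [0.2 * x for x in xs],
    "tiny": lambda xs: [0.05 * x for x in xs],
}

def fitness(name):
    """Negative squared error between device output and target."""
    out = device(encodings[name](target))
    return -sum((o - t) ** 2 for o, t in zip(out, target))

best = max(encodings, key=fitness)
print(best)  # "raw": tanh(3) is nearly saturated at +/-1, so the raw
             # encoding matches the target best despite the noise
```

The design point is that all the learning lives in the encoding/readout choice while the device itself stays fixed, which is why the procedure needs hundreds or thousands of trial runs rather than a single programming step.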




This thread has been archived and is closed to new comments