
Jeff Hawkins unleashes his brain: Numenta's new AI platform
April 4, 2007 1:35 PM   Subscribe

Jeff Hawkins, co-founder of Palm and Handspring, has started a new company, called Numenta, to test his controversial theory of intelligence. Whether you find his theory plausible or not, his book, "On Intelligence" is fascinating. Numenta is attempting to build A.I.s using Hawkins' theory as a backbone. They've developed a software engine and a Python-based API, which they've made public (as free downloads), so that hackers can start playing. They've also released manuals, a whitepaper (pdf) and videos [1] [2]. (At about 30:18 into the first video, Hawkins demonstrates, with screenshots, the first app which uses his system.)
posted by grumblebee (22 comments total) 20 users marked this as a favorite

 
I really enjoyed Hawkins' book, and I can't do his theory justice here. Briefly, he believes that each part of the neocortex (each neuron and each group of neurons) is "running" the same algorithm, and he has a rough idea what this algorithm is: it matches patterns and makes predictions. Making predictions is central to his definition of intelligence. He believes the brain is a prediction-making machine, he believes he knows the mechanism by which it makes these predictions -- and he believes he can model it via software.
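To make the "match patterns, then predict what comes next" idea concrete, here is a toy sketch in Python. This is my own illustration, not Numenta's engine or API; their software implements a far richer hierarchical, temporal model.

```python
from collections import defaultdict

class SequencePredictor:
    """Toy predictor: remember which symbol followed each symbol,
    then predict the most frequently observed successor."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, sequence):
        # Learn transition counts from an observed sequence.
        for current, nxt in zip(sequence, sequence[1:]):
            self.counts[current][nxt] += 1

    def predict(self, symbol):
        # Predict the successor seen most often after `symbol`,
        # or None if this symbol has never been observed.
        successors = self.counts.get(symbol)
        if not successors:
            return None
        return max(successors, key=successors.get)

p = SequencePredictor()
p.observe("the cat sat on the mat".split())
print(p.predict("sat"))  # prints "on"
```

Obviously a real cortex (or Numenta's model) does vastly more, but even this toy captures the flavor: intelligence as learned pattern memory used to anticipate input.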

I'm not sold on his theory, but I'm not completely skeptical about it, either. It's wise to be at least somewhat skeptical of any AI claims. But Hawkins' theory is clever, it has some explanatory power, and it's testable. If, based on my crappy description, you think it's hogwash, I urge you to read his book and/or poke around on the Numenta site. Don't base your opinion on what I've written.
posted by grumblebee at 1:35 PM on April 4, 2007


Looks like neat stuff. With my one AI class, I'm not really qualified to judge if he's on to something or not but it sounds cool as hell.
posted by octothorpe at 1:54 PM on April 4, 2007


I've not read the book yet, but it seems like a less sophisticated version of Bayesian ideas of perception that are quite the rage right now.
posted by christopher.taylor at 1:56 PM on April 4, 2007


Hawkins may well be "onto something" in terms of devising useful procedures for sophisticated pattern recognition, although there is reason for doubt (I believe that when shown a blank field, the Numenta system responded, essentially, "Is it a dog? Is it a house?").

Whether or not there's utility in what he's devised here, though, there is nothing in it that convinces me in any way that he's successfully modeled the process of human cognition. This despite the claim he made at ETech last week, and no doubt elsewhere, that "I and several other people now understand the way the human mind works." I don't know about you, but when I hear the words "trust me" six or seven times in a forty-minute talk, I tend to do anything but.

Finally, although it says nothing about the merits of Numenta, Hawkins in person I found to be abrasive, patronizing and serenely unaware of the arrogance of his claims. Based on that alone, I would just as soon he get an egoectomy right quick.
posted by adamgreenfield at 2:00 PM on April 4, 2007


Like I said, I'm agnostic about the theory, but I like "the arrogance of his claims." (Whether I'd like him as a friend is another matter.) I think his zealotry will be useful to the rest of us. He has the ambition and money to take his theory and run with it. I believe he will run with it as far as it will take him, and he'll be proved right, partially right, or wrong. And we'll benefit from this extra knowledge.

As for whether or not his model is accurate, it's a fascinating issue. And to me, what would be really interesting is if he wound up creating a sophisticated A.I. without knowing whether or not it used a method similar to the biological one. It would open up a whole other host of questions.
posted by grumblebee at 2:10 PM on April 4, 2007


I love his "grand unified theory". It seems like all these specialized attempts we have so far are garden paths and can only go so far. It seems like just the sort of thing that might revolutionize the field.

But I just read pop-sci neuropsychology.
posted by Brainy at 2:14 PM on April 4, 2007


If he's right, shouldn't we all be able to predict how this is going to turn out?
posted by davejay at 2:31 PM on April 4, 2007


The part about trying to build a platform and community is really interesting and exciting. I'm a very low-level coder, but if I'm gonna try and do amazing things with simple algorithms, this seems like the sort of place to do it.
posted by Brainy at 2:37 PM on April 4, 2007


Personally, I wish to see Dr. Hugo de Garis' Robokoneko... Did anything ever come of it? The idea was interesting. But I've not heard anything in ages.

In fact, it looks like his company (genobyte) is belly up?
posted by symbioid at 2:37 PM on April 4, 2007


He may well have some good ideas for doing pattern recognition better, although I'm not sure what's so radically new about this idea. Any attempt to replicate actual human intelligence will need to be well grounded by a thorough understanding of evolution, development, and network dynamics, and above all look at information processing as something that serves behavior and not the other way round.
posted by teleskiving at 3:09 PM on April 4, 2007


Damnit, I read a great piece just the other day which listed all the ways in which brains are not like computers. E.g., brains are analogue. Brains do not separate storage and execution, so "running a program" changes the data (think about how repetition and practise work). Brains do not have discrete specialized components (or rather, to the extent that they do, like the hippocampus and memory, those parts also have other, unrelated functions). I can't find the link! But anyway, it made me very skeptical of attempts to model brains with digital computers.

If they are simulating synapses with chemical gradients and infinite gradations of electrical potential that would be interesting, however...
posted by i_am_joe's_spleen at 3:35 PM on April 4, 2007


joe's spleen, that's pretty much my take. I used to be a big extropian/transhumanist (albeit, a Twinkie eating one).

Over the years I've come to be pretty damn skeptical of technology in general, and things like AI in particular. Physicist Roger Penrose is one ardent skeptic of AI. I haven't personally looked at his theories or the rebuttals against them, but he has staked out one line of attack against AI.

John Searle is another critic of "Strong AI". I'm more apt to agree with him. He's the guy who came up with the Chinese Room thought experiment.

It's not that I think consciousness/Strong AI is impossible. I'm pretty much a reductionist, so I think that, in theory, we should be able to create some form of advanced AI. The thing is, I don't think we're even close. And I think there's a lot of mushy utopian thinking in certain circles. At least it makes for great Sci-Fi.

The type of computer you are referring to, I believe, is the von Neumann Architecture. Interestingly, this is the topic that recently deceased John Backus argued is holding back progress. We have come to rely on this particular architecture and it limits how we approach programming languages. I'd argue, as you seem to be implying, that the structure of modern computers is a strong limit on AI possibilities.

Just because, by the year 2036 (or thereabouts) we'd theoretically have the full capacity of the human brain in processing power, doesn't mean we'll have the capability to actually do any real AI with that power. We'd in effect have a simulation, which requires a much more powerful system than the system being simulated. This is assuming we continue on our current processing development.

That said, I'll have to take a look at what this Hawkins guy is proposing. I'm sure it's interesting.

P.S. Sorry for all the wikipedia links :(
posted by symbioid at 6:54 PM on April 4, 2007


Most people in this thread will find "On Intelligence" interesting.

Hawkins believes brains and computers are fundamentally different, and he feels that this is why conventional AI approaches have failed.

He is also a big fan of John Searle and spends many pages talking about Searle's "Chinese Room" problem. He agrees that it's a problem, and feels that not grappling with it is another reason why so many people have failed to create real A.I.s.

He hates Expert Systems, Turing Tests and classic neural nets. He presents a compelling argument that his work is going in a different direction.
posted by grumblebee at 8:09 PM on April 4, 2007


Yes, after I read more about the book and his thoughts, I do think I'd find it quite an interesting read. I have to admit I jumped the gun. Thanks for pointing that info out, though, grumblebee...
posted by symbioid at 10:28 PM on April 4, 2007


This nym hereby goes on record that this shit is going to break shit wide open.

The sudden anger of the old dude in the Q & A is almost enough by itself to convince me that Hawkins is on to some deep shit.

Shit.
posted by Moistener at 10:50 PM on April 4, 2007


Forgot: Bless you, grumblebee. Thank you for posting this.
posted by Moistener at 10:52 PM on April 4, 2007


I totally understand your initial reaction, symbioid. When I first read about this, I thought, "Yet another guy who thinks he's found The Secret. Ho Hum..." Then I read his book. If he's wrong, I'm not knowledgeable enough to figure out how. All I can say is that it seems plausible.

One thing I love about his book is that, in the back, he proposes all kinds of experiments that could (and should) be done to test the validity of the theory. So he's committed to falsifiability. He may be wrong, but he's convinced me that he's a real scientist bent on figuring out the truth.

(In other news, Douglas Hofstadter has a new book out.)
posted by grumblebee at 5:43 AM on April 5, 2007


This is the link on why brains are not computers.
posted by kensanway at 7:12 AM on April 5, 2007


I do not believe that there is something special about intelligence (for which we don't even have a working definition) such that it can only run on a hardware platform of fatty grey meat, powered by water, glucose, oxygen, and some electrolytes. Yeah, Penrose has his microtubules / quantum physics connection, but that doesn't mean we couldn't pull off something similar with machinery that isn't exactly fatty grey meat.

I do believe that radical new approaches, like this, are needed if any progress could be made.

My doubt comes in when I ask if humans are smart enough to replicate even chimp-level intelligence. Do we have a guarantee that it is possible? Golden retrievers can't cobble together Babbage engines (out of fur and old chewbones) that are smart enough to talk to said golden retriever. And then there's the question, why do we want to replicate human intelligence, anyway? We've got seven billion of them mucking up the landscape, they're cheap.

Maybe we should look into finding the kind of intelligence we want, but lack as a species, and capitalize on that. Something that would complement humanity, rather than supplant it in an effort to, say, fire more workers and make our goods at Walmart even cheaper.
posted by adipocere at 7:44 AM on April 5, 2007


Maybe we should look into finding the kind of intelligence we want, but lack as a species, and capitalize on that.

I know I sound like an apologist for Hawkins. Really, I'm not. But he's anticipated almost everyone in this thread. If you listen to his talk (video [1]) or read his book, you'll see that this is one of his main points. He's MOST interested in complementing human intelligence. He talks about all sorts of things (he hopes) his AIs will be able to do that humans can't do.

He's fundamentally different from someone like Hofstadter, who claimed, in a recent "NY Times Magazine" interview, that he was totally uninterested in computers. Hofstadter is interested in human creativity, and to him, computers are tools for studying it.

On the other hand, Hawkins is an engineer. (His first job was creating chip designs for Intel, and this is what made him skeptical about classic A.I. approaches: he could see that CPUs and brains were totally different creatures.) His goal is to build intelligent machines. He happens to believe that the easiest way to do this will be to follow biological models. For him, biology is a means to an end, and that end is mechanical. For Hofstadter, machines are a means to an end, and that end is biological.

===

To me, there's one claim Hawkins is making (which he credits to Vernon Mountcastle) that's more important than anything else he's saying. It's important whether specific details of Hawkins' theory are right or wrong:

He's claiming that the whole neocortex is running ONE simple algorithm. Or rather, that each part of it is running this algorithm: each cell is running it; cell groups are running it; groups of groups are running it, etc.

He claims he knows what this algorithm is, but to me that's less important than whether or not it exists. If it exists, then many researchers are barking up the wrong tree by over-complicating things.

As usual, I'm not doing justice to his theory, but Hawkins claims that the neocortex is too new (in Darwinian terms) to have evolved thousands of different algorithms. He also notes that the brain is incredibly plastic. Its ability to learn new things and remake itself in all sorts of ways (e.g. blind people using the sight areas for hearing) suggests a simple, flexible program rather than a bunch of highly complex, specific-function programs.
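For what it's worth, the "same routine at every level" idea can be sketched in a few lines of Python. This is my own schematic, not Hawkins' actual algorithm (his model involves temporal sequences, prediction, and feedback, all of which I'm ignoring here); the point is only that one recognition routine, composed with itself, can climb from low-level patterns to higher-level ones.

```python
class Node:
    """Every node, at every level of the hierarchy, runs the same
    routine: map an input pattern to a learned label and pass it up."""

    def __init__(self, children=None):
        self.children = children or []
        self.labels = {}  # pattern -> label

    def learn(self, pattern, label):
        self.labels[pattern] = label

    def recognize(self, pattern):
        # Interior nodes first ask their children to name their own
        # pieces of the input; the routine itself never changes.
        if self.children:
            pattern = tuple(child.recognize(piece)
                            for child, piece in zip(self.children, pattern))
        return self.labels.get(pattern, "unknown")

# Two leaf nodes recognize low-level patterns; a parent node
# recognizes the combination of its children's outputs.
left, right = Node(), Node()
left.learn((0, 1), "edge")
right.learn((1, 0), "edge")
top = Node([left, right])
top.learn(("edge", "edge"), "line")
print(top.recognize([(0, 1), (1, 0)]))  # prints "line"
```

Nothing about the `Node` code cares whether it sits at the bottom of the hierarchy or the top, which is exactly the structural claim being made about cortical columns.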

THIS, to me, right or wrong, is what needs to be studied and tested. Hawkins may be right about his details too, but I can totally imagine some lecturer in 2060 saying, "Even though he was wrong in almost every detail about how the mind works, we must give Jeff Hawkins credit for promoting the single-algorithm theory. This led to the real breakthrough by Dr...."

Of course, if Hawkins is wrong on this point, he's useless. (Except in helping us prune off a bad branch of science.) What would it mean for him to be wrong? For some, it would mean that he's misunderstood biology. For others, it would mean that no intelligent system -- even a non-biological one -- can be engineered using a single algorithm.
posted by grumblebee at 8:16 AM on April 5, 2007 [2 favorites]


Of course, if Hawkins is wrong on this point, he's useless.

Useless to whom? Professors? Paper writers?

The Human Stumble of Progress is far more forgiving than that and much more like your quote from 2060. Humans have a history of building amazingly useful things based on models and theories that later prove completely wrong.

Progress doesn't require inventors to "correctly" perceive/describe the universe they build stuff in. His cortical loop theories are just a powerful design resource -- and irrelevant once the computerland code starts running.

Utility vs. correctness. Humans choose the first. Science journals choose the second.

Yours Truly,
The Chaotician from Jurassic Park
posted by Moistener at 10:14 AM on April 5, 2007


He's claiming that the whole neocortex is running ONE simple algorithm. Or rather than each part of it is running this algorithm: each cell is running it; cell groups are running it; groups of groups are running it, etc.

Yeah, that's the part that I find a bit suspect. The brain obviously is a prediction-making machine -- this is one of the things human brains clearly do. A claim that this is actually most of what the brain does (and that other stuff arises from it) is definitely bigger and maybe grandiose. It smacks a bit of the uncommon corner solution, but who knows? It seems like an interesting model to examine, at any rate.

Physicist Roger Penrose is one ardent skeptic of AI. I haven't personally looked at his theories or the rebuttal against, but he has taken one approach against AI.

I think his position isn't so much that AI is impossible as that conscious intelligence doesn't arise from algorithmic computation as we currently understand it. IIRC, this flies against some positions in strong AI, but really doesn't pick a fight with much of the rest of the field and leaves a huge amount of room for the obviously broad utility of algorithmic computation.

Yeah, Penrose has his microtubules / quantum physics connection, but that doesn't mean we couldn't pull off something similar with machinery that isn't exactly fatty grey meat.

My reading of Penrose's stuff suggests he would agree with that position.
posted by weston at 11:19 AM on April 5, 2007




This thread has been archived and is closed to new comments


