Moving AI out of its infancy
December 24, 2005 6:23 AM

Moving AI out of its infancy: Changing our Preconceptions [.pdf]. Accelerating Change 2005: The Prospects for AI - A Panel Discussion [.mp3]. Tetris AI. A few fun AI links to get the brain working [via]
posted by MetaMonkey (29 comments total)
 
I just spent 20 minutes reading things on colinfahey.com. This guy has some sort of serious neuroses.
posted by WetherMan at 8:46 AM on December 24, 2005


AI is a pipe dream.
posted by quonsar at 8:48 AM on December 24, 2005


I'd smoke it.
posted by Tikirific at 9:04 AM on December 24, 2005


very nice. thank you for the Panel Discussion link.
posted by nickerbocker at 9:09 AM on December 24, 2005


AI is a pipe dream.
quonsar

Why do you say that?
posted by Sangermaine at 10:15 AM on December 24, 2005


i just like saying "AI is a pipe dream".
posted by quonsar at 10:22 AM on December 24, 2005


It is fun to say.
posted by maxsparber at 11:27 AM on December 24, 2005


Sangermaine: Why do you say that?
You can pretty much write off anything quonsar says as parenthetical. One does not question the quonsar and expect an actual answer.
posted by lodurr at 12:27 PM on December 24, 2005


I have to laugh whenever I hear people say things like "AI is a pipe dream." If it's really just being said in an agent provocateur mode, then it's more or less irrelevant, of course, but plenty of people do say it with perfectly straight faces. And they seldom mean it in any really nuanced sense.

Because, of course, "AI" in the sense in which people mostly used the term circa 1990 or so is a complete and total pipe dream. It was never going to work, because it was based on utterly and obviously mistaken premises and astonishingly unrealistic goals. Back then, grad students in the field sincerely thought that by thinking through the linguistic actions that people engaged in, they could capture the essence of what it meant to be human. Not even the Behaviorists were that deluded (and I'd argue they were considerably less so.)

At the same time, a lot of people were laying the groundwork for what AI has become: a much more practical discipline, focused not on recreating humans out of non-human materials, but on making something new that does intelligent things. Which things may not seem intelligent to humans, because we have a very human-centric and linguistically focused concept of what constitutes "intelligence."

Back then, everyone was focused on thinking up clever paper titles; now, they focus much more on actually accomplishing things.
posted by lodurr at 12:44 PM on December 24, 2005


i just like saying "AI is a pipe dream".

Looks like an actual answer to me. :)

The first article says something a little different, but not too different. AI developed purely from symbolic logic is a pipe dream. AI researchers tend to stylize and overformalize. "There’s one class of machine that we know for sure can solve all these problems of perception and complex action: the mammalian brain. But we simply don’t know its fundamental operating principles (although I’m quite certain it has some). We have no periodic table of neural function to help us see the underlying logic." And then he suggests some operating principles.

I enjoyed reading it.
posted by weston at 12:52 PM on December 24, 2005


No, AI is a hookah dream.
posted by muppetboy at 1:01 PM on December 24, 2005


This is not an AI.
posted by muppetboy at 1:05 PM on December 24, 2005


From the first link:
So, is there a coordinate frame in which a banana, for the sake of argument, looks exactly the same shape, regardless of its position, orientation, and scale? Yes. It’s a rather abstract frame, but try this: Imagine the image of a banana falling on your retina. Now mentally project yourself until you are inside the banana, looking outward. What you have just performed is a conversion from eye-centered coordinates into banana-centered coordinates. From inside, the banana remains exactly the same shape, regardless of its location and orientation with respect to your original viewpoint. So if brains can perform on-the-fly transforms from egocentric to object-centered coordinate space, they have the means to develop visual invariance.

Exactly how this might happen is an unsolved problem, but it’s something I’ve been working on with a modicum of success—enough to suggest that it’s a meaningful idea. Significantly, if a general mechanism can be found, it’ll bring the two key visual data streams (the “where” pathway of the parietal lobes, and the “what” pathway of the temporal lobes) into a common level of explanation.
Very interesting, but I don't get it... I don't even understand how he can say that an object has a fixed appearance in object centred coordinates. You still have to choose an origin, and an orientation...
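
Here's roughly what I take the math to be, anyway (a toy Python sketch of my own, not from the article; note that you still have to know the object's pose R, t before you can do any of this, which is exactly the unsolved part he mentions):

import numpy as np

# Eye-centered -> object-centered ("banana-centered") coordinates, assuming
# the object's pose in the eye frame (rotation R, translation t) is known.
def to_object_frame(points_eye, R, t):
    # inverse rigid transform: R^T (p - t), written for row-vector points
    return (points_eye - t) @ R

# A toy "banana": three points along its axis, in its own local frame.
banana_local = np.array([[0.0, 0.0, 0.0], [0.1, 0.02, 0.0], [0.2, 0.08, 0.0]])

def pose(theta, t):
    # rotation about z by theta, plus a translation
    R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0, 0.0, 1.0]])
    return R, np.asarray(t, dtype=float)

# Two very different viewpoints: what lands on the "retina" differs each time,
# but the object-centered description comes out identical.
for theta, t in [(0.3, [1.0, 2.0, 0.5]), (1.2, [-0.4, 0.9, 2.0])]:
    R, t = pose(theta, t)
    seen_from_eye = banana_local @ R.T + t
    print(np.allclose(to_object_frame(seen_from_eye, R, t), banana_local))  # True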

I'm pretty sure that what drives our ability to recognize objects is familiarity and likelihood... I am quite sure that when I see an object that I have trouble identifying I start thinking about what it could be, based first on 'key properties' (colour, a particular juxtaposition of lines, who knows what) and then testing those properties against a list of items that I expect might be in that location. It is less clear what the thinking process is for objects that are immediately identified...
The more complex the robot, the easier it is to make progress
A similar argument about scale applies to robots and to the segregation of disciplines in AI. Toy environments are often far too stylized and reduced to capture the essential features of a problem, and with robots, that’s especially true. How intelligent would you have become if you’d been born equipped with only two wheels and a handful of bump sensors?
...
How can we learn to see depth unless we have the ability to reach out and touch things to confirm how far away they are? How can we learn to reach out and touch things unless we can see their depth? Learning is a process of integration, correlation, and confirmation between all the senses and motor systems at once, so we need to study them together.
Brilliant! More is different.

So who is Steve Grand?
I’ve no doubt Edward de Bono would agree that most breakthroughs arise when people doggedly plow their own furrow, unknowingly attempting things that wiser people “know” to be impossible. Ignorance can therefore be a huge asset, and as an unfunded amateur, I have no obligation to stick to the rules and etiquette of professional science, so the less I know about other people’s ideas, the better. In fact, if I were you, I wouldn’t be reading this article at all—it would only color my thoughts and reduce the potential for novelty. But perhaps the final paragraph was
I'm not sure the system of professional science has as little value as all that... Regardless, I think independent approaches have great value. It is incredibly heartening to see his independent approach embraced by 'the system'.
posted by Chuckles at 1:13 PM on December 24, 2005


Heavier-than-air flying machines are a pipe dream.
posted by spazzm at 1:23 PM on December 24, 2005


Yeah, pure logic AI designed to "solve" a human is pretty much dying (although my alma mater is still hanging on to the ideas). The two branches I see having a lot of success in the near future are:

1) Using a wide variety of techniques, including the old logic-based approaches, to make things that are intelligent but not human. Things like neural networks and Bayes systems aren't great at emulating humans, but they ARE good at making good decisions (see the toy sketch at the end of this comment).

2) Using techniques like cognitive modelling to create systems that are good approximations of the behavior and some internal structures of human cognition. This helps us understand the parts of human cognition that we don't really think of as "intelligent". Perceptual modeling is one field that is advancing rapidly here.

I think after a few decades of solid research in both, the field will be in a much different place. It may never converge into one field after that; they really are pretty separate areas.
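
For point 1, here's the flavour of what I mean by a Bayes system making a decision. This is a contrived little toy with made-up numbers, not any real system -- it just combines evidence and picks the class with the higher posterior:

import numpy as np

# P(feature present | class), for features ("free", "winner", "meeting")
p_feature_given_class = {
    "spam":     np.array([0.8, 0.6, 0.1]),
    "not_spam": np.array([0.1, 0.05, 0.7]),
}
prior = {"spam": 0.3, "not_spam": 0.7}

def decide(features):
    # features is a 0/1 vector; score = prior * product of per-feature likelihoods
    scores = {}
    for c, p in p_feature_given_class.items():
        likelihood = np.prod(np.where(features == 1, p, 1.0 - p))
        scores[c] = prior[c] * likelihood
    return max(scores, key=scores.get), scores

print(decide(np.array([1, 1, 0])))  # -> ('spam', {...})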
posted by JZig at 1:45 PM on December 24, 2005


lodurr: safe to assume you weren't a grad student circa 1990, then? (Which is a snarky way of saying you're beating on a straw man, IMO).
posted by Leon at 2:17 PM on December 24, 2005


I think the most promising direction was articulated by Bruno Olshausen in the second link:
"If we are to make progress in building truly intelligent systems, Olshausen says we need to turn our efforts toward understanding how intelligence arises within the brain. Neuroscience has produced vast amounts of data about the structure and function of neurons but what is missing is a theoretical framework for linking these details to intelligence. Theoretical neuroscience seeks to bridge this gap by constructing mathematical and computational models of the underlying neurobiological mechanisms involved in perception, cognition, learning, and motor function."

Cognitive neuroscience can be used to reverse-engineer the brain, and people have already started producing models based on a theorized "cortical algorithm" inferred by looking at the ~6-layer neural structure within minicolumns in the cortex.

It may take a while, but given that there is already an example in nature that we can study, we should get there eventually if we can maintain scientific progress over the coming decades.
posted by Meridian at 2:50 PM on December 24, 2005


I always thought quonsar was an AI.

colin fahey is fucking hardcore.
posted by delmoi at 3:47 PM on December 24, 2005


What you have just performed is a conversion from eye-centered coordinates into banana-centered coordinates.

be the banana.
posted by quonsar at 8:19 PM on December 24, 2005


In the words of the Bobs "Ba-na-na-na-na-na-na-na OH! Ba-NA-na!"
posted by namespan at 8:35 PM on December 24, 2005


yeah but let's talk about something important, like what the heck happens in the last few frames of that tetris video?? the game seems to jump a bunch of moves ahead, first on the "AI" screen, then on the game screen. i call shenanigans
posted by rswst8 at 10:09 PM on December 24, 2005


be the banana.

Because there is no spoon.
posted by spazzm at 2:28 AM on December 25, 2005


Leon: lodurr: safe to assume you weren't a grad student circa 1990, then? (Which is a snarky way of saying you're beating on a straw man, IMO).
You are correct, I was not. However, I was studying AI in 1988 as an undergrad, and reading a lot of papers in AI and philosophy of mind. I am not basing my opinion on a superficial familiarity with the popular work. I'm basing it on the journal articles I read and on conversations with profs.

That said, if you were a grad student at the time and you weren't under the false impression that you could pass the Turing Test and be done with the problem, then that's great. And if my slice of understanding was non-representative, then I apologize. However, I still think it can be documented that there was a lot of funding going to people who were assuming that linguistic behavior was essentially the same thing as intelligence.
posted by lodurr at 8:27 AM on December 25, 2005


lodurr: Merry Christmas! I'm a lot less snarky now the Christmas pressure's let up... Here's how I read your original post: "AI was this single monolithic discipline circa 1990, and every researcher was chasing formal logic down a blind alley bar a few visionaries".

I would argue that all kinds of pragmatic techniques were well established by 1990... off the top of my head, neural networks (for whatever they're worth), ALife, Genetic Programming, and a host of probabilistic techniques (e.g. hidden Markov models in speech recognition) were all part of the AI mainstream by 1990.

But I'm not trying to win an argument here... my recollection could easily be off by a few years. After all, we both agree that AI as a discipline seems a lot more pragmatic today.
posted by Leon at 10:34 AM on December 25, 2005


Leon: All I really meant is that, at that time, it seemed to me that that's where the karma was being invested. There was corporate and defense money in the more practical, lower-level stuff, but it was regarded as a lower discipline as well. I had a prof at the time who would confess in private that he was very interested in whoever had the money. He was a sharp guy; I'm not impugning his intellect, just saying that he knew where the checks came from, and he was willing to tailor his efforts a little to make sure that they kept coming.

I always thought there was a degree of intellectual sloppiness in a lot of AI at that time, in the sense that it did a lot of handwaving on the part between "collect underpants" and "Profit!"
posted by lodurr at 5:41 PM on December 25, 2005


Earlier this year - Is computer artificial intelligence a dead field?

My two cents: any approach that completely ignores evolution and development is going to be a waste of time. I don't think this has really sunk in to the neuroscience or AI communities yet, although there are a few people who get it. Steve Grand is one of them and if you enjoyed the linked article you'll get a kick out of his book, Growing Up With Lucy. Disclaimer: he is a friend.
posted by teleskiving at 11:16 AM on December 26, 2005


Indoor plumbing is a pipe dream.
posted by brundlefly at 12:28 PM on December 26, 2005


Some very interesting developments in AI are reported in the Wired article on the most recent DARPA Grand Challenge, via Slashdot:
The CMU team also used Pomerleau's approach. They drove their Humvees through as many different types of desert terrain as they could find in an attempt to teach the vehicles how to handle varied environments. Both SUVs boasted seven Intel M processors and 40 Gbytes of flash memory - enough to store a world road atlas. CMU had a budget of $3 million. Given enough time, manpower, and access to the course, the CMU team could prepare their vehicles for any environment and drive safely through it.

It didn't cut it. Despite that 28-day, 2,000-mile sojourn in the desert, CMU's premapping operation overlapped with only 2 percent of the actual race course. The vehicles had to rely on their desert training sessions. But even those didn't fully deliver. A robot might, for example, learn what a tumbleweed looks like at 10 am, but with the movement of the sun and changing shadows, it might mistake that same tumbleweed for a boulder later in the day.

Thrun faced these same problems. Small bumps would rattle the Touareg's sensors, causing the onboard computer to swerve away from an imagined boulder. It couldn't distinguish between sensor error, new terrain, its own shadow, and the actual state of the road. The robot just wasn't smart enough.

And then, as Thrun sat on the side of that rutted dirt road, an idea came to him. Maybe the problem was a lot simpler than everyone had been making it out to be. To date, cars had not critically assessed the data their sensors gathered. Researchers had instead devoted themselves to improving the quality of that data, either by stabilizing cameras, lasers, and radar with gyroscopes or by improving the software that interpreted the sensor data. Thrun realized that if cars were going to get smarter, they needed to appreciate how incomplete and ambiguous perception can be. They needed the algorithmic equivalent of self-awareness.

Together with Montemerlo, his lead programmer, Thrun set about recoding Stanley's brain. They asked the computer to assess each pixel of data generated by the sensors and then assign it an accuracy value based on how a human drove the car through the desert. Rather than logging the identifying characteristics of the terrain, the computer was told to observe how its interpretation of the road either conformed to or varied from the way a human drove. The robot began to discard information it had previously accepted - it realized, for instance, that the bouncing of its sensors was just turbulence and did not indicate the sudden appearance of a boulder. It started to ignore shadows and accelerated along roads it had once perceived as being crisscrossed with ditches. Stanley began to drive like a human.

Thrun decided to take the car's newfound understanding of the world a step further. Stanley was equipped with two main types of sensors: laser range finders and video cameras. The lasers were good at sensing ground within 30 meters of the car, but beyond that the data quality deteriorated. The video camera was good at looking farther away but was less accurate in the foreground. Maybe, Thrun thought, the laser's findings could inform how the computer interpreted the faraway video. If the laser identified drivable road, it could ask the video to search for similar patterns ahead. In other words, the computer could teach itself.

It worked. Stanley's vision extended far down the road now, allowing it to steer confidently at speeds of up to 45 miles per hour on dirt roads in the desert. And because of its ability to question its own data, the accuracy of Stanley's perception improved by four orders of magnitude. Before the recoding, Stanley incorrectly identified objects 12 percent of the time. After the recoding, the error rate dropped to 1 in 50,000.
A sort of Kalman filter, but for decision-based or discrete-event systems.
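
Very roughly, I imagine the "laser teaches the camera" step working something like this -- a single-Gaussian colour model fit to laser-confirmed road, then a Mahalanobis-distance score for the far pixels. Pure guesswork on my part about the spirit of it, not Stanley's actual code, and all the names are invented:

import numpy as np

# Fit a colour model to near-field pixels the laser already labelled as
# drivable, then score far-field camera pixels by how well they match it.
def fit_drivable_model(near_pixels_rgb):
    # near_pixels_rgb: Nx3 array of RGB values from laser-confirmed road
    mean = near_pixels_rgb.mean(axis=0)
    cov = np.cov(near_pixels_rgb, rowvar=False) + 1e-6 * np.eye(3)
    return mean, np.linalg.inv(cov)

def drivable_score(far_pixels_rgb, mean, cov_inv):
    # squared Mahalanobis distance: small = looks like the road we just drove on
    d = far_pixels_rgb - mean
    return np.einsum('ij,jk,ik->i', d, cov_inv, d)

# Toy data: near-field road is brownish-grey; far field mixes road and shadow.
rng = np.random.default_rng(0)
near = rng.normal([120, 110, 100], 5, size=(500, 3))
far = np.vstack([rng.normal([122, 112, 101], 5, size=(5, 3)),   # more road
                 rng.normal([40, 90, 160], 5, size=(5, 3))])    # shadow / sky
mean, cov_inv = fit_drivable_model(near)
print(drivable_score(far, mean, cov_inv).round(1))  # low scores for road-like pixels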
posted by Chuckles at 12:33 AM on December 29, 2005


From Probabilistic Algorithms in Robotics - Sebastian Thrun:
Traditionally, the two most frequently cited limitations of probabilistic algorithms are computational inefficiency, and a need to approximate. Certainly, there is some truth to these criticisms. Probabilistic algorithms are inherently less efficient than non-probabilistic ones, due to the fact that they consider entire probability densities. However, this carries the benefit of increased robustness. The need to approximate arises from the fact that most robot worlds are continuous. Computing exact posterior distributions is typically infeasible, since distributions over the continuum possess infinitely many dimensions. Sometimes, one is fortunate in that the uncertainty can be approximated tightly with a compact parametric model (e.g., discrete distributions or Gaussians); in other cases, such approximations are too crude and more complex representations must be employed.
This could be a key way that the brain processes certain types of information much more efficiently than Turing machines. If signals in the brain already propagate as probability distributions, then all the extra processing described above comes for free - similar to how lenses create Fourier transforms at the focal plane "for free".
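
To make "considering entire probability densities" concrete, here's about the simplest possible version -- a discrete Bayes filter over ten positions. Toy numbers of my own, nothing from the paper:

import numpy as np

# Carry a whole belief distribution instead of a single best guess.
belief = np.full(10, 0.1)                    # uniform prior over 10 cells
motion_noise = np.array([0.1, 0.8, 0.1])     # undershoot / move 1 / overshoot

def predict(belief):
    # convolve the belief with the motion model (robot tries to move +1 cell)
    new = np.zeros_like(belief)
    for i, p in enumerate(belief):
        for k, q in enumerate(motion_noise):
            new[(i + k) % len(belief)] += p * q
    return new

def update(belief, likelihood):
    # Bayes rule: posterior proportional to likelihood * prior, then normalize
    post = belief * likelihood
    return post / post.sum()

likelihood = np.array([.05, .05, .2, .4, .2, .05, .01, .01, .01, .02])  # "near cell 3"
belief = update(predict(belief), likelihood)
print(belief.round(3))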
posted by Chuckles at 1:02 AM on December 29, 2005

