Of Artilects, Kolmogorov and the Hutter prize
December 17, 2007 10:34 AM   Subscribe

Three AI researchers:
Hugo de Garis: Home - Wiki
Jürgen Schmidhuber: Home - Wiki
Marcus Hutter: Home - Wiki
posted by sushiwiththejury (28 comments total) 11 users marked this as a favorite
 
Lol, that de Garis seems like a bit of a nut:
He has more recently been noted for his belief that a major war between the supporters and opponents of intelligent machines, resulting in billions of deaths, is almost inevitable before the end of the 21st century.[2]:234 This prediction has attracted debate and criticism from the AI research community, and some of its more notable members, such as Kevin Warwick, Bill Joy, Ken MacLeod, Ray Kurzweil, Hans Moravec, and Roger Penrose, have voiced their opinions on whether or not this future is likely.
But it is Wikipedia, so I can picture some academic drama resulting in a little creative vandalism.
posted by delmoi at 10:38 AM on December 17, 2007


They're rubbish unless they write terrible SF.
posted by Artw at 10:47 AM on December 17, 2007 [1 favorite]


Please help me understand this post, as I am interested in the underlying subject matter. Why these three researchers? Is there something connecting them other than their chosen field?
posted by Pastabagel at 10:50 AM on December 17, 2007


You're missing Minsky.

You can read his latest book, The Emotion Machine, in draft form: Chapters: 0 1 2 3 4 5 6 7 8 9
posted by blahblahblah at 10:57 AM on December 17, 2007


Three AI researchers, no context, one thread. Who will survive?
posted by xmutex at 10:58 AM on December 17, 2007


Please help me understand this post

I was staring at Alexander Ratushnyak's entry for the Hutter prize while Beefheart gargled 'Sure 'Nuff 'N Yes I Do' in the background.

Hutter's homepage loaded simultaneously with the Kapitan and for half an hour I tore my hair out googling 'WTF The Police and P Diddy covered the shit outta Beefheart???' and 'wow that beefheart, he sure overlaid that flute all fucked up' till I realized what magic the good Herrdoktor Hutter's page really is.

Oh, and they're unusual, strongly opinionated scientists with unique extrapolations of their work.

See also Kevin Warwick, who bills his lectures as 'I, Cyborg'.
posted by sushiwiththejury at 11:05 AM on December 17, 2007
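
[The connection sushiwiththejury is gesturing at: the Hutter Prize treats lossless compression of enwik8 (100 MB of English Wikipedia) as a practical proxy for Kolmogorov complexity, and hence for intelligence — a better model of the text yields fewer bits. A minimal sketch of that premise, using Python's zlib as a stand-in for a competition-grade compressor (the actual prize scores the total size of a self-extracting archive, not zlib output):]

```python
import random
import zlib

def compressed_bits_per_char(text: str) -> float:
    """Crude upper bound on the text's Kolmogorov complexity, in bits
    per character, using zlib as a stand-in compressor."""
    data = text.encode("utf-8")
    return 8 * len(zlib.compress(data, 9)) / len(data)

# Predictable text compresses far better than random-looking text --
# this is the sense in which a good compressor "understands" its input.
structured = "the quick brown fox jumps over the lazy dog " * 200
random.seed(0)
noisy = "".join(random.choice("abcdefghijklmnopqrstuvwxyz ")
                for _ in range(len(structured)))

print(f"structured: {compressed_bits_per_char(structured):.2f} bits/char")
print(f"noisy:      {compressed_bits_per_char(noisy):.2f} bits/char")
```

[Ratushnyak's winning entries used context-mixing models (PAQ derivatives) rather than a dictionary coder like zlib, but the scoring principle is the same: fewer bits means a better model of the text.]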


One of the most interesting/terrifying things about AI is not how smart computers are becoming but just how dumb most people are.

If the Turing test isn't beaten in my lifetime I'll be very disappointed.
posted by Skorgu at 11:06 AM on December 17, 2007 [2 favorites]


See also Kevin Warwick.

Or as The Register calls him "Captain Cyborg".

He's a bit too invested in publicity stunts for his own good, that one.
posted by Artw at 11:14 AM on December 17, 2007


Welcome to my Scientific Homepage!
posted by 2or3whiskeysodas at 11:15 AM on December 17, 2007


Hugo de Garis sounds like an honest-to-god mad scientist, and I love it.
posted by OverlappingElvis at 11:25 AM on December 17, 2007


This prediction has attracted debate and criticism from the AI research community, and some of its more notable members, such as Kevin Warwick, Bill Joy, Ken MacLeod, Ray Kurzweil, Hans Moravec, and Roger Penrose, have voiced their opinions on whether or not this future is likely.

Just to be clear, Ray Kurzweil's entire opinion on everything nowadays is "If it doesn't fit my happy hope for a perfect immortal future I don't like it." He used to be a visionary, but fear of impending mortality hit him HARD.
posted by shmegegge at 11:38 AM on December 17, 2007


Stanford insider Theodore Roszak does a good job of debunking AI optimism and optimists' motives in The Cult of Information.
posted by meehawl at 11:50 AM on December 17, 2007


Okay sushiwiththejury, can we have just a little hint as to why we're supposed to care about these guys in particular? Or what ties the three of them together, besides unconventional, somewhat open-ended AI projects?
posted by Alex404 at 12:08 PM on December 17, 2007


Please help me understand this post

I was staring at Alexander Ratushnyak's entry for the Hutter prize while Beefheart gargled 'Sure 'Nuff 'N Yes I Do' in the background.

Hutter's homepage loaded simultaneously with the Kapitan and for half an hour I tore my hair out googling 'WTF The Police and P Diddy covered the shit outta Beefheart???' and 'wow that beefheart, he sure overlaid that flute all fucked up' till I realized what magic the good Herrdoktor Hutter's page really is.


Please help me understand this reply to the request for help understanding this post.
posted by DU at 12:08 PM on December 17, 2007


This is some kind of Turing test, isn't it? One of us has failed.
posted by mumkin at 12:09 PM on December 17, 2007 [1 favorite]


Please help me understand this reply to the request for help understanding this post.

'twas a sign. in '94 geocities neon comic sans. with little dancing animated gifs.
it said: MAKE ME AN FPP NO I DO NOT LINKTRADE
posted by sushiwiththejury at 12:14 PM on December 17, 2007


If I were them, I'd be constantly looking over my shoulder for time-travellers from a future robot-war dystopia coming back to assassinate me.
posted by fearfulsymmetry at 12:17 PM on December 17, 2007 [2 favorites]


Seems Hutter would enjoy reading Conscious Entities and Schmidhuber would probably enjoy Machine Learning (Theory).

de Garis is a tad bonkers, but FPGAs are becoming cheaper every day, and the innovative things people are doing with them have continually impressed me.
posted by zap rowsdower at 12:25 PM on December 17, 2007


Are any of these folks actually attempting to build an AI? They seem rather formalist (if that's the right word) to me, going for theories of computation and how they might help an AI rather than attempting to work those theories into a system that might understand what we call the world. For the moment, and I am merely an interested layman, I'll put my bucks behind Ben Goertzel (not just because I FPPed him, and assuming I had any bucks), for the simple reason that he's actually working on it. He's probably aware of the theoretical implications of these researchers' work, but he has also been developing actual models based on current thought for some time and knows something of their implications with regard to implementation.

Given that the most recent cited work on Wikipedia is from 2002 - what have they done for us lately?

Though de Garis does posit an amusing tale of the near future, I'd trust him about as much as I'd trust Roszak to grasp actual science.
posted by Sparx at 12:30 PM on December 17, 2007


I was in the same subfield as Hugo de Garis back in the mid 1990s. I don't think it's fair to call him a fraud, since I think he believes in what he's doing. But he's the kind of mad scientist / engineer who doesn't ever actually create anything that works. Some of his ideas are provocative and interesting, to his credit.
posted by Nelson at 1:14 PM on December 17, 2007


The joke about AI is that as soon as somebody gets something to work, it's no longer considered Artificial Intelligence. Because if we built it and understand the mechanics of its workings, who can consider it to be intelligence? In fact, the problem of defining an AI is quite huge in itself.

The Turing test seems wholly inadequate: it also tests being able to learn and use a human grammar and language, which may be unrelated to or very different from intelligence. Not many of us define the ability to converse as intelligence. Certainly, nobody will value an intelligence unless it can communicate, learn, and share, but that seems somewhat orthogonal to the problem of intelligence. Many brilliant young scientists or engineers have trouble communicating their ideas, and only learn how to teach others once they have years of experience, if they ever learn.

And there's no reason to believe that human language would be universal to intelligent beings. There's plenty to argue about with Chomsky's ideas on language, but we must at least consider the possibility that human language is a peculiarity of our particular evolutionary path, and intelligent beings may not be able to speak our language. Suppose someone were to write a program that figures out a proof for P ?= NP. It may not be able to explain it in a human language, but even if it could, wouldn't we just think that it found the proof by "accident" and not intelligence?

For these reasons, I don't see why any new researchers would launch their research program under the banner of AI. There are so many new areas of research filled with low hanging fruit, why would a person pursue an older path where objective and subjective success are ill-defined and unlikely? Explore the vast new territories that are opening up in biology, or try to exploit the new computing cloud that Google/Amazon, etc are building. If we get to a functional AI, I think it will come from one of those areas, not someone who says that they are researching AI.
posted by Llama-Lime at 1:20 PM on December 17, 2007


Thought it was going to be about Amnesty International researchers. That would have been really interesting.
posted by runincircles at 1:57 PM on December 17, 2007


AI claims have been such a dead end for so long it's hard not to be dubious. There was a great article in Skeptic magazine a while ago about the horrible track record of AI researchers (http://www.skeptic.com/eskeptic/06-08-25.html#feature). It's been 'around the corner' for about 50 years now.
posted by lumpenprole at 2:04 PM on December 17, 2007


AI claims have been such a dead end for so long it's hard not to be dubious. -- lumpenprole

That's like saying that because some peak oil or global warming predictions haven't come true (oil's not $1,000 a barrel, the earth hasn't heated up 10°), oil production will never peak and carbon dioxide emissions won't heat up the world. Well, that's not a valid argument. Some predictions are off the mark, and others are not. Still more are way too conservative.
posted by delmoi at 3:39 PM on December 17, 2007


See, the thing is, whenever AI researchers are successful (like, say, building a program that plays chess really well, or trades stock options, or a chatbot that fools a lot of people), then people by and large declare that it's not AI really, just programming. People keep moving the bar on AI goals, and/or declaring that the only real goal of AI would be a fully conscious, smarter-than-your-average-guy-at-everything kind of dealio. I hardly think it's fair - not that I'm involved in AI in any way, just sayin' -
posted by newdaddy at 3:57 PM on December 17, 2007


I'd trust him about as much as I'd trust Roszak to grasp actual science.

I guess it's lucky, then, that AI isn't really Science™.
posted by meehawl at 6:18 PM on December 17, 2007


AI student here-
Please don't be under the assumption that the entire field of AI is made up of people like this. The vast majority of AI researchers are sober scientists and engineers who wouldn't be caught dead claiming to know exactly when and how we'll achieve human-level intelligence. Most research happens now in the more applied areas of AI (the workings of Google, for example), but even in the old-fashioned let's-build-a-brain flavor of AI it is possible to make incremental progress (at least within a given framework) without claiming that your particular pet algorithm has all the answers, and this is done relatively quietly (compared to these guys) all the time.

Recreating intelligence is a very, very hard problem, and it could be years, decades, or centuries before we know for sure whether a given approach to general AI will work, since we just don't have the data on what a half-completed intelligence looks like. But that doesn't mean that the problem is impossible, or that it can't be pursued scientifically. It just means that it is very hard to tell the correct ideas from the incorrect ones until they've matured enough, and that it's pretty likely most of them will be wrong.
posted by antispork at 6:55 PM on December 17, 2007


Please don't be under the assumption that the entire field of AI is made up of people like this.

I'm sure that's the case, but the 'It's coming! It's coming! The unstoppable robot army is coming! Buy my book!' types are much more fun.
posted by fearfulsymmetry at 3:03 AM on December 18, 2007




This thread has been archived and is closed to new comments