ChatNannies fools New Scientist and others
March 23, 2004 3:40 PM

ChatNannies: the first real AI bot, or a hoax? Cameron Marlow's private interview with the ChatNannie bot (which was recently reported by New Scientist) reveals that it has perhaps one of the most sophisticated artificial intelligence engines ever devised. So far neither New Scientist nor any of the other media outlets that posted the story have shown the slightest hint of skepticism about its authenticity. Andy Baio's additional research digs up some information that further places ChatNannie's existence in doubt, but how does one conclusively prove that Jim Wightman's bot isn't for real? If it is for real, the New Scientist probably has a much bigger story to report--AI has been solved!
posted by mockerybird (12 comments total)
Well, on reading that transcript, I got the feeling that all the participants were bots. Or inarticulate idiots. It's so hard to tell on the internet.

[Jimw] LOL sorry about that man
[Jimw] its still really raw code


posted by chrid at 3:49 PM on March 23, 2004

this is obviously fake to anyone who understands the difficulties of NLP, in particular the difficulties of imbuing a bot with "common sense". either the bot in the "private interview" is phony, or this guy has single-handedly leapt 50 to 100 years ahead of current technology.
posted by jcruelty at 4:13 PM on March 23, 2004

One conclusively proves whether or not it's real by running it in a controlled environment. Extremely easy to do. Just run the software on a computer without any I/O but the keyboard/screen and chat with it there.

Oh, and it's not real. Sorry if you got your hopes up. Sure, technically it could be possible, but all evidence (and there's plenty of it) indicates that it's just a really weak hoax.
posted by whatnotever at 4:15 PM on March 23, 2004

For those interested in the real state-of-the-art chatterbots, the Loebner Prize is awarded annually to the most human-seeming bot. The page for the latest contest is here. Download the transcripts to see what they're like. Here's a snippet from the winner, Jabberwock:

JUDGE: Whatever. You know one of the judges is an expert on lying?
JABBERWOCK: Yes I know one of the judges is an expert on lying
JUDGE: DO you know which one it is?
JABBERWOCK: No. Tell me more
JUDGE: She's blonde.
JABBERWOCK: It is sometimes hard to predict what females will do
JUDGE: No more so than us lot
JABBERWOCK: Huh? You want more?
JUDGE: Are you threatening me?
JABBERWOCK: Was it a slip of the pen? I don't know if I am or not
JUDGE: I wasn't threatening you. Calm down
JABBERWOCK: I still don't understand
posted by whatnotever at 4:29 PM on March 23, 2004


NannieBots are the automated side of the site. They are, even as we speak, watching out for suspicious and in some cases criminal behaviour in every chatroom they can find on any server in the world. They do this automatically. They do not need human help and they are not something that you can use yourself, download, upload, view the source-code of, have your own....this technology has taken literally years of development to produce and it is meant to take away the pressure from needing a massive volume of human users to visit and rate chatrooms. This gives us rule #2 - Unless you have Terabytes of storage space, ultra quick Dell PowerEdge servers and a big fat internet connection (and a bulging wallet), you cannot 'use' a Nanniebot!
posted by Decypher at 5:35 PM on March 23, 2004

Nah, my hopes weren't up. There are some tremendously silly holes in Jim's story, such as a core-dump message that accidentally got posted to the screen yet had no negative side effects on the bot's session, and his claims about how the bot could tell exactly which Boston Cameron was in.

I'm just amazed at the level of care put into some portions of this hoax (the ChatNannies website, the patient and thorough explanations from Jim, etc) paired with the lack of any attempt to make it appear anything like a bot, and the somewhat uninformed excuses for why it acts the way it does.

And the biggest mystery of all: when will the journalists catch on to the hoax?
posted by mockerybird at 5:43 PM on March 23, 2004

AI has been solved


I mean, even if you solve NLP, you still haven't solved "AI", Turing's test and its strong adherents notwithstanding.

As for the "interview"... not funny.
posted by azazello at 8:02 PM on March 23, 2004

Actually, NLP is known as an "AI-complete" problem: that is, solving NLP would probably require a system theoretically as smart as a human. It's amazing how much world knowledge is needed just to generate a simple dialog.

Anyway, if one had a giant corpus of previous IM conversations, it would be possible to use a language model to provide guesses for responses. Still, I'm remaining skeptical about this.
posted by Alison at 8:28 PM on March 23, 2004
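The corpus-based guessing Alison describes can be sketched very simply. This is a toy illustration only, not anything ChatNannies claims to use; the corpus, index structure, and function names here are all invented for the example:

```python
import random
from collections import defaultdict

# Toy corpus of prior IM exchanges: (incoming message, observed reply) pairs.
# A real system would need millions of logged conversations.
corpus = [
    ("hi", "hey whats up"),
    ("hi", "hello there"),
    ("how are you", "im good thanks"),
    ("how are you", "not bad, you?"),
    ("what are you doing", "just messing with computers"),
]

# Index replies by the words of the incoming message, so a new message
# can be matched against anything it shares vocabulary with.
index = defaultdict(list)
for incoming, reply in corpus:
    for word in incoming.split():
        index[word].append(reply)

def respond(message: str) -> str:
    """Guess a reply by sampling replies seen after similar messages."""
    candidates = []
    for word in message.lower().split():
        candidates.extend(index.get(word, []))
    if not candidates:
        return "tell me more"  # canned fallback, ELIZA-style
    return random.choice(candidates)
```

This kind of lookup produces locally plausible one-liners but has no memory or world knowledge, which is exactly why the long, coherent "private interview" transcript is so suspicious.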

Yeah, Alison, the problem is that "AI complete" is by no means a strict term, and is subject to a lot of debate. I'm not denying that NLP is the "holy grail" of AI research, I'm just saying that there are a lot of different components and aspects of "intelligence", by far not all of them necessary to hold a conversation with the average human, and I'm irritated by this simplistic P/NP-like representation of the entire field. It's probably because I specialize, among other things, in a part of AI not strictly necessary for NLP - machine vision =)
posted by azazello at 10:39 PM on March 23, 2004

The real question is reaction times. Even on something like an 8-Xeon machine, the thing perhaps still wouldn't respond instantly, but it should at least respond faster than a human could. The fact that it doesn't capitalize (something that could be added easily via a regular expression after the fact) is a sign that there's simply fast typing involved. Anyway, it's a flagrant fake, and it's disturbing that people who are both scientists and journalists can't pick up on it.
posted by abcde at 11:29 PM on March 23, 2004
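The after-the-fact capitalization abcde mentions really is a one-line regular expression; a minimal sketch (the function name is hypothetical):

```python
import re

def capitalize_sentences(text: str) -> str:
    """Uppercase the first letter of the text and of each new sentence."""
    return re.sub(r'(^|[.!?]\s+)([a-z])',
                  lambda m: m.group(1) + m.group(2).upper(),
                  text)
```

A bot author who wanted polished output could pipe every response through something like this before sending it, so the "bot" consistently skipping capitals looks less like raw code and more like a human typing in a hurry.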

I'd guess that this "hoax" is being perpetrated in order to leave net child abusers with a sense of unease. If they believe such a thing is possible, and that they may be talking to a bot which will shop them to the authorities, they may be less inclined to prowl chatrooms... It's like empty burglar alarm boxes on the side of houses - a visual deterrent.
posted by benzo8 at 12:22 AM on March 24, 2004

Benzo, you have that right on, I think: it's a classic media virus. But the "donate" via PayPal button makes me uneasy about the purity of the "creator's" motives.
posted by bclark at 5:36 AM on March 24, 2004


This thread has been archived and is closed to new comments