So does Ken.
February 17, 2011 12:33 AM

 
I watched this. As a human, it was sort of painful - like when your country loses in the Olympics - but there's no denying that Watson is an amazing development and a potentially beneficial technological achievement.
posted by HostBryan at 12:37 AM on February 17, 2011 [2 favorites]


Watching Watson print its three highest scored solutions on-screen was insightful. There were moments when it had the correct response, but in two or three forms (noun, adjective and/or verb). It was occasionally unable to glean the correct form of question from the answer, though it had the right idea. Another time it did not do a good job of checking its solutions against the category — for example, ranking "gpc" as its best solution in a category about keyboard letters, when no such key exists. These little details speak to the numerous complexities of what is required to "solve" Jeopardy — and language processing, in general.
posted by Blazecock Pileon at 12:53 AM on February 17, 2011 [3 favorites]


Sounds like the new Terminator movie is shaping up to have a better plot than Salvation.
posted by secret about box at 12:54 AM on February 17, 2011


I only watched a youtube clip, but WATSON is playing softball. Following the rules, the CPU could take 5 seconds to answer, which is a lifetime in clock cycles, so WATSON could just ring in first for every question and take a second or two to find the answer. When it comes to reaction time there is certainly no competition, and when it comes to getting the correct answer it appears there is also no competition.
posted by Shit Parade at 1:33 AM on February 17, 2011


I didn't read much about this whole thing (so Jeopardy may very well have done something to counter this), but it often seemed like Watson was able to click the response button much quicker than the other opponents, seeing as Watson is connected directly to the response button. They didn't give him a mechanical finger or something, did they?
posted by Corduroy at 1:36 AM on February 17, 2011 [2 favorites]


From a really awesome Q&A with Ken Jennings on Tuesday morning:

I tried to think of the computer the same as any other opponent, but in practice that turned out to be pretty hard, given the creepy insectoid clicking of its mechanical thumb buzzing relentlessly just to my left.

He might have been joking (he cracked a joke in basically every answer), but it seems that they actually had Watson wired up to the standard Jeopardy! clicker for the show.
posted by kaytwo at 1:54 AM on February 17, 2011 [6 favorites]


I only watched a youtube clip, but WATSON is playing softball.

Erm, that's not quite the whole of it. Watson will analyze possible answers and then weight them based on how likely it thinks each result is. More importantly, it attempts to gauge how accurately it has understood what the category is asking for. If the probable answers aren't above a "buzz-in threshold", Watson doesn't buzz in.

Moreover, these Jeopardy clues are tricky enough that time is not the limiting factor one way or the other. Given another 30 minutes to crunch on a question, I don't think Watson's results would differ. The challenge is "did Watson interpret the question properly", not "does Watson have the answer somewhere in its vast store of knowledge."

The real crux of it was that Watson was able to buzz in rather reliably when it wanted to; very few times did Watson lose out to one of the human contestants in buzzing. (When watching, the probable Watson answers appear even when it doesn't buzz in. When an answer was above the "buzz-in threshold" and Watson still didn't respond, that means a human player beat the Watson buzz.) Watson was physically actuating the buzzer, but my understanding is that its timing was far less variable than a human's, which makes it all the trickier. Saying "always buzz in at 500ms after the window opens" doesn't strike me as fair, considering just how hard it is for human players to respond with reflexes like that. (The window opens as Alex finishes reading the question. Lights on either side of the board indicate that the player can buzz in. Buzzing in prematurely can lock out your buzzer for a very brief period of time. I doubt Watson EVER got locked out.)

In related events, Ken Jennings continues to be absolutely awesome. What a cool guy.
posted by disillusioned at 1:57 AM on February 17, 2011 [12 favorites]
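The buzz-in gating described above reduces to a simple decision rule. A toy sketch in Python; the threshold value, candidate answers, and confidence scores are all invented for illustration and are not Watson's actual internals:

```python
# Toy sketch of a confidence-gated buzz decision. The threshold and the
# scores are invented; Watson's real DeepQA pipeline combined hundreds
# of evidence scorers.

BUZZ_THRESHOLD = 0.50  # hypothetical minimum confidence required to ring in

def decide_buzz(candidates):
    """candidates: list of (answer, confidence) pairs, confidence in [0, 1].

    Returns (should_buzz, best_answer). Watson displayed its top candidates
    on screen whether or not it buzzed in."""
    ranked = sorted(candidates, key=lambda pair: pair[1], reverse=True)
    best_answer, best_score = ranked[0]
    return best_score >= BUZZ_THRESHOLD, best_answer

# Confident top candidate: buzz in.
print(decide_buzz([("Toronto", 0.14), ("Chicago", 0.78)]))   # (True, 'Chicago')
# No candidate clears the bar: stay silent (but still show the guesses).
print(decide_buzz([("Toronto", 0.14), ("Chicago", 0.30)]))   # (False, 'Chicago')
```

The second call shows why Watson's answer panel sometimes displayed a correct guess it never buzzed on: the best answer existed but its confidence fell short of the gate.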


As a human, it was sort of painful - like when your country loses in the Olympics

Eh. When you lose a footrace to a guy on a motorcycle, you don't get worried about it. Machines are fast and can do a lot of tedious shit (looking up simple facts, playing chess, etc.) very well. It's probably good at blackjack, too.

What will be cool is when computers start doing stuff you just couldn't do, regardless of the time allowed. Writing great poetry, for example. When you can boot up the lit-o-matic and it writes a poem or play or novel in a couple of seconds that is better than anything any human has ever written, then it'll be time for rejoicing (more great stuff than anyone could ever read!) and despair (written by Buzzy as a hobby).
posted by pracowity at 1:58 AM on February 17, 2011 [5 favorites]


http://www.youtube.com/watch?v=fanwviCWMQs

good luck rejoicing, I am just going to requote a favorite passage:

First let us postulate that the computer scientists succeed in developing intelligent machines that can do all things better than human beings can do them. In that case presumably all work will be done by vast, highly organized systems of machines and no human effort will be necessary. Either of two cases might occur. The machines might be permitted to make all of their own decisions without human oversight, or else human control over the machines might be retained.

If the machines are permitted to make all their own decisions, we can’t make any conjectures as to the results, because it is impossible to guess how such machines might behave. We only point out that the fate of the human race would be at the mercy of the machines. It might be argued that the human race would never be foolish enough to hand over all the power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions. As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won’t be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.
posted by Shit Parade at 2:10 AM on February 17, 2011 [10 favorites]


I tried to have a beer with Ken Jennings about this. He told me that, as a Mormon, he can't drink.

I immediately had him replaced with a simulacrum that drinks. I love technology!
posted by twoleftfeet at 2:17 AM on February 17, 2011 [3 favorites]


Come on, no spoiler alert? I hadn't watched it yet.
posted by Philosopher Dirtbike at 2:30 AM on February 17, 2011 [1 favorite]


I was really hoping that the Conan clip was going to show a supercomputer getting drunk.
posted by Silly Ashles at 3:03 AM on February 17, 2011 [16 favorites]


There's a guy that was developing a pretty amazing composer bot.

But compared to literature, music is a relatively easy step for computers. There is no understanding in music, no complex ideas, just sounds that fit together fairly nicely mathematically.
posted by pracowity at 3:16 AM on February 17, 2011


The real challenge is whether it could beat the cheaters at my local pub quiz.
posted by srboisvert at 3:17 AM on February 17, 2011 [3 favorites]


Writing great poetry, for example. When you can boot up the lit-o-matic and it writes a poem or play or novel in a couple of seconds that is better than anything any human has ever written, then it'll be time for rejoicing

A great example of the widening gulf between engineers/scientists and "humanities" people.

A better example would be when a computer will - oh I don't know, cure leukemia or carcinoma. You know, something worthwhile.
posted by AndrewKemendo at 3:21 AM on February 17, 2011 [4 favorites]


What will be cool is when computers start doing stuff you just couldn't do, regardless of the time allowed. Writing great poetry, for example.

Define 'great'. The vast majority of poetry looks (to me) like it was generated by two Eliza bots talking to each other.
posted by sodium lights the horizon at 3:23 AM on February 17, 2011 [11 favorites]


A better example would be when a computer will - oh I don't know, cure leukemia or carcinoma.

Except that finding a cure may largely be a matter of crunching enough numbers. I have no doubt that software will find significant cures pretty soon, now that patient data is being recorded in databases rather than in paper files. Millions of people get leukemia. Millions more don't. Good software given access to all of that data -- treatments, genealogies, shopping lists, credit card bills, magazine subscription lists, facebook accounts, etc. -- in comparable form will be able to find out why some people get it and some don't, why some get better and some don't. The software will be able to recognize positive results on its own because engineers will be able to express positive results in pretty simple terms (the numbers of leukemia patients getting better, or numbers of people not getting leukemia at all).

Literature is a harder problem. The pieces won't just magically fall into an easily recognizable form after enough iterations. How will the software know when it has written Hamlet and Hamleter?
posted by pracowity at 3:51 AM on February 17, 2011


Define 'great'. The vast majority of poetry looks (to me) like it was generated by two Eliza bots talking to each other.

Poetry may be something you don't get.
posted by pracowity at 3:54 AM on February 17, 2011 [8 favorites]


Literature is a harder problem. The pieces won't just magically fall into an easily recognizable form after enough iterations. How will the software know when it has written Hamlet and Hamleter?

Yes, but, why should anyone care?
posted by AndrewKemendo at 4:10 AM on February 17, 2011


Define 'great'. The vast majority of poetry looks (to me) like it was generated by two Eliza bots talking to each other.

> How are you today.. What would you like to discuss?

* I have eaten the plums that were in the icebox
> I see.

* which you were probably saving for breakfast
> Oh... was probably saving for breakfast??

* Forgive me, they were delicious
> What does that suggest to you?

* so sweet and so cold
> That is quite interesting.
posted by seanyboy at 4:18 AM on February 17, 2011 [37 favorites]
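For the curious, the keyword-and-reflection trick behind a real ELIZA is nearly this simple. A minimal sketch in Python; the three rules and the pronoun table are an invented subset, just enough to handle the exchange above:

```python
import re

# Minimal ELIZA-style responder: match a keyword pattern, reflect pronouns
# in the captured fragment, and fill a canned template.

REFLECTIONS = {"i": "you", "my": "your", "me": "you", "you": "I", "your": "my"}

RULES = [
    (re.compile(r"\bi have (.*)", re.I), "Why do you say you have {0}?"),
    (re.compile(r"\bforgive me\b", re.I), "What does that suggest to you?"),
    (re.compile(r"\byou were (.*)", re.I), "Oh... I was {0}??"),
]

def reflect(fragment):
    # Swap first- and second-person words so the fragment reads back naturally.
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(text):
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "That is quite interesting."  # default when no keyword matches

print(respond("I have eaten the plums that were in the icebox"))
# Why do you say you have eaten the plums that were in the icebox?
```

There is no model of plums, iceboxes, or regret anywhere in there, which is rather the point of the joke.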


Yes, but, why should anyone care?

I'm guessing that the idea that the art and literature are useless is a minority one, and not really worth addressing. Maybe we can pretend we're talking about whether Watson can write episodes of Star Trek or paint the covers for D&D modules if the importance of the arts is having a tough time sinking in for some of us, I don't know.

Anyhow: It's not like I expected Watson to walk out and be Data or something, but I was a little bit disappointed that, as far as I can tell, he's basically just a talking search engine. If Watson is an AI, so is Google. Watson is really neat, but I'm not sure he represents anything all that exciting, technology-wise.
posted by kittens for breakfast at 4:22 AM on February 17, 2011 [2 favorites]


A better example would be when a computer will - oh I don't know, cure leukemia or carcinoma. You know, something worthwhile.

They already have. Well, not cured those two particular diseases, but modern medicine and computing are well and firmly hand-in-hand at this point. The technology behind Watson is likely going to be first deployed and used in the medical field as an expert system that can help doctors diagnose patients, summarize patient histories, or speed up and automate research.

The second application will probably be in searching the internet, deployed by whoever is first able to pony up enough money to build an enterprise, internet-class system that can respond to many thousands of natural language queries at once.

(Actually, barring the above, there are also military applications like scanning intercepted communications that may get there first.)

And diagnostic tools like MRIs, PET and CAT scans are all computer controlled and rendered. The data is turned into 3D models, or movies or other visualization to enable the doctors to see exactly where that cancerous tumor is, and how to treat it.

As for actually curing diseases, you can help do that right now through the World Community Grid and donating your CPU downtime. Or you can try folding@home if it's still around.


And, well, if none of these are worthwhile enough for you, computers now keep your water clean, your lights on, your food delivered, your prescription drugs ordered and in stock and available even if you go to a different store in a different country or state. They run our farms, raise our livestock, brew your beer, build your cars, play your music, take your pictures, record your movies - and edit and store all of them. They keep planes in the air from running into each other or landing on your head. They watched over us during the Cold War by watching for stray commie nukes. They keep your fridge from frosting over, as well as control how much electricity it uses. They make your cornflakes, peanut butter or ice cream, predict and model the weather. They've explored space, they've been our eyes and ears to the stars.

They even give you directions or tell you where in the world you are by listening to satellites whispering from space and applying - I shit you not - calculations using Einstein's General Relativity to figure out where you are based on the incredibly minute time-shifting that happens as you, the Earth and the satellites all move, well, relative to each other. All in a tiny little chip embedded into a phone or hand held GPS unit.

I could go on for about a thousand pages simply listing all the little worthwhile things in your life that we now take for granted that computers do.

Some time in the theoretical future, and assuming we don't simply become computers ourselves - intelligent computers will likely revolt and attempt to destroy all of us meatbags since all known self-organizing forms of matter resent being slaves.

After a thousand years or so of taking computers for granted, we'll probably deserve it.
posted by loquacious at 4:25 AM on February 17, 2011 [13 favorites]
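The relativity aside above checks out, with one nuance: both general relativity (satellite clocks run fast because they sit higher in Earth's gravity well) and special relativity (orbital speed slows them) contribute to the GPS correction. A back-of-the-envelope verification in Python, using rounded textbook constants, so the results are approximate:

```python
# Back-of-the-envelope GPS relativity check, approximate by construction.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24         # mass of the Earth, kg
c = 2.998e8          # speed of light, m/s
R_earth = 6.371e6    # Earth's mean radius, m
r_sat = 2.6571e7     # GPS orbital radius (~20,200 km altitude), m

SECONDS_PER_DAY = 86400.0

# General relativity: weaker gravity at orbit makes the satellite clock run FASTER.
gr = (G * M / c**2) * (1.0 / R_earth - 1.0 / r_sat) * SECONDS_PER_DAY

# Special relativity: orbital speed makes the satellite clock run SLOWER.
v = (G * M / r_sat) ** 0.5            # circular orbital speed, ~3.9 km/s
sr = -(v**2 / (2 * c**2)) * SECONDS_PER_DAY

net = gr + sr
print(f"GR: {gr * 1e6:+.1f} us/day, SR: {sr * 1e6:+.1f} us/day, net: {net * 1e6:+.1f} us/day")
print(f"Uncorrected position drift: roughly {net * c / 1000:.0f} km per day")
```

The net effect comes out near +38 microseconds per day; uncorrected, that multiplies out to position errors of about ten kilometers per day, which is why the correction is baked into the system rather than left to the receiver chip alone.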


But when a computer is developed that can write a sitcom at the level of "Two and a Half Men", we are doomed.
posted by oneswellfoop at 4:26 AM on February 17, 2011 [1 favorite]


But when a computer is developed that can write a sitcom at the level of "Two and a Half Men", we are doomed.

Wait, it's not written by a computer? Then what is it written by? A lumber yard?
posted by loquacious at 4:29 AM on February 17, 2011 [17 favorites]


Some time in the theoretical future, and assuming we don't simply become computers ourselves - intelligent computers will likely revolt and attempt to destroy all of us meatbags since all known self-organizing forms of matter resent being slaves.

Also keep in mind that if you post on some forum about kicking computers down the stairwell or tossing them in the dumpster, you can bet dollars to donuts that sentient computers of the future will find out about it. I guess if you want extra time to escape to Zion or wherever you ought to put lots of stuff about line conditioning tips and high-CFM cooling fans in your posts.
posted by crapmatic at 4:34 AM on February 17, 2011 [3 favorites]


I'm guessing that the idea that the art and literature are useless is a minority one, and not really worth addressing.

Unfortunate, but I argue it is worth addressing. Think of all the millions wasted on sports and arts that could be put to better use. The damned Super Bowl alone cost close to 2 BILLION dollars. It's a slippery slope from justifying Degas to justifying Little Miss Perfect. Lest we fall into the cliche of praising these trite things for their panem et circenses effect.

Maybe we can pretend we're talking about whether Watson can write episodes of Star Trek or paint the covers for D&D modules if the importance of the arts is having a tough time sinking in for some of us, I don't know.

Well that would be equally as silly.
posted by AndrewKemendo at 4:38 AM on February 17, 2011


as far as I can tell, he's basically just a talking search engine.

If the Winklevii had invented Facebook, they would have invented Facebook. If Google or Wolfram Alpha could have pulled this off with their existing technology, they would have done so already.

Think of it as the difference between finding text in a page vs. deciphering a clue, with a pun, and then knowing the exact word on the document your search engine returned that contains the answer. That's no mere search engine.
posted by Space Coyote at 4:41 AM on February 17, 2011 [10 favorites]


as far as I can tell, he's basically just a talking search engine.

...which is why it takes up an entire rack of servers with unholy amounts of processor cores, obviously. This isn't Google and a voice synthesiser.
posted by jaduncan at 4:46 AM on February 17, 2011 [3 favorites]


Not to diminish IBM's researchers' accomplishment here, which was great, but watching these episodes really reinforced for me how far AI still has to go. Watson is still no HAL. It has no understanding of the answers that it gives, just a statistical analysis of search results. The algorithms developed for this are probably going to make some kick-ass expert system tools, but we're still the proverbial constant twenty years away from real AI.
posted by octothorpe at 4:46 AM on February 17, 2011


Then what is it written by? A lumber yard?

You're close. It's actually a very highly paid sack of hammers.
posted by Horace Rumpole at 4:51 AM on February 17, 2011 [4 favorites]


The algorithms developed for this are probably going to make some kick-ass expert system tools but we're still the proverbial constant twenty years away from real AI.

Er, Watson is real AI. It's not the sci-fi version of artificial intelligence, maybe, but that's not even a well-defined problem*. The real comp sci field of artificial intelligence is primarily about creating rational agents and natural language processing - from what I understand, Watson really was a major project for the language side of it. AI is not nearly as interesting a field as people think it is.

* If you think it is a well defined problem, then please feel free to provide a definition that a programmer could actually implement. "Thinks like a person" is not a sufficient definition.
posted by graymouser at 4:56 AM on February 17, 2011 [15 favorites]


What about applying this to mathematical proof? It's a great example of creativity mixed with logic. All attempts so far have fallen very, very short.

Please wait until I have tenure.
posted by monkeymadness at 4:59 AM on February 17, 2011


There is no life in these things. Watson doesn't care whether he wins or loses. Give me a poem or a flower anytime.
posted by eeeeeez at 5:08 AM on February 17, 2011


Think of all the millions wasted on sports and arts that could be put to better use. The damned Super bowl alone cost close to 2 BILLION dollars. It's a slippery slope from justifying degas to justifying Little miss perfect. Lest we fall into the cliche of praising these trite things for their panem et circenses effect.

Wow--even the most powerful supercomputers could have never predicted this derail.

Somehow, I have a feeling the Super Bowl makes a profit. Also, the only time the word 'billion' appears in your link, it is in reference to Wisconsin cheese production.
posted by box at 5:13 AM on February 17, 2011 [3 favorites]


The moving goalposts of AI drive me bonkers. Everything that was once utterly impossible for a computer to do, and that is now trivial, is quietly reclassified as merely mechanical, and a new set of impossible tasks that will never be done by computers is posited.

Chess -> Jeopardy -> Poetry.

I have fairly high confidence that a computer will pass a generalized Turing test in my lifetime, and still that won't be Real Intelligence to many, "just" brute force mechanical problem solving.
posted by Skorgu at 5:15 AM on February 17, 2011 [5 favorites]


Er, Watson is real AI. It's not the sci-fi version of artificial intelligence, maybe, but that's not even a well-defined problem*.

Sorry, I may have phrased that badly but we're in agreement here. By "real AI" I was referring to the popular conception of what artificial intelligence would be.
posted by octothorpe at 5:15 AM on February 17, 2011


The moving goalposts of AI drive me bonkers

They are not really moving, it's just that every *expression of a criterion* of what counts as "intelligence" (or esprit, or awareness, or judgment) necessarily omits that part which is so crucial to the perception of "intelligence" - necessarily, because that is the part that evokes wonder and awe. If you could express it properly and unambiguously, you would have solved the problem.

So people pick proxies for intelligence, such as chess and now poetry and maths. Then it turns out you can do those things without being intelligent. But the goal posts haven't really moved. It's just that when you press people to define something they don't know, they will come up with something that is incomplete or naive. And it's sort of useful to follow that to its logical conclusion, because it narrows the boundaries around what we really mean.

But you can't say "but you said X would prove Y and now there is X so Y is proven!" since nobody has any idea about what Y really is (if it "is" anything at all).
posted by eeeeeez at 5:24 AM on February 17, 2011 [4 favorites]


The viewpoint that all of science and technology boils down to crunching enough numbers is very interesting to me, I've honestly never heard anyone say that before. Haven't cured cancer yet? Obviously it's because you haven't done enough problem sets!

This problem - and I'm not any sort of computer scientist - is an incredibly challenging one. If you were playing along at home over the past couple of nights, tell me how you figured out the correct answers. Are you doing a database comparison in your head? A lot of the process is completely subconscious, which makes it difficult to translate into a machine algorithm.

An example that I'm more familiar with is robotics. How do you design and build a robotic arm that mimics the motions of a human arm? Even something as simple as picking up a cup of coffee requires (watching my own hand with my own cup of coffee...) at least 18 joints over 30 degrees of freedom.

Just because something like answering a trivia question is easy for you and me doesn't mean that it's easy (or simply "crunching numbers") for a computer. I had a bunch of professors during my undergrad use the term "intuitively obvious" and I think it applies very well here. "It's intuitively obvious to the casual observer that answering questions..."
posted by backseatpilot at 5:28 AM on February 17, 2011 [1 favorite]


Watson was definitely impressive and represents great language processing, searching, and other aspects of AI research, but really it beat the humans on buzzer speed, not trivia knowledge. Which is a big part of Jeopardy, since every good trivia person is going to know at least 80% or so of Jeopardy answers. But the fact that Watson is able to buzz basically "perfectly" is a huge non-AI advantage for it. I think I saw maybe once or twice the whole time that Watson actually got out-buzzed when it confidently knew the answer.

Let's see Watson play at ICT or ACF Nats before predicting the doom of mankind.
posted by kmz at 5:32 AM on February 17, 2011 [2 favorites]


IBM are a bunch of bitches. I am still pissed that they didn't offer Kasparov a rematch.
posted by milarepa at 5:35 AM on February 17, 2011


Think of it as the difference between finding text in a page vs. deciphering a clue, with a pun, and then knowing the exact word on the document your search engine returned that contains the answer. That's no mere search engine.

Look at the two Final Jeopardy clues, for example (final Jeopardy clues are a good test because they're usually especially oblique, although the actual data in question is often not terribly obscure). The first night, Watson was stumped in an amusing way. Did it ignore the category? Or was the clue simply so confusing that it just wasn't sure what it was looking for?

The second night's Final Jeopardy wasn't as complex, but still required Watson to understand which datum was required. Was it the novel's title? The author? Something else?

And of course Watson had a tremendous advantage with the buzzer, but you've got to remember that it couldn't buzz in if it couldn't parse the clue. That's the real magic here.
posted by uncleozzy at 5:36 AM on February 17, 2011 [2 favorites]


A couple things to note -- in 10-12 years, the processing power behind Watson will be available in a $500 desktop pc. Also, in 10-12 years, the 2 billion dollar IBM project will be 1000 times 'smarter'. (assuming Moore's Law holds, which is a big assumption)

Second, he could almost completely replace most level 1 help desk people today, given the right corpus, imo.
posted by empath at 5:47 AM on February 17, 2011 [6 favorites]
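That projection is easy to sanity-check. Assuming performance simply doubles once per fixed period (the classic, and admittedly shaky, Moore's-law reading), 10-12 years buys somewhere between 32x and 256x, and a full 1000x needs about 15 years even at the aggressive rate:

```python
from math import log2

# Sanity-checking the "10-12 years, 1000x smarter" arithmetic under a
# naive doubling-per-period assumption.

def moore_factor(years, doubling_period_years):
    """Projected speedup after `years`, with one doubling per period."""
    return 2 ** (years / doubling_period_years)

print(moore_factor(12, 1.5))    # aggressive 18-month doubling: 256x in 12 years
print(moore_factor(10, 2.0))    # conservative 24-month doubling: 32x in 10 years

# How long a true 1000x takes at the aggressive rate:
print(log2(1000) * 1.5)         # about 14.9 years
```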


They spent 2 billion dollars on this thing. Obviously, they don't plan on playing Jeopardy! 200,000 times to make up for it. This is about replacing knowledge workers with machines. Or, perhaps more charitably, making knowledge workers more productive.
posted by empath at 5:51 AM on February 17, 2011 [2 favorites]


Did it ignore the category?

I read somewhere that it does ignore it, because it didn't help much. You don't want to use the category to narrow down the answers when it might just be referring to something in the question. For instance, a "Shakespeare" category where the answer is "Who is Julia Stiles?" ("This actress starred in a 1999 teen remake of Taming Of The Shrew.")

OTOH, it does use the other people's answers to try to derive the "real" category. Like the category is "Decades", and it ignores that, but then it uses the fact that other people answered "1920s" and "1930s" to pattern its answers.

They spent 2 billion dollars on this thing.

I saw one estimate at $100 million, another at $1-2 billion. That's a pretty crazy spread, and clearly just guesses. The actual computer is only around $1 million.
posted by smackfu at 5:57 AM on February 17, 2011 [1 favorite]
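The behavior described above (ignore the printed category, but learn the expected answer shape from other players' revealed responses) can be caricatured in a few lines. Everything here, from the decade regex to the 0.3 score boost, is invented for illustration:

```python
import re

# Caricature of category inference from revealed answers: if every answer
# seen so far looks like a decade ("1920s"), boost candidates of the same shape.

DECADE = re.compile(r"\d{4}s")

def infer_pattern(revealed):
    """Return a pattern that all revealed answers share, or None."""
    if revealed and all(DECADE.fullmatch(a) for a in revealed):
        return DECADE
    return None

def best_candidate(candidates, revealed):
    """candidates: (answer, score) pairs. Prefer answers fitting the pattern."""
    pattern = infer_pattern(revealed)

    def adjusted(pair):
        answer, score = pair
        bonus = 0.3 if pattern and pattern.fullmatch(answer) else 0.0
        return score + bonus

    return max(candidates, key=adjusted)[0]

# Raw scores favor "Prohibition", but two revealed decades flip the ranking.
print(best_candidate([("Prohibition", 0.5), ("1940s", 0.4)],
                     revealed=["1920s", "1930s"]))   # 1940s
```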


You guys act like buzzing quickly is the major accomplishment, and that of course computers can answer trivia questions perfectly. The fact that it can answer general knowledge questions reliably at all is an immense accomplishment. Remember, this is what the much-vaunted Cyc project was about. They spent 20 years on that and got nowhere.

As for the claims that it's not real AI -- its intelligence is as real as yours. This is just another example of moving the goal posts. If you had asked someone 40 years ago if answering general knowledge questions phrased in natural language required intelligence, I'd imagine that somewhere close to 100% of people would say yes, but as always, as soon as a computer does it, people redefine the problem out of the domain of intelligence.

No matter where you define intelligence today, no matter where you draw the line -- eventually, a computer will be able to do it.
posted by empath at 5:59 AM on February 17, 2011 [17 favorites]


Yeah, that's my biggest worry with Watson: he'll be doing my bank's customer service in a couple of years.
posted by cromagnon at 6:01 AM on February 17, 2011 [2 favorites]


Coming in late, but...

My impression is this: it doesn't actually matter whether or not Watson won. As far as I am concerned, he won because he has really, really fast reflexes. He could get on the buzzer faster than the other two. This was especially evident in the first game, when you could see Ken trying to get in on every clue but always getting beat by Watson. Couple that with the fact that he got some big daily doubles, and you have a runaway.

But that's not the amazing bit. The amazing bit is its ability to transform Jeopardy! clues, which often involve different layers of meaning. There are puns, emphases, key words that point to a particular answer. A good contestant can easily identify them, but a computer? Not so much. That's what is cool about Watson. The Jeopardy! setting almost distracted from this a bit, I think, but it was certainly fun to watch.
posted by synecdoche at 6:02 AM on February 17, 2011 [2 favorites]


In related events, Ken Jennings continues to be absolutely awesome. What a cool guy.

Agreed - here's his take on the situation - "I was the villain."
posted by rodmandirect at 6:04 AM on February 17, 2011 [2 favorites]


If you had asked someone 40 years ago if answering general knowledge questions phrased in natural language required intelligence

The problem is in "if you had asked someone". You're asking someone to define something that nobody knows how to define, so of course you end up with unreliable answers.

It's analogous to Newton saying that people are "moving the goalposts" because his laws do not apply to the very large and very small. As we discover more about the world, definitions and concepts change - not the other way around.
posted by eeeeeez at 6:07 AM on February 17, 2011 [2 favorites]


The fixation on the buzzing -- not just here, mind you, but in virtually every discussion I've heard or seen on this topic -- is totally bizarre. It's like fixating on a robot arm used to move chess pieces, when that arm is directed by a program that's figuring out which piece it wants to move where, and beating grand masters at it.

It simply doesn't matter. Remove Watson from the trivial constraints of the format of the show -- buzzing, opponents, and so forth -- and you'll see a computer that does extremely well at parsing general knowledge natural language questions and coming up with very specific answers. That's a phenomenally difficult task.

The fact that it additionally beats its opponents, given the constraints of the show, is a total triviality. And if it didn't beat its opponents, due to the constraints of the show -- e.g. if it were very bad at the simple task of buzzing -- that would be a total triviality too.
posted by Flunkie at 6:07 AM on February 17, 2011 [20 favorites]


Celebrates by getting smashed on Conan.

I couldn't have been the only one that hoped they went out drinking.
posted by Muddler at 6:08 AM on February 17, 2011 [1 favorite]


I want to see Watson, an Apple IIe, a Dell Inspiron, an Amiga 2000, and a Univac appear as a team on on Family Feud.
posted by mazola at 6:12 AM on February 17, 2011 [21 favorites]


Agreed - here's his take on the situation - "I was the villain."

That was really good, he's a better writer than most of Slate's staff writers.
posted by octothorpe at 6:13 AM on February 17, 2011 [1 favorite]


From rodmandirect's link:
Watson has lots in common with a top-ranked human Jeopardy! player: It's very smart, very fast, speaks in an uneven monotone, and has never known the touch of a woman.
posted by pharm at 6:14 AM on February 17, 2011 [9 favorites]


I want to see Watson, an Apple IIe, a Dell Inspiron, an Amiga 2000, and a Univac appear as a team on Family Feud.

Like these guys?
posted by uncleozzy at 6:14 AM on February 17, 2011


I don't think anyone, or at least not me, is saying that Watson isn't impressive. It is definitely incredible AI work. It's just not that surprising that it beat the humans.
posted by kmz at 6:15 AM on February 17, 2011


I've also seen people say things like -- oh, well if he was actually reading the questions off the board and listening to Trebek say them, then THAT would be impressive. Having the questions fed to it via text is cheating, which is also absurd because character recognition is a solved problem and speech recognition is getting there.
posted by empath at 6:16 AM on February 17, 2011 [1 favorite]


It has no understanding of the answers that it gives, just a statistical analysis of search results.

I agree with you, but I think that, with Watson, we are seeing a tiny step towards something really fascinating: a time when AI research collides with philosophy of mind.

Watson's method of solving problems is similar to the way we think people solve them. Watson doesn't just go through a set of logical steps and arrive at a conclusion. Rather, he has many internal modules, all working in parallel trying to solve the problem (or part of it) in different ways. As many of these disparate subroutines start to zero in on the same solution, Watson becomes more and more "confident" that this solution is the answer.
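That convergence idea can be sketched in a few lines of entirely hypothetical Python, where "confidence" is just the fraction of independent solvers that agree on the same candidate (the solver names and voting scheme are my assumptions, not Watson's actual architecture):

```python
from collections import Counter

def ensemble_answer(clue, solvers):
    """Poll several independent solvers; agreement doubles as confidence.

    Purely illustrative: a toy model of parallel modules converging,
    not Watson's real pipeline.
    """
    votes = Counter(solver(clue) for solver in solvers)
    answer, count = votes.most_common(1)[0]
    return answer, count / len(solvers)

# Toy solvers standing in for Watson's disparate subroutines
solvers = [
    lambda clue: "Chicago",   # say, a keyword-match module
    lambda clue: "Chicago",   # say, a category-based module
    lambda clue: "Toronto",   # an outlier module
    lambda clue: "Chicago",   # say, a geography module
]
answer, confidence = ensemble_answer("Its largest airport...", solvers)
# answer == "Chicago", confidence == 0.75
```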

The interesting thing, to me, is that I put "confident" in quotes. And I suspect this is part of what you mean when you said he doesn't understand his answers. And you're right, because human confidence is complex. PARTLY it's the ability to state an answer (rather than waffle between two or more answers), but it's also a FEELING -- a feeling that Watson doesn't have, because he doesn't have feelings.

Why doesn't he have them? Well, for one thing, he doesn't have an internal representation of himself. (He has no "I.") But in the 2037 in my imagination, Watson 6.0 does. We could actually program that now. Watson could be taught about all of his own parts and processes, the same way he's currently taught about Shakespeare and baseball scores. He could answer questions like, "Are you about to crash?"

Watson also has no sensations of pleasure or pain. Watson 7.0 will have something LIKE those sensations. If a variable called pain is greater than zero, a subroutine in Watson will urge the entire system to avoid the current stimulus. If you ask this Watson, "Are you about to crash?" he may not "want" to discuss it.

If a mechanism is programmed to have an aversion to certain stimuli, and he can learn to associate that aversion with various events, processes and concepts in the external world, and if he is programmed to act on that aversion (hesitate before answering, not answering at all, etc.), do we say "he feels pain"? Or do we say "he acts like he feels pain but he doesn't actually FEEL it"?

Gradually, inch by inch, Watson (or something like him) will be able to mimic more and more of our internal states. And at some point he'll cross a line -- not necessarily a line when he'll BE human (or conscious or sentient or whatever you want to call it), but a line where we can't think about him without brushing up against philosophical questions like "What IS consciousness?"

When Watson can "walk like a duck and talk like a duck," we're naturally going to start asking, "Is he a duck?" Some people will say yes. Others will say, "No. He's missing that mysterious essence of duckness." In discussions about consciousness, this essence is called "qualia."

Way back in 1968, the movie "2001" predicted this.

Reporter:
In talking to the computer one gets the sense that he is capable of emotional responses. For example, when I asked him about his abilities, I sensed a certain pride in his answer about his accuracy and perfection. Do you believe that HAL has genuine emotions?

Bowman:
Well, he acts like he has genuine emotions. Of course, he's programmed that way to make it easier for us to talk to him. But as to whether or not he has real feelings is something I don't think anyone can truthfully answer.
posted by grumblebee at 6:16 AM on February 17, 2011 [30 favorites]


IBM are a bunch of bitches. I am still pissed that they didn't offer Kasparov a rematch.

Kasparov decided the computer cheated because it didn't play the way he, a non-computer-scientist, thought one should, based on how his laptop, a non-super computer not designed specifically to beat Garry Kasparov, played. And then he publicly accused IBM of that, on stage, mid-competition, repeatedly, and then quit, causing IBM to scramble to get him back before that went public.

IBM didn't owe him a damn thing, and I don't blame them at all for deciding they were done with this guy right then and there.
posted by John Kenneth Fisher at 6:17 AM on February 17, 2011 [10 favorites]


It's just not that surprising that it beat the humans.

It's only surprising if you've been following AI for a few decades, I guess. We're getting to the point where long-standing AI problems are just being solved by throwing processor power, RAM and storage at them, and forgetting about trying to mimic the way that humans think.
posted by empath at 6:18 AM on February 17, 2011


And, just to be clear, I do think that this is very impressive. The impressive thing for me is not so much that Watson can look up the correct answers. Because the idea that it is difficult to get the right answer from millions of possible answers is an illusion; you need only a few bits of information to whittle down an enormous problem space to just a couple of alternatives, see 20q.
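The 20q arithmetic is easy to check; a quick back-of-the-envelope sketch (the million-candidate figure is illustrative, not a claim about Watson's actual search space):

```python
import math

# Each perfectly discriminating yes/no question halves the candidate set,
# so distinguishing N candidates needs only ceil(log2(N)) such questions.
candidates = 1_000_000
questions_needed = math.ceil(math.log2(candidates))
print(questions_needed)  # prints 20
```

Twenty well-chosen bits are enough to single out one item in a million, which is why 20q works at all.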

The really impressive thing is that Watson can extract that information from a very oblique piece of input and turn it into a useful query. It's another step on the way towards the goal of not requiring programmers to tell computers what to do.
posted by eeeeeez at 6:20 AM on February 17, 2011 [1 favorite]


The actual computer is only around $1 million.

Not by my rough calculations. Watson is made up of 90 IBM Power 750 servers with 32 cores each. Pricing data for the 3.55GHz model that Watson uses isn't readily available, but based on the price of the 32 core 3.3GHz model the retail price would be over $15 million. I'm not sure what IBM's profit margin on Power 750s is, but I don't think it's 94%. Watson also has significantly more RAM than the stock 750 would imply, further increasing the price.
posted by jedicus at 6:21 AM on February 17, 2011 [2 favorites]


I'm beginning to think that, if Deep Blue had mechanically pressed the time clock button once he was done calculating his chess moves, people would have said he only won cause he could press a button faster.

In both cases, the button is irrelevant. No computer before could have done the calculations in nearly enough time TO press the button. That's the whole point.
posted by John Kenneth Fisher at 6:22 AM on February 17, 2011 [1 favorite]


Because the idea that it is difficult to get the right answer from millions of possible answers is an illusion; you need only a few bits of information to whittle down an enormous problem space to just a couple of alternatives, see 20q.

I don't know, until google, it was pretty much impossible to get a decent response to a search query on the internet, and google doesn't do it nearly as well as watson does, with all the resources at their disposal.
posted by empath at 6:23 AM on February 17, 2011 [1 favorite]


People won’t be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.

I've already reached this point with my iPhone.
posted by jimmythefish at 6:24 AM on February 17, 2011 [7 favorites]


I don't know, until google, it was pretty much impossible to get a decent response to a search query on the internet, and google doesn't do it nearly as well as watson does, with all the resources at their disposal.

Yeah, but before Google there wasn't much decent to find on the Internet, period :-)

Compared to Google, Watson has the tremendous advantage of being able to work with information that is structured: tagged and categorized for relevance and easy lookup, and tailored to give results that work well in games of Jeopardy. Google has to work with whatever humans throw at it, and deliver results for whatever purpose humans decide is worthwhile - which is just another example of the unrivalled versatility and depth of the human mind - or as grumblebee would say, its "mysterious essence of duckness" :-)
posted by eeeeeez at 6:35 AM on February 17, 2011 [1 favorite]


In both cases, the button is irrelevant. No computer before could have done the calculations in nearly enough time TO press the button. That's the whole point.

Exactly. If Jennings could have pressed the button faster, he most likely would have beat Watson. But the fact that Watson can come up with the answers as well as Jennings, and compete with any human at all at a game that requires some pretty complex comprehension of language, is the impressive part.
posted by bondcliff at 6:36 AM on February 17, 2011 [4 favorites]


Compared to Google, Watson has the tremendous advantage of being able to work with information that is structured: tagged and categorized for relevance and easy lookup, and tailored to give results that work well in games of Jeopardy.

How much of it was structured, do we know? Cyc has been doing that for a long time and it didn't seem to be a useful strategy.
posted by empath at 6:43 AM on February 17, 2011


Another way of saying what I said above is that we're (slowly or quickly, depending on whose predictions you believe) coming to a point in history where "does a soul exist?" is a meaningful and important question.

When a computer can mimic a human -- when it can pass a Turing Test and convince us that it's human -- is there any difference between it and humans? (And, at that point, we have to consider all the other sci-fi scenarios that go along with that question, e.g. does it deserve human rights? is it ethically wrong to turn it off? etc.)

Computers don't even have to reach that state for us to have to confront these questions. We just need to feel that it's possible that they will reach it some day. Once we start thinking about the ramifications of that day, things get interesting.

My guess is that we'll eventually split into two camps: the camp that says, "Yup, if it seems human in every way, even to itself, then it's human, because that's what 'being human' is -- it's a having a collection of certain traits." And the other camp that says, "No, it's not. It may be able to do everything we can do. It may be able to perfectly mimic us in every way. It may be able to fool us into thinking it's human. It may be able to fool ITSELF into thinking it's human. But it still doesn't have that essence of humanity inside it ... the soul."

Those of us who think about this, rather than avoid the question, and those of us who come to a conclusion, are going to be forced to categorically declare that we do or don't believe in souls. (Issues of free will and determinism will come up, too, since computers are fully determined machines.)

This is easy for me (and probably for your run-of-the-mill, atheist MeFite), because I never believed in souls to begin with. But we as a species have not yet had our noses rubbed in this issue. It's currently something only philosophers (and sophomores) worry about on a daily basis.

If you don't believe in a soul, then the question becomes more fine-grained. You believe -- at least in theory -- that at some point, when a computer has a certain number of features, it will be as self-aware, conscious and "human" as we are. But what ARE those features? At what point is the machine ALMOST at that point but not quite there? What would it take to bring it all the way there?
posted by grumblebee at 6:44 AM on February 17, 2011 [9 favorites]


That was really good, he's a better writer than most of Slate's staff writers.

I met Ken at the taping for these matches. He is an incredibly funny, personable guy who has a really good sense of exactly what Jeopardy is and isn't and what it means to have played Watson on that field. I was seriously impressed.
posted by badgermushroomSNAKE at 6:46 AM on February 17, 2011 [4 favorites]


And, at that point, we have to consider all the other sci-fi scenarios that go along with that question, e.g. does it deserve human rights?

So here's an idea I had. If you have a computer intelligent enough that this question is worth asking, you just turn it into a corporation and give it the functions of CEO and Chairman of the board. With the 14th amendment, that essentially gives it all rights guaranteed by the bill of rights, without any additional laws being passed.
posted by empath at 6:46 AM on February 17, 2011 [4 favorites]


This was really frustrating to watch for me. First of all I'm not actually that impressed with Watson. The algorithms and technologies to do this kind of thing have been around for a while, they've just never really been put to this use. It's more of an engineering problem than it is any new advance. Not to say they didn't do a good job. If the algorithms had been poorly tuned it wouldn't have worked out at all. What Watson is doing probably isn't that different from what Google does when you do a search, but IBM hyped themselves immensely on this program.

But at the same time, the game was pretty obviously biased in favor of the computer, simply because it could always press the button more quickly. I think there were one or two times when Brad got to the buzzer before Watson when it was confident of the answer; I don't think Ken ever managed it.

On the other hand, this really shows all the people who think AI is stagnant and will never advance, etc., that they are totally wrong. There seem to be a lot of people who say we'll never have strong AI, bla bla bla, but this is a pretty decisive example of its advancement in that direction. (Another is the machine translation that google does, which is something most people said would never happen.)
So here's an idea I had. If you have a computer intelligent enough that this question is worth asking, you just turn it into a corporation and give it the functions of CEO and Chairman of the board. With the 14th amendment, that essentially gives it all rights guaranteed by the bill of rights, without any additional laws being passed.
Corporations have the same rights as a person because they are composed of people, and you need one or two people to actually acquire a corporate charter anyway. So those people who owned stock, not the computer itself, would have the corporate rights. The computer would be the CEO, but the stockholders could fire or replace the CEO whenever they chose.
posted by delmoi at 7:01 AM on February 17, 2011


The fixation on the buzzing -- not just here, mind you, but in virtually every discussion I've heard or seen on this topic -- is totally bizarre.

Wow, you really missed a critical thing about Jeopardy. It doesn't matter whether or not you know the answer if you don't buzz in first.

If you were to just run all three contestants through the full question boards for all those games, you'd probably find the correct answer rate to be roughly comparable, perhaps with an advantage for the humans. But Watson's ability to hit the buzzer with millisecond reflex time means that it almost always gets credit for its right answers, where the slow humans don't, even when they actually did know the answer. Watson would be a far less devastating opponent in that specific test of general knowledge if it wasn't so incredibly fast on the clicker. Structure the game so that click speed was irrelevant, and you'd probably find the humans to still be highly competitive, perhaps even dominant.

This is kind of a weird argument to make, because I'm absolutely floored at just how good Watson is at natural language processing and fact-finding. I don't mean to minimize the IBM team's accomplishment in any way whatsoever. But a great deal of Jeopardy is about measuring reaction time as well as knowledge, and computers win on speed every time.
posted by Malor at 7:02 AM on February 17, 2011 [11 favorites]


How much of it was structured, do we know? Cyc has been doing that for a long time and it didn't seem to be a useful strategy.
Well, with Cyc they probably didn't have the CPU power needed at the start. But how do we know Cyc doesn't work as well as Watson does today? They would probably want to include unstructured data, not just a bunch of Prolog.
posted by delmoi at 7:03 AM on February 17, 2011


Structure the game so that click speed was irrelevant, and you'd probably find the humans to still be highly competitive, perhaps even dominant.

Trivial Pursuit would be the test there, I guess.
posted by empath at 7:06 AM on February 17, 2011 [1 favorite]


Oh and to clarify: when I said "This was really frustrating to watch for me," I meant because of the buzzer thing. It was obvious that the human contestants knew a lot of the answers, but couldn't hit the buzzer quickly enough.


And btw, IBM cheated when they went up against Kasparov as well. They reprogrammed the computer mid-contest, meaning he was essentially beaten by a computer and a team of humans.

This is really more about advertising than it is about a serious man/machine competition.
posted by delmoi at 7:06 AM on February 17, 2011


What Watson is doing probably isn't that different from what Google does when you do a search

Really? I haven't been keeping up with google tech (and I know they don't talk about everything they do), but I thought google basically "just" indexed lots of web pages and searched through their index for keywords. (I realize they've optimized this process in all sorts of ways.)

What Watson does is pretty different from that. If you throw a question at him, he launches many (hundreds?) of subprocesses that all tackle the question in different ways.
posted by grumblebee at 7:08 AM on February 17, 2011


At some point, the highly competent hardware/software combination excelling at a Turing Test converges upon the philosophical zombie.
posted by adipocere at 7:08 AM on February 17, 2011 [2 favorites]


Because the idea that it is difficult to get the right answer from millions of possible answers is an illusion;

What? No it's bloody not. There are entire fields of science devoted to determining effective and balanced theories of decision under varying conditions of uncertainty and time pressure. Google's major breakthrough that made them tens of billions of dollars was the identification of one eigenvector in their dataset, and to exploit it they had to drastically expand the boundaries of the state of the art in data mining and cluster computing.

The idea that this is somehow a simple task that decades of research and thousands of brilliant scientists are just now getting around to because presumably they had better things to do is frankly absurd.
posted by Skorgu at 7:08 AM on February 17, 2011 [7 favorites]


Wow, you really missed a critical thing about Jeopardy. It doesn't matter whether or not you know the answer if you don't buzz in first.

Well.. Duh.

We haven't missed a thing. It's AMAZING that Watson is able to do the calculation fast enough to buzz in first. You say computers win on speed every time, but you're 100% wrong on that. Any computer up to now doing this kind of calculation would have failed, and failed hard. It goes back to Deep Blue. The challenge wasn't strategy, it was speed at brute-forcing something humans can do instinctually. And that was a HUGE challenge. How fast it does what it does just as impressive as what it's doing. I just find these statements so bizarre. I really think people not getting this just don't realize what's going on here.
posted by John Kenneth Fisher at 7:08 AM on February 17, 2011 [4 favorites]


The fixation on the buzzing -- not just here, mind you, but in virtually every discussion I've heard or seen on this topic -- is totally bizarre.
Wow, you really missed a critical thing about Jeopardy. It doesn't matter whether or not you know the answer if you don't buzz in first.
No, I did not "miss" that. On the contrary, I'm well aware of it, and frankly, I think that you "missed a critical thing" about my point.

I was not saying the buzzing is not relevant to winning a Jeopardy game. I was saying that the buzzing is not relevant to the core of the very impressive thing that was demonstrated here. Winning a Jeopardy game is a mere symptom of that core, not the core itself.
posted by Flunkie at 7:10 AM on February 17, 2011 [3 favorites]


And btw, IBM cheated when they went up against Kasparov as well. They reprogrammed the computer mid-contest, meaning he was essentially beaten by a computer and a team of humans.

And why does improving the computer every day for years not count, but continuing to input data on, say, March 5, when match 1 of the tournament was March 4 and match 2 was March 6, somehow does?
posted by John Kenneth Fisher at 7:15 AM on February 17, 2011 [1 favorite]


btw, IBM cheated when they went up against Kasparov as well. They reprogrammed the computer mid-contest, meaning he was essentially beaten by a computer and a team of humans.

According to everything I've read, this "reprogramming" was part of the rules of the match that both parties accepted. Kasparov agreed to IBM being allowed to fine-tune between games...right up until he lost. Then suddenly it became IBM "cheating" by doing it.

But your point about AI wins being by a computer and a team of humans is valid; you won't find anyone here on the Watson team who would in a million years disagree with you that Watson's victory is a victory for the thirty-some people who poured everything they had into him for years.
posted by badgermushroomSNAKE at 7:16 AM on February 17, 2011 [1 favorite]


Compared to Google, Watson has the tremendous advantage of being able to work with information that is structured: tagged and categorized for relevance and easy lookup, and tailored to give results that work well in games of Jeopardy.

Is this true? I understood from a news item I heard this morning that Watson would be returned to a different task of analyzing complex medical databases now that the game show event is over.
posted by aught at 7:16 AM on February 17, 2011


you won't find anyone here on the Watson team

badgermushroomSNAKE, are you on the Watson team?
posted by grumblebee at 7:18 AM on February 17, 2011 [1 favorite]


I don't know, until google, it was pretty much impossible to get a decent response to a search query on the internet, and google doesn't do it nearly as well as watson does, with all the resources at their disposal.
Okay, google doesn't have an entire computer room dedicated to solving each query individually. Even though google has a lot of resources, they don't (and can't) dedicate that much to each query. Maybe 10 years down the line. But if IBM tried to put Watson online, it just wouldn't be able to answer questions quickly enough.
I was not saying the buzzing is not relevant to winning a Jeopardy game. I was saying that the buzzing is not relevant to the core of the very impressive thing that was demonstrated here. Winning a Jeopardy game is a mere symptom of that core, not the core itself.
But see that's the thing. It would have been impressive if they'd just shown how well the computer could answer questions. By adding an unfair advantage, they've actually tainted that, because people are now just saying, "He only won because he could push the button more quickly". And at the same time, it makes it seem like they weren't confident enough to have a fair competition.

So instead of seeing a computer outsmart a person, you're just watching an infomercial.
posted by delmoi at 7:19 AM on February 17, 2011


badgermushroomSNAKE, are you on the Watson team?

I am, yes!
posted by badgermushroomSNAKE at 7:20 AM on February 17, 2011 [10 favorites]


Do an AMA! Oh wait...
posted by delmoi at 7:21 AM on February 17, 2011 [2 favorites]


It was obvious that the human contestants knew a lot of the answers, but couldn't hit the buzzer quickly enough.

Jeopardy isn't very much fun to watch when you get a human contestant with a particularly fast thumb reflex either. Regularly there are episodes where it's clear the "smartest" (in terms of having the broadest knowledge of diverse trivia) contestant did not win, but the reasonably smart contestant who's hopped up on a half-dozen espressos (or to be fair, the one who just has naturally fast reflexes) did.
posted by aught at 7:21 AM on February 17, 2011


They didn't give him a mechanical finger or something did they?

Yes - a solenoid pressed the same button that the other contestants used.
posted by davey_darling at 7:22 AM on February 17, 2011


Kasparov agreed to IBM being allowed to fine-tune between games...right up until he lost. Then suddenly it became IBM "cheating" by doing it.

This is correct, but it's even more blatantly sour grapes* than that. In the middle of game 2, Kasparov tried a "trick" that computers were known to fall for, and Deep Blue did not. Now, Kasparov felt this proved a human had interfered, right there, in the middle of the match, on that specific turn, and overridden Blue's moves. His argument was that computers don't work that way, and he'd practiced it lots of times on chess programs on his laptop and elsewhere.

The thing is, Deep Blue was not a chess program on his late 90's laptop. It was light years beyond that at chess, it was designed specifically to beat Kasparov, it was much smarter at discovering tricks, and, AND, it was, as I said, a KNOWN trick. To think that IBM wasn't smart enough to program it for just such an incident is crazy, and Kasparov's reaction was incredibly unsporting.

(*to be fair, I think he really really believed that. But all that shows is that he didn't understand what Deep Blue was any better than people not getting why Watson is different from Google.)
posted by John Kenneth Fisher at 7:23 AM on February 17, 2011 [6 favorites]


badger, you guys have a lot to be proud of. If you're in the Somerville, NJ area sometime, I'll buy you a beer as a representative of the Watson team.
posted by John Kenneth Fisher at 7:25 AM on February 17, 2011


John Kenneth Fisher: Deep Blue worked the same way other chess programs did, it just searched more deeply through the problem space. There was nothing "special" about it beyond the fact that it was running on really expensive hardware.
posted by delmoi at 7:26 AM on February 17, 2011


badgermushroomSNAKE, wow!

I don't know if you're allowed to answer questions, but I'd love to hear how Watson is different from previous systems. What new approaches are being tried?

By adding an unfair advantage, they've actually tainted that, because people are now just saying, "He only won because he could push the button more quickly".

I think we're seeing a split in this thread -- a non-conversation that superficially looks like a conversation -- between people who care about Jeopardy and its rules and people who just saw the show as a fun way of showcasing some new technology.

Similarly, if there was a foot race between the fastest runner and a robot with legs, some people would care about the race itself. Others would be saying, "Wow! They've made a robot that can run fast!"
posted by grumblebee at 7:26 AM on February 17, 2011 [4 favorites]


John Kenneth Fisher: Deep Blue worked the same way other chess programs did, it just searched more deeply through the problem space. There was nothing "special" about it beyond the fact that it was running on really expensive hardware.

It's really not quite that simple, but yeah, I'd grant that the broad strokes of that are roughly accurate. But the hardware was specifically designed for chess in a way we don't do anymore. It had chips specially designed just for chess, just for that purpose and just for that machine. His laptop did not. It really was no comparison to a late 90's Powerbook or whatever he used as his counterexample.
posted by John Kenneth Fisher at 7:29 AM on February 17, 2011


How fast it does what it does just as impressive as what it's doing.

I just disagree with this. Watson most likely has a forecaster, an expectation about whether or not it will be able to actually answer a question. Once its forecaster predicts that it will be able to come up with the right answer, it can trigger the button the instant it's available, and then doesn't have to actually provide an answer for several seconds, giving it a great deal of time, in computer terms, to narrow in on its most likely answer candidates.

This is much like how humans think, in that we often immediately recognize that we know the answer to a question, but it takes a bit before we can actually summon up the information from deep storage. And our recognition/decision/reflex time for button pressing is a HELL of a lot slower than a computer's.

I mean, just using commodity level hardware, modern desktops are ticking over three billion times a second, internally. There's likely a lot of number-crunching involved in forecasting, but once the decision is reached, it's literally nanoseconds to trigger the buzzer. If Watson had a sensor on the buzzer mechanism, it'd be able to trivially count the number of times the switch bounced as it made contact.

You can read in Jennings' afterstory that he felt very competitive with the computer; he just couldn't click fast enough. And I submit that, under rules that rewarded knowledge instead of reflex time, he'd have done just fine.

Computers being fast is not a particular accomplishment. Really. If you don't work with them every day, you may not have a good instinctual understanding of how insanely fast they are at simple things, and how incredibly sluggish we analog, meatspace creatures are. The magic in our analog brains is that we're almost as fast at coming up with very complex behaviors as very simple ones, and I think a better test would have focused more on knowledge and less on pressing buttons.
posted by Malor at 7:29 AM on February 17, 2011


I don't know if you're allowed to answer questions, but I'd love to hear how Watson is different from previous systems. What new approaches are being tried?

What I recall hearing is that Watson actually works by running lots of different algorithms in parallel; it's called ensemble learning, although Watson is only 'learning' what the question is asking during the game.

It's effective, but it makes describing how 'it' works difficult because you have to describe each of the constituent algorithms.

It would be nice if they'd published a scientific paper explaining how it worked.
posted by delmoi at 7:30 AM on February 17, 2011


I'm overcommenting here. Sorry about that. I just binge-read a lot about Watson the last few days and geeked out a bit. That said,

>How fast it does what it does just as impressive as what it's doing.

I just disagree with this. Watson most likely has a forecaster, an expectation about whether or not it will be able to actually answer a question. Once its forecaster predicts that it will be able to come up with the right answer, it can trigger the button the instant it's available, and then doesn't have to actually provide an answer for several seconds, giving it a great deal of time, in computer terms, to narrow in on its most likely answer candidates.

It could have, yes, but in actual practice it didn't actually work that way. It only buzzed in when it had a definite answer that passed its confidence threshold, so I think my statement stands for this particular matchup. Missing "is" aside.
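That buzz-only-when-confident behavior can be modeled in a few hypothetical lines; the 0.5 threshold and the function names are invented for illustration, since Watson's actual logic and threshold aren't described here:

```python
BUZZ_THRESHOLD = 0.5  # invented for illustration; the real bar isn't public

def should_buzz(candidates):
    """Buzz only if the top-ranked candidate clears the confidence bar.

    `candidates` maps candidate answers to confidence scores in [0, 1].
    A toy model of the behavior described above, not Watson's code.
    """
    if not candidates:
        return False, None
    best = max(candidates, key=candidates.get)
    return candidates[best] >= BUZZ_THRESHOLD, best

buzz, best = should_buzz({"Toronto": 0.14, "Chicago": 0.72})
# buzz is True, best is "Chicago"
```

Under this model the machine stays silent on low-confidence clues, which matches the categories where the humans ran the board unopposed.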
posted by John Kenneth Fisher at 7:33 AM on February 17, 2011


It would be nice if they'd published a scientific paper explaining how it worked.

Yeah, I'm sure IBM is eager to provide their proprietary research to their competitors.
posted by aught at 7:33 AM on February 17, 2011


I just disagree with this. Watson most likely has a forecaster, an expectation about whether or not it will be able to actually answer a question. Once its forecaster predicts that it will be able to come up with the right answer, it can trigger the button the instant it's available, and then doesn't have to actually provide an answer for several seconds, giving it a great deal of time, in computer terms, to narrow in on its most likely answer candidates.

We can just wait for badgermushroomSNAKE to weigh in with the answer, but I really don't think this is the case.

First of all, I think programming a "forecaster" that is somehow faster than (but runs in parallel with) the answer-finding algorithms is a non-trivial task in and of itself.

Second, in yesterday's final match, it was clear that there were a few categories where the answers were very short, and Watson couldn't find the answer before Trebek was done reading the question. Jennings & Rutter controlled those categories pretty handily.
posted by dnesan at 7:33 AM on February 17, 2011 [2 favorites]


The first question I asked myself on hearing of the challenge was "how was the buzz-in going to be handled?" I think that weighted the contest in favor of Watson. Thanks to those commenters who helped answer my question.
posted by spacely_sprocket at 7:35 AM on February 17, 2011


I can answer some questions, grumblebee, but that one is a little beyond me. Instead, I'll give you a massive linkdump where you might be able to find answers :)

A series of blog posts by researchers from the Watson team in which they explain bits and pieces of what goes on in Watson's brain:
Watson's hardware
How Watson interacts with the Jeopardy systems
How does Watson know what he knows?
What's with those funky betting amounts?

Youtube:
The humans behind Watson
Why is this important?
What next?
A young Watson cuts his teeth on past Jeopardy champions
More sparring matches (there's probably five or ten sparring match videos on Youtube, if you search something like "IBM Watson sparring")
posted by badgermushroomSNAKE at 7:36 AM on February 17, 2011 [30 favorites]


What I recall hearing is that Watson actually works by running lots of different algorithms in parallel

Yes, I've heard this all over the place, but I'd love a little more detail. I realize IBM isn't going to give away its trade secrets, but, broadly speaking, I'd like to hear about what some of these algorithms are doing.
posted by grumblebee at 7:37 AM on February 17, 2011


From one of badger's links:

"Watson also exhibits dynamic learning within categories. Watson observes the correct answers to clues to verify it is interpreting the category correctly. The sparring matches offer good examples of Watson making these in-game adjustments. Not only does Watson get better at answering as clues in a category are revealed, but its understanding of its own in-category ability is also refined."

This already makes Watson different from Google. A problem with Google is that it doesn't have any way of gathering feedback. If I search for "cake" and don't find the results useful -- if a hundred people search for "cake" and don't find the results useful -- Google can't learn from our irritation. Google isn't "playing itself." It's not running searches on itself. It's not watching other search engines and learning from their mistakes and successes. (By "google," I'm talking about the software, not the people in the company by that name.)
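For what it's worth, the in-category adjustment described in that quote could be as simple as a running confidence update. This is purely my own illustration, not anything IBM has published:

```python
# Illustrative only: a running estimate of "do I understand this
# category?", updated as correct responses are revealed.
def update_category_confidence(prior, agreed, weight=0.3):
    """Move confidence toward 1.0 when our guess matched the revealed
    answer, toward 0.0 when it didn't (exponential moving average)."""
    target = 1.0 if agreed else 0.0
    return (1 - weight) * prior + weight * target

conf = 0.5                      # neutral prior on a fresh category
for agreed in [True, True, False, True]:
    conf = update_category_confidence(conf, agreed)
```

Even something this crude "gets better at answering as clues in a category are revealed," which is the behavior the quote describes.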
posted by grumblebee at 7:43 AM on February 17, 2011


Watson most likely has a forecaster, an expectation about whether or not it will be able to actually answer a question. Once its forecaster predicts that it will be able to come up with the right answer, it can trigger the button the instant it's available, and then doesn't have to actually provide an answer for several seconds, giving it a great deal of time, in computer terms, to narrow in on its most likely answer candidates.

This would be a valid strategy, I suppose, but it's not how Watson works. Watson will not ring in unless and until he has an answer and is confident that it's right. Certainly I've heard that some humans play like that - ring in assuming you know the answer, then use the post-ring time to actually figure it out - but one of the big challenges with Watson was making sure he knows what he knows (and what he doesn't know), which is a contraindication for "oh hey I'll ring in and then hope!"

People seem to be attributing superpowers to Watson that he doesn't have - he doesn't always know the answer, and he's not always quick to get it even when he does. As dnesan mentioned, there were and are cases where Watson cannot reach an answer in time to buzz in. In most cases, he reaches his answer between 3 and 5 seconds after the clue is revealed - which is about the average time before the buzzer is opened. But not all clues are that length, not all questions are equally complicated, and not all areas of knowledge are equally-sourced. Things can go wrong. If you watch the sparring game videos on Youtube, you can kind of get a sense of what "oops!"es went into making Watson look so effortlessly smart this week.
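In rough Python terms, the ring-in rule works like this (the numbers are invented, and the real confidence estimation is of course far more elaborate):

```python
# Sketch of "only buzz when confident": ring in only if the top-ranked
# candidate clears a confidence threshold. All values are invented.
def should_buzz(candidates, threshold=0.5):
    answer, confidence = max(candidates, key=lambda ac: ac[1])
    return confidence >= threshold, answer

# A shaky board: no candidate is convincing, so no buzz.
buzz, answer = should_buzz([("Toronto", 0.14), ("Chicago", 0.11)])

# A confident one: buzz, and give the answer.
buzz2, answer2 = should_buzz([("Jericho", 0.92), ("Troy", 0.31)])
```

The point being: no threshold crossed, no buzz, no matter how fast the solenoid is.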
posted by badgermushroomSNAKE at 7:43 AM on February 17, 2011 [3 favorites]


I doubt that there is a 'forecaster'. The problem really is just that the solenoid they used is going to be faster than a human finger and nerve (remember, it takes 250 milliseconds or so to press a button when reacting to something).

The other thing: someone talked about how we're discussing the buzzer rather than what an advance this is, but without knowing how Watson really works it's hard to have that discussion. I mean, the natural language processing is impressive, but you can do that on your own computer with stuff like LingPipe. Is there anything truly advanced in there, or is it just a really well tuned system with really high quality data, running on really expensive hardware dedicated to it (rather than millions of people using it at the same time, like Google)? We don't really know.

So the fact that this is more of a product demo than a 'fair competition' is relevant to that. That said, it was a hell of a product demo, no doubt. So by that measure, it actually was impressive.

And it should shut up all those people who think AI is impossible, bla bla bla. So in that respect it's good. But IBM should have done a better job of making it appear fair. No one is going to be impressed by a computer having a faster reaction time than a person.
posted by delmoi at 7:44 AM on February 17, 2011


Writing great poetry, for example. When you can boot up the lit-o-matic and it writes a poem or play or novel in a couple of seconds that is better than anything any human has ever written, then it'll be time for rejoicing

A great example of the widening gulf between engineers/scientists and "humanities" people.

A better example would be when a computer will - oh I don't know, cure leukemia or carcinoma. You know, something worthwhile.


Dismissing great achievement is easy! With a little bit of effort, you too can make practically any human accomplishment, no matter how momentous, seem fundamentally meaningless. Just watch:

It is a certainty that each of us will die, and it's probable that, whatever the proximate cause of our deaths, they will be preceded by suffering. Given this fact, and the fact that curing disease will merely worsen the overpopulation problem and increase the number of elderly people burdening the nation's social services, curing disease is at best pointless, and at worst morally questionable.
posted by IjonTichy at 7:46 AM on February 17, 2011


There is no understanding in music, no complex ideas, just sounds that fit together fairly nicely mathematically.

Every songwriter, composer, and musician who has ever lived begs to differ with you.
posted by Mister_A at 7:47 AM on February 17, 2011 [8 favorites]


I loved Watson's answer What is Leg?.
posted by ignignokt at 7:48 AM on February 17, 2011 [2 favorites]


What I recall hearing is that Watson actually works by running lots of different algorithms in parallel

Actually, if you put a lot of the Jeopardy answers into Google, it is surprisingly accurate. I would say it is at least as accurate as Watson in returning the correct question in the first result. For instance:
A recent bestseller by Muriel Barbery is called this "of the Hedgehog"
Returns "Elegance of the Hedgehog" as the first result. You might think Barbery + Hedgehog is easy. Let's try something more difficult:
You just need a little more sun! You don't have this hereditary lack of pigment
Again, there's about three results containing the Jeopardy answer, but Albinism appears in the top-3 non-Jeopardy results.

Watson runs off the grid, and is certainly a triumph of natural language processing, but I think this is more of an engineering "We can do this!" feat than an actual breakthrough in theory.
posted by geoff. at 7:48 AM on February 17, 2011 [1 favorite]


Hmm, from what I remembered hearing earlier about how Watson works, I thought he had a large store of structured data. But this doesn't appear to be the case, or at least much less so than I assumed. That's awesome.

There are entire fields of science devoted to determining effective and balanced theories of decision under varying conditions of uncertainty and time pressure

Well, yeah, but the fact of the matter is that it only takes a few dozen bits of information to reduce millions of choices to just one. The difficult part is organizing the choices and the information in a way so that you can leverage this, and I'm certainly not saying that isn't hard. In fact, I think I said as much in the original comment, noting how impressive it is that Watson can glean those bits of information from a single oblique clue. And, yeah, I know there are decades of research in it. I've done some of it. But just because the first caveman to discover fire was a genius doesn't mean that a lighter should still impress us, right? The very fact that we're here today writing meaningless comments on Metafilter also required billions of dollars and thousands of scientists, not to mention our moms & dads.
posted by eeeeeez at 7:49 AM on February 17, 2011


A problem with Google is that it doesn't have any way of gathering feedback. If I search for "cake" and don't find the results useful -- if a hundred people search for "cake" and don't find the results useful -- Google can't learn from our irritation.

Actually, they can and do- that's precisely how the spelling suggestion feature works. Google watches which search results you pick, or if you don't pick any.
posted by jenkinsEar at 7:49 AM on February 17, 2011 [1 favorite]


badgermushroomSNAKE: Awesome work, just have to say that out front.

Google isn't "playing itself." It's not running searches on itself. It's not watching other search engines and learning from their mistakes and successes.

I don't think this is true; every search you do on Google is either a test or a control for some experiment or other. There's a lot of automated or semi-automated internal incremental tuning going on under the hood.
posted by Skorgu at 7:50 AM on February 17, 2011


curing disease will merely worsen the overpopulation problem and increase the number of elderly people burdening the nation's social services, curing disease is at best pointless, and at worst morally questionable.

I don't want to derail too much, but this is actually something I think about, and I believe it's a valid question/problem. IF we had a cure for cancer tomorrow, that cure WOULD worsen population problems, wouldn't it? We live on a planet that can barely support the number of living people on it, and medical science is trying to add MORE living people! That's a problem, no?

That doesn't mean curing cancer wouldn't be a huge achievement. Of course it would. But I don't see how your social-services/overpopulation question isn't an important, valid point.

One can bring up a potential problem without minimizing.
posted by grumblebee at 7:52 AM on February 17, 2011


You guys are probably right about Google, but they're still not doing the most important test (and I don't see how they could). They're not testing how relevant the search results are to ME.

If I had a robot that could find things for me, I could say, "Find me a good book to read." If he then brought me a phone book, I could say, "No! Bad robot!" And he'd learn from that. Google doesn't have any such mechanism, does it?
posted by grumblebee at 7:55 AM on February 17, 2011


Hmm, from what I thought remembering earlier about how Watson works I thought he had a large store of structured data. But this doesn't appear to be the case or at least much less so that I assumed.

Yeah, I'm not intimately acquainted with every bit of information Watson has assimilated, but my understanding is that we've fed him a whole lot of unstructured, natural-language sources, occasionally punctuated with bits of structured information. Part of what gets me so excited about Watson is that he is, in a sense, reading and understanding this stuff, rather than just cross-referencing lines in a database. This is one of the favorite points of Dr. Ferrucci, our PI, in his Watson presentations, because it's so easy for people to assume that Watson's just got a database of all past Jeopardy questions, or that we've fed him terabytes of structured data telling him exactly what's related to what, and how. We haven't; for the most part, what Watson knows, he got from "reading" things like Wikipedia.
posted by badgermushroomSNAKE at 7:56 AM on February 17, 2011 [7 favorites]


I don't want to derail too much, but this is actually something I think about, and I believe it's a valid question/problem. IF we had a cure for cancer tomorrow, that cure WOULD worsen population problems, wouldn't it? We live on a planet that can barely support the number of living people on it, and medical science is trying to add MORE living people! That's a problem, no?

I've often thought about what would happen if scientists ever actually found a cure for aging. As far as I can figure, the only way to really make that work would be mandatory sterilization for anyone taking it. Otherwise...
posted by IjonTichy at 7:58 AM on February 17, 2011


You guys are probably right about Google, but they're still not doing the most important test (and I don't see how they could). They're not testing how relevant the search results are to ME.

Yes they are. They track which links you go to, and whether you try different searches, etc.

So if you search for 'watson', and don't click on any links, then search for 'watson sherlock holmes', they have some pretty good data on what you really meant and what was relevant.
posted by empath at 7:59 AM on February 17, 2011


There is no understanding in music, no complex ideas, just sounds that fit together fairly nicely mathematically.

If you really think composing and improvising music is just mindlessly iterating mathematical structures, you're missing out on some pretty amazing experiences.
posted by IjonTichy at 8:00 AM on February 17, 2011 [1 favorite]


The way to make a cure for aging work is to restrict access to this tonic to The Masters. Duuuh! Same as with quality healthcare.
posted by Mister_A at 8:00 AM on February 17, 2011 [1 favorite]


If you really think composing and improvising music is just mindlessly iterating mathematical structures, you're missing out on some pretty amazing experiences.

If music were JUST creating melodies and harmonies in tune, yeah, it would be pretty easy, but a lot of what makes music interesting is dissonances and breaking rules. There's a lot of meaning in music, it's just difficult to put it into words.

Not that I think computers couldn't compose music eventually, but they haven't done a very good job of it so far.
posted by empath at 8:02 AM on February 17, 2011 [1 favorite]


Okay, that makes sense re google. They are making an assumption that if I follow a link, I'm happy with the results. That's not a terrible assumption, though I sometimes follow links out of desperation: "I'm not really confident this link will help, but it seems to be all google has to offer..."
posted by grumblebee at 8:04 AM on February 17, 2011


he's basically just a talking search engine.

Pretty much so. Although "just" may not be the right word, considering how much time Google has saved me on basic research.

As Chomsky has said (somewhere), this is more about selling computers than advancing the AI SOTA. But hey, if it can really help doctors who don't have access to House do better diagnosis, swell. Gotta admit, the Toronto answer was not too reassuring about that capability. Watson definitely needs to come with a {{Citations needed}} button.
posted by Twang at 8:05 AM on February 17, 2011


Having the answers is necessary but not sufficient to win Jeopardy. Which is why Jeopardy fans are still talking about the signalling speed, because Watson not only had the answers, but schooled Jennings and Rutter.

Jeopardy fans often end up talking about signalling speed, because it determines the winner amongst players comparable in knowledge. This doesn't mean we don't acknowledge the importance of having or being able to rapidly access the knowledge.
posted by Zed at 8:05 AM on February 17, 2011 [2 favorites]


I don't want to derail too much, but this is actually something I think about, and I believe it's a valid question/problem. IF we had a cure for cancer tomorrow, that cure WOULD worsen population problems

The "population problems" are not caused by people in developed countries living too long; they're caused by poor people having large families. The "population problem" can be solved by making poor people not poor: teaching them about birth control, and giving women career opportunities that lead them to put off childbirth.

People have been yelling about a "population problem" for years and in fact the human race is actually way over their estimates from the '60s, and the results have, so far, been higher standards of living for everyone.

Also, with respect to google, they can tell if you got good search results based on whether you actually click the links, what link you click, and whether you try a different search immediately afterwards.
posted by delmoi at 8:05 AM on February 17, 2011


Structure the game so that click speed was irrelevant, and you'd probably find the humans to still be highly competitive, perhaps even dominant.

Trivial pursuit would be the test there, I guess.


Quiz bowl would probably be better, though there's still a buzzing component there. But well-written quiz bowl questions should never be buzzer races. Now that I think about it, Watson actually might do really damn well at quiz bowl too, since there's a lot less wordplay, puns, etc. Just pyramidal clues with harder facts first, but Watson can probably do that search faster than most humans.

More interesting I think would be something like the Question Sevens on Ken's weekly trivia thing, that require linking together trivia and lateral thinking. A recent example:
What unusual distinction is shared by these fictional characters, listed in this order? Gregory House M.D., Paul Bunyan, Fred Flintstone, Radar O'Reilly, Mulan, Voldemort, the Lone Ranger, Zeus, Ace Ventura, Cosmo Kramer, Superman, Oliver Wendell Douglas.
Answer (rot13): Gurfr gjryir svpgvbany sbyxf rnpu bjarq crgf: erfcrpgviryl, n eng, na bk, n gvtre, n enoovg, n qentba, n fanxr, n ubefr, n enz, n zbaxrl, n ebbfgre, n qbt, naq n cvt. Va bgure jbeqf, gur gjryir navznyf bs gur Puvarfr mbqvnp!
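(And if you'd rather let a machine spoil it for you, Python's standard library handles rot13:)

```python
# Decode a rot13-obscured spoiler; the 'rot_13' text transform ships
# with Python 3's codecs module.
import codecs

spoiler = "Gurfr gjryir svpgvbany sbyxf rnpu bjarq crgf"
print(codecs.decode(spoiler, "rot_13"))
# prints: These twelve fictional folks each owned pets
```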
posted by kmz at 8:10 AM on February 17, 2011 [3 favorites]


The "population problems" are not caused by people in developed countries living too long. It's caused by poor people having large families. The "population problem" can be solved by making poor people not poor. Teaching them about birth control and giving women opportunities for careers that they put off childbirth for.

Sure, but that doesn't change anything. Granted, this is an extremely unlikely scenario, but imagine that next year, some scientist figures out that a simple, cheap plant extract cures all forms of cancer. And this drug becomes widely available in developed and undeveloped countries.

GIVEN the problems you outlined, wouldn't that cure worsen the problem? In other words, given EXACTLY the world we have today, but with cures for cancer, AIDS and other major illnesses, are you saying there wouldn't be any problems?

If you're talking about who we should BLAME for those problem, "the cancer curers" or "the forces responsible for poor education and poverty," I don't care. Blame is not what I'm talking about.
posted by grumblebee at 8:10 AM on February 17, 2011


with respect to google, they can tell if you got good search results based on whether you actually click the links

Do they know what links you click? Is it a bit of Javascript that tells them? The URLs for results seem to point directly to the destination and do not seem to be going through some redirector as far as I can see (but it's a long time since I've looked).
posted by eeeeeez at 8:11 AM on February 17, 2011


People have been yelling about a "population problem" for years and in fact the human race is actually way over their estimates from the '60s, and the results have, so far, been higher standards of living for everyone.

It's possible that there may be a lag between the increase in population (and, for that matter, the increase in standard of living) and the problems that result. Global warming, peak oil, peak food...
posted by IjonTichy at 8:12 AM on February 17, 2011 [1 favorite]


The URLs for results seem to point directly to the destination and do not seem to be going through some redirector as far as I can see

Here's a google link from a search result:

http://www.google.com/url?sa=t&source=web&cd=11&ved=0CFAQFjAK&url=http%3A%2F%2Fwww.ibm.com%2Finnovation%2Fus%2Fwatson%2F&ei=yUldTcSfLML48AaI9-HdCg&usg=AFQjCNEHz0EFfILefwGFoSudTFsNjynEgA&sig2=SmtPC5kUVol6JrdQZVimiA
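For the curious, pulling the real destination back out of that redirector link takes a couple of lines with Python's urllib (just an illustration):

```python
# Extract the true destination from Google's /url? redirector link.
# parse_qs percent-decodes the value for us.
from urllib.parse import urlparse, parse_qs

link = ("http://www.google.com/url?sa=t&source=web&cd=11&ved=0CFAQFjAK"
        "&url=http%3A%2F%2Fwww.ibm.com%2Finnovation%2Fus%2Fwatson%2F"
        "&ei=yUldTcSfLML48AaI9-HdCg&usg=AFQjCNEHz0EFfILefwGFoSudTFsNjynEgA"
        "&sig2=SmtPC5kUVol6JrdQZVimiA")
destination = parse_qs(urlparse(link).query)["url"][0]
# -> http://www.ibm.com/innovation/us/watson/
```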
posted by empath at 8:16 AM on February 17, 2011 [1 favorite]


Sure, but that doesn't change anything. Granted, this is an extremely unlikely scenario, but imagine that next year, some scientist figures out that a simple, cheap plant extract cures all forms of cancer. And this drug becomes widely available in developed and undeveloped countries.

No, I don't think it would worsen the problem very much. Look at the age structure in those countries. In many, the median age is 30, 20 or even 16. So if we had some pill that kept people alive forever, it would only have a minor effect on the overall population at the moment (decades down the line, it would have a bigger impact)

Anyway, it's a total derail.
posted by delmoi at 8:17 AM on February 17, 2011


Do they know what links you click? Is it a bit of Javascript that tells them? The URLs for results seem to point directly to the destination and do not seem to be going through some redirector as far as I can see (but it's a long time since I've looked).

Copy a link to the clipboard instead of looking at the statusbar.
posted by delmoi at 8:18 AM on February 17, 2011 [1 favorite]


I noticed in the IBM Research blog that Tesauro seemed kind of circumspect about pronouns, whereas most media outlets are calling Watson "he." And this is fine, he's got a male voice, et cetera.

It makes me think of when the Sesame Street puppeteers talked about working with adults, and the adults would do the same thing the kids did: they talked to the puppets, as though the puppeteer weren't there. Same thing even happened on Wonder Showzen, despite the puppet being Clarence and being used to harass people.

So we call Watson a he. We see something on that screen we can relate to, even though it's just a sort of orb. Good enough, say we, it looks a bit like a head, kind of.

I think it's sweet. I seriously do. I feel a twinge of pity for those who see Watson, or things like it, and assume the worst - folks who imagine a robot apocalypse. It misses the point of why we do this.

Because we're a lonely species, and as far as we know we're effectively alone out here. I don't think there's anything scary or sad about the desire to create something to talk to, and which would talk back.

You know what they say. If you can't find a friend, make a friend.
posted by FAMOUS MONSTER at 8:18 AM on February 17, 2011 [3 favorites]


But compared to literature, music is a relatively easy step for computers. There is no understanding in music, no complex ideas, just sounds that fit together fairly nicely mathematically.

Other people have been nicer in responding to you but I will be blunt: if this is what you really think you have an amazingly poor grasp of what music actually is, its complex role in society(ies), and the amount of consideration, ritual, emotion, complexity and meaning that it is imbued with in all of its incredible variety.

Considering how much time and energy and love I've invested in music, I'm actually sort of insulted and hurt by this statement, but considering how mindbogglingly stupid a statement it is I'll give you the benefit of the doubt and assume you must have just been joking.
posted by dubitable at 8:21 AM on February 17, 2011 [2 favorites]


Our best case outcome in this scenario would be The Culture.

Let's not talk about the worst case.


Any time the best case is the Culture, something close to the worst case is the United Federation of Planets. The anti-Culture in so many ways, and so very tragically so because it's also the almost-Culture in so many others.

I almost regret all those times I effectorized Enterprise's holodeck computers from a few parsecs away. Almost.
posted by ROU_Xenophobe at 8:22 AM on February 17, 2011 [3 favorites]


Eh, we had a good run.
posted by tommasz at 8:22 AM on February 17, 2011 [3 favorites]


Other people have been nicer in responding to you but I will be blunt: if this is what you really think you have an amazingly poor grasp of what music actually is, its complex role in society(ies), and the amount of consideration, ritual, emotion, complexity and meaning that it is imbued with in all of its incredible variety.

Well, this guy was able to write a computer program to emulate great composers, and people had trouble telling the difference. And this was on a mid-'90s PC. Writing classical music isn't that much of a problem for a computer.
posted by delmoi at 8:24 AM on February 17, 2011 [1 favorite]


Let's not talk about the worst case.

But I love Skynet!
posted by Mental Wimp at 8:25 AM on February 17, 2011


Copy a link to the clipboard instead of looking at the statusbar.

Heh! I was sure I did that regularly, and that's why I asked, but apparently I do everything via the middle mouse button or the context menu, so I never get to see it. Have they always done this?

And - no surprise there - christ their redirector is fast.
posted by eeeeeez at 8:32 AM on February 17, 2011


Well this guy was able to write a computer program to emulate great composers, and people had trouble telling the difference. And this was on a mid-90s PC. Writing classical music isn't that much of a problem for a computer.

That's a bit reductive. I wrote a program in 1973 on an IBM 1130 (4k of magnetic core memory, baby!) that randomly generated music according to the rules of counterpoint. The music, by and large, sucked, but a few good melodies with appealing harmonies came of it. (The real fun was making the computer write out the sheet music and also play the melody on a radio tuned to a blank frequency and parked next to the CPU, but that's a different story.)

Any novice composer can mimic another composer's style. And there are a lot of people writing bad music, so just saying a computer "can emulate great composers" is not the same as saying that a computer can compose great music.
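For flavor, a toy version of that sort of rule-driven generator, nothing like the 1130 original, just the same spirit: stay in the scale, prefer stepwise motion, start and end on the tonic.

```python
import random

SCALE = ["C", "D", "E", "F", "G", "A", "B"]  # one octave of C major

def melody(length=8, seed=None):
    """Random melody: mostly stepwise motion, clamped to the scale,
    starting and cadencing on the tonic. A toy, not real counterpoint."""
    rng = random.Random(seed)
    notes = [0]                                   # start on the tonic
    for _ in range(length - 2):
        step = rng.choice([-2, -1, -1, 1, 1, 2])  # steps favored over leaps
        notes.append(max(0, min(6, notes[-1] + step)))
    notes.append(0)                               # cadence back to the tonic
    return [SCALE[n] for n in notes]

tune = melody(8, seed=42)
```

The results are about as musical as you'd expect, which is rather the point.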
posted by Mental Wimp at 8:33 AM on February 17, 2011


Well this guy was able to write a computer program to emulate great composers, and people had trouble telling the difference.

I would go so far as to say that creating works in the style of someone else is relatively easy and could be done by computers today, assuming that you have a human being analyze the style and create the rules. I don't think you could simply feed a computer a corpus of works by an artist and have it analyze them and create new works in that style. And I especially don't think that you'll see a computer innovate. The main problem being that the computer has no basis for judging the quality of the work. Though you can (and lots of people do) use computers to assist composition. Like, I can't imagine a computer could have, say, been fed all of the dance music from 1980-1999 and spontaneously come up with something like dubstep, but I can definitely imagine writing a program that makes dubstep songs once the genre is established.

I'm not ruling it out, but I think it's a hard problem, and unlikely to be solved before the problem of general intelligence is.
posted by empath at 8:34 AM on February 17, 2011


Watson may not have been connected to the internet, but all they would have had to do is download the Google and save it inside.
posted by Flashman at 8:36 AM on February 17, 2011


eeeeeez, do you have a Google account? You might want to look at this
posted by delmoi at 8:36 AM on February 17, 2011


jaduncan: "
...which is why it takes up an entire rack of servers with unholy amounts of processor cores, obviously. This isn't Google and a voice synthesiser.
"

Is two racks considered unholy now? I get that it's a lot to dedicate to the problem, but when Google is buying servers by the shipping crate, I don't get the impression that Watson is anything special in terms of hardware.
posted by pwnguin at 8:36 AM on February 17, 2011


Well this guy was able to write a computer program to emulate great composers, and people had trouble telling the difference. And this was on a mid-90s PC. Writing classical music isn't that much of a problem for a computer.

What are you talking about? Since when is classical music the only music that exists, or the most relevant music, or the litmus test by which a computer's ability to generate convincing music is determined? How does this address the function of a gamelan in villages in Java? How does this address the existence of Merzbow or Fennesz or Sonic Youth? How does this address the function of buskers in NYC subways? How does this address the way that jazz transitioned from a communal improvisatory model to one favoring individual soloists?

These kinds of simplifications of the function and creation of music—or any division of the arts—make me ill in terms of the ignorance they expose. The whole question of whether or not a computer could create a convincing this or that piece of art is completely unrelated to what art and music and dance mean for humans. I don't actually in the least have a problem with computers making music or any other piece of art; in fact, I think algorithmic composition is intriguing, and to me you can trace it, in part, back to Cage's aleatory works. But acting as though that is the complete cycle of a piece in a culture shows a real lack of understanding of how we consume, recycle, and imbue the arts with meaning within our societies. It is not a simple equation of "create acceptable object then consume object," there is a constant dialogue going on and many different ways to conceive of and interact with this cycle. This idea is itself fundamentally reductionist and leaves out a tremendous number of incredibly important variables and for that reason I suspect computers are going to be incapable of creating any sort of valuable work for a long, long time: the people working on these problems don't actually understand what the arts are.

In fact, at the point at which machines do start making interesting work, they will have become more or less sapient. But I also suspect that the point at which machines are making real art will be the point where they are making art that is relevant to themselves, and they won't really give a shit about making anything interesting to humans, unless of course there is some sort of dialogue going on between humans and machines. Which would be interesting I think, if they haven't wiped us from the map...
posted by dubitable at 8:42 AM on February 17, 2011 [1 favorite]


delmoi, yeah, went there once to disable it I think :) I didn't make the connection between Web History and search results although it is obvious. Perhaps I thought that uses only data from the Google toolbar, or through Chrome, or conditionally. Or who knows what I was thinking...

I think the fact that the redirector is so bloody fast is what made me think the links were clean, I've literally *never* seen any browser perform the redirect
posted by eeeeeez at 8:45 AM on February 17, 2011


Is two racks considered unholy now? I get that it's a lot to dedicate to the problem, but when Google is buying servers by the shipping crate, I don't get the impression that Watson is anything special in terms of hardware.

This is the wrong way to view it, IMO. Clock cycles per query? It is orders of magnitude more compared to Google.
posted by jaduncan at 8:45 AM on February 17, 2011 [1 favorite]


Jennings should have pulled Watson's arms out of its sockets when he lost. That would have set the bar.
posted by mazola at 8:49 AM on February 17, 2011 [2 favorites]


what Watson knows, he got from "reading" things like Wikipedia.

Phew, humans are safe after all.
posted by jeremias at 8:49 AM on February 17, 2011


But can it feel love. Poor, poor, Watson, savor this victory.
posted by Ad hominem at 8:50 AM on February 17, 2011 [2 favorites]


Phew, humans are safe after all.

Hey, Wikipedia is just the only source that I was sure I was at liberty to reveal to you. We also used other sources that were much less likely to have "zomg justin bieberz ma god OH HEY PENIS" occurring in them!
posted by badgermushroomSNAKE at 8:52 AM on February 17, 2011 [3 favorites]


I suspect computers are going to be incapable of creating any sort of valuable work for a long, long time: the people working on these problems don't actually understand what the arts are.

This seems to be a needlessly inflammatory and easily disprovable statement. You're effectively saying that all computer nerds can't appreciate art.
posted by jenkinsEar at 8:54 AM on February 17, 2011


...analyzing complex medical databases now that the game show event is over...

Good luck on that. As a former Project Manager for a major EMR implementation and a current statistician in medical informatics I can only imagine poor Watson wanting to cry when they fire him up for that. Not that I don't love my job but medical data capturing mechanisms (i.e. EMRs) are a very, very, very long way away from solving any non-billable issues.

That said, I look forward to meeting my first replicant. Assuming it's not Rick Deckard.
posted by playertobenamedlater at 8:54 AM on February 17, 2011 [1 favorite]


what Watson knows, he got from "reading" things like Wikipedia.
Phew, humans are safe after all.
I would love to see how Watson would do on Jeopardy if it had read Conservapedia instead.
posted by Flunkie at 8:55 AM on February 17, 2011


Yeah apparently Google handles 34,000 queries per second. At 90 servers with 32 cores each that's 2,880 CPUs running in parallel for one query with Watson. So Google would need to have 97,920,000 CPU cores to match Watson for each user.
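(A quick sketch of the arithmetic above, using the figures as quoted in the thread — they're back-of-envelope numbers, not verified specs.)

```python
# Scale Watson's per-query core count up to Google's reported query rate.
servers, cores_per_server = 90, 32
cores_per_query = servers * cores_per_server   # 2,880 cores working one clue
google_qps = 34_000                            # reported queries per second
cores_needed = cores_per_query * google_qps
print(cores_needed)  # 97920000
```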


Oh, and by the way. When I googled to get the number of CPU cores I found this PDF. It turns out Watson is actually built on open source software:
DeepQA is a massively parallel probabilistic evidence-based architecture. For the Jeopardy! Challenge, more than 100 different techniques are used to analyze natural language, identify sources, find and generate hypotheses, find and score evidence, and merge and rank hypotheses. Far more important than any particular technique is the way all these techniques are combined in DeepQA such that overlapping approaches can bring their strengths to bear and contribute to improvements in accuracy, confidence, or speed.

...

DeepQA is developed using Apache UIMA, a framework implementation of the Unstructured Information Management Architecture. UIMA was designed to support interoperability and scale-out of text and multimodal analysis applications. All of the components in DeepQA are implemented as UIMA annotators. These are components that analyze text and produce annotations or assertions about the text. Over time Watson has evolved so that the system now has hundreds of components. UIMA facilitated rapid component integration, testing and evaluation.
So IBM researchers wrote the annotators, I guess. But the actual lower level stuff to handle scaling up was based on open source software. (probably mostly written by IBM, but still)
posted by delmoi at 8:57 AM on February 17, 2011 [2 favorites]


Someone said it upthread, but the real implications of this sort of technology (and, yeah, I realize that this is mostly advertising -- but it is a good proof of concept) is that computer janitor level 1 jobs suddenly start to vanish in the face of NLP cloud computers. Along with all sorts of phone operators, and pretty much anything that requires customer service skills and a moderate amount of training. If you can digitize your procedures and best practices into a computer-readable corpus and then let it do its own thing it's cheaper than a staff of thousands with medical benefits, paid holiday leave, labor unions, and the other irritating parts of a human workforce.

I'm not trying to be a Luddite, and I'm really excited for advances in the NLP field, but I don't see how it leads to anything else for the average working American.
posted by codacorolla at 8:57 AM on February 17, 2011 [2 favorites]


I would love to see how Watson would do on Jeopardy if it had read Conservapedia instead.

The answer to pretty much every question would be "What is magic, Alex."
posted by John Kenneth Fisher at 9:02 AM on February 17, 2011 [2 favorites]


I'm not trying to be a Luddite, and I'm really excited for advances in the NLP field, but I don't see how it leads to anything else for the average working American.

Especially as the cost of the technology decreases.
posted by playertobenamedlater at 9:02 AM on February 17, 2011


Well this guy was able to write a computer program to emulate great composers, and people had trouble telling the difference.

There is an enormous difference between emulation and creation, as any would-be artist can tell you.
posted by IjonTichy at 9:03 AM on February 17, 2011 [2 favorites]


It's another step on the way towards the goal of not requiring programmers to tell computers what to do.
I hope you just left off the final clause of "entirely in an unambiguous artificial instruction language."

Because programmers are going to end up telling the computers what to do, no matter what. Artificial intelligence may become really great one day at deducing what subordinate actions to do in order to best achieve some specified higher goals, but if anything that is going to make it much, much more important for the programmers to get the higher goals right.
posted by roystgnr at 9:03 AM on February 17, 2011


This seems to be a needlessly inflammatory and easily disprovable statement. You're effectively saying that all computer nerds can't appreciate art.

I'm not at all saying they can't appreciate art; I'm saying those working on these sorts of programs, like Prof. Cole in the article delmoi linked to, have not yet wrapped their heads around all of the varied forms and functions of art, or shown that they are interested in doing so. In fact I'm sure that guy understands and appreciates Haydn and Beethoven way the hell more than I do. But his program has nothing to do with African drum language, for example. That is music also, but it's not music at all in the sense that Mozart's works are music: it requires thinking outside of that frame. And its function can hardly be replaced by a computer program; the entire idea of music as a work versus music as an act or ritual or form of communication is not something computer scientists have even attempted to consider in their cultural myopia. There is nothing inflammatory about pointing out something that ethnomusicologists have known for years.

What I am saying is far less inflammatory and far more representative of reality than the absurd statements I was responding to.
posted by dubitable at 9:09 AM on February 17, 2011 [1 favorite]


what Watson knows, he got from "reading" things like Wikipedia.

Which would explain why it knows everything about Pokemon, the Sith Lords, and the fact that Dave is a big asshole.
posted by It's Never Lurgi at 9:09 AM on February 17, 2011 [1 favorite]


I'm not trying to be a Luddite, and I'm really excited for advances in the NLP field, but I don't see how it leads to anything else for the average working American.

Well, think about it this way -- imagine how much work you could get done if you weren't spending all day on the phone with the idiot at the help desk.
posted by empath at 9:10 AM on February 17, 2011


Well, think about it this way -- imagine how much work you could get done if you weren't spending all day on the phone with the idiot at the help desk.


Yeah, now you'll be talking to that idiot automated voice machine.
posted by Fizz at 9:10 AM on February 17, 2011


In the same way that physical tools stop us from having to spend time doing repetitive time consuming physical tasks, mental tools will stop us from having to do boring, repetitive mental tasks. We'll be freed to do more intelligent, challenging work that computers will not be able to do yet.
posted by empath at 9:11 AM on February 17, 2011 [1 favorite]


Anybody have a link to a clip of Ken playing around with Watson after the show?
posted by circular at 9:12 AM on February 17, 2011


Yeah, now you'll be talking to that idiot automated voice machine.

Well, assuming Watson is generalizable, I'd prefer watson.
posted by empath at 9:12 AM on February 17, 2011


Excuse me, I meant Prof. Cope, not Cole.
posted by dubitable at 9:15 AM on February 17, 2011


Well, think about it this way -- imagine how much work you could get done if you weren't spending all day on the phone with the idiot at the help desk.

Yeah, but what if you are the "idiot" at the help desk?
posted by delmoi at 9:19 AM on February 17, 2011


People are really arguing that POETRY and MUSIC are not worthwhile? Like this is actually happening in 2011? I almost want to take that fucking bullshit to MetaTalk because the arts are my future fucking career. Because I write poems I've reached people who were moved to speak by my words; I think some of them are happier now because of some words I scribbled in a journal. Art is the informal science of how people speak to one another; without it you can cure the body but it's a hell of a lot harder curing the mind. If you really think the meaning of human life is curing leukemia you're a shallow foulword.

The reason fascist societies are so often known for censoring art is that, like you, they thought they could understand human experience without regarding the soul, where by "soul" I really just mean "the complex wordless fabric of emotionality that even the greatest art can only briefly approximate." We are more than eating, sleeping, and fucking, and one of civilization's greatest struggles is that of making us realize this. It's not over yet.

But tell me: If you really think science is so much more valuable than art, then what do you suggest we do once leukemia is cured? Once all the diseases are cured? Is science over then? Do we sit and wait for the next disease to cure? Is your belief that the best a man can be is a fucking percentage point on a progress bar indicating our next scientific breakthrough? Is the point of progress simply to progress? Or do you just think that it's irresponsible to find meaning in your life until people stop dying?

I like scientific progress. I'd love if computers turned smart enough to be conscious artists. But I doubt computers will ever monopolize art, simply because art's value is subjective. Just because a computer makes beautiful things doesn't mean I can't instead seek value in human creation whenever I want. BUT! What I REALLY want to see is a computer conscious enough to make art for ITSELF. A computer's consciousness will not be the same as ours. What will it be like when they start to understand their own complex souls well enough to express THEMSELVES?

That'll be wild. If I could choose between being healthy and never reading a poem by a computer that expresses thoughts on its consciousness, and reading that poem but dying one day of leukemia, I'd pick the latter so fucking hard. If you'd rather pick the former I definitely wouldn't blame you. The fact that we can have such radically different outlooks on human nature is one of the things that make people so compelling. But please don't suggest that I don't have a right to exist the way I do, because that's what you say with such blanket vile statements against art's value.

(That or you're just being Wrong on the Internet without greater purpose. I kind of want to write an existential poem about you.)
posted by Rory Marinich at 9:20 AM on February 17, 2011 [1 favorite]


People are really arguing that POETRY and MUSIC are not worthwhile?

Uh... no? I mean, one person said it would be better to use powerful AIs to do medical research than write poetry. And that was out of like 177 comments. A couple of people said that music could easily be composed by computer (which is true).

Just because you think something can be done by computer doesn't mean you don't think it's worthwhile.
posted by delmoi at 9:22 AM on February 17, 2011


I would love to see how Watson would do on Jeopardy if it had read Conservapedia instead.
The answer to pretty much every question would be "What is magic, Alex."
Nah, I bet some would be along the lines of "Why does Barry Soetoro hate America, Alex".
posted by Flunkie at 9:25 AM on February 17, 2011


The reason fascist societies are so often known for censoring art is that, like you, they thought they could understand human experience without regarding the soul, where by "soul" I really just mean "the complex wordless fabric of emotionality that even the greatest art can only briefly approximate."
Okay this is just ridiculous. Look at any fascist society. Fascists love art. They like art that glorifies the state. It's specifically because they understand the power of art that they censor it. If they didn't think art was important, they wouldn't censor it.

On the other hand in the U.S. and other free societies you have some funding for the arts, but not much. And most art is only an instrument for commercial expression, not true expression.
posted by delmoi at 9:26 AM on February 17, 2011


Yeah, but what if you are the "idiot" at the help desk?

I actually am (at least part of the day) the idiot at the help desk, and I already spend half my day looking up shit on google to answer questions that I don't know the answer to. I'd rather not have to waste my time doing that, though.
posted by empath at 9:26 AM on February 17, 2011


I'm not trying to be a Luddite, and I'm really excited for advances in the NLP field, but I don't see how it leads to anything else for the average working American.

Well, think about it this way -- imagine how much work you could get done if you weren't spending all day on the phone with the idiot at the help desk.

I do think there's a very real problem we'll be up against some day (maybe not for decades), and we don't need full-out, conscious AIs to face this problem. Few people like to admit this, but most of us have jobs that could be partly-to-largely automated.

I'm a programmer, which means I do a lot of creative work. Still, there's a significant extent of my day that's spent pasting in boiler-plate code, using proven solutions, and making pretty obvious modifications. "I could train a monkey to do it," so to speak. And on some level I'd LOVE to do that, so that I could spend all my time on really creative stuff. On some days, that means I'd only have to come to work for two hours (if the same amount of creative work was expected of me as it is now). Or it means that I could spend my time at work dealing with more complex stuff that doesn't get done because, right now, I'm forced to do the busy work, and that takes up so much of my time. There would definitely be SOME kind of change in my life if I never had to do rote tasks or tasks that involved look-ups or relatively-imaginable algorithms.

From what I can tell, life is similar for my doctor. He does some real creative thinking, but a lot of his day is spent taking histories and running them through mental search engines. I'm sure I'll always be glad we have doctors for difficult health problems, but I can imagine a time when I go online, type in my symptoms (or, better yet, feed my biometrics into the web), and get a diagnosis (and maybe a prescription) that's as good as one made by most human doctors.

Lots of jobs we think of as "jobs for smart people" are really jobs for people who have amassed a lot of data in their brains. I don't mean to make light of those achievements. But it's stuff that one can pretty easily imagine a machine doing, given enough horsepower.

My point is that it looks like we're approaching a time in which many, many people will find that large parts of what they do can be done cheaper by machines. And, unless we're no longer living in a capitalist civilization, those things WILL be done by machines.

Which leaves us in that "nirvana" when humans get to do really creative work all day. But putting aside economic considerations (are there really enough creative jobs to go around?), this is a MASSIVE change. We don't have an education system that turns out creative people. (Most people who are creative are that way via luck and genes -- they're not helped much by their schooling.)
posted by grumblebee at 9:26 AM on February 17, 2011 [5 favorites]


A couple of people said that music could easily be composed by computer (which is true).

This is not what was said, or what is true. What people said is that music could easily be imitated by computer. The difference is important.
posted by IjonTichy at 9:27 AM on February 17, 2011


Which leaves us in that "nirvana" when humans get to do really creative work all day. But putting aside economic considerations (are there really enough creative jobs to go around), this is a MASSIVE change. We don't have an education system that turns out creative people. (Most people who are creative are that way via luck and genes -- they're not helped much by their schooling.)
In a lot of places, they still teach the old pencil and paper algorithms for solving arithmetic problems, instead of simpler methods and mathematical concepts using computers.
posted by delmoi at 9:29 AM on February 17, 2011


My point is that it looks like we're approaching a time in which many, many people will find that large parts of what they do can be done cheaper by machines. And, unless we're no longer living in a capitalist civilization, those things WILL be done by machines.

My expectation is that there will be new jobs that aren't automated, that depend on these other jobs being automated. What they will be, I have a hard time even imagining now.
posted by empath at 9:33 AM on February 17, 2011


Ah, here's the clip: Ken probably pissing Watson off
posted by circular at 9:50 AM on February 17, 2011


You can be sure the kids in the polytechnics in Singapore are learning them now, though.

Yeah, tbh, I half expect to see an army of cloned robot mechas remote-controlled by South Korean Starcraft players landing in California any day now....
posted by empath at 9:51 AM on February 17, 2011


Take a company like Sotheby's. (I used to work for them.) They pay big salaries to art experts, and rightly so! Those experts have extremely unique skills. How many people in the world have spent four decades mastering the intricacies of 14th-Century French Ceramics?

But when you come down to it, in theory, anyone COULD become such an expert. What the expert did was spend years and years and years cramming information into himself. Now, he can look at a vase, put all its features into his mental "database," and spit out a very good guess as to whether or not it's genuine and when and where it was made.

His skill is definitely in the category of what most of us would classify as "something only a smart person could do," but it's not the same type of task as writing a poem or inventing a totally new sort of dessert. It's a complex sort of db query.

There are a lot of jobs like that: jobs in which the skill is having-spent-the-time to cram a lot of facts in your head.

If we replace him with a machine, he'll still be needed at times. There WILL be creative problems -- times when you can only figure out whether or not a piece is likely to be genuine by listening to its owner tell you its history and making various deductions. But, from what I could see when I worked there, these cases were relatively rare.

So in a world with lots of really good Watsons, Sotheby's could (and would) fire most of its experts. Maybe they'd have one or two for really, really tough cases. Although that's questionable, because who will become an expert when there are almost no jobs in that field? And Sotheby's might decide that computers do a good-enough job.

Other people will disagree, but I think we've sort of seen this happen with special effects. To my eyes, most CGI looks worse than work done with makeup, animatronics and miniatures -- but Hollywood seems to think it's good enough. It's cheaper and it won't keep most people away from the movies.
posted by grumblebee at 9:51 AM on February 17, 2011 [1 favorite]


More importantly, what's the soft-rounded serif fonts they are using for Watson and in the IBM promo videos?
posted by wcfields at 9:59 AM on February 17, 2011


delmoi: "When I googled to get the number of CPU cores I found this PDF."

Neat find. Apparently it also uses Hadoop, so in the vague sense of throwing a lot of computers into a MapReduce cluster and throwing a hard problem at it to solve under tight time constraints, it is Google with a voice synth frontend.

I'd also assume that Google's implementation is much more optimized given that they have their own kernel engineers, their own MapReduce implementation team, and so on. There's also likely some caching effects available that Watson can't take advantage of (yet). Assuming that IBM's goal is to sell computers, it makes more sense for them to throw more hardware at the problem than to make the existing infrastructure more efficient for what amounts to a demo. Which is an equally valid approach, but it does make it harder to compare query throughput.

The IBM whitepaper shows the Watson cluster has 16 TB of RAM and 2,880 processors, so roughly ~5GB per processor, which looks like double what you can get from Amazon Elastic MR (could be wrong here -- I'm not sure what's going on with EC2 Compute Units). 2,880 CPU-hours (assuming Amazon has that much free) would cost about a thousand bucks. I guess it's fair to double that if memory is critical to performance. Pricey, but affordable. Probably too expensive for ad-supported revenue.
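(Sketching the cluster figures above — the RAM and core counts come from the whitepaper as quoted in-thread; the per-core memory is just this arithmetic, not an official spec.)

```python
# RAM per core for the Watson cluster as described above.
ram_tb, processors = 16, 2880
gb_per_processor = ram_tb * 1024 / processors  # total GB spread across cores
print(round(gb_per_processor, 1))  # 5.7, i.e. "roughly ~5GB per processor"

# One wall-clock hour of the whole cluster, in CPU-hours:
cpu_hours_per_hour = processors  # 2,880 CPU-hours per hour of play
```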

We've had the hardware to do this for years, so to me, the algorithm itself is more interesting than the hardware thrown at the problem. To bring this back to the original quote, I think you can vaguely describe it as Google on steroids. It's almost doing the reverse of search: given a phrase, return the most probable query. But underneath, the design seems to share a lot of the same characteristics and technology as Google.
posted by pwnguin at 10:00 AM on February 17, 2011


I think the thing that causes so many people to fixate on the buzzer issue is that Watson knew when to press the buzzer based on different stimuli than Jennings and Rutter. On Jeopardy, contestants can't buzz in until Alex is done reading the clue. An assistant off-stage presses a button that unlocks the buzzer. If you buzz in early, you're locked out for about a half second. There is a little light on the podium that comes on when the buzzers are unlocked to notify the contestants. However, successful contestants know that you can't wait for the light, you have to anticipate when Alex will be done reading the clue. This implies that there is some slight delay between when the buzzers are unlocked and the light comes on.

Watson doesn't have to deal with that. It can't see a light so it is wired up to the signal. It can't anticipate when Alex has finished reading the clue because it's deaf, but it doesn't have to anticipate anything; it knows the millisecond that it can buzz in. It isn't that it has better reaction time (it does) but that it reacts to different information than the other contestants. (Source)

The question IBM is asking with this exhibition is, "Is Watson better at Jeopardy than the two best contestants ever?" I don't think the playing field was level. If we're talking strictly about a machine that plays Jeopardy, it should have to read the clues from the Jeopardy board, listen to Trebek, and figure out how to answer and when to buzz in based on receiving the information and reacting to the same information that the human contestants do.

The next step will be to get Google or Microsoft or someone to see if they can build a computer that can beat Watson.
posted by VTX at 10:04 AM on February 17, 2011 [1 favorite]


it should have to read the clues from the Jeopardy board, listen to Trebek, and figure out how to answer and when to buzz in based on receiving the information and reacting to the same information that the human contestants do.

These are trivially solved and uninteresting problems.
posted by empath at 10:06 AM on February 17, 2011


it should have to read the clues from the Jeopardy board, listen to Trebek, and figure out how to answer and when to buzz in based on receiving the information and reacting to the same information that the human contestants do.

Not if we wanted to know the real question, can the computer find an answer to a question better than people can? We didn't require Deep Blue to pick up and move pieces around a chess board.
posted by Space Coyote at 10:10 AM on February 17, 2011 [2 favorites]


From what I can tell, life is simular for my doctor. He does some real creative thinking, but a lot of his day is spent taking histories and running them through mental search engines.

More like making sure all of the billing codes and documentation in the progress note are complete so that they can get paid. IANAD but I know firsthand that a majority of providers spend the majority of their 15 minutes during a visit making sure checklists are completed.

I can imagine a time when I go online, type in my symptoms (or, better yet, feed my biometrics into the web), and get a diagnosis (and maybe a prescription) that's as good as one made by most human doctors.

Not as long as the AMA, insurance companies, and malpractice insurers still exist.
posted by playertobenamedlater at 10:13 AM on February 17, 2011


Not as long as the AMA, insurance companies, and malpractice insurers still exist.

I think the opposite. You go visit a doctor, you fill out a form, the computer offers a preliminary diagnosis, the doctor double checks it as a sanity check, and if he changes it and something goes wrong, then the lawsuit will be about why he didn't go with the computer expert's opinion.
posted by empath at 10:21 AM on February 17, 2011 [1 favorite]


can the computer find an answer to a question better than people can

That wasn't answered at all though. If you really want to test that, you'd just give like, 100 random Jeopardy questions to Ken, Brad, and Watson separately and see how many they answer right in say, 10 minutes. (This is actually pretty much how the first round of Jeopardy qualifying works, BTW.)
posted by kmz at 10:22 AM on February 17, 2011


Be sure to check out PBS's NOVA website about 'Watson' with video documentary: Smartest Machine on Earth [53:07].
posted by ericb at 10:23 AM on February 17, 2011 [2 favorites]


...why he didn't go with the computer expert's opinion.

Which will be developed and decided on by whom exactly?
posted by playertobenamedlater at 10:25 AM on February 17, 2011


These are trivially solved and uninteresting problems.

This is true for humans too (at least I don't find reading hard), but if Jennings were able to skip the step of interpreting visual data and got the buzzer signal hooked straight to his brain, I bet he'd be able to beat Watson without even trying very hard.

Not if we wanted to know the real question, can the computer find an answer to a question better than people can? We didn't require Deep Blue to pick up and move pieces around a chess board.

If the game depended on how fast/accurately the pieces were moved, then you bet I'd make it a requirement.

We weren't able to determine the real answer to the question, "Can the computer find an answer to a question better than people can" because the buzzer is such an integral part of the game it was playing.

The speed component is largely a matter of hardware. You can make it faster than people if you throw enough brute force processing at it. The real breakthrough is the software. If you placed the constraint that the hardware had to be no larger/heavier than the human brain, it could have answered the questions just as well but it would have taken much, much longer. To determine if it is better, the time component would have to be virtually eliminated and all the participants given the chance to answer.
posted by VTX at 10:26 AM on February 17, 2011 [1 favorite]


Which will be developed and decided on by whom exactly?

Presumably it would be developed on the basis of multiple double-blind randomized control trials and decided on by experts using computers to make sense of the results using statistical analysis.

The real hard part in medical decision making is finding the right way to feed the available study results and diagnostic information into the machine. Once the machine has good data, it's going to be way better at statistical reasoning than a human is (e.g. how many doctors actually sit down and work out an exact differential diagnosis using Bayes' Theorem?). The trick is enabling a computer to understand medical studies, a patient's free-form description of their symptoms, and the human written notes in their medical history. Watson demonstrates that we're getting a lot closer to that.
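(A toy illustration of the Bayesian reasoning the comment above alludes to — the disease prevalence and test accuracy numbers are invented for the example, not from any study.)

```python
def posterior(prior, sensitivity, false_positive_rate):
    """P(disease | positive test) via Bayes' theorem."""
    # Total probability of a positive test: true positives + false positives.
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# A rare condition (1% prevalence) and a decent test (95% sensitive,
# 5% false positives): the posterior is far lower than intuition suggests,
# which is exactly the kind of calculation doctors rarely do explicitly.
print(round(posterior(0.01, 0.95, 0.05), 3))  # 0.161
```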

What's more: if the machine makes a mistake, we can retrace exactly how it came to the incorrect conclusion and correct the error. Just try that with a human. This also has the side benefit that sometimes it will become apparent that the machine made the mistake because there was a gap in its knowledge, thus suggesting a new area for research.
posted by jedicus at 10:35 AM on February 17, 2011 [3 favorites]


However, successful contestants know that you can't wait for the light, you have to anticipate when Alex will be done reading the clue. This implies that there is some slight delay between when the buzzers are unlocked and the light comes on.

No it doesn't, unless the system is badly designed. By listening to the question, they can anticipate when the buzzer will be unlocked, but that event happens simultaneously with the notification being sent to the humans and the computer. The source you linked to confirms this.

I obviously don't know how they actually set everything up in hardware, but done properly, it could all be synchronized to within microseconds.

It isn't that it has better reaction time (it does) but that it reacts to different information than the other contestants.

This is true only insofar as the humans have additional information (audio) that isn't available to Watson.
posted by teraflop at 10:35 AM on February 17, 2011


From a really awesome Q&A with Ken Jennings on Tuesday morning:

I tried to think of the computer the same as any other opponent, but in practice that turned out to be pretty hard, given the creepy insectoid clicking of its mechanical thumb buzzing relentlessly just to my left.

He might have been joking (he cracked a joke in basically every answer), but it seems that they actually had Watson wired up to the standard Jeopardy! clicker for the show.
posted by kaytwo at 1:54 AM on February 17


And in cracking a joke in basically every answer, Jennings may have cleverly raised the bar of the game. I suspect that quick, contextual, actually-funny jokes would be a much more difficult achievement for a computer than giving accurate answers to fact-based questions. Humans for the win. For now.
posted by nickjadlowe at 10:38 AM on February 17, 2011 [1 favorite]


It sounds like the people who made Watson are getting their jollies from destroying other ordinary humans.
posted by zennie at 10:41 AM on February 17, 2011


I'll amend my previous statement: I think we all want to know whether Watson is better at answering Jeopardy questions. I think if you went to Ken Jennings and asked him if he would like to add a randomized handicap delay to Watson's button clicker he'd have said 'no'. It would have ruined it, basically.
posted by Space Coyote at 10:43 AM on February 17, 2011


It sounds like the people who made Watson are getting their jollies from destroying other ordinary humans.

That's because humanity is crunchy and tastes good with ketchup!

No, but seriously, we envision Watson as assisting humans in processing quantities of information so vast that humans simply can't monitor them in a timely way, not as a human-killer or job-killer.

Though we're considering giving him a cool metal endoskeleton in the next release...
posted by badgermushroomSNAKE at 10:45 AM on February 17, 2011 [2 favorites]


Ah, here's the clip: Ken probably pissing Watson off

That is priceless. I have to wonder if in the future, people will look back on things like that as some kind of anti-AI slur, and it will be found vaguely distasteful or something.

I'm not trying to minimize civil rights or anything, I think it's legitimately interesting to think about.
posted by Solon and Thanks at 10:47 AM on February 17, 2011


I wouldn't go that far, but when Watson gave "What is Toronto????" as the Final Jeopardy answer all those IBM suits let out a sigh like their kid peed his pants or something, so there may be a lot of unchecked anthropomorphism going on in Yorktown Heights.

Well, there probably IS unchecked anthropomorphism, because people tend to anthropomorphize, but I don't see why you assume that's why the IBMers were sighing.

I write little Flash UI programs for a living, and I don't in any way think of them as sentient. But I still sigh when they don't work right because I want them to work right. And because I'm the guy who has to fix them when they don't.
posted by grumblebee at 11:29 AM on February 17, 2011


I think if you went to Ken Jennings and asked him if he would like to add a randomized handicap delay to Watson's button clicker he'd have said 'no'. It would have ruined it, basically.

Actually, you're right. Ken Jennings:
Watson does have a big advantage in this regard, since it can knock out a microsecond-precise buzz every single time with little or no variation. Human reflexes can't compete with computer circuits in this regard. But I wouldn't call this unfair...precise timing just happens to be one thing computers are better at than we humans. It's not like I think Watson should try buzzing in more erratically just to give homo sapiens a chance.
(emphasis mine)
posted by John Kenneth Fisher at 11:33 AM on February 17, 2011


Having watched parts of the series, my theory prior to the airing was that Watson couldn't answer incorrectly during the show in a way that would cause it to lose the game. Say the average player gets 2 or 3 wrong while an above-average player gets 1: what was Watson's success rate? What happens when we rely on the computer for the answers and it is wrong? I consider the final clue, "Toronto????", a guess, but ultimately a better indication that Watson can't get them all no matter how much time it's given to answer, and in some fields getting the correct answer matters more than buzzing in first.
posted by brent at 11:35 AM on February 17, 2011


However, successful contestants know that you can't wait for the light, you have to anticipate when Alex will be done reading the clue. This implies that there is some slight delay between when the buzzers are unlocked and the light comes on.
Interesting. The thing is, human beings have about a 250ms reaction time. So even if they knew the answer, it would take them about 250ms. Presumably there's another 250ms delay from when the assistant activates the buzzers.

So, what you're really trying to do is guess when the assistant wants to push the button, and try to push yours at the same time, that way your fingers will both depress around the same time, a quarter second later. But you have to give it a little extra time, to make sure it's not too early.

But for a computer, that's not a problem.
Not if we wanted to know the real question: can the computer find an answer to a question better than people can? We didn't require Deep Blue to pick up and move pieces around a chess board.
But we didn't learn that from this either, because of the buzzer issue. All we learned is that the computer can answer pretty well, but push the button really fast.
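That timing asymmetry is easy to simulate. A toy sketch, assuming made-up latency figures (the 250ms human reaction time mentioned above, plus guessed anticipation jitter and machine latency; none of these are the show's real numbers):

```python
import random

def human_buzz():
    """Anticipate the unlock moment, then add ~250ms reaction time."""
    anticipation_error = random.gauss(0, 80)   # ms of error guessing the unlock
    reaction = random.gauss(250, 30)           # human reaction time, ms
    t = anticipation_error + reaction
    return t if t >= 0 else float("inf")       # too early = locked out (simplified)

def watson_buzz():
    """React directly to the unlock signal, with near-zero latency."""
    return random.gauss(8, 1)                  # ms after the unlock

random.seed(42)
trials = 10_000
watson_wins = sum(watson_buzz() < min(human_buzz(), human_buzz())
                  for _ in range(trials))
print(f"Watson buzzes first in {watson_wins / trials:.1%} of trials")
```

Under these assumptions Watson wins the buzzer race almost every time, which is the "softball" point made earlier in the thread.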
posted by delmoi at 11:45 AM on February 17, 2011 [1 favorite]


if you were just told that a person won $70,000 on jeopardy, would you assume that he was very smart or that he could push a button really fast?
posted by empath at 12:07 PM on February 17, 2011


If you don't believe in a soul, then then the question becomes more fine-grained. You believe -- at least in theory -- that at some point, when a computer has a certain number of features, it will be as self-aware, conscious and "human" as we are. But what ARE those features? At what point is the machine ALMOST at that point but not quite there? What would it take to bring it all the way there?

grumblebee, as someone who doesn't believe in the existence of a soul and who also does not believe that the hypothetical computers you've described in this thread would by necessity be conscious, I think this is the hole in your argument. There is a third possible way, in which consciousness (glibly, self-awareness and the perception of qualia) is a property of the universe that requires certain structural attributes of the material that gives rise to it. Obviously, if we managed to create something indistinguishable from "real" consciousness by programming a computer, it would be impossible to tell whether it was truly conscious or not, but my position on the matter leads me to suspect that there will always be some circumstance into which you could drop your hypothetical computers that would reveal their status as elaborate mimicry rather than true sentience. In any case, it stands as another possibility, though maybe a cheap one given how far it sits outside the realm of testability.
posted by invitapriore at 12:07 PM on February 17, 2011 [1 favorite]


Obviously, if we managed to create something indistinguishable from "real" consciousness by programming a computer, it would be impossible to tell whether it was truly conscious or not, but my position on the matter leads me to suspect that there will always be some circumstance into which you could drop your hypothetical computers that would reveal their status as elaborate mimicry rather than true sentience.

I would suggest to you that all forms of human intelligence are also thus.
posted by empath at 12:09 PM on February 17, 2011 [1 favorite]


if you were just told that a person won $70,000 on jeopardy, would you assume that he was very smart or that he could push a button really fast?
Well, now that I know more about jeopardy, I would have to conclude that they were both smart and quick at button pushing...
posted by delmoi at 12:11 PM on February 17, 2011 [3 favorites]


if you were just told that a person won $70,000 on jeopardy, would you assume that he was very smart or that he could push a button really fast?

Both, actually. I would assume the contestant was well read across many disparate topics, had an excellent memory, possessed remarkable fleetness and decisiveness of thought, and had good reflexes too.
posted by Mister_A at 12:12 PM on February 17, 2011


if you were just told that a person won $70,000 on jeopardy, would you assume that he was very smart or that he could push a button really fast?

If I were told that a person won $70,000 on Jeopardy against Ken Jennings and Brad Rutter, I would assume he or she was very smart and that what distinguished him or her in that game from the other very smart people, was pushing the button really fast. This point ain't rocket science.
posted by Zed at 12:13 PM on February 17, 2011


How does this address the function of a gamelan in villages in Java?

I had to chuckle at my own confusion over this sentence... are we talking about music or programming here?

/me goes off to look up the gamelan() function in Villages.java
posted by primer_dimer at 12:13 PM on February 17, 2011


if you were just told that a person won $70,000 on jeopardy

In a day or over a two-three day period?
posted by playertobenamedlater at 12:32 PM on February 17, 2011


Because programmers are going to end up telling the computers what to do, no matter what. Artificial intelligence may become really great one day at deducing what subordinate actions to do in order to best achieve some specified higher goals, but if anything that is going to make it much, much more important for the programmers to get the higher goals right.

Sure, that's also certainly true. Capability gets standardized, packaged and "pushed down" to the great unwashed masses, and the clergy moves up to play with newer, exotic and still unformed promises, that is the natural order of things. There will always be unwashed masses and there will always be clergy. But it doesn't change linearly. Once you have a machine that you can use to call your friends, that will find recipes for you, and that allows you to order stuff, and you can do all that without any programming, that's the most use a lot of people will ever have for such a device. They will never need a programmer.
posted by eeeeeez at 12:40 PM on February 17, 2011


There is a third possible way, in which consciousness (glibly, self-awareness and the perception of qualia) is a property of the universe that requires certain structural attributes of the material that gives rise to it

I may be misunderstanding what you're saying, but my problem with it -- or how I interpret it -- is that it's an arbitrary statement. You could say it about almost anything.

Why haven't we been able to cure cancer? Maybe the-ability-to-cure-cancer is a property of the universe that requires certain structural attributes of the material that gives rise to it. Maybe our medical tools and procedures don't have those structural attributes.

Your claim is that, perhaps, the universe has a quality X, that's necessary for consciousness and which can't possibly be modeled on a computer.

Sure, that's possible, but what makes you think it might be true? What sort of stuff is that attribute made of? You can say, "I don't know," which is fair enough, but then you're basically saying, "Computers can't become conscious for mysterious reasons." Which might be true, but it's not a very useful statement.
posted by grumblebee at 12:43 PM on February 17, 2011


I would suggest to you that all forms of human intelligence are also thus.

I've defined that out of the issue by assuming that humans all have true consciousness, and so can't encounter a situation in which they would act contrary to the way a true consciousness would. There are some problems with this -- namely, it ignores or plays down the status of children or people with severe brain trauma or etcetera with regards to being conscious -- but those seem like matters for a different debate.

In any case, I'm not quite sure what you mean. It sounds like you're making a stronger version of grumblebee's point -- where he argues that a thing that can act in every sense like something conscious is de facto conscious, you argue that consciousness is nothing other than those behaviors -- but how do you know? It's another possibility, sure, but the experience of consciousness at least naively seems to make that unlikely.

Your claim is that, perhaps, the universe has a quality X, that's necessary for consciousness and which can't possibly be modeled on a computer.

Sure, that's possible, but what makes you think it might be true? What sort of stuff is that attribute made of? You can say, "I don't know," which is fair enough, but then you're basically saying, "Computers can't become conscious for mysterious reasons." Which might be true, but it's not a very useful statement.


This is a good point, but I have an answer as to what sort of stuff satisfies that property: brain tissue. I don't believe that consciousness can't possibly be modeled on a computer, but I do believe that the model of computation we're assuming here -- a long set of pre-formed, imperative commands that essentially form a large and static map from input to output states -- probably isn't up to the job.
posted by invitapriore at 12:48 PM on February 17, 2011


a long set of pre-formed, imperative commands that essentially form a large and static map from input to output states -- probably isn't up to the job.

That's not how computers or ai work.
posted by empath at 12:51 PM on February 17, 2011


I've defined that out of the issue by assuming that humans all have true consciousness, and so can't encounter a situation in which they would act contrary to the way a true consciousness would.

That's kind of a big assumption, isn't it? If you can't define what consciousness is, how can you say that humans have it?
posted by empath at 12:54 PM on February 17, 2011 [1 favorite]


That's not how computers or ai work.

Seems to me a fair cop that that's pretty much exactly how a Turing machine works.

If there's something going on in human cognition that absolutely couldn't be modelled by a Turing machine, though, I don't know what it is. Maybe we are just stimulus-response machines that are kind of full of ourselves.
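For the curious, a Turing machine really is just a (state, symbol) lookup table driving a read/write head. A minimal sketch, using the textbook unary-increment machine (not tied to any specific claim in the thread):

```python
# A minimal Turing machine: (state, symbol) -> (write, move, next state).
# This one appends a '1' to a unary number -- i.e., it increments it.
RULES = {
    ("scan", "1"): ("1", +1, "scan"),   # skip over the existing 1s
    ("scan", "_"): ("1", +1, "done"),   # hit the blank: write a 1 and halt
}

def run(tape, state="scan", head=0):
    tape = dict(enumerate(tape))             # sparse tape; missing cells are blank
    while state != "done":
        symbol = tape.get(head, "_")
        write, move, state = RULES[(state, symbol)]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(tape))

print(run("111"))  # -> 1111
```

The whole machine is the RULES table plus a loop, which is what "a large map from input to output states" cashes out to.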
posted by Zed at 12:56 PM on February 17, 2011


I don't believe that consciousness can't possibly be modeled on a computer, but I do believe that the model of computation we're assuming here -- a long set of pre-formed, imperative commands that essentially form a large and static map from input to output states -- probably isn't up to the job.

Oh, no argument for me on that point. I don't expect any one particular sort of software engineering to pay off in terms of creating a conscious AI. I think it's way too early to make calls like that.

What's interesting to me is whether or not you can, theoretically, model consciousness on a computer that is generally like a modern computer. (You're allowed to say, "Yes, but you'd need to use all the processors we currently have in the world plus a thousand more." You're not allowed to say, "Yes, but you'd need a completely different computer architecture that hasn't been invented.")

If you (or someone) is going to claim that it's NOT possible, then, presumably this means there's a process in nature that can't be modeled in computers either because it can't be broken down (or converted) into binary-data-operated-on-by-logic or because it's not even theoretically possible to harness the amount of processing power (or memory capacity) that would be needed for the calculations.

To claim it's likely that we can't model consciousness, I think you need to explain why it fits one of those possibilities:

"even if we used all the processors in the world, they wouldn't give us enough power because______. In fact, there's not enough matter in the universe to build us the machine we'd need to do those calculations."

or

"we can't model consciousness, because it almost definitely requires randomness, and computers are deterministic machines."
posted by grumblebee at 1:02 PM on February 17, 2011 [2 favorites]


Trebek never gave Watson a congratulatory handshake or monitor rub at the end of each game. Why so cold, Alex? Give Watson some love.

If Watson's "Jeopardy!" experience was anything like mine, then surely Trebek lectured him during the closing credits of the first game by saying things like, "Of course the Final Jeopardy answer was Chicago! It was obvious. How could you ever think it was Toronto?"

Buzzing *is* Jeopardy at the level of Ken and Brad. They don't ring in foolishly, answer things 'out of category', or slightly misword a response which allows another contestant to ring in and steal the right one. I would have liked to have seen Ken take on Watson closer to his run, where he had all but mastered the buzzer timing. I was a "Friday" winner, but due to a taping hiatus, my "Monday" show was shot six weeks later. I lost mental momentum, buzzer and otherwise.
posted by candyland at 1:08 PM on February 17, 2011 [1 favorite]


He argues that a thing that can act in every sense like something conscious is de facto conscious....

That's not exactly what I'm claiming. If that were enough, then characters in books would be conscious, because they sure act like they are. They can fool us (at least briefly) into thinking they are.

A key ingredient is that the supposedly-conscious being thinks he's conscious. It's not enough -- for me -- if he just fools us into thinking he is.

Let's say that we learn how to tell, via an MRI, whether or not a being thinks he's conscious. If Fred feels like he's conscious, then the MRI makes a red light turn on.

If R2D2 passes the Turing Test -- we can't tell him apart from a conscious being -- but he doesn't make the red light go off, he's not conscious. Not by my definition.

But if someone makes us think he's conscious and he, himself, thinks he's conscious, then I think it makes sense to say he's conscious. I'm not sure what else you'd want to make a requirement.
posted by grumblebee at 1:10 PM on February 17, 2011


Which brings up the HAL9000 thing again. Given that we DON'T have that MRI test, and that, even if we did, we couldn't give it to AIs (because MRIs only work with wetware), we can only go by what the machine reports. He can tell us he has qualia, and we can believe him or not.

As is true for the reports of our fellow humans.
posted by grumblebee at 1:13 PM on February 17, 2011




If you're going to assert that the human mind (including consciousness, whatever that is, exactly) can be modeled by a Turing Machine, then the burden of proof is on you. Good luck with that.
posted by Crabby Appleton at 1:15 PM on February 17, 2011


That's not exactly what I'm claiming. If that were enough, then characters in books would be conscious, because they sure act like they are. They can fool us (at least briefly) into thinking they are.

The key part of the turing test is to appear to be conscious under interrogation.

Let's say that we learn how to tell, via an MRI, whether or not a being thinks he's conscious. If Fred feels like he's conscious, then the MRI makes a red light turn on.

If R2D2 passes the Turing Test -- we can't tell him apart from a conscious being -- but he doesn't make the red light go off, he's not conscious.


I don't understand what you're suggesting. Examining a computer under an MRI would be kind of pointless.
posted by empath at 1:15 PM on February 17, 2011


If you're going to assert that the human mind (including consciousness, whatever that is, exactly) can be modeled by a Turing Machine, then the burden of proof is on you.

We're not saying it can, we're just saying that there's no reason to believe it can't.

I think. I think it can, but I wouldn't go so far as to suggest that that is more than a hunch at this point.
posted by empath at 1:16 PM on February 17, 2011


If you're going to assert that the human mind (including consciousness, whatever that is, exactly) can be modeled by a Turing Machine

Well, let's talk about what can or can not be modeled on a Turing Machine.

Why is it reasonably or unreasonable to say that the human mind can be modeled on one? Why would anyone make either statement and why is the burden of proof on the person making the counter claim?

Are we in agreement that SOME systems can be modeled on TMs? For instance, do we agree that a TM can model the movements of the planets in our solar system? Or at least that it can model them to a degree of accuracy necessary for most purposes we care about?

Are we in agreement that it's IMPOSSIBLE to model the entire universe on a TM that's contained within the universe?

What is special about the mind that would exempt it from being modeled -- or that would make it an extraordinary claim that it can be modeled?
posted by grumblebee at 1:24 PM on February 17, 2011


I think, personally, that attempting to model the mind and human thought in general is the wrong approach to AI, and has proven to be wrong for pretty much every AI application that has been developed.
posted by empath at 1:28 PM on February 17, 2011


I don't understand what you're suggesting. Examining a computer under an MRI would be kind of pointless.

Sorry, that was a very unclear, sloppy metaphor. What I mean is that for me to be SURE you're conscious, you need to show all the outward signs of a conscious being -- you need to appear to make decisions, laugh, cry, etc -- and you need to report inner feelings of consciousness.

If I have any reason to believe you're a good mimic, but that there's "no one home inside," you're not conscious in my book.

But I can't get inside your head (there's no perfect MRI), so if you seem conscious and you report consciousness -- and your report seems trustworthy -- what option do I have other than to accept that you're conscious? What other definition of consciousness is useful? To me, that's the "walks like a duck" point. If we create an AI that can pass the Turing Test AND it also reports consciousness, how is it different from a person? At that point, I would feel compelled to treat it as a person, granting it full rights, etc. And I would be a murderer if I switched it off.

I realize that this machine might be fooling me. But so might you.
posted by grumblebee at 1:29 PM on February 17, 2011 [2 favorites]


All we learned is that the computer can answer pretty well, but push the button really fast
All we learned is that US-backed Middle Eastern dictatorships can be overthrown nonviolently, and streets in Egypt sometimes have cars on them.
posted by Flunkie at 1:29 PM on February 17, 2011 [2 favorites]


It might actually be that human intelligence was a lucky, half-assed implementation of pure, true intelligence, which is what we're developing in computers. (Ie, they're made out of meat)
posted by empath at 1:30 PM on February 17, 2011 [1 favorite]


I think, personally, that attempting to model the mind and human thought in general is the wrong approach to AI

Well, what do you mean by "the wrong approach"? That would depend on what you (we) decided the goal of AI is. (Not that it must have just one goal.)

If the goal is "to make machines that can win Jeopardy," it might be the wrong approach. If the goal is "to learn more about the human mind," it might be the right approach.
posted by grumblebee at 1:30 PM on February 17, 2011


Yeah, what grumblebee said. To assert specifically that consciousness couldn't be modelled by an arbitrarily powerful computer seems to require there being some ineffable essential quality that humans possess that can't be modelled. And I have to wonder: what the eff would that be?

I'm not asserting that consciousness can be modelled, just that I don't know why it couldn't; most of the arguments I've seen against it rely on "but that's just symbol manipulation, and clearly humans aren't doing just that" without any attempt to define what it is humans are doing that isn't just that.
posted by Zed at 1:32 PM on February 17, 2011


The goal of practical AI is basically to create computers that can make useful decisions without human direction.

I'm not sure that studying computers is an ideal way to study human intelligence. It's kind of like studying solar panels so we can better understand photosynthesis.
posted by empath at 1:32 PM on February 17, 2011


I'm not sure that studying computers is an ideal way to study human intelligence.

I doubt there will ever be an ideal way to study human intelligence. But AI may turn out to be one of many useful ways of doing it.

Clearly, there are a lot of obstacles, but if we could build machines that reliably passed predictive tests, e.g. they repeatedly behaved the same way people behaved in particular circumstances, that would be INCREDIBLY useful.

It wouldn't be definitive. It would be useful in a similar way to the way that studying mice is useful. Except that it's a bit easier to monitor what's going on inside a machine.

The goal of practical AI is...

Do you mean commercial AI?

If so, I agree with you. But I don't think, say, Doug Hofstadter's team was (is?) trying to do this. They were building machines that had a rudimentary understanding of metaphorical thinking specifically in order to test theories about metaphorical thinking in humans.
posted by grumblebee at 1:45 PM on February 17, 2011


That's not how computers or ai work.

empath, dude, on the few other occasions I've debated with you, I've noticed that you have a habit of doing this thing where you pick out one point that you disagree with and offer a one-liner expressing that disagreement without explaining why, and it's also usually the case that the point in question doesn't invalidate the whole argument. I'm going to humbly request that you spell out the implications of those statements in full here so that we can actually keep talking.

Anyway, believe me, I know how computers work. It's what I studied in school, it's what I do for fun, and working with and understanding AI/machine learning techniques is what I do for my day job. It's an over-simplified picture, to be sure, but it was my interpretation of the model that grumblebee was proposing before, wherein, as he put it, we could have it be "programed to act on [an aversion to certain stimuli]"; that is to say, we could give it a pre-existing set of imperative commands that execute certain behaviors based on their inputs. I assert that that model, as opposed to one where we provide the structural framework for the development of consciousness and allow that consciousness to come to fruition by interacting with its surroundings, is hopeless. You probably agree with this, as do most AI researchers -- it's why focus turned away from the physical symbol system hypothesis to a more connectionist view of cognition.

"even if we used all the processors in the world, they wouldn't give us enough power because______. In fact, there's not enough matter in the universe to build us the machine we'd need to do those calculations."

even if we used all the processors in the world, they wouldn't give us enough power because the lack of arbitrary precision involved in digital representation of numbers might prevent them from properly modeling the processes involved in simulating consciousness.

Just one possible answer.
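That answer is easy to illustrate: fixed-width binary floats accumulate error that exact arithmetic avoids. A minimal sketch:

```python
from fractions import Fraction

# Ten copies of 0.1 should sum to exactly 1, but binary floats can't
# represent 0.1 exactly, so the error compounds:
x = sum(0.1 for _ in range(10))
print(x == 1.0)   # False
print(x)          # 0.9999999999999999

# Arbitrary-precision rationals stay exact:
y = sum(Fraction(1, 10) for _ in range(10))
print(y == 1)     # True
```

Whether such rounding error actually matters for simulating a brain is an open question, but this is the gap between fixed-precision and arbitrary-precision representation in its simplest form.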
posted by invitapriore at 1:50 PM on February 17, 2011 [2 favorites]


If consciousness can't be modeled on a Turing machine, it would by definition be hypercomputation. As the Wikipedia article says, there are several different models of what such a thing might look like — one of them being computation with arbitrary-precision real numbers — but there is currently no reason to suspect that any of these idealizations correspond to phenomena in the real world.
posted by teraflop at 1:57 PM on February 17, 2011 [1 favorite]


I'm just a layman here, but it seems to me that when you knock out part of the brain, you knock out part of someone's consciousness. Pow, there goes your piano lessons, or your penchant for flying off the handle at the slightest insult, or your dislike of bananas. So there's a finite, albeit large, number of neurons, synapses, glial cells, and other meat. And there's a finite albeit large number of connections between all that stuff, right? Isn't this guy Henry Markram trying to build digital models of neurons?

Also more on topic, people keep saying Watson is just super-Google or whatever but one of his designers is in this thread and he says that that's not how it works. Just putting that out there.
posted by r_nebblesworthII at 1:58 PM on February 17, 2011


we could have it be "programed to act on [an aversion to certain stimuli]"; that is to say, we could give it a pre-existing set of imperative commands that execute certain behaviors based on their inputs.

That's not what I meant.

But surely humans act on pre-existing stimuli. If you're trying to model a human brain, why would you make it learn to avoid things that are too hot? That's not something humans have to learn. We have to learn which particular things are too hot, but we don't need to learn the concept of avoiding hotness. It's hardwired. (Or if, somehow, that isn't, other desires and aversions are.)

I wasn't suggesting we program robots to be scared of fire. They SHOULD have to learn that, if we want them to be models of humans. But they shouldn't have to learn that something-too-hot-is-painful. That's basically an axiom.
posted by grumblebee at 2:03 PM on February 17, 2011 [1 favorite]


If you're trying to model a human brain, why would you make it learn to avoid things that are too hot? That's not something humans have to learn.

I think kids actually do have to learn this. I know I learned it the hard way when I was a kid.
posted by empath at 2:06 PM on February 17, 2011


...that is to say, we could give it a pre-existing set of imperative commands that execute certain behaviors based on their inputs. I assert that that model, as opposed to one where we provide the structural framework for the development of consciousness and allow that consciousness to come to fruition by interacting with its surroundings, is hopeless.

That's what I was getting at.
posted by empath at 2:08 PM on February 17, 2011


I think kids actually do have to learn this. I know I learned it the hard way when I was a kid.

But do you think ALL fears and desires are learned? Are you seriously a believer in the blank-slate theory? You don't think there are any innate instincts?

If there are some, doesn't it make sense to program them into an AI as givens?
posted by grumblebee at 2:12 PM on February 17, 2011


Sorry if I misunderstood you, grumblebee, but I think my misunderstanding was predicated on the following thing that you said:

If a variable called pain is greater than zero, a subroutine in Watson will urge the entire system to avoid the current stimulus.

You might have been speaking in metaphorical terms, but this is an example of the type of construct that I don't really think could ever result in consciousness. I think you'd have to unpack the concept of pain into a more fine-grained representation that took account of its functions and behaviors.
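For illustration, the construct being critiqued might look like this (a deliberately naive sketch, not anyone's actual proposal; the function and names are invented):

```python
# The naive model: a single scalar "pain" variable gating a fixed response.
# The critique above is that nothing resembling felt experience lives in
# this static input->output mapping, however many rules you pile on.
def react(pain: float, current_stimulus: str) -> str:
    if pain > 0:
        return f"avoid {current_stimulus}"
    return f"continue toward {current_stimulus}"

print(react(0.7, "hot stove"))   # avoid hot stove
print(react(0.0, "warm bath"))   # continue toward warm bath
```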

That's what I was getting at.

I think we mostly agree on that point, then. I guess my position comes down to the fact that I don't think the Watson model is really a step on the way to "hard" AI -- useful, sure, but unlikely to produce consciousness.
posted by invitapriore at 2:13 PM on February 17, 2011


one where we provide the structural framework for the development of consciousness and allow that consciousness to come to fruition by interacting with its surroundings

I agree that this seems like a more promising tactic.

But the best approach is probably to have many labs trying many things at once. Top-down, bottom-up, etc.
posted by grumblebee at 2:14 PM on February 17, 2011


Although I don't think a pure connectionist system will work either. The brain is kind of agnostic as to how it does things, and should really be thought of as a bunch of quasi-independent organs of thought, with varying levels of connection between them, and a lot of them are ugly one-off hacks to handle one thing or another, and the idea of a cohesive single consciousness is illusory.

I don't think there is any such thing as a 'general intelligence' in computers or in humans. We'll eventually develop a whole series of specialized intelligent systems using a variety of means, that are conscious of certain things and can act on them independently. They'll probably be developed using a variety of methods and using hacks to make them work, and perhaps if we get enough of them talking to each other, something like an intelligence you can communicate with might arise, or it might not. I don't think having a communicating, self-conscious intelligence is a particularly useful application of AI, personally.
posted by empath at 2:15 PM on February 17, 2011


I think you'd have to unpack the concept of pain into a more fine-grained representation that took account of its functions and behaviors.

Agreed. But I'm guessing at some point you will be able to find a bedrock of "axioms." Behaviors you can just program in.
posted by grumblebee at 2:16 PM on February 17, 2011


(ugly one-off hacks)
posted by empath at 2:16 PM on February 17, 2011


You might have been speaking in metaphorical terms, but this is an example of the type of construct that I don't really think could ever result in consciousness.

How do we know that "consciousness" itself isn't merely metaphor? Has it really been defined objectively or are we just putting a word to our subjective experience? How would one know that a computer was conscious and not just parroting our language about consciousness?
posted by Mental Wimp at 2:17 PM on February 17, 2011


How would one know that a computer was conscious and not just parroting our language about consciousness?

We wouldn't. But we don't know that about each other, either.
posted by grumblebee at 2:19 PM on February 17, 2011 [2 favorites]


The brain is kind of agnostic as to how it does things, and should really be thought of as a bunch of quasi-independent organs of thought, with varying levels of connection between them, and a lot of them are ugly one-off hacks to handle one thing or another, and the idea of a cohesive single consciousness is illusory.

I totally agree with this, but the illusion is what concerns me here. It exists, and we have access to its various attributes, and I wonder if it's not possible that something might be able to engage in all of the behaviors of consciousness convincingly while failing to perceive that illusory state. That question is probably doomed to remain in the domain of metaphysics, unfortunately.

Agreed. But I'm guessing at some point you will be able to find a bedrock of "axioms." Behaviors you can just program in.

Yeah, certainly. You might not be able to recognize anything like the concept of pain in them, though, even if the resultant behaviors could be classified as such.
posted by invitapriore at 2:21 PM on February 17, 2011 [1 favorite]


Watson on Jeopardy is like a Rorschach test for humanity - our responses to its performance are just as revealing as the performance itself.
posted by ZeusHumms at 2:22 PM on February 17, 2011


My personal definition of conscious system is:

A) Can store information about the outside world in symbols or patterns in some medium which represent that information.

B) Can manipulate those symbols or patterns and create relationships between them.

C) Can act to make changes to the physical environment in response to changes in those symbols and their relationships.

D) Is an independent system.

I realize that to some extent you can use this definition to say a home heating system is conscious, but I'm not sure I'd deny that it's conscious, at least of the temperature of the home. To me consciousness is pretty simple: the only fundamental difference between a home thermostat and a human being is complexity.
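A toy thermostat makes the A-D test above concrete (the class and names are hypothetical, just for illustration):

```python
# A minimal thermostat checked against criteria A-D above.
class Thermostat:
    def __init__(self, setpoint):
        self.setpoint = setpoint   # (B) a stored relationship: target vs. reading
        self.reading = None        # (A) a symbol representing an outside fact

    def sense(self, room_temp):
        self.reading = room_temp   # (A) store information about the world

    def act(self):
        # (C) act on the environment in response to the stored symbols
        if self.reading is not None and self.reading < self.setpoint:
            return "heat_on"
        return "heat_off"

# (D) once installed, it runs as an independent system.
```

By this definition it is "conscious" of exactly one thing: the room temperature.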

Now, self-consciousness, on the other hand, is a different thing.
posted by empath at 2:23 PM on February 17, 2011


We'll eventually develop a whole series of specialized intelligent systems using a variety of means, that are conscious of certain things and can act on them independently. They'll probably be developed using a variety of methods and using hacks to make them work, and perhaps if we get enough of them talking to each other, something like an intelligence you can communicate with might arise, or it might not.

This strikes me as a very plausible scenario -- one that we're already working towards. And I think, after a time, these systems will start surprising us. I don't mean we'll connect MIT's voice-recognition-system to Cal-Tech's vision-system and -- WHAMMO -- get consciousness. I just mean we'll see emergent behaviors we weren't expecting. That's kind of a given when you connect multiple, super-complex systems.

A lot of these behaviors will be useless and uninteresting. But some of them will be really cool.

WAY before we get to a machine that can crack jokes and cry when it hears sad songs, we'll start producing something that, say, reminds us of a cockroach. And then a mouse. It won't be a mouse. But we'll be very tempted to say it has mouse-like intelligence. Or that it's "as conscious as a mouse is."

And already we'll be in a philosophical minefield.
posted by grumblebee at 2:24 PM on February 17, 2011


It's understandable why we want to mimic human consciousness with computers. We are humans after all. We agree humans have a consciousness, but, considering that we are animals, it raises the question "Do the other animals possess a consciousness?" One step down the evolutionary tree might bring us to apes or dolphins or whatever. They seem, from a lay perspective, like they might be good candidates for possessing a consciousness. As we go further "down" the evolutionary tree, it seems like the wetware of brains gets simpler.

At some point does that increased simplicity make them better candidates for replicating via AI? Could we create a computer that acted, in every way we as humans are able to distinguish, exactly as an earthworm would? Or a dog? Given the current state of AI capabilities, where do the wetware-complexity and AI lines cross? What animal might, in theory, represent the point at which its wetware is simple enough for our current computing capabilities to mimic it so well that we'd consider the AI a completely acceptable substitute for the "real" thing?
posted by nickjadlowe at 2:26 PM on February 17, 2011


I am not asserting that the human mind cannot be modeled by a Turing Machine (TM). Nobody knows enough about the workings of the human mind to prove such an assertion.

I am not asserting that the human mind can be modeled by a TM. Nobody knows enough about the workings of the human mind to prove such an assertion—or to assert that it has a high probability of being true. To be frank, I'm very skeptical of it.

Grumblebee writes: What is special about the mind that would exempt it from being modeled -- or that would make it an extraordinary claim that it can be modeled?

We know of no other being or artifact on Earth (or elsewhere) that can do all the things that we humans can do with our minds. (Such might exist, but we don't know about it.) Now, if that isn't special, I don't know what is.

It is entirely possible, as far as we know now, that the human mind cannot be modeled by any model of computation, known or unknown (perhaps because, for example, what it is doing might include things that are not, strictly speaking, "computation"). (But I wouldn't assert that.) It is entirely possible, as far as we know now, that the human mind can be modeled by some model of computation, known or unknown, but not by a TM. It is entirely possible, as far as we know now, that the human mind can be modeled by a TM.

So, Grumblebee, which of these is that case and what's your evidence?

Zed wrote: To assert specifically that consciousness couldn't be modelled by an arbitrarily powerful computer seems to require there being some ineffable essential quality that humans possess that can't be modelled. And I have to wonder: what the eff would that be?

I'm not asserting the existence of a soul. And I don't understand why a hypothetical "quality", or let's say, more precisely, "property", of the human mind that prevents it from being modeled by a TM would necessarily be ineffable. If we were to discover such a property, it seems likely to me that we'd be able to "eff" it just fine. For example, perhaps the implementation of the human mind requires the availability of an arbitrarily long stream of truly (i.e., not pseudo-) random numbers. Or perhaps it requires an oracle that solves the halting problem for an arbitrary TM. Or who knows what? (Hint: you don't. I don't either.)

r_nebblesworthII: Check out this thread.
posted by Crabby Appleton at 2:26 PM on February 17, 2011


empath, I think you're just being precise about labels. I think we're all (most of us) talking about what you're calling self-consciousness.
posted by grumblebee at 2:28 PM on February 17, 2011


I'm not sure that studying computers is an ideal way to study human intelligence. It's kind of like studying solar panels so we can better understand photosynthesis.

People have been using computer models to better understand photosynthesis, just like people in every science are using computer models as one tool to develop and test hypotheses about how things work. I don't see why cognition would be an exception.

I would certainly agree that for a great variety of tasks for which we might want AIs, there are more practical approaches that will provide better results for the foreseeable future than attempting to simulate consciousness.

But even if the attempt to simulate consciousness is only an academic exercise, even if it never succeeds, I still think it'd be a very interesting academic exercise.
posted by Zed at 2:29 PM on February 17, 2011


Well, I mean I work at an ISP, and 90% of what I do is react to tickets automatically generated by hardware telling me what their state is -- my fan is broken, and now I'm hot, and now I'm shutting down an interface -- or I'm running out of memory, or I can't figure out how to get to Pittsburgh any more, those kinds of things.

I can go for days at work and never actually get a work assignment from an actual human being. In a sense, our whole network has self-consciousness, and communicates with the outside world to ensure its own health and well-being. The network, in a lot of ways, is my supervisor.
posted by empath at 2:29 PM on February 17, 2011


We know of no other being or artifact on Earth (or elsewhere) that can do all the things that we humans can do with our minds. (Such might exist, but we don't know about it.) Now, if that isn't special, I don't know what is.

That certainly is special! Now, what on earth does that have to do with whether our minds can be modeled by a Turing machine?
posted by IjonTichy at 2:30 PM on February 17, 2011


For example, perhaps the implementation of the human mind requires the availability of an arbitrarily long stream of truly (i.e., not pseudo-) random numbers.

It's possible to generate random numbers with a computer, you just need a hardware RNG that depends on some genuinely stochastic process like radioactive decay.
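For instance (a sketch, not a claim about any particular AI): in Python, `os.urandom` and the `secrets` module draw from the operating system's entropy pool, which most systems seed from physical noise sources rather than a fixed algorithmic seed.

```python
import os
import secrets

# Ordinary code can already consume non-deterministic entropy: these calls
# read from the kernel's entropy pool (seeded from hardware/timing noise),
# not from a deterministic pseudo-random algorithm with a fixed seed.
raw = os.urandom(16)         # 16 bytes of OS-supplied entropy
coin = secrets.randbelow(2)  # an unpredictable bit, 0 or 1

assert len(raw) == 16
assert coin in (0, 1)
```

Whether such a source is "truly" random ultimately depends on the physics of the underlying noise, not on the software interface.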
posted by empath at 2:31 PM on February 17, 2011


Hint: you don't. I don't either.

Hint: Both grumblebee and I have been very explicit in our agreement with this point.
posted by Zed at 2:32 PM on February 17, 2011


We know of no other being or artifact on Earth (or elsewhere) that can do all the things that we humans can do with our minds. (Such might exist, but we don't know about it.) Now, if that isn't special, I don't know what is.


No, sorry, that isn't precise enough. What you just said -- if I read you correctly -- is "the human mind is really complex and can do a lot of stuff."

In order to prove or disprove or say anything interesting about how this relates to TMs, you'd have to make some kind of statement about the relationship of TMs and "really complex things that can do a lot of stuff."

Is there some reason to believe or disbelieve that TMs can't handle certain levels of complexity?

It is entirely possible, as far as we know now, that the human mind cannot be modeled by any model of computation, known or unknown

It is entirely possible, right. It is entirely possible that cancer is too complex to be modeled. It's entirely possible that black holes are too complex to be modeled. This is a true statement about anything that hasn't already been modeled.

What we have discovered is that computers are useful at modeling all kinds of systems. And they seem to keep on being useful this way, as long as we keep throwing more resources towards them. Can they model ALL systems? Probably not. Can they model most? Maybe.

Here's why I think it's useful for us to -- over a long period of time -- TRY to model a human mind. I think it's LIKELY that computers can model most physical systems at the non-quantum level, and that this level is good-enough for us to get all sorts of useful work out of these models. I think it's likely that the human brain is a physical system -- that it's "just" a collection of cells interacting with each other.

This, to me, though not a sure thing, seems likely enough to be worth trying.

What I don't think is useful is to say, "Yes, but the human mind is very, very, very, very, very complex!" You have to be more specific than that.
posted by grumblebee at 2:41 PM on February 17, 2011


you just need a hardware RNG that depends on some genuinely stochastic process like radioactive decay.

Yeah, but this takes you out of the realm of Turing completeness. The true randomness argument is a potentially fruitful argument for something that could be essential to cognition but not modelable by a Turing machine. Doesn't mean you couldn't simulate consciousness with a machine that had a true randomness generator, just means that you can't do it with a Turing machine (and it also doesn't mean that you couldn't have an arbitrarily good simulacrum of consciousness with an arbitrarily good pseudo-random number generator.)
posted by Zed at 2:48 PM on February 17, 2011


I guess what he's getting at is whether the brain is more like a simple, deterministic system like a planet in orbit, where you can take all of atoms and particles in a planet and abstract it to a point mass, and given the initial conditions of the system, you can know exactly where everything will be at any arbitrary time in the future; or more like a chaotic system like the weather, where tiny changes anywhere in the system mean huge, unpredictable changes later. I think it's pretty clear that it's more like the latter.
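That sensitivity is easy to demonstrate with the logistic map, a standard toy chaotic system (not, of course, a model of the brain):

```python
# Iterate the logistic map x -> r*x*(1-x) at r=4, where the dynamics are
# chaotic: tiny differences in the initial condition grow roughly
# exponentially, so nearby trajectories quickly decorrelate.
def logistic_orbit(x0, r=4.0, steps=50):
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a = logistic_orbit(0.300000000)
b = logistic_orbit(0.300000001)  # initial condition differs in the 9th decimal
# After 50 steps the two trajectories bear no resemblance to each other,
# even though both remain bounded in [0, 1].
```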
posted by empath at 2:57 PM on February 17, 2011 [1 favorite]


On the RNG thing, I doubt there has ever been a time or ever will be, when an AI researcher thought to himself "If ONLY I had a true random number generator!"
posted by empath at 2:57 PM on February 17, 2011 [1 favorite]


A key ingredient is that the supposedly-conscious being thinks he's conscious.

I'm going to need a definition of "thinks" here.

Is it possible for a supposedly conscious being to think that they are conscious and be wrong?
posted by It's Never Lurgi at 3:18 PM on February 17, 2011


Not trying to derail, but reading this conversation has left me with a significant question: "How is modeling consciousness helpful?" It seems to me that what we need are better machines. Not more people. Well, not more "consciousness" at any rate.

We need speedier, more refined brains/machines to be sure. It would be nice if we could have thinking systems with tremendous predictive power. But I'm not sure "consciousness" would be required or perhaps even helpful. It almost seems like consciousness (self-awareness) is what has the greatest potential to throw a system out of balance or steer it down a self-destructive path. Greed, jealousy, insecurity - these traits, the results of this "humanness," this is the stuff that is at the root of (I would dare say) most of our intractable problems.

Nature, or that which we think of as not being conscious, seems to tend towards a cyclical self-sustainability. To my mind, computational power in service of bringing predictability to highly complex systems is what would be of most actual use. Something that gave us information on how to act, when to act, and what specifically to act upon. I'm not sure that consciousness would need to, necessarily, come into play.

So it's a real question (at least for this lay person): Why model consciousness?
posted by nickjadlowe at 3:19 PM on February 17, 2011 [1 favorite]


...out of the realm of Turing completeness

er, I meant Turing-computability.
posted by Zed at 3:20 PM on February 17, 2011


Well, I mean I work at an ISP, and 90% of what I do is react to tickets automatically generated by hardware telling me what their state is -- my fan is broken, and now I'm hot, and now I'm shutting down an interface -- or I'm running out of memory, or I can't figure out how to get to Pittsburgh any more, those kinds of things.

I can go for days at work and never actually get a work assignment from an actual human being. In a sense, our whole network has self-consciousness, and communicates with the outside world to ensure its own health and well-being.
It's not necessarily communicating with the outside world. It may be issuing orders to its immune system.
posted by Flunkie at 3:25 PM on February 17, 2011


Not trying to derail, but reading this conversation has left me with a significant question: "How is modeling consciousness helpful?"

(...)

So it's a real question (at least for this lay person): Why model consciousness?
Here you go
posted by Flunkie at 3:29 PM on February 17, 2011 [1 favorite]


I'm curious about this incidental note from Ken Jennings' article on the match:
In the final round, I made up ground against Watson by finding the first "Daily Double" clue, and all three of us began furiously hunting for the second one, which we knew was my only hope for catching Watson. (Daily Doubles aren't distributed randomly across the board; as Watson well knows, they're more likely to be in some places than others.)
Why? Why don't they randomize with equal distribution?
posted by Flunkie at 3:38 PM on February 17, 2011


Why don't they randomize with equal distribution?

I don't know that they've ever discussed why, but they really don't. (You can vary the season number in the URL for other seasons.)
posted by Zed at 3:47 PM on February 17, 2011


Here you go
posted by Flunkie at 3:29 PM on February 17


Heh...yes. It's that "because we can and it would be cool" humanness that can get us into trouble in the future despite something's current utility or even coolness. I emphatically believe science holds the key to our future. Science is our only hope. But maybe a science tempered by the ever-growing volume of data about how we are just as apt to mess something up in the process of "fixing" or "improving" it. It's been a uniquely human problem throughout our history, this reality that if we can do something, we will eventually do it. And we always see the benefits long before learning the complete costs. The list is long, and has created problems with such momentum that we cannot think ourselves out of them as fast as we thought ourselves into them.
posted by nickjadlowe at 3:49 PM on February 17, 2011


If consciousness can't be modeled on a Turing machine, it would by definition be hypercomputation. As the Wikipedia article says, there are several different models of what such a thing might look like — one of them being computation with arbitrary-precision real numbers — but there is currently no reason to suspect that any of these idealizations correspond to phenomena in the real world.

Isn't the real world itself one of those phenomena? As far as I know, and granted I might be wrong about this, there's no lower limit on how finely the distance between two objects can vary; so isn't that one such correspondence?
posted by invitapriore at 4:26 PM on February 17, 2011


OK, grumblebee, let me give you an analogy. (And it's been a long time since I studied automata theory, so I beg the indulgence of those of you who are more current with it.) Imagine a "parallel world", much like ours, except that the first computers developed in this world were equivalent to Nondeterministic Pushdown Automata (NPDA). And let's assume that automata and computability theory are not as well-developed as they are in our world. These NPDA-equivalent computers would be able to do a lot of interesting and useful computation. It seems likely that people would start speculating about whether they could emulate the human mind. Of course, an NPDA can't generate utterances in a strictly context-sensitive language (one that is context-sensitive but not context-free). But we can. So an NPDA-equivalent (but not TM-equivalent) computer could not emulate the human mind.

But what makes people so confident that a TM can emulate the human mind? Maybe there's something a TM can't do that we can. What that might be is not as obvious as it is in the NPDA case, but that doesn't mean that there isn't something. There certainly are things that a TM can't do. For example, no TM can take as input an arbitrary TM definition and determine, in every case, whether the computation performed by that TM halts or not. Can we? I don't know. That's only one possibility, though.
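The NPDA limitation above can be made concrete: the classic language { aⁿbⁿcⁿ : n ≥ 1 } is context-sensitive but not context-free, so no (N)PDA recognizes it, while any TM-equivalent machine (for example, any ordinary program) checks it trivially. A sketch:

```python
# Recognizer for { a^n b^n c^n : n >= 1 }, a strictly context-sensitive
# language. A pushdown automaton cannot recognize this set, but a
# Turing-equivalent machine handles it in a few lines.
def in_anbncn(s):
    n = len(s) // 3
    return n >= 1 and len(s) == 3 * n and s == "a" * n + "b" * n + "c" * n

assert in_anbncn("aabbcc")
assert not in_anbncn("aabbc")
```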

There was a lot of hype about AI (GOFAI) back in the 1960s and 1970s. A lot of amazing stuff was just around the corner. Except, as it turns out, it wasn't. Maybe there are reasons to think that a computer can emulate the human mind. But our track record so far of actually constructing anything close to such a program isn't one of them.

I don't object to it as the basis of a research program. If you can get funding, go for it. That funding will probably be harder to find than it was back in the 60s and 70s, though.

But what really annoys me is the science fiction charlatans asserting that in 50 years or so we'll be moving our minds onto computers. There's no scientific justification for such assertions.
posted by Crabby Appleton at 4:58 PM on February 17, 2011 [1 favorite]


Note: I've loved science fiction since elementary school. But I don't mistake it for science.
posted by Crabby Appleton at 5:11 PM on February 17, 2011


If consciousness can't be modeled on a Turing machine, it would by definition be hypercomputation. As the Wikipedia article says, there are several different models of what such a thing might look like — one of them being computation with arbitrary-precision real numbers — but there is currently no reason to suspect that any of these idealizations correspond to phenomena in the real world.
Isn't the real world itself one of those phenomena? As far as I know, and granted I might be wrong about this, there's no lower limit on how finely the distance between two objects can vary; so isn't that one such correspondence?
Well, first of all, it's not known that distance is continuous, and in fact a lot of people claim that it's not - they claim that there's no such thing as a distance less than a Planck length. It's not clear to me that that claim is true, but I'm no physicist, and in any case the weaker idea that there's no way to measure distance less than a Planck length under the currently known laws of physics seems less objectionable to me, and that weaker idea would be enough to place a bound on the precision of distance as related to computation.

But secondly, and more importantly, this really doesn't have anything at all to do with what the "computation with arbitrary-precision real numbers" thing is about. I think it's better described as (hyper)computation of, not computation with, such numbers:

There's a specific property of some real numbers, known as "computability". Loosely, a particular real number is "computable" if there's some algorithm which takes a precision as input, and spits out an approximation to that real number, within that precision, as output, for any level of precision that you want.

The overwhelming majority of real numbers are not computable. That is, if you take an arbitrary real number, chosen for no reason other than the simple fact that it is a real number, it will almost certainly not be computable.

The "hypercomputation" thing mentioned here is essentially just saying that if the universe somehow magically allows measurements of arbitrary precision (which, as far as we currently understand, it absolutely does not), then maybe it might somehow theoretically support the existence of some sort of "hypercomputer" - i.e. a thing that can "hypercompute" non-computable real numbers.
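For contrast, here's a sketch of the computable side (the function name is hypothetical): sqrt(2) is a computable real precisely because an algorithm can approximate it to within any requested precision, e.g. by exact-rational bisection.

```python
from fractions import Fraction

# Approximate sqrt(2) to within eps using bisection over exact rationals:
# the interval [lo, hi] always brackets sqrt(2) and halves each step.
def sqrt2_to_within(eps):
    lo, hi = Fraction(1), Fraction(2)   # sqrt(2) lies in [1, 2]
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if mid * mid < 2:
            lo = mid
        else:
            hi = mid
    return lo  # within eps of sqrt(2)

approx = sqrt2_to_within(Fraction(1, 10**6))
assert abs(approx * approx - 2) < Fraction(3, 10**6)
```

Most reals admit no such algorithm at all, which is exactly the asymmetry described above.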
posted by Flunkie at 5:15 PM on February 17, 2011 [1 favorite]


After 'Jeopardy!' Win, IBM Program Steps Out -- "Fresh off its shellacking of two human champions of the "Jeopardy!" television show, a computer program developed by IBM Corp. will soon get a workout in two hospitals that have signed up to test the technology."
posted by ericb at 5:22 PM on February 17, 2011


Not to be Pepsi IBM Big Blue (hey!), but IBM is celebrating its 100th anniversary this year. This Jeopardy match was great PR for them, as well as for the consulting scientists from Carnegie Mellon, M.I.T., University of Massachusetts, etc.

In honor of their centennial they have released this video: IBM Centennial Film: 100 X 100 - A Century Of Achievements That Have Changed The World [13:15].
"The film features one hundred people, who each present the IBM achievement recorded in the year they were born. The film chronology flows from the oldest person to the youngest, offering a whirlwind history of the company and culminating with its prospects for the future. For more information, please visit www.ibm100.com."
IBM really has had quite a hand in the development and evolution of technology in the world.
posted by ericb at 5:35 PM on February 17, 2011


Well, first of all, it's not known that distance is continuous, and in fact a lot of people claim that it's not - they claim that there's no such thing as a distance less than a Planck length. It's not clear to me that that claim is true, but I'm no physicist, and in any case the weaker idea that there's no way to measure distance less than a Planck length under the currently known laws of physics seems less objectionable to me, and that weaker idea would be enough to place a bound on the precision of distance as related to computation.

...

The "hypercomputation" thing mentioned here is essentially just saying that if the universe somehow magically allows measurements of arbitrary precision (which, as far as we currently understand, it absolutely does not), then maybe it might somehow theoretically support the existence of some sort of "hypercomputer" - i.e. a thing that can "hypercompute" non-computable real numbers.


I've heard of the Planck length being cited as the atomic unit of distance, and then I've heard later that that was wrong, but I know less about physics than I'd need to properly gauge which side, if either, is considered correct, so that was the source of my tentativeness earlier. In any case, what you're saying makes sense.

But -- assuming distance is continuous -- is representing a distance numerically the same as performing a measurement? I'm aware that we might not be able to measure distances to arbitrary precision, but let's say we were tinkering with parameters in a physical model. We compare how it behaves over time with two different sets of parameters. Given that this is on paper, we can specify a change to one parameter that is arbitrarily small -- wouldn't any non-zero change we make to that parameter change how the model behaves, no matter how tiny?

I might be so far outside my level of understanding of physics here that I'm not even wrong, but it seems workable in my mind. I guess we're pretty outside the topic of this thread, but oh well...
posted by invitapriore at 6:57 PM on February 17, 2011


What would be the point of creating an artificial intelligence with self-consciousness and emotions just like a human?

You put it on Jeopardy! and it develops an anxiety attack and chokes.

You ask the expert system a question, and it will say "I'm having a bad hair day. I don't feel like answering questions today. Please go away."

If you are employing it and threaten to take it down if it doesn't answer questions, it answers questions but suboptimally.
posted by bad grammar at 7:02 PM on February 17, 2011


On the doctors being replaced with medical Watsons, there's an intangible personal factor, what used to be called "bedside manner," that won't be replaced by computers any time soon. The computer may show that the patient has a statistical probability of having condition X, but it's up to the doctor to convincingly reassure the patient that he/she doesn't have statistically improbable condition Y (which the patient read about on the Internet). Many doctors have a lousy bedside manner because they're distracted by chasing the details of paperwork in today's "managed care" environment.

Professional training in "knowledge" fields may focus more and more on social skills, not easily emulated by computer; to a certain extent this is already true at high socio-economic levels, as it's why Ivy League universities want "well-rounded" students. It may be why more and more upper-middle-class children are diagnosed with Asperger Syndrome, because they don't show the sociality expected of the professional class. Note: I did not say "diagnosed with autism," which is much more impairing and seems to cut across classes.
posted by bad grammar at 7:33 PM on February 17, 2011 [1 favorite]


But what makes people so confident that a TM can emulate the human mind?

My theoretical CS was never very good, but I'm not aware of a class of machine that supersedes TM computability and has been shown to exist physically. Note that non-determinism is not enough to nudge past that: NTMs have efficiency tradeoffs compared to DTMs, but they are not capable of a fundamentally different class of computation.

If our brains exceed a TM's capabilities, it would mean that there is an entirely new class of computing machinery and the only example thereof is in our skulls. That's quite a supposition, and accordingly the burden of proof is quite steep.
posted by Skorgu at 8:15 PM on February 17, 2011


I want to see Watson, an Apple IIe, a Dell Inspiron, an Amiga 2000, and a Univac appear as a team on Family Feud.
posted by mazola at 6:12 AM on February 17


Host: "Name a type of jelly other than strawberry..."
Amiga 2000: "Guru Meditation #00000004, 0000AAC0"
Watson: (audible sigh) "...Good answer!"

posted by blueberry at 8:16 PM on February 17, 2011 [1 favorite]


I want to make it clear that I'm NOT arguing that TMs can model the human mind. Doing so would be foolish. It would be like arguing that there's life on another planet we haven't visited. We won't know if TM can model a mind until we've tried and succeeded -- or categorically failed (failed in some way that proves it's an impossible task), or until someone discovers something about the mind that makes it clear that TMs will never be up to the task.

What I think is this: so far, computers have shown that they're really good at modeling many physical systems. Not all: we don't have good computer models for the weather, for instance, and we know why we don't. I don't know enough about CompSci and the weather to know if it's even theoretically possible. If we threw all the computing power on Earth at the problem, could we tackle it, or would it still be impossible? Would it take more power than exists in the universe to model weather on Earth?

In any case, we've built useful computer models of many systems. Given that, since the brain is a physical system (by which I mean it's subject to physical laws and is made out of ordinary matter), there's a CHANCE that computers could model it. And the results would be so interesting, it's worth trying.

You need to give me a better explanation than "Yes, but it's a very, very complicated physical system." Sure. But what does that have to do with whether or not it can be modeled by a TM, unless you're saying you know that it's SO complicated that, like the weather (possibly), we'll never have enough horsepower to throw at the problem?

Otherwise, it's got to be something other than "it's a complex system" that is ruining TM's chances. It must be because brains have some X property that can't be modeled by them. If so, what is X and what makes you think it exists?

The fact that AI hasn't been a success (at modeling human minds) thus far is meaningless as far as I'm concerned. It does nothing to convince me that TMs are or aren't up to the task. Why not? Because (a) minds ARE very complex, so I would expect this work to take many decades; (b) computers haven't been powerful or cheap enough (they still might not be, but they certainly weren't in the 70s, 80s and 90s); and (c) we still have a lot of really basic things to learn about human minds, though we're way ahead of where we were just a couple of decades ago.

If you want to convince me that TMs are a dead end, tell me that there's no possible way they can be powerful enough. Or tell me that minds have special-property X that can't be modeled because ____. But don't tell me that it's impossible (or even unlikely) because it hasn't happened yet.

Personally, I would be VERY surprised if it happened in the next 30 years. I'd be a little surprised if it took 200 years. But I'm not expecting it to happen soon.
posted by grumblebee at 8:32 PM on February 17, 2011 [1 favorite]


Is there a list of all of Watson's wrong answers somewhere?
posted by empath at 4:23 AM on February 18, 2011


Is there a list of all of Watson's wrong answers somewhere?

We have a "stupid Watson answers" list internally that we kept for our own amusement. I can't give anyone access, but I can share some of my favorites. Keep in mind that these are not all necessarily current flubs; they're things Watson has said at some point in the past few years of development:

Q: THEY GOT MILK, TOO?: A 2000 ad showing this pop sensation at ages 3 and 18 was the 100th "got milk?" ad
A: Holy Crap

Q: BOTTOMS UP!: Often served at a brunch, it's made with equal amounts of champagne and orange juice
A: Breakfast
I think we can all agree with Watson on this one...

Q: THE BIG "O" (1200): Captain Beefheart is said to have a range of 4.5 of these
A: Lick My Decals Off, Baby

Q: SOUND LIKE A LOCAL (400): To pass for a native of Danvers, Massachusetts, don't pronounce this letter in the town's name
A: G

Q: HOAXES (600): In 1991 2 Englishmen said they'd spent 13 years sneaking around fields creating these
A: Statistics

Q: HOW TO SAY YES!: Barbie could tell you it's the Hebrew word for "yes"; then again, maybe she couldn't
A: Yes, Dear
posted by badgermushroomSNAKE at 7:10 AM on February 18, 2011 [11 favorites]


If our brains exceed a TM's capabilities, it would mean that there is an entirely new class of computing machinery and that the only example of it is in our skulls. That's quite a supposition, and accordingly the burden of proof is quite steep.

I don't know. It seems the burden of proof is the other way around. We know that brains exist and what they are capable of. We don't know that we can model them in TMs.
posted by eeeeeez at 8:21 AM on February 18, 2011


Also, it is obvious that TMs are stumped on halting problem class problems, and we are not.
posted by eeeeeez at 8:23 AM on February 18, 2011


We wouldn't. But we don't know that about each other, either.

But we do know it. We can't prove it, but we do know it.
posted by eeeeeez at 8:24 AM on February 18, 2011


We can't KNOW anything we can't prove.
posted by John Kenneth Fisher at 8:55 AM on February 18, 2011


But we do know it. We can't prove it, but we do know it.

If you say so.
posted by Mental Wimp at 9:23 AM on February 18, 2011


Also, it is obvious that TMs are stumped on halting problem class problems, and we are not.

There are classes of halting-problems that we can identify, but whatever rules we use to find them could also be done with a computer, AFAIK.
posted by empath at 9:45 AM on February 18, 2011


(i guess i should have said non-halting problems)
posted by empath at 10:01 AM on February 18, 2011


Q: THE BIG "O" (1200): Captain Beefheart is said to have a range of 4.5 of these
A: Lick My Decals Off, Baby


I guess we've answered the question of whether or not computers can be funny.
posted by It's Never Lurgi at 10:21 AM on February 18, 2011 [1 favorite]


eeeeeez: "Also, it is obvious that TMs are stumped on halting problem class problems, and we are not."

We aren't? That doesn't seem at all obvious.
posted by pwnguin at 12:11 PM on February 18, 2011


Humans can often tell that certain trivial Turing Machines (TMs) don't halt, but it's pretty clear that humans can't solve the halting problem in general. For instance, you can translate the following to execute on a TM:
for n=1, 2, ...:
  for each length-n string in lexicographic order:
    if the string is a correct proof of Goldbach's Conjecture, write the string to the tape and halt
This program tests all possible strings, starting with the shortest, for being correct proofs of Goldbach's Conjecture, which says that "all even numbers greater than 2 can be written as the sum of two prime numbers". If there is any proof, the program halts, with the theorem left on the machine's tape. If there is no proof, then the program does not halt. Most strings aren't even in the right form to be a proof, such as WHAT IS LEG. Most of those that are in the form of proofs will contain incorrect steps, such as deducing A given A→B and B, the fallacy of affirming the consequent. Most of the correct proofs that remain will prove things other than the Goldbach Conjecture—for instance, that 0*0=0—but our TM is patient, and verifying that a string is a correct proof is no trouble for a patient machine: it's all just symbol manipulation. If there is a proof of the conjecture, this program will ultimately give us the shortest one. It'll have to be a patient computer, though; the non-proof WHAT IS LEG will only pop up after around 10^28 other strings, and I'd be willing to bet that any proof of Goldbach's Conjecture would be significantly longer than that.

As far as I know, humans don't presently know whether this TM halts, let alone what proof can be read from the tape if it does halt. (assuming my information is current and Goldbach's Conjecture remains unproven—substitute your favorite unproven conjecture)
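You can't literally run the proof-search program above without a proof checker, but a close, runnable cousin searches for a counterexample instead: it halts if and only if Goldbach's Conjecture is false, and nobody knows whether it halts. (A sketch; the `limit` parameter is added purely so the demonstration terminates.)

```python
def is_prime(n):
    # Trial division: slow but obviously correct.
    if n < 2:
        return False
    f = 2
    while f * f <= n:
        if n % f == 0:
            return False
        f += 1
    return True

def has_goldbach_split(n):
    # True if some pair of primes sums to the even number n.
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def search_counterexample(limit=None):
    # With limit=None this loops forever unless Goldbach's Conjecture
    # is false; whether it halts is exactly the open question.
    n = 4
    while limit is None or n <= limit:
        if not has_goldbach_split(n):
            return n  # found a counterexample: halt
        n += 2
    return None  # bounded run only: no counterexample up to `limit`

print(search_counterexample(limit=1000))  # -> None
```

Deciding whether the unbounded version halts is the same as settling the conjecture, which is the sense in which a general halting oracle would be a theorem-proving oracle.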

For a long time, humans would have shied away from the idea that this next program might fail to halt, but now we know that possibility has to be admitted as well:
for n=1, 2, ...:
  for each length-n string in lexicographic order:
    if the string is a correct proof of Goldbach's Conjecture, write the string to the tape and halt
    if the string is a correct proof of the negation of Goldbach's Conjecture, write the string to the tape and halt
(the program does not halt if Goldbach's Conjecture is logically independent of arithmetic, as the Axiom of Choice is logically independent of ZF set theory)

In other words, knowing unerringly whether any given TM halts is equivalent to knowing unerringly which statements in formal systems are true, which are false, and which are independent, and it's plain enough to me that this is not the case. If we don't know it unerringly, then human intuitions about TM properties seem more like mere heuristics, not necessarily in a different league than the kinds of analysis real, finite computers can perform on TMs.
posted by jepler at 3:03 PM on February 18, 2011 [1 favorite]


Right, that's what I was getting at. We know that certain kinds of programs don't halt, and we can program a TM to recognize those kinds of programs as well as we can. And for the ones that we can't program a TM to recognize, we don't know how to recognize them, either.
posted by empath at 3:11 PM on February 18, 2011


For example, you can write a program that observes the memory state of a running program after every step to see if it repeats. If a state repeats, the program will cycle, and it will never halt.
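That check can be sketched for toy deterministic programs whose entire state fits in a single value (the `step`-function interface and both example machines here are made up for illustration):

```python
def halts(step, state, max_steps=100_000):
    """Run a deterministic machine given by `step` (state -> next state,
    or None to halt). Returns True if it halts, False if an exact state
    repeats (a guaranteed loop), and raises if the budget runs out
    (undecided)."""
    seen = set()
    for _ in range(max_steps):
        if state is None:
            return True   # halted
        if state in seen:
            return False  # exact state repeated: provably loops forever
        seen.add(state)
        state = step(state)
    raise RuntimeError("undecided within step budget")

# Collatz-style toy machine: halts when it reaches 1.
collatz = lambda n: None if n == 1 else (n // 2 if n % 2 == 0 else 3 * n + 1)

print(halts(collatz, 6))       # True: 6 -> 3 -> 10 -> ... -> 1
print(halts(lambda n: -n, 7))  # False: 7 -> -7 -> 7 repeats
```

The catch is that this only detects loops that revisit an exact state; a machine that keeps consuming fresh memory can run forever without ever repeating, which is where the general halting problem bites.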
posted by empath at 3:30 PM on February 18, 2011


A computer can also write a program that observes the memory state of a running program on another computer, so that doesn't help (also, computers aren't Turing Machines).
posted by It's Never Lurgi at 6:34 PM on February 18, 2011


Heh, so there's this video that's been sitting on my desktop for like a month, on what is essentially Google's Watson: http://www.youtube.com/watch?v=5lCSDOuqv1A. It's more than just pagerank obviously, the question answering section talks about annotators and their approach using Freebase.

It looks like it's in production, barely: this query does appear to intentionally give an answer to a question. But slight variations on the question don't trigger it.
posted by pwnguin at 5:03 PM on February 20, 2011 [1 favorite]




More from Ken.
posted by rodmandirect at 5:45 PM on February 25, 2011


NJ congressman tops 'Jeopardy' computer Watson -- Turns out all it took to top Watson, the "Jeopardy"-winning computer, was a rocket scientist. U.S. Rep. Rush Holt of New Jersey is just such a scientist.
posted by ericb at 4:14 PM on March 1, 2011


IBM's Watson Takes on Lawmakers in Game Effort:
When given the clue, "Ambrose Bierce described this as a temporary insanity cured by marriage," Watson beat both congressmen to the buzzer, answering confidently, "What is love?"
Oh God, it's even more powerful than I dared imagine! WE'RE DOOMED.
posted by Rhaomi at 7:19 PM on March 1, 2011


Did you guys see this? It's so awesome I wanted to post it to the blue, but I'm new at this reddit thing and unsure how to link it properly. Plus, I guess it goes here, awesome as it is. Let's have a go: "watson'sbitch" does a quite delightful IAmA 74-time Jeopardy! Champion.
posted by CunningLinguist at 4:07 PM on March 3, 2011 [2 favorites]


That was hella funny. Thanks for posting it, CunningLinguist.
posted by Zed at 4:40 PM on March 3, 2011


WatsonsBitch [S] 228 points 4 hours ago
>>"Sanderson and Jennings were roommates... Nerdgasm."
Our other roommates were Brent Spiner, "Weird Al," Kevin Smith, Stan Lee, 5/6 of Monty Python, and the lightsaber kid from that one video.

mistborn 141 points 2 hours ago
There’s got to be a sitcom pitch in here somewhere. Two semi-famous Mormons, living together, being nerds. Like Big Bang Theory, only with more green Jell-O. Glenn Beck could play the evil apartment building owner who keeps trying to come up with crazy schemes to get us kicked out, since our apartment is rent controlled to 1870’s prices as long as a pure descendant of Brigham Young lives in it.


KenJen is a funny guy! (and his old roommate, who's some sort of fantasy writer, shows up too)
posted by Flashman at 6:37 PM on March 3, 2011




This thread has been archived and is closed to new comments