Person: Pick up a big red block. Computer: OK.
October 11, 2015 3:44 AM

In 1970, a young graduate student at MIT demonstrated SHRDLU, an interactive artificial intelligence program which could understand simple English sentences in order to manipulate and describe objects within a simple "block world". It was heralded as a huge breakthrough, leading to predictions that comprehensive "Strong AI" was just around the corner. This optimism proved premature: a few years later came the first so-called "AI Winter" of disappointment and funding cuts. But the student, Terry Winograd, went on to Stanford and continues to be influential not just in computer science but also in ethics, cognitive science, natural language and even design. You might know him better, though, as the PhD thesis advisor for a guy named Larry Page, who was working on some kind of technique for finding relevant web pages.
posted by mr.ersatz (18 comments total) 18 users marked this as a favorite
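
For readers who only know SHRDLU from transcripts like the one above, here is a minimal sketch of the kind of bookkeeping behind "Pick up a big red block": match a size-and-color description against a little database of blocks and either comply or object. It's in Python rather than the Micro-Planner/Lisp of the original, every name here (World, Block, pick_up) is invented for illustration, and it leaves out the parsing, planning and dialogue that made the real program interesting.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Block:
    name: str
    color: str
    size: str             # "big" or "little"
    on_top: List[str]     # names of blocks currently resting on this one

class World:
    def __init__(self, blocks):
        self.blocks = {b.name: b for b in blocks}
        self.holding: Optional[str] = None

    def find(self, size: str, color: str) -> Optional[Block]:
        """Return some block matching the description, if any."""
        for b in self.blocks.values():
            if b.size == size and b.color == color:
                return b
        return None

    def pick_up(self, size: str, color: str) -> str:
        b = self.find(size, color)
        if b is None:
            return "I don't see a %s %s block." % (size, color)
        if b.on_top:
            # SHRDLU would plan to clear the block first; this toy just reports.
            return "I would have to move %s off it first." % ", ".join(b.on_top)
        self.holding = b.name
        return "OK."

world = World([
    Block("b1", "red", "big", on_top=[]),
    Block("b2", "green", "little", on_top=["b1"]),  # b1 sits on b2
])

print(world.pick_up("big", "red"))    # OK.
print(world.pick_up("big", "blue"))   # I don't see a big blue block.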
 
i could have sworn it (shrdlu) was available in emacs, and that i'd played around with it, but apparently not.

if anyone feels like writing or understanding this kind of code (old AI in lisp) then read this.
posted by andrewcooke at 4:29 AM on October 11, 2015


As it's unlikely that homo sapiens can be the genesis of machine intelligence directly, those of us of the church of google expect that the first post-human AI will occur spontaneously at the first serious attempt to "turn off" the potential triumvirate of the GAT system (google/amazon/twitter).
posted by sammyo at 5:15 AM on October 11, 2015 [1 favorite]


Person: Pick up a big red block.
Computer: An iron pickaxe or better is required to mine this block
posted by oulipian at 5:26 AM on October 11, 2015 [13 favorites]


My earlier silly comment aside, I've watched AI research from the sidelines, and there is increasingly mind-boggling work in cognitive science in many directions. Understanding how brains work, from the cellular level up through the systems level, just explodes with new sub-specialties. The math and computer science advances are insane in both a theoretical and a practical sense; computers can seemingly hold conversations over the phone. But every honest researcher I've heard discuss the end game admits that no one anywhere has any real clue what an actual AI would be, or when or how it will ever occur.
posted by sammyo at 5:44 AM on October 11, 2015 [1 favorite]


Yay, Winograd. Back in the '80s, Language as a Cognitive Process (amazon link) was so ridiculously exciting to me that I became a fan-boy and set about trying to implement transition network grammars. It took a while, but I eventually got over the lack of a promised second volume. 'Cause, well, semantics is hard.

Then came Understanding Computers and Cognition (co-author Flores deserves a post of his own), which is one of those books with a long tail of influence. A friend used the conversation for action stuff to discover places in conversations between customers and the gas company that consistently led to broken expectations.

Those days are long gone, and a lot of water has been passed. But I still hold affection for Terry.

Also, that hair.
posted by mrettig at 6:54 AM on October 11, 2015 [3 favorites]
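
A minimal sketch of the transition network grammars mrettig mentions, for anyone curious: each network is a set of states whose arcs are labeled either with a word category or with another network, and a sentence is accepted if some path through the top network consumes all of its words. The lexicon, the two networks and the state names below are all invented for illustration; real augmented transition networks add registers, tests and actions on the arcs.

LEXICON = {
    "pick": "V", "up": "PART", "the": "DET", "a": "DET",
    "big": "ADJ", "red": "ADJ", "block": "N", "pyramid": "N",
}

# network -> state -> list of (label, next_state); a label is either a
# lexical category, the name of another network, or None meaning "accept".
NETWORKS = {
    "S":  {"s0": [("V", "s1")],
           "s1": [("PART", "s2")],
           "s2": [("NP", "s3")],
           "s3": [(None, None)]},
    "NP": {"n0": [("DET", "n1")],
           "n1": [("ADJ", "n1"), ("N", "n2")],
           "n2": [(None, None)]},
}
START = {"S": "s0", "NP": "n0"}

def traverse(net, state, words, pos):
    """Return every word position reachable by following `net` from `state`."""
    reachable = []
    for label, nxt in NETWORKS[net][state]:
        if label is None:                        # accepting arc
            reachable.append(pos)
        elif label in NETWORKS:                  # recurse into a subnetwork
            for p in traverse(label, START[label], words, pos):
                reachable.extend(traverse(net, nxt, words, p))
        elif pos < len(words) and LEXICON.get(words[pos]) == label:
            reachable.extend(traverse(net, nxt, words, pos + 1))
    return reachable

def accepts(sentence):
    words = sentence.lower().split()
    return len(words) in traverse("S", START["S"], words, 0)

print(accepts("pick up a big red block"))   # True
print(accepts("pick up block the red"))     # False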


Person: Pick up a big red block.
Computer: Actually, it's...
Person: (unplugs computer)
posted by chimpsonfilm at 7:24 AM on October 11, 2015 [2 favorites]


I once read an account of SHRDLU that came just short of calling it a hoax. Apparently the code was so opaque and full of magic numbers that it only ever ran on the PDP-10 on which it was designed. There were no lessons to be learned from its implementation, and any discoveries made by it ended there.
posted by rum-soaked space hobo at 11:10 AM on October 11, 2015 [1 favorite]


Instead of Google, it should've been named Shrdlu Jr.

Would've made more sense as part of "Alphabet Corporation".
posted by oneswellfoop at 11:22 AM on October 11, 2015 [1 favorite]


No matter how hard we run, we seem to always be 30 years from hard AI.
posted by clvrmnky at 11:44 AM on October 11, 2015 [2 favorites]


>> if anyone feels like writing or understanding this kind of code (old AI in lisp) then read this.

Thirty bucks for a used copy certainly speaks well for its ongoing relevance.
posted by cleroy at 5:09 PM on October 11, 2015


No matter how hard we run, we seem to always be 30 years from hard AI.

Does the bar keep getting raised? If, 30 years ago, you had shown me the permutations my Android phone goes through when trying to parse some meaning out of my voice commands, I'd have thought there was a little person inside it.
posted by bonobothegreat at 5:28 PM on October 11, 2015


The bar remains firmly at the lowest setting.

A grammar parse tree that responds to typical phone activities with no context is not hard AI. It's ELIZA on meth.

Once you are able to have meaningful conversation with your phone, and the phone can suggest or lie about how meaningful it finds the conversation, then maybe that is one small part of hard AI.

The problem is not coming up with great notions of the nature of intelligence, and how to synthesize that. The problem is that our notions of the nature of intelligence change and deepen with every new notion we explore.

Hard AI is like high energy physics, except everything is quantum, the models don't work with each other at all, no one agrees on the thought experiments, and we are still not sure the smallest theories are actually provable.
posted by clvrmnky at 5:59 PM on October 11, 2015 [3 favorites]
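
Concretely, the ELIZA-style behaviour clvrmnky is describing is keyword-and-template matching with no model of the world or of the conversation. Something like the following sketch (the patterns and canned replies are invented for illustration, not Weizenbaum's original script or any real assistant) is roughly what "responds to typical phone activities with no context" amounts to.

import random
import re

# Surface pattern matching only: no memory, no world model, no understanding.
RULES = [
    (re.compile(r"\bcall (.+)", re.I),
     ["Calling {0}.", "Do you want me to call {0}?"]),
    (re.compile(r"\bremind me to (.+)", re.I),
     ["OK, I'll remind you to {0}."]),
    (re.compile(r"\bpick up (.+)", re.I),
     ["Searching the web for \"{0}\"."]),
]

def respond(utterance: str) -> str:
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            return random.choice(templates).format(match.group(1))
    return "Sorry, I didn't catch that."   # canned fallback, no context kept

print(respond("Pick up a big red block"))
# -> Searching the web for "a big red block".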


I take a significant amount of comfort from the fact that hard AI seems to inhabit the same realm as nuclear fusion. Sure, it'd be great in the abstract, but I'm also not entirely sure I want to be alive when it's the norm.
posted by thsmchnekllsfascists at 9:12 PM on October 11, 2015


Person: Pick up a big red block.
Computer: Actually, it's...
Person: (unplugs computer)


... about ethics in human-computer interaction?
posted by speicus at 3:54 AM on October 12, 2015


The first indication that AI as a discipline is more than just buzzwords meant to pick up on grant money will be when the social medias can parse my posts well enough to give me relevant advertising, and when my first reaction to NPC/AI squad mates in an FPS isn't "let me shoot these people so they're not getting in the way of completing the mission".

I think there's been some progress; we're doing things with machine learning for image recognition that weren't happening twenty years ago, and there's a strong argument for the "AI is problems we haven't yet solved" definitional problem.

But I think too that we've found that if computers try to get too smart, then we have the same sorts of "that's not what I asked for!" interaction problems that we have with humans, and crossing that uncanny valley's gonna be a tough slog.
posted by straw at 7:58 AM on October 12, 2015



Person: Pick up a big red block
Computer: There are 14 stores that sell bread near you. Do you want me to show them to you?

posted by Insert Clever Name Here at 8:22 AM on October 12, 2015 [1 favorite]


If you're not aware of what the OP is getting at, Larry Page was the inventor of the "Pageview" webpage ranking system. It's a little-known fact.
posted by Sunburnt at 10:44 AM on October 12, 2015
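
For the literal-minded: the ranking method Page actually worked on under Winograd is PageRank, which scores a page by the (damped) scores of the pages linking to it. A toy power-iteration sketch, with a made-up four-page link graph and damping factor, and with no handling of dangling pages:

DAMPING = 0.85
LINKS = {            # page -> pages it links to
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}

def pagerank(links, iterations=50, d=DAMPING):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - d) / n for p in pages}
        for p, outgoing in links.items():
            share = d * rank[p] / len(outgoing)
            for q in outgoing:
                new[q] += share
        rank = new
    return rank

for page, score in sorted(pagerank(LINKS).items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))   # "c" ranks highest: it is the most linked-to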


Hard AI is like high energy physics, except everything is quantum, the models don't work with each other at all, no one agrees on the thought experiments, and we are still not sure the smallest theories are actually provable.

Makes me think we live in an age analogous to the state of things before Galileo turned his telescope to the sky. We could work at it for centuries before a single technological advance blows apart every notion we've held about our own consciousness.
posted by bonobothegreat at 7:13 AM on October 13, 2015




This thread has been archived and is closed to new comments