Mother of All Demos
December 8, 2008 7:06 PM   Subscribe

Forty years ago, Douglas Engelbart gave the Mother of All Demos.

In this demo, he introduced the mouse (the trackball had been around for 16 years already), the hyperlink (simultaneously invented by Ted Nelson), word processing conventions, expanding hierarchical views of files, image links, group annotations of documents, collaborative editing, separation between views and models, and user testing of productivity software. Many of Engelbart's SRI colleagues went on to Xerox PARC, where the graphical user interface and laser printing were later developed.

Those of you in the Stanford area may wish to attend the 40th anniversary bash from 1-5:30pm on 9 Dec at the Stanford University Memorial Auditorium.

Engelbart pioneered research into the use of computers for the augmentation of human capabilities. In the academic world, this research is continued by researchers in Human Computer Interaction and Hypertext.

The original computer systems which ran Engelbart's demo are very rare now, which is why the Computer History Museum is working on a preservation project to clone NLS. Speaking of history, Engelbart participated in an oral history a few years back.

Modern software which is similar to NLS or descended from it would include Tinderbox, NoteTaker, NoteBook, and MyInfo. Ted Goranson at ATPM has a wonderful series of reviews on NLS-alikes and what makes good ones good.

All of this innovation from Doug did not make him a rich man. But he carries on his initial vision toward an Open Hypermedia System under the auspices of the Bootstrap Institute. Others have taken the idea of augmentation in completely different directions.

(previously here, and here)
posted by honest knave (35 comments total) 26 users marked this as a favorite

 
Pretty amazing how we've essentially gone nowhere in forty years, eh?
posted by five fresh fish at 7:12 PM on December 8, 2008 [2 favorites]


Well, by '76 we'll be A-OK.
posted by George_Spiggott at 7:25 PM on December 8, 2008


We *have* gone from it taking eight years of school to use these programs to just needing to live eight years on this planet. That's something.
posted by wah at 7:32 PM on December 8, 2008


We *have* gone from it taking eight years of school to use these programs to just needing to live eight years on this planet. That's something.

Well, you might have needed eight years of school just to get near one of these things, but I doubt they were that hard to use. I also think kids younger than 8 can use some computer programs.
posted by delmoi at 7:35 PM on December 8, 2008


I should note that although NLS shows some features of a model/view approach, Model-View-Controller was properly formalised by Trygve Reenskaug in the Smalltalk language in 1979.
posted by honest knave at 7:45 PM on December 8, 2008
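For readers unfamiliar with the pattern, here's a minimal sketch of the model/view separation in Python (all class and method names are illustrative, not from NLS or Smalltalk): the model holds state and notifies observers, views render it, and the controller mediates input.

```python
# Minimal model/view/controller sketch: the model owns state,
# views render it on notification, the controller handles input.
class CounterModel:
    def __init__(self):
        self.value = 0
        self._observers = []

    def subscribe(self, observer):
        self._observers.append(observer)

    def increment(self):
        self.value += 1
        for observer in self._observers:
            observer.update(self)


class TextView:
    def __init__(self):
        self.rendered = ""

    def update(self, model):
        # Re-render whenever the model announces a change.
        self.rendered = f"count = {model.value}"


class Controller:
    def __init__(self, model):
        self.model = model

    def handle_click(self):
        # User input goes through the controller, never straight to a view.
        self.model.increment()


model = CounterModel()
view = TextView()
model.subscribe(view)
Controller(model).handle_click()
print(view.rendered)  # count = 1
```

The point of the separation is visible even at this toy scale: two views subscribed to the same model stay in sync for free, which is what made NLS-style collaborative viewing plausible.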


Pretty amazing how we've essentially gone nowhere in forty years, eh?

Nonsense. Since then we've added popups, referrer spam, cross-browser exploits, javascript that resizes your browser, animated .GIFs, the blink tag, ad syndication, thousands of pretexts to publish a site without any content, domain squatting, link farms, and most importantly, SEO consultants. We have not been idle.
posted by George_Spiggott at 7:49 PM on December 8, 2008 [9 favorites]


He forgot "one more thing..."
posted by Rock Steady at 7:53 PM on December 8, 2008


You'll also notice that he's using a chording keyboard interface with his left hand. Interesting how the mouse caught on but chording never really took off.
posted by chimaera at 8:03 PM on December 8, 2008 [1 favorite]
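The keyset works by treating the five keys as a 5-bit pattern, giving 31 non-empty chords. Here's a toy sketch in Python; the mapping table is purely illustrative and is not the actual NLS assignment.

```python
# Toy sketch of a five-key chorded keyset: each simultaneous key
# combination forms a 5-bit pattern, mapped to a character.
# This tiny table is illustrative only, not the real NLS mapping.
CHORD_MAP = {0b00001: "a", 0b00010: "b", 0b00011: "c", 0b00100: "d"}

def chord_to_char(keys_down):
    """keys_down: set of key indices (0..4) pressed together."""
    pattern = 0
    for k in keys_down:
        pattern |= 1 << k
    return CHORD_MAP.get(pattern, "?")

print(chord_to_char({0}))     # a  (key 0 alone -> 0b00001)
print(chord_to_char({0, 1}))  # c  (keys 0 and 1 -> 0b00011)
```

One plausible reason chording never caught on: the mouse is walk-up-usable, while a chord alphabet has to be memorized before it pays off.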


You appear to be trying to build a UI!
posted by It's Raining Florence Henderson at 8:18 PM on December 8, 2008 [2 favorites]


I saw Engelbart speak about five years ago (he was speaking with Alan Kay) and they both expressed regret that the chording mouse never took off. They said that it was much faster to edit documents with it.

Another interesting thing they talked about: Engelbart participated in the early LSD research at Stanford. I wonder sometimes how much that had to do with all this...
posted by twoleftfeet at 8:20 PM on December 8, 2008


Engelbart participated in the early LSD research at Stanford. I wonder sometimes how much that had to do with all this...

John Markoff's book What the Dormouse Said talks about the beginnings of Silicon Valley and the influence of LSD.
posted by eye of newt at 8:53 PM on December 8, 2008


One night a couple years ago we were celebrating my wife leaving a job, so we went to get a drink at the Top of the Mark in San Francisco. It turns out it was Engelbart's 70-something birthday party. He was dancing to the band there. The somewhat cheesy band played 'Twist and Shout' but instead sang it as "click and shout". Strange world.
posted by bottlebrushtree at 9:17 PM on December 8, 2008 [3 favorites]


Thanks for the reference eye of newt! I just did a quick "Search Inside This Book" at Amazon for references to Engelbart and LSD in Markoff's book. I love this quote:
Engelbart's contribution to the creativity session was a toy he conceived under the influence of LSD. He called it a "tinkle toy" and it was a little waterwheel that would float in a toilet bowl and spin when water (or urine) was run over it. It would serve as a potty-training teaching aid for a little boy, offering him an incentive to pee in the toilet.

Maybe LSD didn't influence the development of the mouse, word processing conventions, expanding hierarchical views of files, image links, group annotations of documents, collaborative editing, separation between views and models, and user testing of productivity software. But it did influence at least one of Engelbart's inventions!
posted by twoleftfeet at 9:39 PM on December 8, 2008


I sent my dad this link. And he said:
Thank you for sending this email with the anniversary message. It brought back a lot of memories.

I had joined Sperry in 1967 in a research group developing radiation-hardened circuits. Nearby was another group doing design automation research. The DA people would go off to conferences and come back raving about the developments at SRI and the Xerox Palo Alto Research Center.

In no time at all, Sperry had their own system using these concepts with graphical input. We would have a library of symbols and could plop them on a screen and create circuits that were able to drive automated manufacturing tools. Dom M_, Ed M_, George S_ and Charlie O_ were the pioneers at Sperry, along with teams of others at other companies. There was an early Sperry Family Day where you came to visit and got a Snoopy poster, printed out from our home-grown Design Automation System. This all started from the lecture in these clips.

I noted in the early clip that the pointing device was a puck with a tablet, just like we used. In addition, there was another tablet with 20 control buttons to deliver a range of commands. This was before the invention of pull-down menus.

Great innovation in a time when DARPA would give out seed money and tell researchers to go play around and invent. Our discovery in 1968 was the ability of MNOS semiconductor memories to retain their memory state during a severe nuclear environment. This led to the successes of the early satellite electronics for trips to the outer planets.

Viewing these clips allows me to count my blessings that I got to witness and in a very small way contribute to some of the technology revolution of the last 40 years.

posted by ioesf at 9:47 PM on December 8, 2008 [10 favorites]


Of course, it's also interesting to note how LITTLE progress we've made. Read, say, Nicholas Negroponte's The Architecture Machine today and it's striking. A mouse is still an inadequate input device for capturing ad hoc inputs in a give-and-take process--certainly nowhere near as good as a pencil on a sketchpad. We still don't design systems that allow humans to model and map and refine designs implemented by sophisticated computer processes (Monte Carlo simulations, for instance). Instead, we painstakingly design (too frequently) single processes (a Monte Carlo simulation in an Excel spreadsheet, say) without having designed a second-order ability to manipulate either the models or the processes (which is what humans are really good at).

Is this a failure of the imagination? Or a limitation of technology? Both, right?
posted by minnesotaj at 11:20 PM on December 8, 2008 [2 favorites]


>certainly nowhere near as good as a pencil on a sketchpad.

Tablet PCs and operating systems have been around for a while. Touch screens are finally becoming usable and viable.

Part of the process of exploring new user-interaction options (IMO) is having processing power and energy efficiency catch up to the potential of software. If the Apple Newton had had a multi-core, highly energy-efficient CPU, it could have dedicated MORE resources to handwriting recognition and might have been more successful. (Actually, as someone who letter-prints from my drafting days, I never had a problem with the Newton's handwriting recognition.)

I've had portable touch-screen devices since the Newton and a tablet PC since Windows XP Tablet, and I've even experimented with touch screens and motion capture. Yet nothing seems more natural to me than the keyboard and mouse (albeit wireless these days, with multiple buttons (chords?)).

Have you tried to balance a tablet PC on your lap? In your arm? Ugh - it is not a sketchpad (yet). Or held your arms straight out in front of your face to manipulate things on a touch screen? For hours?

Maybe when the GUI is completely immersive and surrounds you - but not when you are interacting with a small device.
posted by jkaczor at 12:04 AM on December 9, 2008 [1 favorite]


i'm glad i got the chance to see/hear/meet him in person in the last accelerating change conf in stanford sept 2005
posted by infini at 12:27 AM on December 9, 2008


Interesting how the mouse caught on but chording never really took off.

The multitouch trackpad on the new Apple laptops is getting there. Part of the problem is getting everyone to associate a set of gestures with a particular outcome, so that it becomes intuitive.
posted by Blazecock Pileon at 3:44 AM on December 9, 2008 [1 favorite]


honest knave, your last link is broken. It should be: Prof. Steve Mann. I worked with Steve briefly; he's charmingly weird.
posted by scruss at 4:47 AM on December 9, 2008


Pretty amazing how we've essentially gone nowhere in forty years, eh?

Sorry, I can't let this go without comment. Ignoring genuine UI advances, I consider the fact that this degree of technology is available to virtually everybody in the industrialized world a huge deal. A cool demo is very impressive, sure, but empowering millions and millions of people counts for something too.
posted by LastOfHisKind at 5:17 AM on December 9, 2008


Pretty amazing how we've essentially gone nowhere in forty years, eh?

Yeah, I feel as if I can't let this pass entirely without comment either. Our basic interaction paradigm may not have changed much, but I take that as a sign that, even though it may not be perfect, it was more than good enough when it was initially conceived.

The bicycle hasn't changed much in its basic design in far, far longer either - not because we've become stupid, or the bicycle is an obscure, little-used device that would not repay efforts to improve it, but because the design, as is, is so close to perfect that there's no point in trying to do significantly better.

I'm not saying that the WIMP (windows, icons, mice, pull-down menus) paradigm is the be-all and end-all of user interaction and will always exist much as it has the last 40 years, but even now, the revisions we're seeing are just that - refinements and additions, not wholesale replacement. It just makes too much sense as it exists today.
posted by kcds at 6:11 AM on December 9, 2008


Our basic interaction paradigm may not have changed much, but I take that as a sign that, even though it may not be perfect, it was more than good enough

I'd make the argument that we're still using mouse+keyboard+window-metaphor not so much because it makes 'too much sense' as because computer users are so acculturated to it that we literally can't see its flaws anymore.

I recently had the experience of introducing my mother-in-law, who had never touched a computer more complex than a WebTV (remember those?) to her shiny new multitouch macbook. Watching her struggle with concepts like "what's a menubar?" or the difference between a pulldown and a form field or discovering that a window can be moved or resized was a real eye-opener for me: she's a sharp lady, she'll figure it out, but it was a great demonstration of just how much assumed knowledge is built into those interfaces.

Any genuinely radically different mechanism for interacting with your data would feel, for nearly everyone, just as clumsy and counterintuitive as WIMP feels to my mother-in-law -- we would immediately judge it as inferior simply because we're ingrained to do it a different way, even if it were in some absolute sense a "better" mechanism. Any change from the existing metaphor is going to have to be slow and evolutionary for people to accept it; I expect we'll still be using some recognizable variation of keyboard+mouse or keyboard+touchpad for a long time to come.

(Multi-touch is one of those evolutionary steps that I believe will quickly become ubiquitous -- even after just using her laptop for an hour or so I'm still trying to two-finger scroll everything. I also expect the filesystem to atrophy, gradually return to its command-line roots for true geeks only, and be replaced by specialized application-specific file managers for normal users. My mother-in-law never needs to touch the filesystem. She wants her music, it's in itunes. Her email is in Mail. Her documents are in Word. The whole Finder is nothing to her but a desktop to put a photo of her grandkids on.)
posted by ook at 7:57 AM on December 9, 2008 [1 favorite]


ook (and others)

your comments have been insightful and thought provoking, so much so that I'm going to get on a soap box here.

the largest service providers globally - google, microsoft, vodafone, nokia - are all looking at their next billion or 4 customers

these customers, in the lower income demographics of the developing world, have little or no contextual knowledge of ICT devices and very often the mobile phone (cellphone) is their first exposure

ook's observations about his mother-in-law's experiences demonstrate the difference between what feels 'normal' to a digital immigrant or baby and those who are only now clambering over the divide

more so when you take into consideration that she may be exposed to other "push button iconography" via ATMs, washing machines, VCRs and other stuff

now in this context, does WIMP really apply ?

for example Lenovo built information access devices (aka computers) for farmers in rural china based on a remote control device to point simply because a TV was far more familiar than a mouse to these guys

what will the future UI look like?

I think it will be influenced by the real majority who are only now slowly coming online

it will become more intuitive for "humans" to navigate, based on their contextual knowledge and needs, language, literacy levels and culture, not simply that of the designers themselves

/end rant
posted by infini at 9:06 AM on December 9, 2008


Shamelessly ripped off from a friend's Facebook wall:

"The mouse may be forty, but it still says it's 39 on its Craigslist personal ad."
posted by WolfDaddy at 11:04 AM on December 9, 2008 [2 favorites]


it will become more intuitive for "humans" to navigate, based on their contextual knowledge and needs, language, literacy levels and culture, not simply that of the designers themselves

The designers of the OLPC "sugar" UI were, I think, making exactly this bet: since they were designing a computer for people who didn't already use computers, they could start with any UI they wanted to.

While I respect the idealism behind that choice, I personally think it was a noble mistake, setting people up to overcome one digital divide (getting access to computers in the first place) only to be confronted with a new one (being stuck with a UI that nobody else in the world uses).

The reality is that anyone who's going to be designing a new UI is almost by definition already thoroughly acculturated to WIMP interfaces: it's not a question of a new UI emerging from within those next four billion customers, it's a question of existing designers tweaking what they already know for the less digitally literate. (Or the literally illiterate.)

What I fear is that this is, consciously or not, generally going to mean a dumbing-down of the interface, not an improvement on it or growth in new directions. Your example of Lenovo using a remote control instead of a mouse is an absolutely perfect demonstration of this dumbing-down: a remote control isn't a better interface, it's just a superficially familiar one. The Sugar UI is a more complicated question, as it's consciously aimed at children, meaning a certain amount of 'dumbing-down' is maybe appropriate, and some aspects of the design are clearly meant as real improvements over WIMP, successful or not...

I'm wandering off the point, but basically what I was getting at is that I agree that digital divide issues are going to influence future UIs, I worry that much of that influence could easily be in the form of watered-down or badly-repurposed interfaces, not improved ones.
posted by ook at 1:40 PM on December 9, 2008 [3 favorites]


I was under the impression that Xerox management canned the mouse/trackball and Steve Jobs "borrowed the idea". In fact, I believe Steve "borrowed" a lot of ideas from other companies and applied his artistic spin to them.
posted by thankyoumuchly at 1:46 PM on December 9, 2008


ook: I agree with you quite strongly — though the OLPC project wasn't a 'noble mistake' as much as misguided noblesse oblige.

People react strongly to condescension — being told that the desktop software used by first-worlders is too complicated for them; that contrived kiddie interfaces (designed by people who would never use them themselves) are more 'their speed'.
posted by blasdelf at 2:25 PM on December 9, 2008


it was a great demonstration of just how much assumed knowledge is built into those interfaces

UI is a language. And just as a foreign language is utterly baffling to anyone who doesn't speak it, the language of WIMP and of Google Search and of the word processor is not intuitive.

I suspect that just as we're stuck with QWERTY a hundred years later because it's "good enough," we'll be stuck with WIMP a hundred years later. Quite literally, once one has learned to "speak QWERTY," there's little incentive (and a fair amount of disincentive) to learn to "speak Dvorak" or another layout.

When I'm writing here on MeFi, I type about as fast as I can usefully think. The baud rate is a good match. Likewise, on WIMP, most of the time the UI isn't really slowing me down enough to count for anything: I don't think much faster than I can switch applications, call up menus or keyboard shortcuts, etc.

Which isn't to say I wouldn't love to see some radical new UI. I think that multitouch is going to be a remarkable improvement, truly something different and empowering.

The radical change I'd like to see, though, is for better predictive, corrective UI. When the OS starts learning your habits and patterns and can usefully predict and/or correct your actions, that's gonna become a killer feature. Think Bayesian mail sorting, applied to plugging your camera into the port: the OS and photo software will recognize the camera, assume you're doing the usual things, and automatically ... oh, I dunno, eliminate red eye from portraits and put the landscapes into the landscapes folder and prep the picture of the baby to send it to the grandparents, and so on.

That will rock.
posted by five fresh fish at 9:20 PM on December 9, 2008 [1 favorite]
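The "Bayesian mail sorting applied to your camera port" idea can be sketched very simply: count which action follows which context, then predict the most frequent one. Everything below (event names, action names) is illustrative, not any real OS API.

```python
# Toy predictive UI in the spirit of Bayesian mail sorting:
# tally (context, action) pairs the user performs, then suggest
# the most likely action the next time that context recurs.
from collections import Counter, defaultdict

class ActionPredictor:
    def __init__(self):
        self.counts = defaultdict(Counter)  # context -> action tallies

    def observe(self, context, action):
        self.counts[context][action] += 1

    def predict(self, context):
        if not self.counts[context]:
            return None  # no history yet; don't guess
        # Most frequently observed action for this context.
        return self.counts[context].most_common(1)[0][0]

p = ActionPredictor()
p.observe("camera_plugged_in", "import_photos")
p.observe("camera_plugged_in", "import_photos")
p.observe("camera_plugged_in", "charge_only")
print(p.predict("camera_plugged_in"))  # import_photos
```

A real system would weigh priors and more context features (time of day, which camera), but even this frequency count captures the "learn my habits, then act" loop being proposed.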


The radical change I'd like to see, though, is for better predictive, corrective UI. When the OS starts learning your habits and patterns and can usefully predict and/or correct your actions, that's gonna become a killer feature. Think Bayesian mail sorting, applied to plugging your camera into the port: the OS and photo software will recognize the camera, assume you're doing the usual things, and automatically ... oh, I dunno, eliminate red eye from portraits and put the landscapes into the landscapes folder and prep the picture of the baby to send it to the grandparents, and so on.

That will rock.
posted by five fresh fish


yes, i was just musing that all the efforts and brain power currently being applied towards creating or working towards that technological singularity, and/or the intuitive self learning AI, could be applied towards making the stuff small enough to reside on any cellphone - already we're seeing a pattern of the lowest income demographics showing a preference for the most advanced mobiles. why? because when you can't afford a house or car, that's one bling thing that's not only functional and returns on your investment (you can get more work if you're reachable to others) but also a radio, tv, camera etc

taking this thought a step further, what FFF is saying above, is one way that an interface could adapt to the different contextual tech knowledge levels across the world population by having an initial "learning" phase (just like the old voice activated softwares etc) where it picks up through cues your literacy, numeracy, and ICT contextual knowledge levels and then adapts itself accordingly to your needs

if this happens, then "one machine can truly bind them all" ;p across the divide.

i hereby call it the Bridge ;p
posted by infini at 2:00 AM on December 10, 2008 [1 favorite]


Somebody still needs to sit down and design all those interface variations, though; they won't just magically emerge out of the need for them. (If you're bringing "self-learning AI" and the singularity into the question, you're talking about a world that doesn't need UI anymore; you can just say "Hey, AI, please do that thing for me.")

I'm not bullish on adaptive interfaces, either -- they're a nice idea in theory, but wind up being confusing in practice. (Does Microsoft Office still offer their "hey, where did that option go?" adaptive menus?)

As long as we're making predictions, here's mine: general-purpose "computers" will become increasingly rare, and will eventually wind up being things that only software developers and hobbyists will actually own (and at that point they'll no longer be general-purpose, they'll be designed specifically for software development.)

Hardware continues to decrease in price and size to the point where the average user will have a houseful of application-specific devices, with physical and software interfaces tuned for those applications only. (Instead of spending $50 for a videogame for your computer or console, you'll spend $50 for a physical device that plays that videogame. Instead of doing your work on a machine that does spreadsheets and documents and lets you waste time on metafilter, your office will supply you with a spreadsheet device and a document editing device, and you'll have to sneak your websurfing device in when nobody's looking.) For portability and convenience, some of these devices may wind up doing more than one thing -- you might still have one portable device that can check your email and surf the web and act as a phone, for example -- but the general-purpose WIMP UI will gradually fade out of existence to be replaced by dedicated interfaces for small, specific purposes. (Many of which will still show vestiges of their WIMP origins, to be sure, but simplified since they have to handle a lot fewer things.)

All these devices may end up sharing one of a handful of common operating systems under the hood, and they'll talk to each other wirelessly when needed, but the average user won't know or care about how any of that works; to them they'll look like different animals.

This isn't much of a prediction, since the process is already well underway. (iPhone. Kindle. TiVo. Digital picture frames.)
posted by ook at 9:56 AM on December 10, 2008


ah yes, ook, the ubicomp or spimes route to the future.... of the OECD world

what happens in the developing world?
posted by infini at 1:10 PM on December 10, 2008


Heh -- yeah, I guess in a way I'm describing ubicomp, though I hadn't really thought of it that way. ("spimes" are a nifty idea but somewhat beyond my point; all I'm getting at really is the decay of the concept of a one-device-that-does-everything into many-devices-that-do-one-thing-well.)

As for what happens in the developing world... same thing, no? I don't see how anything I'm describing is necessarily limited to the already-developed nations -- possibly the opposite, in fact: it'd be much easier for language- or culturally-specific devices to take hold if they can be made cheaply and individually. With "one-device-to-bind-them-all" you'd have to build a single interface that somehow accommodates everyone's needs -- which would be incredibly complex and necessarily involve a ton of design tradeoffs. Whereas if you know you're building a specialized device targeted at one specific type of user, you can build an interface to cover just those specialized needs and know that you're not in the process unbalancing some UI decision that was necessary for a completely different type of user.
posted by ook at 2:19 PM on December 10, 2008


Whereas if you know you're building a specialized device targeted at one specific type of user, you can build an interface to cover just those specialized needs and know that you're not in the process unbalancing some UI decision that was necessary for a completely different type of user.
posted by ook


that would of course be ideal, but how many different types of cellphones would then be required for every language or type of user? how would you classify them? the users I mean. would this repeat for all high tech gadgets?

UI design perhaps needs to step back a bit and take a big-picture look at the whole set of users again - perhaps even reclassify them on a global scale. your idea might be workable for some product categories, sure, but there are some in common use across the world, such as mobiles, where there might be a tradeoff between extreme customization and mfg constraints and costs?
posted by infini at 6:36 PM on December 10, 2008


how many different types of cellphones would then be required for every language or type of user? how would you classify them? the users I mean. would this repeat for all high tech gadgets?

Well, I'm not so much suggesting that every handful of people in the world would have their own hyperspecialized cellphone; there wouldn't be much point to that because the functionality of a cellphone is already so simple and globally useful. I'm suggesting that the specialization would be around different computing tasks: instead of one gadget which handles my photos and my spreadsheets and my music and everything else, I'd have a bunch of different gadgets which handle each of those tasks individually. Somebody on the other side of the world might have completely different computing needs than I do, so he'd have completely different gadgets.
posted by ook at 2:43 AM on December 11, 2008


i hear what you are saying yes

but on the other side of the world, at the bottom of the social and economic pyramid, he'd probably just have the cash for one gadget, the best prolly that he/she could afford, but one

hence the joke about the ring to bind them all

it'll prolly be finnish ;p
posted by infini at 7:11 AM on December 11, 2008




This thread has been archived and is closed to new comments