Dumb terminals are cool again
January 16, 2006 2:53 AM

Ndiyo systems consist of a central PC running Linux, serving a bunch of ultra-cheap, ultra-thin VNC-ish clients over 100Mbit Ethernet connections. The developers hope that mass production will soon make the clients cost as little as a typical video cable.
posted by flabdablet (32 comments total)
 
The X terminal will never die!

These clients are basically network-attached framebuffers, which is the right way to do it, I suppose, as long as you don't need to display video. I'd go one step farther and make them work over USB: each client box would be a USB hub plus framebuffer. You'd lose some of the flexibility of Ethernet cabling, though.
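Something like this, protocol-wise (a made-up wire format purely to illustrate the idea; a real client would be a scrap of firmware, not Python): the box owns a dumb pixel buffer, and all it ever does is receive rectangles and blit them.

    # Sketch of a network-attached framebuffer client.
    # The (x, y, w, h, pixel-bytes) wire format is hypothetical.
    import struct

    FB_WIDTH, FB_HEIGHT = 1024, 768
    framebuffer = bytearray(FB_WIDTH * FB_HEIGHT * 3)  # 24-bit RGB

    def recv_exact(sock, n):
        """Read exactly n bytes from the socket."""
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("server went away")
            buf += chunk
        return buf

    def handle_updates(sock):
        """Blit incoming rectangles straight into the framebuffer."""
        while True:
            x, y, w, h = struct.unpack("!HHHH", recv_exact(sock, 8))
            pixels = recv_exact(sock, w * h * 3)
            for row in range(h):
                dst = ((y + row) * FB_WIDTH + x) * 3
                framebuffer[dst:dst + w * 3] = pixels[row * w * 3:(row + 1) * w * 3]
            # real hardware would scan this buffer straight out to the monitor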
posted by hattifattener at 3:12 AM on January 16, 2006


So I can finally get rid of my VT100?
posted by swordfishtrombones at 3:16 AM on January 16, 2006


I'd like to do this at home, even. I don't need multiple computers all over the apartment, just one nice fast one with good clients; or maybe one nice fast one at my desk with a few good clients attached to it, and another nice fast one hooked up to the TV/stereo. Then again, I kind of already do this by using old, cheap/free computers as terminals and slaves and whatnot. They'd need to support wireless, though - and allow third parties to make laptop/tablet/PDA hybrids that replace desktops via good hardware design and UI. (As I read it, they wouldn't have any issues with this at all.)

Anyhoo, I'm all for it.

Err, wait. I think I just wished myself out of a fucking job. Nevermind. WINDOZE FOREVAH!!1

Seriously, though, there's a reason why thin clients mostly died: flexibility and real power for the end user. Sure, in an everyday, ideal world the average office worker reads and composes some emails, types up a few text documents, spreadsheets or whatnot, and surfs the web. But then there's that 1% to 10% of "power user" usage where one actually has to do a bit more than that, and the thin client model just fails. And people have been trying to bring back the thin client model for ages and ages, pretty much ever since the PC took hold. Some have marginally succeeded, many have failed - even though there's plenty of uses for thin clients. Kiosks, POS terminals, data entry, internet cafes, schools, libraries, more - and many have succeeded in these niche areas. But it always seems to come at some cost in usability and flexibility.
posted by loquacious at 3:27 AM on January 16, 2006


swordfishtrombones: Heh!

In the first place I ever lived in out of my folks' home, we set up an old-school X.25-protocol RS-232 serial network with a bunch of old-but-small Scanset CRT dumb terminals we picked up for 10 bucks a pop, served by a 286 or 386 or something running a 32-UART-port Digiboard. Dumbterms everywhere! In the kitchen, in the living room, in every room, even in the bathroom!

Everything we did then was pre-web text and multiline BBS anyway, so it was pretty awesome. But man, did those keyboards suck so much ass. I guess you can only type so fast at 300 baud, anyway.
posted by loquacious at 3:32 AM on January 16, 2006


This looks like a good idea. I think it would go down very well here in Kenya.

The internet cafe in a box would be very well received. I'd certainly use a cafe that implemented it. Most of the ones I use are bug- and trojan-infested Windows boxes.

Luckily, the cafe I'm using today has just got around to installing Firefox whilst I was away in Nairobi...

Now, if only we could get the cost of broadband internet here down from $400 per month (for 512kbps), we could be on to something!
posted by davehat at 3:36 AM on January 16, 2006


Note that the people behind this include the co-inventor of the webcam, and the inventor of predictive text messaging.
posted by matthewr at 3:56 AM on January 16, 2006


hattifattener: they're talking about making a version of the nivo with USB ports on it. Presumably they could make these look like local USB hubs to the application server to allow things like flash drive and printer access.

loquacious: Seems to me that the two main reasons for lost usability and flexibility are (a) the historical lack of proper support for multiple concurrent logins on typical Windows PCs and (b) the expense of low-sales-volume thin clients compared to cheap high-sales-volume PCs that can do much more.

Linux has always supported multiple concurrent logins properly (although GNOME screws up multiple logins from the same user), which mostly deals with (a). If these guys really can make thin clients that sell for about the same price as (say) USB hubs, it may well be worth trading off some flexibility.

If 90% of user needs can be met with ultra-cheap thin clients, that makes MUCH more money available to spend on the remaining highly demanding 10%, no?
posted by flabdablet at 3:58 AM on January 16, 2006


Trouble with the internet cafe idea is the "as long as you don't need to display video" part. Most internet cafes are gaming venues as well. I suppose you could segregate gamer and non-gamer users and maintain high-end PCs for gamer use, but this runs up against one of the primary design specs for an internet cafe: complete interchangeability of workstations. Lots of websites now have embedded video and complex graphics, and as time goes by this will only become more common. That said, the powerful server needed to drive a room full of gaming thin clients may be cheaper than a room full of PCs would be, and is certainly far easier to upgrade.
posted by aeschenkarnos at 4:13 AM on January 16, 2006


flabdablet: I agree, honestly. I think it's a great tool - especially if they can get the price down to the point they're talking about. I could even give a few of them a home. USB support would be a must, though.

But I wonder where thin client computing fits in a mobile environment? What about persistent data availability? What about input and output to/from removable media? (A USB port, thumbdrives and a few circulating removable media drives could solve this, with the ability to read from and write to profiles.)

Having worked in offices and campuses both small and large for years, I can see the central server being a bottleneck to the way people work today - or at least to how they're used to working. Plus, is training factored into that cost? What about time spent waiting to access media? What if the server goes down?

Again, it's a great tool, and it can and would be useful in a variety of situations, but I'm not sure it would actually be ideal or workable for 90% of users. That figure just seems a bit high.

And though installing PCs at every desk is more expensive, there are massive benefits to having fully functional PCs everywhere. In a properly configured, served and networked office environment, almost any desktop is as good as any other. A server could go down and be (marginally) replaced by a desktop, or an array of desktops. Every machine could go down except one desktop, and as long as the hub/switch/router chain still worked, that one desktop could still talk to the outside world and function as a standalone.

Though the reliability and security of these (Windows) PCs is often irksome, they're fully functional, and redundant through pervasive duplication.

And in high-pressure, low-margin corporate/office environments, time (especially in terms of man-hours) is most certainly money. One or a few PCs go down? Not such a big deal. The mail server goes down for a while? Still not such a big deal. Network storage servers or webservers? OK, kind of a big deal - but meaningful work can go on.

The main server for a bank of thin clients goes down? Everyone goes down, no work gets done, and money burns up like an oil well fire.

I think this applies even to small businesses - perhaps particularly to small businesses. Unless they can promise real-world, end-user, PC-grade power and flexibility, and server uptime approaching 99.9999% reliability, I'm pretty sure I'd absolutely hate to have that server/client model in a small or medium sized business.

And I've been in thin client environments, both text and GUI, where the server goes down. And - perhaps just to my geeky self - it's insanely, teeth-gnashingly frustrating when you've got all these almost-computers sitting around dead in a blank state while you whop yourself mightily in the head with the LART, screaming "Why! Why oh why didn't we propagate some of that computing power out to the edges where it's needed most!"

Again, I dig the idea and the projected price point. I support the idea.

But I want to point out the drawbacks of thin clients, because the idea is neither new nor really all that shiny (not that you've implied it was, but others may forget), and I can imagine (and have seen, in real-world working environments) many situations where running thin clients just hurts. And when it hurts, it hurts real bad.

Besides, server-client computing seriously offends my anarchic hippie nerd sensibilities. We fought long and hard to wrest this power from the acolytes and adepts. Get yer mitts off it, suckas.

And really, I don't want a thin client. I want a fat portable client. I want a desktop-class computer in a box the size of a PDA, with real storage and a screen. I want to be able to take the whole thing with me and then plug it into a desktop or laptop simulacrum, or use it standalone as a mobile device. I want it to be upgradeable and modular and open.

You want cheap, accessible computing? Make it as dumb and as smart and as flexible and as imaginative and as compatible as Lego bricks. More RAM? Snap on a brick. More drive space? Snap on a brick, or exchange bricks. Component fails? Swap it out. More processors? Snap on a stack of bricks. More battery life? More bricks, rechargeable or holding disposables.

Which is something akin to what thin clients attempt to achieve - except they fail to give that flexibility and power to the end user, who really, ultimately, should have control of their computing experience.

All that counterpoint being said, yeah, there's a place and a market for thin clients, especially cheap, easy ones - particularly in the wide-open, vast realm where computing is still very inaccessible and expensive, which is to say most of the globe.
posted by loquacious at 4:48 AM on January 16, 2006


Some are misreading the idea, I think:
The server simply sends to the nivo -- over the network using a simple compression scheme -- the pixels that need to be displayed on the user's screen.
It would have no trouble at all with video. It would have trouble with everything else - especially text - because of compression artifacts. At least that's how I read it... Of course, they might have smart algorithms for balancing plain text against compressed video - think of the DjVu image compression format or something.
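If I had to guess at the mechanics (pure speculation on my part, not anything Ndiyo has documented), a per-tile heuristic would do it: few distinct colours means text or window chrome, so code it losslessly; lots of colours means photo or video, where lossy is fine.

    # Toy per-tile codec chooser; the threshold of 16 colours is arbitrary.
    def choose_codec(tile_pixels, colour_threshold=16):
        """Return 'lossless' for flat/text-like tiles, 'lossy' otherwise."""
        return "lossless" if len(set(tile_pixels)) <= colour_threshold else "lossy"

    text_tile = [0xFFFFFF] * 60 + [0x000000] * 4  # background plus a few glyph pixels
    photo_tile = list(range(64))                  # 64 distinct colours
    print(choose_codec(text_tile), choose_codec(photo_tile))  # lossless lossy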
posted by Chuckles at 6:09 AM on January 16, 2006


Gaming would be out, though, I think... Or at least the server would have to have one high-end video card for each client. It might be worth it to have these centralized, I suppose, but I suspect the hardware cost savings would be minimal.
posted by Chuckles at 6:12 AM on January 16, 2006


Loquacious: And I've been in thin client environments, both text and GUI, where the server goes down ... you've got all these almost-computers sitting around dead in a blank state ... "Why! Why oh why didn't we propagate some of that computing power out to the edges where it's needed most!"

Yep, but then again, you have all these 2+GHz PCs at work, with separately installed software and OS (and licenses!)... and the server goes down.
No email.
No web/intranet.
No work management/workflow.
No administration/enquiry systems.
etc.

The only way you can win is if each computer is its own server, which obviously will never work ;-)
posted by Chunder at 6:13 AM on January 16, 2006


Chuckles: there's no reason on earth they should be using lossy compression for text regions. I would put money on them using lossless compression everywhere; but maybe they're being smart and using lossy only on image/video regions.

These are the guys behind VNC; you can be sure the text will be perfectly clear.
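For a sense of why lossless is cheap on screen contents, here's a toy run-length sketch (illustrative only; the real VNC encodings, like RRE and Hextile, are more sophisticated). Text and window chrome are mostly long runs of a single colour, which is exactly what run-length coding eats for breakfast:

    def rle_encode(pixels):
        """Losslessly run-length encode a row of pixel values."""
        runs = []
        run_value, run_length = pixels[0], 1
        for p in pixels[1:]:
            if p == run_value:
                run_length += 1
            else:
                runs.append((run_value, run_length))
                run_value, run_length = p, 1
        runs.append((run_value, run_length))
        return runs

    # A mostly-blank scanline collapses from 16 pixel values to 3 runs:
    row = [0xFFFFFF] * 10 + [0x000000] * 2 + [0xFFFFFF] * 4
    print(rle_encode(row))  # [(16777215, 10), (0, 2), (16777215, 4)]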
posted by thparkth at 6:19 AM on January 16, 2006


Oh, God: here we go again with "thin clients." Didn't we learn our lesson from the last round? And the extruded aluminum case on that "nivo" box is gonna cost $15 alone!
posted by ZenMasterThis at 7:04 AM on January 16, 2006


Of course a network based on a single central server and thin clients everywhere else is going to be horrible when the server goes down. But what about a network based on, say, a five-server cluster with redundant storage? Should be pretty bulletproof, and still doable on the cheap. And Linux is good at clusters.

As for end-users having control over their computing experience: it's my experience (in a sysadmin and general computer-fixer-upper capacity) that most people (easily 90%) don't actually want to turn all the knobs and press all the buttons; they just want the damn thing to work. In any business environment, especially one involving any kind of Windows network, that basically requires some form of High Priest to make the damn things work.

It is of course one of life's little ironies that we, the anarchic hippie nerds of yore, have become the High Priests we once despised so much; but it seems to me that what drives the need for a priesthood is not centralization, and not self-promotion by priests, but the sheer bloody complexity of today's PCs.

My Mum just sent me her first ever email the other day, and a quote from that seems appropriate:
...just to let you know i'm sitting at the thing. kicking and screaming may not be visible; this is because typing is something i do not fear. beneath the impassive mask however there beats a full kit of loathing for all the choices that interfere with each volition-to-result connection.
Freedom of choice
Is what you got
Freedom from choice
Is what you want


Sure, those of us who enjoy Doing Stuff with a PC for its own sake are better served by having one. But I really do think that a flood of ultra-cheap but graphically competent thin clients could actually make quite large and beneficial changes to our present computing landscape.

The Internet has always made an architectural distinction between edge hosts (in your living room) and interconnect hosts (in your ISP's rack). What if the standard deal with an ISP gave you not just a connection to the Internet cloud, but a Linux user account with a very high speed wireless connection to your (almost disposable) thin client?

What if Google handed out world-accessible Linux user accounts the way they currently hand out Gmail accounts?

What if, instead of visiting people's houses to fix and upgrade their rapidly depreciating and virus-ridden PCs, all I had to do was connect to their user accounts and fix those?

Today's consumer Winboxen are absolute sitting ducks when they're brought home from the store. They're never configured with limited users for limited use; they're usually preconfigured with a subscription-based antivirus client that dies after a year; they all run the execrable Internet Explorer and the feeble Microsoft Java VM, and they all get infested within weeks of connecting to the net.

Of course it's quite feasible to set them up to run well pretty much indefinitely, but the sales guys will never let the customer know that expert help is useful in this regard. They're all "she'll be right, just follow the easy setup guide and you'll have no worries" - which those of us who "know something about computers" and are constantly being dragooned into Tech Support for Relatives know is crap.

Jef Raskin originally envisioned the Macintosh as a computing appliance: something simple enough that you could just plug it in, switch it on and start using it. Now the Mac has Unix under the hood, and comes with Terminal :)

It seems to me that widespread cheap thin clients actually have the potential to be Raskin's ubiquitous computing appliance, and they'll do that by pushing the admin load one step inward from the edge, to a place where more employment money for nerds lives.

We live in interesting times.
posted by flabdablet at 7:06 AM on January 16, 2006


A few questions from one of the cube dwellers in the trenches - terrified that the network admins are getting ready to take his puter away -

Isn't the server essentially just displaying several different simultaneous user-logins to the different monitors? How is this different from plugging a bunch of video cards into a server and then plugging a bunch of monitors into the server?

As for video - isn't that what this system is set up to do best? It's essentially just sending a video feed to a monitor, isn't it?
And isn't the quality of the video going to be completely dependent on the size, strength, speed, whatever of the server?
And how is this going to bode for my intricate, delicate http tunneling protocols that allow me to play ffxi from my cubicle?

Answers, damn you! I'll not have you take away my precious Dell only to saddle me with some worthless piece of monitor so I can remotely share a single, crappy server with every other hamster in this colony!
posted by Baby_Balrog at 7:10 AM on January 16, 2006


Chunder: Well, that still would happen in the server/thin client model. Server goes down, so does everything else. Except in the thin client model, you can't even boot your computer and work on your local data.

Nor could a thin client or group of thin clients replace a server - either wholly or partially.

Which was my point.

Crashed thin client server = no computing. Period.

Crashed utility servers in networked PC computing = no remote/local communications, but working local computers, local peer-to-peer, and working general web/net access via hardware/firmware routers, switches and unintelligent/dumb hubs.

Granted, real-world cost and productivity analysis is a lot more complex than this simplification, but my hunches, generalizations, and guesstimates are based on real-world experience and gut feel. Call it the school of hard knocks for IT.

To be sure, I have nothing but confidence in the team working on Ndiyo. VNC changed computing, and it's something I use every day. If anyone could get thin clients working well, I'd imagine it would be them.

But I still have issues with the architecture and theory behind thin client computing. And these issues are fundamental to the thin client model - not the team or particular technical specifications of the implementation itself.

To be objective, these could just be the rantings of a paranoid old nerd. Single point of failure, single point of access and control, single point of storage. It just gives me the crawling heebies. Time-sharing = ick, to me.
posted by loquacious at 7:15 AM on January 16, 2006


Yuck. Why use VNC when X is already network transparent (and faster)?
posted by cytherea at 7:25 AM on January 16, 2006


How is this different from plugging a bunch of video cards into a server and then plugging a bunch of monitors into the server?
Performance-wise, it's pretty much exactly that; it's just easier to wire up, because it uses standard networking stuff to move the bits from server to client and back. So you need enough server grunt to give all your users reasonable compute performance, and enough network grunt so video doesn't look completely fuct.
As for video - isn't that what this system is set up to do best? It's essentially just sending a video feed to a monitor, isn't it?
Yes, except it's doing that over a 100MBit network connection, so full-screen video is not likely to look nice. This will doubtless improve once Gigabit Ethernet is down to jellybean prices. Putting provision for XviD streams into the network protocol could fix it too.
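Back-of-envelope arithmetic, to show why (my assumptions: 1024x768 screen, 24-bit colour, 25 frames per second):

    # Why raw full-screen video swamps 100Mbit Ethernet.
    width, height, bytes_per_pixel, fps = 1024, 768, 3, 25
    raw_mbit = width * height * bytes_per_pixel * fps * 8 / 1e6
    print(round(raw_mbit), "Mbit/s")  # ~472 Mbit/s, nearly five times the link

Pushing raw pixels is fine while most of the screen sits still; full-motion video needs either a fatter pipe or real compression.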
And isn't the quality of the video going to be completely dependent on the size, strength, speed, whatever of the server?
And the bandwidth of the network, and the raw display grunt of the client; if any of these are inadequate, video quality will suffer.
And how is this going to bode for my intricate, delicate http tunneling protocols that allow me to play ffxi from my cubicle?
Run that stuff on the laptop hidden in your desk drawer, with the display in a window on your thin client.
posted by flabdablet at 7:47 AM on January 16, 2006


Ah. So this thin client is something I'd need a computer for to effectively use? ;)
posted by loquacious at 7:50 AM on January 16, 2006


If you want to add additional servers to your network, far be it from me to stop you ;)

I feel the same way you do about single points of failure, but the sheer amount of redundancy involved in putting 2.6GHz and 40GB on every desk strikes me as overkill.

But maybe Windows Vista will find ways to soak up most of that excess capacity - who can tell?
posted by flabdablet at 8:32 AM on January 16, 2006


thank you, flabdablet.
posted by Baby_Balrog at 8:33 AM on January 16, 2006


So they've reinvented the Sun Ray, which was itself just a rethinking of an X Terminal...
posted by mrbill at 10:01 AM on January 16, 2006


Why use VNC when X is already network transparent (and faster)?

X requires a lot more horsepower on the thin-client side than VNC does.
posted by mrbill at 10:09 AM on January 16, 2006


The Sun Ray costs $370.
posted by I Foody at 10:10 AM on January 16, 2006


...it's [sending a video feed to a monitor] over a 100MBit network connection, so full-screen video is not likely to look nice. This will doubtless improve once Gigabit Ethernet is down to jellybean prices. Putting provision for XviD streams into the network protocol could fix it too.

Compressed video requires major CPU power to decode, and it gets worse with every generation of codec. Older systems that handle MPEG-2 easily may struggle with DivX/XviD. Even the latest desktop PCs can't handle high-res H.264 very well.

I think there's a bit of a conundrum with regard to video. If the cheapie thin client has enough power to decode video, it probably won't actually be particularly cheap or thin. On the other hand, a server that could handle the workload of decoding many simultaneous video streams would be a big, expensive monster machine, not a cheap commodity PC.

I don't know if it's important. An internet-connected PC without the ability to show silly video clips from collegehumor.com or wherever is hardly a useless throwback, after all.

I wonder about the economics. In places where even a crap internet connection is expensive, won't the cost of the connection itself be far higher than the equipment costs, even for full-blown PCs? It sounds like a tremendous effort to save the wrong dollars.
posted by Western Infidels at 11:04 AM on January 16, 2006


...it's [sending a video feed to a monitor] over a 100MBit network connection, so full-screen video is not likely to look nice.

MPEG-2 and MPEG-4 at standard definition work just fine at about 2.2 Mbit/s. Yes, you do need a decoder. If you're not dealing with an optimized chipset and are using raw CPU to do all the work, you'll need something in excess of a 300MHz Pentium to cope. With chipset and firmware assist you can do a bit better, at the cost of maybe not being forward-compatible with the Next Big Thing in compression.

But in no case will 100Mbit/sec be a bottleneck for streaming video, unless you're putting an awful lot of other stuff on that pipe as well. (And if you are, you should be using switches, not hubs.)
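To put rough numbers on that (my own figures; the 0.7 is just a guessed allowance for protocol overhead and whatever else shares the wire):

    # How many 2.2 Mbit/s SD streams fit on a 100Mbit Ethernet segment?
    link_mbit, stream_mbit, usable_fraction = 100, 2.2, 0.7
    print(int(link_mbit * usable_fraction / stream_mbit))  # ~31 streams

A whole room of clients each watching a different stream still wouldn't saturate a switched 100Mbit segment.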
posted by George_Spiggott at 12:08 PM on January 16, 2006


The developers hope that mass production will soon make the clients cost as little as a typical video cable.

They're also hoping that investors will throw money at them. And are we talking Monster Cables or cheapo cables?
posted by mr.dan at 1:40 PM on January 16, 2006


I think, to a degree, some of you here do not see where this technology could sit.

The name Ndiyo (a Swahili word more accurately translated as "it is so") and the test markets, South Africa and Bangladesh, suggest to me that this technology is targeted at developing markets. A lot of the stuff you guys are talking about, streaming media etc., is inaccessible to the majority of the world's population. It's a luxury of broadband internet access.

I am sitting in the second-fastest internet cafe in Kisumu, the third-largest city in Kenya, and there are 10 terminals plus my laptop sharing a 64kbps internet connection. The fastest cafe in town has 256kbps shared by 8 terminals; however, they won't let me plug in my laptop (grrr).

What video/streaming media/whatever are you going to get through either of those?

Across the road are the main provincial offices of Telkom, the state-owned telecoms company. Sit in the foyer and you'll see that five of the six advisors have shiny PCs. The sixth has a pad of paper.

Looking into the warren of office space behind, you see that the senior staff have computers; however, a lot of the work going on involves pens, calculators, bits of paper, filing cabinets, grey matter, telephones and ingenuity. All this at one of the largest companies in a country that is more developed than much of the rest of the continent.

Let's not even get started on what the schools (don't) have...

Given that this kind of technology aims to be affordable to a market which is not necessarily going to be replacing existing technology more advanced than a calculator, I'd reiterate that this sort of technology is very much needed here. That, or $10 (t-e-n) laptops.

I love my laptop. I wish everyone could have one. Perhaps in my lifetime they will. For now, this sort of solution could seriously help to fill a gap in the rapidly growing digital divide.
posted by davehat at 11:29 PM on January 16, 2006


But in no case will 100Mbit/sec be a bottleneck for streaming video, unless you're putting an awful lot of other stuff on that pipe as well. (And if you are, you should be using switches, not hubs.)

VNC generally streams the uncompressed video being rendered to the screen, using some basic real-time intra-frame compression. It usually isn't pretty.
posted by cillit bang at 1:23 AM on January 17, 2006


hattifattener writes "I'd go one step farther and make them work over USB: each client box would be a USB hub plus framebuffer"

USB is only good for about 5m per cable run (a bit farther if you chain hubs).

loquacious writes "And people have been trying to bring back the thin client model for ages and ages, pretty much ever since the PC took hold. Some have marginally succeeded, many have failed - even though there's plenty of uses for thin clients."

It's always the cost that kills thin clients. Heck, take a look at any bank: they run cheap commodity PCs to emulate the 3270. 3270! You should be able to mount a USB hub and a 3270 interface chip on a flat panel for $20, but it seems it's still better to use a PC. Emulation software plus a PC was cheaper than a 3270 terminal 15 years ago, and it hasn't gotten better. As a plus, once you switch you can play solitaire on your PC.
posted by Mitheral at 11:22 AM on January 17, 2006 [1 favorite]


That is exactly where the open source methodology comes in, though, Mitheral. The reason the prices suck is proprietary hardware, proprietary software, proprietary everything... Carving out a space for open hardware will be an uphill battle, but it is a noble goal. Not only because of the added freedom, but also because it will make doing business cheaper.
posted by Chuckles at 11:31 AM on January 17, 2006




This thread has been archived and is closed to new comments