

The history of DOS
July 9, 2011 8:40 AM   Subscribe

"The story begins unambiguously. A group of IBMers, working on a secret project to build a personal computer, flew to Seattle in August, 1980, to see if [Bill] Gates could supply them with an operating system. He couldn't -- and referred them to [Gary] Kildall [of Digital Research Inc.] When they showed up at DRI's offices the next day ... the company's business manager ... refused to sign their nondisclosure agreement.... [IBM] did get together with Kildall ... a short time later, but they couldn't reach an agreement. At around the same time, [IBM] saw Gates again. [IBM] and Gates both knew of the operating system [Tim] Paterson had built at Seattle Computer Co.... "Gates said: 'Do you want to get [QDOS], or do you want me to?' [IBM] said: 'By all means, you get it."' Gates bought Paterson's program, called QDOS, for $50,000, renamed it DOS, improved it, and licensed it to IBM for a low per-copy royalty fee."
Tim Paterson, the man who created DOS, the operating system that dominated the computer industry between 1981 and 2000, has an occasional blog that provides a fascinating history of the microcomputer industry: Is DOS a Rip-Off of CP/M?; The Contributions of CP/M; Design of DOS; The First DOS Machine; IBM PC Design Antics; and All Those Floppy Disk Formats…
posted by Jasper Friendly Bear (77 comments total) 63 users marked this as a favorite

 
Fun Fact: My second computer ran CP/M off a ROM chip and used microcassettes for storage. This was in 1997.
posted by dunkadunc at 9:11 AM on July 9, 2011 [3 favorites]


Do you mean 1987?
posted by jayder at 9:16 AM on July 9, 2011


I still use a couple of DOS programs and often find that typing at the command line is much more efficient than futzing with the mouse. Love DOS, great post, thanks.
posted by Melismata at 9:19 AM on July 9, 2011


1997. My family is poor.
posted by dunkadunc at 9:30 AM on July 9, 2011 [18 favorites]


From "The First DOS Machine," describing the pre–IBM-PC Seattle Computer Products S-100-based microcomputer:
Microsoft took full advantage of the SCP system capability. In 1988, years after SCP had shut down, they were still using the SCP system for one task only it could perform ("linking the linker"). Their machine was equipped with the full 1 MB of RAM – 16 of the 64 KB cards. That machine could not be retired until 32-bit software tools were developed for Intel's 386 microprocessor.
Wow.
posted by grouse at 9:31 AM on July 9, 2011 [3 favorites]


Gary, Gordon Eubanks, Adam Osborne, Lee Felsenstein, Don Estridge....I salute you!
Jim Warren, not so much.
Thanks for the post.
posted by nj_subgenius at 9:44 AM on July 9, 2011


Gary Kildall also co-hosted Computer Chronicles on PBS between 1983 and 1990. Here’s an episode from 1995 that profiled Kildall.
posted by Jasper Friendly Bear at 9:45 AM on July 9, 2011 [2 favorites]


I was a little frustrated with his "Design of DOS" entry. Much as I enjoy geeking out over file system designs, I wanted him to finish the story he started at the top. Did the company ship a quick and dirty OS? I might never know.
posted by Net Prophet at 10:49 AM on July 9, 2011


Yes, it did: QDOS, later renamed DOS. The one he intended to throw away became the industry standard.
posted by flabdablet at 10:59 AM on July 9, 2011


Man, that takes me back. When I bought my first computer in...'84? '85? I had to decide whether to buy a CP/M or a DOS machine. I chose DOS...and my professional career was all downhill from there. :)
posted by Greg_Ace at 11:08 AM on July 9, 2011


Well, that's not what Microsoft claims... they claim they "improved" it before shipping. So who knows. But it seems quick and dirty to me.
posted by koeselitz at 11:10 AM on July 9, 2011


Well, that's not what Microsoft claims... they claim they "improved" it before shipping.

Someone had to add in memory leaks.
posted by Mister Fabulous at 11:14 AM on July 9, 2011 [8 favorites]


Adding a 9th sector on the track would increase this to 5634 bytes, still far less than the total available. You could almost fit a 10th sector, but not quite (or maybe you could reduce the overhead and make 10 fit!).

I once worked for an outfit that made, among other things, a high-capacity floppy disk controller for the Apple II. It used a dedicated disk controller chip rather than the native Apple II hardware/software hybrid, and fit 800KB on each 80-track double-sided 5.25" disk.

We didn't have to do anything clever at all to fit 10 512-byte sectors per track on 300rpm media, using exactly the same MFM coding and sector formats that IBM eventually used on its own "double density" disks. As far as I can tell, there was never any good reason why the standard IBM 5.25" 40-track floppy should not have held 400KB instead of 360KB.
posted by flabdablet at 11:20 AM on July 9, 2011 [1 favorite]
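flabdablet's arithmetic above is easy to check (an illustrative sketch, nothing from the thread; real tracks also carry gap, sync, and header overhead that the format has to absorb):

```python
# Formatted capacity for 5.25" double-density layouts: tracks x sides x
# sectors x bytes-per-sector. Inter-sector overhead bytes are ignored.

def capacity_kb(tracks, sides, sectors_per_track, bytes_per_sector=512):
    """Formatted capacity in KB (1 KB = 1024 bytes)."""
    return tracks * sides * sectors_per_track * bytes_per_sector // 1024

# IBM's standard PC format: 40 tracks, 2 sides, 9 x 512-byte sectors
print(capacity_kb(40, 2, 9))   # 360
# The 10-sector-per-track layout described in the comment above
print(capacity_kb(40, 2, 10))  # 400
```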


Someone had to add in memory leaks.

Everything I remember about early DOS architecture was in fixed-length arrays, so I'm not sure that there was any room for memory leaks.
posted by grouse at 11:21 AM on July 9, 2011 [4 favorites]


The standard Apple II floppy format was actually not too shabby, either. Even though it used a strict 4μs bit cell, eschewing the 4μs/6μs mixture resulting from MFM, and even though the standard drives were only 35 tracks and single sided, it still managed to fit 140KiB on a disk at 16 * 256-byte sectors per track.

Roland Gustafsson's Fast Loader, which came embedded in several games, used 6 * 768-byte sectors, pushing the capacity to 157.5KiB per disk. That fast loader also had the nice feature of being able to read an entire track in one revolution (no sector interleaving) starting with whichever sector first passed the read head after a track step (no time wasted for sector 0 to come around); it could load a game filling the entirety of a 48KiB Apple II's RAM in two and a half seconds.
posted by flabdablet at 11:37 AM on July 9, 2011 [3 favorites]
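Those fast-loader figures hold up to a quick back-of-the-envelope check (a sketch assuming 300 rpm and one full track read per revolution, as the comment describes; the numbers come from the comment, the code is mine):

```python
# Capacity and load-time arithmetic for the 6 x 768-byte fast-loader format.

BYTES_PER_TRACK = 6 * 768      # 4608 bytes per track
TRACKS = 35                    # standard single-sided Apple II drive
REVS_PER_SECOND = 300 / 60     # 300 rpm spindle

print(TRACKS * BYTES_PER_TRACK / 1024)   # 157.5 KiB per disk

tracks_needed = 48 * 1024 / BYTES_PER_TRACK   # tracks holding a 48 KiB game
read_time = tracks_needed / REVS_PER_SECOND   # one revolution per track
print(round(tracks_needed, 1), round(read_time, 1))  # 10.7 2.1
```

About 2.1 seconds of raw reading, which leaves roughly half a second for head stepping within the quoted two and a half seconds.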


Yeah, DOS was extremely reliable. I'm not even certain what it would mean to 'leak memory' on such a simple system. Do you have any more info, or is that just a cutting remark made in passing?

I was no lover of DOS, having grown up on the Amiga. I knew what a real operating system looked like. But all the same, there was a certain elegance to its simplicity. It did one thing at a time, it did it very well (a.k.a. without bugs), and then it moved on. It was sort of the Charles Emerson Winchester of operating systems. And like Charles, it was hated, but relied on.
posted by Malor at 11:38 AM on July 9, 2011 [7 favorites]


I bought one of Tim's DTV cables for my Mythtv server a while ago. I wonder how often he thinks about Gates buying DOS for $50K.
posted by pashdown at 11:38 AM on July 9, 2011


Dunkadunc is not alone:

First Compy in my family was actually an Atari Computer System in the early 90s.

1995: we upgrade... to an old DOS machine with this old HOYLE card game program, the better animated King's Quest, Wheel of Fortune, Jeopardy! Alley Cat, and some flight sims. Oh yeah, PW for word processing.

1997: we upgrade... to a slightly better machine that ran Windows 3.11

2000: we upgrade... FINALLY, to a better Gateway Machine that ran Windows 98...

Nowadays I just buy a cheap machine every few years...

Lastly, I kind of miss the DOS beep, and the old machine sound effects.

Those were the days.
posted by JoeXIII007 at 11:41 AM on July 9, 2011


Terminate-and-stay-resident utilities were more than capable of leaking memory in DOS, if not very carefully designed. Load two of those from two different software authors and they would almost always disagree about who had the real copy of DOS's old memory limit pointer.
posted by flabdablet at 11:43 AM on July 9, 2011 [3 favorites]


Right, but that wasn't DOS, that was third-party programs. TSRs were a mess, but third-party programs on all operating systems always have been.
posted by Malor at 11:45 AM on July 9, 2011 [4 favorites]


This makes me remember the days of Ralf Brown's interrupt list, the API bible of the time. The last time I remember downloading it (zmodem! Telemate! v32bis!), I seem to recall it was at revision 30 or so. It makes me warm to the heart to see that it got up to revision 61, July 2000.
posted by Rhomboid at 11:53 AM on July 9, 2011 [6 favorites]


And basically you're complaining there that you can't unload TSRs in a different order than you loaded them. If you load TSR A, and it stores the old memory pointer, and then load TSR B, which stores the old memory pointer that includes the space allocated for A, of course they're going to disagree. There was no way for A to know that B had loaded.

It was up to you, the user, to know to remove your TSRs in the reverse order that you loaded them. If B was resident, and you unloaded A, havoc would result. It's just the nature of the beast; the system was never really designed around the idea of having multiple resident programs, and certainly not around removing them after running them. They were a horrible hack to work around a overly simple system.

Yes, a proper OS has memory protection and shared resources and so on, but I'm not aware of any OSes in the era that could do that on microcomputers. Even the very advanced, multitasking AmigaOS had no memory protection, and any running program could instantly crash the whole system. If you had two programs that tried to hook the same function in AmigaOS, you'd get exactly the same kinds of problems that you got in DOS's TSRs.

That sort of problem persisted into the Win2K era, at the very least. Just try installing two virus checkers on Windows 2000, for instance. And Win2K was quite solid, a very well-designed system.
posted by Malor at 11:54 AM on July 9, 2011 [2 favorites]
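Malor's point about unload order can be sketched with a toy model (the class names and addresses here are invented for illustration, not real DOS internals): each TSR saves the free-memory limit at load time and blindly restores its saved copy on unload, so removing them out of order "frees" memory that is still occupied.

```python
# Toy model of the TSR unload-order problem described above.

class Dos:
    def __init__(self):
        self.free_start = 0x0800   # first free byte above resident code

dos = Dos()

class Tsr:
    """Terminate-and-stay-resident program: claims memory at the bottom of
    the free area and remembers where free memory used to begin."""
    def __init__(self, dos, size):
        self.dos = dos
        self.saved_start = dos.free_start  # old limit, saved at load time
        self.base = dos.free_start         # where we stay resident
        dos.free_start += size             # raise the free-memory limit

    def unload(self):
        # Restores the limit we saved -- correct ONLY if we were loaded last.
        self.dos.free_start = self.saved_start

a = Tsr(dos, size=0x400)   # A loads first: free_start becomes 0x0C00
b = Tsr(dos, size=0x200)   # B loads second: its saved limit includes A

a.unload()                 # unloading in the WRONG order
# free_start is back to 0x0800, yet B still sits resident at 0x0C00,
# inside what DOS now believes is free memory: the next program clobbers it.
print(hex(dos.free_start), hex(b.base))  # 0x800 0xc00
```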


pashdown, that link to the PatersonTech website is great. The Early DOS Manuals page has some interesting documents, including this ad for an 8086 "computer" (PDF) and this 59-page DOS manual (PDF) from December 1980.
posted by Jasper Friendly Bear at 12:06 PM on July 9, 2011


Well, yeah, except that TSR was a system call, but DOS provided no API at all to help programs making use of that system call actually do anything, or to allocate and release resources; they pretty much had to install hook chains to get whatever done.
posted by flabdablet at 12:14 PM on July 9, 2011


While in the early days, just "getting something running" was kind of an achievement, a lot of the design decisions later on were pretty short-sighted. I got into an argument with a feature writer at one of the big tech magazines in the early 80s about multitasking. He claimed personal computers shouldn't try and do more than one thing at a time and that if by chance you needed more computing power, you could just add another CPU card and run another copy of the operating system.

He and others like him associated all multiprocessing with time-sharing services, and had no clue that background tasks and threading could provide useful services even in a "one person, one computer" environment.
posted by CheeseDigestsAll at 12:36 PM on July 9, 2011


Oh man - just last night I was watching some videos of the old KayPro line. I have fond memories of that machine. I looked and it looks like it was CP/M based :D

When I first got online it was with a 286 running DOS. If my dad hadn't won 25 grand at the slots, I may not have had a newer computer for a few years after that. My old manager at Taco Bell, they had JUST gotten an old original IBM PC. With amber screen burn-in. This was in like 1996. I think there were lots of us out there. That said, I was probably also one of the first kids to have actually had a computer (TRS-80 CoCo model 1 (with the 32k expansion and new concave keys added a bit later)) when the home computing revolution took off. And our family friend who was also superintendent of my little private xian school actually had a few computers that he put in for us to work with (the KayPro and an IBM PC). And my family supplied a PCjr. :P
posted by symbioid at 12:37 PM on July 9, 2011


Hey! which one of you computer jockeys brought the Geritol?
posted by Twang at 12:42 PM on July 9, 2011 [1 favorite]


The Amiga pushed 3.5" MFM 300RPM disks about as far as they could possibly go - it read and wrote them a whole track at a time, fitting 5632 bytes on a track - for a total capacity, on an 80 track double sided disk, of 880KiB.

I still have the pair of Digicard high-capacity 5.25" floppy drives (the same ones we made for the Apple II) that I built up a tiny Amiga interface card for on matrix board (all it had to do was provide the Disk In signal missing from the 5.25" floppy's native interface). Because our drives ran 80 tracks at 300RPM, just like the 3.5" drives but unlike the IBM high-density 360RPM 8" emulator drives, I was able to use cheap 5.25" floppies and have them look just like Commodore's native 3.5" disks to the Amiga.

Still got a big box of those. Wonder if there's any usable bits left on them? All my Apple II 5.25" floppies are still good AFAIK.

On preview: yes, the Kaypro was indeed a Z-80 based CP/M machine.
posted by flabdablet at 12:44 PM on July 9, 2011


As far as I can tell, there was never any good reason why the standard IBM 5.25" 40-track floppy should not have held 400KB instead of 360KB.

Ok, this is a general hand-wavy anecdote that might be totally wrong because I'm getting hella old, but if I'm recalling correctly (and I think it's more about the original Apple 2 disc drives than anything, but if I'm remembering correctly this applied to the original 360k drives for the IBM PC as well) -

The reason why they were so careful about not overdoing the sector/track density on early floppies was head alignment issues. It was purely a mechanical engineering problem as the tolerances weren't tight enough to support the track densities people were pushing for.

The mechanical tolerances in the first and second generation of microcomputer floppy drives were atrocious. It was so bad you could take two drives fresh off the assembly line and write a disk on one that couldn't be read on the other, so they engineered in a lot of play in the track/sector density, so even if the head was only half-way aligned over the track it could still read the sectors.

This problem would often show itself if you tried to write some additional data to a disk written by another drive with badly aligned heads, or if the heads on your drive were badly aligned. Basically you'd end up with tracks overlapping each other and it'd corrupt the disc and make sectors unreliable or entirely unreadable. I remember it was so bad that it was often about a 50/50 chance if you copied some data to a floppy that it would work or not on a friend's computer.

This became a self-solved problem as manufacturing processes and tolerances vastly improved, leading to the ability of people to invent or hack higher density encoding schemes as well as the standardization of higher densities.

For the first few generations of disk drives the magnetic media inside the floppy discs remained the same. Sure, some disc manufacturers (Like Elephant Memory or Memorex) experimented with finer oxide particles, or protective/lubricating coatings, better linings for the plastic envelopes housing the discs, etc. But in most cases a regular low density floppy is identical to a high density floppy save for the indicating notches punched into the sleeve.

(During the mid to late 80s and early 90s you could even buy a handy punch to turn "single density" floppies into DD or HD floppies. They made them for 8", 5.25" and 3.5" floppies.)
posted by loquacious at 1:09 PM on July 9, 2011 [3 favorites]


(During the mid to late 80s and early 90s you could even buy a handy punch to turn "single density" floppies into DD or HD floppies. They made them for 8", 5.25" and 3.5" floppies.)

Nah, 3.5" floppies had a small plastic piece that slid up and down as a standard feature, unlike the 5.25" holepunch.

(Goddamn you, loquacious: I turn 31 in two weeks. This is the first time I have ever felt old.)
posted by Ryvar at 1:15 PM on July 9, 2011 [2 favorites]


Nah, 3.5" floppies had a small plastic piece that slid up and down as a standard feature, unlike the 5.25" holepunch.

Those are those new-fangled DD/HD discs you youngsters used. The original SD discs had no such feature. Those moveable plastic bits were for backwards compatibility for use on old SD drives, not to turn an SD disc into an HD disc.

The punches used on the hard/rigid plastic sleeves of a 3.5" floppy were... kind of brutal. I remember one model that basically just mashed/mangled a semi-circular divot where the DD/HD divots would go, often leading to bits of broken plastic that would shed. The other model punched a hole clean through the plastic, leaving the envelope exposed to potentially collect dust and debris.

(Goddamn you, loquacious: I turn 31 in two weeks. This is the first time I have ever felt old.)


Heh, just wait until you go into a corner store and their radio is playing an old but very popular track you remember, and when you go up to the counter you casually remark "Man, I haven't heard this song in ages." and the kid behind the counter says "I have absolutely no idea who this is!". In my case it was Jane's Addiction, and he might as well have punched me in the face with a brick that had "old man" embossed in it.
posted by loquacious at 1:31 PM on July 9, 2011 [2 favorites]


C:\memories\good show.exe
posted by smirkette at 1:39 PM on July 9, 2011 [3 favorites]


oh dear. Janes Addiction is passe? Please tell me the kid at least thought it was good music and wanted to know who it was so he could seek it out.
posted by symbioid at 2:04 PM on July 9, 2011 [1 favorite]


The first 3.5" disks held 720K, as I recall. I don't think there were ever 360K 3.5's on the PC, were there? Was there an older generation of these disks that I didn't use?

The 720s had a sliding tab for write-protect in one corner. Then there were HD versions that would do 1440K. They had a notch punched out in the OTHER corner, exactly the same size and shape as the write-protect, but without the slidy tab. So the punch that loquacious is talking about would put a hole through the jacket where the HD cutout should be, compromising its integrity somewhat, but giving you twice as much space. For a lot of people, I guess that was a good trade, although I never did it myself. I just bought the HD floppies.

The drives on the 8-bit computers were indeed terrible for alignment, especially the 1541, the drive for the Commodore 64. Many people bought alignment programs and fixed the drives themselves. I was working in a little computer store, and we had a fixit guy who did alignments all the time on floppies. I think it was about $70 to get one aligned, but our tech took real pride in sending drives out in as close to perfect condition as he could manage. And I think they must have stayed aligned too; I remember neither complaints nor repeat business on any given unit. I guess once he fixed them, they stayed fixed.

It looked pretty easy, actually, but I never tried doing it myself.
posted by Malor at 2:06 PM on July 9, 2011 [1 favorite]


Oh, as an aside, it is so cool to be able to buy 16GB and bigger SD cards. Something like 12,000 floppy disks balanced on the tip of a finger, smaller than a postage stamp.

I remember being hugely impressed with 256 meg SD cards. :)
posted by Malor at 2:08 PM on July 9, 2011


The reason why they were so careful about not overdoing the sector/track density on early floppies was head alignment issues. It was purely a mechanical engineering problem as the tolerances weren't tight enough to support the track densities people were pushing for.

The mechanical tolerances in the first and second generation of microcomputer floppy drives were atrocious. It was so bad you could take two drives fresh off the assembly line and write a disk on one that couldn't be read on the other, so they engineered in a lot of play in the track/sector density, so even if the head was only half-way aligned over the track it could still read the sectors.


Yes, this was true even into the days of the IBM AT; I constantly had customers who had disks that could be read and written by their machine but nobody else's. We usually had to copy the disk off on their machine to an external hard drive, then fix their floppy drive, and copy it back to a freshly formatted disk.

Alignment was the bitch of all problems with the early microcomputers. I remember when the Apple floppy disk came out, invariably the customer would come back with the drive out of alignment within a few months. So we decided to start looking at incoming drives. All the new drives were barely within alignment. Our store had the advanced disk alignment kit (oscilloscope required) so we'd do QA on every single drive that came in. If it was out of alignment, we'd fix it and bill Apple under dealer DOA warranty. If the drive was within alignment, well, we had it on the scope anyway, so we'd put it into perfect alignment and NOT bill Apple for those.

Well, after a few months, Apple discovered how many new drives we were billing them for aligning. They shut us down and said they would not reimburse for any DOA disk alignments. Dammit, I was making good money at that. Oh well, I still did QA on all new disks, it saved me lots of future problems.

And let's not even get into the days when IBM offered an upgrade from single sided floppy drives to double sided. Oh what a pain in the ass. And then I could go on about the Osborne floppies: there were different drives for the left slot and the right slot. I had one customer's Osborne come back from a factory repair and it just didn't work. I had to tear down the whole computer before I discovered they put a left handed drive in the right handed slot.

Well anyway, with the Osborne, I guess we're getting back to CP/M turf. And the reason you have a PC today is because of CP/M. Apple and other little computers like the VIC-20 didn't really have anything recognizable as an OS. CP/M gave a lot of the features we were used to on minicomputers like a DEC PDP-8; it was wildly popular. And DOS seemed like a huge loss of features compared to CP/M, and especially the multiuser version MP/M.

I used to sell big Vector Graphic MP/M S-100 boxes with big storage on 8 inch floppies, then hook up one daisywheel printer and 6 cheap video terminals. You could set up a whole office for word processing with one CPU. You could not do that with PCs easily. We didn't really get that workgroup capability back with DOS, ever. It wasn't until the Mac started shipping with the LaserWriter that people started getting the idea that they wanted to share one big laser printer for their office, and then all these stupid PC networking hacks came out (ever seen a parallel printer autoswitch? They sucked) because the OS didn't support it.

But to get back to the main story line, I had lots of writers as customers who bought early PCs and put DR-DOS on them, which was basically the next gen version of CP/M. It was just fine for running the Wordstar app they loved, it even ran Lotus 1-2-3, they had a native DR-DOS version. But soon the Microsoft monopoly was completed, and DR-DOS was out of the running.
posted by charlie don't surf at 2:17 PM on July 9, 2011 [4 favorites]


The 720s had a sliding tab for write-protect in one corner.

Oh, crud. Malor's right, I'm wrong.
posted by Ryvar at 2:44 PM on July 9, 2011 [1 favorite]


WTF? This far into the Geritol-fueled reminiscences and no mention yet of TurboDOS on S-100 boxes?
posted by fredludd at 3:26 PM on July 9, 2011 [1 favorite]


Oh, crud. Malor's right, I'm wrong.

Noob! You kids and your fancy high density double sided discs. I bet you don't even know how to load a program from a data cassette. Why, in my day we all had to learn how to whistle at 300 baud just to play "Let's watch the screen draw a box and then beep three times" because no one even thought of inventing bootloaders yet. That game was amazing. It took about three hours to play through and kept us entertained for hours. The way the green monochrome phosphor would burn in was especially exquisite and subtle, and if you turned up the contrast and brightness really high you could actually watch the traceback of the electron beam etching that pixel line by line.

Hell, I didn't even see a "Hello World" program that worked without crashing until 1982. The first time someone thought of that 10 Print "Hello World" 20 Goto 10 trick there was fierce debate for months about whether or not it was safe. They were worried it would cause processors and memory boards to catch fire, and some hobbyists were worried it would cause all of the processors everywhere to catch fire if it was ever even run once.

Those were heady times, let me tell you. You really haven't lived until you've debugged an old Centronics printer interface and a dot matrix printer with a live goat!
posted by loquacious at 3:32 PM on July 9, 2011 [8 favorites]


Of all the devices to disappear from computers, I think the floppy drive is one of the least mourned. Is there one that is actually missed less? Well, maybe PORT/IRQ jumpers…
posted by jepler at 3:32 PM on July 9, 2011


What was the actual difference between DD and HD (i.e., 1.2MB) 5.25" floppies? I'm guessing the media was physically different. Were HD discs made of a finer magnetic material or engineered to require a different strength of magnetic field? Were 720K and 1.44MB (and 2.88MB) 3.5" disks the same sort of thing?
posted by acb at 3:37 PM on July 9, 2011


Different magnetic coatings allowed higher densities on floppy disks.
posted by jepler at 3:43 PM on July 9, 2011


Noob! You kids and your fancy high density double sided discs. I bet you don't even know how to load a program from a data cassette. Why, in my day we all had to learn how to whistle at 300 baud just to play "Let's watch the screen draw a box and then beep three times" because no one even thought of inventing bootloaders yet.

Oh you noobs and your fancy data cassettes. We didn't have no stinking cassettes, we had punched tape. First you had to pull the bootloader tape through the reader. That was only a few feet. Then you had to pull the full OS tape through the reader. We didn't have one of those fancy hand-crank spools, so we had the punched tape reader set near the door. We'd prop it open, hang on to the end of the tape, and then run down the hall, pulling the tape after us. When we got all that OS crap loaded, we could put a 9 Track Mag Tape in the drive and load away.

Now don't even get me started about plugboard programming and mechanical card sorters. And don't call this a "Geritol-fueled reminisce," I'm barely middle aged. I was just a computer kid when there was no such thing as computer kids.
posted by charlie don't surf at 3:46 PM on July 9, 2011 [4 favorites]


Of all the devices to disappear from computers, I think the floppy drive is one of the least mourned. Is there one that is actually missed less? Well, maybe PORT/IRQ jumpers…

I would disagree with that. Floppies were actually really cool, and a fairly neat trick. They were a lot better for microcomputers than analog data-cassette tape drives or even the paper-punch tape hobbyists were originally using.

They were the primary mode of mass storage for microcomputing for a long, long time. People wrote books on floppy discs, bought and sold games, shared data, loaded operating systems. You couldn't really ask for a more reliable, more dense and more portable (and more affordable) storage media for the era. For all the problems it had, it was practically a platonic ideal considering all the other problems that early computers had.

I know people who had home computers well into the early 90s that still only used floppies and had no fixed (hard) disk as is so common today. Their entire mass storage system would be a few hundred floppies in carefully sorted caddies and trays. (Man, remember that? A really good floppy caddy could set you back 50 bucks!)

There's a whole bunch of stuff about computing that's much more awful. I remember having things like joystick connectors that were literally as fragile as the pins on a DIP package for a chip. Two rows of very brittle metal legs you'd have to carefully insert into a socket, and lock a fiddly little lever down to hold it in place. Or cables like the original Centronics parallel connector were huge, bulky and difficult to use, and often broke if they didn't have substantial structural engineering and strain relief.

Oh, here's one I absolutely hated. My family had a Franklin Ace 1000 which is a slightly larger, dumber and slightly brain damaged carbon copy of an Apple 2+. For starters, that inclined keyboard was torture to type on for extended periods. My mom somehow wrote books on it. But it had one incredibly amazing industrial design flaw - the reset button was directly below the keyboard wrist rest on the front.

Just in the exact right spot to be pressed if, say, you were a kid laboriously entering in BASIC code listings for a game from a book or magazine that you had propped up against the desk. I don't think I ever managed to learn not to do that long enough to enter all 5000 lines for that Star Trek game.
posted by loquacious at 3:55 PM on July 9, 2011 [3 favorites]


Then you had to pull the full OS tape through the reader. We didn't have one of those fancy hand-crank spools, so we had the punched tape reader set near the door. We'd prop it open, hang on to the end of the tape, and then run down the hall, pulling the tape after us.

I was, of course, mostly making stuff up. But this is awesome. I'm guessing you had a photo-optical reader instead of a mechanical one? If you went too fast would it crash the buffer?
posted by loquacious at 3:57 PM on July 9, 2011


Oh, here's one I absolutely hated. My family had a Franklin Ace 1000 which is a slightly larger, dumber and slightly brain damaged carbon copy of an Apple 2+.

But, the Franklin Ace 1000 had an awesome manual!
posted by Jasper Friendly Bear at 4:09 PM on July 9, 2011 [3 favorites]


What was the actual difference between DD and HD (i.e., 1.2Mb) 5.25" floppies? I'm guessing the media was physically different. Were HD discs made of a finer magnetic material or engineered to require a different strength of magnetic field? Were 720K and 1.4Mb (and 2.8Mb) 3.5" disks the same sort of thing?

As jepler points out above the coatings evolved, yeah.

Much later in the lifespan of a floppy if you bought discs that were advertised as lower density, they were usually actually the same as the higher density discs. It was just cheaper to manufacture one kind of disc instead of maintaining 3 different coating production streams. The newer discs worked just fine on older drives, so those in the know would save some money by buying "cheaper" SD or DD single sided or double sided discs and upgrading them with the punching tool.

Though sometimes you could get older discs to "upgrade" but you risked losing data if your read-write heads in your drive couldn't deal with the low magnetic flux of the old coatings.

And that wikipedia article reminded me of something I totally forgot about - the little stick on labels you'd use to cover the write protect notches on 5.25" floppies. You'd get a sheet of them in a box of floppies. Some were just paper stickers, some were foil, but others were very expensive looking die-cut Lexan plastic if they were premium floppies. Theoretically you could just use a bit of common masking tape or scotch tape. It's just interesting to think about how wholly unnecessary and "expensive looking" many early consumer computer products were just for the sake of marketing. The personal computing equivalent of an expensive or satisfying sounding car door, if you will.
posted by loquacious at 4:09 PM on July 9, 2011 [1 favorite]


But, the Franklin Ace 1000 had an awesome manual!

It did. And it was a fucking trip, written by some burnout hippy neo-libertarian acidhead or something. It was very obviously thumbing its nose at Apple through much of it, too.

It also included schematics of the mainboard like the original Apple 2 manuals.

For some reason when we bought it used some of the things that came with it were actual OEM Apple 2+ operating system and utility discs, as well as an Apple 2 manual. I'm guessing the original owner ordered a set of discs and manuals since the Franklin Ace was a direct rip off and would basically be functionally the same (and probably less confusing to read) as the Franklin manuals, especially since he would be booting from the official Apple discs.
posted by loquacious at 4:15 PM on July 9, 2011 [1 favorite]


In the early 1970s, my physicist father got an HP2100 computer system to control the gamma ray analyzer in his lab. It had a teletype machine and a high-speed optical reader for those inch-wide paper tapes. There was something magical about running the Basic interpreter, a 3 inch diameter roll of that tape, through the reader; it would end up as a puddle of tape around your ankles 4 feet across and 8 inches deep, but you'd grab the end, hook it in the electric pencil eraser that had been fitted with a reel to wind the stuff back up, and it would all magically withdraw back from the puddle onto the reel.

SD cards can't do that.
posted by localroger at 4:18 PM on July 9, 2011 [1 favorite]


SD cards can't do that.

I find that a fairly suitable replacement for that thrill is to look at an image like this (from this article) and imagine all of those billions and billions of logic gates crammed into a flake of silicon a fraction of the size of your pinky fingernail, all silently clacking away and going through their tiny, sub-microscopic molecular changes just so someone can take a fuzzy snapshot of their cat looking ridiculous or listen to Lady Gaga on their pocket supercomputer phone.

Because that's pretty far out and wild, when you really think about it.

But, yeah, it doesn't quite have the thrill of a full size RLL Winchester drive winding up like a jet engine and clacking away with all its science-fictiony noises, or the look and feel of a whole row of 9-track tape drives spooling away, or the heft of a stack of removable rigid discs in a discpack, or its later grandchildren - SyQuest, Bernoulli and Jaz drives.

I've never once imagined I was piloting a Buck Rogers spaceship into cosmic battle while operating a nearly silent netbook. And Macbook Airs still kind of freak me out like they aren't actually real. Hell, I'm still not so sure about microSD cards.
posted by loquacious at 4:39 PM on July 9, 2011 [4 favorites]


The first 3.5" disks held 720K, as I recall. I don't think there were ever 360K 3.5's on the PC, were there? Was there an older generation of these disks that I didn't use?

HP had a bunch of early 1980s PCs (some with touchscreens! And Unix!) that used 3.5 floppies. I'd have to dig around in my cupboards to check the capacity.
posted by rodgerd at 6:04 PM on July 9, 2011


loquacious: "Hell, I'm still not so sure about microSD cards."

Each MicroSD card has a tiny little voodoo spirit trapped inside that makes it work.
This makes it VERY IMPORTANT that you don't break one open, because if the voodoo spirit gets out it's going to go for the closest thing with a soul, and that's pretty much the end of you as your friends know you.
posted by dunkadunc at 6:15 PM on July 9, 2011 [1 favorite]


HP150's, rodgerd. I sold a few of them and wrote the software. Our selling point was that HP had never, ever, in its entire history, obsoleted a computer without supplying an object-code compatible replacement.

The HP150 series was the first computer HP ever did that to. A couple of airlines were highly annoyed as I recall. Fortunately our investment wasn't too deep and we were able to migrate to PC's. I did continue to use the DOS-only dev tools for a long time; I was used to working in an environment of pure DOS at application level and hardware through assembly calls when that wasn't good enough, and while the assembly calls were different from the calls to the arcane HP150 API, it did smooth the transition.
posted by localroger at 6:47 PM on July 9, 2011


I went to school with Kildall's kids. Good kids, actually. One's now an artist.
posted by Joseph Gurl at 7:35 PM on July 9, 2011


Malor wrote: I remember being hugely impressed with 256 meg SD cards. :)

I remember being impressed by an 8MB CompactFlash card (hell, my 8MB USB flash drive was also rather impressive). And of course, IBM's microdrives were marvels of engineering. What was it, 250MB in a CompactFlash Type III slot?

Before that, if you wanted to move the contents of a hard disk between systems, you either bought an expensive removable hard drive (who made those? I forget) or physically moved the hard disk between systems, presuming that the recipient had a spare IDE port. (unless they had SCSI)

And to think, that's all very modern compared to the subject of this post.

Actually, now that I think about it, the first CD burner I saw was also very impressive. As well it should have been, given that it retailed somewhere between $5,000 and $10,000.
posted by wierdo at 11:25 PM on July 9, 2011


I remember this computer.
posted by symbioid at 11:30 PM on July 9, 2011


Malor wrote: "I remember being hugely impressed with 256 meg SD cards."

I'm still impressed. To me they still represent a few hundred boxes of floppies. I hold it and think "How did you fit all the disks in there?"

BTW, sometimes I really do miss floppy disks. CDs and DVDs still seem flaky in comparison: "What burning speed do I want?" and "Finalize the disk when finished?" and "Do you want error checking?" I used to just type "copy sourcefile targetfile" and know that it would work. I don't think Windows has ever supported optical disks as reliably as DOS supported magnetic ones.

What I don't miss is the printer paper with the tractor-feed spines, where you had to rip each page of the document off individually and get them stapled together in a neat pile, with no torn pages and everything in the right order, while buzzing on coffee and no sleep, because of course the essay is due right now.
posted by Net Prophet at 12:07 AM on July 10, 2011 [2 favorites]


Tim worked for MS for many years, didn't he, and despite kinda getting the shaft compared to Bill Gates, did alright for himself via MS stock and all that, eh? I don't know the man but used to (very badly) compete against him in automobile TSD rallies in the early 1990s.
posted by maxwelton at 3:59 AM on July 10, 2011


Yes, the first 3.5" IBM-compatible disks were 720KiB, exactly twice the capacity of the 5.25" floppy. They ran exactly the same data coding and sector format as the 5.25" drives, but they used 80 tracks instead of the 5.25" drive's 40 tracks. The later 1.44MiB drives were mechanically identical but ran exactly twice the data rate; that made all the bit cells on the disk half the size, requiring a finer-grained, higher-coercivity oxide coating on the disks.
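The geometry described here makes the capacity arithmetic easy to check. A quick sketch (sector and track counts as stated above; note the "1.44 MB" marketing figure is really 1440 KiB):

```python
# Capacity arithmetic for the 3.5" formats described above: identical
# geometry, with the high-density format doubling the data rate and
# therefore the sectors per track (18 vs 9).
SECTOR = 512
TRACKS = 80
SIDES = 2

print(9 * SECTOR * TRACKS * SIDES // 1024)    # 720 (KiB)
print(18 * SECTOR * TRACKS * SIDES // 1024)   # 1440 (KiB), sold as "1.44 MB"
```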

The reason why they were so careful about not overdoing the sector/track density on early floppies was head alignment issues. It was purely a mechanical engineering problem as the tolerances weren't tight enough to support the track densities people were pushing for.

Head alignment wasn't the issue limiting sectors per track, though; that was rotational speed accuracy. A lot of early drives used belt driven spindles and suffered badly from wow, as frictional effects between disk and head assembly, and between disk and sleeve, interacted with elasticity and slippage on the drive belt. Disk controllers and formats needed to be designed to allow for drive rotation rates +/-5% off nominal, often within the same disk track.

The mechanical tolerances in the first and second generation of microcomputer floppy drives were atrocious

The first generation drives used a spiral-grooved cam and follower for head positioning and they were indeed awful. Apple's decision to cost-cut the track 0 detector switch, relying instead on pulling the head outward often enough to guarantee that it was as far out as it could go and therefore repeatedly bashing it against the track 0 mechanical stop, didn't help either (this was the cause of the characteristic Apple II bootup clatter).

This became a self-solved problem as manufacturing processes and tolerances vastly improved, leading to the ability of people to invent or hack higher density encoding schemes as well as the standardization of higher densities.

AmigaDOS achieved its high floppy capacity without doing anything innovative to the data encoding format. Like Gustafsson's Apple II fast loader before it, it simply always rewrote a whole track on every disk write. That meant Amiga floppies didn't need gaps between address marks and data sectors, because the disk head was not being switched from read mode to write/erase mode at sector boundaries. With no gaps there was room for 11 sectors per track, so using the exact same bit-level encoding that IBM used to get 720KiB on a disk, the Amiga achieved 880KiB. And yes, that was on 3.5" hard-shell floppies; but the direct-drive spindle motors and taut-band head positioners universally used on the drives for those had also been available on 5.25" drives for years by then.
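The Amiga figure can be checked the same way: identical bit-level encoding and geometry to the IBM 720 KiB format, but with the inter-sector gaps reclaimed for two extra sectors per track.

```python
# The Amiga capacity claim above: 512-byte sectors, 80 tracks, 2 sides,
# but 11 sectors per track because whole-track writes need no gaps
# between sectors.
SECTOR = 512
TRACKS = 80
SIDES = 2

print(9 * SECTOR * TRACKS * SIDES // 1024)    # 720 KiB with 9 sectors/track
print(11 * SECTOR * TRACKS * SIDES // 1024)   # 880 KiB with 11 sectors/track
```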

Floppies had actually got really good by the early nineties. But by about 1998 they'd become a totally price-dominated commodity, and everybody seemed to have forgotten how to make them properly. This was the dawn of the era of the preformatted floppy that you wrote once and then threw away because you could neither read what you'd written nor write anything new nor even reformat successfully, and the beginning of the end of the floppy's reputation as a reliable storage medium.

those in the know would save some money by buying "cheaper" SD or DD single sided or double sided discs and upgrading them with the punching tool.

In the Apple II community, the most frequently used punching tool was one that cut a second write-enable notch on the other edge of a 5.25" floppy's jacket, allowing it to be flipped over. As far as I know, nobody ever made floppy disks with oxide on one side only, and "double-siding" Apple disks in this way seemed like a pretty economical way of using all the oxide you'd paid for.

It was a false economy, though. Unlike a proper double-sided drive with a read/write head on each side of the disk, the Apple drives had a head on one side and a simple felt loading pad on the other. After about a year's operation, the felt pad would typically have picked up enough stray bits of oxide and room dust to turn it into sandpaper, and would scrub the hell out of its side of the disk. Flipping the disk also ran it backwards in its jacket, releasing even more stored dust and crap to be picked up by the felt pad. Flippy disks were horrible and had a well-deserved reputation as data sinks.
posted by flabdablet at 6:56 AM on July 10, 2011 [6 favorites]


It is worth pointing out that Amiga floppies were terrible, extremely prone to data corruption. I suspect it may have been the filesystem, rather than the actual mechanisms or sector layout, but they were just awful. You had to be so, so careful with them.

And they kept writing to the disk a long time after the program returned control to you. Part of being careful was being absolutely certain the in-use light was out before ejecting one. That was rough for people who were used to most other machines of the era.
posted by Malor at 7:46 AM on July 10, 2011


Apple's other family, the Mac, initially used the same bit-level encoding as the Apple II had done. Its disk controller was called an IWM, for Integrated Woz Machine, and was pretty much a single-chip encapsulation of the lean, mean 8-chip controller that Wozniak had originally designed for the Apple II. Instead of the 16 256-byte sector per track format the Apple II family had always used, the Mac disks used 10 512-byte sectors for a track capacity of 5KiB, giving the original single-sided 80-track 3.5" Mac drive a total capacity of 400KiB and the later double-sided version 800KiB. Macintosh 800KiB disks used identical physical media to that used for IBM 720KiB and Amiga 880KiB formats.

Like its Apple II predecessor, the IWM did all its bit-level coding in strict 4μs cells. That made it incompatible at the most fundamental level with every other disk controller chipset on the market, all of which had by then standardized on MFM coding with a mix of 4μs and 6μs timings between signal transitions, and that's why reading IBM floppies on Macs and vice versa was impossible until the advent of the industry-standard SuperDrive (which needed HD media and couldn't read the old Mac floppies either).

4μs/6μs MFM was pretty much the end of the line in floppy disk bit encodings, because by then all the encoding innovation R&D effort had moved into the small hard disk market.

All the innovation in floppy encodings had happened years before, as manufacturers tried to find ways to avoid spending money on the inelegant, overcomplicated FM and MFM disk controllers on offer. Wozniak's Disk II encoding is probably the best known of the non-FM/MFM schemes, but there were others.

Woz used a technique called GCR (Group Code Recording) that you still see variants of today on optical media. His first Apple II disk controller encoded each five data bits as an 8-bit on-disk pattern; later improvements to the controller logic (retrofittable to original controllers by swapping out a socketed PROM chip) allowed that to be extended to 6 data bits per 8-bit disk pattern.

Commodore also used GCR on the Commodore 64's 1541 floppy drive; their scheme encoded 4 user data bits as a 5-bit on-disk pattern. The 1541 was a grotesquely crippled bit of interesting engineering. It was actually an "intelligent" peripheral, containing a 6502 CPU chip every bit as quick as the one in the C64 itself and a goodly chunk of RAM. It should have shat upon every other floppy drive then on the market from a great height, but for some reason I shall never understand, Commodore chose to link it to the C64 via a 2400 baud serial port, making it very nearly as slow as Apple II cassette tape.
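The nibble-to-code mapping behind a 4-to-5 GCR scheme can be sketched in a few lines. This is an illustrative table built from the usual GCR run-length constraints (the recorded stream must never go too long without a flux transition), not Commodore's or Woz's actual tables:

```python
# GCR-style encoding sketch: map each 4-bit nibble to a 5-bit code chosen
# so that no codeword contains "000", and no codeword starts or ends with
# "00" - so even across codeword boundaries there are never more than two
# consecutive zeros (i.e. missing flux transitions) on the disk.
valid = [c for c in range(32)
         if "000" not in f"{c:05b}"
         and not f"{c:05b}".startswith("00")
         and not f"{c:05b}".endswith("00")]
ENCODE = dict(enumerate(valid[:16]))          # nibble -> 5-bit code
DECODE = {v: k for k, v in ENCODE.items()}

def gcr_encode(data: bytes) -> list[int]:
    """Encode bytes as a list of 5-bit GCR codes (two codes per byte)."""
    out = []
    for b in data:
        out.append(ENCODE[b >> 4])
        out.append(ENCODE[b & 0x0F])
    return out

def gcr_decode(codes: list[int]) -> bytes:
    return bytes((DECODE[codes[i]] << 4) | DECODE[codes[i + 1]]
                 for i in range(0, len(codes), 2))

assert gcr_decode(gcr_encode(b"DOS")) == b"DOS"
```

There are just enough 5-bit codes satisfying those constraints to cover all 16 nibble values, which is why 4-to-5 GCR works out so neatly.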

The grandaddy of all nonstandard disk encoding schemes, if I recall correctly, was the one used on the Ohio Scientific Challenger, which used exactly the same encoding for the disk-head bit stream as everybody else was using for asynchronous serial comms over RS-232 - in fact the heart of their disk controller was a UART running 1 start bit, 8 data bits and 1 stop bit at 250 kilobits per second.
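The framing overhead of that scheme is easy to work out: every 10 bit cells on disk buy 8 data bits.

```python
# Raw byte rate of a UART-framed disk bitstream as described above:
# each byte costs 1 start + 8 data + 1 stop = 10 bit cells at 250 kbit/s.
bit_rate = 250_000
bits_per_byte = 1 + 8 + 1
print(bit_rate // bits_per_byte)  # 25000 bytes/second, before sector overhead
```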
posted by flabdablet at 8:01 AM on July 10, 2011 [6 favorites]


Corruption on Amiga floppies had two main causes: one was the simple fact that they did write a whole track for every disk write, meaning that every nonsequential write involved rewriting a bunch of possibly-quite-unrelated data. The other was related: the only way you can get away with a scheme that always rewrites whole tracks is to use an extensive RAM cache so you can avoid using read/modify/write sequences for every single sector you write, and the most natural way to run that is asynchronous to user tasks. As you correctly point out, it was quite normal for an Amiga to have appeared to have finished a disk update, only to come to life again a few seconds later as the cache flush kicked in - and if you happened to have pushed the eject button before then: bye bye, filesystem.
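A minimal sketch of the deferred whole-track write-back being described (hypothetical, illustrative code, not AmigaDOS internals):

```python
# Sector writes land in a RAM track cache; the physical track is only
# rewritten when the cache is flushed, so ejecting the disk between
# write_sector() returning and flush() running loses the data.
SECTORS_PER_TRACK = 11
SECTOR_SIZE = 512

class TrackCache:
    def __init__(self, read_track, write_track):
        self.read_track = read_track      # callable: track -> bytearray
        self.write_track = write_track    # callable: (track, data) -> None
        self.track_no = None
        self.buffer = None
        self.dirty = False

    def write_sector(self, track, sector, data):
        if self.track_no != track:
            self.flush()                  # moving to another track forces a flush
            self.buffer = self.read_track(track)
            self.track_no = track
        off = sector * SECTOR_SIZE
        self.buffer[off:off + SECTOR_SIZE] = data
        self.dirty = True                 # caller sees "done"; media not yet written

    def flush(self):
        if self.dirty:
            self.write_track(self.track_no, self.buffer)
            self.dirty = False
```

An OS would typically call flush() from a timer some seconds after the last write, which is exactly the window between the in-use light and the eject button that Malor describes.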

It would all have worked a lot better if, like the Mac, the Amiga had had paperclip-only force-eject designed in. But it didn't.
posted by flabdablet at 8:08 AM on July 10, 2011


and if you happened to have pushed the eject button before then: bye bye, filesystem.

Or if the machine crashed before flushing its cache, and crashes were hardly an unknown event on Amigas, especially early on. I don't think the paperclip eject would have helped as much as it might have.

I've read somewhere that the 1541 was actually designed to be enormously faster than it was, but some helpful Chinese engineers removed a bunch of 'superfluous' traces from the communication lines. And Commodore didn't find this out until they'd built a big pile of them, and they simply could not take the financial hit of redoing the drives. They had to ship or die, so they shipped. And we suffered.
posted by Malor at 8:17 AM on July 10, 2011


Hey, I had an Apple II - I didn't suffer a bit! I just pointed and laughed :-)
posted by flabdablet at 8:29 AM on July 10, 2011


"Hey, I had an Apple II - I didn't suffer a bit! I just pointed and laughed"

Oh no! It's "My computer can kick your computer's ass" all over again! I haven't heard that one in decades.
posted by Net Prophet at 9:38 AM on July 10, 2011 [2 favorites]


According to this forum post, it is a bug in the MOS 6522 (aka VIA) that resulted in the disk interface being so slow—it had to be bit-banged instead of transferring 1 byte at a time.
posted by jepler at 9:45 AM on July 10, 2011


Commodore had known about the 6522 shift register problem for a long time, since it also affected the VIC-20. They replaced the buggy 6522 with two 6526s in the C64, so the "ship or die" explanation doesn't seem very satisfactory. They could have made the same modification to replace the 6522 in the 1541, I'm sure, if they'd had enough time. But that would have rendered it incompatible with the VIC-20. The smart move would have been to modify the 1541 ROM so that it could negotiate either transfer mode (using the now non-broken hardware shift register vs. falling back to the slow software-mediated mode), but I guess they couldn't pull that off, so instead they left the buggy chip in so that it remained compatible with all the existing gear in their line.

In reality it didn't matter that much as everyone ended up just buying one of these or something similar. I even modded mine with a toggle switch pointing out of the top so that it could be easily disabled without damaging the sensitive expansion port pin connectors from repeated stress, although now that I've googled it a bit I see that it could be soft-disabled from an internal menu, a feature that either I was never aware of or which my model didn't have.

And yeah, those joystick connectors sure were some finicky shit. I think we ended up getting some extension cables that we left plugged in to the machine all the time, giving a sacrificial male connector that could be ruined by repeated use that was easily replaceable by just getting another cord. It wasn't until the gaming consoles really made it big that manufacturers figured out that a cheap DB9-knockoff connector made out of plastic was not going to last for shit.
posted by Rhomboid at 10:07 AM on July 10, 2011 [1 favorite]


Heh, and just to come full circle, a similar serial problem would haunt IBM PC compatibles -- the 8250/16450 vs. 16550 UART. The former had only a single byte buffer, so it required an interrupt on every received character, and if the ISR was too slow to service that interrupt you lost data as the next byte would overwrite it. This made high speed serial communications really problematic on early PCs. It wasn't too bad of a problem if you were running a standard terminal program, but if you were trying to do anything fancy like multitask with DESQview or Win 3.1 or OS/2 2.x, you were toast. The 16550 added a 16 byte buffer so that the interrupt rate was much lower and the chance of dropped data was almost totally eliminated, but for a long time the 16450 persisted as the standard and was included in your usual "Multi-IO" IDE/serial/parallel controller card that you were assured to get with a new machine. (Heh, motherboards without any integrated peripherals. What a crazy world.) You had to go out of your way to find a dedicated serial card with a 16550.
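The buffering difference is easy to quantify. A sketch, assuming 8-N-1 framing (10 bits per character) and the 16550's 14-byte receive trigger level, one of its selectable levels:

```python
# Interrupt-rate arithmetic behind the 16450 vs 16550 difference described
# above, at the UART's top speed of 115,200 bit/s.
baud = 115_200
chars_per_sec = baud // 10          # 10 bits per character with 8-N-1 framing
irq_16450 = chars_per_sec           # one interrupt per received character
irq_16550 = chars_per_sec // 14     # one interrupt per 14 buffered characters
print(irq_16450, irq_16550)         # 11520 822
```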
posted by Rhomboid at 10:18 AM on July 10, 2011 [1 favorite]


I've read somewhere that the 1541 was actually designed to be enormously faster than it was, but some helpful Chinese engineers removed a bunch of 'superfluous' traces from the communication lines. And Commodore didn't find this out until they'd built a big pile of them, and they simply could not take the financial hit of redoing the drives. They had to ship or die, so they shipped. And we suffered.

The "Chinese" part is unlikely, given that back then, they actually manufactured computers in the West. (I think my first VIC-20 may have been made in West Germany; Commodore also had factories in Britain and North America.)
posted by acb at 11:07 AM on July 10, 2011


I don't suppose anyone here has a recording of an Apple ][ drive booting up? I'd love to use it as a ring tone.
posted by Joe in Australia at 11:22 AM on July 10, 2011


Everything that has ever existed will eventually turn up on YouTube.
posted by flabdablet at 11:30 AM on July 10, 2011 [4 favorites]


Oh no! It's "My computer can kick your computer's ass" all over again! I haven't heard that one in decades.

Apple ][: the Chuck Norris of 8-bit computers.
posted by flabdablet at 11:34 AM on July 10, 2011


HP150's, rodgerd. I sold a few of them and wrote the software. Our selling point was that HP had never, ever, in its entire history, obsoleted a computer without supplying an object-code compatible replacement.

I spent years playing with a 150, complete with hard drive, plotter, and dot-matrix printer, that my Dad's work had gotten rid of in the mid-eighties. I don't have it any more, alas, or the laptop version with its slimline HP-IB cables (a 110, perhaps?).

I do, however, still have the luggable HP-UX workstation tucked away.
posted by rodgerd at 11:51 AM on July 10, 2011


That AppleCrate, by the way, uses a bit-banging software serializer for its inter-board network (NadaNet). I wrote the lowest-level layer of that code. It uses 8μs bit cells with a 31μs inter-byte gap, achieving throughput in excess of 10,000 bytes per second.

Even though the 1541 was forced to communicate one bit at a time under processor control, 300 bytes per second is and always was pathetic.
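Those timing figures check out:

```python
# Checking the NadaNet throughput figure quoted above: 8-bit bytes in
# 8 us bit cells plus a 31 us inter-byte gap.
us_per_byte = 8 * 8 + 31            # 95 microseconds per byte
print(1_000_000 // us_per_byte)     # 10526 bytes/second, a bit over 10,000
```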

/assembly language geek strut
posted by flabdablet at 11:54 AM on July 10, 2011 [3 favorites]


One of the reasons 1541 drives got misaligned so easily was copy protection. I remember distinctly the sound of the head mechanism banging three times against the stop under control of the disk's copy protection scheme. A painful sound that eventually resulted in those $70 realignments.
posted by lhauser at 1:03 PM on July 10, 2011 [1 favorite]


loquacious: or its later grandchildren - SyQuest, Bernoulli and Jaz drives.

Oh my, it's been almost ten years now since I salvaged my external SyQuest 88MB drive for its power supply and tossed the actual drive in the garbage. It had been two or three years since I'd even had a machine with a parallel port, the driver for it didn't work very well with Win98 or at all with Win2K / XP, and other solutions were making those heavy fragile 88MB disks look more and more ... quaint.

When I first got it, it was like a miracle, an external drive almost as big as the built-in hard drive. And for five years or so it was THE way to transfer big files (meaning more than a megabyte, ha) from one machine to another.

But I only ever owned three disks for it, two of which were mostly backups, and as files got bigger I used it less and less. Then, one day, the computer I needed to transfer the file to had a parallel port but the driver wouldn't work.

And by then, USB thumb drives were out about the same size as the SyQuest carts. With no moving parts, not needing to be plugged into the wall for power, and so on.

.
posted by localroger at 3:08 PM on July 10, 2011


Floppies had actually got really good by the early nineties. But by about 1998 they'd become a totally price-dominated commodity, and everybody seemed to have forgotten how to make them properly. This was the dawn of the era of the preformatted floppy that you wrote once and then threw away because you could neither read what you'd written nor write anything new nor even reformat successfully, and the beginning of the end of the floppy's reputation as a reliable storage medium.

I find, in the rare case I need to get a floppy working, that formatting it about 10 times gets the job done.

Most of the time it is to write a floppy image onto it, so just writing the image a bunch of times gets the job done too.
posted by gjc at 6:47 PM on July 10, 2011




This thread has been archived and is closed to new comments