The Life of a Data Byte
March 9, 2020 12:01 PM   Subscribe

This article is going to travel in time through various mediums of storage as an exercise of diving into how we have stored data through history. By no means will this include every single storage medium ever manufactured, sold, or distributed. This article is meant to be fun and informative while not being encyclopedic. Let’s get started. 34,128 characters (5800 words) from Jessie Frazelle, aka @jessfraz.
posted by cgc373 (29 comments total) 21 users marked this as a favorite
 
And not one mention of cuneiform.
posted by ZaneJ. at 12:10 PM on March 9 [4 favorites]


> 5800 words

Wouldn’t that depend on what machine it’s stored on?
posted by deadgar at 12:54 PM on March 9 [9 favorites]


If you feel jessie's post is too brief, the classic What Every Programmer Should Know About Memory clocks in at 110 pages and includes an index.
posted by pwnguin at 2:21 PM on March 9 [7 favorites]


Cool. Related, I know we had a post a while back on the GitHub Arctic Code vault. Lots of buzz-word bingo for storage nerds, but they are using a variety of new long term storage technologies including:
  • 3,500-foot film reels, using silver halides on polyester with a projected lifespan of up to 500 years
  • glass quartz platters and a femtosecond laser to achieve a projected lifespan of up to 10,000 years
Sounds neat - but with the usual hand-waving about whether the technology/knowledge to access the data will last as long.
posted by inflatablekiwi at 2:30 PM on March 9 [2 favorites]


I had always wondered if the Dead Sea Scrolls are first copies or more like a backup drive for when Titus invaded.
posted by clavdivs at 2:46 PM on March 9 [1 favorite]


...the team was tasked with developing a reliable and inexpensive way to load microcode into the IBM System/370 mainframes. The project then got reassigned and repurposed to load microcode into the controller for the IBM 3330 Direct Access Storage Facility, codenamed Merlin.
I believe the codename for the 23FD was 'Minnow', not 'Merlin', although that may have been the 33FD. The 23FD was indeed used to load microcode on 370 mainframes.
It was also used to load microcode on the 3830 control unit, not the [3333] controller.
posted by MtDewd at 3:10 PM on March 9


Four years later in 1967, a small team at IBM started working on the IBM floppy disk drive, codenamed Minnow [...] The project then got reassigned and [...] codenamed Merlin.
posted by axiom at 3:46 PM on March 9


Microdrives were a god damned miracle of miniaturization and are probably still the most complicated mechanical device ever made.
posted by Your Childhood Pet Rock at 3:46 PM on March 9 [1 favorite]


Microdrives led to the iPod, which led to the smartphone. Although smartphones have no microdrives in them, this was a huge stepping stone.
posted by sjswitzer at 3:52 PM on March 9


Looks like Merlin was a version of OS/2. Of course, that doesn't mean they weren't reusing the name, but I admit I read right over the Minnow reference.
[edit]- Yes, the 3330 was codenamed Merlin, but the controller for the 3330 did not load the microcode, that was the control unit.

The project then got reassigned and repurposed...
When I first read that, I thought Jessie meant that it was taken away from the mainframes and given to the disk drives.
Got to slow down and stop skimming...

In 1973, I got trained on the 370/145, which had a 23FD. The instructor told us that we were not to use the term 'floppy', as it was trademarked by another company. We were to use the term 'diskette', or 'flexible diskette', at least formally.
posted by MtDewd at 5:36 PM on March 9


In French, diskette stuck, as "disquette".
posted by Monday, stony Monday at 6:38 PM on March 9


got like 5 paragraphs in, got confused about "vacuum channel," found this video, mind blown. https://www.youtube.com/watch?v=7Lh4CMz_Z6M

can anyone recommend something that explains vacuum column tape stuff for someone like me who is technical but not in the hardware engineering way THIS shit is technical? (IM A SOFTWARE PERSON OKAY)
posted by capnsue at 6:44 PM on March 9


btw by "this shit" i'm asking specifically about tape drives and this vacuum column magic. like, i think i understand what problem it was trying to solve, but 1. how did it work and 2. how did they come to this solution? sorry if this is a derail
posted by capnsue at 7:00 PM on March 9


Ah, the cassette tape days. I remember my dad contemplating floppy drives and doing a cost comparison of the 8 inch vs 5 1/4 inch drives available for the Apple ][. He waited around until '85 and did the same for the Mac vs Amiga (got me an Amiga) for those 880k drives (and stereo and color).

Off to university, and I ended up changing those 7 inch tapes all night long doing backups. Walking around with 6 or 7 on each arm, hulking about like Popeye. Then at the end of the night, backing up my Amiga, which had been dialed in all night from my dorm room and was equipped with one of those full-height 5 1/4 80Mb MFM hard disks (connected via an Adaptec MFM-SCSI controller); the whole setup had been nicked from the spares for the computer room's Sun 3/50 workstations.

Sometime in the early 90's when I built my Linux machine, of course it had SCSI drives, of course I had to build a kernel to support it, and of course I ended up with a QIC-80 tape backup system... what nerd doesn't have tape backups?

Now, there's a stack of CDR, DVD-RW just sitting on the shelf that haven't been touched in ages, a dozen Zip disks that I don't even know what's on them, half a dozen USB and SD and MicroSD cards laying about, and backups are just another cheap disk.

We edge ever closer to the Gibson-esque lemme just slot this little sliver of storage behind my ear and presto! I can speak Russian now. Lucky kids.
posted by zengargoyle at 7:08 PM on March 9 [2 favorites]


capnsue: the IBM 729 Vacuum Column Tape Drive. Tape spools were heavy, and with the stop/start nature of record-based data processing, the spools would be starting and stopping all the time. To overcome the inertia of the spool, the vacuum column drew in a working loop of tape and used it as a buffer against tape reel inertia.
posted by scruss at 7:23 PM on March 9 [1 favorite]


That and the vacuum column of tape acted as a shock absorber to stop the giant motors from ripping the tape apart while it was seeking so quickly.
posted by Your Childhood Pet Rock at 7:27 PM on March 9 [2 favorites]


capnsue, I can't comment directly on the mechanics of vacuum column tape drives (as these were a little before my time), but I wrote UNIX drivers for their successors, which operated without vacuum assist. For whatever reason, this was not required by the next generation of drives. (On preview: maybe because the tapes got lighter and were more easily stopped and started using the vacuum as a buffer?) I suspect that the old drives were called upon to perform more random, non-sequential reads prior to the advent of higher density hard drives, but I could be wrong. When I did my work, tape was more commonly, but not exclusively, a medium for backups, and I/O was almost always a sequential operation.
posted by Insert Clever Name Here at 7:29 PM on March 9 [1 favorite]


P.S. I was fortunate enough to work with a very clever more experienced engineer in those days who came up with a way to distribute our version of UNIX on open reel tape. It involved hand crafting a highly optimized UNIX filesystem that we wrote to the front of the tape (as its own volume: olds like me know what that meant) and which our computer's bootstrapping firmware could then load into memory. Once loaded, the FS had the binaries required to read the actual software update off the second volume of the tape.

In those days we bothered to do things like this because every stinkin' byte of memory and every CPU cycle was precious.
posted by Insert Clever Name Here at 7:38 PM on March 9 [5 favorites]


(On preview: maybe because the tapes got lighter and were more easily stopped and started using the vacuum as a buffer?)

The older computers back in the '50s and '60s couldn't keep up with the data rates that linear tape could provide so the tape needed the slack to stop and start regularly. By the time we get to UNIX land, we have memory in the drives that can act as buffers, allowing the drive to keep writing when the CPU couldn't give it its full attention. With the tape running linearly far more often, the requirement to stop and start rapidly disappeared. With that gone, plus the improvements in tape materials and the more accurate control of the motors in the reels, tapes didn't really need all that slack anymore.

Getting to the mid '80s, DEC's DLT format didn't even need a capstan anymore. Instead it could control the tension on the tape by controlling each reel's motor.
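The buffering effect is easy to see in a toy model. This is a sketch with invented numbers (one block per tick from the tape, a host that keeps up only 70% of the time), not any real drive's firmware:

```python
import random

def tape_restarts(buffer_blocks, n_ticks=10_000, host_rate=0.7):
    """Count stop/start cycles for a drive with a given buffer size.

    Toy model with invented numbers: the tape delivers one block per
    tick while running; the host manages to consume a block on only
    host_rate of ticks.  The tape halts when the buffer fills and
    spins up again once the host has drained it.
    """
    buffered, restarts, running = 0, 0, True
    for _ in range(n_ticks):
        if buffered > 0 and random.random() < host_rate:
            buffered -= 1            # host drains a block from memory
        if running:
            buffered += 1            # tape streams another block in
            if buffered >= buffer_blocks:
                running = False      # buffer full: coast to a stop
        elif buffered == 0:
            running = True           # buffer empty: spin up again
            restarts += 1
    return restarts

random.seed(1)
tiny = tape_restarts(1)
random.seed(1)
big = tape_restarts(64)
print(f"1-block buffer: {tiny} restarts; 64-block buffer: {big}")
```

With effectively no buffer the drive stops and restarts constantly; with a 64-block buffer it streams in long bursts and restarts orders of magnitude less often.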
posted by Your Childhood Pet Rock at 7:45 PM on March 9 [3 favorites]


In fact, the TU56 DECtape drive from 1970 used motor control with no capstans. It also had much smaller reels than the old IBM drives, at 3 7/8 inches in diameter vs. 10.5, and longer start and stop times: 150 ms and 100 ms, vs. 10 ms for both on the older IBM.

The IBM tape is going from 0 to 4.25 mph in 0.01 seconds, and then a bit later back to 0 in 0.01 s.
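For scale, that works out to roughly 19 g on the tape; the arithmetic is just a unit conversion (assuming constant acceleration over the 10 ms window):

```python
# Back-of-the-envelope check on those start/stop numbers (plain unit
# conversion; assumes constant acceleration across the 10 ms window).
MPH_TO_MS = 0.44704                 # m/s per mph

v = 4.25 * MPH_TO_MS                # tape speed: about 1.9 m/s
t = 0.010                           # start (or stop) time: 10 ms
a = v / t                           # required acceleration

print(f"tape speed: {v:.2f} m/s")
print(f"acceleration: {a:.0f} m/s^2, about {a / 9.81:.0f} g")
```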
posted by Monday, stony Monday at 8:04 PM on March 9 [1 favorite]


(Thank you this makes sense now. I knew that the mefi nerds would come thru! You are all awesome. Also, keep posting stuff like this, I love it.)
posted by capnsue at 8:22 PM on March 9 [1 favorite]


The older computers back in the '50s and '60s couldn't keep up with the data rates that linear tape could provide so the tape needed the slack to stop and start regularly.

So: the exact opposite problem then. So hard to imagine tape I/O faster than the CPU and memory, but there ya go.

But weren't those tapes also doing a lot of forward/backward repositioning as well? There was very little use of "block dev" tape I/O when I was writing my drivers, though we supported that.
posted by Insert Clever Name Here at 8:26 PM on March 9


But weren't those tapes also doing a lot of forward/backward repositioning as well?

The LINC laboratory minicomputer of the 1960s had only 2048 words of memory. Its video display text editor used the tape as virtual memory, so its specially designed tape drives were always zipping back and forth: Scroll Editing: an on-line algorithm for manipulating long character strings, IEEE Trans. on Computers 19, 11, pp. 1009–15, November 1970. This paper is by Mary Allen Wilkes; her story and pictures of her with the LINC appear here.
posted by JonJacky at 9:02 PM on March 9 [3 favorites]


For whatever reason, this was not required by the next generation of drives. (On preview: maybe because the tapes got lighter and were more easily stopped and started using the vacuum as a buffer?)

The tapes stayed the same (0.5" wide on up to 10.5" reels), but with more processing power and large data buffers the tape drive could just read a bundle of blocks on the assumption that the system would need (at least some of) those soonish, leisurely coast to a stop, deliver the next requests out of buffer memory and read a couple of blocks again even ahead of the host request as the buffer emptied.

About the vacuum columns: the capstan plus the small length of tape in the columns would be light enough to allow near-immediate stopping and starting; just the capstan drive motor was about 100 watts already. Vacuum sensors basically measured how deep the tape loop inside the columns was, and those were controlling the reel motors. Several tape drive manufacturers also offered cheaper versions of this type of drives, with spring-loaded arms instead of the vacuum columns.
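The sensor-to-reel-motor arrangement is essentially a little servo loop: the deeper the loop dips below its target, the harder the reel motor pays tape back out. A sketch with made-up units and gain (nothing here comes from a real drive's control system):

```python
def reel_servo_step(loop_depth, target, capstan_draw, gain=0.5):
    """One tick of a loop-depth servo (an illustration, not firmware).

    The vacuum sensors report how deep the loop hangs in the column;
    the reel motor then pays tape out in proportion to the error, so
    the loop recovers after the capstan jerks tape out of it.
    """
    depth = loop_depth - capstan_draw         # capstan shortens the loop
    feed = max(0.0, gain * (target - depth))  # reel pays tape back out
    return depth + feed

# Hypothetical numbers: a 12" target loop, capstan snatching 2" per tick.
depth = 12.0
for draw in (0.0, 2.0, 2.0, 2.0, 0.0, 0.0):
    depth = reel_servo_step(depth, target=12.0, capstan_draw=draw)
    print(f'capstan drew {draw:.1f}", loop depth now {depth:.2f}"')
```

The loop dips while the capstan is pulling, then the reel motor refills it, which is the whole point: the capstan only ever fights the few inches of tape in the column, never the reel's inertia.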
posted by Stoneshop at 2:47 AM on March 10


I have a small section of tape from one of those machines that a technician gave me. They had a debugging method to visually examine the physical tape that basically used a chemical to develop the "bits". The on/off bits are clearly visible without magnification, say about a quarter to a half mm wide (not actually bits, but a phase change). But as for memory density, it was probably the last technology at a physically discernible scale.
posted by sammyo at 5:04 AM on March 10


I have a small section of tape from one of those machines that a technician gave me. They had a debugging method to visually examine the physical tape that basically used a chemical to develop the "bits".

MagnaSee, a suspension of tiny ferromagnetic particles in alcohol. You dipped the tape in the fluid to check the alignment of the tape head. It also worked with early hard disks; the ones with the removable platter packs.

Do Not Use that tape or disk pack EVER AGAIN on a production drive.
posted by Stoneshop at 5:23 AM on March 10


I still have a can of MagnaSee. You couldn't actually read the data at that density, but you could get an idea of track alignment or a section of bad tape.
The preferred method of viewing was to dip the tape in the can, let it dry, then take some Scotch tape and lift the developed image and put it on a clear microfiche card and look at it in the fiche viewer.
But we often used the tape again. [Pre-1985] Tape cleaner really worked. Too bad it was ozone-depleting.
I was able to read the data on my first ATM card, but even then, it was encrypted.

There was a bit of forward/backward repositioning, including for error recovery, but another thing was the record size. Early tape processing was just card storage on a new medium, so the records were usually 80 bytes, and a tape sort/merge involved 2 or 3 tape drives taking turns reading and writing records. As disk storage became cheaper, those operations moved to disk, and tape was used more for backup/restore, with bigger records and smoother operation.
Also, it was a long time before there were buffers big and fast enough to smooth that out.

Data rates were still an issue in the 370's. The 3420 started out at 800 or 1600 bpi, and when the newer models came out, instead of 6400 bpi the density was 6250, because the mod 8 at 200 inches/sec would have exceeded the maximum channel speed. (With no buffer)
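Since 9-track tape records one byte (8 data bits plus parity) across the width of the tape per frame, the raw rate is just density times speed (ignoring interblock gaps, which lower the effective rate):

```python
# 9-track tape records one byte (8 data bits + parity) across the
# width of the tape per frame, so raw throughput is density x speed.
def tape_rate(bpi, ips):
    return bpi * ips                # bytes per second

print(f"6250 bpi at 200 ips: {tape_rate(6250, 200):,} bytes/s")
print(f"6400 bpi at 200 ips: {tape_rate(6400, 200):,} bytes/s")
```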
posted by MtDewd at 6:43 AM on March 10


I have a small section of tape from one of those machines that a technician gave me. They had a debugging method to visually examine the physical tape that basically used a chemical to develop the "bits". The on/off bits are clearly visible without magnification, say about a quarter to a half mm wide (not actually bits, but a phase change). But as for memory density, it was probably the last technology at a physically discernible scale.

Interestingly, just last week Techmoan released a YouTube video demonstrating the use of a 3M device made for verifying that tapes were properly written. Rather than using a developer, it's a handheld magnifier that contains a pot of some kind of ferrofluid that turns dark when the magnetic particles are aligned.

Delightfully, the storage case it came in even has a magnet in it that makes the 3M logo appear in the viewing window when not in use.
posted by wierdo at 7:50 AM on March 10 [1 favorite]


The slide-lock storage case for 9-track tapes lives on as the Richeson Lock Box Palette. Doesn't come with stern warnings about camera flash, though.

(computer tapes had an optical sensor that detected a metal reflector at the start and end of the tape. Some drives could be convinced to stop and rewind if a camera flash were fired nearby. For this reason, there aren't many pictures inside 1970s data centres.)
posted by scruss at 6:54 AM on March 11

