Most important product announcement that this corporation has ever made
April 7, 2014 12:58 PM

On this date 50 years ago, IBM announced the System/360. IBM bet $5 billion and the company's future on the product.

Spoiler: the gamble paid off spectacularly. The 360 announcement was a watershed event in the history of computing, and much of the 360 architecture is still running today.
For some background from some of the principals, ten years ago, the Computer History Museum held a 40th Anniversary party, and invited Bob Evans and Fred Brooks to speak.
After introductory remarks, a 5-minute movie was shown that included footage from the 60's and glimpses of many 360 (and newer) systems. Then Nick Donofrio from IBM introduces the speakers and moderates a Q&A session afterwards. Bob Evans, in charge of system development at the time, gives some background and a behind-the-scenes political view of the birth of System/360. He mentions a few ideas that he thinks were key to the project's success: upward- and downward-compatibility among machines; using ROM to allow emulation of older machines. Project manager Fred Brooks discusses what he thinks of as the technical significance of the 360: the move from 6-bit to 8-bit bytes; SLT as opposed to large-scale integration; the Standard I/O Interface; separation of Architecture from implementation. (The last two allowed you to upgrade a system over the weekend). Chief architect Gene Amdahl makes a brief, non-speaking appearance.
There followed a Q&A session that addressed several interesting topics: why the Mod 20 was not compatible; physical core size; floating-point trade-offs; operating systems; and more.
The most amazing story to me was Fred telling about how 3 guys "one night...re-microcoded the model 30 so that it emulated a 1401... and saved the whole series." This sounds like the sort of project that would take a year, not a night. The idea that a small team could design this, code it, and then punch out 336 CCROS cards and put them in the boards (and then presumably have it all work...) is hard to believe.
360 Console pictures (they don't make them like that any more): 2030, 2040, 2050, 2065, 2075, 2091, 2195
Previously
posted by MtDewd (46 comments total) 34 users marked this as a favorite
 
On a personal note, I was 11 years old when this announcement was made, but the 360 had a major impact on my adult working life.

I'd love to hear from anyone who remembers "Buy General Motors Stock" and "Dow Jones Prices are Up".
If those mean something to you, I'd like to reminisce. We can sit on the porch and geez out.
posted by MtDewd at 12:59 PM on April 7, 2014


I learned to code on an IBM 370 Mainframe at Penn State in the early eighties. Fortran, PL/C and 370 assembler. I'm sure that my phone is more powerful now but at the time it seemed pretty awesome to be able to run programs on such a giant machine.
posted by octothorpe at 1:08 PM on April 7, 2014 [2 favorites]


Mainframes and minis (read: the AS/400) are still...everywhere. Folks might be amazed at how often you come across them in data centers. And they still run like fiends for data warehousing. You have to admire a design that just works, man. I had a crash course in basic 400 admin tasks at an ecommerce shop in Atlanta and went from horror to a grudging respect for the platform. They ran great unless you tried to shoehorn Java apps onto the hare-brained POSIX implementation created for that very purpose. Let the warehouses warehouse. Run Java on commodity hardware. Everybody wins.

And The Mythical Man Month is still a great read.

Also, you want a good angle for employment possibilities? A lot of the mainframe and mini guys are retiring (or have already done so). It can be prohibitively expensive to migrate off the platform so I don't expect they're going away any time soon. Learn some REXX and JCL, young'un. Learn your way through the IPL process. There's gold in that there iron.
posted by jquinby at 1:20 PM on April 7, 2014 [13 favorites]


Of course, it's kind of a myth that mainframes are dead. They're only dead as a monolithic construct; most companies that have serious data crunching needs today use things like blade servers that allow you to have many nominal computers running on the contents of a networking rack. It's pretty interesting, really, that we've decided that these don't count as "mainframes."
posted by sonic meat machine at 1:24 PM on April 7, 2014 [2 favorites]


They ran great unless you tried to shoehorn Java apps onto the hare-brained POSIX implementation created for that very purpose. Let the warehouses warehouse. Run Java on commodity hardware. Everybody wins.

I write PHP and RPG on an as400 system. It's just as fun as it sounds.
posted by kmz at 1:29 PM on April 7, 2014 [4 favorites]


What blows me away is the fact that the computer I'm typing this on probably has more CPU power than all the computers which existed 50 years ago put together, and more memory, and more disk space. And a better display than existed then. And it only cost me $1400.
posted by Chocolate Pickle at 1:30 PM on April 7, 2014 [2 favorites]


Yeah, these days, you have computing clouds that rely on many, many instances of a virtual machine all networked together, and you have cloud hosts with one physical machine that's virtualized into many, many instances for computing clouds to rent out - it makes my head hurt.

(And the z/Architecture mainframes are excellent industrial-scale linux VM hosts, and have been used as such since the '90s.)
posted by Slap*Happy at 1:31 PM on April 7, 2014 [1 favorite]


I'm a bit wary of saying this because there will be a lot of expertise out there, but one thing that I've liked about the coverage of the 50th birthday of the mainframe has been the etymology: the need to describe the physical structure of these new computers by reference to existing electronic engineering, namely the main frame of telephone exchanges of the time (and earlier).
posted by cromagnon at 1:48 PM on April 7, 2014 [2 favorites]


There's a VERY big shop in my general vicinity that has a huge data application running on mainframe architecture. Every few years, they try to get the application off of it. But the mainframe always wins.

Also: $5 billion - that was a lot of money in those days.
posted by randomkeystrike at 1:50 PM on April 7, 2014


I just want to mention my favorite program to run on the 360, IEFBR14, whose sole purpose is to do nothing.
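For anyone who's never looked inside it, the whole program is essentially one instruction. From memory, so treat this as a sketch rather than the actual listing:

    IEFBR14  CSECT
             BR    14               Branch to the address in R14, i.e. return to the caller
             END

The point is to give JCL something to EXEC so the DD statements can do the real work. The classic use (dataset name made up here) is creating or deleting a dataset as a side effect of running the do-nothing step:

    //SCRATCH EXEC PGM=IEFBR14
    //DELDD   DD   DSN=MY.OLD.DATASET,DISP=(OLD,DELETE)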
posted by surplus at 1:53 PM on April 7, 2014


My partner is a programmer on a z/OS product, a third-party extension to the 360 operating system that adds various kinds of system administration features. It's incredibly technical, detailed work: a whole lot of assembly language programming. I do software too, but I'm one of those young kids doing the Internet and the Python and stuff, so our conversations about software development are a little like someone who works on electric motors talking to an expert in hand-crafting steam engines. But I've learned a huge amount of respect for what that system can do, both hardware and software system design.

His company is hiring. Turns out to be hard to find programmers who are 360 experts who haven't retired or, um, "moved on".
posted by Nelson at 1:56 PM on April 7, 2014 [3 favorites]


I learned to program on microcomputers, but college was very "data processing" oriented, the teaching language was PL/I and the assembly language was 360 architecture.

Coming from the 6502 and 808x world, I was amazed at the richness of the BCD instruction set, and... the fact that the thing had no stack. I mean, sure, you had access to registers that you could use to implement a stack (and I assume that's how the C compiler, that everyone had stories about, worked), but the standard way to program the thing was to save the return address in memory allocated to that function. No recursion.
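For the curious, the standard save/restore dance went roughly like this. A from-memory sketch of OS linkage, simplified (real code also sets up and chains its own save area off R13), with made-up labels:

             LA    1,PARMS          Caller: point R1 at the parameter list
             L     15,=V(MYSUB)     Load the subroutine's entry address into R15
             BALR  14,15            Call: R14 receives the return address
    *
    MYSUB    STM   14,12,12(13)     Callee: save caller's registers in the caller's save area
             ...                    do the actual work
             LM    14,12,12(13)     Restore the caller's registers
             BR    14               Return via R14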

A coworker has a (working) AS400 in his garage and is fascinated by old iron. I can't see it as a career path, but I'm betting he'd eat up a chance to learn and apply IBM big iron stuff.
posted by straw at 1:58 PM on April 7, 2014 [1 favorite]


> I just want to mention my favorite program to run on the 360, IEFBR14, whose sole purpose is to do nothing.
As it turned out, over the years, its attempt to do nothing was too concise and would cause problems with related tools, leading to the slight expansion of the program.
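The fix, if I remember the story right: the original single instruction returned with whatever garbage happened to be in R15, which the system takes as the step's completion code, so a second instruction was added to clear it first. Roughly:

    IEFBR14  SR    15,15            Zero R15 so the completion code is 0
             BR    14               Then return, exactly as before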
Typical bloatware...
posted by Foosnark at 1:58 PM on April 7, 2014 [2 favorites]


GNU Hello is up to a 707k tarball (3.6M unpacked). The core hello.c file clocks in at 220 lines of text. It doesn't exactly do nothing, though; it prints a message. It's also a full demonstration of a localized, guideline-compliant GNU source code distribution.
posted by Nelson at 2:03 PM on April 7, 2014 [1 favorite]


I've spent the last decade rehosting mainframe applications to open systems. It's a rather beautiful architecture, and a safe business decision, but it can also be terribly overpriced; a standard stack of up to a thousand MIPS or so probably has no business running on z/OS. But they excel at doing what they do: running everything in the same place, minimizing latency, being as data-robust as can be with thousands of concurrent users connected to the same machine. My favorite thing about it, though, is that it forces simplicity on developers. It's boring, in the same way that zero downtime is boring.
posted by valdesm at 2:12 PM on April 7, 2014 [2 favorites]


IBM's 360 and Early 370 Systems is a massive book that goes into the history and development of the 360/370 systems in deep, deep detail. Bought it for $50 as a hardback at Half-Price Books in Austin more than ten years ago and it's still one of my favorite titles.
posted by mrbill at 2:58 PM on April 7, 2014 [2 favorites]


I am currently one of the admins on a relatively large AS/400 system. It runs like butter, hasn't been down in many years, and is crazy efficient once you get a hold of the commands.

It's great to see a high school dropout warehouse guy who can't run the update on his iPhone doing these incredibly elaborate inventory merges and calltag edits at speeds which would make a caffeinated coder take notice.
posted by lattiboy at 3:00 PM on April 7, 2014


I learned to code on an IBM 370 Mainframe at Penn State in the early eighties. Fortran, PL/C and 370 assembler.

Ditto here, but at Cornell (IIRC, the "C" in "PL/C"). Also: Punch cards ... PUNCH cards. My god, how far we've come.
posted by ZenMasterThis at 3:02 PM on April 7, 2014 [1 favorite]


Don't forget that you can emulate an IBM mainframe on a lot of different platforms. It's really twisted to have a complete emulated S/360 or 370 on my phone or Raspberry Pi.
posted by mrbill at 3:02 PM on April 7, 2014


I think I remember shelving books about this. In the early to mid 90's.

Of course, I could be mistaken.
posted by jonmc at 3:09 PM on April 7, 2014


Also: $5 billion - that was a lot of money in those days.

And by "a lot of money"... getting to the moon cost roughly $25 billion.
posted by wotsac at 3:13 PM on April 7, 2014 [1 favorite]


Hercules was created by Roger Bowler. Jay Maynard (“the Tron Guy”) was the maintainer from 2000 to 2012.

Holy shit.
posted by jquinby at 3:14 PM on April 7, 2014 [2 favorites]


I am currently one of the admins on a relatively large AS/400 system. It runs like butter, hasn't been down in many years, and is crazy efficient once you get a hold of the commands.

The only time our AS/400 went down was that time $SOMEONE (who I swear totally was not me!) pulled out a memory card from the back while it was running. After it was replaced, and the BRS cycled, it came up fine and didn't skip a beat.
posted by mikelieman at 5:54 PM on April 7, 2014


I learned to code on an IBM 370 Mainframe at Penn State in the early eighties. Fortran, PL/C and 370 assembler. I'm sure that my phone is more powerful now but at the time it seemed pretty awesome to be able to run programs on such a giant machine.

According to this, depending on your model of phone, it would be roughly 10,000 times more powerful, which is obviously pretty mindblowing.
posted by Jon Mitchell at 6:01 PM on April 7, 2014


The IBM/360 was the first serious computer I could access, through my university time share. I started writing FORTRAN on keypunch cards when I was about 13. Then I discovered ATS, the Administrative Terminal System. It was a line editor that ran on their Selectric printing terminals. Oh, I love those old Selectric golf ball terminals. I decided to input my programs on ATS and send them to output on a high speed keypunch machine. Then I discovered you could just send the file itself for execution. Oh. So that's what computing is. I get it.
posted by charlie don't surf at 6:02 PM on April 7, 2014


So. Say you are a technically-minded young person without a computer science degree who wouldn't mind a job working on a mainframe. What do you do to get there?
posted by You Can't Tip a Buick at 6:18 PM on April 7, 2014


You start with something like the Master The Mainframe contest.
posted by mrbill at 6:27 PM on April 7, 2014


and you emulate/simulate and poke around and learn as much as you can from the IBM Redbooks, etc.
posted by mrbill at 6:27 PM on April 7, 2014


Ditto here, but at Cornell

Me too. I was supposed to learn how at MIT but somehow graduated without grokking programming; I finally got the hang of it at Cornell. You would sit in the big room full of punch card machines, put your program together, feed it into a card reader, and hope for the best. The IBM main frame was called Langmuir and was a few miles away. Every now and then, the geeks at the service window would post a status, which pretty regularly was "Langmuir down", so you would go for coffee and come back later. Your results would come in the form of a printout which, if you were lucky, indicated your program had run (in like 2.3 seconds, really fast!), and if not, gave you some idea where the error was in your punch cards. Hackers hung around trying to get into "core" and getting excited about printouts full of numbers and letters.

Back at MIT, late 60s, I remember one guy who lived in an Airstream in one of the parking lots, and the Airstream was filled up with punchcards which constituted his PhD thesis. One day, a giant crane fell down upon the parking lot, smashing his trailer and scattering all his cards. Luckily he was not around. No backup.
posted by beagle at 7:04 PM on April 7, 2014 [3 favorites]


What blows me away is the fact that the computer I'm typing this on probably has more CPU power than all the computers which existed 50 years ago put together
Usually when folks talk about 'power' they're just talking about MIPS, or, even less comparably, clock speed. The 360/65 was only rated at ~0.6 MIPS, but could support a couple of hundred users and had an aggregate data rate of 675 KBytes/second. My phone is 3G and runs at 200 Kbits/second, and supports 1 user.
posted by MtDewd at 7:38 PM on April 7, 2014 [3 favorites]


When I was doing my MS I had to write JCL to run my SAS jobs under MVS/TSO on an IBM 3090. I still have my JCL book, just in case, because it took me so long to find one. I was always more a VAX guy than an IBM guy, but one of the most interesting technical books I ever read was a history of IBM storage technology.
posted by wintermind at 7:58 PM on April 7, 2014


MtDewd: "We can sit on the porch and geez out."

What a great expression. I don't even think I'm seeing it on Google, as such. Geez out!
posted by Conrad Cornelius o'Donald o'Dell at 9:17 PM on April 7, 2014


Another geez here who learned how to program in the 70s. Is there still room on the porch for me?

I started by sending out a handwritten coding form with my program; a woman in an office somewhere would type it out on a punch card machine and send back the stack of cards. Then I had to send them out again to run the program, and then scratch my head at the result and try to figure out what I got wrong.

I got my first real job in 1981, writing assembly language on a thermal-paper line terminal. Debugging customer dumps was the fun part: sometimes someone would walk into my office and drop a 4-inch stack of paper on my desk.

And The Mythical Man Month is still a great read.

This. Long after the technical achievements of the 360 seem like Stone Age prehistory, the lessons Fred Brooks learned about managing programming projects are still true and relevant today.
posted by fuzz at 11:14 PM on April 7, 2014 [1 favorite]


I'm bringing my folding chair along to this geez out party as well whether you guys like it or not!

I cut my teeth in the 1970s at Texas A&M through a high school Advanced Placement program. I always tease that I'm the youngest person I know who started on punch cards. One of the first real programs I wrote for a class was an inventory management system in FORTRAN or PL/C, can't remember which. Loved coding in 360/370 assembler and thought it was hilarious slipping incorrect JCL cards into the head of classmates' decks and watching them try to figure it out.

Everyone was given an account with a $25.00 credit to complete all their runs for the semester. Each run would cost you something like $0.25 to $0.50. It was all designed to make you spend more time desk checking. Invariably people would run out of money before the end of the term and were relegated to "Happy Hour" from 4 to 5 in the afternoon, when jobs ran free.

Between all this and tinkering with the early microcomputers of the day, I managed to set up the basis for a career that has literally taken me around the world many times.

So thank you, S/360.
posted by michswiss at 11:32 PM on April 7, 2014 [2 favorites]


Coming from the 6502 and 808x world, I was amazed at the richness of the BCD instruction set, and... the fact that the thing had no stack.

I had a similar experience to straw's: learned programming on microcomputers, including 6502 machine code, then majored in CS in the late 80s, where the department focused on data processing and I did FORTRAN, assembler and JCL on System/370. When my code crashed, I had to decipher raw memory dumps printed out on greenbar. I wondered for the first couple of weeks of my assembler class how they could possibly do without a stack, but then we learned the standard linkage conventions. (Recursion is possible if you dynamically allocate the register save area, right?)
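Sketching from memory, that trick goes something like this: each invocation grabs a fresh save area from storage instead of reusing a static one, so a nested call can't clobber a pending return address:

             GETMAIN R,LV=72        Get a fresh 72-byte save area; its address comes back in R1
             ST    13,4(,1)         Chain the new area back to the caller's save area
             LR    13,1             Point R13 at our own area; now it's safe to call ourselves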

The instruction set was really powerful compared to a 6502, and seemed so consistent and regular compared to x86. I grew fond of it.

JCL? Ugh. EXEC PGM=KILLITDEAD.
posted by jjwiseman at 11:53 PM on April 7, 2014 [1 favorite]


depending on your model of phone, it would be roughly 10,000 times more powerful

And yet, people have not replaced their mainframe computers with arrays of phones. Why is that?
posted by thelonius at 1:24 AM on April 8, 2014 [2 favorites]


the standard way to program the thing was to save the return address in memory allocated to that function. No recursion.

The subroutine call instruction in the 360 architecture is BAL, for branch and link, and all it does is save the current program counter into a register before taking the branch.

When I first encountered this, as yet another arrogant kid whose first assembly language was 6502, I could not believe what a primitive and shitty way to do a subroutine call that was; hell, it didn't even push the return address on the stack for you! How were you supposed to do recursion with a stupid shitty piece of shit IBM instruction like that? That instruction should be called POS, not BAL, amirite? God, these IBM architects are so uninspired and dull and dumb... is it beer o'clock yet?

A little later, after some of the arrogance had worn off, I was learning about RISC architectures and the kinds of design decisions that go into those, and a familiar pattern turned up: RISC subroutine call instructions tend to save the PC to a register, because the only memory access instructions in a RISC architecture tend to be explicit loads and stores. And I had a few things pointed out to me which, if I'd had half the smarts my arrogance told me I had, should have been obvious from the beginning.

If you have enough registers (and the 360 has 16, which is plenty) then it doesn't hurt to devote one to holding the current subroutine's return address for the duration of the subroutine. In fact, having that address available in a register allows you to do cool things like put subroutine parameters inline, if you like; you just pick them up using offsets from the return address register, and you can also use an offset on the return jump so you return to the instruction following the inline parameters. This is quite neat. You can do inline parameters with 6502 as well, and I've seen code that does so, but it takes a lot more faffing about.

If you want to nest a call to another subroutine, you do need to save the return address register before doing that. If your design doesn't use recursion (as is typical for FORTRAN) you can save it into a fixed memory block. If it does, you'll almost certainly already have another register somewhere that points into your current routine's stack frame, and you can just save the return address register into a reserved spot in that. The amount of extra work typically needed for this is one instruction, which sounds like a disadvantage compared to a machine with an implicitly stack-based subroutine call instruction - until you realize that having saved the return address once you can then make as many additional BAL calls as you need to before restoring it again. And in the most common case - where the "leaf" routines that don't themselves call anything else are the most frequently executed - you've saved a memory read and a memory write for every such call.

And in fact the 360's BAL instruction lets you specify explicitly which register to use for the return address. It's quite feasible to design code that makes subroutine calls several levels deep without needing any memory cycles to deal with return addresses.
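To make the inline-parameter trick concrete, a from-memory sketch (the labels and the single parameter are made up for illustration):

             BAL   14,SHOW          Call SHOW; R14 -> the word right after this instruction
             DC    A(MSG)           Inline parameter: the address of the message
    *                               execution resumes here, because SHOW returns past the parameter
             ...
    SHOW     L     2,0(,14)         Pick up the inline parameter via the return register
             ...                    use it
             B     4(,14)           Return, skipping over the one 4-byte inline word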

There is nothing wrong with BAL. It's actually very neat.
posted by flabdablet at 3:59 AM on April 8, 2014 [4 favorites]


The instruction set was really powerful compared to a 6502, and seemed so consistent and regular compared to x86.

To be fair, is there any instruction set that doesn't seem consistent and regular compared to x86? PIC or 8051, maybe. Maybe.

The 6502 is very pleasingly effective for a machine with so little on-chip storage. I can't think of a similarly storage-restricted architecture that's actually more fun to code for.

It seems to me that Motorola's 68000 architecture is a more reasonable thing to compare the System/360's to, and on that comparison I would have to rate the 360 as the clear winner for tidiness.

Here's a pretty good piece on architectures, for others who also enjoy this kind of thing.
posted by flabdablet at 4:45 AM on April 8, 2014 [1 favorite]


[Cornell] The IBM main frame was called Langmuir and was a few miles away.

And, decades later, I work for a software company located around the corner from the old Langmuir Lab which, these days, is mostly empty.
posted by aught at 5:33 AM on April 8, 2014


IBM is doing a live online celebration thing in a few hours. Tune in on Tuesday, April 8, at 2 p.m. ET for the Mainframe50 event live from NYC.
posted by Nelson at 9:04 AM on April 8, 2014 [1 favorite]


The Winchesters, which were created for the System/370 follow-ons ("kickers") to the System/360s, were originally designed to have two 30 MB disk spindles: a 30-30, so to speak, and hence the nickname, after the famous Winchester 30-30 rifle.

The Winchester spindles spun at 4,000 RPM, and the unit could push data at 885 kilobytes per second. In many ways, they are very much like the disk in your PC, except that they weigh a ton (well, not really), and the amount of kinetic energy in a Winchester drive made it a truly dangerous device. They used to "walk" across the floors of data centers, like a lop-sided washing machine, when they got out of alignment.
posted by Lanark at 3:46 AM on April 11, 2014


A possibly apocryphal story I remember hearing about IBM's washing-machine disk drives: a small amount of a signal phase-locked to the disk's rotational position ended up finding its way into the head positioning motor. The net result was to make disk tracks slightly eccentric, guaranteeing that they'd misalign if you tried to use the same disk pack in a non-IBM drive.

This was allegedly done deliberately to give foreign drives a reputation for flakiness - though it seems likely to me that the original effect was accidental, remained unnoticed until people tried using IBM-formatted packs in third-party drives or vice versa, and just never got fixed for compatibility reasons.
posted by flabdablet at 5:14 AM on April 11, 2014


I'm always glad to hear of other students like michswiss who got access to computers through local university enrichment programs. There were very few school kids who had access to computing. Around the late 60s, with the rise of large mainframes like the 360, universities discovered they could offer schools timeshare access at negligible cost. My high school had a keypunch but no terminal, and in junior high school I used optical cards. Since our access was so limited (I remember those $25 computing credit accounts) we spent a lot of time writing on coding sheets, stepping through code by hand, and writing down register contents. We could write code for days and only run it on the 360 once or twice. In all of computing, computer time was the scarcest resource. Your program had better not crash, or you wasted computer time on your $25 account.

I thought more about it, and I do owe a lot more to IBM in the 360 era than I realized. IBM was a powerhouse and funded some advanced educational projects, like the IBM 1500, and of course universities were huge data processing users. My university's first major IBM installation was an IBM 650, but that was before my time. I do remember going to computerized class registration with new punched card data processing. There is a particularly notable photo, the second in that last link, showing James Van Allen and E.F. Lindquist (founder of ACT) getting an IBM demo. They built a heavy-lifting computing facility to do educational testing (with tons of optically scanned tests) and data processing for NASA. Oh, you ought to see our Library's Special Collections NASA map files.

There were plenty of other computing resources at the university, but most of them were outside the EBCDIC world. There were DEC PDPs, VAX, HP-3000; my favorite was a cluster of CDC Cyber mainframes, which were free to use and had the juice to do anything we needed... except talk to EBCDIC systems like the IBM/360.

One of my first professional coding jobs was porting educational programs from IBM 1500 code to ANSI BASIC on the Cyber. And our university had a couple of particularly interesting research projects: we had PLATO IV terminals, and we had one of IBM's first videodisk prototypes. I loved the PLATO system; it ran on a CDC Cyber cluster so I was familiar with its internals. But the terminals themselves were incredible. Each had a plasma touch screen with a rear projection system running off microfiche cards. You could overlay computer graphics on the plasma screen with microfiche images; the projector moved the film around, powered by compressed air. It also had a floppy disk drive, about 12 inches across, with no protective sleeves. You handled the raw mag media; we called it a "sloppy disc" because it was hard to handle the media carefully enough not to ruin it. The media access of this terminal was wonderful, but the compressed air system was a problem. The idea was you'd have a room full of PLATO terminals all running silently on compressed air. The compressor was in the other room, connected by air hoses to each terminal. But we only had a couple of terminals, and the air compressor sat right next to the desk. Sometimes the air tank would run low and the compressor would power up, creating a deafening pump sound. This was quite the opposite of the intended silent running.

Anyway, one day a wise guy (me) suggested, hey, why don't you hook up the laserdisc to the PLATO terminals? It would be a much more useful source than those dumb microfiche systems. And it would be much more useful to overlay computer graphics on random access laserdisc video or still video images than those microfiche films. We made a pretty successful prototype, but the laserdisc was not widely available enough to attract anyone to build apps and content for it. And people totally did not get the concept of interactive computing with live video on demand. We started hooking up the laserdisc to early microcomputers like the Apple II; you could do a lot of powerful stuff with a computer driven laserdisc. I remember going to ACT to pitch our laserdisc project. It was a flop. They absolutely did not get the concept that a computer could trigger a video to play. Why, that is impossible. And what possible use would it be to play videos during a computer program? Oh well, I was ahead of my time. If they don't get it now, they'll get it soon enough.

So that's basically what I did for the next decade or two: integrating video and multimedia into computing. I worked with interactive laserdiscs and then with upgrades to QuickTime and other stored media types. IBM gave me a huge advantage with early access to their prototypes and new systems.
posted by charlie don't surf at 2:29 PM on April 12, 2014 [5 favorites]


I have a hard time believing the 30-30 story about the Winchester name, even though it's everywhere on the web, including some IBM sites. The reason is that Winchester was the code name for the disk. (And I'm pretty sure it was the code name of the 3348, the actual disk, not the 3340.) So what's the point of a code name? It's so people have something to use as a name without divulging anything about it to competitors. (Like: it's gonna have 30M spindles.) Here are some code names I remember: Meridian, Trout, Gulliver, Hickory, Piccolo, Minnow... They don't hint at what's being built. That would be like calling the Normandy invasion 'Operation Bayeux' instead of 'Operation Overlord'. Also, it wasn't 30M; they were originally 35M... might as well call it a Leica.
I never saw 3340s 'walking', either. Probably because they were installed correctly, with the pads down and the wheels off the floor. The spinning disks did have a lot of inertia, but I would think that would be more likely to stabilize the machine gyroscopically. The only thing that would cause them to move would be the [extremely low-inertia] access arm, unless something caused the disk to go out of balance, like a head crash. And the 3340s were usually in a string of 8 drives bolted together. You might see a little more shaking in a 2311, but still, the pads are supposed to be down. For serious shaking, try a 64-pin 5225 printer.
posted by MtDewd at 2:59 PM on April 14, 2014


Back around 1980 or 81, one of my first mainframe jobs was servicing Printronix high speed dot matrix printers installed in high end IBM data processing centers. Printronix reverse-engineered the drivers for IBM printers, and their big printers were much cheaper than the IBM products. They also had more advanced graphics features, like barcode printing. I did Printronix service in a 3-state area: Iowa, Minnesota, and Wisconsin. I'd get a phone call from the dispatcher: go to the Dubuque airport freight depot tomorrow morning at 6AM and pick up a parts and tools kit. Then I'd drive 1 or 2 hours to the client and open the box; it would have a tool kit and some proposed repair, like a motor and a xeroxed instruction sheet on how to install it. I'd swap out the part, they'd fire it up and test it, did it work? It always worked. Some of the repairs baffled me, they seemed to be completely unrelated to the customer's problems, but they always worked.

Then I'd go back to the home shop in Dubuque. One of my big repair tasks was Corvus hard drives. These were IBM Winchester drives, typically 10 or 20MB. Corvus bought a lot of these drive mechanisms and slapped on a network interface that was well suited to 8-bit data storage. The software was actually very good. At first it started as a single-user system, but then they expanded to multi-user OmniNet. You could hook up a dozen machines to one hard drive. It was amazing having the whole office hooked up to a common 20MB data store that you could back up all at once. Corvus had an interesting backup device that wrote disk images to VHS tape; it converted the data to a video image with high redundancy in the encoding. It was incredibly fast for the time. But I always had 4 or 5 Corvus drives in line for service; they were terribly temperamental, and the technology just was not very good yet.

After doing Corvus service for a while, I got recruited by a Corvus software developer. We wrote UCSD Pascal APIs for file locking and record locking on the Corvus OmniNet system. We used those to write EASY (Easy Accounting SYstem), multi-user bookkeeping software that ran on mixed networks of Apple II and Apple /// machines. And then we sold our APIs as a product, "The Pascal Programmer." We sold it to companies like Apple; they liked our APIs so much they just integrated them into the product they were developing in secret: the Lisa. Apple just took the OmniNet protocols and our APIs and made them into a new feature: AppleTalk.
posted by charlie don't surf at 2:23 PM on April 19, 2014 [2 favorites]


I worked with Bill Joy's sister at Wang Labs' Dallas office in the 1980s. We'd be reading VTOCs, scouring core dumps, configuring backplanes, normalising relational databases. Bill would join us on occasion. He never travelled without his toothbrush.

Another story. My first paying computer job was as employee 5 at a very small group in Oklahoma. They serviced minicomputers. I learned a lot from them, including literal bootstrapping: the TI 990 (DNOS?), Data General, DEC PDP11, Wang VS. So in any case, I was a newbie. This was around 1981-82. I'd only seen the big IBM stuff at TAMU, or CP/M, or Apple, or a variant of Unix (can't remember the name) at that point.

The owner (I got the job because of the 370 assembler class I shared with employee 3) handed me a small computer running MS-DOS. His instructions were to write an accounting system for it and interface it with a POS register for restaurants: MICROS. You'll still see them around. I wonder if some of my code still lives. He then gave me a pile of COBOL source code from MCBA, an early MRP system for minicomputers, and said "this should help." Problem was, I only had compiled BASIC to work with and had to figure out 8086 assembler for serial communications on my own. And the menus: most minicomputers had some form of menus, so I decided to write a cursor-driven menu system native to MS-DOS. Then a snake game. Couldn't resist.

There were points that I was having regular calls with Microsoft engineers figuring out how to make things work. They'd share details of an instruction and I'd share how I'd made it work. It was months of effort. I didn't care. It was simply a challenge and new and interesting and inventive and fun. If only I'd had a brain.
posted by michswiss at 6:00 AM on April 23, 2014 [1 favorite]



