CPU of a computer does not contain: Special Register Group
October 24, 2019 8:29 PM   Subscribe

The Honeywell 800, an obscure 1959 computer, has infected the definition of CPU for 60 years with the irrelevant phrase “Special Register Group.” Ken Shirriff (Previously restoring a Xerox Alto) investigates.
posted by larrybob (25 comments total) 20 users marked this as a favorite
 
Special Register Group and Honeywell 800 are both fine band names.
posted by otherchaz at 8:57 PM on October 24, 2019 [7 favorites]


The Honeywell 800 allowed eight programs to run on a single processor, switching between programs after every instruction. To support this, each program had a "special register group" in hardware, its own separate group of 32 registers (program counter, general-purpose registers, index registers, etc.).
Spoilers, but funnily enough this fine fact doesn't even contain the coolest phrase in the linked blogpost, namely...
frame, main, (1) the central processor of the computer system...
Frame comma main? Blown comma mind!

Fascinating stuff, cheers for posting larrybob
posted by I'm always feeling, Blue at 9:07 PM on October 24, 2019 [8 favorites]


Some of the comments on that post are a good example of why this sort of weird terminology persists. For nearly any CPU, you can think of some registers as "special registers" (like the alternate group on the Z-80), and so when you're presented with a pre-existing authoritative-sounding definition like that, you come up with good reasons to justify it.
posted by Harvey Kilobit at 9:22 PM on October 24, 2019 [1 favorite]


That is a fascinating article and that really is a terrible definition. What is "main storage"?

If I gave that definition to an educated computer user who wasn't sure what "CPU" meant it certainly wouldn't help.

It's not even useful as an authoritative definition: it can't be used to determine whether something is or is not a CPU. I can imagine someone citing it in a contract dispute or patent lawsuit to argue a chip wasn't a CPU because the "main storage" was RAM or because there aren't special register groups.
posted by smelendez at 11:47 PM on October 24, 2019


I love this sort of fossilised phrase and definition. Nice find!
posted by MartinWisse at 1:29 AM on October 25, 2019 [3 favorites]


To support this, each program had a "special register group" in hardware, its own separate group of 32 registers (program counter, general-purpose registers, index registers, etc.)

This kind of architecture is called a barrel processor, and it makes for a neat if somewhat brute-force solution to the problem of wasted time from pipeline stalls.

The process of fetching, decoding and executing a program instruction can be made a lot faster by doing it as a pipeline, breaking the required work down into a kind of assembly line of stages, each of which consumes the output of the last until the whole instruction execution process is done. The speed improvement comes from the fact that all the stages can be kept busy all the time. So even though an eight-stage pipeline might need eight clock periods to do all the work of fetching, decoding and executing a single instruction, parts of eight successive instructions can move from stage to stage on every clock cycle, so instructions end up getting completed at the rate of one instruction per cycle.

A pipeline stall happens when one or more inputs for some instruction depend on outputs that an earlier one is supposed to have provided, but the earlier instruction hasn't made it all the way through the pipeline yet and so those outputs are not yet available. Under this condition the pipeline has no option but to freeze every stage up to and including the one that needs those inputs until they actually arrive. This is a pipeline stall and it clearly wastes time.

In a barrel processor, instead of the fetch stage of the pipeline collecting successive instructions from a single thread of execution, it switches threads on every clock cycle; the next instruction from any given thread won't get fetched until every other thread has supplied an instruction as well. Typically the number of threads the architecture supports is at least as large as the depth of its pipeline, and this guarantees that work done in any given pipeline stage cannot depend on results yet to be calculated in any later stage; at any instant, all pipeline stages are occupied by instructions that come from separate threads and are therefore inherently not interrelated, so the pipeline never needs to stall.

I was super pleased with myself when I first invented the concept of the barrel processor as an engineering student, but as is the case with all my best inventions, somebody else had already had it in production twenty years beforehand. C'est la vie.
posted by flabdablet at 3:49 AM on October 25, 2019 [14 favorites]


In the computer justice system, register based offenses are considered especially heinous. On the internet, the dedicated detectives who investigate these devious instructions are members of an elite squad known as the Special Register Group. These are their stories.
posted by Huffy Puffy at 4:55 AM on October 25, 2019 [34 favorites]


Up until a few decades ago (i.e., PCs becoming ubiquitous and familiar), the computer terms in dictionaries tended to be weirdly archaic, full of random fossils of long-forgotten machines. One I remember seeing (in a dictionary of computer terms from the 1980s) was ERNIE, an acronym for a device for generating random numbers electronically.
posted by acb at 5:09 AM on October 25, 2019 [6 favorites]


I get the feeling this persisted because few people actually read or care about the definition of "CPU" or "computer". Either you know what a CPU is already, or "special register groups" just blurs into all the other jargon.
posted by Foosnark at 5:14 AM on October 25, 2019 [3 favorites]


Were the dictionary being written 20 years later, it'd probably mention segment registers.
posted by acb at 5:18 AM on October 25, 2019 [1 favorite]


I guess if you look at reference books a couple of decades old, there will be terms about things that are a couple of decades old.
I like the current Merriam-Webster:
Definition of cpu
: the component of a computer system that performs the basic operations (such as processing data) of the system, that exchanges data with the system's memory or peripherals, and that manages the system's other components
Frame, main
This definition appears to come from a time when a CPU was a main[ ]frame (as Shirriff points out). I agree the definitions should be updated to something less than 50 years old.
Nowadays storage is not generally considered part of a CPU. Even the IBM 360 Principles of Operation (1964) places main storage separate from the CPU.
Special register groups, I have no problem with. IBM 360/370... architecture had 16 general-purpose registers and 2-8 floating-point registers, and I would call them special register groups, as opposed to internal CPU registers. They were not used the way the 800 used them, with a group for each program, but then the 360 wasn't limited to 8 programs.

What is "main storage"?
Again, this phrase makes sense 50 years ago, but may be confusing to 'digital natives'.
If this was a serious question, here's a quote from the 370 PoO (1975):
Main storage provides the system with directly addressable fast-access storage of data. Both data and programs must be loaded into main storage (from input devices) before they can be processed. Main storage may be either physically integrated with a CPU or may be constructed as standalone units. Additionally, main storage may be composed of large-volume storage and a faster access buffer storage, sometimes called a cache.
This kind of definition was not in the 360 manual. It just described how it worked.
Additional note: In IBM, core storage was usually referred to as 'memory' while semiconductor storage was referred to as 'storage'.
(I believe this was because core storage 'remembers' even after power-off, while semiconductor storage does not. This language may even have come from the legal department)
posted by MtDewd at 6:11 AM on October 25, 2019 [3 favorites]


then they have moved in opposite directions: "mainframe" is now a large computer system

Sincerely just trying to get an answer for myself, but the way people usually think about it nowadays and in the recent-ish past, would it be right to say that a mainframe is a large computer designed around supplying rapid input/output to many simultaneous users rather than providing very high levels of raw computational capacity, and a supercomputer is the reverse?
posted by GCU Sweet and Full of Grace at 6:16 AM on October 25, 2019


You could, perhaps, say that "main storage" was somewhat analogous to what today we call the CPU's cache. It isn't the same thing, but it's vaguely similar.
posted by sotonohito at 6:17 AM on October 25, 2019 [3 favorites]


I am amazed at the amount of research that went into that blog post. And I mean “research” in the nearly-forgotten “reading till your eyes bleed” sense. I am reminded of some articles I’ve seen tracking mistaken factoids through science textbooks. It is much easier to steal than to think!
posted by Gilgamesh's Chauffeur at 6:28 AM on October 25, 2019 [3 favorites]


ERNIE (now in its 5th incarnation) is still in service, producing random numbers to award prizes to holders of Premium Bonds in the UK. Of course, as a cutting-edge application of electronic computers, it's rather long in the tooth.

This also shows how important it is to indicate the source, context, and date of a definition. I was reminded of that in the 80x25 thread, which led me to research teletypewriters: in that context, what we call "bits" could be referred to as "levels" or "units".

The prevalence of that definition is also a consequence of "shovelpaper" publishing in the computer field: libraries and bookstores tend to be full of low-quality books on the subject, and taking a definition without credit or attribution is exactly what they would do to pad out their page count (if the book is thick, it has to be comprehensive, right?)
posted by Monday, stony Monday at 7:13 AM on October 25, 2019 [3 favorites]


would it be right to say that a mainframe is a large computer designed around supplying rapid input/output to many simultaneous users rather than providing very high levels of raw computational capacity, and a supercomputer is the reverse?
I think the distinction between mainframe and supercomputer is narrowing and may have even disappeared. The early supercomputers had multiple pipelines and features for speeding things up that were later incorporated in mainframes. For example, the IBM 3195 supercomputer (1970) had a lower MIPS rating than the 3081 mainframe (1983), and only twice that of the 3033 mainframe (1978).
Merriam-Webster, Definition of mainframe
1 : a large, powerful computer that can handle many tasks concurrently and is usually used commercially

You could, perhaps, say that "main storage" was somewhat analogous to what today we call the CPU's cache.
No, I'd say that "main storage" was somewhat analogous to what today we call RAM.
Cache today is usually in the 'CPU' chip. (AFAIK)
In [older] mainframes, the cache was separate from main storage and also from the 'CPU'. In the IBM 3033, the 'CPU' was in frame 2, main storage was in frame 3, and the cache was in frame 1.
posted by MtDewd at 7:27 AM on October 25, 2019 [2 favorites]


Registers are such a fascinating part of CPU design. I learned machine language on a 6502, which only had three generally useful 8-bit registers: A, X, and Y. And of those, only A was really useful for arithmetic. So I had this idea that registers were super special things. Old x86 days didn't do much to improve that.

But those were just microcomputers. Big computers had lots of registers. Whole register files! The next CPU I bothered to learn much about, the R3000, had 32 general-purpose registers, named 0, 1, 2, ... 31. That's it. Nothing special, just 32 ultra-fast slots to put things in. So clean! I wish the Intel architecture had gone that way (they tried), but even now AMD64 has only 16 general registers and they still have funny names. Although you really should count floating-point / SIMD registers too.

I was also reading recently about Coreboot, particularly the problem of initializing RAM. Before that's done there is no RAM, which makes programming anything awfully awkward. The solution is romcc, a special C compiler that uses no RAM at all and holds everything in registers. Neat trick. There's also the possibility of using the L2 cache on the CPU as RAM, but I didn't go further down that rabbit hole.

I'm sure glad I program in Python and Javascript most of the time. But sometimes it's fun to look at how the actual hardware works.
posted by Nelson at 8:32 AM on October 25, 2019 [3 favorites]


I love this sort of historical nerditry, even if my understanding is a little tenuous. Thanks for posting this.
posted by praemunire at 8:37 AM on October 25, 2019 [2 favorites]


A slightly old but decent video about Coreboot: x86 system boot and initialization.

I had about the same thoughts as Nelson. Going from the A,X,Y of the 6502 to the D0-D7,A0-A7 of the 68000 (with the correct endian-ness) meant that when I looked into x86 architecture for the first time it was a serious bit of WTF is this madness? It's so small and everything is backwards and jumbled up.
posted by zengargoyle at 9:23 AM on October 25, 2019 [3 favorites]


I grew up in IBM, so I started out big-endian, but after meeting the 6502, little-endianness made more sense to me, especially in addressing.

I see the 6502 A, X, Y registers as very different from the 16 or 32 general registers, although they have some similarities. I believe that in the 6502, the A register was an actual hardware register, wired to the accumulator, not just a programming symbol. When you did an ADC, that command actually gated the transfer. And similarly for the X & Y, although they could be used for offsets.
In an [old - I'm not current] IBM mainframe, the general registers were off someplace else, or even someplaces else. If you needed to add data from a register on a 3033, it had to get gated into the A, B, C, and/or D register(s), which were attached to the accumulator. This was done by microcode, not the programmer. The programmer does not have access to the A-D registers, or even need to know that they exist.
Maybe that's too subtle a distinction...
posted by MtDewd at 11:08 AM on October 25, 2019 [3 favorites]


Looking at the block diagram of the 6502 it too had internal registers connected to the ALU, but it does look like the A register did have some special circuitry associated with it for binary coded decimal.
posted by mscibing at 4:51 PM on October 25, 2019 [1 favorite]


Special Register Group is obviously a hamster name.
posted by surlyben at 7:14 PM on October 25, 2019 [2 favorites]


Oh $DEITY, it is truly insane what the people writing low-level boot/initialization code have to deal with, even on brand-new POWER systems where the responsible parties aren't bound by backwards compatibility. Even when things are done to make the whole process more sensible, bugs in the silicon rear their ugly heads and demand that hacky workarounds be implemented in firmware.

Once I came to understand the crap they have to work with, I found myself much more understanding of ancient BIOS interfaces still around today that would be familiar to someone 30 years ago.
posted by wierdo at 7:43 PM on October 25, 2019


Wasn't Bernard Quatermass part of the Special Register Group?
posted by rum-soaked space hobo at 7:07 AM on October 26, 2019


Oh $DEITY, it is truly insane what the people writing low-level boot/initialization code have to deal with, even on brand-new POWER systems where the responsible parties aren't bound by backwards compatibility.

One embedded project I worked on came with a customer requirement that the system be fully upgradeable, in the field, over the wire, without physical access to the hardware which would typically reside in an underground pit miles from anywhere, with a guarantee that a failed or interrupted upgrade could not possibly break it and that multiple post-upgrade operational failures would eventually cause it to roll back automatically to the previous version.

The machine was built around a PowerPC-based SoC, flash EPROM, and DRAM. If you know how flash EPROM works, you're probably shuddering as much at those requirements as I was.

I built the bootloader in two stages. The only work the first stage did was to find the latest available fully burn-verified version of the second stage elsewhere in EPROM, then jump to it. It did the absolute minimum of chipset configuration required to make that happen, because there was no way I could think of to replace the first stage in any way that genuinely had zero chance of breaking it, so it had to be 100% bug-free. It didn't use any DRAM, just EPROM and CPU registers. Didn't even enable the on-chip instruction or data caches; all that stuff went into a replaceable Stage 2.

The system upgrade code was written in such a way that you had to squint at it a bit to work out that it didn't actually include a way to upgrade boot stage 1. The customer complained about lots of ways that the product didn't meet spec, and also revised the spec over time, and we dealt with that with over-the-wire upgrades just like the spec said we should, but they never did notice that we never changed (and in fact provided no way to change) the 32K erase block mapped over the CPU's reset vector. And we did find and fix a subtle, intermittently-manifesting bug in stage 2 caused by the SoC failing to behave exactly as documented, which left me feeling like I'd built the thing right.
posted by flabdablet at 8:31 AM on October 26, 2019 [4 favorites]




This thread has been archived and is closed to new comments