deftly flensed from aging silicon
March 27, 2020 11:49 AM

Let us begin by weaving a lucid dream; the bones and sinews of my antique. Teach your contraption the language of the Motorola 68000.
posted by theodolite (27 comments total) 22 users marked this as a favorite
An elegant machine from a more enlightened age.

damn right, obi-wan....errrr i mean john
posted by lalochezia at 12:07 PM on March 27 [1 favorite]

Oh. Oh no. Oh.... this....this is entirely my shit.
posted by Sokka shot first at 12:28 PM on March 27

The language is a bit twee, but on-message for a buy-in culture whose beige slowbox snagged the folklore domain for their self-congratulatory origin story.

If you can't make disk images, the site the article links to has a few blank ones, like this 80 MB one:
posted by scruss at 12:34 PM on March 27

This isn't as good as the story of the programmer witch's job interview but it's written in the same mode and I respect that.
posted by Mr.Encyclopedia at 1:01 PM on March 27 [13 favorites]

The Force demands colour sacrifices. This is known.
posted by meehawl at 1:07 PM on March 27 [1 favorite]

The familiar “down-to” operator --> behaves just as you expect.
TIL people respace the -- and > operators and call them a whole new thing.
posted by scruss at 2:34 PM on March 27 [5 favorites]

This makes me happy. I was the final hand tester for the Motorola engineering team that developed the 68000 EPROM in the late 70s.
posted by a humble nudibranch at 2:53 PM on March 27 [28 favorites]

In the early 80's I had an after school job at Bell Laboratories. There was a guy in my department doing experiments on overclocking 68K chips. He had the hardware for a Blit terminal which he had modified to run a 68K at 40MHz (5x the rated clock rate), IIRC. On the chip in the huge ceramic package was a machined aluminum tub that he filled with liquid nitrogen for cooling. On the floor, he had a box of dead 68Ks. The candle that burns five times as bright lasts only a week or so.
posted by plinth at 3:02 PM on March 27 [21 favorites]

Another oldie reminiscence, slightly embarrassing: one of my early jobs was implementing COBOL on the Mac and its Lisa predecessor. The best parts of the project were the many diversions into Mazewars
posted by anadem at 8:59 PM on March 27 [2 favorites]

Anybody have a copy of Crystal Quest?
posted by migurski at 10:59 PM on March 27 [3 favorites]

Anybody have a copy of Crystal Quest?

There is this, and it's apparently available on Steam? But I suspect that you're really asking about this.
posted by hippybear at 6:33 AM on March 28 [1 favorite]

Anybody have a copy of Crystal Quest?

For old Mac software, you should visit the Garden.
posted by fimbulvetr at 6:37 AM on March 28

The original PalmPilot and the Palm also ran on the 68K processor. In college (half a million years ago now), I took a class on assembly language for the 68K (with the Palm as the platform). That was fun, but not so useful in life.

I wonder if there are projects to run old mac software on the Palm (or vice versa).
posted by JMOZ at 7:40 AM on March 28

Re that garden link above, I bought a decent $6000 IIcx system (+4MB of RAM was $900 of that due to chip fab shortages) with the intent of getting rich & famous making indie Mac software.

Alas, as is my pattern, I mainly faffed about in PixelPaint and later Studio/8 through the 90s, letting the market pass me by.

But somehow I do have one four-star title in that repository, so I have that going for me...

A fun exercise now is picking the titles on that site I wish I’d been able to pull together, something like this I suppose.

Can’t wait for Apple to ditch x64, what a crap ISA!
posted by Heywood Mogroot III at 10:39 AM on March 28

The secret call to Andy Grove that may have helped Apple buy NeXT - "One visitor was George Fisher, the superstar CEO of high-flying Motorola. Their processors powered Apple and NeXT computers. He dropped a bomb that caught me completely by surprise: the Motorola 68000 line of processors we used was near its end of life because it had become too hard to cool..."
posted by kliuless at 3:06 PM on March 28

Aw I misread as 6800.
posted by skyscraper at 7:14 PM on March 28 [1 favorite]

Can’t wait for Apple to ditch x64, what a crap ISA!

RISC-V is just sitting there waiting for somebody to love it, but you know as well as I do that the winner of the next ISA marketing war will be proprietary, ugly, and come with a rentier's fever dream for a licence.
posted by flabdablet at 11:38 PM on March 28 [1 favorite]

Just brought this thing up on Debian. Apart from choosing target lx64 instead of mc64 as recommended under "Mutatis mutandis", I needed a couple of extra steps to make it go:
sudo apt install libx11-dev
before the make step would run successfully, and
mv vmac.rom vMac.ROM
without which ./minivmac could not find its ROM image.

posted by flabdablet at 4:43 AM on March 29

yeah, case-sensitive filesystems can be their own special pain sometimes, flabdablet. But configuring everything inside a shell on an emulated Mac was a nice touch.
posted by scruss at 8:42 AM on March 29

Came hoping for talk about the M68K, got snarky trash talk of C at the expense of Pascal. Puhh-leeez.

The elegant weapons of the more civilized age in this story are the 68K programmer's model and the C language.
posted by Aardvark Cheeselog at 1:16 PM on March 29

The 68K programmer's model was certainly much more pleasant to work with than any of the 8080's descendants and much less fiddly than 6502, but it wasn't quite as good as it could have been given the state of play in programmers' models of the era.

My main beef with it was the register layout. If you're going to put 64 bytes of register space on your CPU there are less wasteful ways to organize it than the 68K did. The split between A and D registers was quite a cunning way to wedge addressing for 16 registers into 3-bit instruction code fields, but I didn't like the fact that manipulating 8- or 16-bit data used up a whole 32-bit register.

I thought that the contemporary Zilog Z8000 organized its resources in a much more sensible fashion. On that machine you could fill up all the 8-bit register space and still have half the 16-bit space available. It would have been nice to have seen that kind of layout pattern used for the 68K's D registers.

But these were minor quibbles. The 68K architecture was a necessary step on the road toward the big flat register spaces and big flat address spaces that would later become par for the course in RISC machines everywhere, and for that, compiler writers everywhere owe the design team a debt of gratitude.
posted by flabdablet at 10:05 PM on March 29

Honestly, I really liked the 68k ISA. It read really well and managing typical high level language things was pretty straightforward. There was a point in my career that I could look at Mac 68k code and tell you which compiler generated it.

It did have its quirks. For example, if you had a pointer in an address register and you wanted to advance it by 1, you could use the ADDA instruction, but it turned out that the LEA instruction (load effective address) was much faster.

The initial release of the Macintosh prescribed a particular model for position independent code, dividing your code up into segments no larger than 32k each. You were also limited to 32k of globals. This was...frustrating, but compilers came up with better runtime models to solve this problem.

Early on, I worked with Lightspeed C (the precursor to Think C) because compared to other compilers for the Mac, it ran like a bat out of hell. They had an interesting problem in that they showed the number of lines compiled so far in a progress dialog box, but the Quickdraw font drawing routines were slowing it down too much. They modified the progress code to write directly to screen memory instead to remove the bottleneck.

While working on Adobe Acrobat 1.0, we used (IIRC) Think C 4.0 and the Think Class Library to run the UI. We were always looking for ways to speed up builds. Precompiled headers were a big win. We also found that if we had maxed out the RAM in the machine (I had a Macintosh IIcx with a black and white two-page display and a color monitor and 128M of ram and an external SCSI disk), we could set up a ram disk and redirect the include files to that. That made a huge difference in build time.
One of the tricks we had was multiple targets in the same executable. We had rasterizer code for the plain 68K and we also had code for the 68020 and we selected the appropriate one at startup.

I also wore an onion on my belt, which was the style at the time.
posted by plinth at 7:06 AM on March 30 [5 favorites]

I love hearing y'all talk nerdy!
posted by a humble nudibranch at 1:21 PM on March 30

*puts on some soft jazz, pours out some wine, slips into something slinky*

Talk nerdy to me, MetaFilter.
posted by hippybear at 1:41 PM on March 30

OK - here's a story of how I optimized frame buffer rotate code for PostScript duplex printing on the HP LaserJet III series printers.
posted by plinth at 2:15 PM on March 30

*lies disheveled on the twisted sheets, eyes akimbo and short of breath*

Yes, that was nerdy. So very nerdy!
posted by hippybear at 2:21 PM on March 30

A hardware engineer friend of mine whom I shall call Dave, for that is not his name, had designed a 68000-based video poker machine board for a gaming company that wasn't Aristocrat Leisure but really really wanted to be, preferably before Aristocrat could.

Apart from doing some pretty slick (for the day) animation, this board needed to support AppleTalk-compatible networking. So he'd designed in the same Zilog Z8530 serial communications controller chip that Macs of that era used for that, but the software dev team couldn't get it to work and they brought me on to get the lowest-level driver going.

AppleTalk was based on an HDLC data stream at 230.4 kbit/s, so once a packet got started, bytes would be arriving at a rate of one every 35μs or so. The Z8530 has only a 3-byte receive queue, so if the software needs to stop paying attention to it for more than about 100μs at a time, it's going to drop data. On output it's even tighter: the output queue is one byte long and because HDLC is synchronous, delays will break outgoing frames.

Dave had designed his board with the assumption that network traffic would be handled with an interrupt per transferred byte, based on knowing that (a) an 8MHz 68000's specified interrupt response time is 44 clocks / 8MHz = 5.5μs which is quite a lot less than 35μs and (b) grabbing a byte off a chip and throwing it into a buffer is not a real complicated job. But there were other things at play that, as a hardware guy rather than a software guy, he'd not given much thought to.

For starters, he'd overlooked the fact that those 44 clocks don't even start until the current instruction has completed. And you wouldn't think that that would make a huge amount of difference to a CPU clocked at the 68000's almost unthinkably quick 8MHz, until you look at the worst case: a MOVEM instruction that dumps all 16 32-bit registers to memory using 32 back-to-back 16-bit writes at 4 clocks each will take 128 clocks just for the register writes, plus instruction decode and address mode overhead. All of which means that the worst-case lag from interrupt assertion to the first instruction of the interrupt service routine is something close to 25μs, which doesn't leave much change out of the 35μs/byte budget.

And for seconds, he'd used a very cunning hardware trick based on the existence of that very instruction to implement a pseudo DMA controller for shoving video data around at high speeds: there was an I/O bit you could set that would make any CPU write cycle into a particular reserved address range get its data not from the processor data pins, but from a latch and counter arrangement that pulled it out of sprite ROM instead. This meant that all the memory timing and destination addressing was still generated by the CPU, avoiding all the bus-control arbitration logic normally required for DMA. It was a very Dave trick and it worked really well.

So there was a spot in the frame buffer update code where a long sequence of inline MOVEM instructions occurred with nothing else in between. If an AppleTalk packet happened to arrive at the start of that lot, the CPU would really struggle to service per-byte Z8530 interrupts at adequate speed.

And for thirds, while he was designing the hardware he'd had no inkling of what an awful dog's breakfast the lead software engineer's chosen C compiler would make of interrupt service routines generally. If assembly language really was going to be totally eliminated from the project the way the software guy insisted it must, the Z8530 would have filled up its receive queue and started dropping data before the CPU was even a tenth of the way through the compiler's standard ISR preamble code.

Dave was basically right, though. Once I'd convinced the software guy that there was simply no way to make this thing work without an assembly code low-level driver, I wrote one that used as little time as possible and never dropped any incoming data or broke any outgoing packets. It made good use of the 68000's memory-to-memory addressing modes to minimize the number of scratch registers it needed to save and restore, and it also polled the SCC and looped around inside the ISR until the SCC had nothing left for it to do before exiting. Under worst-case response time conditions, that let it handle up to four bytes inward or two outward per SCC interrupt, not just the one.

All good technical fun. But I do still feel a certain amount of guilt for having made a genuine if small contribution to general zombification.
posted by flabdablet at 9:12 AM on April 1 [2 favorites]


This thread has been archived and is closed to new comments