The Real Novelty of the ARPANET
February 8, 2021 2:21 AM   Subscribe

In my view, the Network Working Group was able to get everything together in time and just generally excel at its task because it adopted an open and informal approach to standardization, as exemplified by the famous Request for Comments (RFC) series of documents. [...] That framing, and the availability of the documents themselves, made the protocol design process into a melting pot of contributions and riffs on other people’s contributions where the best ideas could emerge without anyone losing face. The RFC process was a smashing success and is still used to specify internet standards today, half a century later. 3800 words from Sinclair Target for Two-Bit History touching on ARPANET's protocols.
posted by cgc373 (25 comments total) 17 users marked this as a favorite
 
made the protocol design process into a melting pot of contributions and riffs on other people’s contributions where the best ideas could emerge without anyone losing face

I'm old enough to have watched a lot of this stuff getting developed, and I'm inclined to agree with Radia Perlman that in fact the process was a boys' club where people who actually knew what the fuck they were doing were sidelined and ignored, and that we're still paying for that mistake today with an Internet based on shitty fragile protocols that break every time their appointed manicurists stop grooming them for five minutes.
posted by flabdablet at 3:47 AM on February 8, 2021 [17 favorites]


No love for RFC1149 and the more recent enhancement protocol RFC2549?
posted by sammyo at 5:10 AM on February 8, 2021 [3 favorites]


There are some good things about the RFC process, particularly when the Internet was more experimental and weird, but a combination of the boys' club Radia talks about and the fact that the stakes are much higher these days, because the internet woke up and discovered it was a major economic force, has resulted in some things far more complex than they ever needed to be.

RFC 1149 is the best RFC, but I must confess some affection for RFC 748, specifying the "TELNET RANDOMLY-LOSE Option"
posted by rmd1023 at 5:24 AM on February 8, 2021 [2 favorites]


I am old enough to have participated in European standards activities. They were committees, seemingly dominated by bureaucrats whose companies wanted them out of the office. (ISO seven layer model? That must mean seven protocols, one for each layer. And so seven committees, some with multiple working groups, and of course coordination meetings between those committees.)

Much of the activity was dominated by companies wanting to recycle what they had already developed, or wanting to remove things they didn't feel like implementing. I asked my company to remove me from the LDAP activity ('L' for lightweight) when it became the exact opposite, because some people had an X.500 (very not lightweight) implementation they wanted to reuse.

I only saw the US system from afar but the 'boys club' view was not the impression I had at all.
posted by epo at 5:27 AM on February 8, 2021 [3 favorites]


"One for each layer"! I'm getting soft, some layers had lots of protocols. This was European style job creation.
posted by epo at 5:33 AM on February 8, 2021


I mentioned RFC 1178 - Choosing a name for your computer five days ago; a couple of days ago the same RFC came up on Hacker News; now there's an RFC FPP. Something about coincidence.

I have gone up twice against M$ over not following the RFC. I sorta won once (we got a specific .exe that we had to tell people to run once, and if it's still being a bad computer and breaking things we'll figure that out). I lost the second time, and we had to change our DHCP allocation to minimize the problems with M$ machines that refused to accept a new IP address. Graar.

Radia lost me when they implied that in '92 the ARPANET was tiny enough to just switch to this other thing all easy peasy. I expected them to go to putting that new destination address at the front of the packet because that's so obvious. But it sounds suspiciously like IPv6 without MAC randomization, where one part is the hierarchical routable bit and the other part is the LAN side. Replacing IP/Ethernet with something more like routable IPX, where every machine has a global id, pretty much eliminates any idea of privacy. And also when they talked about coming up with spanning tree and then waiting a few months for the implementers to do it. I don't think the hardware in '92 was up to the task of just switching to this new protocol. But I digress.

I especially like RFC 1345 Character Mnemonics & Character Sets. I have a script to search that and use an Input Method that handles most of them.

I was a student working in IT at one of those places on that old ARPANET map, and I've known a bunch of people who wrote some of those RFCs and some of the software you rely on every day. It's so not a bunch of, like, frat boys.

Anyways, the 802.1q networking stuff is crap compared to the SFS that lost to it. Damn market forces.
posted by zengargoyle at 7:28 AM on February 8, 2021 [3 favorites]


If you're interested in it, I wholeheartedly recommend Darius Kazemi's 365-RFCs project, in which he celebrates the 50th anniversary of RFC1 with a year going through the first three hundred and sixty five RFCs in their order of appearance with some notes and comments.
posted by mhoye at 7:37 AM on February 8, 2021


Radia lost me when they implied that in '92 the ARPANET was tiny enough to just switch to this other thing all easy peasy.

So this brings back a weird factoid that I remember from middle school in the mid-1990s which I've never been able to reconcile. The library had an Internet For Dummies page-a-day calendar which I always made a point of reading, and there was one day which humorously made note that TCP/IP was only intended to be a test protocol for the Internet and everyone was supposed to have switched to AppleTalk (!) but no one ever got around to it. On the one hand, it was a mid-1990s For Dummies page-a-day calendar dispensing trivia to a general audience--probably not the most researched of publications. On the other hand, that particular page seemed like someone had an axe to grind, and maybe they meant some other protocol (DECnet?)
posted by RonButNotStupid at 7:55 AM on February 8, 2021


humorously made note that TCP/IP was only intended to be a test protocol for the Internet and everyone was supposed to have switched to AppleTalk

I'm assuming AppleTalk was the joke. TCP/IP was a test protocol that they thought would be replaced at some point with something better, but the market said this is good enough and ran with it.
posted by jmauro at 8:21 AM on February 8, 2021


epo beat me to it; but the contrast for the IETF process at the time was the OSI seven layer model. There was a time it was considered the Way Forward. True to ISO form, though, the documents were nearly unreadable, and rigid, and glacial in development. Also you probably had to pay $1000 to read them.

Part of what was fun about the early ARPAnet days was that you'd develop the protocol first, then describe it in an RFC and maybe share some code. Big emphasis on a working system. I wish all standards were like that.
posted by Nelson at 9:51 AM on February 8, 2021 [4 favorites]


I'm assuming AppleTalk was the joke. TCP/IP was a test protocol that they thought would be replaced at some point with something better, but the market said this is good enough and ran with it.

Just a reminder we've been fighting to implement IPv6 since 1998 and US last mile providers still couldn't give the most flying of fucks about it.
posted by Your Childhood Pet Rock at 11:19 AM on February 8, 2021 [6 favorites]


Wow, that guy has a real axe to grind against angle brackets, and a strange approach to "historical" writing. The essay on Lisp quite literally spends half the article judging a book by its cover.
posted by scolbath at 11:22 AM on February 8, 2021 [1 favorite]


LISP fanboys: if there were more than a dozen of them, they'd be even more annoying than bronies.
posted by Your Childhood Pet Rock at 11:39 AM on February 8, 2021 [1 favorite]


Nelson: When I'm doing network training/presentations, I often include a picture of the OSI model and explain that it has absolutely nothing to do with TCP/IP but is legally required to be present in all beginner networking documentation. :)
posted by rmd1023 at 1:05 PM on February 8, 2021 [2 favorites]


Oh I dunno, we still talk about the 7 layer network as a model even for the Internet. It's easy enough to sorta map the layers. Ethernet or WiFi is layers 1 and 2, IP is layer 3, UDP & TCP are layer 4, and HTTP is mumbling layers 5-7. There's a whole industry built around the buzzword "layer 7 routing" and people talk about load balancing at layer 3 or 4 or 7 as understandable shorthand.

The funny thing is really it's only up through layer 3 that mattered to make the ARPAnet work. That stuff hasn't really changed much in 40+ years other than IPv6, and that's a very straightforward extension of IPv4. Even TCP on layer 4 isn't as essential as you'd think; QUIC has slowly been taking over for TCP as a major type of traffic on the Internet. (QUIC is nominally UDP but only in the most trivial way, and that only because it seems impossible to introduce a new IP protocol into the existing ecosystem of 'smart' routers and firewalls.)
posted by Nelson at 1:52 PM on February 8, 2021
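

The "layer 3 hasn't really changed much" point is easy to make concrete: the fixed IPv4 header is still just 20 bytes. Here's a toy sketch using Python's struct module — the addresses, ID, and TTL below are arbitrary example values, not anything from the thread:

```python
import struct
import socket

# Pack a minimal 20-byte IPv4 header: version/IHL, DSCP/ECN, total length,
# identification, flags + fragment offset, TTL, protocol (6 = TCP),
# checksum (left 0 here), source address, destination address.
header = struct.pack(
    "!BBHHHBBH4s4s",
    (4 << 4) | 5,                      # version 4, header length 5 words
    0, 20, 0x1234, 0, 64, 6, 0,
    socket.inet_aton("10.0.0.1"),      # example source
    socket.inet_aton("192.0.2.7"),     # example destination
)

# Unpack it again and read a few fields back out.
fields = struct.unpack("!BBHHHBBH4s4s", header)
version = fields[0] >> 4
ttl, proto = fields[5], fields[6]
src, dst = socket.inet_ntoa(fields[8]), socket.inet_ntoa(fields[9])
print(version, ttl, proto, src, dst)  # → 4 64 6 10.0.0.1 192.0.2.7
```

Everything a router needs to forward the packet is in those 20 bytes, which is part of why the layer-3 core has been so stable.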


Laugh about the ISO 7 layer model if you like, but they were grasping at concepts that wouldn't be hashed out and realized for decades. But think about what is happening in a modern browser app:

JavaScript
HTML
HTTP
TCP
IP
IEEE 802.x
Cables, radio, etc

That’s a near exact correspondence to the now derided 7 layers if you squint just a little bit.
posted by sjswitzer at 2:46 PM on February 8, 2021 [4 favorites]


Oh yeah. Conceptually, the 7 layer model is great, but I think it's worth pointing out to people that it's a spec for a different protocol, so if things don't fit right, that may be why. (It gets a little mushy in the middle, there, to be sure.)

QUIC is still weird and new to me, but I think it's probably the biggest recent example of a robust new(-ish) protocol developed with a functional code base that's well past the lab stage before hitting the IETF. The very large closed-system environments at Google (and probably Facebook, Amazon, and Akamai) are the only places I think you could have something like that spin up without it getting publicly workshopped first.
posted by rmd1023 at 3:50 PM on February 8, 2021


All of you complaining about the hidebound ISO way of doing things and the boys' club world of RFCs and the IETF, take a break for a second and read up on the IEC's standards and protocols. 61850 and 61970 are a marvel of awfulness. Or the IEEE's DNP3, a standard so bad that every vendor that uses it specifies just which portions of the protocol they pointedly refuse to implement.
posted by ocschwar at 5:28 PM on February 8, 2021


I played chess with a computer over ARPANET. Yes, the computer was in Urbana, Illinois. (Good morning, HAL.) It beat me relentlessly. You guys can go crazy over RFCs and protocols, but I still remember the wonder of this goddamned thing just tearing me to pieces in a small corner of a dark room at the computer center.
posted by SPrintF at 5:37 PM on February 8, 2021 [1 favorite]


i love these two pieces from a networking engineer about IPv4 and the evolution of IPv6: 2017 2020

lots of interesting details about networking and what folks were trying to do at any one time.
posted by bruceo at 6:24 PM on February 8, 2021 [2 favorites]


The RFC process was a smashing success and is still used to specify internet standards today, half a century later.
I dunno, "rough consensus and running code" seemed pretty good in the 1980s when anybody who had permission to open a raw socket, or listen on a socket numbered < 1024, was presumably a good guy who knew what he was doing. But I have to suspect that if the implementers of RFC821 could anticipate the results of letting anybody put anything they wanted in message headers, they would have put in some kind of authentication up-front.

A whole lot of basic internet infrastructure is demos that got out of hand, and not really something you'd have put into production if you were thinking through the consequences of having them cast into concrete as basic standards for interoperability.
posted by Aardvark Cheeselog at 7:13 AM on February 9, 2021 [1 favorite]
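

The "anybody put anything they wanted in message headers" point is easy to demonstrate: RFC 821/822-style headers are just text the sender composes, with nothing in the original protocol verifying them. A minimal Python sketch — no mail is actually sent, and the addresses are invented:

```python
from email.message import EmailMessage

# Headers are plain text the client writes; nothing in classic SMTP checks
# that 'From' bears any relation to who is really sending the message.
msg = EmailMessage()
msg["From"] = "president@example.gov"   # invented address, trivially forged
msg["To"] = "victim@example.com"
msg["Subject"] = "Urgent"
msg.set_content("Please wire the funds.")

print(msg["From"])  # → president@example.gov — the forged header, as written
```

Everything bolted on since — SPF, DKIM, DMARC — exists because the receiving side has to reconstruct, after the fact, the authentication the original design punted on.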


On the other hand, all that fancy well designed stuff where they thought through all the consequences ahead of time never got traction. Perfection is the enemy of good and all that. IIRC there was a highly influential essay or book written about exactly that concept applied to Internet engineering, what, 15 years ago? I can't remember.

To pick on email; the lack of authentication in RFC 821 set the stage for all the horrors of spam, phishing, and the general brokenness of email today. OTOH email is also still a functional tool for most people and has enjoyed nearly 40 years of success in RFC 821 form. The early folks wisely realized authentication was hard, particularly decentralized authentication, and just punted. If you look at the horrors of modern email now and DKIM and the like, they made the right choice then.
posted by Nelson at 7:51 AM on February 9, 2021 [1 favorite]


"Never got traction" is not the same as "outrun in short-term-appeal-driven market for consumer tech with strong network effects". Alternatives worked and had traction, but enough forces aligned to push a particular set of winners over the line. Very historically contingent and contextual forces.
posted by ead at 8:30 AM on February 9, 2021 [1 favorite]


Metafilter: RIP Lotus Notes.
posted by Nelson at 8:33 AM on February 9, 2021


There is a certain degree of viewing the past with the benefit of hindsight going on here. Up until the late 80s the internet was populated by a smallish number of technically literate people who could be trusted to be more or less well behaved (or at least well intentioned). Progress seemed to be gradual and continuous.

Although there were issues beforehand, for me the big change came with the web (now if there was ever a prototype which went mainstream far too early...). After that the genie was out of the bottle and was never going to be put back. The number of users exploded, most were unskilled in computing issues and some were ill intentioned; existing design choices, good and bad, became ossified; commercial interests appeared. Progress became damage limitation.

AND, the OSI 7 layer model was an excellent model, possibly the best attempt yet. It separated out concerns very neatly and provided a good framework for thinking about implementations. The problem is that the Europeans decided to literally implement the model, hold layer boundaries sacrosanct, and provide the same separation of concerns in their implementation that existed in the model. This was silly, and possibly reflected a turf war between different committees; the Americans realised there was good engineering sense in merging layers. Someone at the time said or wrote, "If you know what you are doing, 3 layers is enough; if you don't, 7 layers won't help you." Progress was strangled in committee.
posted by epo at 12:58 PM on February 9, 2021 [1 favorite]




This thread has been archived and is closed to new comments