Good Ideas, Through the Looking Glass
October 3, 2012 6:52 AM

Good Ideas, Through the Looking Glass [PDF], by Pascal creator Niklaus Wirth, is an interesting look at some ideas - both successful and not-so-successful - from the past decades of computing.
posted by JeffL (36 comments total)

This post was deleted for the following reason: Poster's Request -- travelingthyme



 
Some good stuff in there, but man, some serious misunderstandings of the difference between your pristine, pure-as-in-theory load/store computing environment and the pockmarked rooftops, sewers and back alleys that make up the urban warfare of modern computer security practices.

If all programs were written in a proper programming language, this should not even be possible, because – if the language is correctly implemented – no reference to resources not named in the program would be possible.

That was a little... naive, a stark contrast to something like Marcus Ranum's six dumbest ideas in computer security, which is also a good read.

Still, I think this sort of historical record is important. "This is what we tried, it didn't work and this is why. If something in our assumptions has changed, maybe you can make it work, but if not, don't waste your time on it."
posted by mhoye at 7:51 AM on October 3, 2012 [1 favorite]


I'm not sure how seriously I can take a historical view of computing that mentions Lisp only once in passing.
posted by DU at 7:56 AM on October 3, 2012


Wirth once quipped that you could call him by name (pronounced "virt") or by value (pronounced "worth").

That will be funny if you have been programming for a long time.
posted by ubiquity at 8:01 AM on October 3, 2012 [5 favorites]


DU - looks like the paper was written in 2005, before the recent resurgence of interest in Lisp. Interesting.

The way they wrote the first Pascal compiler, in Pascal, without being able to compile it, is quite mindbending.
posted by memebake at 8:02 AM on October 3, 2012


mhoye: You may be thinking of "correctness" in its informal sense.
posted by bdc34 at 8:03 AM on October 3, 2012


It's not really a discussion of specific languages or machines so much as the underlying ideas or features (hence the discussion of functional programming rather than Lisp as such). This is his conclusion regarding functional programming:
Looking back at the subject of functional programming, it appears that its truly relevant contribution was certainly not its lack of state, but rather its enforcement of clearly nested structures, and of the use of strictly local objects. This discipline can, of course, also be practiced using conventional, imperative languages, which have subscribed to the notions of nested structures, functions and recursion long ago.
Note the use of the singular contribution. Basically, he thinks functional programming had one good idea, and the imperative languages already copied it decades ago. If that's his view, why would he spend much more time talking about FP and Lisp in particular?
posted by jedicus at 8:04 AM on October 3, 2012


He also speaks of extensible languages as if they died off in the '60s. One of the features of functional languages is that they usually have the expressive power to implement what would normally be considered syntax or a language feature without rewriting the compiler/interpreter (this is common practice in Lisp- and ML-family languages, Haskell and Scheme being notable for it in particular).
posted by idiopath at 8:10 AM on October 3, 2012


If all programs were written in a proper programming language, this should not even be possible, because – if the language is correctly implemented – no reference to resources not named in the program would be possible.

This seems a perfectly reasonable statement to me and I don't see why this is "naïve" at all. Even in languages which don't enforce such a rule, if you're even a little smart you'll be naming all external things in such a way that it's clear what they are.

> I'm not sure how seriously I can take a historical view of computing that mentions Lisp only once in passing.

From the first page of the article:

"This led me to the idea of collecting a number of such good ideas which turned out to be
less than brilliant in retrospect. [...] No claim is made for its completeness."

(I might also claim that LISP hasn't really had that many good ideas that turned out to be really bad, so it might not deserve much mention even in a "complete" survey...)
posted by lupus_yonderboy at 8:16 AM on October 3, 2012


This seems a perfectly reasonable statement to me and I don't see why this is "naïve" at all.

It implicitly presupposes that there isn't (or can't be?) a malicious actor involved, and that assumption is the original sin of computer security failures.
posted by mhoye at 8:26 AM on October 3, 2012


And overall, the discussion here is so far disappointing. Why not discuss the amusing examples in what's admittedly an incomplete survey rather than harping on what you feel is missing?

I am personally enjoying it (I skimmed the whole thing and now I'm reading it in detail).

In particular, this lets me see what happened to a lot of concepts that I ran into peripherally a few decades ago and then vaguely wondered about. Binary-coded decimal (BCD) was one such example - I remember seeing a lecture on the Honeywell CP6 architecture where it was mentioned that it had a machine instruction to convert BCD to Gray code. This seemed stupid to me at the time and I am glad to be proven right.

Some of these other bad features live on in modern systems. For example, Algol's "own" variables, which I am just finding out about from this article, seem very similar to C++'s static local variables.

I use this feature because it can be really handy - but I am quite aware that it's got numerous traps associated with it, as is the C++ community. Moreover, there are fairly specific guarantees about when static locals are constructed and when they are destroyed, and, even more important, there are "anti-guarantees" - things you cannot rely on for these variables.
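
To make the parallel concrete, here's a minimal C++ sketch of the kind of thing I mean (the function and names are invented for illustration):

    #include <iostream>

    // A function-local static: the C++ analogue of Algol's "own" variables.
    int next_id() {
        // Guarantee: initialized the first time control passes through this
        // declaration (and, since C++11, that initialization is thread-safe).
        static int id = 0;
        return ++id;
    }

    int main() {
        std::cout << next_id() << "\n";  // 1
        std::cout << next_id() << "\n";  // 2 -- the state persists across calls
        // Anti-guarantee (one of the traps): for statics with destructors, the
        // order they are torn down at program exit depends on the order their
        // functions were first called, so other cleanup code can't rely on it.
    }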
posted by lupus_yonderboy at 8:27 AM on October 3, 2012


It surprised me to realize that someone had to invent stacks. I mean, of course someone must have, but they seem so basic to me that it took me aback a bit.

The other thing is that I have a hard time taking seriously code written in variable-width fonts. I can't justify it, but it just plain looks wrong to me.
posted by benito.strauss at 8:38 AM on October 3, 2012


> It implicitly presupposes that there isn't (or can't be?) a malicious actor involved, and that assumption is original sin of computer security failures.

I'm not seeing this. Here's the full quote:
On closer inspection one realizes that the need for protection and classes of programs arises from the fact that programs are possibly erroneous in the sense of issuing requests for memory outside their allocated memory space, or accessing devices that should not be directly manipulated. If all programs were written in a proper programming language, this should not even be possible, because – if the language is correctly implemented – no reference to resources not named in the program would be possible.
Surely he's absolutely right? If a language completely prevents you from even mentioning resources that you aren't supposed to have access to, why would you need memory protection?

Such is the theory behind Javascript, which has worked very well in preventing web programs from accessing memory that they aren't supposed to get to, and such is the theory behind the Java sandbox, which hasn't been so successful, but IMHO simply because the Java language is so complex and has all sorts of reflection capabilities.
posted by lupus_yonderboy at 8:39 AM on October 3, 2012


code written in variable-width fonts

For probably historical reasons, academics like mathematicians and computer scientists tend to write their code examples in variable-width fonts, while computer programmers almost invariably use fixed-width fonts.
posted by lupus_yonderboy at 8:41 AM on October 3, 2012 [1 favorite]


I have to admit that I'm a bit surprised about Wirth's comments on object-oriented programming.
Nevertheless, the careful observer may wonder, where the core of the new paradigm would hide, what was the essential difference to the traditional view of programming. After all, the old cornerstones of procedural programming reappear, albeit embedded in a new terminology: Objects are records, classes are types, methods are procedures, and sending a method is equivalent to calling a procedure. True, records now consist of data fields and, in addition, methods; and true, the feature called inheritance allows the construction of heterogeneous data structures, useful also without object-orientation. Was this change of terminology expressing an essential paradigm shift, or was it a vehicle for gaining attention, a “sales trick”?
To some extent, I see where he's coming from, in that inheritance is the only clear new idea in OOP. But what Wirth is seriously overlooking here is the human dimension of programming. That is, object-oriented programming languages significantly bias things so that average software developers are more likely to do "the right thing". Yes, you could definitely do object-style programming in C or Pascal, but these imperative style languages also make it easy to take ugly shortcuts with global variables, overly long procedures, and more.

The two big shifts with OOP are to (a) de-emphasize the specific steps to take (the procedures) and to instead focus on its dual, namely the data, and (b) directly embody a lot of the best practices and design patterns that are well-known to great programmers, but probably not so much to average programmers. And by definition, most programmers are average. Unfortunately, this is also a simple observation that we often forget in academia, given the kinds of people we hang around all day.
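
(To make the quoted equivalence concrete, here's a minimal C++ sketch with invented names. The "record plus procedure" spelling and the "object plus method" spelling do exactly the same thing - which is precisely why the interesting differences are human ones rather than semantic ones.)

    #include <cstdio>

    // "Objects are records, methods are procedures": both versions below
    // compute the same thing; only the spelling of the call differs.
    struct RectRecord { double w, h; };
    double area(const RectRecord &r) { return r.w * r.h; }   // procedure on a record

    struct RectObject {
        double w, h;
        double area() const { return w * h; }                // method on an object
    };

    int main() {
        RectRecord rec{3, 4};
        RectObject obj{3, 4};
        std::printf("%g %g\n", area(rec), obj.area());       // calling vs "sending"
    }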
posted by jasonhong at 8:56 AM on October 3, 2012 [1 favorite]


Lisp invented a lot more than functional programming, and well before 2005 at that. Garbage collection, for instance. Bignums, I think. Specialized hardware. Many other things, usually back in the 70s-80s.
posted by DU at 9:03 AM on October 3, 2012


"Because traditionally financial transaction – and that is where accuracy matters! – were computed by hand with decimal arthmetic, it was felt that computers should produce the same results in all cases, in other words, commit the same errors."

We call this "bugwards compatibility" at my shop. It is invoked surprisingly often.
posted by clvrmnky at 9:04 AM on October 3, 2012


...object-oriented programming languages significantly bias things so that average software developers are more likely to do "the right thing"

It is to lol, IME. The major feature I've ever seen OOP bring is inflated lines-of-code counts.

As for design patterns: If you are typing boilerplate code, your language sucks. This is exactly the problem Lisp macros were invented to solve and reminds me that "macros" are another thing Lisp invented, also well before this paper was written.
posted by DU at 9:05 AM on October 3, 2012


Oh oh oh, I thought it was a collection of good AND bad, not "good that became bad". OK, Lisp's non-inclusion now makes perfect sense.
posted by DU at 9:07 AM on October 3, 2012


Design patterns are not "boilerplate". They are high-level abstractions used to solve interesting problems in a manner that is both maintainable and testable.

The fact that one could write a boilerplate templating system that included some pattern or another is unrelated.
posted by clvrmnky at 9:14 AM on October 3, 2012


As for design patterns: If you are typing boilerplate code, your language sucks.

I think that equating design patterns and "boilerplate code" is a sign that you're thinking of design patterns as far more granular than they actually are. That sounds to me like calling dovetail joints an instance of "boilerplate woodworking."

Anyway, I'm surprised we're still having this argument about OOP. There are certain problems to which it's well-suited, and I think that becomes apparent very quickly to programmers who are working in collaboration with others on a large, well-specified project. The need for information hiding is less apparent when you've got control over the whole thing.
posted by invitapriore at 9:17 AM on October 3, 2012 [1 favorite]


I thought that the specialized hardware angle was interesting. It drove both the development of useful new hardware features - like support for procedure return addresses, prompted by Algol's inclusion of recursion - and crazy stuff like the NS example.

"So I decided to program a new version of the compiler which refrained from using the sophisticated instructions. The result was astonishing! The new code was considerably faster than the old one. It seems that the computer archictect and we as compiler designers had “optimized” in the wrong place."

I love this situation: the hardware folks make these specialized complex instructions at the request of the compiler folks, but then go off and optimize the snot out of the simple instructions. At that point the complex instructions are useless, since the compiler can outperform the microcode with its broader knowledge of the code.

The introduction of recursive procedure calls by Algol and the description of opposition to recursion is interesting.
posted by bdc34 at 9:18 AM on October 3, 2012


Surely he's absolutely right? If a language completely prevents you from even mentioning resources that you aren't supposed to have access to, why would you need memory protection?
"Don’t you see that the whole aim of Newspeak is to narrow the range of thought? In the end we shall make thoughtcrime literally impossible, because there will be no words in which to express it. Every concept that can ever be needed will be expressed by exactly one word, with its meaning rigidly defined and all its subsidiary meanings rubbed out and forgotten. Already, in the Eleventh Edition, we’re not far from that point. But the process will still be continuing long after you and I are dead. Every year fewer and fewer words, and the range of consciousness always a little smaller. Even now, there’s no reason or excuse for committing thoughtcrime. It’s merely a question of self-discipline, reality-control. The Revolution will be complete when the language is perfect."
Orwell, of course. And to be clear, I'm not suddenly accusing you of being a secret fascist here, but it's just such a great line about the perfection of languages that I couldn't pass up the chance to wheel it out.

I'm trying to make two points. First: there is not one arbiter of language perfection here, and because there are many languages and those languages are made by fallible humans to address our fallible human needs, there is this ongoing need for flexibility of expression that makes "perfect" languages ill-suited for actually getting stuff done in any kind of timely manner. "Perfect" languages seem to lack the kind of flexible expressiveness humans seem to prefer, and more generally there are sound theoretical reasons grounded in Gödelian incompleteness theory that suggest that no perfect language can both exist and be useful.

So given that you have to be able to write a compiler for a general-purpose computer in order for that computer to really be useful, and that anyone with the wherewithal can do so as they see fit, the idea that linguistic perfection means you don't need defence in depth against bad actors who may not share your toolchain falls apart immediately.
posted by mhoye at 9:23 AM on October 3, 2012 [1 favorite]


As for design patterns: If you are typing boilerplate code, your language sucks.

That's a profound misunderstanding of what design patterns are. The intention of the "pattern language" idea is to give people a generalized set of terms to talk about overall structure while avoiding having to speak specifically to the particular implementation details. It's just about the exact opposite of "boilerplate code".
posted by mhoye at 9:27 AM on October 3, 2012 [3 favorites]


Well, dovetail joints ARE an instance of boilerplate woodworking. If you are handcrafting all your dovetails/patterns then there's a good possibility of screwing them up. If you use a template/macro then there isn't. Templates and macros can be of any granularity, from a couple lines of code up through creating whole families of functions (getters/setters being an easy, even simplistic, example).
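
Here's a deliberately simplistic C++ sketch of the getters/setters case (all names invented); a Lisp macro would do the same job without the preprocessor's limitations:

    #include <string>

    // One macro stamps out the private field plus its getter and setter,
    // so none of that boilerplate is typed by hand.
    #define PROPERTY(type, name)                                    \
        private: type name##_{};                                    \
        public:  const type &name() const { return name##_; }       \
                 void set_##name(const type &value) { name##_ = value; }

    class Person {
        PROPERTY(std::string, name)
        PROPERTY(int, age)
    };

    int main() {
        Person p;
        p.set_name("Ada");
        p.set_age(36);
        return p.age() == 36 ? 0 : 1;   // exit code 0 if the generated code works
    }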

But really, I don't hate OOP as a concept and it definitely is well-suited to many things. What I really object to is languages, e.g. Java, that pretty much force you to use it. Other languages basically ban you from using it and that's not good either. Plenty of languages allow you to do either.

Fewer languages, but still a non-zero number, allow you to write some parts functionally and some parts non-. That's another great thing to use judiciously but not be forced to shoehorn every problem into. (I guess you CAN always write functionally, but it may not buy you much at a language level if it doesn't do things like memoization.)

This is why I prefer style-agnostic languages like Lisp, Tcl, Python over Java, Haskell and C.
posted by DU at 9:28 AM on October 3, 2012 [2 favorites]


It's fabulous to see a historical view of computing. I've experienced a lot of it, but he's also snarky and judgmental where he has little business being so.

For example, he calls out C on a number of fronts and, to be fair, he calls out some design decisions in Pascal, but not to the extent of the famous "Why Pascal is Not My Favorite Programming Language" by Brian Kernighan. Why C took off and Pascal did not has more to do with whether the language gets in the way or not. If-expressions that don't short-circuit, one return per procedure, broken I/O, broken string manipulation, broken dynamic allocation, broken for loops, etc. are all things that routinely got in my way in Pascal and weren't fixable without substantial cost. He's free to have grief for the classic example:
x+++++y
but that's a strawman argument against ++. Just because I can write run-on sentences in English doesn't mean that I should or will (and this applies to a number of C language features like the preprocessor). Just to remind myself of the pain, I wrote a typical systems coding problem in C and Pascal and it was just as painful as I remember it.
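
(For anyone curious why it's a strawman: the lexer, not the ++ operator, is the culprit. A minimal sketch:)

    // The tokenizer is greedy ("maximal munch"), so x+++++y is read as
    // x ++ ++ + y, i.e. ((x++)++) + y, and x++ isn't an lvalue, so it
    // simply won't compile. Nobody is forced to write it that way:
    int demo(int x, int y) {
        // return x+++++y;   // error: lvalue required as increment operand
        return x++ + ++y;    // the intended expression, spelled readably
    }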

With regards to OOP:
Nevertheless, the careful observer may wonder, where the core of the new paradigm would hide, what was the essential difference to the traditional view of programming.
The difference is this - before there was OO, we tried to write code to get as many of the benefits as possible:
A struct that contained typed function pointers and a possibly separate struct to hold the data. The problem is that writing out the damn function pointer prototypes and the constructors and type initializers and destructors is a royal and error-prone pain in the ass. Something that a robot could and should be doing, hence it went in the language.
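
A minimal sketch of that hand-rolled style, in the C-compatible subset of C++ with invented names, for anyone who never had the pleasure:

    #include <stdio.h>
    #include <stdlib.h>

    /* A struct of typed function pointers plus a data pointer: dispatch,
       construction and destruction all wired up by hand. */
    typedef struct Shape {
        double (*area)(const struct Shape *self);
        void   (*destroy)(struct Shape *self);
        void   *data;                  /* per-"instance" state */
    } Shape;

    typedef struct { double w, h; } RectData;

    static double rect_area(const Shape *self) {
        const RectData *d = (const RectData *)self->data;
        return d->w * d->h;
    }

    static void rect_destroy(Shape *self) {
        free(self->data);              /* the "destructor" you must remember */
        free(self);
    }

    /* The "constructor": every pointer filled in by hand, every time. */
    static Shape *rect_new(double w, double h) {
        Shape *s = (Shape *)malloc(sizeof *s);
        RectData *d = (RectData *)malloc(sizeof *d);
        d->w = w; d->h = h;
        s->area = rect_area;
        s->destroy = rect_destroy;
        s->data = d;
        return s;
    }

    int main(void) {
        Shape *s = rect_new(3.0, 4.0);
        printf("%f\n", s->area(s));    /* "sending a method" = calling through a pointer */
        s->destroy(s);
    }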

With regards to functional programming, he also misses the advantages that have been gained in terms of lazy evaluation, shown in Haskell. I've also had some very surprising results running image processing code in F#. In most cases the F# code runs about 1.5x the equivalent C (ie, takes 1.5 times as long - which is consistent with performance I've seen in the MS JIT for most CLR languages), but in many cases it far outstrips it. Shocked the heck out of me. I also recall an exercise done in a class about implementation of functional programming languages. Without side-effects, a program can be written as a graph and to execute the program, you reduce the graph to a single node. As an exercise, we as students acted as CPUs/threads reducing a graph in parallel. The really cool thing was how little contention there was (which was the point of the exercise). He seems to have missed the benefit of that paradigm and stuck in the "first this, then this" mode of thinking.

And honestly, from time to time, I miss writing self-modifying code on the 6502, even though I didn't need it. It just turned out that absolute indexed (ie, LDA $2000, Y) took 4 cycles for load and 5 cycles for store and could use either X or Y as indexed and indirect indexed (ie, LDA ($40), Y) took 5 and 6 cycles for the same and only had the Y register available. The end result was that your bit-blitter ran substantially faster. Then again, I also miss writing to the metal in 68K - that was a real honey of a processor - made a lot of sense.
posted by plinth at 9:28 AM on October 3, 2012 [3 favorites]


mhoye: You win, it is clear that the research into strongly typed languages, static or dynamic, is a complete waste and they are no use against the attacker holding the gun to one's head demanding one type in their PIN at the ATM.

They also don't defend-in-depth against changes in subject or being called a fascist and then being told that you are not being called a fascist. Point conceded.
posted by bdc34 at 9:52 AM on October 3, 2012


Ah...good old Niklaus. He spoke at GaTech in the late 80s. We all walked out thinking "He lives on an interesting planet. We should visit it some time." and got back to writing distributed OSes in C++ and whatnot.

I note in his section on architectural missteps he left out segmented memory, a la 8086. Probably because of why it came about...

And Cray was just as wrong when he thought there was no need for VM. At least Cray had a good reason to think that, not "memory will get cheap faster than your need for memory expands". Shades of "no one needs more than 640k".
posted by kjs3 at 10:03 AM on October 3, 2012 [1 favorite]


You win, it is clear that the research into strongly typed languages, static or dynamic, is a complete waste and they are no use against the attacker holding the gun to one's head demanding one type in their PIN at the ATM.

I don't understand; did Hitler also steal people's ATM PINs at gunpoint? Man, the more I hear about that guy, the more he sounds like a jerk.

The absolutist position you're ascribing to me here is the opposite of what I'm trying to say, which is that defense-in-depth and flexibility of expressiveness seem to be a better, more human-usable way of getting work done with computers than the decontextualized-purist approach the original author seems to be advocating.

Like just about everywhere else, purity is a beautiful thing to aspire to and a terrible thing to mandate.
posted by mhoye at 10:15 AM on October 3, 2012


mhoye: "That's a profound misunderstanding of what design patterns are. The intention of the "pattern language" idea is to give people a generalized set of terms to talk about overall structure while avoiding having to speak specifically to the particular implementation details. It's just about the exact opposite of "boilerplate code"."

On the other hand, in many languages the "design patterns" are already part of the language implementation itself. Starting with this premise, you can say that the need for day to day attention to design patterns means that either a) you are forced to use an insufficient language, or b) you are designing a programming language. Furthermore, the difference between a) and b) above is academic, and when writing in an insufficient language you spend quite a bit of time and energy implementing features of the language you should be using. This is a significant subset of the reasons for boilerplate code.
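
A hedged C++ illustration of that, with invented names: what the GoF books call the Strategy and Iterator patterns shrink to a callable argument and a built-in loop form once the language grows those features.

    #include <algorithm>
    #include <functional>
    #include <vector>
    #include <cstdio>

    // The "Strategy" is just a value you pass in; the "Iterator" is range-for.
    void print_sorted(std::vector<int> v, std::function<bool(int, int)> strategy) {
        std::sort(v.begin(), v.end(), strategy);  // behavior supplied from outside
        for (int x : v) std::printf("%d ", x);    // iteration, built into the language
        std::printf("\n");
    }

    int main() {
        print_sorted({3, 1, 2}, [](int a, int b) { return a < b; });  // ascending
        print_sorted({3, 1, 2}, [](int a, int b) { return a > b; });  // descending
    }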

To put it another way, design patterns are code smells.
posted by idiopath at 10:46 AM on October 3, 2012


On the other hand, in many languages the "design patterns" are already part of the language implementation itself.

Yeah, that's a strong point; it feels like something that's been lost in the translation from the original "Pattern Language" idea into the "Design Patterns" you're referring to.

I think the original idea of the pattern language is much more general and powerful than the series of design patterns it's been in some sense reduced to. But that said, it's absolutely true that if you're looking at a well-understood, routinely-implemented element of the technology, it should be handled either in standard libraries or by the compiler.
posted by mhoye at 10:53 AM on October 3, 2012


We'd have been a lot better off if Pascal (or its variants) had become the really widespread language in the 80s rather than C.

I was writing C and Pascal (and also Fortran77 and Ratfor) in the 80s, mostly on Unix systems, but also on the mighty IBM S/360. Kernighan gets it exactly right -- Pascal is fine to talk about programming, but C is much more usable in the real world.
posted by phliar at 1:35 PM on October 3, 2012 [1 favorite]


I liked his weak apology:
The designer of Pascal retained the goto statement (as well as the if statement without closing end statement). Apparently he lacked the courage to break with convention and made wrong concessions to traditionalists. But that was in 1968.
Edsger Dijkstra's letter Go To Statement Considered Harmful was published in the March 1968 issue of CACM, when Wirth was editor. Pascal was designed in 1968–1969 and published in 1970.
posted by fredludd at 2:38 PM on October 3, 2012


(I should mention that I worked in the same office as Dr. Kernighan for a few years and he's a really sweet and civilized man in person...)
posted by lupus_yonderboy at 5:29 PM on October 3, 2012 [1 favorite]


@jeffl: I would agree that the original "teaching" version of Pascal created by Wirth has some annoying limitations, but later versions were certainly useful for getting real work done.

Translation: if you make Pascal not-Pascal enough, it's really useful. I don't think that's really Wirth's point.
posted by kjs3 at 6:46 PM on October 4, 2012


In school, I learned Turing, which was Pascal adorned with lots of assertions and other stuff intended to make it better for teaching. They even had Object Oriented Turing, but we never used it. This was in a second programming course, about 1995. Soon after, that department switched to C++ for their intro classes (what could go wrong?), then went to Java, which I think they still use.

The first class was taught in Gofer, a dialect of Haskell. I remember well its mournful error message: the stack has collided with the heap. I had no idea what that meant, but it sounded very impressive.
posted by thelonius at 8:11 AM on October 7, 2012


" [...] the idea that linguistic perfection means {you} don't need an in-depth depth defence against bad actors who may not share your toolchain falls apart immediately."

That paragraph is very convincing. I think Wirth is misguided in this case.

But consider this: many of the features introduced in the name of 'flexibility' -- and that are (suspiciously) formally 'incorrect' -- are also security holes. I've nothing against hardware that backs up language constructs (e.g. the Lisp machine, in my case). But formal attempts at perfection (we'll never get there, but it's worth trying) can *improve* security. Identifying which flexible features must be used with care is plenty worth it.

The "pockmarked rooftops, sewers and back alleys" are largely the result of inattention to and derogation of the contribution that can be made by theory.

Bottom line: most designers of 'modern' languages should have opened a *!&^% textbook on language design. We know how to do this stuff.
posted by nickp at 7:18 AM on October 10, 2012 [1 favorite]




This thread has been archived and is closed to new comments