"...it's only when you have your code in your head that you really understand the problem."
August 24, 2007 2:55 AM

Ever read a blog post, and think, "I wish I wrote that"? For all the Mefites with the many AskMe questions about "can I/should I/how should I learn to/be a computer programmer", here's a pretty good explication of how good programming is done: Holding a Program in One's Head.
posted by orthogonality (43 comments total) 32 users marked this as a favorite
 
Excellent essay, pretty much ties in with my own experiences. It is quite amazing that any coding gets done at all in normal offices.
posted by AndrewStephens at 3:04 AM on August 24, 2007


Point #2 is especially key: "loading" a program into your head takes time. And Graham seems right that we have different kinds of memory; getting a sandwich (or browsing Mefi) doesn't wipe out your working memory, and I can even keep different kinds of programs in memory at the same time, e.g., a SQL operation or DDL can co-reside with something procedural.

But trying to work on a different problem of the same type will displace what's currently "in memory". One "cheat" is to mentally take your work home with you; I do this a lot, but it can completely block my ability to concentrate on other things that are "similar enough".

Ideally, I prefer to work 10-16 hour stretches for about four days, then crash. It can be difficult to get this to work in the typical 9-5 shop.
posted by orthogonality at 3:04 AM on August 24, 2007


This is Graham's take-home message, and it deserves to be pulled out and emphasized:
Good programmers manage to get a lot done anyway. But often it requires practically an act of rebellion against the organizations that employ them. Perhaps it will help to understand that the way programmers behave is driven by the demands of the work they do. It's not because they're irresponsible that they work in long binges during which they blow off all other obligations, plunge straight into programming instead of writing specs first, and rewrite code that already works. It's not because they're unfriendly that they prefer to work alone, or growl at people who pop their head in the door to say hello. This apparently random collection of annoying habits has a single explanation: the power of holding a program in one's head.
posted by orthogonality at 3:07 AM on August 24, 2007


5. Write rereadable code

I've been told that variables with names that defy logic are the way to go, in order to maintain a steady salary for years to come. You know... if (zelda = 0) then apple = suitcase. Then I tried it, and could never figure it out myself!
posted by Xere at 3:23 AM on August 24, 2007
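As a concrete illustration of point 5, here is the same check written first with Xere's meaningless names and then with names that carry meaning. A minimal sketch in Python; the pricing domain is invented purely for the example:

```python
# Obfuscated: nothing tells the reader what zelda, apple, or suitcase mean.
def f(zelda, apple, suitcase):
    if zelda == 0:
        apple = suitcase
    return apple

# Rereadable: the same logic, but the code now documents itself.
def effective_price(items_in_cart, quoted_price, fallback_price):
    """Fall back to the default price when the cart is empty."""
    if items_in_cart == 0:
        quoted_price = fallback_price
    return quoted_price
```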


I'm not a programmer, or even close, but Graham's essays are always fascinating to me. This one especially. I wish I had read his essays as an 18 year old.
posted by mullacc at 3:26 AM on August 24, 2007


So I guess that's what spellcasting in AD&D is like.
posted by redteam at 3:58 AM on August 24, 2007 [3 favorites]


I used to tell the newbie programmers that good code is what's left when you've deleted nine tenths of what you've written.
posted by flabdablet at 4:09 AM on August 24, 2007


I'm going to disagree with a good part of what he says. It is definitely true that you *can* program that way, and it's probably true that many people *do* program that way, but it isn't the only way, much less the best way.

The fact is that studies have shown that the optimal method for programming is virtually the opposite of the "hold everything in your head" approach. That finding doesn't sit well, because there's a geek machismo in programming circles which holds that the ability to keep a huge program in your head is a measure of your geek cred, and that using other techniques is a sign of incipient suit-ness, or at least of submitting to the suits.

Back when I was studying programming in college I was among the people who sneered at pseudo-code, flow charts, etc. They were merely tools for those without the mental horsepower to do it the real way: just sit down and start pounding out code because you're such a badass you can and do have it in your head.

Then the professor in one class mentioned a few studies that had shown that, among good programmers, writing things out in pseudo-code, or flowcharting, or whatever actually sped up the process, produced fewer bugs, etc. I didn't believe him, and I don't think anyone else in the class did either. So he split us into two groups, and required one group to plot out their program using pseudo-code before they were allowed to actually start coding. I was part of that group and certain that it was a waste of time and an insult to my intelligence.

Every one of us in the "do pseudo-code first" group finished our program before the people who had just started hacking, and on average our code was better (more terse to begin with, and fewer logic flaws).

The belief that it's faster and better to just sit down and start hammering out code is deeply ingrained. The belief that you must hold the entire program in your head, and that a failure to do so is harmful to your geek cred, is also deeply ingrained. But it's bullshit.

And it limits programs. If programmers limit themselves to problems that they can hold in their heads, they will never be able to tackle the really big problems. I don't care how badass you are, you can't hold the entire code for a truly huge program in your head. I'm not saying that it's bad to be able to hold large amounts of code in your head; I can and do. I am saying that it's bad to depend on that, and to do it to the exclusion of all else. Because if that's the way you operate, once you find a problem big enough that you can't hold it in your head, you will crash and burn.

Using pseudo-code or flowcharts seems slow, seems tedious, and seems like not only a waste of time but an embarrassing admission that you aren't smart enough to do it the real way. In fact it's faster than just sitting down and writing code, and it lets you tackle bigger problems.
posted by sotonohito at 4:39 AM on August 24, 2007 [19 favorites]
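One lightweight way to practice the pseudo-code-first discipline sotonohito describes is to write the plan as comments in the empty file, and only then fill in real code beneath each step. A sketch in Python; the word-counting task and file name are invented for illustration:

```python
# Plan first, in pseudo-code, before any real code is written:
#
#   read the input file
#   split the text into words
#   count how often each word occurs
#   report the N most common words
#
# Then implement each step beneath its line of the plan.

from collections import Counter

def top_words(path, n=10):
    # read the input file
    with open(path, encoding="utf-8") as f:
        text = f.read()
    # split the text into words
    words = text.lower().split()
    # count how often each word occurs
    counts = Counter(words)
    # report the N most common words
    return counts.most_common(n)

if __name__ == "__main__":
    # "input.txt" is a hypothetical path standing in for any real file.
    for word, count in top_words("input.txt"):
        print(f"{count:6d}  {word}")
```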


hmmm. while what he says makes sense i think you also need to be able to degrade gracefully. there are times when you can't do what he recommends. being able to work with a system where you can't get it all in your head - where it's written in a crappy language, a disorganized mess, the result of several people's inconsistent visions - is, in many ways, more challenging. "anyone" can get all obsessive about something, but being able to keep being productive when the conditions get harder (and being able to pull that mess back into something resembling order) is the real challenge.
posted by andrew cooke at 4:40 AM on August 24, 2007


I agree with sotonohito. The main problem I have with just sitting down and pounding out some code is the "then some magic happens" problem. I have a vague algorithm in mind, but because I haven't thought through every step, I don't realize that one of the steps is computationally expensive, impossible or whatever. Then I have a whole file full of code that I don't want to waste, so I just hack something together to make the magic happen in some half-assed way and the entire program suffers.

In fact, I had something like that at work recently. Because I didn't sit down beforehand and really lay out what I wanted to accomplish and how, I took a timid approach in the code to process each datapoint individually. It took me a few days to write and processing a full sample of datapoints took 40 minutes. Then I went back and spent another day "optimizing" each step to reduce the number of DB calls and it took only 7 minutes.

I noticed that most of those 7 minutes were in a particular area, which I eventually realized was O(N²). "Wait a second--if I process a full sample as a single unit, rather than adding a datapoint at a time, I could make this go a lot faster."

That version runs in 17 seconds. It took a single day to write.

I would agree with the general thesis that having a whole programming unit in one's head helps to engineer and maintain it. But a program of any real size should be composed of many programming units and I should *not* have to have them all in my head to work on any one of them.
posted by DU at 5:12 AM on August 24, 2007 [1 favorite]
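DU's 40-minutes-to-17-seconds arc is the familiar one-query-per-datapoint pattern giving way to processing the sample as a unit. A minimal sketch of the two shapes using Python's built-in sqlite3 module; the table, columns, and the doubling "processing" step are all invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE datapoints (id INTEGER PRIMARY KEY, value REAL)")
conn.executemany("INSERT INTO datapoints (id, value) VALUES (?, ?)",
                 [(i, i * 0.5) for i in range(500)])

def process_individually(ids):
    # Timid version: one database round-trip per datapoint (N queries).
    results = []
    for i in ids:
        (value,) = conn.execute(
            "SELECT value FROM datapoints WHERE id = ?", (i,)).fetchone()
        results.append(value * 2)
    return results

def process_as_unit(ids):
    # Batched version: fetch the whole sample in a single query.
    placeholders = ",".join("?" * len(ids))
    rows = conn.execute(
        "SELECT value FROM datapoints WHERE id IN (%s)" % placeholders,
        list(ids)).fetchall()
    return [value * 2 for (value,) in rows]

sample = list(range(500))
assert sorted(process_individually(sample)) == sorted(process_as_unit(sample))
```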


sotonohito and DU -- I didn't see anything in the essay contradicting the idea of using flowcharts or pseudocode. Indeed, such tools would be very helpful in the process of reducing parts of the program to "black boxes" which then don't need to be loaded in detail.

Great essay, exactly in line with how I work myself. And I use flowcharts and pseudocode and state maps and so on all the time.
posted by localroger at 5:20 AM on August 24, 2007


Nerds.

(Man, I should really go and finish that Computer Science degree..)
posted by radgardener at 5:21 AM on August 24, 2007


I was expecting to be annoyed (pontificators like Graham usually annoy me) but this was fairly straightforward advice. Some of his other essays, on economics and social policy, really expose a rather tepid mind. For example this one, where he argues that unions and decent-paying middle-class jobs were economic anomalies. And here he argues directly for income inequality because he thinks it will result in more startups. No, he doesn't take the second step and explain why more startups would be good for the country (IIRC), so I suppose he expects the reader to simply make that assumption. But why? I mean, even if what Graham says is true about startups, why the hell does it matter? The idea that startups of all things are the most important economic indicator, more important than anything else (like the median standard of living), seems myopic to say the least.

So since I already am annoyed by Graham I'm just going to close by saying any 3rd year CS major could have given you this advice. So there.
posted by delmoi at 5:28 AM on August 24, 2007 [1 favorite]


I agree with sotonohito. The main problem I have with just sitting down and pounding out some code is the "then some magic happens" problem. I have a vague algorithm in mind, but because I haven't thought through every step, I don't realize that one of the steps is computationally expensive, impossible or whatever.

DU: This advice, I think, is aimed at people who don't generally come up with "algorithms". The vast majority of programming that's paid for (as opposed to hobby stuff) is at corporations dealing with data flow and business logic. None of that stuff is ever really complicated, just annoying. For that kind of stuff Graham's advice would be helpful.

On the other hand, he misses some points that I think are important for those types of programmers, like version control and focusing on maintainability.
posted by delmoi at 5:33 AM on August 24, 2007


DU Yeah, an entire unit, however you define that (I typically define it as a single subroutine) is neither unreasonable nor restrictive to hold in your head, and I'll agree that if you can't manage to do that then you probably aren't cut out for programming.

localroger It appeared to me that the author was arguing that a programmer can, and must, hold an entire program in his head. As evidenced by his title: "Holding a program in one's head", and by lines like "Your code is your understanding of the problem you're exploring. So it's only when you have your code in your head that you really understand the problem."

And again, I'm not saying that there's anything wrong with holding state in your head; it's good. But it is wrong to depend on being able to do that, and not to know any other way to program. People addicted to the "all in my head" way crash when they hit a problem too big. I've seen it.

They work on toy problems in college, stuff that can usually be held mentally with only minor difficulty, then they hit a big programming project and it's like watching a bird fly into a window. Sometimes they won't even know that other techniques exist, and even if they do they typically don't know that they really can and do work. I think there's a serious problem with how programming is taught in college, and it seems to me that articles like that one are symptomatic of that problem.
posted by sotonohito at 5:33 AM on August 24, 2007


You can never get it all in your head. What you can do is capture all the abstractions at a certain level, and then change levels as needed. Functional decomposition, objects, patterns, pseudo-code, UML diagrams, state transition diagrams, all those are ways of abstracting the problem (Koenig: "Abstraction is selective ignorance.")

But at any one level of abstraction, whether that's a leaf function or the high-level architecture of an entire system, you really need to be able to fit that level into your head so you can manipulate it, modify it, poke at it, see it in operation.
posted by orthogonality at 5:34 AM on August 24, 2007
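A concrete reading of that idea: at the top level, each step is a named black box whose internals you deliberately ignore, so each level fits in your head by itself. A sketch in Python; the reporting task and every function name are invented for illustration:

```python
def nightly_report(orders):
    # This level fits in your head on its own: three black boxes, in order.
    valid = validate(orders)     # drop malformed orders
    totals = summarize(valid)    # aggregate amounts per customer
    return render(totals)        # format the summary as text

# Each helper is a separate level of abstraction, loadable on its own.
def validate(orders):
    return [o for o in orders if o.get("amount", 0) > 0]

def summarize(orders):
    totals = {}
    for o in orders:
        totals[o["customer"]] = totals.get(o["customer"], 0) + o["amount"]
    return totals

def render(totals):
    return "\n".join(f"{customer}: {total}"
                     for customer, total in sorted(totals.items()))
```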


I agree with the essay, in that having the model you are manipulating to build your program thoroughly in your head makes a big difference to coming up with novel solutions and speeding things up. I also agree with the comments that include an acceptance that the model cannot always encompass ALL of the code. The programs I work on for work are modular enough, and huge enough, that that would be neither realistic nor necessary. That being said, when my own personal projects are small enough that it does all fit in my head, it can make for some really efficient programming.

One amazing feeling I do sometimes get, when I am immersed in a program and the piece that I am working on is in my head, is the feeling that my subconscious is taking on more and more of the problem solving. It can even take over to the extent that I will start to manipulate code without even really knowing what I am doing until moments afterwards, when my conscious mind has caught up.
posted by beegull at 6:09 AM on August 24, 2007 [1 favorite]


One amazing feeling I do sometimes get, when I am immersed in a program and the piece that I am working on is in my head, is the feeling that my subconscious is taking on more and more of the problem solving. It can even take over to the extent that I will start to manipulate code without even really knowing what I am doing until moments afterwards, when my conscious mind has caught up.

I do that with Math, Music, and programming.

Of all three, I think programming is the easiest for me to do it in.
posted by radgardener at 6:16 AM on August 24, 2007


Agree with sotonohito. All great programmers do the 'in the head' thing. All of them. I believe, in fact, that developing this skill is what causes the deeply intense immersion into programming that typically happens early in a programmer's development. He or she will just go away from the world and do nothing but programming for a long while, often several months. When they emerge, they're programmers. This metamorphosis is actually called 'larval stage' by the geeks themselves, and they consider it to be when tinkerers become real programmers. (Myself, I'm just a tinkerer.... I've seen this from outside but I have not done it myself.)

But when you get into very large projects, the rules change completely, because the problem space is too big. NOBODY understands how everything works. As programs get more complex, dev teams rely on smarter and smarter programmers to hold the system. Eventually, it exceeds the smartest people on staff, and starts breaking, often badly. There are only so many genius-level programmers in the world, and they tend to get chewed up and spit out, burning to ashes in just a few years.

I'm not a good enough programmer to really know what happens next; I've seen the complexify-to-crash process many times, but being an amateur, I'm not sure how it's dealt with from there. I thought Microsoft had solved the problem fairly well with XP, but Vista is proof that they do not have good control over their problem space.

I suspect that the NASA code may be one of the only examples of a really big program that works, correctly, virtually every time, and they spend an unbelievable amount of time and money to get there. They're pretty much the only development organization on the planet that's gotten that good... and they do it with ordinary smart people, not superstars.

It appears that truly robust programs past a certain level of complexity become fantastically expensive to build. And when a single genius-level programmer can literally outperform a team of 100 mere mortals, there's lots of room for major upheaval. It's still possible for entire ordinary companies to be put out of business by single godlike programmers... but those ordinary-team company models will be the only way to move forward consistently over the long run.

From a big-picture viewpoint, it would appear that market forces mean that software is going to suck for quite a while yet, until programs get too sophisticated for the single supergenius to compete with. Eventually, method will win out over pure brainpower, but I think that may take another human generation.

It's also interesting to note that open source software is relatively immune to the financial pressure of the single supergenius; even if his program X eats 3/4 of the market for program Y, that doesn't automatically mean that Y will die, or even become irrelevant, like it would in the commercial world.
posted by Malor at 6:16 AM on August 24, 2007 [1 favorite]


How programming-specific is this advice, really? I do the same thing when writing. It may take me several days to get my ideas straight in my head, how I want to phrase certain statements, how I want to organize my defense of the main points, and where and how I will insert supporting evidence such as images, graphs and tables. I work best when I have long periods of time to continue writing. If I have a coffee break, it gives me a few minutes to re-think the structure and work out kinks before they are committed to disk. If I know I have to stop writing in an hour I usually get nothing done, as it takes me an hour or so to gear up to write in the first place. Finally, coming back to an old chunk of text long after I've written it, I often don't know what I was thinking when it was first put together.

In many ways this kind of advice could really apply anywhere people use their brains to get things done.
posted by caution live frogs at 6:24 AM on August 24, 2007


Wow, this is great and right up my alley:

DU, that's called 'throw one away', and it is quite possible that you would not have realized that the proper processing method was a whole sample at a time without brute-forcing your way through it.

Therein lies the rub with flowcharting and pseudocode... those actions do not take place within the space of the actual code, and it's easy to miss something or make assumptions that are incorrect. Don't get me wrong, I have a 4x8 ft whiteboard covered in hastily drawn flow charts, lists, and other random items.

I've always referred to 'it' as wrapping myself around the problem. This process doesn't involve code per se, as much as it does understanding data flow and the structure of boundaries between the parts of the program that must interface.

I create from the bottom up, normally using one or two test cases, but with a whole world of test cases sitting right in line. Start with the least dependent section first, and move up the hierarchy.

And I'm definitely guilty of tunnel vision when it comes to coding. If I'm deep into a code project, I turn into a 'unix' geek, unshaven, barely showered, and the whole world is lucky I have clothes on. Food becomes a secondary issue, and I pace about, generally coming to my worldly senses realizing I'm pacing about outside, muttering to myself, smoking a cigarette while strangers passing by tightly hold their children's hands lest they get too close to the madman I've become.

I like my code to be beautiful and I don't mean visually. I have a general belief that there is a 'more perfect' solution to most programming problems, and that those solutions often don't pop out until I've created at least one working iteration of a solution.

I think the point to make here is that this is where many programmers stop. Mostly out of necessity, because someone in management is screaming bloody murder about deadlines. I find myself in that situation more often than not.

But there are a few rare chances to transcend this 'just make it work' mode of operation, and then go back into the complexity, understand how the code is working, interacting, flowing, and build an even better mental model of what is going on. The process is iterative. Code -> understanding -> code, ad nauseam.

These are the moments where the great lightbulb doesn't just turn on but explodes into moments of wondrous joy. And few people understand me when I've reached this stage, a point of beautiful code where what is on the screen, written in text, is the physical manifestation of complete understanding in my mind.

quote:
“The megalomaniac pleasure of creation,” the psychoanalyst Edmund Bergler wrote, “produces a type of elation which cannot be compared with that experienced by other mortals.”
posted by killThisKid at 6:25 AM on August 24, 2007 [3 favorites]


Paul Graham is great at writing stuff people like to link to.
posted by chunking express at 6:26 AM on August 24, 2007 [3 favorites]


I've worked in both architecture and programming. Holding it all in your head is vital to both, although it is true that what is in your head may be one of many levels of abstraction.

When you don't try to hold the program in your head, you wind up taking an approach that is like designing a house one room at a time, starting at the front door. This rarely works out well.

Some people here seem to be confusing "just start coding" with this approach, but I don't see it that way. Rather, "just start coding" for me means writing a few lines of code that cover the entire problem space of whatever unit I'm working on, then going back and adding details, filling in method bodies, etc.

Having the whole problem in your head allows you to turn it around and look at it in different ways, leading to new insights. Otherwise you are like the blind man with the elephant, seeing only one significant aspect at a time.
posted by bashos_frog at 6:43 AM on August 24, 2007
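In code, bashos_frog's version of "just start coding" might look like the sketch below: a first pass that covers the entire problem space of the unit in a few lines, with the bodies stubbed out to be filled in afterwards. Python, with the catalog-import task and all names invented for illustration:

```python
# Pass 1: a few lines that cover the whole problem space of this unit.
def import_catalog(path):
    rows = parse(path)
    items = [to_item(row) for row in rows]
    store(items)

# Pass 2: go back and fill in the bodies, one at a time.
def parse(path):
    raise NotImplementedError  # e.g. csv.DictReader over the file

def to_item(row):
    raise NotImplementedError  # e.g. normalize fields, coerce types

def store(items):
    raise NotImplementedError  # e.g. bulk insert into the database
```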


this advice reminds me of the geek machismo (GM) manifesto, the jargon file. many parts of the GM argument are debatable, but, holy shit, i hate it when people talk to me while programming.
posted by kickback at 7:09 AM on August 24, 2007


bashos_frog Naturally it's important to grasp the problem as a whole. But if the problem is big enough you can only see it as a rather fuzzy whole. And that's important; without at least a fuzzy picture of the problem it's difficult to go much of anywhere.

The problem is that with small-scale projects you can code from the model in your head and it works well. The more badass the coder, the larger the project they can do that way. But, as Malor observes, even the most badass of coders can't do the biggest programs that way, and they break themselves trying.

I think it's related to the gifted child problem. I'm sure a lot of people here experienced it when they got to college and started taking real courses. I'll use me as an example. I'd gotten through my entire life before college not studying. I'd never needed to. I was puzzled about, and scornful of, the entire concept. I supposed that this "study" stuff might have been needed by mental weaklings (though even there I suspected that they were just being lazy or deliberately stupid), but for smart folks like me it's easy: you read the book, you pass the test. Worked great until I started taking real classes at university. Then I found myself struggling because I had no idea how to study. I learned, eventually, but I suspect that even today I'm not as good at it as I should be.

I think the same thing happens with programmers. There's a contempt for anything but just doing it by virtue of your own innate intelligence and badassness. But there is always a problem so big it can't be solved that way (or if you can solve it that way, you solve it badly). By then, the programmer has been successful with the "just doing it" approach for so long that they're sure it's the only way that really works. Maybe *other*, lesser, programmers need the wimpy stuff, but not *ME*.

There's a thing to remember historically here. The most legendary of the badass programmers were the early MIT hackers back in the 1950s and 1960s. Due to the inherent limitations of computers at the time, they all wrote out their code away from the computer. I don't think it's a coincidence that they were legends: they were smart, and by virtue of technological limits they were forced to program the non-intuitive way. A smart person programming the way I did before my professor shook me up can do a lot, but he'll never hit legendary status.

Malor I dunno how MS does it, and I'm sure that many big projects essentially operate by using up a genius level programmer. Which is a very bad thing indeed, genius level programmers aren't exactly common.

Other projects doubtless do it the other way: the flowcharts-and-pseudocode way that allows big projects to be completed without burning out a genius. I prefer the latter to the former.

kickback Well, yeah, like I said, I'm not arguing against holding a lot of state in your head. I'm just arguing that in and of itself that'll only work for some problems, and that eventually everyone, no matter how macho, will run into a problem that can't be solved by that alone.

And naturally programmers will tend to hold the maximum state in their heads at any given time because it is quite efficient. We just need to remember that when it isn't enough there are tools out there to let us handle problems bigger than we can fit into our heads all at once, and we need to be familiar with and used to using those tools. Among other things it really does make programming even the smaller problems faster, easier, and less buggy.

The semester my professor forced us to try the pseudo-code approach, I was still doubtful of the utility of the approach. So I alternated between doing it the macho way, and doing it the pseudo-code way for the toy problems he tossed at us that semester. I tracked my time and found that even including the time to lay out the pseudo-code it took me an average of 20% less time than just sitting down and hammering out code. And the programs where I did pseudo-code beforehand were more elegant.

Try it yourself if you have a few toy problems lying around. Do 50% of 'em by just sitting down and banging on the keyboard, and do the other 50% by laying out pseudocode or a flowchart, or whatever, first. You will probably find that a) the pseudocoded programs are better, and b) once you get used to it, it actually takes *less* time to start with pseudocode.
posted by sotonohito at 7:27 AM on August 24, 2007


Nope, can't do it. I'm working on one thing, get interrupted by a question, while I'm looking up code in our multi-million-lines-of-code project to answer it I get interrupted by three other people with another question, a bug report, and a feature request (despite the fact there are actual channels for these things).

I'm certainly willing to admit that MMO engine programming is a little different from the typical business case.

I got a call while at lunch yesterday, asking me about an obscure undocumented command for changing the font size in the console panel in the editing client, thrown in on a whim 3 years ago, which nobody ever really wanted or needed or used. I could tell them where in the source code to search for it, but not what it was.

Computers remember stuff so we don't have to.
posted by Foosnark at 7:36 AM on August 24, 2007


Yeah - not sure about that. On the one hand, I do have a general idea of where stuff is in our code base, how it works, etc, etc. But I definitely haven't got 1/2 a million plus lines of code carefully arranged in my bonce for the purpose of leisurely strolls.

Generally though - it seems like a realistic article.
posted by seanyboy at 8:03 AM on August 24, 2007


I had a CS lecturer once who called the bird-hitting-the-window thing "running out of native cunning". His point was that software development methodologies exist precisely to address that issue. I think it's a good point. But it doesn't change the fact that the more native cunning you can bring to bear, the better your solutions can be - as long as they clearly communicate that native cunning both to other readers of your code and to you the next day; "clever" code is not the same thing as elegant code.

Personally, I have a deep and abiding distrust for systems that feel like nobody has ever really had time to hold much of them in their heads. That's one of the many reasons I generally prefer using open-source stuff to proprietary stuff, which is almost universally developed under heavy deadline pressure. It's also the reason for my lingering distrust of OO; it seems to me that the pursuit of re-usability at the expense of almost everything else, including elegance and simplicity, can end up wasting more time than it saves.

I like killThisKid's Bergler quote very much, and I think the phenomenon it describes is pretty much what drives open-source coders. The nice thing about visible source code is that the elation of the discovery of a beautiful way to do it can be shared.

I can't really write code until I've covered a few sheets of scratch paper (not screen; must be paper, or whiteboard at a pinch) with boxes and arrows or timing diagrams or whatever. I've never liked code much as a way to Get It In My Head; for me, code is what comes out afterwards. This may or may not have something to do with the amount of code I was forced to submit on coding sheets for somebody else to punch onto 80-column cards early on in the larval stage - I dunno. But I do like my boxes and arrows.

I also get very cranky when I can't get an uninterrupted ten-hour run at it if what I'm doing is even slightly tricky.
posted by flabdablet at 8:32 AM on August 24, 2007


I've had discussions with coworkers about how writing code at times seemed less like engineering and more like art. Viewing ourselves as artists justified our bohemian lifestyles but conflicted with our corporate environment. I used to bristle every time my bosses would lecture us about getting in on time and being in during "core hours." What if I'm not inspired at 3pm? What if my epiphany comes at 3am? I figured as long as I got the project done on time, that's what mattered. And stop asking me for status reports.

Anyhow, I think the author makes some good points, like working in small groups and not having everyone editing the same piece of code, but all the office jobs I've had have operated like that. There's plenty of code I'd like to rewrite, and sometimes do, but that can't go on forever. New projects attract my attention, and I'd rather apply lessons learned to new code. I also enjoy readable code, but meaningful variable and function names affect my ability to read code more than its density does.

By the end of the article, I felt it could've been subtitled "why I don't work well with others." It's true that corporations don't like depending on the individual genius, because that person is cancer to a group. In my experience, the individual genius hoards all the juicy bits of programming, usually enabled by a manager who supports their reasons for doing so, and the rest of the group gets scraps. Morale slides, people complain and/or leave, the project flounders, and typically someone gets fired.

The most satisfaction I've had working in groups has come where the programmers are open and willing to share ideas and code. We may not be the alpha nerds of the organization, but together we produce some fine applications and raise each other's game in the process.
posted by hoppytoad at 9:06 AM on August 24, 2007


One of the best tips I heard for getting the code back in your head after a break is to leave a problem unsolved. Coming back to an unsolved problem lets you hit the ground running again, whereas if you've tied up all the loose ends, you have to scrabble to get a purchase on the smooth surface.
posted by bonaldi at 9:34 AM on August 24, 2007


"I wish I had wrote [sic] that"? No.
posted by ethnomethodologist at 9:57 AM on August 24, 2007


And oops. "I wish I wrote that"? Absolutely.

Sorry.
posted by ethnomethodologist at 9:58 AM on August 24, 2007


" But the wrong kind of interruption can wipe your brain in 30 seconds."

It's worse than a wipe, it's data corruption. If I don't deliberately clear my brain by ceasing work on a task after a poorly timed interruption I often find that errors are introduced. My mind has labeled some particular problem as resolved, but lost part of the solution that resolved it. I run into a similar problem if I lose the latest version of a program. Rewriting code from scratch is error-prone, because my mind thinks it already covered something missing in the newest version.

I completely disagree with those who think that this way of coding is anti-teamwork or incompatible with the use of tools such as diagrams and models (pseudo-code, OTOH, is a ridiculous waste of time). In my experience those who code this way are eager to share their joy in code with others, and less interested in political gamesmanship than the more "corporate" types, though exceptions exist in both camps.
posted by Manjusri at 3:07 PM on August 24, 2007


caution live frogs: In many ways this kind of advice could really apply anywhere people use their brains to get things done.

It's the only way I get things done, really.

Whether it's writing articles or essays (anything that's more than a 15-minute jobber) or scheming up an information architecture approach, or writing actual code or scripts, I tend to sketch out a few notes, maybe a couple of diagrams, and then set them aside on my desk for a while so the concepts forming in my head can incubate.

When things feel like they're about ready to hatch I'll schedule accordingly: block out big chunks on my calendar, set up an out of office message in email, turn off the phone and stock up on nuts and berries so that I can essentially hibernate with my ideas and, sometime later, emerge from my cave with something new, wholly sprung out of the experience.
posted by deCadmus at 3:09 PM on August 24, 2007 [1 favorite]


One new addition to the development scene is Mylyn, a plugin available exclusively for the Eclipse IDE*.

What makes Mylyn interesting and relevant to this discussion is that it does the "keeping the program in your head" for you. Basically, you create a list of tasks. As you work on each task, Mylyn carefully monitors which files (and, in the case of Java, which classes and methods) you work on. Every time you reactivate that task, it automatically pulls up the files and methods you were working on. Even better, it automatically keeps track of the various IDE tools you need for the resources you are using, and even somehow (although this part I haven't personally gotten to work) is supposed to measure the "importance" of a resource you've worked on based on how many times you opened the file, edited it, or how long you've spent viewing it.

Mylyn integrates with Bugzilla, Trac, and a variety of other online bug tracking/code tracking systems, and you can also have online/private tasks.

It's pretty cool.

I personally generally have some of the code of a program in my head. I think the best way to avoid needing to do this is basically good programming -- encapsulation, decoupled code, clear naming conventions, etc.

* Eclipse is a fairly popular IDE which I am sorta forced to use now at work. Its main plus is that it's probably the best program out there for ColdFusion development on the Mac. The downside is that the usability is pretty horrendous, and it takes forever to figure out how to configure it to behave the way you want it to.
posted by Deathalicious at 7:21 PM on August 24, 2007


I dunno. I suspect any person who subscribes to the "Programs are ideas" school of thought.

This is like arguing that a house is an idea because someone had to come up with a plan for it and figure out how to put it together. The house is not the idea, it is a constructed object -- hopefully well constructed by practiced craftsmen.

You can design a program any way that floats your boat -- flowcharts, prototypes, whatever -- but when it comes to actually creating the code you'll do better to drop the creative artiste act and put on your professional bricklayer hat.

Creative programming is all very nice in tight loops, but 95% of code should be dull as shit.
posted by tkolar at 9:36 PM on August 24, 2007


Dabblers and blowhards
posted by jcruelty at 2:20 AM on August 25, 2007


jcruelty wrote...
Dabblers and blowhards

Now THAT, I wish I had written.
posted by tkolar at 7:15 AM on August 25, 2007


Not me. It's exactly the kind of point-scoring-by-total-point-missing snide faux cleverness that absolutely gives me the shits.

Creating elegant code (in point of fact, creating any kind of elegant engineering) is an aesthetic pursuit. It just is. It's quite clear that not everybody can understand that; which is fine, if a little sad.

There is no conflict between this position and the use of whatever design aids suit you. Every aesthetic pursuit has its associated craft, and every craft has its tools.

It's possible to write code that does dull-as-shit work while still being elegant in and of itself. I've seen it done; one of my good friends and ex-colleagues is a master of it. I get the same kind of kick out of reading his production-grade code as a minimalist art aficionado might get out of a Malevich.
posted by flabdablet at 8:02 AM on August 25, 2007


Creating elegant code (in point of fact, creating any kind of elegant engineering) is an aesthetic pursuit. It just is. It's quite clear that not everybody can understand that; which is fine, if a little sad.

I would agree that coding can be beautiful. However, I would argue that it makes more sense, then, to compare coding to architecture, or to blacksmithing. In these other pursuits, one can make a truly beautiful thing. But, similarly, the thing is both functional and constrained by the purposes and materials given.

In painting, you can create anything two dimensional. You are not constrained by making an accurate depiction of a thing, or even a depiction. You don't have to respect the canvas -- you can overload it with paint or even mutilate it. You can even have a painting that has no expressed meaning.

None of this is possible with code. Can you have an abstract program? What would it mean to create a program that did nothing? Or a program that created something meaningless? Or created a program that had holes ripped in it?

I'm no code aesthete myself, so I'd never write an essay like this, but I can see how someone who was both a painter and a coder might become a bit miffed that someone is conflating the two. Consider all the times that Hollywood gets things about computing so teeth-grindingly wrong that it ruins what might be an otherwise perfectly enjoyable film. If a swimmer wrote an essay called "Swimming and Hacking" and said that swimmers were like hackers because they "dive in", it would be a similarly bad essay.

The guy's point is not that code lacks aesthetics. It's that what the first guy is saying about painting is just wrong, and that employing painting as a metaphor for programming is wrong.
posted by Deathalicious at 4:21 AM on August 26, 2007


It's only wrong if you miss the point.

Yes, painting and coding have utterly different constraints, and the results of the two activities don't have much in common. So, if you focus on the outputs rather than the process, you'll miss the point.

The point of the analogy is that the internal processes involved in making a painting that pleases the painter, and making code that pleases the coder, have some strong similarities; in both cases, the judgement calls about what to do next, or the right way to arrange the materials, are as much emotional and intuitive as reasoned. There's a feeling of balance that applies to elegant coding every bit as much as it applies to great painting.

It's also interesting that, just as somebody who appreciates painting can spend hours just staring at somebody else's canvas and having an aesthetic response to it, somebody who appreciates coding can spend hours poring over another engineer's code and having that same kind of response.

It seems to me that the mark of great art is that it causes an absolutely visceral response in people who get it, and that this is so regardless of the number of people who don't get it. If you personally don't get it, you probably will miss the point - because elegant-code-as-art isn't something that's talked about much, and all you've got to work with are the outputs. All I'm really saying is that for those of us who do get it, the analogy between coding and painting works just fine. The fact that the vast bulk of commercial code has more in common with signwriting or housepainting than with fine art doesn't make this less true.

Personally, I've stood in front of Blue Poles and stared at it for a good half-hour without getting much out of it. I don't get Blue Poles. But I know people who will wax absolutely lyrical about their own experience doing the same thing; and the funny thing is that what they're describing sounds a lot like what happened to me when I first started figuring out how Woz's Apple II Monitor ROM worked, or finally understood his Apple Disk II drive controller design, or disassembled Roland Gustafsson's Disk II fast-loader.

The points that "Dabblers and Blowhards" tries to make would leave me equally unimpressed if somebody tried to use a parallel argument to demonstrate that music and painting had nothing in common.

Some coding does rise to the level of great art; to my way of thinking, Steve Wozniak is a great artist. I've said before that if Seymour Cray was the J. S. Bach of engineers, then Woz is Jimi. If that makes sense to you, you're probably one of the people who gets it :-)
posted by flabdablet at 7:48 PM on August 26, 2007 [1 favorite]


Some coding does rise to the level of great art; to my way of thinking, Steve Wozniak is a great artist. I've said before that if Seymour Cray was the J. S. Bach of engineers, then Woz is Jimi. If that makes sense to you, you're probably one of the people who gets it :-)

I do get it. I'll freely admit that I'm not knowledgeable enough about coding to immediately grasp the beauty of a stretch of code, but I do understand that it can be beautiful and wondrous. I won't argue with that.

It's just that I do feel that the kind of constraints put on the eventual output matter. As such, I think the processes involved in sculpting, glazing, and firing a vase are probably a better analogy for programming than making a painting. I suppose if painters today did have to make their own paints and worry about paint chemistry the analogy would hold. But the truth is, with some exceptions, most painters only have to worry about the surface of the process -- what color goes where. Whereas a potter has to be intimately aware of the fundamentals of the process. They have to know what is a stable structure and what isn't, what temperature to fire their clay pots at, which glaze to use and how it will react to the clay and to the fire. In other words, they need to know the "guts" of the process, just like programmers. Now, there are some potters who don't know the chemical details. They make the pots, choose a glaze that they've been told will give them the color they want, and hand it over to someone else to be fired. That's kind of like programmers like me. I've taken some classes in CS theory, but most of the time I use programming languages that take care of everything like garbage collection, resource allocation, and even database manipulation for me.

But I can appreciate a pot without knowing the processes that went into it, and I can read an elegantly written piece of code even if I know I could never write something like that myself.

So, um, yeah. Writing a computer program isn't painting a painting; it's making a pot (although I personally like the music analogy too).
posted by Deathalicious at 1:52 AM on August 27, 2007


I don't think it matters whether a computer program is more like a pot than a canvas. The essay that "Dabblers and Blowhards" heaps scorn upon is drawing attention to what's happening inside a person as they do paint a painting or make a pot or play Voodoo Child or design the Cray 1 or network a cluster of Apple II's without using network cards.

Graham is pointing out that, perhaps contrary to appearances, hackers - who may or may not have day jobs as working programmers - are driven by an aesthetic impulse when they design stuff for their own pleasure, and that their first response to code is also aesthetic. The fact that code exists to do something, unlike a painting which exists to look like something or music to sound a certain way, is largely irrelevant for the purposes of this analysis - what's interesting about the finished product is its ability to provoke an aesthetic response.

There really is a strong and useful analogy to be made between the creative processes that underlie hacking, painting and (if you like) pot-making. It's useful because if you recognize that hacking is an essentially aesthetic pursuit, and you have a tame hacker whose work you value, then you will treat that person more like one of your art staff than one of your clerks, and you won't even try to make them interchangeable with somebody else.

The aesthetic response to other people's software designs is real and deep as well. It's the best explanation I know for the visceral contempt that so many Unix hackers have for Windows. But even Microsoft understands this to some extent; why else would they have called their standard IDE "Visual Studio"?

Arguing about whether a pot or a musical composition is a better parallel to a program than a painting really does miss the point. What annoys me most about Maciej Cegłowski's so-called "smushing" of Paul Graham is that I think he's missing it deliberately.
posted by flabdablet at 4:52 AM on August 27, 2007



