
This is my Code Gun. There are $Armory.getGunCount() like it, but this one is mine.
September 11, 2011 4:39 PM   Subscribe

Code Hero is a game designed to teach programming. It uses the first-person shooter idiom, where you are armed with a Code Gun that shoots JavaScript. It reminded me a little of hacking the Gibson.
posted by sigma7 (118 comments total) 41 users marked this as a favorite

I love how rollerblades were the preferred mode of transportation back then. I remember hacking my first big bank back in 1998 a month after picking out my sweet new hacker name (phyre-phreek) and thinking "man, with all this money I now have, I'm gonna go out and buy myself a sweet pair of rollerblades to more efficiently hack the planet."
posted by TheBones at 4:45 PM on September 11, 2011 [5 favorites]


How do you play the game?
posted by tylerkaraszewski at 4:52 PM on September 11, 2011


This surely is what coding is really like. In that movie swordfish (where the super secret password is a dictionary word) the coder manipulates large cubes that... do something. And in Jurassic Park, coding seemed to involve flying over some sort of virtual landscape. Bait and switch. 20 years later I'm still taking shit out of databases and putting shit into databases.
posted by the noob at 5:01 PM on September 11, 2011 [18 favorites]


Code blue.
posted by Malor at 5:05 PM on September 11, 2011 [1 favorite]


Metafilter: putting shit into databases
posted by sigma7 at 5:13 PM on September 11, 2011 [5 favorites]


And taking it back out, one pageview at a time
posted by mrzarquon at 5:18 PM on September 11, 2011 [1 favorite]


@tylerkaraszewski: That is a little vague in the teaser clip, but I'm guessing it's a matter of sucking code into your Code Gun and shooting it back out to form the program. Which makes sense, if this is a learning tool -- start simple, then move your way up to actually writing stuff.

I gotta say I approve of stuff like this (and of stuff like Rails for Zombies -- previously). I didn't do all that well when I was taking CS courses, partly because I can't really absorb stuff in a lecture. I'm sure I'm not the only person for whom labs were a better means of retention. And I'm sure that, as goofy as the premise is, Code Hero will help others learn in its own way.
posted by sigma7 at 5:29 PM on September 11, 2011


In that movie swordfish (where the super secret password is a dictionary word) the coder manipulates large cubes, that... do something.

While getting blowjobs and seeing Halle Berry's boobs. I was a software developer for almost a decade. In all that time, my employer provided me with neither blowjobs nor boobs. Either Hollywood lied, or I've got one hell of a lawsuit on my hands.
posted by Mr. Bad Example at 5:30 PM on September 11, 2011 [2 favorites]


I write software for a living. I find it much more interesting and exciting than shooting things in video games. A particularly tricky software engineering/debugging problem can easily enthrall me for days, while I tire of a new FPS in an average of about 10 minutes.

(I also know the people writing this particular piece of software, and they are very enthusiastic and dedicated to it, which is great, I just know that it's not something I'd want.)
posted by rmorell at 5:36 PM on September 11, 2011 [1 favorite]


I never got around to watching "Hackers" when it came out, or in all the years since. I probably would have heaped scorn on that sequence had I seen it in theaters but now? I was giggling in delight at its mix of fantasy and reality.

The game looks awesometarded in pretty much the same way. I wonder how well-sandboxed the code it runs is? Because it's written in Unity, which uses... JavaScript. You're already hacking the game, why not hack the FUCK out of the game if you can?

(that logic is why I felt absolutely no qualms about sector editing my save game when I was playing through the c64 game based on "Neuromancer". Systems are for hacking.)
posted by egypturnash at 5:49 PM on September 11, 2011


You're already hacking the game, why not hack the FUCK out of the game if you can?

I'll be disappointed if there isn't some sort of Kobayashi Maru situation required to win the game.
posted by RobotVoodooPower at 5:59 PM on September 11, 2011 [4 favorites]


Teaching people to program with Javascript is like teaching people avionic engineering by having them build paper airplanes.
posted by localroger at 5:59 PM on September 11, 2011 [1 favorite]


Teaching people to program with Javascript is like teaching people avionic engineering by having them build paper airplanes.

I don't know...you can actually make a living coding javascript. You may not be a computer scientist, and what you hack together might make an in-the-bone programmer weep and rend garments, but you don't actually need to go further if what you're looking for is a Job Skill.
posted by maxwelton at 6:03 PM on September 11, 2011


When I read it was Code Hero I thought that it was going to be like Guitar Hero, where pieces of code were coming at you and you had to enter the right pieces of code at the right time and there were crowds cheering you on.
posted by lilkeith07 at 6:06 PM on September 11, 2011 [5 favorites]


Should be written in LISP.
posted by symbioid at 6:14 PM on September 11, 2011 [1 favorite]


Old and busted: "Christ, what an asshole." New hotness: "Should be written in LISP."
posted by DU at 6:17 PM on September 11, 2011 [5 favorites]


you don't actually need to go further if what you're looking for is a Job Skill

If you learn in an extremely high level language like Javascript, you will not get any sense of how the computer works or how it is taking your commands and turning them into useful output. If you don't know how the computer works, you won't know when you are asking it to do something unreasonable, and you won't know how to figure out why you're getting something you don't expect when its vision of your code is different from yours.

In other news it's been at least 10 years since I saw Hackers and I thought it kind of silly at the time, but looking over the OP clip it's clear the moviemakers were trying really hard to convey to non-computer people what the experience of writing and hacking code was like. Given the challenge of that I think they did an OK job.
posted by localroger at 6:18 PM on September 11, 2011


Teaching people to program with Javascript is like teaching people avionic engineering by having them build paper airplanes.

Yeah those guys at Google are fucking n00bs.
posted by odinsdream at 6:31 PM on September 11, 2011 [3 favorites]


Teaching people to program with Javascript is like teaching people avionic engineering by having them build paper airplanes.

I'm curious about the intersection of the set of people who talk trash about language X, and the set of people who use, on a daily basis, a valuable product implemented in X. For instance, how many JavaScript haters use GMail?

I'm willing to debate what makes a good first teaching language, but hardcore programming language snobbery is beginning to annoy me. For instance, I work at a small teaching college where intro CS is taught in Python. A colleague of mine told me, with disdain, that it's not a good course because "Python's not a real language." After some debate, I found out that "real" languages are compiled, like C or C++. So LISP is not a real language either?

Bottom line: anything that entices people who might otherwise not be interested in (or aware of) programming is probably a good thing. The ones who realize they like figuring out how to tell computers what to do will sort out the rest later.

On preview: what odinsdream said.
posted by ubermuffin at 6:34 PM on September 11, 2011 [12 favorites]


You know, I bet you could turn a coding tutorial into a roguelike pretty easily. "Oh no, I've been killed by an unbalanced )!"
posted by Sibrax at 6:39 PM on September 11, 2011 [3 favorites]


If you learn in an extremely high level language like Javascript, you will not get any sense of how the computer works or how it is taking your commands and turning them into useful output. If you don't know how the computer works, you won't know when you are asking it to do something unreasonable...

I think you've conflated some levels here. "How a computer works" is at least two distinct things: 1) the hardware implementation and 2) computing theory. High level languages can teach you plenty of #2 and many, if not most, of the things that are unreasonable to ask a computer to do are unreasonable because of some #2-related reason (combinatoric explosion, for instance).

If you think that knowing the difference between ints and floats is particularly useful or something anyone will care about in 100 years (as opposed to, say, fast ways to factor numbers), I think you are mistaken.
posted by DU at 6:50 PM on September 11, 2011 [2 favorites]
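DU's "#2" point is easy to make concrete in the thread's language of choice: whether a request is reasonable to ask of a computer often depends on the algorithm, not the hardware. A minimal sketch (the function names are illustrative, not from any library):

```javascript
// Naive recursion repeats work exponentially: fibNaive(50) is
// unreasonable on pretty much any hardware you'll meet.
function fibNaive(n) {
  return n < 2 ? n : fibNaive(n - 1) + fibNaive(n - 2);
}

// Caching results makes the same question linear. Same machine,
// different theory -- the "#2" kind of knowledge.
function fibMemo(n, memo = new Map()) {
  if (n < 2) return n;
  if (!memo.has(n)) {
    memo.set(n, fibMemo(n - 1, memo) + fibMemo(n - 2, memo));
  }
  return memo.get(n);
}

console.log(fibMemo(50)); // 12586269025, effectively instant
```

No ints, floats, or registers in sight; the combinatoric explosion lives entirely at the level of the algorithm.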


More to the point, learning your first language has nothing to do with how a computer works and everything to do with taking what you think is your understanding of a process and fixing it until you really do understand that process and can teach it to the world's dumbest person, i.e. a computer. In other words, learning to program is mostly learning to think and very little grokking of hardware.
posted by DU at 6:54 PM on September 11, 2011 [1 favorite]


I am a professional C++ programmer and my favorite language to write hobby projects in is JavaScript. It is a fun language that can do a lot of useful things without a lot of work, there's an interpreter for it on nearly every computer, and there's lots of libraries/documentation/examples/etc for it.

I find that as a general rule, many programmers like to say: "To *really* understand how the computer works, you have to understand it at a fundamental level."

The definition of "fundamental" is always "the lowest level at which I, the one making this statement, am comfortable programming." C people say it, assembly people say it, hardware design people say it, and they all think that whatever level they interact with the computer at is suitably fundamental.
posted by tylerkaraszewski at 6:57 PM on September 11, 2011 [15 favorites]


Teaching people to program with Javascript is like teaching people avionic engineering by having them build paper airplanes.

I really disagree. Javascript is a great entry level programming language. Syntactically it's lovely, it's free and available in every browser, the results are immediate and useful to the new programmer - in that they can instantly see that they are manipulating a web page or information coming from a web page.

Another advantage is the sheer weight of JavaScript domain knowledge that exists - there are a gazillion examples; just view the source of any web page.

Then once they get a handle on Javascript - you can show them jQuery and tell them that they will never actually ever use what they just learned. I kid, but it's far better than VB (shudder) and less daunting than ruby or python.
posted by the noob at 7:01 PM on September 11, 2011


Is this game coded in Javascript? (Since it is in fact a legit-enough language to make a 3D game.) If so, are the bits of code shown in the video actual code that is part of the engine? Because if that's the case it's a pretty cool game mechanic, almost Brutalist somehow, making the inner workings part of the whole structure. (And it probably should be written in Lisp.)
posted by vogon_poet at 7:04 PM on September 11, 2011


DU: If you think that knowing the difference between ints and floats is particularly useful or something anyone will care about in 100 years (as opposed to, say, fast ways to factor numbers), I think you are mistaken.

Well this is where we disagree. I have uncovered half a dozen major errors in published closed-source applications in my industry due to people not knowing how floating point math works. You need to know how ints and floats work if you are programming. If you don't, you will fuck things up and not know why. That will be just as true 100 years from now as it is today, and as it was 30 years ago when the floating point libraries were written - libraries which will probably all still be in use 100 years from now.
posted by localroger at 7:09 PM on September 11, 2011 [3 favorites]


If you think that knowing the difference between ints and floats is particularly useful or something anyone will care about in 100 years (as opposed to, say, fast ways to factor numbers), I think you are mistaken.

Funny, javascript feels exactly the same way.
posted by 7segment at 7:10 PM on September 11, 2011


If you learn in an extremely high level language like Javascript, you will not get any sense of how the computer works or how it is taking your commands and turning them into useful output. If you don't know how the computer works, you won't know when you are asking it to do something unreasonable...

Years ago, things like garbage collection and memory allocation were extremely important to me as a coder - now? I couldn't be bothered with them. As someone once said - "fuck art, let's dance"
posted by the noob at 7:13 PM on September 11, 2011


fuck art, let's dance

Let's try not to trip over each others' feet while doing so at least.
posted by localroger at 7:17 PM on September 11, 2011


Oh, and looking at the calendar, I should have said 40 years ago when these libraries were written. Mental riffs don't seem to update themselves for the passage of time.
posted by localroger at 7:18 PM on September 11, 2011


I'm guessing it's a matter of sucking code into your Code Gun and shooting it back out to form the program.

This was pretty much my impression of the mechanic when I saw a demo of this at Super Happy Dev House 44 in May -- a Primer Labs guy walked us through a couple of examples where he changed the behavior of the code gun so that it would transport you towards what you shot at, or would create a copy for you in your inventory, stuff like that.

If you learn in an extremely high level language like Javascript, you will not get any sense of how the computer works or how it is taking your commands and turning them into useful output.

A generation of programmers who started with BASIC interpreters would probably disagree with you. I'm among them. Sure, it didn't help me get a grip on what was going on inside the machine, but I was pretty young and had plenty of time to learn about the nuts and bolts by keying in raw hex into microprocessors later.

I totally agree that if you're going to be a good programmer, sure, eventually you are going to have to learn things including the nature and limitations of floating point arithmetic, and if you don't, you will get badly burnt the moment you're representing currency and find that $0.10 + $0.20 doesn't test equal to $0.30 or something like that.

But a high level language can be great to get you into breaking down a task into a sequence of instructions that become a program. You don't have to learn it all at once.

I also think the bad reaction to JavaScript's number type is a bit overwrought, and I say this as a person who's been burnt by it. Real plain integers and some other constructs amenable to raw binary manipulation would be nice for some situations, but if you know and respect the limits of its floating point numeric type, it's serviceable for most.

Then once they get a handle on Javascript - you can show them jQuery and tell them that they will never actually ever use what they just learned.

Well, except jQuery is JavaScript. It doesn't replace the language, it replaces the crappy DOM. It's actually a pretty grand example of what kind of awesome things you can do with JavaScript the language if you know what you're doing.
posted by weston at 7:41 PM on September 11, 2011
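The currency surprise weston describes is easy to reproduce in any JS console, and it's standard IEEE 754 double behavior rather than a JavaScript quirk - Python and Ruby do the same thing:

```javascript
// $0.10 + $0.20 in IEEE 754 doubles -- the classic surprise:
console.log(0.1 + 0.2);          // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3);  // false

// Two standard ways to "know and respect the limits":
console.log(Math.abs((0.1 + 0.2) - 0.3) < 1e-9); // compare within a tolerance
console.log(10 + 20 === 30);                     // or keep currency in integer cents
```

Small integers are exact in doubles, which is why the integer-cents trick works.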


Well, except jQuery is JavaScript. It doesn't replace the language

Well yeah, but I haven't written document.getElementById('...') for quite some time.
posted by the noob at 7:50 PM on September 11, 2011


localroger > If you learn in an extremely high level language like Javascript, you will not get any sense of how the computer works or how it is taking your commands and turning them into useful output. If you don't know how the computer works, you won't know when you are asking it to do something unreasonable, and you won't know how to figure out why you're getting something you don't expect when its vision of your code is different from yours.

So I assume you started in assembly language and went up from there? All noobs should know the joy of having to write their loop control from scratch as separate opcodes before they're allowed to touch a foreach loop, never mind a map/reduce? Hell, assembly's too removed from the bare metal, noobs should be starting by poking hex values into memory with a monitor, right?

How many pros nowadays started out with the much-maligned Basic? Sure, it may have taught you some bad habits, but it was a nice simple toy you could play with easily. How many pros started with Logo? Here's a fun little sandbox that says "I am made of code, here is some of it, come fool around with it!". Some people will go no further than that. Some will start doing silly things with JS in their webpages, with Unity or Flash or some other JS game environment, maybe one kid who plays this game will go on to invent the hardcore cutting-edge language of 2040 that transforms the programming world of 2047 entirely.
posted by egypturnash at 7:53 PM on September 11, 2011


I have uncovered half a dozen major errors in published closed-source applications in my industry due to people not knowing how floating point math works. You need to know how ints and floats work if you are programming. If you don't, you will fuck things up and not know why.

...and therefore internal representation of primitives is relevant to people who are just starting to learn how programming works and what it can do? I'm sure you're a white-hot, bare-metal rockstar, but that doesn't make that idea any less pedagogically unsound.
posted by invitapriore at 8:00 PM on September 11, 2011


The problem with javascript isn't that it is too high level, it's that there are other far, far better languages out there. We're stuck with it because that's what's in all the browsers, and libraries like jQuery are improving it greatly, but damn I wish that browsers had standardized on something else.

"but if you know and respect the limits of its floating point numeric type, it's serviceable for most" is exactly what is wrong with it. You can work around all of its flaws, and yay we can do all this neat stuff, but other languages give you the cool stuff without needing to work around the flaws and crufty bits.

Of course it's too late now, the best hope I can see is an increase in other languages compiling down to JS, like coffeescript does.
posted by markr at 8:05 PM on September 11, 2011 [1 favorite]


I learned Logo first, then Basic, on an Atari 800 circa 1984. Didn't start to clue into what the heck RAM was until I figured out how to do fast blitting in QBasic around 1993, when I was using it to make video games for my friends. I have to say, spending my formative years programming in languages that hid "how the computer works" didn't seem to impair me too much. Nowadays, I do most of my coding in C++, including lots of grungy low-level stuff, and I'm pretty thrilled every time something like boost::function comes along to hide the inner workings all over again.

Talking about whether JavaScript's internal representation of numbers sucks is obscuring the point here -- which is that a new programmer can get a lot of mileage out of any reasonable language.

I'm grinning trying to imagine one of you guys scowling over the shoulder of an eager kid playing this game and saying "But it doesn't have proper ints!", or "This will all be meaningless once you find out about jQuery!"
posted by ubermuffin at 8:13 PM on September 11, 2011 [1 favorite]


With regard to teaching people programming in very high-level languages, this blog post by Bob Harper (CMU professor) is quite relevant; it documents his experience at teaching a class of freshmen with no programming experience how to code in ML. Also see the followup.

Summary: he believes that programming should really be taught with zero reference to anything but the language model. So no talk about hardware, or even compilers.
posted by destrius at 8:16 PM on September 11, 2011 [3 favorites]


"but if you know and respect the limits of its floating point numeric type, it's serviceable for most" is exactly what is wrong with it. You can work around all of its flaws, and yay we can do all this neat stuff, but other languages give you the cool stuff without needing to work around the flaws and crufty bits.

Specifically on the floating point issue: a developer who doesn't understand the nature and limits I'm talking about is going to get themselves in trouble in just about every popular language out there - Python and Ruby included - just as surely as with JavaScript.

On general flaws and crufty bits: I am interested to hear about these far, far better languages out there that I'm unfamiliar with that apparently don't have them.

Of course it's too late now, the best hope I can see is an increase in other languages compiling down to JS, like coffeescript does.

JavaScript itself is far from fixed in stone, and doesn't face many challenges that aren't shared by other languages when it comes time to evolve them (the Python 2-3 transition is going really swiftly and smoothly, right?). If there is one, it's that certain old runtimes tend to persist in the form of un-updated browsers, and this isn't a problem that would have been avoided if another language had been designed/picked.
posted by weston at 8:42 PM on September 11, 2011


Apropos programming:

"The realization came over me with full force that a good part of the remainder of my life was going to be spent in finding the errors in my own programs." -- Maurice Wilkes, 1949

Still true 62 years later.
posted by CheeseDigestsAll at 8:43 PM on September 11, 2011 [4 favorites]


I've never liked getting into low-level nitty gritty detail, but it's interesting to note that The Art of Computer Programming is written at an (abstract) assembly language level. And one can hardly accuse Knuth of being a crappy computer scientist.

Of course TAOCP is not for beginners either. Should CS 101 type classes go into the details of type representations, etc.? No. (Though floating point errors honestly should be hammered home from the outset. Unless you expect people to never work with floats.) But they are still important considerations, and people will still care about them as long as computers are around.

The whole discussion reminds me of ActiveRecord programmers who think they don't need to know anything about SQL. And then they can't figure out why their queries take fucking forever to run whenever they get slightly more complicated than searching by primary key.
posted by kmz at 9:18 PM on September 11, 2011


I'm curious about the intersection of the set of people who talk trash about language X, and the set of people who use, on a daily basis, a valuable product implemented in X.

This is a broader phenomenon. See also: Intersection of people who refuse to believe in science and people who use computers to talk about it.
posted by bicyclefish at 9:50 PM on September 11, 2011


30 years programming here, and that "need to know the underlying machine" argument is bunk. The only reason you get a pass, localroger, is because it's precisely the sort of thing I used to say.

JavaScript is a great language, and I'm convinced one of the reasons I struggle with it is because once you've spent your time pushing things in and out of registers, lambda functions and foreach commands become conceptually harder to understand.
posted by Boris Johnson at 10:45 PM on September 11, 2011


The website seems down for now. I'd be amused if this was caused by someone making too big of a codegun.

Of course, now I'm going to be dwelling on the differences between universes, VMs and sandboxes.
posted by seanyboy at 12:08 AM on September 12, 2011


var magnum = self.gun;
var bullets = magnum.bullets;
var shotsFired = self.guessBulletsShot(bullets);

if (shotsFired === 5 || shotsFired === 6) {
    var isPunkFeelingLucky = punk.status.queryLuck([magnum.name, magnum.relativePower]);

    try {
        if (isPunkFeelingLucky) {
            magnum.shoot(punk);
        } else {
            punk.arrest(new Quip());
        }
    } catch (e) {
        // punk was lucky
        self.chase(punk);
    }
} else {
    self.gun.shoot(punk);
}

posted by seanyboy at 12:17 AM on September 12, 2011 [6 favorites]


30 Years programming here, and that "need to know the underlying machine" argument is bunk

Yeah this is pretty much the "get off my lawn" of computer science

These days there's actually quantum physics involved in microprocessor design just to keep the bits from bleeding into each other. There's exactly zero people on earth who could bang out some beautiful CoffeeScript/Lua/Ruby and actually walk you through how each line of code is turned by the computer into a physical reality of flowing electrons

Know what you need to do what you want to do. The rest is just the shoulders of all the giants you're standing on
posted by crayz at 1:20 AM on September 12, 2011 [1 favorite]


There are levels between not knowing the difference between ints and floats and knowing the exact electron flow through a CPU.
posted by kmz at 1:42 AM on September 12, 2011


I mean, don't get me wrong, starting with a high level language is awesome, but knowing at least a little mid level stuff is pretty essential too, down the line. There's always going to be people needed to maintain and develop the next gcc, libc, etc. And until a perfect language and perfect libraries exist, i.e. never, not knowing lower level quirks will bite you in the ass eventually.
posted by kmz at 1:56 AM on September 12, 2011


DU: If you think that knowing the difference between ints and floats is particularly useful or something anyone will care about in 100 years (as opposed to, say, fast ways to factor numbers), I think you are mistaken.

I think not. If you know the difference between an int and a float your code will run 100 times faster. There will, of course, be a market for ignorant programmers doing simple tasks.
posted by CautionToTheWind at 2:43 AM on September 12, 2011


You need to know how ints and floats work if you are programming in a language that differentiates between them.

Maybe. It's an artificial distinction that only happens to matter on today's hardware, with today's standards implemented with today's algorithms. Tomorrow will be different.
posted by DU at 3:00 AM on September 12, 2011


If you are convinced that you can't know anything about computer science without fully understanding the hardware, I suggest you read the classic Structure and Interpretation of Computer Programs.

Programming is about thinking clearly and understanding informational engineering. I know a person who implements doubly-linked lists, heap allocation and garbage collection on his index cards. Does he need floats and ints for that? No. But he does need algorithms + data structures.

Understanding the hardware is a form of optimization. While potentially of great good, doing it too early is as bad as doing any optimization too early.
posted by DU at 3:07 AM on September 12, 2011
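The index-card exercise DU describes translates directly into any high-level language: a doubly-linked list is pure structure, with no ints-vs-floats anywhere in sight. A minimal sketch, not anyone's production code:

```javascript
// A doubly-linked list: algorithms + data structures, zero hardware knowledge.
class Node {
  constructor(value) {
    this.value = value;
    this.prev = null;
    this.next = null;
  }
}

class DoublyLinkedList {
  constructor() {
    this.head = null;
    this.tail = null;
  }
  // Append a value at the tail, wiring both directions of the links.
  push(value) {
    const node = new Node(value);
    if (this.tail === null) {
      this.head = this.tail = node;
    } else {
      node.prev = this.tail;
      this.tail.next = node;
      this.tail = node;
    }
  }
  // Walk the links forward -- the part you could literally do on index cards.
  toArray() {
    const out = [];
    for (let n = this.head; n !== null; n = n.next) out.push(n.value);
    return out;
  }
}
```

Everything here is the "thinking clearly about structure" kind of knowledge; nothing depends on how the machine represents numbers.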


If you know the difference between an int and a float your code will run 100 times faster.
Your code *may* run 100 times faster. But your compiler may automatically swap between integer & FP types as needed.

On the other side of this argument, some older programmers may spend hours linking their dynamic language to custom string code written in C that runs noticeably slower than the dynamic language's own libraries. This happens regularly.

Here's what you should know as a programmer:

1) Don't assume your optimisations will be faster, or that they will be required. Rate the need for an optimisation, and test whether it makes a difference after the original unoptimised code has been written.

2) The readability of your code is more important than how fast (or even well) it runs.

I am jealous of the people who don't have that little egg of cognitive processing which tells them to optimise this like this & write the other thing like that.
posted by seanyboy at 3:11 AM on September 12, 2011 [2 favorites]
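Rule 1 above ("measure, don't assume") needs nothing fancier than a timer. A hypothetical harness - `timeIt` is an illustrative name, not any library's API:

```javascript
// Measure before believing an optimisation matters (rule 1 above).
function timeIt(label, fn) {
  const start = Date.now();
  const result = fn();
  console.log(label + ": " + (Date.now() - start) + " ms");
  return result;
}

// Only after both versions are timed on real inputs do you know whether
// the "optimised" one earns its reduced readability (rule 2 above).
const slowSum = timeIt("naive loop", () => {
  let s = 0;
  for (let i = 0; i < 1e6; i++) s += i;
  return s;
});
```

Time both candidates the same way on the same inputs; if the difference doesn't show up here, the cleverer version isn't paying its way.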


The difference in the internal representation of a float and an int is as big as it can be, and will stay with us for as long as our computers are based on math.

Misuse of the int and float tools can have dire consequences, as exemplified by the failure of American Patriot missiles to intercept Iraqi SCUD missiles in the first Gulf War, costing human lives.

It is exactly the kind of ignorance that dismisses these things as minor that propagates all the way up to the most important and expensive projects, directly making them fail.
posted by CautionToTheWind at 3:12 AM on September 12, 2011


If you know the difference between an int and a float your code will run 100 times faster.
Your code *may* run 100 times faster. But your compiler may automatically swap between integer & FP types as needed.


You misunderstood. The code runs faster exactly because the compiler does not need to swap between integer and FP types as needed.
posted by CautionToTheWind at 3:16 AM on September 12, 2011


CautionToTheWind: You know what a compiler does, right? Can you please explain how something that is optimised at time of compilation causes the program to run slower? Because either they've changed the meaning of the word "compiler" in the last twenty years or I'm really, really off base in my computing terminology.
posted by seanyboy at 3:40 AM on September 12, 2011


Plus that Patriot Missile error looks like it's a bit more complicated than "using an integer as a float"

money quote: Because time was measured as the number of tenth-seconds, the value 1/10, which has a non-terminating binary expansion, was chopped at 24 bits after the radix point.

We also have no idea what redefining the time as a float would have done. I assume that any expression of 1/10 of a second as a real value would have introduced the same error.
posted by seanyboy at 3:51 AM on September 12, 2011
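The chopping described in the money quote can be reproduced in a few lines. The 23-bit chop below reproduces the widely cited 9.5e-8 per-tick figure from the published analyses of the incident; the actual register format was more involved, so treat the exact numbers as an illustration rather than gospel:

```javascript
// 1/10 has no finite binary expansion; chop it the way the Patriot's
// register did and you lose ~9.5e-8 seconds on every tenth-second tick.
const chopped = Math.floor(0.1 * 2 ** 23) / 2 ** 23; // 0.09999990463256836
const perTickError = 0.1 - chopped;                  // ~9.54e-8

// After 100 hours of uptime, counting time in tenth-second ticks:
const ticks = 100 * 3600 * 10;
const drift = 0.1 * ticks - chopped * ticks;
console.log(drift.toFixed(2)); // ~0.34 seconds of clock error
```

A third of a second is hundreds of meters of position error at Mach 5 - which is roughly how the tracking gate ended up missing the incoming missile.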


Depressingly & ironically, arguing the technicalities of this point is actually making me think "people should really have a better grasp on the fundamentals." Which is completely the opposite of what I'm actually trying to argue.
posted by seanyboy at 3:54 AM on September 12, 2011


seanyboy:

I used the word compiler because I was paraphrasing the comment I was replying to, and I thought that it would detract from my point to go on and on about the proper use of the word compiler.

Plus that Patriot Missile error looks like it's a bit more complicated than "using an integer as a float"

Time in a system like this (tracking targets at Mach 5) should be a binary integer with full binary representation integrity. They used a float, with associated error, and the error compounded. This is one of those situations in which the use of integer or float is not only fundamental to the success of the system, but the choice of which one to use is very non-obvious and counter-intuitive.

Depressingly & ironically, arguing the technicalities of this point is actually making me think "people should really have a better grasp on the fundamentals." Which is completely the opposite of what I'm actually trying to argue.


Did you take the chance to learn something?

In the end it boils down to this: You need to learn the fundamentals before you can know which fundamentals you can ignore.

This thread proves that. In fairness to you, I wanted to eventually argue that many fundamentals and older ideas can be dismissed with little worry, but I got mired in the difference between int and float.

At least it is better than the last time I got laughed out of Metafilter for suggesting software was math. Metafilter doesn't do substance well.
posted by CautionToTheWind at 4:18 AM on September 12, 2011


@crayz: There's exactly zero people on earth who could bang out some beautiful CoffeeScript/Lua/Ruby and actually walk you through how each line of code is turned by the computer into a physical reality of flowing electrons

I beg to disagree: this is my job and there are hundreds of people in my field (computer systems architecture) who can do so as well.

OK there are hundreds of us, and thus probably not a thousand, but that is clearly more than zero.
posted by knz at 4:42 AM on September 12, 2011


Time in a system like this (tracking targets at Mach 5) should be a binary integer with full binary representation integrity. They used a float, with associated error, and the error compounded.

The problem here is not that they didn't know the difference between an int and a float but that they had to know the difference. There are many languages where you do not have to know, because the computer will Do The Right Thing (e.g. support rationals and bignums natively).

In any case, I think a twenty-year-old example of realtime programming of military hardware is a far cry from showing what all programmers need to know.
posted by DU at 4:45 AM on September 12, 2011


Well this is now us quibbling over minor things.

My reading of the links seems to imply that time was being held as an integer. I'm not sure what a "binary integer with full binary representation integrity" is, but my guess is that it's just a posh and confusing way of saying a standard computer-held integer.

The issue appears to be that 1/10 of a second was converted to a float, and then multiplied by the number of seconds. Rather than the other way round.

I used the word compiler because I was paraphrasing the comment I was replying to, and I thought that it would detract from my point to go on and on about the proper use of the word compiler.

That was my comment & you paraphrased me incorrectly. I'm tempted to believe that you assumed I meant a JIT compiler or a runtime engine when I said compiler, but I didn't. I meant a good old fashioned "Language A --> Machine Code" compiler. You're sort of saying now, "You would be correct if you knew what a compiler was and used it in a contextually correct manner, but you don't & you didn't. So you were wrong."
posted by seanyboy at 4:46 AM on September 12, 2011


@knz: Computer systems architecture is still mandatory here for computer science, so it is way way more than a thousand people who can do that.
posted by CautionToTheWind at 4:46 AM on September 12, 2011


You don't need to know how an engine works to drive a car. Knowing how that engine works doesn't make you a better driver, it makes you a better mechanic.

Knowing how a language works doesn't make you better at solving real world problems, it just makes you better at knowing the language. Sometimes there's overlap, but it's correlation v causation.
posted by blue_beetle at 4:50 AM on September 12, 2011


I beg to disagree: this is my job and there are hundreds of people in my field (computer systems architecture) who can do so as well.

Without doubt, any explanation will boil down to:

1) Beautiful CoffeeScript.
2) ????
3) Electrons.
posted by seanyboy at 4:56 AM on September 12, 2011


My reading of the links seems to imply that time was being held as an integer. I'm not sure what a "binary integer with full binary representation integrity" is, but my guess is that it's just a posh and confusing way of saying a standard computer-held integer.

The numbers of reality are represented in computers in zeroes and ones. There are several ways to represent them, and the fast ones are all inaccurate in one way or another. For example, 0.3333..., a human representation, does not really represent one third, so if you need to represent one third, you must choose or design a representation that will work, like keeping fractions as 1/3. That is binary integrity - that the numbers you need to use, when represented in binary, do not produce "non-terminating binary expansion(s)" (from the article). English is not my first language, but "binary integrity" is the most direct translation of the term I was taught.

By having an important precision parameter like time represented in an imperfect way, they accumulated errors that eventually proved fatal for dozens of military people. Furthermore, because the system had worked before when tracking Mach 2 missiles, it gave them a false sense of confidence, and seemed to support those who think of such matters in a less rigorous way. But when faced with Mach 5 missiles, the system's shortcomings became failures, and some people learned to care about zeros and ones.

About the compiler thing, yes you are right and I was wrong for the old and simplistic definition of compiler you provided afterwards. Not so for modern mixed JIT compiler and interpreter/VM environments. Doubly so for frontier-bending tools like Psyco.

2) ????

It's only ???? for some people. For others, number 3) is: profit!


In any case, I think a twenty-year-old example of realtime programming of military hardware is a far cry from showing what all programmers need to know.
posted by DU


The fact that these mistakes have been happening for at least twenty years and they are still, twenty years later, being minimized, hardly proves that my concerns are misplaced.
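
A minimal sketch of the "binary integrity" approach described above (hypothetical JavaScript, not actual firmware): hold time as an integer count of tenth-seconds, which doubles represent exactly, and convert to human units only once at the end.

```javascript
// Integer tick counts are exactly representable, so no error accumulates.
let ticks = 0;
for (let i = 0; i < 36000; i++) {  // one hour of tenth-second ticks
  ticks += 1;
}
const seconds = ticks / 10;  // a single exact conversion: 36000 / 10
console.log(seconds);        // exactly 3600
```

Accumulating 0.1 per tick instead would drift away from 3600; counting ticks keeps the arithmetic exact until the final conversion.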
posted by CautionToTheWind at 6:04 AM on September 12, 2011


@crayz: There's exactly zero people on earth who could bang out some beautiful CoffeeScript/Lua/Ruby and actually walk you through how each line of code is turned by the computer into a physical reality of flowing electrons

I beg to disagree: this is my job and there are hundreds of people in my field (computer systems architecture) who can do so as well.

I beg to disagree with your disagreement. I can tell you in abstract terms (combustion, heat engines, Carnot cycles) how a car engine works, but that doesn't mean I could repair one. This is a personal limitation: there are undoubtedly mechanics that know every part of particular engines, who could strip and reassemble them with their eyes closed.

A computer, on the other hand, is an engine with a billion parts, a machine of such staggering complexity that it can only be approached through layers of abstractions, layers which individuals spend their careers understanding. I don't doubt that you (or I, or any undergrad who has taken a systems course) could outline these layers, but to even understand the transformation of code to electrons on this abstract level would require you to be a serious expert in javascript interpreters, operating systems, and processors. To concretely understand it, to literally map a line of code to electron flows on the transistor level, is beyond human comprehension.
posted by Pyry at 6:08 AM on September 12, 2011 [1 favorite]


Who do you think designs the electron flows and the transistors and the CPUs? Evolution??
posted by CautionToTheWind at 6:11 AM on September 12, 2011


I know a person who implements doubly-linked lists, heap allocation and garbage collection on his index cards. Does he need floats and ints for that? No. But he does need algorithms + data structures.

Most programmers have that stuff abstracted away too. Really, if you're an assembly line programmer, you're just connecting the dots between libraries that hide away things both high and low level from you.

I guess if you're as smart as Dijkstra, then you can be a computer scientist that doesn't use computers. But then again I don't see a lot of people formally verifying their programs either.

Who do you think designs the electron flows and the transistors and the CPUs? Evolution??

There is a lot of machine assistance in chip design, actually.
posted by kmz at 6:23 AM on September 12, 2011


a person who discriminates against people of other races is a racist
a person who discriminates against people of (the) other sex(es) is a sexist
a person who discriminates against people of other classes is a classist
a person who discriminates against people who use other programming languages is ...?
posted by LogicalDash at 6:27 AM on September 12, 2011


Programming languagist, duh.
posted by kmz at 6:32 AM on September 12, 2011


They are designed like any other large modern projects: by many people, working on modular parts, with heavy aid from computers. There is not some mentat out there who holds a billion transistors simultaneously in his mind and does the layout by hand.
posted by Pyry at 6:34 AM on September 12, 2011 [1 favorite]


Every aid the computers give was the result of a human thinking it and implementing it in software.

It's humans all the way down.
posted by CautionToTheWind at 6:39 AM on September 12, 2011 [1 favorite]


"Some people need to know these things" != "if you know them you will be better at what you do" != "we need to teach these things to kids in their very first programming lesson"
posted by DU at 6:41 AM on September 12, 2011


Computers do not run on magic. Modern computers may have a billion parts but a whole lot of those parts are redundant, and even if machines were used to do the low-level design that design was done on principles defined by humans. In reality, it is the outline of those functions that is important. If you know how a simple computer works, you will know what you should and should not reasonably expect a more complex computer to be able to do. If you do not know how the computer works at all, you will wonder why you suddenly ran out of memory when you made a minor change.

Computers are finite and there is no computer fast enough or with enough memory that you cannot choke it to death badly coding what should be a relatively simple algorithm.

A compiler cannot tell whether you need a float or an integer. There are, for example, a lot of good reasons for using the circular nature of 2's-complement integer math which rolls over instead of overflowing; floats don't do that and a compiler will not have any way of knowing you want it that way. There are good reasons for taking the very definite way integer division is truncated instead of the much fuzzier way it is rounded with floats. If there is a possibility that you needed a float and the compiler guessed integer, it will presumably have to check each integer operation it ever does to make sure it doesn't have to revert to float; this will destroy the performance you normally get with integer math. And if you don't know yourself which type you need you shouldn't be coding.

When I learned programming, nearly all programming books started with a primer on binary math. And for all the changes in computers in the last 30 years, they still run on binary math. Floating point math still has weird limitations and gotchas and runs way slower than integer math even with the best coprocessors and parallel architectures. To treat data the way Javascript (and to be fair a number of other languages) does, where there isn't even a way to definitely tell the compiler that a value is an integer or a float or even a goddamn string, is stupid beyond belief.
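
JavaScript has no separate integer type, but its bitwise operators coerce to 32-bit two's complement, which makes the rollover and truncation behaviors described above visible even from a double-only language (an illustrative sketch):

```javascript
// Bitwise ops work on 32-bit two's-complement values, so the
// wraparound is observable: it rolls over instead of overflowing.
const max = 2147483647;          // 2^31 - 1, the largest int32
console.log((max + 1) | 0);      // -2147483648

// Division always yields a double; integer truncation must be explicit:
console.log(7 / 2);              // 3.5
console.log(Math.trunc(7 / 2));  // 3, truncated toward zero
console.log(Math.trunc(-7 / 2)); // -3, unlike Math.floor(-3.5) === -4
```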
posted by localroger at 6:47 AM on September 12, 2011


When I learned programming, nearly all programming books started with a primer on binary math.

And there you have the reason that "kids today" may not learn Javascript. Instead, we should make a 10 year old who wants to make video games put his nose to the grindstone and chop wood for 3 years at the beginning of his apprenticeship. Eventually he may look at a keyboard but mayn't touch it until he's hand assembled a CPU from string and glue.
posted by DU at 6:50 AM on September 12, 2011


Also, it is amusing to me that the people supposedly arguing for more education are actually arguing for stagnation from a point of ignorance. Check out the exact/inexact numerical tower that Scheme uses and tell me that isn't pretty neat. It also makes ints and floats obsolete from the POV of the Scheme user.
posted by DU at 6:58 AM on September 12, 2011 [1 favorite]


Yes, god forbid we actually empower 10 year olds to do things properly, rather than let them stumble into failure by themselves.

The Long, Dismal History of Software Project Failure
posted by CautionToTheWind at 6:58 AM on September 12, 2011 [1 favorite]


Really, if you're an assembly line programmer, you're just connecting the dots between libraries that hide away things both high and low level from you.

I don't think it has much to do with being an assembly line programmer - I'd say a big difference between a bad dev and a good dev is the ability to successfully google for those existing libraries instead of writing them yourself. Libraries aren't a weakness.
posted by soma lkzx at 6:59 AM on September 12, 2011


Who do you think designs the electron flows and the transistors and the CPUs? Evolution??

Yes. (for small definitions of yes).

The fascinating part is they evolved a circuit on an FPGA that used an impossibly small number of gates to achieve the circuit's stated purpose. The circuit worked only on that specific chip and relied on extremely subtle electrical interactions that aren't supposed to happen between unrelated gates.

</derail>

posted by fragmede at 7:02 AM on September 12, 2011


Teaching a 10 year old Javascript does not prevent them learning other languages. My 12 year old just recently devoured a book on Java and another on Python. He's previously been messing around with Scheme. I have a Javascript book on the way for him. I somehow doubt that all that other knowledge is going to fly out of his head when he learns Javascript and I also doubt that, had he started with JS, he'd be incapable of grasping ints vs floats if it ever became important.

That said, I will concede that my 12 year old isn't designing ballistic missile control software. If he were, obviously the first thing I'd teach him is binary.
posted by DU at 7:02 AM on September 12, 2011


a big difference between a bad dev and a good dev is the ability to successfully google for those existing libraries instead of writing them yourself.

The difference between a bad dev and a good dev is the ability to know which libraries you should google for and when you should write them yourself.

Libraries aren't a weakness but they can be a liability. I've seen many projects founder on libraries that only do 99% of what they need, so they keep adding libraries and adding them, and pretty soon they are doing almost no development except for monumental build scripts to compile all that stuff, most of which they aren't using.

And recently, I spent 3 or 4 days battling a GUI widget to get it to do what I wanted. Eventually I gave up and spent 3 hours making one from scratch that worked perfectly the first time. Less time, less code, better functionality.
posted by DU at 7:06 AM on September 12, 2011


I also doubt that, had he started with JS, he'd be incapable of grasping ints vs floats if it ever became important.

The problem is not that he will be incapable of grasping ints vs floats. The problem is that he won't know when it becomes important.
posted by CautionToTheWind at 7:07 AM on September 12, 2011


I think we may be talking past each other in the sense that I too agree that a lot of computer science concepts can be learned after one has programming experience. It's just that int vs float is a bad example of it.
posted by CautionToTheWind at 7:09 AM on September 12, 2011


No, int vs float is a great example of it. It's the kind of low-level, inside-baseball thing that, even if it is of interest to the child, is no real help in actually writing a program a child is interested in. I would no more start with that than I would start with an explanation of, say, wood grain to a child who just wants to build a birdhouse and a go-kart. There's plenty of time for optimization later. Get them building first, then teach the finer points.
posted by DU at 7:16 AM on September 12, 2011


DU if your son likes computers, don't shy away from teaching him stuff like int vs float. His mind might be more eager to absorb that than you might think. At least I was, and I wish I had a parent who could provide the guidance you give.
posted by CautionToTheWind at 7:19 AM on September 12, 2011


I'll definitely tell him about ints vs floats, but only when that information will be the answer to some actual problem he's facing. Otherwise it's just easily-forgotten theory.

That said, he probably has a firmer grasp of computer architecture than most 12 year olds after both of us read this awesome book.
posted by DU at 7:32 AM on September 12, 2011


Int vs. float is not like wood grain in carpentry. It's like the difference between a hand, jig, circular, and table saw. And while you can build a birdhouse without understanding the difference (you might not want junior playing with the power tools just yet) you damn well do need to know the difference and how to use them to build anything practical.
posted by localroger at 7:35 AM on September 12, 2011


When I was a kid, I got a huge kick out of figuring out how things worked in detail, particularly knowing it better than most adults did. I meant it in that sense, not in a utilitarian sense.
posted by CautionToTheWind at 7:39 AM on September 12, 2011


localroger, I think you should try programming in an untyped language sometime. You can build a lot of cool stuff in !C.
posted by DU at 7:41 AM on September 12, 2011


You can describe the difference between ints and floats by referring to machine epsilon; i.e. that a floating point value is always going to be imprecise. There's nothing about that statement that needs to refer to hardware or electronics; it's just maths. It's like telling a child that 22/7 is not really pi, but close enough for certain use cases.

It's really just about knowing the details of the types you are using in programming; in the same way that it's useful to know the complexity of certain algorithms or data structures that might be provided in a standard library, so you know how to use them effectively. The fact that the limitations were imposed by hardware becomes irrelevant once you specify the interface; it's basically just another form of abstraction and modularity.

So, does a person who's just starting to learn programming really need to care about all this just yet? No, they should start coding with an API that consists of simple and straightforward data types, so they can concentrate on learning how to manipulate those types to yield useful programs. Once they master that they can go on to learning more complicated types, and learning about the differences between floats and ints will come perfectly naturally and without any need to worry about electrons or quantum physics.
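
The machine-epsilon view translates directly into code. A small JavaScript illustration (the scaled-tolerance comparison here is one common choice, not the only one):

```javascript
// Exact equality on floats is fragile:
console.log(0.1 + 0.2 === 0.3);  // false
console.log(0.1 + 0.2);          // 0.30000000000000004

// Number.EPSILON is the gap between 1 and the next representable double.
// Compare within a scaled tolerance instead of exactly:
const approxEqual = (a, b) =>
  Math.abs(a - b) <= Number.EPSILON * Math.max(1, Math.abs(a), Math.abs(b));
console.log(approxEqual(0.1 + 0.2, 0.3));  // true
```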
posted by destrius at 8:18 AM on September 12, 2011 [2 favorites]


you damn well do need to know the difference and how to use them to build anything practical

IEEE doubles are capable of exactly representing all the values of 32 bit ints (and more!), so other than speed differences*, there is no real practical reason to not use them for everything. If you want to force integers, that's what the ceiling, floor, and round functions are for, and they have the benefit of making your intention explicit, rather than relying on the implicit truncation behavior of the data type.

* Probably 10 year olds aren't doing the type of heavy numeric calculations where this would matter
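
A quick JavaScript check of the exact-integer range of doubles (illustrative):

```javascript
// Doubles represent every integer exactly up to 2^53, far past the int32 range:
console.log(Number.isSafeInteger(2 ** 32));  // true: all 32-bit ints fit exactly
console.log(Number.isSafeInteger(2 ** 53));  // false: precision runs out here
console.log(2 ** 53 + 1 === 2 ** 53);        // true: the next odd integer is lost

// Making integer intent explicit, as suggested above:
console.log(Math.floor(7.9));  // 7
console.log(Math.round(7.5));  // 8
```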
posted by Pyry at 8:21 AM on September 12, 2011


Your favorite high-level language sucks?

I like JavaScript and I like jQuery. I use both to solve real, practical problems. It must be a "real" language because I'm solving "real" problems with it and occasionally making "real" money...right? (I also write in C++ and know the difference between an int and a float.)

Also: C Robots. C++ Robots. Bug Brain too, but the site seems to be down.
posted by ostranenie at 9:10 AM on September 12, 2011


Javascript is actually a pretty decent high-level language, but not one I would necessarily say is a great intro teaching one. Javascript actually can get rather squirrelly with types if you're not careful. 2 + "3" doesn't give you what you might naively expect.
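
For anyone who hasn't been bitten yet, a short sketch of the coercion rules in play:

```javascript
// '+' is overloaded: if either operand is a string, it concatenates.
console.log(2 + "3");  // "23", not 5

// Other arithmetic operators have no string meaning, so they coerce to number:
console.log(2 * "3");  // 6
console.log("5" - 0);  // 5 (the "subtract zero" trick mentioned downthread)

// Explicit conversion avoids the ambiguity entirely:
console.log(Number("3") + 2);  // 5
```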
posted by kmz at 9:27 AM on September 12, 2011


2 + "3" doesn't give you what you might naively expect.

THIS. After I discovered this behavior I would have never written another line of Javascript if it weren't the default language for web programming. This is NOT something cool, nifty, or convenient, it is actually a cruel joke and if there is any justice whoever thought this would be a good idea is tied to an anthill.

Also, how to get into deep, deep trouble with IEEE doubles: In binary, 1/10 is an infinitely repeating binal as 1/3 is in decimal. This means you can indeed get 1 / 10 * 10 = 0.99999 out of a binary floating point library. This doesn't happen with the single precision libraries because they are SO imprecise that the authors knew they needed to round off after every operation. However, the double precision libraries don't; most of these libraries were written in the 1970's when lots of users paid for their computer usage by the machine cycle, and so the double precision libraries assume you will want to do your own rounding when you're done instead of paying to do it after each step. So, in double precision, operations that would give you 1 give you 0.9999...

I have seen this bite several RL programmers in the ass, hard. My favorite was the beta unit of a then-state-of-the-art programmable scale indicator which I was asked to evaluate. We hooked it up to a 100 lb capacity base and set it up to count by 0.2. Imagine our surprise to see the occasional odd number appear in the LSD. Turns out that the generally very smart guy who wrote the firmware (I met him, and he did a great job in many other areas) had never been told about this little rounding problem. It took him almost a year to chase all the rounding errors out of that firmware -- and this was after it had been approved.

In the years since 1995 there has been a gradual shift in scale firmware from using integer counts as the basis of weight calculation, converting to human units only for display, to using floating point math with human units throughout the metrology code, and while it hasn't killed anybody (that I know of), as CautionToTheWind warned above it has created a growing library of weird bugs and unpredicted behaviors which you would never have seen in earlier generation equipment.

This is why, no matter how abstract your working environment might be, you need to know how the computer is implementing your commands. Programming a computer without knowing how binary math works is like driving a car without knowing how to read street signs. Yeah, you might get where you want to go, but there will be lots of mayhem on the way even if you do.
posted by localroger at 9:50 AM on September 12, 2011 [2 favorites]


You know there's a lot of non-firmware computing to be done, right? Obviously programs that are close to their hardware are going to have to understand that hardware better. And, also obviously, if you are measuring things then rounding is going to be an issue. But the farther you are from hardware and precision, the less critical the details usually are.

This is why, no matter how abstract your working environment might be, you may sometimes need to know how the computer is implementing your commands. But you don't need to learn it on day 1.
posted by DU at 10:47 AM on September 12, 2011


Dude, a major manufacturer nearly released a scale that would spit out the occasional odd number when counting by zero point two. That is not to-the-metal firmware programming, that's counting by two.

If they had programmed it to the metal, counting from integer 0 to nMax and converting to pounds as a last step as more primitive indicators did (and generally without floating point math at all, the display decimal being forced), it would have worked. It was because they didn't program it to the metal and didn't know how it worked that it was very visibly and embarrassingly broken.

And one of the reasons it reached me in this state is that by their nature, the kind of bugs you get from this sort of ignorance are pernicious and very hard to duplicate, because they may only occur on certain input values or timing alignments. Like buffer overruns in C, that's a pretty cruel thing to put in the path of a novice programmer.

You don't need to teach people starting out the entire IEEE float layout. But it would be very helpful to let them know these are not real numbers in the math class sense, they have limits, and they can do stupid things if you get too close to those limits. Finite integers are a little weird by math class standards but they are much more predictable, as well as faster. Ultimately the goal of learning to program is to learn to program, and I'd much rather learn something useful like the way integers roll over so you can do time interval calculations without worrying about overflows or reset points than that you have to do some kind of braindead shit like subtracting zero from a value to make sure it's not going to be treated like a string by the + operator.
posted by localroger at 11:09 AM on September 12, 2011


I reiterate: You have successfully made a case for some people sometimes needing to know ints vs floats. You have not yet successfully made a case for making 10 year old kids who are wondering what this computer programming thing is all about and just want to try it out slog through details to get there.
posted by DU at 11:21 AM on September 12, 2011 [1 favorite]


Computer systems architecture is still mandatory here for computer science, so it is way way more than a thousand people who can do that.

Yeah, I had to take a class like that and passed it with a B+ or so, but unless their work actually involves using that knowledge, I can guarantee a majority of those students are only going to remember things very, very broadly and hazily.
posted by juv3nal at 11:47 AM on September 12, 2011


I've had great fun with Robocode. It's Java-based, and is essentially simultaneous turtle graphics programming with guns. My kids (8 & 11) have enjoyed watching me program in it, but when they want to express themselves using computers, their tool of choice is still MIT Scratch. All good stuff. Looking forward to helping nudge them along as they're ready.
posted by dylanjames at 12:06 PM on September 12, 2011


You have not yet successfully made a case for making 10 year old kids who are wondering what this computer programming thing is all about and just want to try it out slog through details to get there.

I don't think anybody is arguing that. (At least I hope not.) But you also said this: If you think that knowing the difference between ints and floats is particularly useful or something anyone will care about in 100 years (as opposed to, say, fast ways to factor numbers), I think you are mistaken.

Which is rather different.
posted by kmz at 12:28 PM on September 12, 2011 [1 favorite]


you damn well do need to know the difference and how to use them to build anything practical.

It's nice to know that all the automation code I've written in my career, making machines do things that I'd previously had to do by hand, and making them do it faster and in less mistake-prone manner, wasn't "practical".
posted by asterix at 12:55 PM on September 12, 2011 [1 favorite]


for some people sometimes needing to know ints vs floats

For example, people who need to tell the machine to count by two. Kids never do anything that advanced, I suppose.
posted by localroger at 1:05 PM on September 12, 2011


asterix: if I am reading between the lines right it seems you are programming in an interesting little sandbox which few people outside of industry ever encounter, where the gods of the interpreter really do care about catching your mistakes sensibly and preventing the 3AM call from you to them to ask why some crazy thing happened. The companies that build these environments spend a lot of time and energy to make sure they work right, and that does not translate to something like Javascript which just goes WHEE DOUBLE FLOATING POINT and plugs in the libraries without warning you of their limitations.
posted by localroger at 6:31 PM on September 12, 2011


localroger: This is why, no matter how abstract your working environment might be, you need to know how the computer is implementing your commands.

Rather, you need to know what the specifications of the types you are using are. As long as you understand the interface, you shouldn't need to know anything about the implementation. If there is something that happens due to implementation that is not reflected in the interface, then that just means the interface is not specified accurately.

I mean, imagine if in 100 years people use some kind of biology-based computing infrastructure instead, and instead of bits data is stored in base-4. But people still use classic IEEE floating point, and a compatibility layer is written that translates all the behaviors of our classical floats into the underlying machine. Does that mean that we now have to learn about this compatibility layer as well, since it's part of the implementation? Probably not, because the whole point is that the underlying machine is abstracted away. That's what CS is all about: abstraction.

However, if you're talking about verifying the correctness of the compatibility layer, or perhaps trying to find security issues related to the layer, then yes, you need to understand it as well. But that is very different from learning how to program above it.
posted by destrius at 6:52 PM on September 12, 2011


if I am reading between the lines right it seems you are programming in an interesting little sandbox which few people outside of industry ever encounter, where the gods of the interpreter really do care about catching your mistakes sensibly and preventing the 3AM call from you to them to ask why some crazy thing happened. The companies that build these environments spend a lot of time and energy to make sure they work right, and that does not translate to something like Javascript which just goes WHEE DOUBLE FLOATING POINT and plugs in the libraries without warning you of their limitations.

I'm trying to think of what you could be referring to, and failing utterly. The "interesting little sandbox" I've done the vast majority of my work in is the Python interpreter (although I've worked in Java and Perl too). I'm pretty certain that if I were to call anyone at 3 AM asking why some crazy thing happened in my code, they'd tell me to RTFSource and hang up.

None of that changes the fact that I've done a lot of very practical programming, and I don't know the first thing about how binary math works.
posted by asterix at 6:59 PM on September 12, 2011


The "interesting little sandbox" I've done the vast majority of my work in is the Python interpreter

OK, I got the idea you might be working in a PLC environment or something like that.

None of that changes the fact that I've done a lot of very practical programming, and I don't know the first thing about how binary math works.

Your code would probably be a lot more reliable and efficient if you did. But yes, it is a holy grail for the gods of the loose abstraction to free you from knowing a bunch of shit I thought necessary when I was 16, so they've probably made it possible for you to function without knowing that shit. If you knew that shit, you would feel ripped off.
posted by localroger at 7:58 PM on September 12, 2011


Assembly is the Godwin of programming discussions.
posted by monocultured at 1:33 AM on September 13, 2011 [2 favorites]


@PyRy I don't doubt that you (or I, or any undergrad who has taken a systems course) could outline these layers, but to even understand the transformation of code to electrons at this abstract level would require you to be a serious expert in javascript interpreters, operating systems, and processors. To concretely understand it, to literally map a line of code to electron flows at the transistor level, is beyond human comprehension.

1) What you learn in undergrad is only the basics; they support the experience you build through thorough investigation into all matters of system design. Gaining a full understanding of entire systems takes many more years of careful and thorough study, but nothing that a dedicated career in the field won't provide.

2) Mapping lines of code to electron flows at the transistor level is not beyond human comprehension. As I pointed out, this is the job of some people out there, whether you believe it or not. What may surprise you, however, is that we usually do not draw the full picture by hand. To understand why, look at the big picture: when and why would we need to map high-level software to signal patterns in chips? There are two common scenarios in the field: 1) education (teaching others to do our job) and 2) troubleshooting bugs in chip designs that appear in silicon even though they did not come up in pre-fab simulations or FPGA prototyping. Most bugs in chips are caught via automated testing during manufacturing, but unfortunately manufacturing tests are not complete (same as unit tests in software -- tests that don't fail are no proof that there is no bug). (Other scenarios exist as well.)

In the case of troubleshooting, consider the situation with new chip designs. When a new chip with a new feature is designed, this often requires compiler changes in code generation to support the new feature or new instruction scheduling. This means that an error in the overall behavior of a chip prototype can be a chip design bug, a manufacturing error, an assembler error (invalid translation from assembly to machine code), a compiler error (invalid assembly), a timing-related bug due to a specific OS scheduling policy, etc. To properly distinguish these failure modes, a thorough understanding of all the software and hardware components is required, as well as a clear understanding of the cause-effect relationship between specific software patterns and low-level circuit behavior.

Granted, the top-down thorough understanding you are talking about is only called for in very few situations, hence my assessment that there may be only a few hundred people worldwide who can fully "get" it. Still, it's possible.
posted by knz at 1:49 AM on September 13, 2011


It is a sign of a declining civilization that instead of saying "I don't know", we say "It can't be known", for things which had to be known to even make the very tool we are using to post it online.
posted by CautionToTheWind at 3:13 AM on September 13, 2011


I think there are competing definitions of "known" in place.

I believe one group is saying that the entire state of the system from higher level programming down to electron flows can't be contained in the short term memory of the human mind.
I think the other group is saying that of course this information exists and people use it regularly.

The system was designed by humans; it can be understood by humans. I mean, the humans who are designing and troubleshooting computer systems.. are using computers, they're cyborgs.. they don't have to hold the gestalt of the whole thing in their heads.. they can isolate and work on one part of it at a time while keeping the bigger picture in sight.

And knowing something about binary math and circuits and suchlike is really important if you want to get out of the software playground and work with hardware or anything involving low-level interaction with sensors that may not have libraries that someone else has already built.
posted by TheKM at 3:39 AM on September 13, 2011 [1 favorite]


I believe one group is saying that the entire state of the system from higher level programming down to electron flows can't be contained in the short term memory of the human mind.

Yes; the human mind is limited, so it can't directly understand the system as a whole in the same way that it might a car engine or a watch. Instead, the system has to be approached through abstractions, and the right level of abstraction depends on the task.

And knowing something about binary math and circuits and suchlike is really important if you want to get out of the software playground and work with hardware or anything involving low-level interaction with sensors that may not have libraries that someone else has already built.

Sure, if you're directly working with hardware that can be useful (although projects like the Arduino and environments like LabVIEW make this easier). But otherwise it can be useful to move up a level of abstraction from ints/floats to just numbers, and kids aren't going to be permanently stunted by starting at that level.

Binary and two's complement aren't even really fundamental, they're just implementation details; the real underlying math is things like complexity theory, the lambda calculus, and computability.
posted by Pyry at 8:15 AM on September 13, 2011


I fully agree that it is possible to understand the entire state of the system in that way, and that it can be very useful and even critical in certain areas. I guess what I'm arguing against is that such knowledge is always going to be useful to any programmer, whether they are working on low level embedded systems or writing web apps.

In some sense, it's not that such knowledge "can't be known", but more that it "shouldn't be known", because knowing details of the implementation may lead you to exploit it in ways that break abstraction, and thus lead to all sorts of problems. I'm not saying that people should never learn about the underlying implementation, of course; rather, that while they are coding at a level above it, they should try to "forget" as much of it as possible. And if they possess no knowledge at all of the underlying implementation, ideally this should not hamper them in any way.
posted by destrius at 8:21 AM on September 13, 2011 [2 favorites]


Careful people, if the epistemology thread touches the practical binary math thread, there's likely to be a collapse of the bit bucket state vector.
posted by localroger at 9:53 AM on September 13, 2011 [1 favorite]


Your code would probably be a lot more reliable and efficient if you did.

You're going to have to spell this out for me a bit more. Given that none of the code I've actually written involves binary math (at the level I'm writing it, not in the underlying libraries), the only way this could be true is if I were to reimplement the underlying functionality my code depends on. And I find it highly unlikely that I, as a lone developer, would be able to make it more reliable and efficient than the large distributed team of people who wrote the original functionality.

But yes, it is a holy grail for the gods of the loose abstraction to free you from knowing a bunch of shit I thought necessary when I was 16, so they've probably made it possible for you to function without knowing that shit. If you knew that shit, you would feel ripped off.

Or I could just decide that there are other things of value in code than reliability and efficiency (or that I'm happy with the level of reliability and efficiency I've currently got), and be willing to accept the tradeoffs.
posted by asterix at 10:39 AM on September 13, 2011


asterix: All of your code involves binary math. This has just been hidden from you by the development system you use.

Very, very often, that's OK. You can treat the system as if those binary images are real numbers and nothing will go wrong. From your comments it sounds like you're doing machine controls, and those are good candidates; they don't tend to accumulate sums forever, or have inputs or control values that blow up to extremely small or large values, and rounding errors don't bother them. You probably don't use algorithms that require a lot of iterative steps, possibly causing a performance problem, to get a result.

It is a matter of luck that you have never run into a situation where the fine behavior of floating point math affects you. I've known a few situations where this happened IRL, and I've been doing this for 25 years. You can get lucky and never have an FP mishap, but the thing is that if you don't know how FP works, when you do have one you won't know where the hell it came from. The very smart, well-educated guy who (unlike me) has a degree had no idea where the odd numbers were coming from until I told him. He subsequently found many more bugs, some very nasty, which could have been triggered in rare but possible circumstances.
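
A concrete taste of the kind of surprise I mean, in JavaScript since that's the language at hand (every JavaScript number is an IEEE 754 double):

```javascript
// All JavaScript numbers are IEEE 754 double-precision floats.
// Most decimal fractions (0.1, 0.2, ...) have no exact binary
// representation, so arithmetic picks up tiny rounding errors.
var sum = 0.1 + 0.2;
console.log(sum);          // 0.30000000000000004
console.log(sum === 0.3);  // false

// Integers stay exact only up to 2^53; past that, adjacent
// integers collapse into the same representable value.
console.log(Math.pow(2, 53) + 1 === Math.pow(2, 53)); // true
```

Neither line is a bug in the interpreter; it's the defined behavior of doubles, which is exactly why you want to know how FP works before it bites you.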

You really don't want bugs like that in your code. There is nothing more frustrating than being told by a customer or tech that your code is doing something every fiber of your being says should be impossible -- and being proven wrong about that.

There are also some things that are a lot easier to do with pure 2's complement integer binary math, and you probably don't even know how much easier they are because you're not familiar with it. For example, you can calculate time differentials based on a continuously updated counter without worrying about the counter maxing out or rolling over; 2's complement is "circular", so as long as your time delay doesn't approach the rollover period, you can just compute (now count - start mark count) and get the delay. You don't have to check whether midnight or something happened in between, or tell your customer they will have to go to a 64-bit processor when UNIXTIME reaches 2^31. It's not just 100 times faster; in many cases, when you know what you're doing, it's easier and works better.

I participate on another forum dedicated to an embedded controller which has a FP library, but few of the gurus use it. As one of them occasionally posts, and I concur: "If you don't know how to solve your problem with integer math, you don't know how to solve your problem."
posted by localroger at 5:42 PM on September 13, 2011


Addendum: By "knowing how FP works" I don't mean you need to be able to write your own FP drivers (though I have done that myself). It's no great hardship to learn the difference between an integer and a float and the strengths and weaknesses of each; if you are at all interested in mastering the tools of your trade the hour or two this would take should be of great interest.

I find it incredibly frustrating in an environment like javascript or, hell, even VB, that I don't have the option of doing a simple single instruction CPU add that will roll over instead of throwing an exception if the carry flag is set. This makes some operations that are trivially easy in assembly nearly impossible in these high level languages.
posted by localroger at 5:52 PM on September 13, 2011


I find it incredibly frustrating in an environment like javascript or, hell, even VB, that I don't have the option of doing a simple single instruction CPU add that will roll over instead of throwing an exception if the carry flag is set. This makes some operations that are trivially easy in assembly nearly impossible in these high level languages.

That's a good thing, really; integer overflow bugs can be very nasty. And it still is trivially easy in a high level language: just use a modulo operator, which is the underlying math behind rolling over. Your only complaint would be that it supposedly runs slower than your single instruction, but if the compiler is smart enough it would probably be able to optimize away the modulo anyway.
posted by destrius at 6:19 PM on September 13, 2011 [1 favorite]


just use a modulo operator

This works for unsigned arithmetic, in the positive direction. It doesn't work in reverse or if you want to use (or mix, as you can with 2's complement) signed and unsigned values. There are really a lot of cool tricks you can pull with non-exception 2's complement integer math that just don't work, at all, when overflows trigger errors.

And the lack isn't a good thing; there are situations where this is what you want and it's extremely difficult to work around all the "protections." What should be a single CPU instruction balloons to a 20 line subroutine (I've done this. Javascript. Ugh).
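
To make the signed case concrete in JavaScript:

```javascript
// JavaScript's % takes the sign of the dividend, so it isn't a
// true mathematical modulo -- which is why the modulo trick
// breaks down once signed values enter the picture.
console.log(-3 % 8); // -3, not the 5 a mathematical mod would give

// The |0 coercion, by contrast, reinterprets the low 32 bits as
// a signed two's-complement value, giving a genuine wrapping add:
console.log((0x7FFFFFFF + 1) | 0); // -2147483648, the signed wrap
```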
posted by localroger at 7:11 PM on September 13, 2011


It is a matter of luck that you have never run into a situation where the fine behavior of floating point math affects you

Or the problem domain you're working in.

There are two ways in which a developer can be affected. One is using floats as part of a model without understanding and accounting for their behavior. The other is working in a domain where integer math or raw bit manipulation could provide performance gains that make or break the ability of a system to provide a feature.

There is no shortage of problem domains where you don't really bump up against either of these things frequently. The problem domain that JavaScript was originally intended for -- manipulating the behavior and appearance of elements on a web page -- is one of them. That's my experience, anyway, and I've done an awful lot of work with JavaScript.

Now, yeah, for fun or for profit I've sometimes worked on projects where I've pushed it to do things beyond that. Programmatic media generation, where you're trying to figure out a nice way to represent a byte, set it to the appropriate value, and then stuff it into base64-encoded data URIs; algorithms that could take advantage of integer shortcuts; or even occasions where I realize part of my model has an algebra that more or less maps to a bit vector with some rolling or logical operations. Then I do indeed think it'd be nice if I had byte-level abstractions I could work with, bitwise operators that map to the metal.
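
For example, a rough sketch of the byte-to-data-URI dance (assuming a browser-style environment where btoa() is available; the byte values are just illustrative):

```javascript
// Pack raw byte values into a base64 data URI. Assumes btoa()
// exists (browsers, and recent environments that provide it
// globally); the bytes here are just an arbitrary example.
var bytes = [0x47, 0x49, 0x46]; // "G", "I", "F"
var binary = bytes.map(function (b) {
  return String.fromCharCode(b & 0xFF); // clamp each value to one byte
}).join("");
var uri = "data:application/octet-stream;base64," + btoa(binary);
console.log(uri); // data:application/octet-stream;base64,R0lG
```

It works, but "string of char codes masquerading as bytes" is exactly the kind of contortion that makes you wish for real byte-level abstractions.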

FWIW, so do others. It seems likely JavaScript will eventually get these things.

In the meantime, though, a good chunk of the time, if I had to trade away the other features I like about JavaScript for the ability to better bit-twiddle -- say, if I had to write in Java or C instead -- I wouldn't even have to think about it for most problem domains.
posted by weston at 11:29 AM on September 14, 2011




This thread has been archived and is closed to new comments


