

Technically elegant but nearly forgotten
August 1, 2011 5:55 AM   Subscribe

Ron Doerfler's Dead Reckonings - Lost Art in the Mathematical Sciences is a collection of essays, in weblog format, on historical techniques in mathematical sciences, antique scientific instruments and other related topics.

A sampling of the topics covered follows. He has written about lightning calculators, magnetic deviation and navigation and nomography, the art of representing mathematical relationships and facilitating calculations with figures.

Many essays are in multiple parts, with the parts always posted sequentially. There are also PDF versions suitable for printing (the links for these are just before the comments sections).
posted by tykky (21 comments total) 27 users marked this as a favorite

 
I subscribed a year ago when he came up on Make's blog. His calendar is 4 feet from my head right now. It's really great, but I haven't made it past March yet. I also bought his book, which is also great but mostly unplumbed. It isn't light reading. It isn't difficult either, but you do need to stop and puzzle things out before they are obviously true.

But having only explored a little bit into his techniques, I'm already a much better mental calculator. 29*31 = (30 - 1)*(30 + 1) = 30² + 30 - 30 - 1² = 899.
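That difference-of-squares trick generalizes whenever the two factors sit symmetrically around a round number. A minimal sketch (the helper name is my own, not from the thread):

```python
# The (a - b)(a + b) = a² - b² trick behind 29*31 = 30² - 1² = 899.
# Works whenever x and y are equidistant from a midpoint a,
# i.e. whenever x + y is even.
def near_square_product(x, y):
    a = (x + y) // 2      # round midpoint, e.g. 30 for 29 and 31
    b = a - x             # offset from the midpoint, e.g. 1
    return a * a - b * b  # 30*30 - 1*1 = 899

print(near_square_product(29, 31))  # 899
```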
posted by DU at 6:01 AM on August 1, 2011 [1 favorite]


I would toss in the Ulam spiral.

Really, any graphical adjunct to a mathematical calculation is fair game, right? Visualizing Pi. If practicality was important, I'd go with Gunter's chain.
posted by twoleftfeet at 6:18 AM on August 1, 2011


Very nice. The current first article is about the book by Evesham, which I just picked up at the library. Doerfler's review is so thorough you don't really need to read the actual book.

The book has a beautifully elegant curve that D'Ocagne constructs to find roots of cubics x³ + px + q = 0. (If you want to see it, it's 'Courbe C3. Solutive de ....' a few pages back from the end of this book on Amazon preview. I don't know how to link to the actual page). I've finally understood how it's constructed, and the next step is "Write a program to create a PDF of it".
posted by benito.strauss at 8:06 AM on August 1, 2011


Write a program to create a PDF of it.

You have probably read about this in Dead Reckonings already, but I'd like to point out that there is a library for the Python programming language that would be very helpful for this purpose. It's called pynomo, and judging from the examples it can handle quite complicated constructions.
posted by tykky at 8:35 AM on August 1, 2011 [1 favorite]


The other cool thing about nomograms is where they went theoretically. The average nomogram lets you use two values to find a third one. That is, they let you compute a function of two variables, v = F(x,y). We often come across functions of three or more variables, and the question arose "Can we compute every function (of however many variables) by repeatedly applying functions of two variables?".

As a simple demonstration, look at the function of three variables that just multiplies all three values together, G(x,y,z) = x*y*z. The answer is 'yes' for this function: Define M(x,y) = x*y, the function of two variables that multiplies its two input values, then
G(x,y,z) = M( M(x,y), z)
That's not deep, but given an arbitrary G, can we always find such an M?
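The claim for this particular G is easy to check directly. A minimal sketch, using the same names as above:

```python
# Computing a three-variable function by repeatedly applying a
# two-variable one: G(x,y,z) = M(M(x,y), z).
def M(x, y):        # two-variable multiplication
    return x * y

def G(x, y, z):     # three-variable product
    return x * y * z

# The composition agrees with the direct definition.
for x, y, z in [(2, 3, 4), (1.5, -2.0, 0.5), (7, 0, 9)]:
    assert G(x, y, z) == M(M(x, y), z)
```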

In 1900, David Hilbert, king of the mathematicians in the 19th century, made a famous list of 23 unsolved problems. For question 13, he picked a specific nasty function G(a,b,c), and asked if it was possible to "find such an M".

Some fifty years later, Russian generic smart guy V. I. Arnold answered the question "Hell yes", showing that it was in fact possible for any arbitrary function of any number of variables, going much beyond the question Hilbert had asked. He was 19 years old at the time, and was working off the results of his teacher Andrey Kolmogorov, another name to be reckoned with.

For those not used to mathematical theorems, it's worth mentioning that these proofs are often not constructive; i.e., they don't tell you how to find the particular functions of two variables, just that they exist. That's how you play the math game these days.

It looks like some people are trying to make the game even harder, by only using addition and functions of one variable ('Addition' counts as a function of two variables A(x,y) = x + y. They're saying that it's the only function of two variables you're allowed to use.) Explicitly, this means that if you give them any function of two variables, V(x,y), they want to say there are functions of one variable f(t), g(t), and h(t) so that
V(x,y) = f( g(x) + h(y) )
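One familiar function already has exactly that f(g(x) + h(y)) shape: multiplication of positive numbers, via the slide-rule identity x*y = exp(ln x + ln y). A sketch (my own illustration, not from the thread):

```python
import math

# Multiplication of positive numbers in the f(g(x) + h(y)) form,
# with f = exp and g = h = ln.  This is the identity slide rules
# are built on.
def V(x, y):
    return math.exp(math.log(x) + math.log(y))

print(V(7.0, 6.0))  # ~42.0, up to floating-point rounding
```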
I don't know what the results are in this case, but I'm guessing that (like me) you came to this thread for the pretty pictures, so I'll link to this handout. It's intended to illustrate a result along the lines discussed above, but strikes me more as what playing Minecraft on acid must feel like.


posted by benito.strauss at 8:49 AM on August 1, 2011 [3 favorites]


Thanks for the pointer, tykky. I hadn't noticed the references to Pynomo before. (I hadn't thought about making one myself until just this weekend). In fact, this example on the Pynomo web site looks a lot like the curve I want to make.

The only drawback is that one of the things I like most about the nomograph is its 19th century style. (I don't actually need it for computations). I want to try to recreate the original as closely as possible — it's more of an art project.

But the Pynomo project looks really cool. The developer really deserves praise. The Examples page is a nice showcase, and the BMI chart could actually be useful these days, even though everyone owns a few calculators.
posted by benito.strauss at 9:04 AM on August 1, 2011


G(x,y,z) = M( M(x,y), z)

what
posted by DU at 9:08 AM on August 1, 2011


@DU: I am not sure what makes you ask what, but that claim seems correct to me in the context in which benito.strauss presented it. That is, for those particular definitions of M and G, it is true.
posted by tykky at 10:04 AM on August 1, 2011


G(x,y,z) = M( M(x,y), z)

what


In this case, G(x,y,z) = x*y*z, is a function that takes three values and multiplies them together. The notation G(x,y,z) says that it involves three variables. But it turns out that you only need to operate on two values at a time. The right hand side says you can multiply x and y together, and then take that result and multiply it by z, and you get the same thing.

This might not sound surprising, because most functions we write are compositions of operations on at most two values at a time. For instance, take this complicated function:
H(x,y,z) = x + y – z² + z * cos(x*y).
If you wanted to calculate a value you could do so only handling at most two values at a time. (If you like HP RPN calculators, it's the same thing as saying you only need a stack that's two deep).

But now let's take a messier function:

    J(x,y,z) = x – y   if z > 0
    J(x,y,z) = x + y   if z ≤ 0
This looks to be a function that is essentially of three variables. You have to look at all three values at one time to compute it. In that way it seems different from H( ).

Now let's think about it in terms of nomograms. In a basic nomogram, there are two scales, X and Y. You find the x and y values on each of those scales, and draw a line connecting the two points. Then you find where this line intersects a third line/curve (the Z scale), and that's your answer. A nomogram like this can only calculate a function of two variables, z = f(x,y). [Think about trying to create a nomogram for a function of three variables, say K(x,y,z). You'd have to have three scales, X, Y, and Z, and plot the input points x, y, and z, on each of them. Then what would you do? You can't just say "draw a line through those three points" — they might not lie on a line.]
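The geometry of that basic nomogram can be sketched in a few lines. This is a minimal illustration of the standard parallel-scale layout for z = x + y (the positions and scale factors here are my own choices, not from the thread):

```python
# A parallel-scale nomogram for z = x + y.  Put the X scale at
# horizontal position 0, the Y scale at position 2, and the Z scale
# halfway between, at position 1.  The straight line through
# (0, x) and (2, y) crosses the middle scale at height (x + y)/2,
# so the Z scale is graduated at twice its geometric height.
def nomogram_add(x, y):
    # height where the line through (0, x) and (2, y) hits u = 1
    crossing_height = (x + y) / 2
    # read off the Z scale, whose labels run at twice the height
    return 2 * crossing_height

print(nomogram_add(3, 5))  # 8.0
```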

So we'd think that there's no way we could use one or more nomograms to compute the function J(x,y,z) above. Arnold's result says that our instincts are wrong — there are a bunch of functions of just two variables that we could use to compute J(x,y,z).
posted by benito.strauss at 10:37 AM on August 1, 2011


Arnold's result says that our instincts are wrong — there are a bunch of functions of just two variables that we could use to compute J(x,y,z).

Yes, this is the part I'm "what"ing. Obviously it works for commutative functions (at least... I think that's obvious). But that it's possible for ALL functions is kind of mindblowing.

J(x,y,z) seems to indicate that you have to at least be allowed to do F(x,z) or F(y,z) as your "first pass".
posted by DU at 10:48 AM on August 1, 2011


That is, for a given G(x,y,z) there might NOT be an M(M(x,y),z), but if not there will be an M(x,M(y,z)) or an M(y,M(x,z)).
posted by DU at 10:55 AM on August 1, 2011


Yeah, I have no idea how you'd actually find the functions of two variables for J(x,y,z). And it doesn't have to be just one M. It could be
M( L(z,x), N( P(z, Q(y,x)), R(S(x,T(z,y)),y))),
just so long as each function only has two arguments.

It's one of those statements where you can tell that someone gets it because they're confused and disbelieving.
posted by benito.strauss at 11:07 AM on August 1, 2011


God, I love nomograms. I hadn't realized that before, so thanks for the post. I've just done a search and discovered that my local university library has 20 books on nomography, so I guess I've got plenty of happiness in my near future.
posted by neuron at 10:31 PM on August 1, 2011


Oh, if you are allowed to have many different 2-arg functions then it doesn't really blow my mind anymore. I mean, the non-linear and/or crazy ones like J() above are going to be a problem, but for normal functions it seems pretty clear that you can do that.

I think I'm less impressed because that's basically how computer programming works. You don't want one big procedure that takes a million arguments. You want a bunch of small ones that take just a few. So you break it up into pieces, etc. What was blowing my mind was that you could break it up into a single smaller procedure called multiple times with partial answers as arguments.
posted by DU at 3:06 AM on August 2, 2011


I think I'm less impressed because that's basically how computer programming works. You don't want one big procedure that takes a million arguments. You want a bunch of small ones that take just a few. So you break it up into pieces, etc.

That's a good point. It's especially clear with side-effect-free functional programming that a program is a composition of functions. Also, I think that in functional programming the functions are actually functions in the math sense, which is not the case in imperative programming languages.
posted by atrazine at 4:54 AM on August 2, 2011


... it doesn't really blow my mind anymore. I mean, the non-linear and/or crazy ones like J() above are going to be a problem, but for normal functions it seems pretty clear that you can do that.

Well, I can't argue someone into having a mind that's blown when it's not. But I wonder if your distinction between normal and crazy functions is analogous to the distinction between rational and irrational real numbers. Day-to-day most of what we see are rational numbers, but there are way more irrationals than rationals.

If I was asked to define 'normal' functions, I'd say something like "compositions of +, –, *, /, plus the trig, log, and exponential functions", which bakes in the ability to be expressed with functions of two variables. There are a lot more functions than that. [Though I find it interesting that back when they were still debating what constitutes a function, many people would have said that J( ) wasn't one function, but rather "two functions" joined together. But that's not the sense we use these days.]

Me, I'm still impressed. Think about writing programs where your functions can only accept two arguments, (and you can't just bundle a bunch of values into a C structure, or Haskell data type, and call that one argument).

I'm going to think about how to implement the function below (which I fondly remember from programming dBase IV a long time ago) with functions taking two arguments.
    IF(x,y,z) = x   if z > 0
    IF(x,y,z) = y   if z ≤ 0

posted by benito.strauss at 7:59 AM on August 2, 2011


(and you can't just bundle a bunch of values into a C structure, or Haskell data type, and call that one argument)

Continuity plays a role here too. If you don't care about continuity, you can encode multiple numbers in one (e.g., by interleaving the bits), so the requirement that we use only 2-argument functions becomes meaningless.
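A sketch of that bit-interleaving encoding (my own illustration): two non-negative integers pack reversibly into one, but the map is wildly discontinuous in the real-number sense, which is why continuity is the whole game in Hilbert's 13th.

```python
# Interleave the bits of x and y into a single integer z, and
# recover them.  Reversible, but nothing like continuous.
def interleave(x, y, bits=16):
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)      # x's bits at even positions
        z |= ((y >> i) & 1) << (2 * i + 1)  # y's bits at odd positions
    return z

def deinterleave(z, bits=16):
    x = y = 0
    for i in range(bits):
        x |= ((z >> (2 * i)) & 1) << i
        y |= ((z >> (2 * i + 1)) & 1) << i
    return x, y

assert deinterleave(interleave(25, 7)) == (25, 7)
```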

how to implement [IF(x,y,z)] with functions taking two arguments

That function isn't continuous, so it doesn't really fit under Hilbert's 13th. You will need to allow discontinuous 2-argument functions in your solution, at which point the trick above becomes available. But if you want a "nice" solution, without "unreasonable" discontinuity, you can... hm. I guess you don't want a solution, eh? You want to think about it more yourself.
posted by stebulus at 10:42 AM on August 3, 2011


Yeah, my example fails continuity. It could easily be fixed up to be continuous, but it led me to realize that no matter how many weird "if" clauses were in the definition, so long as the regions were defined by "H(x,y,z) > 0", where H( ) could be written with functions of two arguments, you could use approximate characteristic functions to piece them together.
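The approximate-characteristic-function idea can be sketched for the J(x,y,z) from earlier in the thread: smooth the jump in z with a steep sigmoid, so the result is continuous and built from sums, products, and one-variable functions. (My own illustration of the idea, not Arnold's construction.)

```python
import math

def sigmoid(t, k=50.0):
    """Approximately 1 for t > 0 and approximately 0 for t < 0;
    larger k makes the transition sharper."""
    return 1.0 / (1.0 + math.exp(-k * t))

# Continuous approximation of
#   J(x,y,z) = x - y if z > 0, x + y if z <= 0
def J_smooth(x, y, z):
    s = sigmoid(z)
    return s * (x - y) + (1 - s) * (x + y)

print(J_smooth(10, 3, 1.0))   # very close to 10 - 3 = 7
print(J_smooth(10, 3, -1.0))  # very close to 10 + 3 = 13
```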

So it seems very hard to come up with a (continuous) function of three variables which can't be written as the composition of functions of two variables. Which is what Arnold told me in the first place. Probably time to go read the original. I wonder how complicated it is?
posted by benito.strauss at 1:12 PM on August 3, 2011


Looks like the papers of Arnol'd and Kolmogorov are just a few pages:

V.I. Arnol'd. On functions of three variables. Dokl. Akad. Nauk. 114 (1957), 679--681. English transl., Amer. Math. Soc. Transl. (2) 28 (1963), 51--54.

A.N. Kolmogorov. Representation of functions of many variables. Dokl. Akad. Nauk. 114 (1957), 953--956. English transl., Amer. Math. Soc. Transl. (2) 17 (1961), 369--373.
posted by stebulus at 1:49 PM on August 3, 2011


Thanks for the references, stebulus. The Arnol'd paper is a very hairy inductive construction that reminds me that I'm not a professional mathematician, and persuades me to stop looking for the Kolmogorov paper.
posted by benito.strauss at 2:21 PM on August 3, 2011


This 1997 lecture by Arnol'd [ps.gz] (apparently delivered at this conference) has a simple and clear description of his idea for expressing functions on trees as sums of single-variable functions on the coordinates. (See pages 3-4, from "Kolmogorov had shown" to "that's how it worked".)

Kolmogorov's contribution — the reduction to functions on trees — is not explained, alas.
posted by stebulus at 3:21 PM on August 3, 2011




This thread has been archived and is closed to new comments