Easy as falling off a log
January 21, 2017 12:26 PM
Also your calculator probably computes x^y by computing exp(y*log(x)).
posted by scose at 1:10 PM on January 21, 2017 [8 favorites]
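scose's point is easy to demonstrate in a few lines. This is a minimal Python sketch of the exp/log route to x^y, not how any particular libm actually does it (real pow() implementations take far more care with edge cases and carry extra precision in the intermediate logarithm); pow_via_exp_log is a made-up name for illustration.

import math

# Sketch of computing x**y as exp(y * log(x)), for x > 0.
def pow_via_exp_log(x: float, y: float) -> float:
    if x <= 0:
        raise ValueError("this sketch only handles x > 0")
    return math.exp(y * math.log(x))

print(pow_via_exp_log(2.0, 10.0))  # ~1024.0, up to rounding error
print(2.0 ** 10.0)                 # 1024.0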
I guess some function would still have had these nice properties, but whenever I see a log-transform make everything so much easier to deal with, I end up thinking "thank god for logarithms".
posted by vogon_poet at 1:26 PM on January 21, 2017 [3 favorites]
Back around 2001 I used this to get an Atari 2600 to do polar-cartesian conversion, so that a tank parked in the middle of the screen could drive around a simple landscape that translated and rotated around it. It wasn't a playable game; it was jittery and there wasn't much room left in the 4K ROM, but by all rights it shouldn't have been possible at all with the CPU spending most of its time chasing the beam to build the video display. But with a log lookup table to turn scaling multiplies into additions, it was do-able.
posted by Bringer Tom at 2:21 PM on January 21, 2017 [13 favorites]
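A rough Python model of the trick Bringer Tom describes; this is my own sketch, with an arbitrary byte-sized scaling rather than the 2600's (the actual scaling used is quoted later in the thread).

import math

# Multiply via log tables: three lookups and one addition replace a multiply.
# SCALE is an arbitrary choice that makes log values fit in a byte.
SCALE = 255 / math.log(255)
LOG = [0] + [round(SCALE * math.log(x)) for x in range(1, 256)]
EXP = [round(math.exp(v / SCALE)) for v in range(256)]

def mul_approx(a: int, b: int) -> int:
    if a == 0 or b == 0:                   # log(0) must be special-cased
        return 0
    return EXP[min(LOG[a] + LOG[b], 255)]  # saturates once a*b exceeds 255

print(mul_approx(12, 13), "vs exact", 12 * 13)  # approximate vs 156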
Logarythm: Birth control method for lumberjacks.
posted by Greg_Ace at 2:25 PM on January 21, 2017 [4 favorites]
So this guy is trying to breed some venomous snakes for a scientific project, but he can't get them to successfully breed in captivity. He mentions this to a colleague in the math department, who immediately claims to have a solution. He puts together some rustic platforms made of whole tree trunks and places them in the snake enclosure. Sure enough, this solves the problem. The scientist asks the mathematician how he knew this would work, and he replies, "Everyone knows, adders need log tables to multiply."
posted by 445supermag at 3:15 PM on January 21, 2017 [29 favorites]
This article talks about Napier's invention of logarithms for multiplication. But he was also the inventor of Napier's bones, a different method of multiplication, a sort of simplistic mechanical calculator. There are some beautiful antique ones out there, frequently ivory, minutely inscribed.
posted by Nelson at 3:17 PM on January 21, 2017 [5 favorites]
To add to what scose said, there are several reasons why logarithms often appear in statistics and machine learning:
- the Gaussian distribution is a (normalized) exponential of a quadratic, so its logarithm is a quadratic (plus the log of the normalization constant)
- likelihoods are often very small quantities, and working with log-likelihoods can avoid numerical problems (and as the log function is monotonic, you can directly maximise the log-likelihood rather than the likelihood)
- The logarithm also appears in the definition of entropy, due to the identity log(ab)=log(a)+log(b) and the requirement that the entropy of a source formed by combining two independent sources should equal the sum of the entropies of the two sources
- as the log function is concave, we can apply Jensen's inequality: for non-negative weights with a_1 + a_2 + ... = 1, we obtain log(a_1 x_1 + a_2 x_2 + ...) >= a_1 log(x_1) + a_2 log(x_2) + ... (checked numerically in the sketch below)
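A quick numerical check of that last bullet; this is a throwaway sketch with random positive data and random normalised weights, nothing more.

import math
import random

# Verify the Jensen bound: log(sum a_i x_i) >= sum a_i log(x_i)
# for positive x_i and non-negative weights a_i summing to 1.
random.seed(0)
for _ in range(5):
    x = [random.uniform(0.1, 10.0) for _ in range(4)]
    w = [random.random() for _ in range(4)]
    total = sum(w)
    a = [wi / total for wi in w]            # normalise weights to sum to 1
    lhs = math.log(sum(ai * xi for ai, xi in zip(a, x)))
    rhs = sum(ai * math.log(xi) for ai, xi in zip(a, x))
    assert lhs >= rhs
    print(f"{lhs:.4f} >= {rhs:.4f}")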
I guess some function would still have had these nice properties
They had an earlier invention which was more cumbersome: prosthaphaeresis.
posted by tss at 4:14 PM on January 21, 2017 [5 favorites]
Logarithms are also used in numerical computing: it is less numerically stable to multiply a lot of numbers together than to add their logarithms.
(For non-programmers: computers will forget the lowest digits of a very big or very small number. If this happens in the middle of an algorithm then you'll get the wrong answer, but if it happens at the end, it's not a problem! So if you multiply a lot of numbers, the running product gets really big (or really small) very fast, and this can cause it to come out wrong. If you add logarithms instead, the intermediate values stay modest, and things only get extreme at the very end, when you convert back, so you get the right answer.)
posted by ragtag at 5:37 PM on January 21, 2017 [4 favorites]
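Here is ragtag's point in runnable form. This sketch shows the underflow direction, which is the one that bites log-likelihood code; the probabilities are arbitrary numbers chosen for illustration.

import math

# Multiply 100 small probabilities directly and the product underflows to
# zero partway through; sum their logs and the result stays usable.
probs = [1e-5] * 100

product = 1.0
for p in probs:
    product *= p
print(product)                            # 0.0 -- underflowed

log_sum = sum(math.log(p) for p in probs)
print(log_sum)                            # about -1151.3, the log of the true product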
Messing around with logs and an associated series (the harmonic series), I came up with a strange-looking approximation I don't remember seeing before:
For natural numbers n (and the bigger n is, the better the approximation),
n ~ (e)(e^(1/2))(e^(1/3))(e^(1/4)) ... (e^(1/n)) / 1.781.
In other words, a natural number n can be approximated by multiplying the successive roots of e together all the way up to the nth root, and then dividing that product by 1.781.
posted by jamjam at 7:20 PM on January 21, 2017
e is approximately equal to (1 + 9^(-4^(6*7)))^(3^(2^85))
It's a cute expression in that it uses each of the digits 1 through 9 exactly once
It is also accurate to the first 18,457,734,525,360,901,453,873,570 digits of e... according to this Numberphile video.
posted by MikeWarot at 8:26 PM on January 21, 2017 [1 favorite]
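Why the expression works (a quick derivation of my own, not from the video):

9^(-4^(6*7)) = (3^2)^(-4^42) = 3^(-2*2^84) = 3^(-2^85)

so the whole expression is (1 + 1/N)^N with N = 3^(2^85), which is the defining limit of e. The error of (1 + 1/N)^N is roughly e/(2N), and log10(2*3^(2^85)) is about 1.85 x 10^25, which matches the digit count quoted above.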
jamjam, 1.781 is e^γ where γ is the Euler-Mascheroni constant, defined as γ = lim_{n→∞} (1 + 1/2 + 1/3 + ... + 1/n - log(n)). So log(n) = 1 + 1/2 + 1/3 + ... + 1/n - γ + o(1), and exponentiating gives your approximation.
posted by eruonna at 9:07 PM on January 21, 2017 [2 favorites]
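The two comments above, checked numerically; a quick sketch, where 0.5772... is γ and e^γ ≈ 1.7811 is jamjam's 1.781.

import math

# Check jamjam's approximation n ~ exp(H_n) / e^gamma, where
# H_n = 1 + 1/2 + ... + 1/n.
GAMMA = 0.5772156649015329          # Euler-Mascheroni constant

H = 0.0
for n in range(1, 1001):
    H += 1.0 / n
    if n in (10, 100, 1000):
        print(n, math.exp(H - GAMMA))
# Prints ~10.51, ~100.50, ~1000.50: the relative error shrinks with n,
# though the absolute gap tends to 1/2, since H_n = log(n) + gamma + 1/(2n) + ...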
Atari 2600 ... with a log lookup table to turn scaling multiplies into additions, it was do-able
Have you seen this rather nice 6502 algorithm for reducing a multiply to lookups in a table of squares?
http://everything2.com/title/Fast+6502+multiplication
I imagine it offers more precision than you needed and burns more ROM space as well, but it's cute all the same.
posted by flabdablet at 9:48 PM on January 21, 2017 [1 favorite]
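For the curious, the identity behind such a table of squares; this is a Python model of the technique, not a transcription of the linked 6502 routine.

# Quarter-squares multiplication: a*b = floor((a+b)^2/4) - floor((a-b)^2/4).
# (a+b) and (a-b) always have the same parity, so the floors cancel exactly.
TABLE = [x * x // 4 for x in range(511)]   # covers sums of two 8-bit values

def mul(a: int, b: int) -> int:
    # Two table lookups and a subtraction replace the multiply.
    return TABLE[a + b] - TABLE[abs(a - b)]

assert all(mul(a, b) == a * b for a in range(256) for b in range(256))
print(mul(73, 217))                        # 15841, exact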
I graduated high school in India in 2003. However, we were still not allowed to use calculators until we got to college, and AFAIK that is still the case. So I became intimately familiar with the little blue book of Clark's Tables, with its trigonometric and log tables. By the end of 12th grade, I was really rather quick with the thing and had a good sense of when I had simplified an expression enough to start taking logs. Of course we grumbled about not being allowed calculators a fair bit, and an illicit calculator would often be passed around the physics lab that the teacher would turn a blind eye to. In retrospect, I can understand to some extent why the math syllabus authorities might not have permitted calculators in high school - not being able to use calculators automatically favored more elegant solutions to math problems over brute-force ones. It forces you to look for patterns in the numbers, and to simplify, simplify, simplify as much as possible before calculating the final answer. Still, I'm glad I don't have to use log tables any more these days, and being allowed to use calculators in college felt like a huge blessing.
posted by peacheater at 5:10 AM on January 22, 2017 [2 favorites]
Yup, and Clark's Tables is still in print. It's rather odd that you can use all this computing power to buy a book of log tables from Amazon India. Curriculum, I shall never understand thee.
calculators : log tables :: symbolic mathematics : calculators, for me. I love the fact that the Raspberry Pi comes with a full (if non-commercial use) licence for Mathematica 10. It's not super-fast, but it works. Just yesterday, I needed to calculate the outer/inner vertex ratio of a [5/2] concave decagon (a ★ to you). Mathematica reduced the fearsome mess of ratios of surds to a tidy ½(3+√5) in no time.
posted by scruss at 5:39 AM on January 22, 2017 [1 favorite]
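The same reduction works in sympy, for anyone without a Mathematica licence. Expressing the outer/inner vertex radius ratio of the {5/2} star as cos(π/5)/cos(2π/5) is my formulation, not scruss's, so treat the setup as an assumption; the closed form it reduces to is the one quoted above.

from sympy import cos, pi, simplify

# Outer/inner vertex radius ratio of a {5/2} star polygon, as a ratio of surds.
ratio = cos(pi / 5) / cos(2 * pi / 5)
print(simplify(ratio))   # 3/2 + sqrt(5)/2, i.e. (3 + sqrt(5))/2 ~ 2.618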
One of my first jobs as a larval programmer was refactoring a large chunk of Z80 assembler which contained a BASIC interpreter with floating-point and a respectable set of functions. Said code was, shall we say, sparsely documented throughout and based on an earlier integer-only version, so the maths stuff was bolted on much like a fungal parasite inserting its mycelial tendrils into its host. It was also remarkably compact and intertwined with sets of inscrutable primitives calling each other in apparently random ways.
The Z80 is not a complicated processor and at heart does a small number of simple things to small binary numbers held in a small number of registers. I made decent progress in unpicking and documenting most of the code, but the one-gnomic-comment-every-twenty-lines of Z80 bit-shuffling that comprised the mathematics completely eluded me. It was like thinking oneself handy with Lego and then discovering that somebody had built a working television out of nothing but.
The company I was working in contained a decent proportion of high-flying Oxbridge types with proper degrees in hard things, so I wasn't too ashamed to send my co-workers chunks of the code with pleas for elucidation. Eventually, we came to the conclusion that (a) it worked, (b) it had probably been written on LSD by someone who (c) was in communion with the angry ghost of Euler and (d) if I stuck to documenting the interfaces to the rest of the code, it would be entirely sufficient unto the day. We did improve on the comments in the code by including a header file in each module called (if I remember correctly) warning.h -
; Good friend for Jesus sake forbeare,
; To dig the dust enclosed here.
; Blessed be the man that spares these stones,
; And cursed be he that moves my bones.
(And yes, we did appreciate the ultimate Napierian pun)
Having previously worked in engineering companies where access to one of the small team of tame mathematicians was held as a high honour requiring many mystic rites, I was not altogether unprepared for this: however, it did demonstrate to me that no matter how clever I thought I was and how well I thought I understood the way computers worked, there were plenty of oceans where I simply didn't have the gills to breathe.
posted by Devonian at 5:50 AM on January 22, 2017 [10 favorites]
Am I the only one who noticed the math mistake in the Forbes article? If the author had done his work correctly, he would have had an even more accurate answer!
posted by math at 8:49 AM on January 22, 2017
Have you seen this rather nice 6502 algorithm for reducing a multiply to lookups in a table of squares?
Looking back at my own source code I see that I considered that algorithm myself but wrote it off as needing too much ROM because it would need a 16-bit per entry table. It has actually been over a decade since I looked at this code, and if I may quote myself...
;...Since the cartesian-to-polar conversion will
; require division, I will be using the identities
;
; exp(log x + log y)=x*y and exp(log x - log y)=x/y
;
; This trades off an exact result for numerous advantages. Transforms in
; both directions can use the same log and exp tables. And the errors are
; concentrated at extreme distances from the origin, where I've already
; decided to tolerate them.
;
; So we will use a table of logs scaled so that exp(5.1)=256. This allows
; us to handle numbers up to 160, with errors in the 1-pixel range out as
; far as 40 or 50. (We could improve local accuracy by sacrificing
; maximum distance.) We will also of course need the mirror table of
; exponents which will have 256 entries. We will have to treat zero as
; a special case, since log(0)=negative infinity, not an 8-bit value.
;
; We'll use one table with different entry offsets for sine and cosine,
; and since this is the only thing we use the trig tables for they
; can be stuffed with pre-scaled logs ready for scaling multiplies
; instead of raw cosine values.
Damn I miss playing with Stella code.
posted by Bringer Tom at 2:30 PM on January 22, 2017 [2 favorites]
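A Python model of the scheme those comments describe, like the sketch earlier in the thread but with the exp(5.1)=256 scaling and the divide identity. The rounding details here are guesses on my part, not a transcription of the 2600 tables.

import math

# Logs scaled so a byte spans ln(x) in [0, 5.1]; exp(5.1) ~ 164, hence
# "numbers up to 160". The same pair of tables serves multiply and divide:
# exp(log x + log y) = x*y and exp(log x - log y) = x/y.
K = 255 / 5.1                                  # scaled-log units per natural-log unit
LOG = [0] + [round(K * math.log(x)) for x in range(1, 161)]  # log(0) special-cased
EXP = [round(math.exp(v / K)) for v in range(256)]

def tmul(x: int, y: int) -> int:
    return 0 if x == 0 or y == 0 else EXP[min(LOG[x] + LOG[y], 255)]

def tdiv(x: int, y: int) -> int:
    return 0 if x == 0 else EXP[max(LOG[x] - LOG[y], 0)]

# Rough approximations of 12*13=156 and 150/7~21; small table errors,
# in line with the "1-pixel range" the source comments accept.
print(tmul(12, 13), tdiv(150, 7))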
This thread has been archived and is closed to new comments