Eleven Equations True Computer Science Geeks Should (at Least Pretend to) Know
November 29, 2011 8:40 AM   Subscribe

Eleven Equations True Computer Science Geeks Should (at Least Pretend to) Know

The author doesn't go into much detail on what these equations mean, so here's additional background:
  1. Binomial Coefficient: explanation, source code
  2. De Morgan's Law
  3. Eigenvectors: lecture (long, but helpful), unfunny cartoon, tutorial Disclosure: I still have no clue what eigenvectors are or why they're useful
  4. Pumping Lemma - in layman's terms
  5. Information entropy - measures how little (or how much) information there is in an event. Different possible outcomes (say, a coin flip) translate into entropy. The theory can be used, for example, to measure the strength of a password -- the more entropy, the harder it is to crack.
  6. Bayes Theorem - maybe best known for its use in identifying spam.
  7. Fermat's Little Theorem - basically, if p is a prime number, then (a^p - a) will be evenly divisible by p for any integer a. It's a way of identifying whether a given number is probably prime. Very useful in cryptography.
  8. Natural Join - used extensively in relational databases (although more often manifested in the somewhat-related INNER JOIN)
  9. Fixed-Point - another concept I don't fully get, heavily involved in lambda calculus. Here's an explanation focusing on Haskell
  10. O(n) - pretty fundamental concept in programming algorithms - how much time will this algorithm take to run with n inputs. Best possible case is O(1) (hash-lookups are often close to O(1)). Worst possible case is O(n!) (O(n*(n-1)*(n-2)…*2*1), brute-force solution to the travelling salesman problem).
  11. Euler's identity - actually, he just includes it because it's pretty. It's been called the most beautiful theorem in mathematics (previously).
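Item 7 is compact enough to try directly. Here's a minimal sketch of the Fermat primality test in Python (my own illustration, not from the linked article; note that Carmichael numbers such as 561 can fool it):

```python
import random

def fermat_probably_prime(n, trials=20):
    """Fermat test: if pow(a, n-1, n) != 1 for some base a, n is
    definitely composite; if every tried base passes, n is *probably*
    prime. Carmichael numbers can fool every base coprime to n."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    for _ in range(trials):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:
            return False
    return True

print(fermat_probably_prime(97))    # True: 97 is prime
print(fermat_probably_prime(100))   # False: composite
```

The built-in three-argument `pow` does modular exponentiation, which is what makes the test cheap even for large n.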
posted by Deathalicious (138 comments total) 123 users marked this as a favorite
 
I have personally used the Pumping Lemma to prove theorems about Peg Solitaire.
posted by Wolfdog at 8:42 AM on November 29, 2011 [1 favorite]


However, that doesn't excuse the omission of Stokes' Theorem, which is best theorem.
posted by Wolfdog at 8:42 AM on November 29, 2011


Hash lookups are O(logn). Oh, and I remember now why I dropped out of Comp Sci. I hate looking at math equations, they're impossibly ugly with the symbols we use.
posted by Yowser at 8:49 AM on November 29, 2011 [1 favorite]


*whooshing sound right over my head*
posted by desjardins at 8:49 AM on November 29, 2011 [15 favorites]


Thompson's axiom:

Going tough = weird pro
posted by Danf at 8:51 AM on November 29, 2011 [5 favorites]


I thought a fair few of these were only tenuously defined as equations.

That said, although I remembered learning about all of these at school, a few didn't show up until grad school (eg the fixed-point y-combinator), or showed up in classes in the math department (eg Fermat's little theorem). I wonder how the author chose these 11.
posted by troublesome at 8:52 AM on November 29, 2011 [2 favorites]




Small nit: there's no worst possible big-oh. It's not like factorial is the fastest-growing function there is. E.g.
posted by a snickering nuthatch at 8:54 AM on November 29, 2011


Yeah this is sort of a strange hodgepodge. I think the author sort of is inclined towards the "pretending to know" side of the spectrum here.
posted by silby at 8:54 AM on November 29, 2011 [13 favorites]


I don't think Stokes' Theorem is that important to CS though.
posted by atrazine at 8:54 AM on November 29, 2011


I recognize the names of all of these. And in most cases, know enough about the theorem to know when I should look it up. But a few, not even that. For instance, I can never remember a damn thing about linear algebra no matter how often I learn it.
posted by DU at 8:57 AM on November 29, 2011 [2 favorites]


Hash lookups are O(logn)

I think you're confusing it with (most) tree lookups or binary search. A hash table is usually stored as an array, so a hash lookup is constant time with respect to the number of elements stored in the table, as long as you don't get collisions (i.e. it takes the same time to lookup an element in a table with only one element as it does a table that is full, assuming no collisions). With collisions the behavior becomes more complicated and depends on the collision resolution method.

On the other hand, if you consider the input to be the size of the key, then the running time depends on the particular hash function, but it will typically be linear with respect to key size.
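To make the constant-time point concrete, here's a toy separate-chaining hash table (my own sketch, not production code). The lookup only ever walks one bucket, regardless of how many items the table holds:

```python
class ChainedHashTable:
    """Toy hash table with separate chaining for collisions."""
    def __init__(self, nbuckets=64):
        self.buckets = [[] for _ in range(nbuckets)]

    def _bucket(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:            # overwrite an existing key
                bucket[i] = (key, value)
                return
        bucket.append((key, value))

    def get(self, key):
        # Average O(1): we scan only the one short bucket the key
        # hashes to, not the whole table.
        for k, v in self._bucket(key):
            if k == key:
                return v
        raise KeyError(key)

table = ChainedHashTable()
for i in range(1000):
    table.put(f"key{i}", i)
print(table.get("key537"))  # 537
```

With a fixed bucket count the chains grow linearly, which is why real implementations resize as the load factor climbs; that's the collision behavior getting "more complicated."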
posted by jedicus at 8:59 AM on November 29, 2011


My grumpy self was all ready to complain about something or the other, but any CS ideas list mentioning the Y combinator makes me happy. (For blissful, however, it would need to include the Curry-Howard Isomorphism, heh.)
posted by Iosephus at 9:01 AM on November 29, 2011 [2 favorites]


I don't think Stokes' Theorem is that important to CS though.

BEST THEOREM. And more relevant to CS than Euler's lame old identity. Compute the flux across the surface of the heat sink! Evaluate the curl of your coffee! Evaluate the divergence of C++ from an acceptable programming language!
posted by Wolfdog at 9:02 AM on November 29, 2011 [17 favorites]


No, hash lookups are O(logn), I'm certain of it. God, I should have stuck Comp Sci out, I would have kicked ass.
posted by Yowser at 9:04 AM on November 29, 2011


Amdahl's law is probably used by "True Computer Science Geeks" more often than Euler's identity.
posted by Triplanetary at 9:04 AM on November 29, 2011 [7 favorites]


I hate looking at math equations, they're impossibly ugly with the symbols we use.

Huh? That's like saying you hate reading because the symbols we use in English are impossibly ugly.

I think the integral symbol is pretty gorgeous. I love how in inequalities, the symbols are active, "eating" the larger thing. I love how ingrained the "plus" symbol is to me (and evidently many others) that it just means positivity (here on MeFi, it means "favorite").

He left out one of my favorites, the Cauchy-Schwarz Inequality, second only to this fantastic and true inequality.
posted by King Bee at 9:04 AM on November 29, 2011 [4 favorites]


I wish I had his job and free time.
posted by CautionToTheWind at 9:05 AM on November 29, 2011


Not that there's any reason I should know any of that but boy do I feel dumb.

Thanks for nothing!
posted by mazola at 9:07 AM on November 29, 2011


King Bee, whenever I see a math equation (unless it's an algebraic equation, which is a piece of pie), my eyes glaze over. I truly hate looking at math equations, and I think a large part of it is how ugly our way of expressing them are.
posted by Yowser at 9:08 AM on November 29, 2011 [2 favorites]


jedicus, I'm fairly certain that only balanced trees (such as red-black trees) are O(logn)... oh god, why is all this garbage in my brain?
posted by Yowser at 9:09 AM on November 29, 2011


Euler's lame old identity

May Euclid strike you blind (or afflict you with 13 kids).
posted by DU at 9:09 AM on November 29, 2011


how ugly our way of expressing them are.

Piffle
posted by DU at 9:10 AM on November 29, 2011


Piffle

Pi.f(f(le)) surely?
posted by lalochezia at 9:12 AM on November 29, 2011 [2 favorites]


Some of these are no brainers, you kinda want to know Big O, but some of them come up so infrequently you can just look them up.

I used Bayes Theorem once.

I had a contract job where I was to write an application to manage mailing lists. These lists came in the form of Excel spreadsheets. A column had one of a handful of things in it: first name, last name, zip, street address, phone number. The columns were never in any particular order.

The first requirement was that the app needed to correctly figure out what data a column contained. Zip and phone were easy. Street address wasn't too hard; I built a table of acceptable address terms from a list I got from the USPS, like street, avenue, boulevard, etc., and checked columns against that.

First and last name were a problem. Obviously many words, like "george", can be a first or last name. I built tables using census data of the most common first and last names and ran each column against that, figuring the probability that each word was a first or last name. Did a bunch of rows till the app was reasonably sure.

The app worked perfectly, except that on the decrepit machines they intended to run it on, figuring out first and last name was excruciatingly slow. The users also didn't trust it; they kept asking what to do if the computer figured wrong.

I eventually got rid of all the Bayesian logic and added comboboxes to let the user tell the app which columns were which. I realized users sometimes want to do *more* work themselves if it gives them a feeling of control.

The app had some other issues, such as de-duping. Can anyone tell me what the O of checking each address against every other address is? I eventually just did a straight sort on the addresses and compared each address against only adjacent addresses. Sped it up by orders of magnitude.
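The sort-and-scan trick looks roughly like this (a sketch with made-up data; real address matching would need normalization first):

```python
def dedupe(addresses):
    """Replace O(n^2) all-pairs comparison with an O(n log n) sort
    followed by one O(n) pass that compares only adjacent entries."""
    result = []
    for addr in sorted(addresses):
        if not result or addr != result[-1]:
            result.append(addr)
    return result

rows = ["12 Oak St", "99 Elm Ave", "12 Oak St", "7 Pine Rd"]
print(dedupe(rows))  # ['12 Oak St', '7 Pine Rd', '99 Elm Ave']
```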
posted by Ad hominem at 9:13 AM on November 29, 2011 [2 favorites]


Gah, I forgot about the difference between worst and average (and best?) case for the big O. Never mind, I give up the internets.
posted by Yowser at 9:14 AM on November 29, 2011


Next time you know a young person that wants to learn how to make games and wants to go to school to "learn computers", show them this list. Or perhaps show them a list of the best papers at the top CS conferences. There's this famous quote which is commonly misattributed to Edsger W. Dijkstra but was actually in a paper by Fellows and Parberry:
Computer science is no more about computers than astronomy is about telescopes, biology is about microscopes or chemistry is about beakers and test tubes.
The point being, CS is about the science of computing, and the computer is just a tool. Many of the bedrock theorems in CS date back to the 60s or 70s and are still useful today. The most current editions of the first three volumes of the famous tome The Art of Computer Programming date to 1973 and 1981. You will likely learn how to program in a CS curriculum, but it will not be the focus and you will certainly not be taught many of the required subjects that real programming requires, such as revision control systems and build systems, or how to use common APIs. It's expected that you pick those up yourself or learn them at your first job.

There is a reason that CS departments have such high attrition rates for the first few semesters.
posted by Rhomboid at 9:15 AM on November 29, 2011 [13 favorites]


Can anyone tell me what the O of checking each address against every other address is?

n²
posted by DU at 9:15 AM on November 29, 2011 [3 favorites]


Pairwise comparison is O(n²), your heuristic would be O(n).
posted by silby at 9:18 AM on November 29, 2011 [1 favorite]


If you can get your hands on a copy of this book without breaking the bank and complete all the proofs, I guarantee that linear algebra will be burned into your mind until your dying day.

jedicus, I'm fairly certain that only balanced trees (such as red-black trees) are O(logn)... oh god, why is all this garbage in my brain?

Simple binary trees retain O(log n) lookup time as long as they've been populated randomly. Red-black trees were invented because that's rarely the case with real-world data.
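That difference is easy to see empirically. A throwaway sketch (plain unbalanced BST, nodes as nested lists; names my own): random insertion order keeps the tree shallow, while sorted input degenerates it into a linked list.

```python
import random

def insert(node, key):
    """Insert key into an unbalanced BST; a node is [key, left, right]."""
    if node is None:
        return [key, None, None]
    if key < node[0]:
        node[1] = insert(node[1], key)
    else:
        node[2] = insert(node[2], key)
    return node

def depth(node):
    if node is None:
        return 0
    return 1 + max(depth(node[1]), depth(node[2]))

keys = list(range(512))
random.shuffle(keys)
tree = None
for k in keys:
    tree = insert(tree, k)
print(depth(tree))    # roughly 2-3x log2(512), i.e. around 20-30

worst = None
for k in range(512):  # sorted insertion: every node goes right
    worst = insert(worst, k)
print(depth(worst))   # 512: the "tree" is a linked list
```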
posted by invitapriore at 9:18 AM on November 29, 2011 [3 favorites]


Other ones that are important to know:

Joseph's Binomial Gambit
Schuster's Particulate equation
Benjamin's Loose Change Conundrum
Roger's Rogering
Coolio's Posse Limit
Buschel's Law for Naming Equations
1 + 2 with a half axel and a twist of lime.
posted by tittergrrl at 9:21 AM on November 29, 2011 [12 favorites]


Next time you know a young person that wants to learn how to make games and wants to go to school to "learn computers", show them this list.

No, please don't. While there are computer scientists who write papers, present at conferences, etc, the vast majority of people programming computers are practitioners, not scientists. You can make a pretty good living as a software practitioner, and businesses need far more of them than true scientists. In fact, we are desperately short of them. Do what you can to encourage people to enter this field.
posted by Triplanetary at 9:24 AM on November 29, 2011 [10 favorites]


Pumping Lemma

Choose from the following:

a.) Oh, matron.
b.) Phwoaaarrrr!
c.) Wink wink, nudge nudge.

I may have been living in England a tiny bit too long.
posted by Mr. Bad Example at 9:25 AM on November 29, 2011


desjardins: "*whooshing sound right over my head*"

I'm sure we can get good layperson explanations for most of this stuff. I'll try to start.

Binomial Coefficients:
Let's say you have a set of items, all different, in a bag. You pick out some number of them from the bag. So you end up with a new set of items - the stuff you picked out. How many different possible sets of stuff could you pick out of a bag, assuming the bag started with n items, and you picked k items out?

The first item you picked from the bag could have been any of n possibilities. The next item could have been any item but the one you picked already (so there are n-1 possibilities for that). And so on- the next item has n-2 possibilities, etc.

Mathematicians and other interested folks compactly represent n times n-1 times n-2 times n-3 all the way down to 1 as "n!"

But we aren't necessarily picking out every item in the bag, that would be lame. We stop picking items after k choices. So we need some way of expressing that we're multiplying n times n-1 times n-2 etc but only the first k terms of that.

So, concretely, if we had 9 items in the bag to start out with (n = 9) and we were picking out 3 items (k = 3) we would want to just multiply 9 * 8 * 7. Our compact representation of multiplying descending terms has to go all the way down to 1, but we can do a clever math trick to just get the terms we want. If we take 9 * 8 * 7 * 6 * 5 * 4 * 3 * 2 * 1 and divide it by 6 * 5 * 4 * 3 * 2 * 1, the 9 * 8 * 7 survives and all the rest of the terms cancel out. We know we want to divide by 6! to cancel out all those terms we don't want- is there a systematic way we can know to divide specifically by 6! from the numbers we started out with (9 and 3)? Yes; 9 - 3 = 6. So what we want to cancel out is n-k.

So the expression we have so far for the number of different possible sets you might draw from your bag is n! / (n-k)!. That's really close, but there's one last issue to take care of. Our explanation talked about "drawing the first item" and "drawing the second item" etc., but the final answer, which just talks about the sets of items you end up with, doesn't care about the order that you drew the items in. If we just used n!/(n-k)!, we would end up counting some sets twice (or more!) just because there are different orders that we could have drawn those sets in.

We want to correct for that by dividing by another number - the number of different orders we could have drawn the set in. Let's say that from your bag of 9 items, we've drawn 3 items- and those three items are a pencil, a spork, and a xylophone. To do our little correction we need to know how many orders we could have drawn these items in so we can divide by that. How many orders are there? Well, we could have drawn the pencil first, or the spork first, or the xylophone first- 3 choices. Then there are two items left that could have been drawn second, and then there's just one item left. 3*2*1. That's just 3!. From the original problem, k was 3. The number of different orders of the k items we picked is k!.

Dividing our formula by that, we get (n! / (n-k)!) / k! . When you divide by two things one right after another, that's the same as dividing by the product of those two things, so that's the same as n! / ((n-k)! * k!). Tada!
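The finished formula translates directly into code. A quick sketch of n! / ((n-k)! * k!), checked against Python's built-in math.comb:

```python
from math import factorial, comb

def binomial(n, k):
    """Number of k-item subsets of an n-item bag: n! / ((n-k)! * k!)."""
    return factorial(n) // (factorial(n - k) * factorial(k))

print(binomial(9, 3))                 # 84: the pencil/spork/xylophone bag
print(binomial(9, 3) == comb(9, 3))   # True: matches the standard library
```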
posted by a snickering nuthatch at 9:25 AM on November 29, 2011 [37 favorites]


That's a very well-written explanation, Jpfed! I'm quite impressed at how easy it was to understand.
posted by Triplanetary at 9:31 AM on November 29, 2011


I've at least brushed against most of these in school but have seldom needed them in the real world.
posted by octothorpe at 9:34 AM on November 29, 2011


Jpfed, are you a teacher/professor?

Gawd knows I've met many a calculus professor who lacked the eloquence you obviously possess with regards to explaining mathematical concepts to students. Well done.
posted by RolandOfEld at 9:35 AM on November 29, 2011 [1 favorite]


Yesterday, in class, I was finishing up the chapter on infinite series, and at the end there's a cool little way to prove Euler's Identity using the Maclaurin series for e^x.

I trotted out the number i, asked them if they knew what I was talking about, and unanimously they're like "square root of -1, some imaginary number, right?" I'm giddy at this point, and I'm like, "yes, exactly. Now, if we want to know how it plays with the stuff we already have..." and I show them the derivation, ending up with Euler's Identity.

They collectively sit in silence, and I ask if they liked that. Some guy asked "so, is this on the exam?" When I say no, it's just something I thought you guys might have seen before and wondered why it might be true, and well, here's a reason. Then I decided to ask if anyone had seen Euler's Identity before.

No one had. I sighed, and moved on.
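For anyone who wants the punchline numerically rather than symbolically, here's a sketch: partial sums of the Maclaurin series for e^x, evaluated at x = iπ, converge to -1.

```python
import cmath
from math import pi

def exp_series(x, terms=40):
    """Partial sum of the Maclaurin series e^x = sum of x^n / n!."""
    total, term = 0, 1
    for n in range(terms):
        total += term
        term *= x / (n + 1)   # turn x^n/n! into x^(n+1)/(n+1)!
    return total

print(exp_series(1j * pi))   # approximately (-1+0j)
print(cmath.exp(1j * pi))    # same result via the standard library
```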
posted by King Bee at 9:36 AM on November 29, 2011 [7 favorites]


This list is okay, but as far as math and numerical methods go, I'd be far more pleased if the average programmer had a solid foundation in statistics.
posted by vanar sena at 9:38 AM on November 29, 2011 [4 favorites]


No, hash lookups are O(logn), I'm certain of it. God, I should have stuck Comp Sci out, I would have kicked ass.

Hash tables have an average lookup/search of O(1) with worst case of O(n). You are probably thinking of a binary tree which has an average lookup of O(log n) and a worst case of O(n).
posted by Bort at 9:38 AM on November 29, 2011 [1 favorite]


Do what you can to encourage people to enter this field.

I truly believe that you're setting someone up for failure by telling them to major in what is essentially a branch of applied mathematics when all they want to do is program. At the very least, have them consider Computer Engineering or Software Engineering instead of Computer Science. Otherwise, they're just going to hit that tough discrete math course their first or second semester, and say "this is really hard, what does this have to do with anything?" and become disillusioned.
posted by Rhomboid at 9:39 AM on November 29, 2011 [2 favorites]


At the very least, have them consider Computer Engineering or Software Engineering instead of Computer Science.

I don't know how this works in general, but at my school Computer Engineering was a superset of Computer Science in terms of course load.
posted by invitapriore at 9:44 AM on November 29, 2011 [1 favorite]


I truly hate looking at math equations, and I think a large part of it is how ugly our way of expressing them are.

I'm forced to assume by this statement that you live a sad, lonely life devoid of happiness and beauty if you believe such a thing. If you think this is ugly, well, you're beyond saving.
posted by auto-correct at 9:44 AM on November 29, 2011


Do what you can to encourage people to enter this field.

I truly believe that you're setting someone up for failure by telling them to major in what is essentially a branch of applied mathematics when all they want to do is program.


Indeed. All of those "determine the correct sequence of library calls" jobs can be serviced perfectly well by humanities majors.
posted by ceribus peribus at 9:45 AM on November 29, 2011 [3 favorites]


jedicus, I'm fairly certain that only balanced trees (such as red-black trees) are O(logn)...

Hence why I said "most" tree lookups. Pretty much all of the trees of interest in CS have log n search time (e.g. red-black, AVL, splay, B-trees).

Anyway, more topically, another interesting use of information entropy is in the calculation of decision trees.
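Since it keeps coming up: the information entropy formula from item 5 is only a few lines. A minimal sketch (decision-tree learners pick the attribute split that most reduces this quantity):

```python
from math import log2
from collections import Counter

def entropy(labels):
    """Shannon entropy in bits: sum of -p * log2(p) over label frequencies."""
    counts = Counter(labels)
    total = len(labels)
    return sum(-(c / total) * log2(c / total) for c in counts.values())

print(entropy(["heads", "tails"]))        # 1.0 bit: a fair coin flip
print(entropy(["spam", "spam", "spam"]))  # 0.0: no uncertainty at all
print(entropy(["a", "a", "b", "c"]))      # 1.5 bits
```

The password-strength reading is the same idea: more equally likely possibilities means more bits of entropy, hence more guesses for an attacker.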
posted by jedicus at 9:46 AM on November 29, 2011


at my school Computer Engineering was a superset of Computer Science in terms of course load.

That's certainly not how it was when I was at school. At the beginning, perhaps. But CE stopped after hitting all the basics of algorithms, data structures, and operating systems, and moved on to logic design and the like. CS would have continued on with much more theory-oriented topics, e.g. linear algebra, grammars, compilers, language systems, etc.
posted by Rhomboid at 9:47 AM on November 29, 2011


None of this stuff is all that hard to understand if you have someone who can take the time to put the explanation in context. Mathematicians like to make a game of making it hard. Which is why I hate them.

The only two on this list that I think are really important for Software Engineers (which is really who the author is adressing) are Bayes and DeMorgan. And by "important" I mean "I have used in my professional career."

If you're actually a Computer Scientist (which is what is printed on my degrees, but is not my profession) then, yeah, this list looks halfway reasonable for undergrads to be on the look-out for, but any advanced work (i.e. graduate level and beyond) is going to require you to fill your head with half a hundred more specialized equations that are far more important to your particular field of CS, and the rest can basically be paged-out until you need to look them up. It's good to have a math major in any programming group to remind you of these things. Also they tend to have good music collections, which is why I love them.
posted by jeffamaphone at 9:48 AM on November 29, 2011 [2 favorites]


King Bee, you remind me of what Bill Watterson said of his character Miss Wormwood from Calvin and Hobbes:
I think she seriously believes in the value of education, so needless to say, she's an unhappy person.
posted by invitapriore at 9:48 AM on November 29, 2011 [5 favorites]


Computer science is an awesome academic discipline. Programming is an awesome profession. Most computer science departments are mediocre-to-bad at training people to become professional programmers. Computer science departments and aspiring professional programmers really ought not to be wasting time on one another. It's an ill-fated partnership.
posted by silby at 9:48 AM on November 29, 2011 [10 favorites]


Mathematicians like to make a game of making it hard. Which is why I hate them.

I know you're kind of joking, but actually the majority of our game is trying to make it easy to understand. It's all you muggles who refuse to try to understand that should be mocked. =)

invitapriore, I'm not sure I can favorite that comment hard enough.
posted by King Bee at 9:51 AM on November 29, 2011 [1 favorite]


I'm guessing the vast, vast majority of computer scientists know 2 or 3 of these at best.
posted by naju at 9:53 AM on November 29, 2011 [1 favorite]


It's an ill-fated partnership.

Yes and no, but I agree with the sentiment. I propose the following:

  • Computer Engineers build computers (a focus of Electrical Engineering).

  • Software Engineers build software.

  • Computer Scientists are mathematicians who use computers to do their work and math to define the capabilities of their computers.


  • If you want to be a Computer Engineer or a Computer Scientist, there are plenty of good academic programs to choose from, including your local state school (don't go into debt). If you want to be a Software Engineer, you pick one of the above to get a degree and check the box, but if you want to be good you spend all your free time writing software and learning to use tools like debuggers and source control. Or you just drop out and learn by doing. The industry secret is nobody cares about degrees*, only the ability to write code and pay attention to detail.

*If you are an H1B, a degree matters.
posted by jeffamaphone at 9:54 AM on November 29, 2011 [1 favorite]


Oh yeah, in no way would I call myself a Computer Scientist. I am at best a software engineer.
posted by Ad hominem at 9:57 AM on November 29, 2011


This list is okay, but as far as math and numerical methods go, I'd be far more pleased if the average programmer had a solid foundation in statistics.

I'd be far more pleased if the average programmer had a solid foundation in programming.
posted by Ickster at 9:59 AM on November 29, 2011 [8 favorites]


I think the author is making a mistake when he tries to distill a whole sub-field of knowledge down to one equation. I feel like I have a pretty good grasp on the automata and so on -- I'm certainly aware that a regular language would be able to have a middle part repeated indefinitely, but I'd never heard the phrase "Pumping Lemma", at least not that I remember. The phrase "Fermat's Little Theorem" sounded vaguely familiar but the formula itself is something I have a pretty intuitive grasp on.

The only one I truly didn't know was the Fixed-Point (Y) Combinator. Probably because A) I'm not a Paul Graham fanboy, and this is supposed to be famous because it's the eponym of his Y Combinator, and B) I've implemented recursive functions in Scheme in college, but I've never studied lambda calculus outside of that class and we didn't go over that specific thing.

I truly believe that you're setting someone up for failure by telling them to major in what is essentially a branch of applied mathematics when all they want to do is program. At the very least, have them consider Computer Engineering or Software Engineering instead of Computer Science.

Eh, I disagree. I taught myself to program in high school. It's not that hard because you get quick feedback from the computer. College was lots of theory and only two first-year classes on practical C++ (one of which I tested out of). I think the theory stuff helped me be a better programmer. That's probably why I knew most of these formulas.

That said, probably what the software 'industry' needs is a greater focus on software engineering, which I interpret as being about putting together good software that can be easily maintained, etc. There are lots of people out there who may actually know this theory stuff but put together god-awful code that is a huge pain to maintain.

You are probably thinking of a binary tree which has an average lookup of O(log n) and a worst case of O(n).

A balanced binary tree, like a red-black tree, should have O(log n) worst case, as far as I know, right?

None of this stuff is all that hard to understand if you have someone who can take the time to put the explanation in context. Mathematicians like to make a game of making it hard. Which is why I hate them.

Wow, that's one of the dumbest things I've read on MetaFilter in a while. I realize mathematicians like to try to explain things in a way that semi-proves them as they explain, but do you really think they're deliberately obfuscating things? If you dumb things down too much you're not actually explaining it in a way that's useful. Mathematicians typically like math and want other people to like math.
posted by delmoi at 10:00 AM on November 29, 2011 [1 favorite]


and I show them the derivation, ending up with Euler's Identity. They collectively sit in silence, and I ask if they liked that. Some guy asked "so, is this on the exam?"

Every time I hear "is this on the exam" I'm strongly tempted to make the exam much, much more evil. Then I remember that for many students the course is just a hoop to be jumped through, and if they're willing to jump through it I can respect that. Some days I'm Socrates, some days I teach driver's ed.

However, it depresses me to no end that they didn't see the beauty of what you showed them. When I first saw e^(i*pi) = -1, I felt confusion, then awe. Two irrational numbers linked by such a simple equation -- it made me feel as though a Grand Unified Theory of Everything just might be possible, as though I were catching a glimpse of it. It was a religious feeling. I pity them.
posted by justsomebodythatyouusedtoknow at 10:00 AM on November 29, 2011 [4 favorites]


By the way, the real essence of computer science is most artfully explained by Geoff Pullum's Scooping the Loop Snooper.
posted by silby at 10:04 AM on November 29, 2011 [2 favorites]


However, it depresses me to no end that they didn't see the beauty of what you showed them.

Yeah, I get a little bit of that, but I have to let it wash over me. A couple of dudes who sit in the back who seem to like my style looked like they might have been having that "revelation" feeling, but didn't want to be singled out or something.

I also teach at a fairly large state funded institution near a much larger, state funded institution, so we sort of are picking up the pieces over here. Like you say, sometimes you just teach driver's ed.
posted by King Bee at 10:04 AM on November 29, 2011 [1 favorite]


ceribus peribus: "Indeed. All of those "determine the correct sequence of library calls" jobs can be serviced perfectly well by humanities majors."

Wow. Suddenly both my educational choices and my career seem that much more diminished.
posted by Deathalicious at 10:05 AM on November 29, 2011


They collectively sit in silence, and I ask if they liked that. Some guy asked "so, is this on the exam?"

I'm always tempted to answer "it is now!"

(Seriously, I wish I could do away with exams. Nobody ever asks "is this on the homework?")
posted by madcaptenor at 10:10 AM on November 29, 2011 [3 favorites]


Computer science is an awesome academic discipline. Programming is an awesome profession. Most computer science departments are mediocre-to-bad at training people to become professional programmers. Computer science departments and aspiring professional programmers really ought not to be wasting time on one another. It's an ill-fated partnership.

This is very, very true. In fact, many of the big name professors in my CS department did not use computers, at all, for anything. Secretaries printed out any important email for them to read, for example.

My personal opinion is that "Computer Science" is an awful name. They'd be better off calling it Computational Mathematics or something and making Software Engineering a separate program entirely.
posted by LastOfHisKind at 10:17 AM on November 29, 2011 [4 favorites]


Mathematicians like to make a game of making it hard. Which is why I hate them.

Surely you understand that this is an offensive accusation.

In the interest of civil discussion, would you like to describe the experiences which led you to this conclusion?
posted by stebulus at 10:20 AM on November 29, 2011 [1 favorite]


I like "informatics" myself.
posted by silby at 10:21 AM on November 29, 2011


If you are an H1B, every 3 years of professional experience count for 1 year in college.

The way I go about my job: every time a hard problem presents itself I half remember some class from college, google for some context and expense the right book from Amazon. I have learned and forgotten linear algebra at least 3 times (faux 3D transforms in a 2D environment, some industrial process simulation, color transforms). I have used Bayes and O successfully for job interviews only.
posted by Ayn Rand and God at 10:21 AM on November 29, 2011


Rhomboid said: At the very least, have them consider Computer Engineering or Software Engineering instead of Computer Science.

I agree that more people should study Software Engineering and that Computer Science is often overkill for a career in business software development. You mentioned "learn computers" in your original comment, which is why I jumped.
posted by Triplanetary at 10:23 AM on November 29, 2011


    Pairwise comparison is O(n²), your heuristic would be O(n).

    <nitpick> That's only true if you can handwave away the cost of the sort, which is probably going to be O(n log n) and thus dominates the cost of the operation. </nitpick>

    Otherwise, they're just going to hit that tough discrete math course their first or second semester, and say "this is really hard, what does this have to do with anything?" and become disillusioned.

    I had been a professional software developer for eighteen years before I found out what the term "discrete math" meant.

    I truly hate looking at math equations, and I think a large part of it is how ugly our way of expressing them is.

    It is true! It's especially terrible inside CS papers. Why, when we have a surplus of lovely, readable languages for expressing computation, do CS authors insist on encoding their ideas inside these might-as-well-be meaningless jumbles of Greek letters? It just guarantees that almost nobody who might actually want to put their ideas into practice will be able to do so without getting a translation from someone who speaks the academic moon-language.

    It is intensely aggravating to find a paper which appears to be discussing a topic of interest, whose content is locked up inside an eye-melting sprawl of symbolic gibberish. I generally end up re-abstracting the principle out of a working example, if I can find one, because it's easier than actually reading the original paper.
    posted by Mars Saxman at 10:54 AM on November 29, 2011 [2 favorites]
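    The &lt;nitpick&gt; above about the sort dominating is easy to see concretely. Here is a minimal sketch (function names are mine; Python chosen for readability): a pairwise duplicate check does O(n²) comparisons, while the sort-then-scan version does an O(n log n) sort followed by a single O(n) pass, so the sort is the dominant cost.

    ```python
    def has_duplicate_pairwise(items):
        # O(n^2): compare every pair of elements.
        for i in range(len(items)):
            for j in range(i + 1, len(items)):
                if items[i] == items[j]:
                    return True
        return False

    def has_duplicate_sorted(items):
        # O(n log n): the sort dominates; the adjacent-element scan is only O(n).
        ordered = sorted(items)
        return any(a == b for a, b in zip(ordered, ordered[1:]))
    ```

    Both return the same answers; the difference only shows up in running time as n grows.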


    However, it depresses me to no end that they didn't see the beauty of what you showed them. When I first saw e^(i*pi) = -1, I felt confusion, then awe. Two irrational numbers linked by such a simple equation.
    Bleh, I hear that kind of stuff all the time, but to be honest, once you understand why that's the case it doesn't seem all that exciting to me. e^(ix) = cos x + i sin x. It's obvious that if you put π in for x you're going to get a pretty 'normal' number out of a sin/cos function.

    e, sin, and cosine are all based on pretty similar infinite polynomials. Maybe it's because people are taught e and π as magical numbers that they just throw into equations to just make them work, without teaching people where they come from.
    posted by delmoi at 10:55 AM on November 29, 2011
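    delmoi's claim can be checked numerically with Python's cmath module (a quick sanity check, not a proof):

    ```python
    import cmath
    import math

    # Euler's identity: e^(i*pi) = -1, up to floating-point rounding.
    assert abs(cmath.exp(1j * math.pi) - (-1)) < 1e-12

    # The general formula e^(ix) = cos x + i sin x, for an arbitrary x:
    x = 0.7
    assert abs(cmath.exp(1j * x) - complex(math.cos(x), math.sin(x))) < 1e-12
    ```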


    I truly hate looking at math equations, and I think a large part of it is how ugly our way of expressing them is.
    It is true! It's especially terrible inside CS papers. Why, when we have a surplus of lovely, readable languages for expressing computation, do CS authors insist on encoding their ideas inside these might-as-well-be meaningless jumbles of Greek letters?
    I always thought they looked pretty *shrug*. Part of it is probably an issue of density. It takes a lot less space to express a formula than it does pseudocode. And people want mathematical proofs of things in papers, not just 'here's some code, it works'.
    posted by delmoi at 10:57 AM on November 29, 2011


    Layman's explanations, part II:

    DeMorgan's Laws:
    Let's say you have a bunch of statements that are either unambiguously true or false. You can call each of those statements a "proposition".

    Propositions are cool and all but (being mathematicians) we want to perform operations on them! There are a few simple things we can do to a proposition (or set of propositions) to form a new proposition.

    The easiest operation is NOT- take the proposition and form a new, opposite proposition. So if you tell me the proposition "'A' is the first letter of the alphabet" (which happens to be true) I can form a new proposition "'A' is not the first letter of the alphabet" (which happens to be false). To make this kind of conversation easier, we label our propositions with letters and decorate them with operators: You told me A, and I told you ~A. (There are a few ways of writing NOT, but I'm going to keep this plaintext and use tilde.) Since asserting A is the same as saying A is true, asserting ~A is the same as saying A is false.

    We can combine propositions with operators to form new propositions, just as we can combine numerical expressions with operators (like plus, minus, etc.) to form new numerical expressions. The operators we'll use in this way are "AND" and "OR" (other operators exist, but I won't go into them here because they're not needed for this post).

    "AND" is pretty straightforward: if A is true, and B is true, then A AND B is true. When (and only when) either A is false or B is false, then A AND B is false. "OR" is slightly different from how it's normally used in English: if either of A or B is true, then A OR B is true. If both A and B are true, then A OR B is also true. When (and only when) both A is false and B is false, then A OR B is false.

    Well- that was shorter than I thought it was going to be. You already know DeMorgan's Laws! Those laws are just the italicized sentences in the previous paragraph. If you insist on seeing them written out all fancy-like, here they are:

    When (and only when) either A is false or B is false, then A AND B is false.
    ~A OR ~B = ~(A AND B)

    When (and only when) both A is false and B is false, then A OR B is false.
    ~A AND ~B = ~(A OR B)
    posted by a snickering nuthatch at 11:00 AM on November 29, 2011 [3 favorites]
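    Since the laws hold for every combination of truth values, they can also be verified by brute force over all four assignments; a short check in Python:

    ```python
    from itertools import product

    for a, b in product([True, False], repeat=2):
        assert (not (a and b)) == ((not a) or (not b))  # ~(A AND B) = ~A OR ~B
        assert (not (a or b)) == ((not a) and (not b))  # ~(A OR B) = ~A AND ~B
    ```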


    Bleh, I hear that kind of stuff all the time but to be honest, once you understand why that's the case it doesn't seem all that exciting to me...e, sin, and cosine are all based on pretty similar infinite polynomials.

    True, but not in the definitions of those things. "e" is the number that makes the natural logarithm function equal to 1. Sine and cosine are based on where rays intersect the unit circle. "i" is the square root of -1. Pi is the ratio of the circumference of a circle to its diameter.

    You sure could define e^x, sin(x), and cos(x) by their Maclaurin series if you wanted to (instead of having them be consequences), but then all those other connections would be the mythical things. Why should all those things be related in such a simple way? It's got to be a little bit surprising how these numbers all show up, right?
    posted by King Bee at 11:02 AM on November 29, 2011 [2 favorites]
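    For anyone who hasn't seen the Maclaurin series King Bee mentions, here is a rough numerical sketch (truncated series; helper names are mine; Python stdlib only):

    ```python
    import math

    def exp_series(x, terms=30):
        # e^x = sum over n of x^n / n!
        return sum(x**n / math.factorial(n) for n in range(terms))

    def sin_series(x, terms=30):
        # sin x = sum over n of (-1)^n * x^(2n+1) / (2n+1)!
        return sum((-1)**n * x**(2*n + 1) / math.factorial(2*n + 1)
                   for n in range(terms))

    def cos_series(x, terms=30):
        # cos x = sum over n of (-1)^n * x^(2n) / (2n)!
        return sum((-1)**n * x**(2*n) / math.factorial(2*n)
                   for n in range(terms))

    # The truncated series agree with the library functions to high precision:
    assert abs(exp_series(1.0) - math.e) < 1e-12
    assert abs(sin_series(1.0) - math.sin(1.0)) < 1e-12
    assert abs(cos_series(1.0) - math.cos(1.0)) < 1e-12
    ```

    The structural similarity of the three series (the same x^n/n! terms, split by sign and by even/odd exponent) is exactly what makes e^(ix) = cos x + i sin x fall out.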


    A balanced binary tree, like a red-black tree should have O(log n) worst case, as far as I know, right?

    Correct. O(n) would be the case where a binary tree is completely unbalanced - all nodes are on the right hand side or left hand side, making it essentially a linked list.


    That said probably what the software 'industry' needs is a greater focus on Software engineering which I interpret as being about putting together good software that can be easily maintained, etc. There are lots of people out there who may actually know this theory stuff but put together god-awful code that is a huge pain to maintain.


    I agree 100%. In my experience in the business programming world, developers with or without a CS degree and little experience do not have enough of an understanding of what should be the basics like: encapsulation, abstraction (vs copy, paste, modify), unit tests, source control, simplicity (vs clever "hacks" that are hard to maintain), etc.

    Things may have changed since I got my BS about a decade ago, but I believe there is an unfulfilled need for an applied CS track in addition to the classic theoretical CS track in universities.
    posted by Bort at 11:04 AM on November 29, 2011


    delmoi, what bothers me is the lack of any system for translation of this knowledge from academia to industry. It seems like such a waste - isn't the whole point of all this research that someone, somewhere, is going to implement it and thereby do something useful? It's not clear to me how the people writing the papers expect that to happen, when they express their ideas in terms that the great majority of working software engineers will not be able to read.

    Software engineers have pretty good systems for communicating amongst themselves, and an aspiring software engineer can take their pick from hundreds of books and thousands of web sites that will walk them through what they need to know to learn how to develop software.

    Computer scientists must have similarly good systems for communicating amongst themselves, as judged by all these papers which start out with fascinating abstracts and then dissolve into glop. People keep writing the glop, which means people keep reading it, which means that someone, somewhere, is teaching people how to read it, and apparently doing so with some success.

    Whose job is it, I want to know, to translate from one world to the other? Where can I read translations of academic papers? What kind of person is it who does this sort of work? Because I keep coming up blank, which means I have to do it myself, and it just sucks.
    posted by Mars Saxman at 11:05 AM on November 29, 2011


    some days I teach driver's ed.
    Why disparage driving schools? They're probably up there with med schools, officer candidate schools, and (some) engineering colleges on the "list of schools where good teaching and attentive students can prevent needless deaths".

    Admittedly, the opportunities to encounter fascinating ideas and sublime equations in driving school are much more limited.
    posted by roystgnr at 11:07 AM on November 29, 2011 [1 favorite]


    Just yesterday I was trying to see if I could OR procmail conditions together instead of ANDing them. The docs suggest using DeMorgan's!
    posted by sbutler at 11:10 AM on November 29, 2011 [1 favorite]


    isn't the whole point of all this research that someone, somewhere, is going to do implement it and thereby do something useful?

    No, it really isn't.

    One of my favorite quotes from mathematician G.H. Hardy: I have never done anything "useful". No discovery of mine has made, or is likely to make, directly or indirectly, for good or ill, the least difference to the amenity of the world.
    posted by King Bee at 11:11 AM on November 29, 2011


    delmoi, what bothers me is the lack of any system for translation of this knowledge from academia to industry. It seems like such a waste - isn't the whole point of all this research that someone, somewhere, is going to do implement it and thereby do something useful? It's not clear to me how the people writing the papers expect that to happen, when they express their ideas in terms that the great majority of working software engineers will not be able to read.

    This is quite wrong, and -- speaking as a mathematician -- I find the anti-mathematical, anti-intellectual bent in this thread to be genuinely distressing. For example, in number theory, one of the main areas where theoretical and computational research come together, there is a huge amount of genuinely important academic research going on, and this research is the entire basis for all of the cryptography used in computer science. Remove academia from the equation and you do not have any modern cryptography. None at all.

    Other areas of theoretical research are highly important in practical usage. When you play video games, where do you think all of the number-crunching algorithms behind the physics engines come from? That's right: they originally come from academic research in differential equations. Where do the equations that computer scientists working in oceanography, meteorology, applied biology, chemistry, etc come from? They come from academic mathematics. Here is just one example from the large state school where I got my Ph.D.: Mathematicians there are at work designing models of internal organs that get used in hospitals by computer scientists. They are at work writing computer programs that are used by tons of people in industry.

    I could go on and on. The idea that mathematics is somehow only of use to academics is not just incorrect, it's dangerously incorrect, because it ignores the huge amount of vital interface between academic mathematics and the real world.
    posted by Frobenius Twist at 11:23 AM on November 29, 2011 [12 favorites]


    Mars Saxman, isn't the "system for translation of this knowledge from academia to industry" just a degree in Computer Science? A good degree program will teach students enough to at least understand whether a paper is relevant to their work.

    After I took the time to understand the reasoning and purpose behind mathematical notation and how it worked (took years) it actually became an efficient and consistent way to communicate very complex ideas and build on existing knowledge.

    Additionally, many large tech companies fund research divisions that do practical things with this stuff all the time. Netflix and Google come to mind as companies that directly profit from innovation and cutting-edge research in even very slightly more effective algorithms. This stuff is genuinely difficult and I don't think there can be any shortcuts to understanding and using the ideas and techniques.

    Software Engineering communities might seem easier to understand because of the use of diagrams, visuals, and language metaphors to explain concepts, but I'm sure a similar investment of time and effort has to be made to understand the foundational concepts before moving on to all the recent research.
    posted by musicismath at 11:24 AM on November 29, 2011


    If you want to be a Computer Engineer or a Computer Scientists, there are plenty of good academic programs to choose from, including your local state school (don't go into debt). If you want to be a Software Engineer, you pick one of the above to get a degree and check the box, but if you want to be good you spend all your free time writing software and learning to use tools like debuggers and source control.

    Exactly the same argument applies to computer engineering and computer science. Even the best academic programs provide only a minimal high-level introduction to their fields. If you want to be a good computer scientist, yeah, a good academic CS program helps, but then spend your spare time doing computer science (reading, writing, and presenting papers; writing research code; doing your own experiments; proving theorems, whatever). Similarly, if you want to be a good computer engineer, find a good academic CE program, but spend your spare time doing computer engineering.

    One of the real values of a "good" academic program, in whatever discipline, is the opportunity for useful, constructive feedback on the stuff you do outside of class, both from the faculty and from your fellow students. The official curriculum, even in the best academic programs, provides at most a superficial introduction to their field; if you want to be really good, you have to push further on your own.

    isn't the whole point of all this research that someone, somewhere, is going to do implement it and thereby do something useful?

    HAHAHAHAHAHAHAHAHA.

    Ha.

    Um. No.
    posted by erniepan at 11:26 AM on November 29, 2011 [1 favorite]


    Useful is not the same thing as important.
    posted by yeolcoatl at 11:26 AM on November 29, 2011


    Following up from

    Jpfed

    Why do we call the number of ways to choose k items out of n Binomial Coefficients?

    Well, let's say you have a binomial, which is what we call a sum of 2 terms, e.g. (a + b).
    And let's say we wanted to raise it to the nth power. That is, we want to multiply (a + b) by itself n times. Multiplying anything by (a + b) is just multiplying it by a, and then by b, and then adding both together. For each of the n (a + b)'s we get to multiply by an a one time and by a b another and add stuff. The number of times we multiplied by the a, say k times, shows up as the exponent of a, and the remaining (n-k) times we multiplied by b becomes the exponent of b. Then when we group all the like terms (same exponents of a and b together), how many a^k b^(n-k) terms will there be total?
    One for every way of choosing the k factors of a from the total n factors. And this count becomes the coefficient of the term a^k b^(n-k).
    posted by Obscure Reference at 11:26 AM on November 29, 2011 [2 favorites]
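    The argument above can be checked by brute force in Python: multiplying by (a + b) repeatedly is exactly Pascal's rule, and counting the subsets of k factors directly agrees with math.comb:

    ```python
    from itertools import combinations
    from math import comb

    n, k = 5, 2

    # Expand (a+b)^n by multiplying by (a+b) n times; coeffs[k] is the
    # coefficient of a^k b^(n-k).
    coeffs = [1]  # (a+b)^0 = 1
    for _ in range(n):
        coeffs = [(coeffs[i] if i < len(coeffs) else 0) +
                  (coeffs[i - 1] if i >= 1 else 0)
                  for i in range(len(coeffs) + 1)]

    # Count the ways to choose which k of the n factors contribute an a:
    ways = sum(1 for _ in combinations(range(n), k))

    assert coeffs == [1, 5, 10, 10, 5, 1]
    assert coeffs[k] == ways == comb(n, k) == 10
    ```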


    I realized users sometimes want to do *more* work themselves if it gives them a feeling of control.

    More like Microsoft has taught them to never trust a computer to read your mind.
    posted by straight at 11:31 AM on November 29, 2011


    More like Microsoft has taught them to never trust a computer to read your mind.

    My explanations should have started with lesson number one - never distrust a computer!
    posted by a snickering nuthatch at 11:36 AM on November 29, 2011


    I'm sure we can get good layperson explanations for most of this stuff. I'll try to start...
    Layman's explanations, part II:...


    Yay! Now do Karmarkar's method!
    Please?

    posted by ceribus peribus at 11:44 AM on November 29, 2011


    Those interested in a survey of CS topics should pick up The Turing Omnibus, which is a little dry, but covers topics from the mundane to the brain-melting (and covers some topics in the above list).

    One of my favorites is the busy beaver problem, which illustrates our inability to reason about even very simple computers. This is probably the best lesson for a CS person: The halting problem exists, deal with it.
    posted by RobotVoodooPower at 11:45 AM on November 29, 2011 [2 favorites]


    I could go on and on. The idea that mathematics is somehow only of use to academics is not just incorrect, it's dangerously incorrect, because it ignores the huge amount of vital interface between academic mathematics and the real world.

    That's great except for the widespread disdain for industry and practical application in general throughout the academy. You don't get tenure by helping industry solve a pressing problem, or from teaching; you get it by publishing papers for other academics to read. When professors look at me like I'm an idiot for asking about applications, it's clear to me where the problem lies.
    posted by LastOfHisKind at 11:45 AM on November 29, 2011 [3 favorites]


    Re: pi and e:

    It's amazing that they're connected geometrically, but once you start studying almost any advanced science they show up everywhere. Especially anything involving waves or periodicity, which is pretty much everything.

    After a while, when looking at quantum mechanics and physics equations, you get used to them being everywhere, but if you take a step back and look at it, and how simple and beautiful the relationship makes so many equations, it's just awe inspiring.
    posted by empath at 11:46 AM on November 29, 2011 [1 favorite]


    "Maybe it's because people are taught e and π as magical numbers that they just throw into equations to just make them work, without teaching people where they come from."

    For those of you distressed by people going "huh, what?" about all this beautiful math, this sentence captures the problem right clear.

    You take for granted the layer of abstraction in which you are comfortable conversing. We're still trying to grasp the handlebars next to our ears. I think this is the main issue. For some of us, the foundations of mathematics were taught at the wrong age (around 7th or 8th grade, usually). What is happening at that time for most people? Puberty. So, if you were lucky enough to have your hormone soup kick off just a few years later, you probably had enough attention span to learn the initial step of abstraction from x+y to something useful, like why the hell are we using Greek letters, and what the fuck is with the squiggly lines that keep showing up and I can't remember for the life of me what they mean. It's like being dumped in a foreign land and hearing snippets that you understand (usually words like "the" and any swear words), while the rest just becomes noise you can't retain. It is frustrating. This is why Jpfed's explanations above have been freakishly enlightening. I can actually understand the abstraction that he is describing. Yes, I understand that it is odd to think that people somehow missed out on these things, but I think you have to understand that for many people, public institutional model schooling was a bad idea (Wikipedia says it is based on the Prussian model, though most of us think of it as the "Factory Model").

    We aren't stupid, nor are we anti-intellectual. We just need some tutoring in speaking the language. Some of us really want to learn, too, but the autodidacts (myself included) have had problems finding the right informational sources that start us back far enough to understand WHY you do things, not just that you do things. It is helpful to understand the reasoning for something, versus simply accepting that "that's just how it is done". Jpfed's explanation shows a reason WHY we use n!. It's right there in plain English. We want to multiply everything from n all the way down to 1 together. That's awesome. Now I know why n!. Now I have a new word. Now someone point me to as clear an explanation for every other function in calculus and higher math.
    posted by daq at 11:53 AM on November 29, 2011 [7 favorites]


    Computer Scientists are mathematicians who use computers to do their work and math to define the capabilities of their computers

    I'm pretty sure that this has already been done.

    By the way, I'm an unabashed software engineer with a strong grounding in CS.

    I attended and presented at the IPCV conference a few years back (Image Processing and Computer Vision) and after reading the papers and seeing my fellow presenters, I was pretty appalled at the content. I wrote up a blog post about the presentation side of things. In terms of the "wall of math", I saw more than one presenter put up a slide that was just math equations - no explanations. Pitiful. Why? Because for the most part, they were trying to express things like triple-nested for loops in mathematical notation, with subsections that could've mapped onto if/else blocks, and I'm sorry, if you're doing nested for loops and if/else blocks, that's what you should present.

    One place where Math/software diverge is naming, so in the wall of math, you might see a definition of ρ(x) and σ(x) and a few slides later you're expected to remember the meaning if not the full definition when placed in a larger context. If I wrote my code like this, I'd get keelhauled. Yes, absorbingBoundary(x) and nonAbsorbingBoundary(x) are longer, but the name carries the meaning. There is nothing in math representations that prevents you from doing this. Terseness at the cost of readability is never a win.
    posted by plinth at 11:57 AM on November 29, 2011 [5 favorites]


    Danf FTW
    posted by Lukenlogs at 11:59 AM on November 29, 2011


    That's great except for the widespread disdain for industry and practical application in general throughout the academy.

    Sorry, but this is just completely incorrect. The applied mathematicians I know all have close and fruitful ties to industry. Do you know about SIAM? It's one of the most important applied mathematical organizations, and a large part of its mandate is to strengthen ties between industry and mathematics. Applied mathematicians get money from and work with industry all of the time. I knew plenty of people who got Ph.Ds in applied math and then went on to very good jobs in industry, with the blessings (and help!) of their advisors.

    When professors look at me like I'm an idiot for asking about applications, it's clear to me where the problem lies.

    Let me respectfully submit that your experience here is in no way indicative of the mindset of most applied mathematicians.
    posted by Frobenius Twist at 12:00 PM on November 29, 2011 [1 favorite]


    Ad hominem wrote: Oh yeah, in no way would I call myself a Computer Scientist. I am at best a software engineer

    Better than me. I'm just a handyman. A handyman for the 21st century, to be sure, but a handyman nonetheless.
    posted by wierdo at 12:03 PM on November 29, 2011 [3 favorites]


    I could go on and on. The idea that mathematics is somehow only of use to academics is not just incorrect, it's dangerously incorrect, because it ignores the huge amount of vital interface between academic mathematics and the real world.

    Frobenius, I'm afraid I was trying to express exactly the opposite of the message you apparently got from my comment. I am pissed off because the interface between academic mathematics and the real world is too small and too difficult to negotiate. I would really like to know where you are seeing this "huge amount" of vital interface, because it's been years now that I've been looking for a forum where this kind of conversation happens, with no success.

    The whole point of my frustration is that I know academic computer science researchers are doing interesting work, and I want to apply it - I want to find out what they've discovered so I can make use of it. I am deeply, keenly aware that the work going on in academia is relevant to my professional practice, and I'm pissed off because I can't get at it.

    I am complaining about the academic use of mathematical notation because that is the single largest obstacle in figuring out just what the hell academic researchers are actually doing. Professional software engineers never use this notation to communicate with each other; it is completely foreign to the practice of software development. Software engineers are notorious for discussing their projects via whiteboard, but I have literally never seen two engineers describe the project they are working on using equations - it just doesn't happen. Even here at Google, which is the most academically-oriented software engineering organization I've ever encountered, if you walk up and down the hallways and snoop on the whiteboards, you will never see any equations anywhere - it's all boxes and lines, depictions of graphs, Venn diagrams, that sort of thing.

    It's infuriating to have to reverse engineer information that is clearly printed right there in front of me in some research paper, just because nobody has bothered to translate the specialist notation into language that might actually be comprehensible outside the academic research community.
    posted by Mars Saxman at 12:04 PM on November 29, 2011 [3 favorites]


    No one had. I sighed, and moved on.
    posted by King Bee at 12:36 PM on November 29


    King Bee - I usually do this proof at the end of calc ii to show them where Euler's formula comes from.

    Then, I really blow their minds. I say, "You could generate all of trig you have learned just from these series for sin x and cos x, and never even mention triangles at all."


    It usually seems to work.
    posted by wittgenstein at 12:19 PM on November 29, 2011


    Whose job is it, I want to know, to translate from one world to the other? Where can I read translations of academic papers? What kind of person is it who does this sort of work? Because I keep coming up blank, which means I have to do it myself, and it just sucks.

    In some sense, textbook authors (of advanced textbooks) are supposed to do this. I'm not sure they always succeed, though. (But I shouldn't throw stones; writing even a bad textbook is a lot of work.)
    posted by madcaptenor at 12:23 PM on November 29, 2011


    One place where Math/software diverge is naming, so in the wall of math, you might see a definition of ρ(x) and σ(x) and a few slides later you're expected to remember the meaning if not the full definition when placed in a larger context.

    You're expected to, yes. But in practice, you don't. One solution to this is to give math talks in rooms with multiple blackboards; then you can write on all of them, and hopefully when you need the definition you can just gesture to the board that it's on. (I have heard that some people who are better at giving talks than I am actually think about this while they're planning their talks.) I don't know a good way to do this with slides.
    posted by madcaptenor at 12:26 PM on November 29, 2011


    This is for Deathalicious, since he said he had no idea why Eigenvectors are useful and a Ctrl+F of "Eigen" turns up no comments that refer to it (I am a game designer, not a game programmer, real game programmers feel free to kick my ass over anything I get wrong here):

    Eigenvectors get used pretty heavily when dealing with rotation in 3D graphics programming, because people tend to think in terms of yaw/pitch/roll which leads to some problems (gimbal lock being a big one). For a practical visual example of this - spend some time animating objects in Matinee with the Unreal Development Kit, eventually you'll run into weird situations where the object insists on yawing 359 degrees counterclockwise when you wanted it to yaw 1 degree clockwise.

    Matinee has a nice flag labeled "use Quaternions" that clears up these issues, but also prevents you from editing yaw/pitch/roll curves. This is because quaternions basically represent rotation as a single vector and scalar, i.e. the new Z+ axis and a set rotation about it (local yaw in the case of Z+), and are therefore free of gimbal lock issues.

    Quaternions also have some nice performance advantages.

    Anyways: rotation matrix->quaternion conversion involves eigenvectors, so if you're doing low-level graphics programming for games or any realtime, interactive three dimensional content you're probably going to run into them a fair bunch.
    posted by Ryvar at 12:28 PM on November 29, 2011 [1 favorite]
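    One concrete way to see the eigenvector connection Ryvar mentions: the axis of a 3D rotation is an eigenvector of the rotation matrix with eigenvalue 1, since points on the axis are left fixed (Rv = v). A minimal pure-Python check using the standard rotation matrix about the z axis (variable names are mine):

    ```python
    import math

    theta = math.radians(30)
    c, s = math.cos(theta), math.sin(theta)

    # Rotation by theta about the z axis:
    R = [[c,  -s,  0.0],
         [s,   c,  0.0],
         [0.0, 0.0, 1.0]]

    axis = [0.0, 0.0, 1.0]  # candidate eigenvector: the rotation axis

    # Multiply R by the axis vector:
    Rv = [sum(R[i][j] * axis[j] for j in range(3)) for i in range(3)]

    # R leaves the axis unchanged: an eigenvector with eigenvalue 1.
    assert all(abs(u - v) < 1e-12 for u, v in zip(Rv, axis))
    ```

    Any vector off the axis gets rotated, so the axis is the only (real) direction R preserves.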


    King Bee: You sure could define e^x, sin(x), and cos(x) by their Maclaurin series if you wanted to

    I probably do want to, if I'm working with complex variables. (Otherwise, I have a soft spot for defining sine and cosine as the solutions to the addition law for cosine, taken as a functional equation; see Robison, "A new approach to circular functions, π, and lim sin(x)/x", Math. Mag. 41.2 (March 1968), 66–70, jstor.)

    but then all those other connections would be the mythical things. Why should all those things be related in such a simple way?

    One way of explaining this (which I think I got from one of Doron Zeilberger's essays) is to say that these functions embody very simple kinds of structure, and simple structures occur in diverse contexts just because they are simple. This is also why the Fibonacci numbers occur in many settings, for example, and Catalan numbers, etc. (An analogy could be made to the law of small numbers; the law of simple structures?)

    wittgenstein: Then, I really blow their minds. I say, "You could generate all of trig you have learned just from these series for sin x and cos x, and never even mention triangles at all." It usually seems to work.

    That's encouraging. I've been meaning to write an hour-long talk for undergraduates about the fact that definitions are chosen, not given, with sine as the running example. I was hoping that if I get students at the right moment in their mathematical development, it could, as you say, blow their minds.
    posted by stebulus at 12:29 PM on November 29, 2011 [2 favorites]


    True, but not in the definitions of those things. "e" is the number that makes the natural logarithm function equal to 1. Sine and cosine are based on where rays intersect the unit circle. "i" is the square root of -1. Pi is the ratio of the circumference of a circle to its diameter.

    You sure could define e^x, sin(x), and cos(x) by their Maclaurin series if you wanted to (instead of having them be consequences), but then all those other connections would be the mythical things. Why should all those things be related in such a simple way? It's got to be a little bit surprising how these numbers all show up, right?
    Well, what's the difference between defining e one way and defining it another way if you end up with the same number? The difference between defining it as "the number that makes the natural logarithm equal one" and defining it as the root of an infinite polynomial is that the polynomial method tells you what it is.

    The other thing is that I don't even think it makes sense to say "e" is the number that makes the natural logarithm function equal to 1." because I thought the 'natural' log is the log base e, the same way log base 2 and log base 10 are defined by their bases. Maybe you meant "the base of a log such that f(x) = logb(x) = f'(x) = ∫ f(x)dx"

    But anyway, when I first came across e in calculus class it immediately struck me that 'e' was a manmade thing, a number chosen to make calculus with logarithms easy. It wasn't some mysterious object sent from the heavens. The fact that it's based on a similar polynomial as sin and cosine makes it not very mysterious that pi would be related.
    delmoi, what bothers me is the lack of any system for translation of this knowledge from academia to industry.
    Well, I don't know. I studied computer science in school because I loved computers, not because I was worried about getting a job. But I was perfectly capable of learning to program on my own, and I found the theory stuff really interesting. I think having a solid grasp of algorithms is probably the most practically useful thing I learned in school: it keeps you from writing code that's O(n²) without realizing it, and helps you figure out how to make it go faster.
    Also, keep in mind that this list is actually written for "Computer science geeks", not necessarily "programmers" or "software engineers"
    Whose job is it, I want to know, to translate from one world to the other? Where can I read translations of academic papers? What kind of person is it who does this sort of work? Because I keep coming up blank, which means I have to do it myself, and it just sucks.
    Smart people?
    posted by delmoi at 12:37 PM on November 29, 2011


    because I thought the 'natural' log is the log base e, the same way log base 2 and log base 10 are defined by their bases.

    If you define the natural log by ln(x) = ∫₁ˣ 1/t dt, then defining e to be the solution to ln(e) = 1 makes sense. And it's not hard to show that ln, defined in this way, satisfies ln(ab) = ln(a) + ln(b), so it makes sense to call this particular integral a logarithm.
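
    This is easy to check numerically. A quick Python sketch (illustrative only; the function name is mine), taking the integral as the definition:

    ```python
    import math

    def ln_via_integral(x, steps=200_000):
        # Approximate ln(x) = ∫ from 1 to x of (1/t) dt with the midpoint rule.
        h = (x - 1) / steps
        return sum(h / (1 + (i + 0.5) * h) for i in range(steps))

    # e is then "the input where this integral hits 1":
    print(ln_via_integral(math.e))   # ≈ 1.0

    # ...and the log law ln(ab) = ln(a) + ln(b) holds for the integral, too:
    print(ln_via_integral(6.0))
    print(ln_via_integral(2.0) + ln_via_integral(3.0))
    ```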
    posted by madcaptenor at 12:54 PM on November 29, 2011 [1 favorite]


    My personal opinion is that "Computer Science" is an awful name. They'd be better off calling it Computational Mathematics or something and making Software Engineering a separate program entirely.

    I actually have a master's degree called Software Engineering, which is a separate department in the School of Computer Science, but we had to use quite a bit of math there too. Predicate calculus for proving models of software, linear regression for work estimation techniques, statistics for dynamic code analysis, etc. If you want to have "fun", learn to specify your software using Z Notation.
    posted by octothorpe at 12:55 PM on November 29, 2011


    whenever I see a math equation(unless it's an algebraic equation, which are a piece of pie), my eyes glaze over

    This happens way too often for me, and I'm a freaking computer scientist trying to get a degree in coding theory. I don't know why; it's possible that it's because equations just seem "jumbled" compared to a simple straightforward program in pseudocode (or, god forbid, an illustration of some sort).
    posted by ymgve at 1:42 PM on November 29, 2011 [1 favorite]


    The other thing is that I don't even think it makes sense to say "e" is the number that makes the natural logarithm function equal to 1." because I thought the 'natural' log is the log base e, the same way log base 2 and log base 10 are defined by their bases. Maybe you meant "the base of a log such that f(x) = logb(x) = f'(x) = ∫ f(x)dx"

    As madcaptenor says above, if you define the "natural logarithm" in terms of an integral, then the mythical number "e" is the number that when you plug it in, you get 1 (it is not hard to show that if you define the function in terms of an integral, it is one-to-one, increasing, and its range is all reals). Since the function is one-to-one, it has an inverse, call it "exp(x)". As it turns out, this function agrees with "e^x" for every choice of x, and so e^x is the inverse of that funny function you defined in terms of an integral.

    Then you define what a "logarithm" is with other bases.

    In a calculus setting, it makes perfect sense to first define the natural logarithm in this way, because the only "basic" function up to that point for which you don't have a nice antiderivative is 1/x. So, you use the fundamental theorem of calculus to make one, namely ∫₁ˣ 1/t dt.

    It's a very standard way for doing all this stuff with logs and exponentials in a calculus class, but rigorously.
    posted by King Bee at 1:45 PM on November 29, 2011 [1 favorite]


    Also, I'd like to point out that it's slightly harder to see that e is the number you get when you plug 1 in for x in 1 + x + x^2/2! + x^3/3! + x^4/4! + ... It's a little bit easier to see that using the intermediate value theorem, there is a number that makes ln(x) equal to 1, and that it has to be somewhere between 2 and 3. By examining the function (1 + x)^(1/x) for x close to zero and looking at its graph, you can see that the limit at 0 exists (even though the function fails to be defined). Then you can plug in some "small" values for x to get a feel for how big e really is.

    I'm not saying what you're suggesting is wrong, I'm saying that infinite sums (and "infinite polynomials") are going to be a little harder to grasp for some people than the intermediate value theorem (which everyone in the room intuitively already knows to be true).
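
    If you want to see the two routes to e side by side, here's a quick Python sketch (mine, just for illustration): the factorial series closes in on e almost immediately, while Bernoulli's limit creeps toward it.

    ```python
    import math

    # Partial sums of 1 + 1/1! + 1/2! + ... converge to e very quickly:
    series, term = 0.0, 1.0
    for k in range(15):
        series += term        # add 1/k!
        term /= k + 1

    # while (1 + 1/n)^n converges much more slowly:
    limit_approx = (1 + 1 / 1000) ** 1000

    print(series - math.e)        # essentially zero after 15 terms
    print(limit_approx - math.e)  # still off in the third decimal at n = 1000
    ```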
    posted by King Bee at 1:49 PM on November 29, 2011


    e wasn't invented to make logarithms easy. It was discovered in a problem involving compounded interest. It's not a man made number, it's a fundamental constant that comes up over and over in all kinds of applications. It just happens to also make calculus easier.
    posted by empath at 1:56 PM on November 29, 2011 [1 favorite]


    empath: It's amazing that they're connected geometrically, but once you start studying almost any advanced science they show up everywhere.... After a while, when looking at quantum mechanics and physics equations, you get used to them being everywhere, but if you take a step back and look at it, and how simple and beautiful the relationship makes so many equations, it's just awe inspiring.

    Not to mention a little eerie, as the physicist Eugene Wigner pointed out in the opening of his essay The Unreasonable Effectiveness of Mathematics in the Natural Sciences:
    There is a story about two friends, who were classmates in high school, talking about their jobs. One of them became a statistician and was working on population trends. He showed a reprint to his former classmate. The reprint started, as usual, with the Gaussian distribution and the statistician explained to his former classmate the meaning of the symbols for the actual population, for the average population, and so on. His classmate was a bit incredulous and was not quite sure whether the statistician was pulling his leg. "How can you know that?" was his query. "And what is this symbol here?" "Oh," said the statistician, "this is pi." "What is that?" "The ratio of the circumference of the circle to its diameter." "Well, now you are pushing your joke too far," said the classmate, "surely the population has nothing to do with the circumference of the circle."

    Naturally, we are inclined to smile about the simplicity of the classmate's approach. Nevertheless, when I heard this story, I had to admit to an eerie feeling because, surely, the reaction of the classmate betrayed only plain common sense.
    And for sake of completeness and to bring this back to CS, here's Richard Hamming's response, The Unreasonable Effectiveness of Mathematics.
    posted by Westringia F. at 2:09 PM on November 29, 2011 [2 favorites]


    e wasn't invented to make logarithms easy. It was discovered in a problem involving compounded interest. It's not a man made number, it's a fundamental constant that comes up over and over in all kinds of applications. It just happens to also make calculus easier.

    I disagree. Of course it's man-made; even pi is man-made, in a sense, since there are no true, perfect circles in nature.

    Wikipedia gives the history of e this way:
    The first references to the constant were published in 1618 in the table of an appendix of a work on logarithms by John Napier.[4] However, this did not contain the constant itself, but simply a list of logarithms calculated from the constant. It is assumed that the table was written by William Oughtred. The discovery of the constant itself is credited to Jacob Bernoulli, who attempted to find the value of the following expression (which is in fact e): e = lim(n→∞) (1 + 1/n)^n

    The article on Napier goes into more of the history, but he was using numbers based on e to do logarithms, apparently without realizing its significance or pointing it out as a discrete number. Bernoulli was looking at compound interest, but this was a couple of years after Leibniz's publication on calculus, and it used a very 'calculus-y' method (taking the limit of closer and closer approximations to the curve).
    posted by delmoi at 2:38 PM on November 29, 2011



    Well, what's the difference between defining e one way and defining it another way if you end up with the same number?
    The number itself isn't what's important -- and it's certainly not arbitrary, unless you're thinking about the actual decimal digits for some reason.

    i, pi, e are related on a mathematical reasoning level if you dig deep enough. That they are connected by the mathematics isn't the mind blowing part. Like you said, it may be easier or harder to see this when pushing symbols, depending on the approach you take. But I'll make the argument that none of math is really mind-blowing taken just by itself. Or at least, not to most people who are practically inclined. It's just a form of reasoning. It's like saying you can snap two legos together. Well, duh, that's how they work...

    The mind blowing part is when you take the math and map it back to higher level concepts. It's when you back up -- i: the third dimension, pi: circles, e: compounding, {dx}: rate of change. Guess what: they are all intimately related.

    Some of this is obvious in retrospect, even at a high level, but anyone who says this isn't mind-blowing, at least a little (mind-dusting? :-)) -- well, I dunno what to say to that.
    posted by smidgen at 2:44 PM on November 29, 2011


    delmoi: Well, what's the difference between defining e one way and defining it another way if you end up with the same number? The difference is defining it as "the number that makes the natural logarithm equal one" vs. defining it as the root of an infinite polynomial is that the polynomial method tells you what it is

    Choosing a definition isn't just choosing a definition, it's part of choosing the structure of the entire body of knowledge about the object to be defined. We know a lot of things about e. If you take one thing as the definition, you need to prove the rest. So you face a complicated system of trade-offs. If you choose the definition that makes computing the numerical value easy, then maybe you'll have to work harder to prove essential formal properties like Euler's formula. If you take one of those formal properties as the definition, you might have to work hard just to prove that such functions exist. The historically sound approach might be accessible but obstruct understanding of later clarifications. And so on. Which option is best depends heavily on purpose and audience and taste, and you can't judge the choice of definition in isolation from all those trade-offs.

    What you see in first-year calculus textbooks might not be the approach you prefer, but it is chosen advisedly.

    (By the by: a polynomial by definition has only finitely many terms; you mean power series. And e isn't a root of the power series, even if we extend the word "root" to such functions.)
    posted by stebulus at 2:45 PM on November 29, 2011


    The mind blowing part is when you take the math and map it back to higher level concepts. It's when you back up -- i: the third dimension, pi: circles, e: compounding, {dx}: rate of change. Guess what: they are all intimately related.
    Well, I don't know; it doesn't really mean much to me.
    What you see in first-year calculus textbooks might not be the approach you prefer, but it is chosen advisedly.
    Well, you have to give people a definition that they can understand at the time, right?

    Anyway, the point is e doesn't just 'magically appear' in formulas; people observe things and come up with formulas that match what they're looking for. e is an obvious choice in a function that uses exponential growth, but you could factor it out using one of the expressions that leads to it and end up with some unwieldy monstrosity if you wanted to.

    Anyway, what I was saying wasn't exactly "e is a manmade thing" but rather "it seemed like e was a man-made thing when I was taking calculus in high school"; that impression of e is why I wasn't as impressed by that equation, because I didn't see e as some kind of magical thing that appeared from the heavens.
    posted by delmoi at 2:59 PM on November 29, 2011


    By the by: a polynomial by definition has only finitely many terms; you mean power series. And e isn't a root of the power series, even if we extend the word "root" to such functions.
    Well, transcendental numbers are essentially numbers that can't be expressed as zeros of a finite polynomial with rational coefficients. I was using the term 'infinite polynomial' I guess somewhat imprecisely, but it should be obvious what I meant. Also, if e = c1 + c2 + c3 + c4... then 0 = -e + c1 + c2 + c3 + c4...
    posted by delmoi at 3:07 PM on November 29, 2011


    OK, but you wouldn't want to define e as the root of a (infinite) polynomial whose constant coefficient is 1-e, would you? Because that's what you'd be doing in this case. That is pretty circular.
    posted by King Bee at 3:13 PM on November 29, 2011


    Well, you have to give people a definition that they can understand at the time, right?

    Sure. You don't think students can understand "e = ln⁻¹(1)"?

    why I wasn't as impressed by that equation, because I didn't see e as some kind of magical thing that appeared from the heavens.

    Ah, got it. Makes sense.

    The first time I saw Euler's identity, I was puzzled by the non-real exponent, which just didn't make sense in the very elementary notion of exponentiation I had at that time. And the appearance of π was, of course, completely unexpected from that very elementary point of view. So the identity hinted at a whole world of new ideas. I found the experience a bit empty, since the identity itself didn't give me any way to access that world, but I guess some people like the sense of mystery.

    using the term 'infinite polynomial' I guess somewhat imprecisely, but it should be obvious what I meant

    It certainly was. I was just saying that's not how we do.
    posted by stebulus at 3:22 PM on November 29, 2011


    Anyway, the point is e doesn't just 'magically appear' in formulas; people observe things and come up with formulas that match what they're looking for. e is an obvious choice in a function that uses exponential growth, but you could factor it out using one of the expressions that leads to it and end up with some unwieldy monstrosity if you wanted to.

    One of the amazing things about physics is the 'naturalness' of many of the equations -- they're mostly small whole numbers and constants. If you take e out of a lot of the equations, you end up with less than natural equations. That tells you something about how important e and pi are to nature.
    posted by empath at 3:31 PM on November 29, 2011


    O(n) should probably be #1; it's what most software engineers without a CS degree really, really would benefit by knowing. The rest are nice and all, but asymptotic analysis - Big-O notation - is vocabulary you *must* have to discuss whether code is efficient or not.
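
    To make the vocabulary concrete, here's a small Python sketch (mine, for illustration): two ways to ask "does this list contain a duplicate?", one O(n²) by comparing every pair, one O(n) expected using the near-O(1) hash lookups mentioned in the post.

    ```python
    def has_duplicate_quadratic(xs):
        # O(n^2): compare every pair of elements.
        return any(xs[i] == xs[j]
                   for i in range(len(xs))
                   for j in range(i + 1, len(xs)))

    def has_duplicate_linear(xs):
        # O(n) expected: one pass, remembering what we've seen in a hash set.
        seen = set()
        for x in xs:
            if x in seen:
                return True
            seen.add(x)
        return False

    # Same answer either way; only the growth rate differs as the list grows.
    data = list(range(5000)) + [42]
    print(has_duplicate_linear(data), has_duplicate_quadratic(data))  # True True
    ```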
    posted by talldean at 3:34 PM on November 29, 2011 [2 favorites]


    e = ln⁻¹(1)
    Sure, but if they don't know the integral form of ln then it seems somewhat self-referential.
    posted by delmoi at 4:20 PM on November 29, 2011


    One of the amazing things about physics is the 'naturalness' of many of the equations -- they're mostly small whole numbers and constants. If you take e out of a lot of the equations, you end up with less than natural equations. That tells you something about how important e and pi are to nature.
    It's pretty clear that there are two different senses of the word 'nature' in that sentence: 'naturalness' in the sense of being easy to grasp and straightforward for humans vs. 'nature' in the sense of non-human creation.

    That's like saying that because riding a bike comes naturally, bicycles are a product of nature.
    posted by delmoi at 4:23 PM on November 29, 2011


    if they don't know the integral form of ln then it seems somewhat self-referential.

    It would be, indeed. This is the kind of consideration I was trying to get at above — it's not just choosing a definition in isolation, it's choosing how to organize the whole body of knowledge about exponentials and logarithms.
    posted by stebulus at 4:49 PM on November 29, 2011


    The whole point of my frustration is that I know academic computer science researchers are doing interesting work, and I want to apply it - I want to find out what they've discovered so I can make use of it. I am deeply, keenly aware that the work going on in academia is relevant to my professional practice, and I'm pissed off because I can't get at it.

    this, exactly. I could definitely use more Jpfed, less whatever-the-hell-it-is that everyone's waving their mathematical dicks around about for the last twenty or thirty comments.

    in my experience, the people to ask are guys who have to use arcane math daily to solve practical chores on concrete physical objects: for example my husband (a mechanical engineer) showing me gear backlash calculations on a napkin at dinner (and explaining why that's important to know before buying tiny expensive gears to fit into a tiny expensive prototype), or one of our manufacturing chemists here at BigPharmaCo who absolutely rules at this stuff, and should really be teaching it to the next generation, except for the part where he gets paid a hell of a lot more to solve our Big Interesting Problems That Could Go Spectacularly Boom! If Not Well Understood, not to mention he doesn't have to babysit a bunch of hormonal angsty teenagers. He frequently uses my whiteboard to brainstorm / problemsolve with his colleagues whilst waiting for the materials buyer (whose cubicle is right outside my office) to get off the phone.

    Two years ago I had no idea what an integral was, much less what it was used for, but it looked intriguing; as someone said above, it is a graceful and lovely symbol. Chemist Guy said "oh this thing? That's a useful piece of shorthand for when we need to calculate things like fill rates and volumes in a reactor (an irregular 3D space). Here, let me show you what we're doing; this represents feed rate of solvent, this is the heat ramp, this is how heat and pressure affect reaction over time, etc...", and it was elegantly simple when it was presented that way.

    The bottom line is that I get it, and why it's important, and what it's used for, and although I'd never use it in my own daily job as a legal secretary, I have enough appreciation for math and intellectual curiosity to enjoy these exercises. I certainly understand these kinds of concepts easily enough when they are presented in a nonthreatening manner in a realworld scenario like Jpfed used up there.

    And I never went to college, and flunked high school algebra. Twice.
    posted by lonefrontranger at 4:52 PM on November 29, 2011 [1 favorite]


    There sure is a lot of Hacker News crossover here, and ah, there as well.
    posted by sammyo at 4:54 PM on November 29, 2011


    whatever-the-hell-it-is that everyone's waving their mathematical dicks around about for the last twenty or thirty comments.

    You'd rather wave your ignorance dick around?
    posted by stebulus at 4:56 PM on November 29, 2011


    I find linear algebra to be very useful, but not so useful that I don't need to do a review about once a year.
    posted by humanfont at 4:58 PM on November 29, 2011


    You'd rather wave your ignorance dick around?

    sure, because in my experience there do exist people (like Chemist Guy and Jpfed) who are genuinely excellent at teaching people how to understand their language (and math is, essentially, a language).

    And because they don't feel a need to continually stroke their gigantic egos by proving to the world what an awesome genius they are by speaking High Klingon to the other nerds, they're also willing to speak slowly using simple basic English to make the translation go a bit smoother. You know, because we are ignorant savages and all.
    posted by lonefrontranger at 5:13 PM on November 29, 2011


    And because they don't feel a need to continually stroke their gigantic egos by proving to the world what an awesome genius they are by speaking High Klingon to the other nerds

    Christ. You and Sarah Palin can go off in a corner and use the word "professor" as a pejorative while the rest of us nerds talk "High Klingon" to each other. Feel free. I'm so surprised that the nerds who wave their "mathematical dicks" around don't actually want to explain things to you.
    posted by Frobenius Twist at 5:21 PM on November 29, 2011 [6 favorites]


    And because they don't feel a need to continually stroke their gigantic egos by proving to the world what an awesome genius they are by speaking High Klingon to the other nerds, they're also willing to speak slowly using simple basic English to make the translation go a bit smoother.

    This is the kind of comment that belongs on metatalk, if you feel the need to make it. Or better yet, not make it at all.
    posted by empath at 6:04 PM on November 29, 2011 [1 favorite]


    And because they don't feel a need to continually stroke their gigantic egos by proving to the world what an awesome genius they are by speaking High Klingon to the other nerds, they're also willing to speak slowly using simple basic English to make the translation go a bit smoother. You know, because we are ignorant savages and all.

    Jeez Lou-Goddamn-ise, you think that when I teach people about binomial coefficients, I don't do exactly what Jpfed was doing above? And that I don't metaphorically stab myself in the heart every time they refuse to listen to the explanation and just wait for the formula to magically appear so that they can get a C in the class and move on to The Next Thing so the whole cycle can repeat itself again?

    Maybe you're one-in-a-million, but nearly everyone in every class I've ever taught just sits there and waits for me to give them the goddamned formula. Most people couldn't care less about the why of any of this stuff.

    Most college professors are doing what Jpfed is doing, explaining things in terms that people can understand. The problem isn't with us, the problem is with people who assume at the outset that math is a mystical thing written in "High Klingon", and I'm just up there "stroking my ego" and "waving my mathematical dick around", and don't pay attention to me. Christ.
    posted by King Bee at 6:18 PM on November 29, 2011 [3 favorites]


    King Bee, that just makes me want to go back to school; well, one where all the professors are above average.

    I'm a programmer, so somewhat technical, and the discussions struck me as very much folks just enjoying the topic, far far from "ego stroking".

    It was also fun to see several folks remembering their first impression of the Euler Identity; again very much my experience.
    posted by sammyo at 6:38 PM on November 29, 2011 [1 favorite]


    But since Jpfed got the ball rolling, here's part 3 of "layman's explanations". I'm going to take my crack at Fermat's Little Theorem.

    Working "modulo p" (or "mod p") is something you're already familiar with if you've had to tell time. You get to work at 7 o'clock, buckling in for a 9-hour day at the office. When you're done, it's not 7+9=16 o'clock (as long as you're a civilian), it's 4 o'clock. So what happened?

    Well, since the times "start over" every twelve hours, what you've done is taken 16, divided by 12, and looked at the remainder. Since 16 divided by 12 is 1 with a remainder of 4, you say it's 4 o'clock. What you're doing here is working modulo 12. You're equating two numbers which aren't equal (16 and 4 surely aren't), but they leave the same remainder when divided by 12.

    So, we'll say two numbers are "equivalent modulo 12" if they leave the same remainder when divided by 12. That's how we tell time.

    You can work modulo any positive integer you want (although working modulo 1 would be kinda lousy, since every number leaves a remainder of 0 when divided by 1). When you work modulo 2, you say that all the even numbers are equivalent (map them all to 0) and all the odd numbers are equivalent (map them all to 1). And in some way, they sort of are. You've split the entire set of positive integers into two sets; the evens and the odds.

    When you say two numbers leave the same remainder when divided by 2, or 12, or any other positive integer, what you're saying is that their difference is a multiple of 2, or 12, or whatever positive integer you've taken.

    So why stop there? We've already seen one way it's useful to partition the integers into the remainder they leave when divided by 2 or 12. 2 was a prime number, 12 wasn't. What happens if we work modulo a prime number, p?

    Do special things happen? As it turns out, they do. What Fermat's Little Theorem says is to first fix a prime number p. (If you don't like the general case, pick your favorite prime, but don't pick 2, since we already looked at that one.) Now, if you take any old integer a and raise it to the power p, you get something that leaves the same remainder that a did when you divide by p.

    Or, for any prime p and any integer a, a^p - a is a multiple of p. Now, I'm not going to prove that this is true (at the risk of talking over anyone's head), but go ahead and give it a shot with some of your favorite prime numbers and see that it works. Nifty, eh?

    It's a nice little result. What's cool about it (for you applied types) is that things like this form the basis for RSA encryption (as mentioned in the FPP). Instead of working modulo some prime, RSA works modulo the product of two (extremely large) primes. The idea is that the modulus (the thing you are working "modulo") is so big, and factoring large integers is so hard, that if I take some number and raise it to a power, then look at the remainder that's left modulo this gigantic thing, you can't tell what the original base of the exponential was. BUT! if you could figure out some nice algorithm for factoring numbers, you could use results like Fermat's Little Theorem to find out what the base in the original exponential was, and hence figure out things like my credit card number, and so on.
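
    That "give it a shot" suggestion is a one-liner in Python, which has modular exponentiation built in as the three-argument pow. A quick sketch (mine, just to make it concrete):

    ```python
    # Check a^p ≡ a (mod p) for a few primes and bases. Three-argument pow
    # does the exponentiation modulo p, so intermediate numbers stay small.
    for p in (2, 3, 5, 7, 11, 13, 101):
        for a in range(2, 40):
            assert pow(a, p, p) == a % p

    # A composite modulus generally fails the same test:
    print(pow(2, 9, 9))   # 512 mod 9 = 8, not 2, so 9 can't be prime
    ```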
    posted by King Bee at 6:43 PM on November 29, 2011 [3 favorites]


    I'll also do a part IV, about Bayes' Theorem. I don't think I need to tell the folks here about the importance of a good understanding of probability, so its usefulness is (hopefully?) self-evident.

    So, the way I'll be talking about computing probabilities here is in the discrete/finite/equally likely sense. If we want to compute the probability of something, we just have to count. For instance, if I roll a fair, standard, six-sided die, the probability that it lands on an even number is 1/2 (since half the outcomes are even and half are odd). To get the probability of an event (like the die roll is even), we just count how many outcomes meet our condition (3) and divide by the total number of outcomes (6).

    Bayes' Theorem is about conditional probabilities. Like with the famous Monty Hall problem, you're aware that probabilities are affected by "what you know" and "what has already happened".

    Alright then. If I actually roll the die and tell you that the outcome on the die was prime (so either 2, 3, or 5), then the probability that the outcome was even is only 1/3, since only 1 of the possible 3 outcomes is even. So, we say that the probability that the die roll was even given that the die roll is prime is 1/3.

    That's what that piping thing in Bayes' Theorem is supposed to represent. P(A) represents the probability of outcome A. P(B|A) represents the probability of outcome B given that A has already occurred. So, if we take event B to be "the die roll was even" and A to be "the die roll was prime", we have that P(B) = 1/2, while P(B|A) = 1/3. Got the notation down?

    Quick digression: The way I like to talk about conditional probabilities is to use real-life examples. If you want to compute the probability that a person contracts ovarian cancer GIVEN that that person is a woman, that's different than looking at the probability that a person contracts ovarian cancer. The population of men totally screws with your calculations, since they can't contract ovarian cancer, so of course you'd only look at the population of women. The probability that your subject has ovarian cancer given that your subject is a man is 0. It just can't happen (men don't have ovaries, if we restrict ourselves to a binary understanding of "man" and "woman", no offense to anyone intended). The probability that your subject is a woman given that your subject has ovarian cancer is 1, since you KNOW it has to be a woman if the subject has ovarian cancer. The relevant probability is that your subject has ovarian cancer given that your subject is a woman. That's what people are interested in.

    So, in general, P(B|A) = (# of outcomes that meet condition B AND condition A)/(# of outcomes that meet condition A). Now, we can make this equation a bit more complicated (at first!) in order to make it say something a lot more meaningful. That fraction on the right? I'm going to divide both the numerator and the denominator by the total number of outcomes in the experiment.

    Your new denominator says (# of outcomes that meet condition A)/(# of total outcomes). Why, that's just P(A), right? Your new numerator says (# of outcomes that meet condition B and condition A)/(# of total outcomes). Why, that's just P(B and A), right?

    What we have now is P(B|A) = P(B and A)/P(A). There was nothing special about A and B, so if we swap their roles, we get P(A|B) = P(A and B)/P(B). In this last equation, I can multiply both sides by P(B) to get P(A|B) x P(B) = P(A and B).

    But wait a second. The "probability of B and A" and the "probability of A and B" are the same thing, aren't they? (NOTICE: I'm saying A and B, not A given B, swapping the order there, like with the ovarian cancer example, changes the deal.) So, we can take P(A|B) x P(B) and swap it in for P(B and A) in that first equation above, and we get
    P(B|A) = [P(A|B) x P(B)] / P(A).
    That's what we wanted.

    Now you might claim that there's some wankery near the end there when I'm rewriting equations and all that jazz, but the final result says something totally different. It's a way to relate the probability of B|A to the probability of A|B. Sometimes it's easy to compute the one and not the other, so we can use identities like this to help us figure out some other, more complicated probabilities. The wikipedia article linked in the FPP has some great examples, and now that you have an idea of what the proof is all about (it's not challenging), you can just look at the examples and try to appreciate them.
    posted by King Bee at 7:23 PM on November 29, 2011 [2 favorites]


    Layperson's explanations, part 5.

    Eigenvectors:

    Please note that despite the ridiculous example I'm about to use, eigenvectors are a widely-applicable concept.

    Say you have a clique of people (in our example, we'll just use three people). This clique has so much drama that the happiness of each member on any particular day is completely determined by some combination of the happiness levels of the various members the previous day. This is math after all so we'll be using numbers to represent how happy each clique member is on any given day.

    We'll call our clique members Albert, Becky, and Chris. Their respective happiness levels from yesterday can be called A, B, and C. I'll use At, Bt, and Ct to refer to their happiness levels for today.

    Albert hates Becky and loves Chris. If Becky is happy, he is unhappy. But if Chris is happy, Albert is very happy.
    At = -2 * B + 3 * C

    Becky is somewhat more emotionally stable- her happiness is partially dependent on how happy she was yesterday. She is indifferent towards Albert, but dislikes Chris.
    Bt = B - C

    Chris is also somewhat emotionally stable and likes everyone equally.
    Ct = A + B + C

    We can lay out these equations as a matrix- a table where each person gets a row and a column. Every entry in the matrix specifies how the column person's happiness yesterday influences the row person's happiness today.
          A     B     C
    A     0    -2     3
    B     0     1    -1
    C     1     1     1
    
    We might imagine different scenarios about how this group reacted the day after they met. Let's say they met with happiness levels A = 1, B = 0, and C = 1. First, let's write that down as a "vector" - a list of numbers. We'll just remember that A is in the first position, B is in the second position, and C is third. We don't want to have to write A, B, and C all the time.

    Their happiness vector the day they met:

    1
    0
    1

    What was their happiness the very next day? To find out, we're going to multiply our matrix (representing their relationships) by our vector (representing their current state). To perform this multiplication, we're going to go:

    For each row (person whose happiness is to be determined):
    Going left to right along the row (considering each person whose happiness might contribute to the row person's happiness),
    Multiply the column person's entry in the vector (how happy the column person is) with the column person's entry in this row (how much the row person is made happy by the column person's happiness).
    Add the results for this row to get the total amount of happiness the row person has today.

    So you get a new number for each row- you end up with a new list of numbers, a new vector of happiness values.

    Going through our example,
          A     B     C
    A   1*0 +0*-2 + 1*3  =  3
    B   1*0 + 0*1 +1*-1  = -1
    C   1*1 + 0*1 + 1*1  =  2
    
    So if when they met, Albert was happy, Becky was neutral, and Chris was happy, then the next day, Albert is ecstatic, Becky is unhappy, and Chris is even happier. Becky went from neutral to unhappy, Albert's happiness tripled, and Chris's merely doubled- which means the day's change to the vector of happiness wasn't uniform at all. Different people were affected differently by the drama of the day.
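    The row-by-row procedure above is easy to mechanize. A minimal sketch in Python (plain lists, no libraries; the matrix and starting vector are the ones from the example):

```python
# Rows and columns are ordered Albert, Becky, Chris, as in the table above.
relationships = [
    [0, -2,  3],   # Albert: hates Becky, loves Chris
    [0,  1, -1],   # Becky: somewhat stable, dislikes Chris
    [1,  1,  1],   # Chris: somewhat stable, likes everyone
]
yesterday = [1, 0, 1]  # A = 1, B = 0, C = 1, the day they met

def step(matrix, vector):
    """One day of drama: multiply the relationship matrix by the state vector."""
    return [sum(row[j] * vector[j] for j in range(len(vector))) for row in matrix]

today = step(relationships, yesterday)
print(today)  # [3, -1, 2]
```

    Calling step repeatedly simulates day after day of clique drama, which is exactly the kind of repeated multiplication that makes eigenvectors interesting.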

    Of course, we could have picked different numbers for that happiness vector. And really, there are a few happiness vectors we could've picked that have a very special property with respect to this matrix, and for having that property we call them eigenvectors.

    Imagine if they had met on a day when the happiness vector looked like this:

    2.05
    -0.8
    1

    So Albert is pretty darn happy, Becky is having a slightly unhappy day, and Chris is happy.

    Multiplying that vector by the matrix gives us

    4.6
    -1.8
    2.25

    Albert is ecstatic, Becky is having a very bad day, and Chris is really happy. Hmm. Their relative proportions of happiness haven't really changed, have they? They've gotten more extreme, to be sure- but all of them changed by growing (multiplying by) about a factor of 2.25.

    That's what an eigenvector is. An eigenvector (a special state), acted on by a matrix (system of relationships), produces a new vector (a new state) that has only changed in that all of the elements in that vector have grown (or shrunk) by the same amount. The amount that each element grows by is called the "eigenvalue" for that eigenvector.

    We notate this idea as follows:

    Ax = λx

    Where A is our matrix, x is the eigenvector, and λ (lambda) is the eigenvalue for that eigenvector.
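    The eigenvector claim can be checked numerically. A minimal sketch in Python (the vector is the rounded one from the example, so the ratios only agree with the eigenvalue to about two decimal places):

```python
# The clique's relationship matrix, ordered Albert, Becky, Chris.
relationships = [
    [0, -2,  3],
    [0,  1, -1],
    [1,  1,  1],
]
x = [2.05, -0.8, 1.0]  # the (approximate) eigenvector from the example

# Multiply the matrix by the vector, row by row.
Ax = [sum(row[j] * x[j] for j in range(3)) for row in relationships]

# For an eigenvector, every component should grow by the same factor.
ratios = [Ax[j] / x[j] for j in range(3)]

print(Ax)      # roughly [4.6, -1.8, 2.25]
print(ratios)  # each ratio is close to the eigenvalue, about 2.25
```

    If you feed step-by-step multiplication a vector that is NOT an eigenvector, the component-wise ratios come out different from each other, which is a handy way to see what makes these vectors special.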

    I feel bad that for length reasons I should stop here, because there is a great deal more that could be said about eigenvalues and eigenvectors, but I hope I at least conveyed what they are.
    posted by a snickering nuthatch at 8:23 PM on November 29, 2011 [10 favorites]


    Wow, sucks for Becky...
    posted by delmoi at 8:26 PM on November 29, 2011


    Another interesting feature of Eigenvectors is that they are all orthogonal to each other. So one eigenvector can't be 'composed' by adding a combination of other eigenvectors together. They also span the 'space' of possible vectors that you can get when you multiply vectors by your matrix.

    So, any vector that can come out of a multiplication by that matrix can be expressed as a combination of Eigenvectors.
    posted by delmoi at 9:16 PM on November 29, 2011


    My submission for an equation missing from the list:

    2 * (signed area of triangle A,B,C) = (Bx - Ax)*(Cy - Ay) - (Cx - Ax)*(By - Ay)

    That thing is the Swiss army knife of 2D geometry. Among the many applications:

    -The most famous--finding the signed area of a polygon, by iterating through adjacent pairs of points B,C and getting each pair's signed triangle area with some fixed third point A. Let A = (0,0) and half the terms fall away, revealing the standard formula for polygon area.

    -Determining which side of a ray AB a point C is on. Positive signed area = counterclockwise triangle, so C is on the "left" if you're standing on A looking at B. Good for hit testing of points against polygons, among many other things.

    -Finding the distance from a point C to a line AB. Let AB be the "base" of the triangle, so the distance becomes the "height", and since 2 * triangle area = base * height, we just need to get the signed area and divide by the length of the base. Getting the length of the base requires a sqrt, but if you're going to be testing many points against the same line, you'll just precompute the base beforehand, so each individual point test requires only the signed area.

    All this for only two lousy multiplications and five subtractions!
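    All three applications fit in a few lines of Python (the function names here are mine, not from the comment):

```python
import math

def signed_area2(a, b, c):
    """Twice the signed area of triangle a, b, c (positive = counterclockwise)."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])

def polygon_area(points):
    """Signed polygon area: fan triangles from the origin, sum, halve."""
    n = len(points)
    return sum(signed_area2((0, 0), points[i], points[(i + 1) % n])
               for i in range(n)) / 2

def side_of_line(a, b, c):
    """> 0 if c is left of ray a->b, < 0 if right, 0 if collinear."""
    return signed_area2(a, b, c)

def dist_to_line(a, b, c):
    """Distance from point c to the infinite line through a and b."""
    return abs(signed_area2(a, b, c)) / math.hypot(b[0] - a[0], b[1] - a[1])

square = [(0, 0), (2, 0), (2, 2), (0, 2)]        # 2x2 square, counterclockwise
print(polygon_area(square))                      # 4.0
print(side_of_line((0, 0), (1, 0), (0, 1)) > 0)  # True: (0,1) is left of the +x ray
print(dist_to_line((0, 0), (1, 0), (5, 3)))      # 3.0: height above the x axis
```

    Note that only dist_to_line pays for a sqrt; the other two get by on the raw signed area, which is why this primitive is so cheap to use for hit testing.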
    posted by equalpants at 9:26 PM on November 29, 2011 [2 favorites]


    Another interesting feature of Eigenvectors is that they are all orthogonal to each other. So one eigenvector can't be 'composed' by adding a combination of other eigenvectors together. They also span the 'space' of possible vectors that you can get when you multiply vectors by your matrix.

    So, any vector that can come out of a multiplication by that matrix can be expressed as a combination of Eigenvectors.


    Not true. The eigenvectors of an operator (or a matrix, if you prefer to choose a representation) only form an orthonormal basis if the operator is normal (for complex vector spaces) or self-adjoint (for real vector spaces). These are the 'spectral theorems' of linear algebra.

    It is true that many of the common operators encountered in science and engineering do meet these requirements and, as such, have an orthonormal basis of eigenvectors, but it's not true in general.
    posted by BlueDuke at 10:47 PM on November 29, 2011


    And to put what BlueDuke said in terms of real-valued matrices, such as the one in Jpfred's example: if a real-valued matrix is symmetric, then its eigenvectors are orthogonal. (In Jpfred's example, the eigenvectors are not orthogonal.)
    posted by Westringia F. at 11:32 PM on November 29, 2011


    roystgnr: "Why disparage driving schools? They're probably up there with med schools, officer candidate schools, and (some) engineering colleges on the "list of schools where good teaching and attentive students can prevent needless deaths"."

    Hmmm, at least when I went, for everyone else, driver's ed was a formality. Somehow they'd all managed to learn how to drive before ever taking the class. I'd never been behind the wheel before in my life. I had to take driver's ed twice. That's right -- I flunked it the first time. Only thing I'd ever flunked in my life.

    I don't think the instructor had a clue what to do with me. He really just expected that everyone already knew how to drive and that they just needed to take the course because that's the law.

    My brother, who went to a private school, did not have access to the standard driver's ed class. So, he went to a private driving school sometime in his 20s, I think. He didn't learn a damn thing and failed both the written test (although he passed it the next time) and driving test (let's just say he never made it out of the parking lot).

    It's my firm belief that nearly everyone simply learns to drive on their own, by which I mean their parents or older friends, at one point or another, simply let them behind the wheel of the car. Most people I encountered could drive well before driver's ed ever started.

    So, I don't see there being a lot of incentive for creative pedagogy when it comes to driver's ed. Basically the only way to learn how to drive is to just drive around a lot. I'll admit it -- I was still a bad driver for years after I formally received my license. I'd say I'm still a below-average driver. I still suck at parallel parking, for example. That's okay, because I'm above average on other things and driving isn't that important to me beyond getting where I need to go.

    So far, I haven't encountered any driving schools that actually made people better drivers. I probably would have just been better off if my parents had thrown me behind the driver's seat and had me do circles in the parking lot for a few weeks. That's what I'm planning to do with my kid when he's old enough.
    posted by Deathalicious at 5:38 AM on November 30, 2011


    I learned from driving school that yelling at your students is a really bad idea.

    I did not learn how to drive.

    Basically, when I'm doing a new thing I get nervous. As you might expect, I had very little driving experience when I started taking driving lessons. (New Jersey, where I lived at the time, required everybody to have a certain number of hours of lessons with an approved driving school before they could get a learner's permit.) So I was not good at it, and the instructor was yelling at me, and yelling does not motivate me but rather makes me freak out. So my driving got even worse.

    When I had taken the state-required number of lessons the guy signed the form, but he also told my mother that I probably needed more lessons. (I did, but not from that guy.)

    Then my mother taught me how to drive. Without yelling at me.

    She knew this might be a problem because my father had taught my mother how to drive, and he had yelled, and she had the same reaction I did. (It was quite a few months before my mother would let my father be in the car when I was driving.)

    I haven't thought of this in years, but it gives me one more reason not to yell at my students.
    posted by madcaptenor at 6:54 AM on November 30, 2011


    Also, re delmoi's comment above asserting "e doesn't just 'magically appear' [and] is an obvious choice in a function that uses exponential growth," this may already have been said, but in fact e does appear without being introduced.

    e winds up intrinsically linked to exponential growth not just because we've decided the growth should be exponential, but because the exponential growth equation is derived from the proposition that the growth of quantity P(t) (eg, your money at time t) is proportional to the amount you currently have, ie, d/dt P(t) = rP(t), where r is the rate of growth (eg, your interest rate) -- a proposition that doesn't involve e (or any exponential function) at all! But it does lead to it: it can be shown that exp(rt) solves this equation (uniquely, up to scaling by the initial amount), which brings e in where we hadn't put it before. Sure, we could instead write this as A^t, but then we must also acknowledge that A = exp(r) -- ie, e (unbidden) raised to our growth rate (quantifiable). Likewise, if we start with discrete interest and shrink the compounding interval toward the continuous case, the expression lim_(x→0) (1+x)^(1/x) -- which is e -- will appear.

    That is: given ONLY the things we're measuring (money P, time t) and our underlying assumption (that the instantaneous growth of our money is proportional to its amount, with a constant of proportionality r [= our interest rate, per our definition of what it means to earn interest]), the e will appear in the relationship between money, time, and interest -- whether we want it to or not, purely as a consequence of the system.
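    Both appearances of e can be watched numerically. A minimal sketch in Python: compounding a 100% nominal rate more and more often drives the yearly growth factor toward e, and the same limit shows up as (1+x)^(1/x) for small x:

```python
import math

# Compound interest at 100% nominal rate, compounded n times per year:
# the yearly growth factor (1 + 1/n)^n creeps up toward e as n grows.
for n in (1, 10, 100, 10_000, 1_000_000):
    print(n, (1 + 1 / n) ** n)

# Equivalently, (1 + x)^(1/x) approaches e as x -> 0.
x = 1e-8
print((1 + x) ** (1 / x), math.e)  # both about 2.718281828...
```

    Nobody puts e into this computation; it falls out of nothing but repeated multiplication, which is the "unbidden" appearance described above.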
    posted by Westringia F. at 7:03 AM on November 30, 2011 [1 favorite]


    BTW, If you're interested in the historical development of calculus, you might dig Analysis by its History, which is almost more of a history book (with pictures!) than a math book, although it is not lacking for mathematical rigor.

    [Last remark I swear.]
    posted by Westringia F. at 9:23 AM on November 30, 2011 [2 favorites]




    This thread has been archived and is closed to new comments