ProbabilityXLS
July 11, 2004 5:00 AM   Subscribe

The History of Probability - Excel Version. Huge detailed timeline. [via Roll the Bones]
posted by srboisvert (2 comments total)
 
I can't critique this as a history of probability for two reasons: first, I don't know much about the history of probability; second, it's nearly incomprehensible.

On the latter point, it's a great example of just how hard information design is: given a free-flowing map of the history of probability as the goal, and Excel as the tool, this may well be the best possible realization. That said, it's not usable -- really, at all -- in an online, interactive form. It should be read as a printed chart.

This also illustrates something that's frustrated me for a long time. There's only one way, effectively, to see a narrative that's constructed this way: the way that it's laid out. Where are the abstraction tools that we've been promised for years? What would have been great is if the author could have organized the concepts and then arranged them by reference; then others could rearrange them to find new implicit relations.

None of this is a slam at the author. In its way, this is a great piece of info-rotica, because if I switch back and forth between 50% and 80% (especially on the chart worksheet), I can see clear lines and relationships. (I'll posit Scoles's Inverse Porn-Erotica Axiom for Data: If it clarifies the data, it's info-rotica; if it obfuscates it [like your typical Wired graph], it's info-porn.) If I had a huge pen-plotter or tons of time to print out and clip-tape, I think this would be a beautiful thing...
posted by lodurr at 7:11 AM on July 11, 2004


hey i was just about to post this! what are the chances? :D a nice book to accompany it btw is ian hacking's emergence of probability... from one customer review :D
This is a great book. Hacking describes the development of probability and statistics from the Renaissance to David Hume. His central questions are: What were Pascal, Huygens, Leibniz, Jacques Bernoulli, and all the others really doing? What problems were they trying to solve? What limitations were they working under? How did all this fit into other intellectual and mathematical problems of the day? How did all this affect the subsequent development of probability and statistics? Some of this clears up minor details that I had never grasped before, such as what was the problem with two dice that Pascal solved for the Chevalier de Mere. More important is the description of the intellectual implications of the development of modern probability and statistics. I had not known that the very name "probability" grew out of a profound religious and intellectual argument between the Jansenist Pascal and the Jesuits.

The book is full of historical gems. For example, the Dutch and English governments in the seventeenth century became infatuated with annuities as a way to finance their expenses, especially wars. Most of the schemes were actuarially unsound. The early statisticians devoted a lot of energy to this problem and this led to major advances. Unfortunately the governments were not always pleased to be told they had no clothes. It all sounds terribly up to date.

In summary, this book covers material that is important not only in a historical context but also for its relevance to many contemporary issues. It is well written and concise. If you want to know what the early probabilists were thinking about and how that affected the way we all think about uncertainty today, this is the book for you.
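The de Méré problem the review alludes to can actually be checked with a couple of lines of arithmetic. (This sketch is my own illustration, not part of the review or the linked timeline.) De Méré knew that betting on at least one six in four rolls of a die was profitable, and assumed that betting on at least one double-six in twenty-four rolls of two dice should be equivalent, since 4/6 = 24/36. Pascal's analysis shows the two bets sit on opposite sides of even money:

```python
# At least one six in 4 rolls of a fair die:
# complement of "no six in 4 rolls".
p_one_six = 1 - (5 / 6) ** 4        # ~0.5177, favorable

# At least one double-six in 24 rolls of two dice:
# complement of "no double-six in 24 rolls".
p_double_six = 1 - (35 / 36) ** 24  # ~0.4914, unfavorable

print(f"{p_one_six:.4f} {p_double_six:.4f}")
```

The naive proportionality argument fails because the complement probabilities compound geometrically, not linearly.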
but what is probability? it seems there're two main camps, frequentists and bayesians:
Traditionally, probability is identified with the long-run relative frequency of occurrence of an event, either in a sequence of repeated experiments or in an ensemble of "identically prepared" systems. We will refer to this view of probability as the "frequentist" view. It is the basis for the statistical procedures in use in the physical sciences.

Bayesian probability theory is founded on a much more general definition of probability. In BPT, probability is regarded as a real-number-valued measure of plausibility of a proposition when incomplete knowledge does not allow us to establish its truth or falsehood with certainty. The measure is taken on a scale where 1 represents certainty of the truth of the proposition, and 0 represents certainty of falsehood. This definition has an obvious connection with the colloquial use of the word "probability." In fact, Laplace viewed probability theory as simply "common sense reduced to calculation" (Laplace 1812, 1951). For Bayesians, then, probability theory is a kind of "quantitative epistemology," a numerical encoding of one's state of knowledge.
...or a degree-of-rational-belief. the debate rages on :D fwiw, here're a couple explanations on what may be possible origins of probability:
  • Quantum Darwinism and Envariance by W. H. Zurek - "I review key ideas of quantum Darwinism and investigate its connections with the environment -- assisted invariance or envariance, a recently identified symmetry exhibited by pairs of entangled quantum systems that is responsible for the emergence of probability (allowing, in particular, a completely quantum derivation of the Born's rule) within the wholly quantum Universe."
  • Paradoxes of randomness by G. J. Chaitin - "Okay, what I was able to find, or construct, is a funny area of pure mathematics where things are true for no reason, they're true by accident. And that's why you can never find out what's going on, you can never prove what's going on. More precisely, what I found in pure mathematics is a way to model or imitate independent tosses of a fair coin. It's a place where God plays dice with mathematical truth. It consists of mathematical facts which are so delicately balanced between being true or false that we're never going to know, and so you might as well toss a coin. You can't do better than tossing a coin... However, you can prove all kinds of nice mathematical theorems about this Ω number. Even though it's a specific real number, it really mimics independent tosses of a fair coin. So for example you can prove that 0's and 1's happen in the limit exactly fifty percent of the time, each of them. You can prove all kinds of statistical properties, but you can't determine individual bits!"
not that i really understand any of it!
posted by kliuless at 8:48 AM on July 11, 2004



