Virtual Thinking
July 21, 2008 5:58 PM

Correlative Analytics -- or, as O'Reilly might term it, the Social Graph -- sort of mirrors the debate on 'brute force' algorithmic proofs (that are "true for no reason," cf.) in which "computers can extract patterns in this ocean of data that no human could ever possibly detect. These patterns are correlations. They may or may not be causative, but we can learn new things. Therefore they accomplish what science does, although not in the traditional manner... In this part of science, we may get answers that work, but which we don't understand. Is this partial understanding? Or a different kind of understanding?" Of course, say some in the scientific community: hogwash; it's just a fabrication of scientifically/statistically illiterate pundits -- all whilst new techniques in data analysis are being developed to help keep ahead of the deluge...
posted by kliuless (40 comments total) 30 users marked this as a favorite
 
I am not sure I buy this mini-essay FPP as being about a coherent topic. Why are computer-assisted proofs like large scale data mining efforts, other than that they use computers? Where do you get the idea that new statistical techniques are being developed as a result of some sort of crisis?

In any case, large-scale data mining to find correlations has been a basic tool of science and social science for a long time; take a look at empiricism, and even the standard scientific experiment, as an example. This serves as grist for the mill of theory, but does not replace it. Think of one of the famous early examples of data mining in social sciences, Durkheim's study of suicide - the statistical correlations are moderately interesting, the theory endures.
posted by blahblahblah at 6:44 PM on July 21, 2008


blahblahblah, nothing new here except the scale, which makes it notable and transformative. It's like saying Apollo was just another rocket launch.
posted by humanfont at 6:56 PM on July 21, 2008


Lol, the first link states that these methods won't replace traditional science. Don't they realize that science has already been replaced by SCIENCE!
posted by blue_beetle at 7:30 PM on July 21, 2008 [1 favorite]


Yeah, I'm not getting this either.

In this part of science, we may get answers that work, but which we don't understand.

You mean like quantum mechanics? Or almost any statistical correlation from medicine, psychology or sociology?
posted by DU at 7:40 PM on July 21, 2008


New facts can be discovered without ever looking for them. If a medical computer grid had access to all of the sales data in the world, it could be discovered that hundreds of store products are each associated with an array of birth defects and diseases, without anyone even suspecting there was a relationship. The problem is that there is political and economic danger in exposing this information, and so the final leg of the experiment requiring targeted human feedback breaks down. Typical game theory ensues and people cover their tracks, or emit chaff into the stream. What is needed to rescue the concept of this ethereal computation for the common good is to preserve anonymity. With privacy guaranteed, new information can be deemed sound.
posted by Brian B. at 7:56 PM on July 21, 2008 [7 favorites]


If the scale of the data is so vast and the patterns are deemed too subtle for humans to understand, how will we be able to distinguish false positives from useful correlations? If we assume that any massive data set has noise, and that noise by its definition is random, then there will be patterns randomly formed in parts of that noise (like how in an infinite stream of random numbers there will be a section with 100,000 consecutive 3's or whatever).
posted by Spacelegoman at 8:32 PM on July 21, 2008 [2 favorites]
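
Spacelegoman's worry is easy to demonstrate: mine enough pure noise and some of it will look "significant" by chance alone. A minimal sketch, assuming numpy and scipy are available; all sizes and the 0.05 threshold below are invented for illustration:

```python
# A toy version of the worry above: enough pure noise will yield "significant"
# correlations by chance alone. Sizes and threshold are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_samples, n_features = 1000, 2000

outcome = rng.normal(size=n_samples)                   # pure noise "outcome"
features = rng.normal(size=(n_samples, n_features))    # pure noise "predictors"

false_hits = 0
for j in range(n_features):
    r, p = stats.pearsonr(features[:, j], outcome)
    if p < 0.05:                                       # naive, uncorrected threshold
        false_hits += 1

# Roughly 5% of the noise columns (about 100 here) will clear the bar anyway.
print(f"{false_hits} of {n_features} pure-noise features look 'significant' at p < 0.05")
```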


I didn't really understand that Kevin Kelly article. He brings up Google translating English to Chinese as an example of computing done with "no theory of Chinese," but when a real human learns Chinese, they don't study a "theory of Chinese," they learn words and grammar by rote and practice. However, the people who programmed the Google translating software must have had very specific theories of computation and linguistics. It wouldn't matter how powerful your computer was if you just told it to correlate every Chinese character for every 3 English letters; you would still end up with nonsense. The Google programmers certainly knew a lot about how to get a computer to select important linguistic elements and about the way those elements were likely to be statistically related. So it seems obvious to me that the Google programmers are much more theory-laden than a normal human translator, but Kelly seems to be claiming the opposite.

These computing techniques are definitely going to be useful, and they will give us results that we won't understand right away. But it's not that much different from moving from a slide rule, where you have to understand the basics of the calculations, to a calculator, which does most of the work for you.
posted by afu at 8:43 PM on July 21, 2008 [2 favorites]


It's like saying Apollo was just another rocket launch.

So what's the equivalent of the moon landing?
posted by afu at 8:44 PM on July 21, 2008 [1 favorite]


This is all Chris Anderson's fault, yet again. I present to you Chris Anderson as theorized about by himself:
The way Chris Anderson has traditionally published is unsustainable in the face of new developments as outlined in this misleading neon info-graphic! (not pictured)

In the face of such a massive increase in critical attention towards my trite editorial, the way I have traditionally published cannot continue. Each new dalliance I put on the cover of Wired is now savaged for its idiocy by domain experts before it even appears on newsstands!

How am I supposed to get an advance on the book version of my pop-science pablum if I'm not showered with uncritical adulation? How am I going to debut on the NYT bestseller list in four months if I'm already discredited?

Clearly, I need to develop a new kind of Chris Anderson. A totally new model of exerting my bloviations on the world. I need to find a way to publish to smaller, more sycophantic audiences in private at first; gradually sowing my trite theories into the public discourse so that they are not so easily mocked for what they are.

I need to explore the depths of my own long tail — I need to expose my genius to my social graph first. Publishing straight to the cover of Wired exposes my ideas to public ridicule by practicing experts — if I could limit my audience initially to self-facilitating media nodes like myself, I might be able to ride a wave of sycophancy right over any criticism!

I, Chris Anderson, hereby usher in a new age of post-pop-science! Jared Diamond and Malcolm Gladwell don't stand a chance against my cunning insights into this world of post-hype I herald!
moar lulz
ugghhhh
posted by blasdelf at 8:48 PM on July 21, 2008 [9 favorites]


He brings up Google translating English to Chinese as an example of computing done with "no theory of Chinese," but when a real human learns Chinese, they don't study a "theory of Chinese," they learn words and grammar by rote and practice.

One time I got into a conversation with one of my philosophy professors about music theory. A rather cantankerous continental philosopher, used to holding his own in an otherwise entirely analytic department, interrupted our conversation with the question "how do you define 'music theory'?" I stammered a half-assed response, and the professor I was talking to asked the cantankerous continental philosopher for his definition. The continental philosopher said, simply, "it's the arrangement of the notes" and walked away.

I've never forgotten that, and I don't know why, because I don't quite get it, but something about the notion of translating Chinese without a 'theory of Chinese' brought me back to that moment.
posted by treepour at 8:54 PM on July 21, 2008 [2 favorites]



If the scale of the data is so vast and the patterns are deemed too subtle for humans to understand, how will we be able to distinguish false positives from useful correlations?
If the data is vast enough.


If the sample set is genuinely random, it turns out that it doesn't matter how vast the data set is; your small random sample will converge really, really quickly on the actually-correct number. If anything, that is the single core insight of statistics as a field: that a surprisingly small sample of people, chosen randomly, will give you an extremely accurate result.

Want to know who a million people are going to vote for? Ask forty or so. It turns out that to within two or three percentage points, that's more than plenty. It might be counterintuitive, but the best kinds of science are like that.
posted by mhoye at 8:56 PM on July 21, 2008 [1 favorite]
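
One way to check the "ask forty or so" claim without arguing is to simulate it, setting aside for now the sampling-bias issues raised elsewhere in the thread. A rough sketch with an invented 52/48 electorate, estimating how far a truly random sample of each size typically lands from the true share:

```python
# Draw many random samples of each size from a hypothetical 52/48 electorate
# and see how far the sample share typically lands from the truth.
import random

random.seed(1)
TRUE_SHARE = 0.52          # assumed true support for candidate A
TRIALS = 10_000

def error_95th_percentile(sample_size):
    errors = []
    for _ in range(TRIALS):
        votes = sum(random.random() < TRUE_SHARE for _ in range(sample_size))
        errors.append(abs(votes / sample_size - TRUE_SHARE))
    errors.sort()
    return errors[int(0.95 * TRIALS)]   # 95% of samples land within this error

for n in (40, 144, 600, 1200):
    print(f"n={n:5d}: 95% of random samples within +/- {error_95th_percentile(n):.3f}")
```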


Hey, if this ends up being a cheaper or faster or more fruitful way to generate hypotheses, I'm all for it. Of course you still have to do the science. (Duh!) The real story here: this is a powerful new way for entrepreneurs to bullshit venture capitalists. Lends a whole new meaning to the term "secret sauce."
posted by sdodd at 8:57 PM on July 21, 2008


Also, if you are not educated in nature's four-day harmonic simultaneous four-corner time cube, you are educated stupid.

I thought I should throw that out there; it's good to be reminded of it now and again.
posted by mhoye at 9:01 PM on July 21, 2008


Cool post. Some of the articles/blogs linked to were pretty dubious, but most of it is still quite stimulating.

Some cents:

First, the connection between data mining and computer-assisted proof is pretty obvious: If no one can explain why a particular thing is true (e.g. a proof or some accurate low-D representation), most people who trade in truth would be very suspicious of its supposed validity. It's also pretty obvious (for at least one reason) why this is the case: knowing that something is true is often less useful than knowing why something is true.

Spacelegoman: The whole point is that as your sample size goes big, the chance of extracting patterns from noise goes small.

Generally speaking though, none of this makes me worry too much about an end to human understanding. The simple reason is that as long as humans are the ones applying this knowledge, we are going to have to filter it through our goals, desires, and, by extension, our understanding to some degree anyway.

That is, even if we generate this vast body of facts whose meaning is completely alien to us, to do anything with it we are going to have to analyze it, integrate it, and make it aesthetically pleasing. I suppose we might construe various data mining techniques (and whatever else) as putting extra layers between reality and our senses on the one hand, and theories and our minds on the other. Nevertheless, I don't see that as particularly different from what we've done with various scientific instruments for the past couple thousand years or so.
posted by Alex404 at 9:13 PM on July 21, 2008


Also: The hogwash article is indeed good at dispelling some of the silliness of the other articles.
posted by Alex404 at 9:23 PM on July 21, 2008


So what's the equivalent of the moon landing?

Our ability to manage petabyte-scale data storage. Add abundant CPU power using distributed computing like SETI@HOME, and who knows what you'll find. I wonder why the SETI@HOME folks spend all their time looking for aliens. All that radio telescope data and CPU power processing the signals. It seems like you could discover plenty of other things about the universe along the way.
posted by humanfont at 9:24 PM on July 21, 2008


If the sample set is genuinely random,

good luck with that.
posted by localhuman at 9:25 PM on July 21, 2008


Bleh. I actually studied this stuff a bit towards the end of my undergraduate days. I wouldn't call myself an expert or anything but some of the people talking about this obviously are not.

The idea that you would try to build these translators without a 'theory of Chinese' or a 'theory of French' is absurd. Obviously, you would want to start with a massive built-in dictionary and go from there. Sure, there would be times where your initial seed translator wouldn't be appropriate, and you would correct for that. But it would be insane to start trying to teach Chinese without first teaching that '男 (nán)' means male and '女 (nü)' means female.

When people learn a language they do it with constant visual correlation. They hear 'man' whenever they see a man, or 'woman' when they see a woman, and so on. After that, they can use basic words to have other words explained to them. So you learn the first few words by visual correlation, and the rest by understanding.

And anyway, it's not like human languages are too complicated for humans to understand, obviously :)
posted by delmoi at 9:29 PM on July 21, 2008


Want to know who a million people are going to vote for? Ask forty or so. It turns out that to within two or three percentage points, that's more than plenty. It might be counterintuitive, but the best kinds of science are like that.

40? Most polling is done with about 1,200 people. 40 might work if you could get a truly random sampling.
posted by delmoi at 9:31 PM on July 21, 2008


At this very moment, as I waste time on Metafilter, my computer is using most of its 630 MHz to generate a big, big pile of data relating to my mathematical research. The basic methodology is to play with small-scale examples until I feel ready to make some hypothesis, then write a bit of code to check for counter-examples in the bigger-than-I-care-to-wade-through data sets, and if none are found, go about trying to prove the statement. There's a bit of understanding that goes into this method, especially that the sets I'm looking at are recursively defined (though I can't prove it just yet), so any nastiness should happen at relatively small scales.

This kind of work is done by a great number of other combinatorialists, and was largely impossible twenty years ago. We have badass computer algebra systems (plug: sagemath.org), and use them to create data against which we can try out hypotheses and look for other associations. But at the end of the day, we're still making human-readable proofs of what started as observations. It makes this very 'pure' corner of mathematics feel kinda empirical...
posted by kaibutsu at 9:33 PM on July 21, 2008 [2 favorites]
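
For readers unfamiliar with the workflow kaibutsu describes, here is a toy version of the "hunt for counterexamples, then try to prove it" loop. The "conjecture" (that n² + n + 41 is always prime) is just a stand-in, chosen because it conveniently fails at n = 40:

```python
# Conjecture something from small cases, then let the machine hunt for
# counterexamples before investing any effort in a proof.

def is_prime(k: int) -> bool:
    if k < 2:
        return False
    i = 2
    while i * i <= k:
        if k % i == 0:
            return False
        i += 1
    return True

def counterexamples(limit: int):
    """Yield every n <= limit where the conjectured statement fails."""
    for n in range(limit + 1):
        if not is_prime(n * n + n + 41):
            yield n

found = list(counterexamples(1000))
if found:
    print("conjecture fails, first counterexample at n =", found[0])   # n = 40
else:
    print("no counterexamples up to 1000 -- time to try writing a proof")
```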


It's posts (and threads) like this that make me realize how ignorant I am. Thanks!
posted by sluglicker at 9:37 PM on July 21, 2008


As for patterns in large data sets, there is a very interesting field called Ramsey theory about finding patterns that can't help but exist in large enough data sets. A couple sample questions:

* 6 people are at a party. Prove that there exists a set of three people who all know each other or all don't know each other. (More generally, a graph is a bunch of points connected by edges. A complete graph is one in which every pair of points is connected by an edge. A colored graph is one in which you color the edges. The question: find the smallest n such that every two-colored complete graph on n vertices contains a monochromatic complete graph of size k. The party question is about proving that for k=3, that n is 6.)

* How many points must you place in the plane (with no three on a line) to guarantee the existence of a convex k-gon?

It's a notoriously hard field, with really, really big numbers...
posted by kaibutsu at 9:40 PM on July 21, 2008
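
The party question is small enough to settle by brute force, very much in the spirit of the thread. A sketch, using only the standard library, that checks every 2-coloring of the edges of K5 and K6:

```python
# Brute-force check of the party problem: every 2-coloring of the edges of K6
# contains a monochromatic triangle, while K5 has colorings that avoid one.
from itertools import combinations, product

def has_mono_triangle(n, coloring):
    """`coloring` maps each edge (i, j) with i < j to a color 0 or 1."""
    return any(
        coloring[(a, b)] == coloring[(a, c)] == coloring[(b, c)]
        for a, b, c in combinations(range(n), 3)
    )

def triangle_always_forced(n):
    edges = list(combinations(range(n), 2))
    return all(
        has_mono_triangle(n, dict(zip(edges, colors)))
        for colors in product((0, 1), repeat=len(edges))   # all 2-colorings
    )

print("K5 forces a monochromatic triangle:", triangle_always_forced(5))  # False
print("K6 forces a monochromatic triangle:", triangle_always_forced(6))  # True
```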


There isn't really any underlying language theory that drives Google's purely-statistical translation engines (not all of their language translation engines use this method). More details on how Google's statistical translation works.
posted by kurmbox at 9:54 PM on July 21, 2008


40 people would get you a confidence interval smaller than +/- 15% with 95% confidence. To get a 3% interval you need to poll at least 144 people. I don't understand delmoi's point about how a "truly random" sample would shrink your confidence interval--if it's biased for 40, it'll be biased with 1200, assuming you use the same procedure to pick your sample.
posted by jewzilla at 10:01 PM on July 21, 2008


The continental philosopher said, simply, "it's the arrangement of the notes" and walked away.

I've never forgotten that, and I don't know why, because I don't quite get it


He was simply distinguishing music theory from music practice.

According to Continental dialectics, there will be a tension & interplay between the theory of how the music should sound and your actual practice, until finally you achieve the unity of theory & practice.

This is known as "praxis" in Critical Theory, but in plain English it translates roughly as "no longer sounding like you're torturing a cat".
posted by UbuRoivas at 10:30 PM on July 21, 2008 [1 favorite]


There isn't really any underlying language theory that drives Google's purely-statistical translation engines

If there is no language theory driving the design of the engine how do they choose between different statistical methods?
posted by afu at 11:16 PM on July 21, 2008


If it's biased for 40, it'll be biased with 1200, assuming you use the same procedure to pick your sample.

Right, but the same method you use to pick forty people (the "call up everyone you know" sampling strategy) doesn't scale to a million people. If you could really get a uniform sample of forty people, then yes, that would be sufficient. But the problem with such small sample sizes is that the low hanging fruit of the population (people who use the internet, people who don't immediately hang up, people who will make eye-contact with street interviewers) tend to get sampled first, biasing your results to hell and back.

Shorter version: Larger sample sizes force you to consider your sampling methodology more carefully, rather than just calling everyone you know.

Snark version: nobody I know voted for Bush; how did he get into office?!
posted by Pyry at 11:23 PM on July 21, 2008


40 people would get you a confidence interval smaller than +/- 15% with 95% confidence. To get a 3% interval you need to poll at least 144 people. I don't understand delmoi's point about how a "truly random" sample would shrink your confidence interval--if it's biased for 40, it'll be biased with 1200, assuming you use the same procedure to pick your sample.

I'm not sure exactly why, but when was the last time you saw a serious political poll published with less than 600 people? Usually 600-person polls are labeled with a MoE of 5%, while 1,200-person polls come with a +/- 3% margin of error. The explanation given is that when you're actually polling people for their opinions sampling errors come up somehow. I don't know exactly why, but it's pretty obvious that the people who actually do this believe it to be the case, otherwise why would they poll so many people?
posted by delmoi at 11:33 PM on July 21, 2008


The explanation given is that when you're actually polling people for their opinions sampling errors come up somehow. I don't know exactly why, but it's pretty obvious that the people who actually do this believe it to be the case, otherwise why would they poll so many people?

In this case it's not really about "sampling error", although that's another factor that has to be taken into account in polling.

You've got a bucket with 1,000 marbles in it. Red ones and blue ones, except you can't see them inside the bucket.

You pull out one marble, and it's blue. What can you infer about the marbles in the bucket...that they're all blue?

You pull out another marble, and this one's red. Can you now infer that the bucket has 500 blue marbles and 500 red marbles?

Now you pull out a pile of marbles and have 50 of them. 30 are blue, and 20 are red. Can you infer that the bucket contains 600 blue marbles and 400 red marbles? Maybe you can...you're certainly getting a better idea of what's in the bucket, but there's still some uncertainty. A margin of error.

What if there were 1,000,000 marbles in the bucket instead of 1,000? Would you be as confident looking at a sample of 50 out of 1,000,000 as you would looking at a sample of 50 out of 1,000?

Now...problems with non-random sampling. What if lots of red marbles were at the top of the bucket, and lots of blue marbles were at the bottom...but you were taking your marbles from the top. You would get more red marbles than blue marbles, whether you pull out 50 or 500 - it would be a directional bias, and sample size isn't going to correct for that kind of error.

So, having 40 "perfectly randomly chosen people" isn't any better than having 1,500 people, because more people equals a better margin of error. But increasing your sample size to get a better margin of error isn't going to help you if there's still a systematic bias in your sampling. You'll get a precise but inaccurate value. You really need to stir up the bucket first.

40 non-random people = inaccurate, imprecise.
40 truly random people = accurate, but imprecise.
1,200 non-random people = inaccurate, but precise
1,200 random people = accurate and precise.
posted by Jimbob at 1:49 AM on July 22, 2008
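
Jimbob's bucket is also easy to simulate. A sketch with invented numbers, comparing honest random draws against "scoop from the top" draws at both sample sizes:

```python
# Honest random sampling vs. sampling only from the top of the bucket.
import random

random.seed(2)
# 1,000 marbles, 600 blue / 400 red, but the reds have settled toward the top.
top    = ["red"] * 300 + ["blue"] * 200   # top half of the bucket
bottom = ["red"] * 100 + ["blue"] * 400   # bottom half
bucket = top + bottom                     # true blue share = 0.600

def one_estimate(sample_size, biased):
    pool = top if biased else bucket
    sample = [random.choice(pool) for _ in range(sample_size)]  # with replacement
    return sample.count("blue") / sample_size

for n in (40, 1200):
    for biased in (False, True):
        estimates = [one_estimate(n, biased) for _ in range(2000)]
        mean = sum(estimates) / len(estimates)
        sd = (sum((e - mean) ** 2 for e in estimates) / len(estimates)) ** 0.5
        label = "top-only" if biased else "random  "
        print(f"n={n:4d} {label}  mean={mean:.3f} (truth 0.600)  sd={sd:.3f}")
```

The spread (sd) shrinks as n grows, but the top-only estimates stay centered on the wrong value no matter how many marbles you grab: precision improves, accuracy doesn't.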


I work in retail analytics: in other words, I stare all day at terabytes of retailer data and hope to come up with a 'theory' (a repeatable pattern) that makes my clients money. This essay (and the Wired 'special' that preceded it) makes absolutely no sense. Fine, so you have insane amounts of data and you hope to make something out of it. But, even if you use some self-trained magical algorithm, like a neural network to find a pattern out of that data, you have to make some presumptions about the nature of the data and what will be modeled by the neural net. There is no such thing as an algorithm that 'magically' comes up with a concept that fits the data. Humans must make some assumptions (as to what is valid, or interesting, or profitable) and at least pick the algorithm that will detect the patterns.

The Wired folk seem very impressed that algorithms pick patterns. Erm, great, that's what pattern-matching algorithms do. For example, they breathlessly mention that given enough data you can statistically translate English to Chinese. But do they also mention, for example, that the training corpus picked for that translation must match exactly (i.e. be translated by humans from English to Chinese first) before the algorithm can make its statistical analysis? That the quality of the original translation thus directly influences the model? That you'll probably have to code some English/Chinese rules into your parser so that you get more matches (ignore spelling mistakes, or extra, commas)?

Clarke was right: Any sufficiently advanced technology is indistinguishable from magic. And magic can sell magazines, apparently.
posted by costas at 2:15 AM on July 22, 2008 [2 favorites]


You've got a bucket with 1,000 marbles in it. Red ones and blue ones, except you can't see them inside... bla bla bla

I don't think anyone here is lacking a 3rd grader's understanding of random sampling.

40 non-random people = inaccurate, imprecise.
40 truly random people = accurate, but imprecise.
1,200 non-random people = inaccurate, but precise
1,200 random people = accurate and precise.


Right... well anyway there's a whole bunch of math involved in figuring these things out. One thing (looking back over the various Wikipedia articles) is that there is also sampling error. That is, suppose there is some probability that if you grabbed a red marble you might actually think it was blue, or perhaps the temp you hired to actually check the marbles might fill in the wrong box on his scantron form. or whatever.

I think that might be one of the reasons pollsters sample more people than would be strictly necessary if you were measuring something like the error rate of some machine, or the number of marbles in a bucket.
posted by delmoi at 2:36 AM on July 22, 2008


I think that might be one of the reasons pollsters sample more people than would be strictly necessary


It's called statistical power; you always get better results by sampling more people. It's not that complicated, hence my 3rd grader explanation. Accidentally grabbing more red marbles than blue is equivalent to accidentally thinking a red marble is blue, or accidentally filling in the wrong box.

My example indicated that unless you sample all 1,000 marbles, you'll never know the actual answer. And that's not sampling, that's a census.

So, what is an acceptable margin of error? Why should we be happy with 3%, not 1%? What number of people is "strictly necessary" anyway? It's pretty irrelevant. What's necessary is for pollsters to sample exactly the minimum number of people that produces a result that is in some way broadly comparable to actual election results so that newspapers will continue to buy their polls. That's not statistics, that's business.

The interesting maths in opinion polls is related to correcting for systematic error, not correcting for sampling error. If you ring people up at 6pm, you might get more middle aged people than young people. You note that the young people you do get on the phone vote one way more than another, so you bias the overall results to attempt to correct for this. But what if the young people who are at home in the evening vote one way more than the young people who are out in the evening? Interesting. But just increasing your sample size to correct for sampling error is 3rd grade stuff.
posted by Jimbob at 6:33 AM on July 22, 2008
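
The reweighting Jimbob mentions (correcting for reaching too few young people on the phone) looks roughly like this in miniature. Every number below is invented for illustration:

```python
# Reweight a phone sample whose age mix doesn't match the population.
from collections import Counter

# (age_group, answered_yes) for each respondent; young people are underrepresented
respondents = (
    [("young", True)] * 12 + [("young", False)] * 8
    + [("old", True)] * 30 + [("old", False)] * 50
)

population_share = {"young": 0.40, "old": 0.60}   # e.g. known from census data

n = len(respondents)
sample_counts = Counter(group for group, _ in respondents)
# each respondent counts for (population share / sample share) of a person
weights = {g: population_share[g] / (sample_counts[g] / n) for g in population_share}

raw = sum(yes for _, yes in respondents) / n
weighted = (sum(weights[g] for g, yes in respondents if yes)
            / sum(weights[g] for g, _ in respondents))

print(f"raw yes-share      = {raw:.3f}")       # dominated by older respondents
print(f"weighted yes-share = {weighted:.3f}")  # closer to the true population mix
```

Of course, this only fixes the imbalance you can see (age, here); it can't tell you whether the young people who answer the phone differ from the ones who don't, which is exactly Jimbob's point.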


Networking of computers won't do it, but networking of humans might.

The series Serial Experiments Lain proposes the interesting idea that such a super-human hive mind might well appear, but seems to assume that there would only be one such. It ends up as a variation of "the internet wakes up and becomes intelligent", only it is the humans who use the internet who are the fundamental computing elements of that intelligence rather than the computers connected to it.

I think that the spontaneous appearance of hive minds in the internet is a very real phenomenon already, but I don't credit the idea that it would happen exactly once, more or less at a single moment, and that the result would be permanent and omnipresent.


So is he saying MetaFilter is going to bring about the Singularity?

I hope so, cause that would be way bitchin'. We could get T-shirts with our user numbers, saying "I am part of the Hive Mind". Collectively we would be able to vapourize our enemies, bring about paradise on Earth, sublime at will, etc etc.
posted by Meatbomb at 7:33 AM on July 22, 2008


And because of that, a true superhuman "intelligence" may appear during our lifetime.

But when it does, most of us won't even notice it, because it will be lost amidst the great sea of mediocrity and banality which will always dominate the internet and consume the vast majority of its bandwidth as long as humans exist.


Translation: stynxno is impeding our progress towards the Singularity with Lindsay Lohan links.
posted by Meatbomb at 7:45 AM on July 22, 2008


Hari Seldon is somewhere, I'm sure of it, giggling and doodling.
posted by FormlessOne at 11:12 AM on July 22, 2008


Meatbomb: You do know that you're quoting from Steven C. Den Beste's appropriately named weblog, notable for the extremity and bizarreness of its Iraq Invasion cheerleading?
posted by blasdelf at 11:17 AM on July 22, 2008


Ack! Cooties!
posted by Steven C. Den Beste at 3:35 PM on July 22, 2008 [2 favorites]


From the first link: Yet, as Peter Norvig, head of research at Google, once boasted to me, "Not one person who worked on the Chinese translator spoke Chinese."

Well, not one of my brain cells speaks English, but the whole collection of them does a passable job when linked by the nervous system to my mouth organ thingie.

Also from that link: The doctor may not ever find the actual cause of an ailment, or understand it if he/she did, but he/she can correctly predict the course and treat the symptom.

Predicting is not the same as intervening causally (i.e., effectively). For example, I can predict someone is a smoker pretty well by determining whether or not they carry a cigarette lighter. But banning lighters is a lousy way to prevent smoking, because it's not causal.

Right... well anyway there's a whole bunch of math involved in figuring these things out.

For samples from effectively infinite populations, the standard error (SE) of an estimated fraction p is (p*(1-p)/n)^(1/2), where n is the size of the sample. The confidence interval is based on the approximate normality of the estimator, so it is the estimated fraction +/- 1.96*SE; to get accuracy of +/- 3% for a fraction around 50% requires n = ((1.96/.03)^2)*.5*(1-.5) = ~1067. Note that this is for infinite populations. Finite populations increase the precision, but for populations in the tens of thousands, the correction is negligible. Survey samplers sometimes oversample strata to ensure representation of low-prevalence groups and reweight, and consequently have to increase the sample size to account for this loss of precision.

...there is also sampling error. That is, suppose there is some probability that if you grabbed a red marble you might actually think it was blue, or perhaps the temp you hired to actually check the marbles might fill in the wrong box on his scantron form. or whatever.

Sampling error simply refers to the random error introduced by sampling per se. What you describe is misclassification error, or more generally, measurement error, and isn't accounted for by the formula above. Depending upon its characteristics, it can not only further decrease the precision, but can introduce bias into the estimate as well.
posted by Mental Wimp at 6:20 PM on July 22, 2008
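
Mental Wimp's formulas, written out as code for anyone who wants to plug in their own numbers (standard large-sample approximations, nothing pollster-specific), with the sample sizes tossed around in this thread as inputs:

```python
# Standard error, ~95% margin of error, and required sample size for a proportion.
import math

def standard_error(p: float, n: int) -> float:
    """SE of an estimated proportion p from a sample of size n (infinite population)."""
    return math.sqrt(p * (1 - p) / n)

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of the ~95% confidence interval."""
    return z * standard_error(p, n)

def required_n(margin: float, p: float = 0.5, z: float = 1.96) -> int:
    """Smallest n giving the requested margin (worst case at p = 0.5)."""
    return math.ceil((z / margin) ** 2 * p * (1 - p))

for n in (40, 144, 600, 1200):    # the sample sizes tossed around in this thread
    print(f"n={n:5d}: margin of error ~ +/- {margin_of_error(0.5, n):.1%}")
print("n needed for +/- 3%:", required_n(0.03))   # 1068, i.e. the ~1067 above rounded up
```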


Regarding machine translation, check out this chapter from a textbook we used in a natural language processing course I took.

In particular, the bit about the Vauquois triangle might help explain this notion of using (or not using) a "theory of Chinese". Essentially, as you move further up the triangle, the idea, as I understand it, is that you are using more linguistic theory and knowledge representation to perform the translation.

At the lowest level, you just try to build up good correlations between source language words and target language words. At the highest level, you have an "interlingua" knowledge representation, where you understand (in this fictional knowledge representation scheme) what the source language means, and translate it at a conceptual level.

As far as I know (not far), most machine translation techniques live towards the bottom of the triangle, using the words and some syntactic structures to come up with statistical translation models. I don't think most deal in semantics. One could argue that humans live towards the top of the triangle.
posted by zpaine at 9:22 AM on July 23, 2008
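
A miniature of the bottom rung zpaine describes: guessing single-word translations purely from co-occurrence counts over an aligned corpus. This is nothing like Google's actual system; the four-sentence English/French "corpus" is invented, and the frequency normalization is just one simple choice among many:

```python
# Word-level "translation" from co-occurrence counts over a tiny aligned corpus.
from collections import Counter, defaultdict

aligned_corpus = [
    ("the cat sleeps", "le chat dort"),
    ("the dog sleeps", "le chien dort"),
    ("the cat eats",   "le chat mange"),
    ("the dog eats",   "le chien mange"),
]

cooccur = defaultdict(Counter)   # cooccur[source_word][target_word] = count
tgt_totals = Counter()           # how often each target word appears overall

for src_sentence, tgt_sentence in aligned_corpus:
    tgt_words = tgt_sentence.split()
    tgt_totals.update(tgt_words)
    for s in src_sentence.split():
        for t in tgt_words:
            cooccur[s][t] += 1

def translate_word(word: str) -> str:
    """Pick the target word most strongly associated with `word`, normalizing so
    that words like 'le', which show up everywhere, don't win by frequency alone."""
    if word not in cooccur:
        return "?"
    return max(cooccur[word], key=lambda t: cooccur[word][t] / tgt_totals[t])

for w in ("cat", "dog", "sleeps", "eats"):
    print(w, "->", translate_word(w))   # chat, chien, dort, mange
```

Everything here lives in the words themselves; there is no grammar, no semantics, no interlingua -- which is roughly the sense in which the statistical approach is said to have "no theory" of the language.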


In the first link, Kevin Kelly writes:

"Not one person who worked on the Chinese translator spoke Chinese." There was no theory of Chinese, no understanding. Just data. (If anyone ever wanted a disproof of Searle's riddle of the Chinese Room, here it is.)

which seems to me to be almost the opposite of the truth. If anything, the Google statistical translation system is the Chinese Room incarnate. What is interesting is that something which Searle proposed as a thought experiment now exists in reality, and I think it would be fair to say that the program doesn't understand Chinese although it might give that impression to an external observer.
posted by jamespake at 6:18 AM on July 24, 2008




This thread has been archived and is closed to new comments