Why Philosophers Should Care About Computational Complexity
May 5, 2013 8:47 PM

"One might think that, once we know something is computable, how efficiently it can be computed is a practical question with little further philosophical importance. In this essay, I offer a detailed case that one would be wrong. In particular, I argue that computational complexity theory---the field that studies the resources (such as time, space, and randomness) needed to solve computational problems---leads to new perspectives on the nature of mathematical knowledge, the strong AI debate, computationalism, the problem of logical omniscience, Hume's problem of induction, Goodman's grue riddle, the foundations of quantum mechanics, economic rationality, closed timelike curves, and several other topics of philosophical interest. I end by discussing aspects of complexity theory itself that could benefit from philosophical analysis."
posted by cthuljew (31 comments total) 58 users marked this as a favorite
 
Funny, this doesn't sound like Gene Ray?
posted by sammyo at 8:51 PM on May 5, 2013


Oh gosh no, he's just having an uber-MeFi-ish skirmish with one of Gene's disciples.
posted by sammyo at 8:57 PM on May 5, 2013 [1 favorite]


This is awesome, and a beast, and I haven't gotten through it all, but I think that this self-acknowledged criticism is quite important:
4) Complexity theory studies only the worst-case behavior of algorithms, and does not address whether that behavior is representative, or whether it merely reflects a few “pathological” inputs. So for example, even if P ≠ NP, there might still be excellent heuristics to solve most instances of NP-complete problems that actually arise in practice; complexity theory tells us nothing about such possibilities one way or the other.
My bread and butter involves heuristically solving certain large instances of an NP-hard class, and though I use a heuristic optimizer, in the end it doesn't necessarily matter that it's NP-hard, because my problem instances are solvable.

Computational tractability is an important facet of reasoning about lots of problems, but asymptotic worst-case analysis is not sufficient to perform the most interesting analyses for most problems in the world. Average-case analysis requires assuming a distribution over the likelihood of occurrence of instances, and greatly complicates the mathematics, but it's probably more important.
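For a concrete toy version of what I mean (my own sketch, nothing from the paper): a greedy nearest-neighbor heuristic for the NP-hard traveling salesman problem. It carries no worst-case guarantee whatsoever, yet on many practical instances it produces perfectly usable tours:

```python
import math
import random

def nearest_neighbor_tour(points):
    """Greedy TSP heuristic: repeatedly hop to the closest unvisited city.

    TSP is NP-hard in the worst case, but this O(n^2) heuristic often
    yields decent tours on the instances that actually show up in practice.
    """
    unvisited = set(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(points, tour):
    """Total length of the closed tour, wrapping back to the start."""
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

random.seed(0)
cities = [(random.random(), random.random()) for _ in range(200)]
tour = nearest_neighbor_tour(cities)
print(f"heuristic tour length: {tour_length(cities, tour):.3f}")
```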

The author's blog is also great, btw.
posted by Llama-Lime at 8:59 PM on May 5, 2013 [5 favorites]


Just for people (like me) who are wondering how they know this person, it's Scott Aaronson, MIT professor and blogger at Shtetl-Optimized.
posted by LobsterMitten at 9:00 PM on May 5, 2013




I knew that must be Aaronson.

He is one of the very best things Metafilter has ever led me to, and it's mortifying to recall that I said sarcastic things about a short story he wrote as a kid before I realized how truly amazing he is.
posted by jamjam at 9:22 PM on May 5, 2013


Oooh. This looks like perfect bedtime reading for me!!

Nice juicy stuff! Can't wait!
posted by flyinghamster at 9:30 PM on May 5, 2013 [1 favorite]


I read (most of) that some time ago, but sadly the only observation I recorded was "Hm, I'm really unsure how the argument in §6 against the waterfall argument is supposed to work. Doesn't he give the game away by starting it 'suppose we actually wanted to use a waterfall to help us calculate chess moves' (emphasis added)? If I thought that meaning was a property not relative to an observer, why would it matter whether some observers need to go through a lot of bother—enough bother that they could, by going through an equivalent amount of bother, get their answer in a different way—in order to determine the meaning of some particular computation?", made before I had even read the preceding sections. Even after having read more of it, though, I wasn't really convinced that computational complexity helped with the principled problems involved in Hume's or Goodman's problems with induction, for instance.
posted by kenko at 9:31 PM on May 5, 2013


I came here to say just what Llama-Lime wrote, with an additional quibble:

The article says

> Complexity theory studies only the worst-case behavior of algorithms,

but that isn't the whole truth - it studies the asymptotic worst-case behavior - in other words, what happens to the behavior as the size of the input approaches infinity.

This isn't just an academic distinction. Consider the problem of multiplying two n × n matrices. The naïve approach is O(n^3), but there is a succession of algorithms with better asymptotic complexity...

...except that in practice they often run slower.

The original improvement, Strassen's algorithm, is pretty easy to understand but isn't really practical until you get to moderately large matrices (of order 100 or greater) - and it has some undesirable characteristics like numerical instability (which means you might get very bad results for some problems if you're using finite precision arithmetic).

But there's a whole series of algorithms after that, each one improving the asymptotic complexity of matrix multiplication - but in practice the constant factor hidden inside the O() is so large that you'd never actually want to use them.

In fact, I'd guess that it's often the case that the best algorithm for fairly small inputs is quite different from the best algorithm for unbounded inputs...
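To make the Strassen point concrete, here's a rough sketch (mine; simplified to square power-of-two sizes and completely untuned) of the recursion that uses 7 sub-multiplications instead of the obvious 8, falling back to ordinary multiplication below a crossover size - which is exactly where all the practical engineering lives:

```python
import numpy as np

CROSSOVER = 128  # below this size, plain multiplication wins in practice

def strassen(A, B):
    """Strassen's O(n^2.807) matrix multiply, assuming power-of-two sizes.

    Recurses with 7 sub-multiplications instead of the naive 8, but falls
    back to ordinary multiplication below CROSSOVER, where the extra
    additions and bookkeeping outweigh the asymptotic saving.
    """
    n = A.shape[0]
    if n <= CROSSOVER:
        return A @ B
    k = n // 2
    A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
    B11, B12, B21, B22 = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    return np.block([[M1 + M4 - M5 + M7, M3 + M5],
                     [M2 + M4, M1 - M2 + M3 + M6]])

A = np.random.rand(256, 256)
B = np.random.rand(256, 256)
assert np.allclose(strassen(A, B), A @ B)
```

(And this also illustrates the numerical-instability caveat: the extra additions and subtractions lose precision relative to the naïve method.)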
posted by lupus_yonderboy at 9:33 PM on May 5, 2013 [2 favorites]


(Oh, and I haven't finished TFA - I'm so far skeptical but willing to be convinced...)
posted by lupus_yonderboy at 9:34 PM on May 5, 2013


Why Analytic Philosophers Should Care About Computational Complexity would be a more accurate title for the paper. I don't think the NYT's The Stone will be writing about this stuff anytime soon, if only because it's too mathy.
posted by polymodus at 9:34 PM on May 5, 2013


Why Analytic Philosophers Should Care About Computational Complexity

There are more than two kinds of philosopher, you know.

Nothing in the paper even purports to tell me why I qua philosopher (I worked on action and self-knowledge) should care about computational complexity.
posted by kenko at 9:36 PM on May 5, 2013 [2 favorites]


His remarks on the Problem of Logical Omniscience are kinda interesting. He's suggesting that accounts of knowledge that make knowledge-how (like: knowing how to ride a bike) metaphysically prior to knowledge-that (like: knowing that bikes are cool) will be better equipped to handle logical omniscience. Not my area of expertise, but it seems like a neat suggestion.

But man, he sells his product terribly. That abstract makes him sound like a crank.
posted by painquale at 9:37 PM on May 5, 2013 [1 favorite]


Having just looked it up, I'm gravely disappointed that Goodman's grue paradox has nothing to do with the things that'll eat you if you wander around in the dark without a lantern.
posted by sourcequench at 9:49 PM on May 5, 2013 [3 favorites]


All the other MeFi philosophers are posting, but I just handed in my grades for the semester and I'm way too tired to do anything other than post an "I was here" post.

Oddman, was here.
posted by oddman at 10:04 PM on May 5, 2013 [4 favorites]


Firstly this whole comment is a sidebar irrelevant to the paper in the top link, and I apologize for that. But...

"Oh gosh no, he's just having a uber-mifi-ish skirmish with one of Genes disciples."

I love this Luboš Motl guy! No idea why sammyo thinks he's a follower of the Time Cube, because Luboš seems to have a pretty good head on his shoulders. In that blog entry he's criticizing one of Scott Aaronson's earlier books (not TFA), and he addresses an argument I've heard before! (In the Stephen Baxter novel Manifold: Time.) Coolness.
Doom Soon and Doom Late

This is also the case of the "doomsday is probably coming" argument. Imagine that there are two possible worlds. In one of them, the doom arrives when the human population is just somewhat higher than 7 billion (Doom Soon). In the other one (Doom Late), the population reaches many quintillions (billions of times larger than the current population).

Again, just like in the hair color case, if we have reasons to expect that the prior probability of both worlds are equally or comparably large, then we have no justification to "correct" or "update" these probabilities. The existence of 7 billion people is compatible both with Doom Soon and with Doom Late. So both possible scenarios remain equally or comparably likely!

The totally irrational anthropic argument says that Doom Soon is 1 billion times more likely because it would be very unlikely for us to be among the first 7 billion – one billionth of the overall human population throughout the history. This totally wrong argument says that we're observing something that is unlikely according to the Doom Late scenario – only 1/1,000,000,000 of the overall history's people have already lived – and our belief that we live in the Doom Late world must be reduced by the factor of one billion, too.

That's wrong and based on all the mistakes we have mentioned above and more. The main mistake is the acausality of this would-be argument. The argument says that we are "observing" quintillions of people. But we are not observing quintillions of people. We are observing just 7 billion people. If the Doom Late hypothesis is true, one may derive that the mankind will grow by another factor of one billion. But if we can derive it, then it's not unlikely at all that the current population is just 1/1,000,000,000 of the overall history's population. Instead, it is inevitable: p=1. So the suppression by the factor of 1 billion is completely irrational, wrong, idiotic, and stupid.

The only theory in which it makes sense to talk about quintillions of people – the Doom Late theory – makes it inevitable that the people aren't distributed uniformly over time. Instead, they live in an exponentially growing tree. So there's manifestly no "intertemporal democracy" between them that could imply that we're equally likely to be one of the early humans or one of the later ones. We're clearly not. It is inevitable that in most moments of such Universe's history, the number of people who have already lived is a tiny fraction of the cumulative number of the people in the history (including the future).
I've always known that The Doomsday Argument was wrong, but couldn't put it into words like this guy does. (The Wikipedia article I just linked has a bunch of rebuttals, but whatever worth they may have is hidden in a cloud of mathematics that's as impenetrable as squid ink.)

Basically, he's saying that the question "Am I in the last generation?" is not something that can be solved statistically, because the time and place we're born in isn't distributed randomly across the ages. You don't have an equally likely chance of being born at the dawn of time or at the end; your probability of being born now is 1.

And for extra credit, this same argument works against The Simulation hypothesis, which supposedly "proves" we're all software running in a giant computer at the end of time.
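If you want to see the naive update the quote is attacking actually written out, here's a toy Bayes calculation (my own sketch, with round illustrative numbers, not anything from Motl's post):

```python
# The naive "self-sampling" doomsday update the quoted passage attacks:
# two hypotheses with equal priors, and a likelihood equal to the chance
# of having a birth rank within the first 7 billion under a uniformly
# random rank. (Illustrative round numbers, not real demographics.)
prior = {"Doom Soon": 0.5, "Doom Late": 0.5}
total_people = {"Doom Soon": 7e9, "Doom Late": 7e18}
rank_cutoff = 7e9  # the observation: our birth rank is <= 7 billion

likelihood = {h: min(1.0, rank_cutoff / n) for h, n in total_people.items()}
evidence = sum(prior[h] * likelihood[h] for h in prior)
posterior = {h: prior[h] * likelihood[h] / evidence for h in prior}
print(posterior)  # Doom Soon comes out a billion times more probable
```

Motl's point, as I read him, is that the uniform-random-rank assumption baked into that likelihood line is exactly the step with no justification: in the Doom Late world there's no "intertemporal democracy" among observers.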
posted by Kevin Street at 10:31 PM on May 5, 2013 [8 favorites]


Uh oh, I'm experiencing sensations of defensiveness as a philosopher!

I wonder whether Aaronson mostly did a search for "complexity theory" in the titles of philosophy papers to figure out its impact on the field rather than talking with anyone in the MIT department.

Complexity theory is pretty deeply embedded in philosophy of mind/cognitive science and in areas of epistemology focused on understanding psychologically realistic models of knowledge acquisition and their relationship to inductive and deductive logics.

E.g., there are deep divisions in phil mind/cognitive science over whether results in complexity theory undermine the computational theory of mind, or whether they provide evidence that our minds massively rely on heuristics and other non-optimal strategies but are nevertheless computational.

In an argument that's been going on since at least the 80s, philosophers like Jerry Fodor have argued the former, while someone like Christopher Cherniak (whom Aaronson cites, but I'm not sure understands) has argued the latter. This looks to me like the type of AI debate that Aaronson wants to be influenced by complexity theory.

The influence of complexity theory in epistemology also goes back to at least the 80s. People like Gilbert Harman argue that Bayesianism and other logics don't characterize actual norms of reasoning b/c complexity theory results show that mental algorithmic procedures could not conform to them. Philosophers like Elijah Millgram don't agree. This looks to me like the type of discussion over idealizations like logical omniscience that Aaronson wants to be influenced by complexity theory.

Also - I published an article last year on scientific inference that was informed by complexity theory, so I'm ... wounded by this post.

tl;dr: philosophy has been significantly influenced by computational complexity theory for decades, precisely in the areas where the author is proposing a meet-up. Commenter is personally offended.
posted by airing nerdy laundry at 10:35 PM on May 5, 2013 [10 favorites]


I love METAFILTER!!!!!
posted by flyinghamster at 10:47 PM on May 5, 2013


airing nerdy laundry: I think you have a misconception about what "computational complexity theory" really is.

If you are interested in "how fast your program will run in practice" (i.e. can your mind really do everything it's supposed to?) then this is not relevant to you.

Here are a couple of examples for you as to why...

First, in computational complexity theory, you can always throw away all of the smallest cases when you do your analysis. The field only studies asymptotic behavior - the definition(*) of complexity says that f(x) = O(g(x)) if and only if there is a positive constant M such that |f(x)| ≤ M·|g(x)| for all sufficiently large values of x.

Note the "for all sufficiently large values of x". That might be "really large". So it's perfectly plausible that some algorithm performs particularly badly on inputs, say, less than 10^100 (a googol) in length - but then does a little better after that, and thus has "better complexity" - even though for all practical purposes it's terrible.

Second, note the existence of that constant M. That constant M can be really, really large. Consider, for example, a program that runs in 10^100 seconds regardless of the size of its inputs - a program that could never complete in this universe or any other. This program is O(1) - the "simplest" complexity class.

Now, in practice programmers informally(**) use O() to compare the gross behavior of certain algorithms, but that's sort of "baby" complexity theory - what the "big guys" in the field do is not really at all relevant to the question of "how quickly will this algorithm run in practice?"


(* - you can also talk about "computational complexity at a point" but that only has a mathematical meaning, not a practical one...)

(** - in practice, you avoid obviously bad algorithms and then profile when you need to speed up your program - often "worse" algorithms will perform better in your application, e.g. bubble sort might outperform heapsort because all your inputs are small and mostly sorted...)
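(And to put a toy number on that last footnote, here's a quick, unscientific timing sketch of my own - exact results will vary by machine, but on small, nearly sorted inputs the "worse" algorithm tends to win:)

```python
import heapq
import timeit

def bubble_sort(a):
    """Bubble sort with early exit: O(n^2) worst case, O(n) when nearly sorted."""
    a = list(a)
    for end in range(len(a) - 1, 0, -1):
        swapped = False
        for i in range(end):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        if not swapped:  # a full pass with no swaps means we're done
            break
    return a

def heap_sort(a):
    """Heapsort via heapq: O(n log n) regardless of input order."""
    h = list(a)
    heapq.heapify(h)
    return [heapq.heappop(h) for _ in range(len(h))]

# A small, almost-sorted input: one adjacent pair out of order.
nearly_sorted = list(range(30))
nearly_sorted[10], nearly_sorted[11] = nearly_sorted[11], nearly_sorted[10]

for fn in (bubble_sort, heap_sort):
    t = timeit.timeit(lambda: fn(nearly_sorted), number=10000)
    print(f"{fn.__name__}: {t:.3f}s")
```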
posted by lupus_yonderboy at 10:52 PM on May 5, 2013 [1 favorite]


Hmm, sorry about that first sentence, that came out waaaay snarky, and I missed the edit window.
posted by lupus_yonderboy at 11:06 PM on May 5, 2013 [1 favorite]


lupus_yonderboy - no worries about tone.

Notice your comment doesn't touch on epistemology and whether a given logical rule governs how we humans ought to reason. Typically such rules are defined for large input values/worst cases. If complexity theory points toward intractability for something like Bayes' rule, some epistemologists would say Bayes' rule doesn't characterize the way we ought to reason b/c we cannot even in principle do so.

In general, I don't see why complexity theory mightn't tell us something about what idealized reasoning norms are appropriate.

As for what complexity theory says about the psychological plausibility of mental computational models, you might be right that it has few implications. My understanding is that most interesting computational problems are superpolynomial in the worst case. So tractability mostly characterizes problems that aren't of interest to computational modeling. You also point out that intractable algorithms might perform very well in non-worst cases.

Hey, I never said both sides in a philosophy debate got the implications of complexity theory right (though I do think there's a bit more to it), I just said the field had been fighting about the implications for decades.
posted by airing nerdy laundry at 12:05 AM on May 6, 2013


Thanks for posting this, although I wish it were something I found more convincing. Content aside, I think it suffers from a poor choice of structure for its intended audience and from touching on too many topics in too shallow a fashion; at least IMHO it would've been advisable for the author to pick one or two sub-sections' topics and go into them much further than this format allowed.
posted by hoople at 6:48 AM on May 6, 2013


Did you download the paper, hoople? I haven't read through the whole thing (because frankly, most of it is beyond my understanding), but it's a 58-page extract from Aaronson's newest book.
posted by Kevin Street at 10:52 AM on May 6, 2013


A lot of systems can be predicted in useful ways, based on how their complexity splays out. At least, that's what I'm finding, across a surprising number of domains (Bitcoin, Business, Network Architecture, etc.)
posted by effugas at 11:09 AM on May 6, 2013


I don't particularly enjoy this writing style.

The work seems to seek only to conclude that the question is one worth asking, and while it gives plenty of reference to idiosyncrasies of either field, it gives little reference to the attention that philosophers have actually paid to computational complexity. The implication that the question has not been asked, merely because the phrase "computational complexity" has not been used, is absurd and gives the entire piece a somewhat tautological feel. (Almost as absurd as the seeming claim contained therein that philosophers just don't think about the complexity of computation because they don't enjoy viewing mathematical symbols!)

It is intuitive to assume that there was at least an informal model of computation that the author used to develop a method for the semantics in the paper, since the very subject of the piece is computational complexity theory. Perhaps the author wanted to finish the communication within a human lifetime, which would by necessity impose a certain limitation.

Still, I would have liked to have seen at least an attempt to reconcile the particular insights that philosophical works have had regarding computational complexity with the formal discipline of comp. complx.
posted by flyinghamster at 2:58 PM on May 6, 2013


A lot of systems can be predicted in useful ways, based on how their complexity splays out

Completely different kind of complexity.
posted by empath at 3:37 PM on May 6, 2013


People like Gilbert Harman argue that Bayesianism and other logics don't characterize actual norms of reasoning b/c complexity theory results show that mental algorithmic procedures could not conform to them.

Again, this is a completely different kind of complexity, and misunderstandings like this are what he was trying to correct in his paper. He is essentially talking only about practical limits to computability here, not the whole set of phenomena around chaos theory, cellular automata and so on that is also called complexity. If you're thinking of the butterfly effect and sensitive dependence on initial conditions, that's a completely different field of study.

If Harman is not talking about how long it takes to compute something, or how much memory or space it requires to compute it, he is not talking about computational complexity.
posted by empath at 3:45 PM on May 6, 2013


"If Harman is not talking about how long it takes to compute something, or how much memory or space it requires to compute it, he is not talking about computational complexity."

Harman was talking about that, not sure why you think otherwise.

One of Harman's arguments is that for n background beliefs, Bayesian updating requires 2^n additions. So complexity theory says that's not tractable or feasible for even a small stock of background beliefs. Harman goes on to argue that since humans cannot conform to Bayesianism, it's not true in any sense that we ought to.

Lots of philosophers who've wondered whether logics are normative guides to human reasoning have consulted computational complexity theory. I'm not saying the Aaronson article is w/out interest, but it's also true that almost anyone in MIT's phil dept. could've pointed him toward this kind of overlap.
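Here's a toy version of that combinatorial point (my own sketch, not Harman's formalism): conditioning a full joint distribution over n binary beliefs touches every one of its 2^n entries, so the cost doubles with each new belief.

```python
from itertools import product

def joint_update(prior, likelihood):
    """Condition a full joint distribution over n binary beliefs on evidence.

    The table has 2^n entries, so even one update does O(2^n) work:
    with n = 30 beliefs that's over a billion multiplications, plus
    the 2^n additions needed for the normalizing constant.
    """
    posterior = {w: p * likelihood(w) for w, p in prior.items()}
    z = sum(posterior.values())  # normalizing constant: 2^n additions
    return {w: p / z for w, p in posterior.items()}

n = 16  # already 65,536 table entries; n = 30 would be ~10^9
worlds = list(product([False, True], repeat=n))
prior = {w: 1 / len(worlds) for w in worlds}

# Evidence: the first belief is twice as likely to be true as false.
posterior = joint_update(prior, lambda w: 2.0 if w[0] else 1.0)
print(sum(p for w, p in posterior.items() if w[0]))  # -> ~0.667
```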
posted by airing nerdy laundry at 4:10 PM on May 6, 2013 [1 favorite]


Lots of philosophers who've wondered whether logics are normative guides to human reasoning have consulted computational complexity theory. I'm not saying the Aaronson article is w/out interest, but it's also true that almost anyone in MIT's phil dept. could've pt.ed him toward this kind of overlap.


In the acknowledgments, he thanks two MIT philosophy professors for suggesting he write Sections 5 and 6 (others may have been thanked as well; I stopped looking after 2 of the first 3 names were MIT faculty). Perhaps it's not as clear-cut as you think.
posted by BlueDuke at 8:26 PM on May 6, 2013 [3 favorites]


I may have judged this a little harshly, but the introduction is not particularly endearing and many of the references are abrupt.
posted by flyinghamster at 8:43 PM on May 6, 2013


Kevin Street: I read it. I'm not a practitioner in either area, but I'm familiar enough with enough of the topics discussed to feel comfortable in my assessment that it's unlikely to interest very many professional philosophers in taking a look -- or another look -- at computational complexity. If it's an excerpt from a book targeted at a popular audience its approach makes more sense.
posted by hoople at 8:52 PM on May 6, 2013

