# The P vs. NP problem and its importance

August 27, 2009 3:46 PM

The Status of the P Versus NP Problem

*It's one of the fundamental mathematical problems of our time, and its importance grows with the rise of powerful computers.* (via mr)

I can't get past the second section.

posted by desjardins at 4:15 PM on August 27, 2009

The article says: *The class of problems with efficient solutions would later become known as P for "Polynomial Time."*

Then in the next paragraph: *The collection of problems that have efficiently verifiable solutions is known as NP (for "Nondeterministic Polynomial-Time," if you have to ask). So P = NP means that for every problem that has an efficiently verifiable solution, we can find that solution efficiently as well.*

Other than calling one a class and one a collection, what's the difference?

what.

posted by desjardins at 4:15 PM on August 27, 2009

Given a bit of time to chew on it, I can comprehend P and NP. But I have to do it every time.

I attribute the fact that I didn't go beyond undergrad in CS to my inability to intuitively grasp P/NP.

posted by flaterik at 4:22 PM on August 27, 2009 [2 favorites]


desjardins, what it breaks down to is "efficient solutions" vs "efficiently *verifiable* solutions"

posted by flaterik at 4:23 PM on August 27, 2009

P is a subset of NP. The question is whether they're identical.

The technical difference has to do with (classical) Turing machines and what they're allowed to do. Basically, a Turing machine is a simple computer; it has an internal 'state' and an (infinite) paper tape upon which its input is given and it may read/write symbols -- each symbol takes up a 'square.' A deterministic Turing machine reads the current square and, depending on its internal state, performs a single action. A non-deterministic one can perform multiple actions based on the current situation.

To illustrate the difference, think of Deep Blue, IBM's chess-playing computer. It essentially plays by working out the outcome of a series of moves within a given time limit (the speed of the processor governs how many possible series of moves it can evaluate in the limit). It tries one move, then the counter-move, then the counter-counter-move, etc. until it decides that series of moves is sub-optimal (e.g., it leads to Deep Blue losing) or it runs out of time. At the end of the time limit, it makes the move corresponding to the first move in the best series it's discovered so far; it's possible that there is a better series of moves it didn't have time to investigate.

A non-deterministic Deep Blue would investigate all possible move series at once.

posted by axiom at 4:25 PM on August 27, 2009 [3 favorites]
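axiom's deterministic-vs-nondeterministic distinction can be sketched in a few lines of Python; the three "moves" and the winning line below are made-up toys for illustration, not real chess logic:

```python
from itertools import product

MOVES = ["a", "b", "c"]  # hypothetical move labels, purely for illustration

def wins(line):
    # Toy stand-in for "this series of moves leads to a win"
    return line == ("b", "a", "c")

def deterministic_search(depth=3):
    """A deterministic machine tries candidate lines one after another."""
    tried = 0
    for line in product(MOVES, repeat=depth):
        tried += 1
        if wins(line):
            return line, tried
    return None, tried

# A nondeterministic machine would, in effect, test all 27 lines in one step;
# the deterministic one pays for them one at a time:
print(deterministic_search())  # (('b', 'a', 'c'), 12)
```

A nondeterministic Turing machine is a mathematical device, not buildable hardware; the point is only that the deterministic version's cost grows with the number of branches it must try in sequence.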


desjardins: it's not actually class versus collection. It's like this: if a problem is in NP, that means we can verify a solution in polynomially-bounded time. Like how you can put a key into a lock and it doesn't really matter how "hard" the lock is, you can verify the solution. On the other hand, it's very difficult to figure out what that key will be just by walking up to the door - you'd have to have a whole crapload of keys and try them one at a time. There's no deterministic way of figuring out which key (or solution) is the right one, you just have to try them.

On the other hand, if a problem is in P, you can find a solution in a reasonable amount of time.

So why do we care? Well, the problems that cryptography is based on (for instance) are in NP. We rely on the fact that you can't just take a message and calculate the key to decrypt it; instead, you'd have to try many many keys. If we discover that P=NP, that will make it clear that there actually /is/ a way to solve these problems easily, making current methods of doing crypto useless - we'd have to go back and figure out how to keep messages secret from scratch.

Okay, so I lied here a bit, but mostly to make this simple.

posted by spaceman_spiff at 4:27 PM on August 27, 2009
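spaceman_spiff's lock-and-key picture can be made concrete with a toy Python sketch; the 5-bit "key" below is a hypothetical stand-in for a real cryptographic key, which would be hundreds of bits long:

```python
from itertools import product

SECRET = "10110"  # hypothetical toy key; real keys are far longer

def fits_lock(candidate):
    # Verifying a candidate key is cheap: one check, linear in key length.
    return candidate == SECRET

# Finding the key with no inside knowledge means trying keys one at a time:
attempts = 0
for bits in product("01", repeat=len(SECRET)):
    attempts += 1
    if fits_lock("".join(bits)):
        break
print(attempts)  # 23 tries here; up to 2**5 = 32 in the worst case
```

Each extra key bit doubles the worst-case search but adds only one step to verification, which is the solving-versus-verifying asymmetry in miniature.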


Isn't quantum computing supposed to make the difference irrelevant (once they get it to work)?

posted by rottytooth at 4:35 PM on August 27, 2009


I understand most of this stuff but I thought the article was really poorly written.


*Other than calling one a class and one a collection, what's the difference?*

The difference is that for one class you can **find** the solution in polynomial time, whereas with the other one you can **verify that the solution is correct in polynomial time**.

Take for example a Rubik's cube. Solving a Rubik's cube is actually an NP problem. If someone hands you a Rubik's cube and you try to solve it by trying all possible combinations of turns, it would take forever. If, on the other hand, someone handed you a Rubik's cube and a list of twists, you could quickly tell whether the list was correct or not. Just apply the twists and see if at the end the cube is solved.

Restating once more: P-time problems can be **solved** quickly, while NP-Complete problems can **have their solutions verified quickly**.

Another example would be a hash of a file. You can validate the hash quickly, but it would take forever to come up with a file that matched a given hash (which is the whole point of a hash).

Now, actually a lot of people can solve Rubik's cubes without too much trouble, but that just illustrates how good *heuristics* can be for *some* NP problems. For most cubes, it doesn't end up taking a lot of time to solve them, but theoretically it could take a very long time.

posted by delmoi at 4:38 PM on August 27, 2009 [3 favorites]
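delmoi's hash example runs as-is with Python's standard hashlib; the input bytes are arbitrary:

```python
import hashlib

data = b"hello metafilter"  # arbitrary example input
digest = hashlib.sha256(data).hexdigest()

def verify(candidate, expected):
    # Verifying is a single cheap hash computation.
    return hashlib.sha256(candidate).hexdigest() == expected

print(verify(data, digest))         # True
print(verify(b"tampered", digest))  # False

# Going the other way -- finding *some* input that produces a given digest --
# has no known shortcut: brute force over inputs, roughly 2**256 attempts
# for SHA-256, which is the whole point of a cryptographic hash.
```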

*Isn't quantum computing supposed to make the difference irrelevant (once they get it to work)?*

Not at all. Quantum computers don't do anything to attack NP. Quantum computers can speed up *some* P-time computations. No one knows the relationship between BQP (bounded-error quantum polynomial time) and NP.

Wikipedia has this handy chart of which complexity classes are in which.

posted by delmoi at 4:40 PM on August 27, 2009

P vs NP is Absolutely Crazy Shit. It takes a year to just understand the premise, after which a slowly rising crest of goosebumps mounts on your wretched body. And swells for a decade.


Please trust me on this, you really, really want to think a bit about P vs NP. Read the definitions. Take a nap. Reread them. Think about them on the bus. Stuff like that.

Now, P vs NP. What is it?

The basic philosophical question is this:

If you can understand something quickly, instantly recognise it, could you *do* it?

Note that this is taking us into the territory of human power; the vast, strange sphere of abstract capacity, in which true geniuses are measured to surpass normal people. E.g., a true genius can easily perform some act, and you can easily recognise the skill involved, the majestic beauty. But you can't *do it*.

If we prove that P is NP, we have proven that with the right methods, everyone, everything is a genius.

If we prove that P isn't NP, we have mathematically proven that essentially if you aren't a genius at something, there are some capabilities that are simply *out of your reach*, forever, until the end of time.

And the kicker is that thousands of true, true geniuses *haven't been able to prove it either way*. It's not that we don't know; all we know is that we cannot even yet know if we know.

posted by krilli at 4:40 PM on August 27, 2009 [23 favorites]

So what polynomial time means is that if you take a problem with a given input size (let's say, n pieces of input; you could imagine this as something like the length of an input phrase in alphabetic characters, where input "mefi" is four characters long and so n = 4, "metafilter" n = 10, etc).


And you can describe how long it takes, in the worst case scenario, to compute the solution to some problem for which this or that string of length n is the input.

The problem could be anything: alphabetize the input, determine whether the input contains more vowels than consonants, whatever. There's a discrete task to be done based on the input, with a specific expected result, and it takes a certain amount of computational resources t (time, in other words) to accomplish the task.

A problem that is in P, that is, that can be solved in polynomial time, is a problem whose time-to-calculate formula for the worst case scenario of any problem can be expressed in the format:

t = k * n^x

In other words, you have some polynomial degree x to which the size of the input n is raised.

A simple sorting algorithm might take on the order of n * n cpu time to work—our alphabetical sorting problem above, say. Compare each item in the input to each other item, the small ones alphabetically go to the front of the list, keeping going until you've done all the comparisons and then quit, that sort of thing. Because there are n items in the list and you need to compare them to each of the other items, it's n * n comparisons.

So that's a problem with a polynomial time solution of degree 2:

t = k * n^2

Every problem in P has some value of x that can slot into this to describe the solution time. Problems that are !P ("not P") don't have a solution time that can be described in polynomial time.

**With NP**, we're describing a different situation entirely. The nature of an NP problem is this: we can tell if a given answer is a correct answer to the problem in question in polynomial time.

So if you give me a list of characters like "aeefilmrtt" or "usnerkjls", I can tell you with an algorithm whether or not that list is in alphabetical order. And if my algorithm for *checking* if it's in alphabetical order can be done in t = k * n^x time, then that problem is in the class NP.

Note that it may be possible that you can check the correctness of a solution in polynomial time (NP) despite the fact that you can't actually *generate* that solution in the first place in polynomial time (!P).

And the discrepancy there is what's at the root of the "does P = NP" puzzle. And it's a very, very hard puzzle.

A couple caveats to the above (very glossed-over) explanation:

Polynomial time is an upper bound; it just means that we know it can't take *longer* than t time to accomplish a problem with input size n. A couple things that can still suck about a P problem:

- it could have a very large value of x. Polynomial time is great, but raising any number to the power of, say, 100 or 10,000 is going to produce a very, very large computation time. Problems with large polynomial factors scale badly. And to be clear, even x = 2 can be really bad; my suggested x = 2 sorting algorithm is a *terrible* solution.

- it could have a very large value of k. k is just a fixed constant, an amplifying factor in the time required. This isn't as bad as a large x, but it can still be bad. Imagine that a different sorting task takes (for some reason) *two* comparisons between each of the n pairs in the input. That's functionally a k = 2 situation: the task takes twice as long for a given input length n. If we have a k = 10,000 situation, all of a sudden things are going to take a lot longer than just the polynomial factor might suggest.

That first problem is a bigger one than the second, because it makes computation time explode much more rapidly for larger inputs. But the key thing with both of these is that proving a problem is P doesn't necessarily mean the problem is *practical*; we could make the amazing discovery that a problem previously suspected to be !P is actually in P, that it has a polynomial time solution, and yet find out that the values of k and x are so large that we'd still never, ever be able to solve it efficiently, even with a computer the size of the entire universe.

The flip side is that problems can be !P and still be practically solvable for situations where the input is sufficiently small; and, as I said, all of this is looking at worst-case scenarios where the problem takes as long as it possibly could to solve. If a problem is solvable much, much faster in a significant number of cases, it may not matter much in *practice* that the problem could be hard in *theory*.

posted by cortex at 4:40 PM on August 27, 2009 [22 favorites]
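cortex's two cost curves -- roughly n*n comparisons to sort, and a single n-length pass to check a proposed answer -- can be counted directly. A rough Python sketch, using the naive everything-against-everything sort he describes:

```python
def naive_sort(items):
    """Bubble-style sort: ~n*n comparisons, i.e. t = k * n^2."""
    items = list(items)
    comparisons = 0
    for _ in range(len(items)):
        for j in range(len(items) - 1):
            comparisons += 1
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items, comparisons

def is_sorted(items):
    """Checking a claimed answer: one pass, ~n comparisons."""
    return all(a <= b for a, b in zip(items, items[1:]))

solved, cost = naive_sort("metafilter")
print("".join(solved), cost)  # aeefilmrtt 90  (n = 10, so 10 * 9 comparisons)
print(is_sorted(solved))      # True, verified in at most 9 comparisons
```

The gap between the two counts is exactly cortex's point: generating the answer costs quadratically, checking a candidate costs linearly.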

*So why do we care? Well, the problems that cryptography is based on (for instance) are in NP. We rely on the fact that you can't just take a message and calculate the key to decrypt it; instead, you'd have to try many many keys. If we discover that P=NP, that will make it clear that there actually /is/ a way to solve these problems easily, making current methods of doing crypto useless - we'd have to go back and figure out how to keep messages secret from scratch.*

The benefits of discovering P = NP would **far** outweigh the inconvenience of redoing crypto (also, it would only be public-key crypto, not all crypto. So you wouldn't be able to make a connection with some random person you've never met, but you could still do old-school crypto where you distribute your keys in person, or through another secured channel)

posted by delmoi at 4:42 PM on August 27, 2009

holy fuck, cortex.

I didn't mean to turn this thread into "let's educate desjardins" - y'all carry on with your geeky conversation. I'll be in the corner, trembling.

posted by desjardins at 4:43 PM on August 27, 2009 [4 favorites]


I generated a proof of P=NP, but it didn't fit in the margins of my notebook.

posted by VikingSword at 4:50 PM on August 27, 2009 [7 favorites]


My cat's breath smells like cat food.

posted by ChurchHatesTucker at 4:53 PM on August 27, 2009 [6 favorites]


desjardins, don't worry. I'm in the same corner, and I had at least one class in school devoted to this. I kinda sorta get it. And then something distracts me, like, say, the color blue being blue, and then I don't get it again.

posted by flaterik at 4:53 PM on August 27, 2009 [1 favorite]


*Note that this is taking us into the territory of human power; The vast, strange sphere of abstract capacity, in which true geniuses are measured to surpass normal people. E.g., a true genius can easily perform some act, and you can easily recognise the skill involved, the majestic beauty. But you can't do it.*

*If we prove that P is NP, we have proven that with the right methods, everyone, everything is a genius.*

*If we prove that P isn't NP, we have mathematically proven that essentially if you aren't a genius at something, there are some capabilities that are simply out of your reach, forever, until the end of time.*

I've heard people say that sort of thing before, particularly Scott Aaronson. It's completely idiotic, in essence it's magical thinking -- supposing that geniuses have *literally supernatural* powers.

Again, **completely idiotic**. I can't stress enough how stupid that idea is.

posted by delmoi at 4:53 PM on August 27, 2009 [7 favorites]

Yeah, the bad implications for crypto in the case of an unexpected P = NP proof are all short-term, real-world problems—the time between the P = NP proof and the move away from any accordingly vulnerable public-key crypto would be potential Happy Fun Playtime for people interested in accessing public-key encrypted data.

Expect very prompt (read: panicked) reactions from security wonks, government entities, and freelance paranoids should that happy day ever come. Expect terrible coverage by news agencies, expect a lot of annoying emails from your Aunt Thelma asking if her fridge has this Public Keyed Cryptograms on it, expect a lot of shrugging from the masses who have little to hide, on average, from the pool of Evil Hackers out there, and expect things to settle down fairly quickly once the panicky replacement of extant crypto systems has taken place.

It'll be a big deal, but almost no one in real life will care. Whether that's a matter of ignorance or of practical indifference depends on your perspective.

posted by cortex at 4:54 PM on August 27, 2009 [5 favorites]


And krilli, that is the best distillation of the problem I've seen.

can you tell that this problem sticks in my craw?

posted by flaterik at 4:55 PM on August 27, 2009


*supposing that geniuses have literally supernatural powers*

It's actually supposing that geniuses have literally superdeterminative powers. Which, depending on how you state it, may or may not be a foolish thing. But it's not quite religion, and too careless a rebuttal of the general notion of abstract computation is bordering on the idiotic itself.

posted by cortex at 4:59 PM on August 27, 2009 [1 favorite]

delmoi, aren't you basically saying that if P=NP, then there's magic?

Again, I struggle with even comprehending the problem set, but that's the impression I get.

posted by flaterik at 4:59 PM on August 27, 2009


*Yeah, the bad implications for crypto in the case of an unexpected P = NP proof are all short-term, real-world problems—the time between the P = NP proof and the move away from any accordingly vulnerable public-key crypto would be potential Happy Fun Playtime for people interested in access public-key encrypted data.*

Just because it's in P time doesn't mean it's actually useful. If you could solve 3-SAT in O(n^250), that would prove P = NP, but it wouldn't mean a whole lot to anyone in the real world at all.

posted by delmoi at 5:01 PM on August 27, 2009 [2 favorites]

Even if it were discovered that P = NP that does not make all NP problems solvable in realistic amounts of time. It is frequently the case that algorithms are found with lower asymptotic complexity but where the constants are so large as to make faster growing functions preferable for real world applications. For example, an algorithm that is O(x^100) is asymptotically better than one that is O(2^x), but for a real world problem 2^x may very well be smaller. I suspect that if any proof that P=NP arises, the algorithm it entails will still not allow large numbers to be factored easily.

posted by Pyry at 5:06 PM on August 27, 2009 [1 favorite]
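Pyry's point is easy to check with integer arithmetic; the quick Python below finds where 2^x finally overtakes x^100 (the specific exponents are just his examples):

```python
# For a realistic input size, the "polynomial" O(x**100) algorithm is the
# astronomically worse one:
print(64**100 > 2**64)  # True

# Find the crossover where the exponential finally dominates:
x = 2
while 2**x <= x**100:
    x += 1
print(x)  # 997: only for inputs past this size does 2**x exceed x**100
```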


*delmoi, aren't you basically saying that if P=NP, then there's magic?*

*Again, I struggle with even comprehending the problem set, but that's the impression I get.*

Uh no, what I'm saying is that some people seem to be stating that **some specific people** are able to solve NP-complete problems, whereas others - "mere mortals" - can only verify their solutions and gawk at their genius. That's what I think is stupid.

posted by delmoi at 5:06 PM on August 27, 2009

*It's completely idiotic, in essence it's magical thinking -- supposing that geniuses have literally supernatural powers.*

You're absolutely right. And that's *precisely* why nobody believes that P=NP.

Here's the argument:

(1) If P=NP, we all have literally supernatural powers.

(2) Oh, please. We obviously don't have literally supernatural powers. That's just stupid!

(3) Therefore, obviously, P≠NP. Why are you wasting my time with this crap?

Now all you have to do is turn that argument into a formal proof, and you've got yourself a million dollars. (Hint: the hard part is step 2.)

posted by erniepan at 5:07 PM on August 27, 2009 [2 favorites]


*So why do we care? Well, the problems that cryptography is based on (for instance) are in NP. We rely on the fact that you can't just take a message and calculate the key to decrypt it; instead, you'd have to try many many keys. If we discover that P=NP, that will make it clear that there actually /is/ a way to solve these problems easily, making current methods of doing crypto useless - we'd have to go back and figure out how to keep messages secret from scratch.*

OK, I'm sure this is a dumb question, but how does the discovery that, in theory, "there actually /is/ a way to solve these problems easily" change anything? I mean, people are presumably out there trying to solve these problems right now; are they going to try harder and succeed faster just because it's proven that P=NP? How does the existence of a proof affect anything in real life?

posted by languagehat at 5:08 PM on August 27, 2009 [2 favorites]

Delmoi, you're obviously taking what I said far too literally. That is a much stupider thing to do than anything I wrote. Turn the "abstract" dial two clicks out and reread what I wrote.

I am interested in P vs NP because I feel it is significant. How I feel about it is based on my knowledge of the subject: I had a very good teacher in theoretical computer science. Now in this context, others have covered the mathematical stuff, in the article as well as in comments in the thread. What I did was to try to convey the feeling, without requiring a CS background.

The *awesomeness* of the problem. In the old and true sense of the word. You're just not going to make that happen with equations and math.

posted by krilli at 5:09 PM on August 27, 2009

I'm confused again. Which is why I'm glad I didn't go to grad school.

posted by flaterik at 5:10 PM on August 27, 2009


*How does the existence of a proof affect anything in real life?*

If there were a proof that P=NP it would probably take the form of an algorithm for solving some NP problem in polynomial time. Since all NP problems can be reduced to each other in polynomial time, this would allow any NP problem to be solved in polynomial time. Whether this algorithm would be fast enough to do anything useful is a different question.

posted by Pyry at 5:16 PM on August 27, 2009
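Pyry's framing -- every NP problem comes with a fast verifier, and reductions carry solutions between them -- can be illustrated with subset-sum, a standard NP-complete problem. The instance values below are invented for the demo:

```python
def verify_subset_sum(numbers, target, certificate):
    """Polynomial-time verifier: given claimed solution indices, check in O(n)."""
    return sum(numbers[i] for i in certificate) == target

nums = [3, 34, 4, 12, 5, 2]  # hypothetical instance

print(verify_subset_sum(nums, 9, [2, 4]))  # True: 4 + 5 == 9
print(verify_subset_sum(nums, 9, [0, 1]))  # False: 3 + 34 != 9

# A P = NP proof would, in principle, convert any such verifier into a
# polynomial-time *solver* -- though, as Pyry notes, possibly with constants
# and exponents too large to matter in practice.
```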

*OK, I'm sure this is a dumb question, but how does the discovery that, in theory, "there actually /is/ a way to solve these problems easily" change anything? I mean, people are presumably out there trying to solve these problems right now; are they going to try harder and succeed faster just because it's proven that P=NP? How does the existence of a proof affect anything in real life?*

It's not a dumb question, and the answer is basically "it depends on the proof".

The immediate implication of proving P = NP is that every NP problem can be in principle solved in polynomial time for some values k and x, but it tells us nothing encouraging (from an efficiency standpoint) about what k and x are.

Any formal proof of P = NP would also function as a naive method for specifying an NP problem as an algorithm with execution time P—there's some deep Turing magic here, this is tied to the same theoretic groundwork that shows that you can't build a better computer than the super simple Turing machine in principle, since every computer is just a fancy dressed-up configuration of that same core bit of Alan's genius.

The new-found P algorithm could suck monkey balls, though. And the fact that there are a number of highly visible NP problems that folks would *love* to show can be solved in P time, and that it's been an open problem for decades and that still hasn't happened, strongly suggests to a lot of folks that either P != NP or that if P = NP then these are going to be some practically shitass P algorithms we'd be talking about.

But no one knows. No one will know until sometime after the point when/if P = NP is first established.

posted by cortex at 5:16 PM on August 27, 2009

... and yes, my allegory was hairy. I admit it.

But I wasn't talking about supernatural powers, if I may allow myself to continue with the comparison. Take Einstein. I can appreciate that he did a really great job on some science, but I don't think I could ever have done the work myself, even if I had learned the right methods and used the right tools. And that's kind of the bit.

Human to human, intellect to intellect, Einstein was so much smarter than me that it becomes a qualitative difference.

The question is whether you can put Astroglide on an NP problem and make it P. Can you?

posted by krilli at 5:17 PM on August 27, 2009

Fuckers. How about a nice game of chess?

posted by seanmpuckett at 5:19 PM on August 27, 2009 [1 favorite]

*OK, I'm sure this is a dumb question, but how does the discovery that, in theory, "there actually /is/ a way to solve these problems easily" change anything? I mean, people are presumably out there trying to solve these problems right now; are they going to try harder and succeed faster just because it's proven that P=NP? How does the existence of a proof affect anything in real life?*

I think any proof that p=np would produce a general solution to NP problems, right? At least that was what this article seemed to say.

Also, if P problems are soluble in t=k+n^x time, then what's the formula for NP? Or is it literally unknowable how long it will take?

posted by empath at 5:21 PM on August 27, 2009

*I think any proof that p=np would produce a general solution to NP problems, right? At least that was what this article seemed to say.*

In other words-- you can't say that a problem is soluble in t=k+n^x time unless you know the solution.

posted by empath at 5:22 PM on August 27, 2009

Does anyone here have NP problems? Preferably with a commercial background?

Please PM me.

posted by yoyo_nyc at 5:25 PM on August 27, 2009

The important takeaway is that as long as P≠NP there's no way to make a profit as a door-to-door salesman.

posted by GuyZero at 5:28 PM on August 27, 2009 [6 favorites]

empath, NP problems are solvable in exponential time, t = 2^(n^x). Basically, if you have a polynomial-time checker you can simply apply the checker to all possible candidate solutions, aka brute force.

posted by Wood at 5:30 PM on August 27, 2009 [1 favorite]
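
Wood's recipe (run the polynomial-time checker against every candidate) can be sketched generically. This is only an illustration: the names `solve_by_brute_force` and `verify_sum` are made up here, and any problem-specific verifier would slot into the same shape.

```python
from itertools import product

def solve_by_brute_force(instance, verify, cert_len):
    """Decide an NP problem using only its polynomial-time verifier.

    There are 2^cert_len candidate certificates (bit strings), so even
    though each verify() call is cheap, the loop as a whole is
    exponential in cert_len.
    """
    for bits in product([0, 1], repeat=cert_len):
        if verify(instance, bits):
            return bits   # a witness: the answer is "yes"
    return None           # no certificate checks out: the answer is "no"

# Toy verifier (an assumption for illustration): does some length-3
# bit string have exactly `target` ones? Trivially checkable in linear time.
verify_sum = lambda target, bits: sum(bits) == target
print(solve_by_brute_force(2, verify_sum, 3))   # (0, 1, 1)
print(solve_by_brute_force(5, verify_sum, 3))   # None
```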

*Also, if P problems are soluble in t=k+n^x time, then what's the formulat for NP? Or is it literally unknowable how long it will take?*

If you can state a problem clearly, you can create an algorithm for it that will finish in finite time. The big deal about polynomial time is that a polynomial can only explode yea fast; even for very large values of x, a polynomial problem will for sufficiently large input execute faster than any possible exponential function, for example, because the rate of growth of an exponential is even greater than a polynomial.

n^2 grows quickly; n^10 grows more quickly yet, but they grow in the same *way*, no matter what your degree value n^x is.

In an exponential, your problem is of the form t=k*x^n, where n is again the size of your input and x is the base of the exponential. The curve of growth for any exponential function where x > 1 will eventually be steeper than *any* polynomial function. And that gap will keep growing wider and wider as n increases.

So x^n (exponential) grows faster (and thus t becomes larger, the problem becomes more unwieldy) than n^x (polynomial) for *any values of x > 1*. You literally cannot define an exponential function that will not, for sufficiently large values of n, take longer to compute than *any* polynomial function.

And there are other possible growth rates that are worse than exponential.

So no, !P problems aren't of unknowable solution time necessarily; they just don't have known solutions that scale well.

Again, in practice that may not be a problem; for sufficiently small input n and sufficiently small growth values x, an exponential function might be perfectly practical for solving a real-world problem. But in theory, it gets worse as input size grows in a way that's incomparably bad vs. polynomial-time solutions.

posted by cortex at 5:31 PM on August 27, 2009 [3 favorites]
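
A quick numerical sketch of cortex's point (the specific functions n^10 and 1.1^n are arbitrary choices, not from the thread): the polynomial dominates at first, but the exponential overtakes it and stays ahead.

```python
# Polynomial n^10 vs. exponential 1.1^n: the exponential always wins eventually.
poly = lambda n: n ** 10
expo = lambda n: 1.1 ** n

for n in [10, 100, 500, 1000]:
    larger = "n^10" if poly(n) > expo(n) else "1.1^n"
    print(f"n={n:5d}  n^10={poly(n):.3g}  1.1^n={expo(n):.3g}  larger: {larger}")

# Find the point past which the exponential is permanently larger.
n = 2
while expo(n) <= poly(n):
    n += 1
print("1.1^n exceeds n^10 for every n >=", n)
```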

My favorite NP-complete problem is the subset sum problem. Given n integers and a target value m, does there exist a subset of the given integers which add up to m?

What we would like to find is a solution other than "add all possible subsets of integers and see if any give you the target value." What's wrong with that solution is that there are 2 to the n such subsets--i.e. exponential rather than polynomial. It is in NP since if we are given such a subset, we can verify it by adding it up and getting the value m.

posted by Obscure Reference at 5:31 PM on August 27, 2009 [5 favorites]
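
Obscure Reference's subset-sum example translates directly into code. This is only a sketch of the naive approach he describes: the search visits up to 2^n subsets, while verifying a proposed subset is a single linear-time pass.

```python
from itertools import combinations

def subset_sum(nums, target):
    """Brute-force search: try every subset, 2^n of them in the worst case."""
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return subset
    return None

def verify(nums, subset, target):
    """Polynomial-time check: subset really comes from nums and sums to target."""
    remaining = list(nums)
    for x in subset:
        if x not in remaining:
            return False
        remaining.remove(x)
    return sum(subset) == target

witness = subset_sum([3, 34, 4, 12, 5, 2], 9)
print(witness)                                    # (4, 5)
print(verify([3, 34, 4, 12, 5, 2], witness, 9))   # True
```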

Okay, no one has supernatural powers, people. What I said was that krilli's formulation, about geniuses being 'separate' from normal people if P != NP while everyone is a genius if P = NP, is wrong *because* it would grant some people supernatural powers.

If P = NP then it's true that a lot of interesting problems would be solvable, but if P != NP then it's **not true** that some geniuses arrived at their ideas through an NP process, rather than through some heuristic, just as people use when they solve a Rubik's cube, which is an NP-complete problem.

*OK, I'm sure this is a dumb question, but how does the discovery that, in theory, "there actually /is/ a way to solve these problems easily" change anything? I mean, people are presumably out there trying to solve these problems right now; are they going to try harder and succeed faster just because it's proven that P=NP? How does the existence of a proof affect anything in real life?*

Well, there are a lot of useful things that we could do if it were possible to solve NP problems in a reasonable amount of time. It would be much, much easier to write a computer program that could understand text, or language, or recognize things visually. Pretty much all "AI" would get much, much easier. (assuming the exponent is actually small, like I pointed out)

*Delmoi, you're obviously taking what I said far too literally.*

What? Here is what you said:

*If we prove that P is NP, we have proven that with the right methods, everyone, everything is a genius.*

*If we prove that P isn't NP, we have mathematically proven that essentially if you aren't a genius at something, there are some capabilities that are simply out of your reach, forever, until the end of time.*

In other words, you made a very specific claim, that one thing was predicated on another. You didn't just say "Being a genius is **like** solving NP-complete problems", you said that they were literally identical and that if P = NP then everyone would be "a genius". That's not an abstract statement at all, it's very specific.

*The awesomeness of the problem. In the old and true sense of the word. You're just not going to make that happen with equations and math.*

Except you did so in a way that makes little sense, and is wrong. That's the nice thing about "equations and math", which is the only way to explain what it *actually* means.

posted by delmoi at 5:36 PM on August 27, 2009

*Fuckers. How about a nice game of chess?*

Sure, but that would only occupy us for 23:47.3834 and then we'd be bored again.

posted by ardgedee at 5:37 PM on August 27, 2009 [2 favorites]

*But I wasn't talking about supernatural powers, if I may allow myself to continue with the comparison. Take Einstein. I can appreciate that he did a really great job on some science, but I don't think I could ever have done the work myself, even if I had learned the right methods and used the right tools. And that's kind of the bit.*

But this is why your analogy is bad. One of the central tenets of computer science, the Church-Turing hypothesis, says that (to use your analogy) we are all Einstein; we can all solve exactly the same set of problems, it's just a matter of how long it takes.

Furthermore, your analogy is confusing because you are trying to compare *problems* to *people* rather than the more natural people-to-machines analogy. That is, you're saying that you're (for instance) SORT while Einstein is 3SAT, which is bizarre.

I think you're confusing computability with complexity. Both P and NP problems are computable; they can all be solved by Turing and Turing-equivalent machines. Whether P = NP only affects how fast these problems can be solved, not whether they can be solved at all.

Also, on preview:

*Pretty much all "AI" would get much, much easier*

I work in computer vision, and I have to say we are all pretty much fumbling around in the dark; giving us a bigger hammer won't help when we can't even find the nails.

posted by Pyry at 5:42 PM on August 27, 2009 [2 favorites]

I think Pyry makes a very good point that the article glosses over: just because someone proves P=NP doesn't suddenly mean that all the NP problems have efficient real-world solutions. It means that they have *some* polynomial-time solution, but it might be a really terrible one. So the impact on public-key cryptography would, in all probability, not be a severe overnight catastrophe.

Plus, there are "NP hard" problems (which are 'as hard as the hardest NP problems') which are not themselves NP. These would still be really hard, but because they're not (at least in some cases) actually NP, they wouldn't be affected—even theoretically—by a proof that P=NP.

Since most PK algorithms depend on factorization, unless the P=NP proof involved efficient factorization (which it could, I guess, but that's *really* unlikely; it's like the mathematical version of cold fusion), you wouldn't necessarily get an efficient real-world factorization algorithm out of the proof.

The *long term* fallout of a P=NP proof might be very significant, but I think the author overstates what it would do to the real world in the short term. There is really nothing to fear or be worried about.

posted by Kadin2048 at 5:57 PM on August 27, 2009

*I think any proof that p=np would produce a general solution to NP problems, right? At least that was what this article seemed to say.*

There's a difference between what is called a constructive proof (which would "produce a solution") and an existence proof, which would demonstrate that a solution exists but not tell you how to go about finding it. The article referred, as an example, to the pigeonhole principle, which basically says: if you have n+1 items and only n things to put them in, (at least) one of the things to put them in will have more than one item in it. The principle says that such a hole exists but doesn't say how to find it. Of course, all you have to do is go through them all one by one to find it.

posted by Obscure Reference at 6:52 PM on August 27, 2009
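
Obscure Reference's pigeonhole example shows the gap nicely: the existence proof is free, and the constructive part is just a scan. A minimal sketch (the `assign` function here is a hypothetical stand-in for whatever rule places items into holes):

```python
def find_crowded_hole(items, num_holes, assign):
    """With more items than holes, the pigeonhole principle guarantees a
    collision; this linear scan constructively finds one."""
    seen = {}
    for item in items:
        hole = assign(item)
        if hole in seen:
            return hole, seen[hole], item   # two items sharing one hole
        seen[hole] = item
    raise AssertionError("unreachable when len(items) > num_holes")

# Four items, three holes (assign by value mod 3): a collision must exist.
print(find_crowded_hole([10, 11, 12, 13], 3, lambda x: x % 3))   # (1, 10, 13)
```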

**Proof:**

P = NP

Take my word for it.

Where's my million dollars?

posted by Bonzai at 6:56 PM on August 27, 2009

So, this is the analogy i made up to try to understand this whole problem. Do I get it?

Problem: You want an apple.

P is sort of like creating an apple.

NP is like trying out many different fruits to see if they're apples.

If P = NP, then if you ever want an apple, you'll always be able to make one faster than you can try all those fruits.

But if P /= NP, then even though you want an apple, sometimes you're just going to have to try a whole bunch of fruit to find it.

posted by wayofthedodo at 7:25 PM on August 27, 2009

P != NP

I have discovered a truly marvelous proof of this, which this comment box is too narrow to contain.

posted by reishus at 7:47 PM on August 27, 2009

*I have discovered a truly marvelous proof of this, which this comment box is too narrow to contain.*

I'm afraid I beat you to this... see my first comment in this thread.

posted by VikingSword at 7:52 PM on August 27, 2009

wayofthedodo,

Even if P=NP, will the complexity of a solution always be less than brute force?

posted by effugas at 8:10 PM on August 27, 2009

Solving a Rubik's cube is NOT an NP-complete problem. There are many algorithms guaranteed to come to a best solution in a few dozen moves.

Solving a Rubik's cube by randomly twisting the sides *is* NP complete!

posted by lupus_yonderboy at 8:21 PM on August 27, 2009 [2 favorites]

*It would be much, much easier to write a computer program that could understand text, or language, or recognize things visually. Pretty much all "AI" would get much, much easier. (assuming the exponent is actually small, like I pointed out)*

What specific problems in language recognition or AI are NP complete?

posted by afu at 8:23 PM on August 27, 2009

Cool post, thanks!

P.S. I almost posted this a while back, but I chickened out: Robin Moser's constructive proof of the Lovasz Local Lemma, which is pretty cool. He uses an interesting technique (Terence Tao calls it "entropy compression"). I've been wondering if that technique might be applicable to P vs. NP; haven't been able to find any discussion about that, though--has anyone else seen anything?

posted by equalpants at 8:26 PM on August 27, 2009

Nnnnnnooo! I am too sidewise to comment!* It's so interesting, and I have such a strong opinion on this that I just can't remember right now... no fair! No fair! The one night a month I get pickled, and you pick *tonight*, that night, to talk about p-v-np. And thylacines, which are almost as cool.

In short, my opinion is, we'll never truly know the answer - It's all down to the Godel-Escher-Bachman-Turner-Overdrive Incompleteness serum, which was first hypothesized as an alternative to the toaster-strudel vs. pop-tarts debate on nights when everyone's sidewise, and may be a crucial component for a popular Spiderman villain's origin story as well.

* and yet...

posted by Slap*Happy at 8:37 PM on August 27, 2009

My dad is friends with one of the computer mathematicians mentioned in that article (Richard Karp).

I like to think I'm reasonably smart, but one time I asked him what he did in his job, and during the 10 minutes of his answer (which was cogent and well-thought-out), the only word I understood was "algorithm."

posted by msalt at 8:40 PM on August 27, 2009

Of course, once the reporters and undergrads have left the room, all the mathematicians burst out laughing and start trading high-fives. "Oh, dude! I can't believe they still buy that shit! It's so not NP!"

posted by No-sword at 9:03 PM on August 27, 2009

So P and NP algorithms, for a problem of size n, both have solutions that can be checked in n^x steps. But the NP algorithm takes a^n steps to generate a new solution.

However, we have for a while now lived in a world where computer power grows exponentially, by a factor of ten every ~3.5 years. A problem that would take a hundred years on a 1994 computer would take ten years on a 1998 computer, one year on a 2001 computer, six weeks on a 2004 computer, four days on a 2008 computer, and nine hours on a 2012 computer. (The chart shows supercomputers, but commodity computers are on the same curve.)

If the number of steps required grows exponentially, but the time per step falls exponentially, can all NP problems be solved in polynomial time?

posted by fantabulous timewaster at 9:12 PM on August 27, 2009 [1 favorite]

*I work in computer vision, and I have to say we are all pretty much fumbling around in the dark; giving us a bigger hammer won't help when we can't even find the nails.*

Pyry. The thing is, when most people come across an algorithm that's NP complete they either forget about it, or try to think up some heuristic that might approach the problem sideways. Thinking up those heuristics is hard. Again, *assuming a reasonably small exponent*, so many things get easier.

I was trying to think up a practical example. Think about a neural network, for example. Ideally you would want to find a neural network that best matches your labeled data. Wouldn't it be great if you could simply pick the one that best matches your data? Well, obviously you can't do that; you've got to use some search algorithm, just like if you want to find a solution to 3SAT. But if you had an algorithm that could solve NP problems, you could pick a NN structure and weights the same way you pick a bit pattern for 3SAT. At least in theory.

You could probably get a lot of mileage out of the question of what we could actually do if P = NP, but the problem is no one spends any time on it, because it's obviously never going to have any practical applications.

*There's a difference between what is called a constructive proof (which would "produce a solution") and an existence proof, which would demonstrate that a solution exists but not tell you how to go about finding it.*

That may be true in general, but actually we **would** probably have a general solution for any NP problem, and if not someone would find it pretty quickly. The most obvious way to prove P = NP is to come up with a general solution to 3SAT, and the thing about these complexity classes is that every problem can be *converted* into another problem in the same complexity class.

*Even if P=NP, will the complexity of a solution always be less than brute force?*

Not *every* problem. There are larger complexity classes like NEXP.

*one year on a 2001 computer, six weeks on a 2004 computer, four days on a 2008 computer, and nine hours on a 2012 computer. (The chart shows supercomputers, but commodity computers are on the same curve.)*

That actually hasn't been true in terms of home computers for a while. You get more cores, and especially more memory, but clock speed has really topped out for quite a while. And it obviously won't be true forever.

posted by delmoi at 9:55 PM on August 27, 2009 [1 favorite]
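
To make delmoi's "pick a bit pattern for 3SAT" concrete, here's a sketch with an arbitrary two-clause formula (not from the thread): checking a candidate assignment is a quick polynomial pass, and the hard part is only the 2^n-way search over assignments.

```python
from itertools import product

# Clauses of literals: a positive int is a variable, a negative int its negation.
# This encodes (x1 or not x2 or x3) and (not x1 or x2 or not x3).
clauses = [(1, -2, 3), (-1, 2, -3)]

def satisfied(assignment, clauses):
    """Polynomial-time verification of one candidate bit pattern."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# Exponential-time search: try all 2^n assignments.
for bits in product([False, True], repeat=3):
    assignment = dict(enumerate(bits, start=1))
    if satisfied(assignment, clauses):
        print("satisfying assignment:", assignment)
        break
```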

*However, we have for a while now lived in a world where computer power grows exponentially, by a factor of ten every ~3.5 years. A problem that would take a hundred years on a 1994 computer would take ten years on a 1998 computer, one year on a 2001 computer, six weeks on a 2004 computer, four days on a 2008 computer, and nine hours on a 2012 computer. (The chart shows supercomputers, but commodity computers are on the same curve.)*

Suppose we turned every particle in the universe (all 10^87 or so) into a 1 GHz computer (1 billion operations / second) and ran this computer for the age of the universe (13.7 billion years), and suppose that we could check one possible solution to the traveling salesman problem per operation, then how large of a traveling salesman problem could this computer have solved by brute force?

Well, we would be capable of checking approximately 4 * 10^113 solutions in those 13.7 billion years. For a traveling salesman problem of size *n*, there are *n!* possible routes, and 77! ~= 1.4 * 10^113 and 78! ~= 10^115, so we could have solved a travelling salesman problem with all of 77 cities *using the entire universe as a computer over 13.7 billion years*.

So I wouldn't hold my breath for Moore's law to solve all our problems.

posted by Pyry at 10:03 PM on August 27, 2009 [7 favorites]
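
Pyry's back-of-the-envelope numbers reproduce in a few lines:

```python
import math

particles = 10 ** 87           # every particle in the universe...
ops_per_sec = 10 ** 9          # ...running at 1 GHz
seconds = 13.7e9 * 3.15e7      # age of the universe (~3.15e7 seconds/year)
total_checks = particles * ops_per_sec * seconds
print(f"route checks available: {total_checks:.1e}")   # ~4.3e+113

# Largest n such that all n! routes of an n-city TSP fit in that budget:
n = 1
while math.factorial(n + 1) <= total_checks:
    n += 1
print("largest brute-forceable TSP:", n, "cities")     # 77
```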

*If the number of steps required grows exponentially, but the time per step falls exponentially, can all NP problems be solved in polynomial time?*

No.

It's not a matter of how much time it takes to perform a set of computations, but how many computations relative to the size of the input it takes.

If you've got a sorting algorithm that runs in n log n "time", for instance, that tells you *nothing* about how long it actually will take, in general, to run the algorithm—*that* depends on the computer it's running on! (As well as the algorithm, obviously.) You probably can't even tell that it will take (2 log 2n)/(log n) times longer to sort a list of length 2n than to sort a list of length n, given modern processor doodads. But there will be that many more steps!

posted by kenko at 10:07 PM on August 27, 2009
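
kenko's ratio (doubling the input of an n log n algorithm costs (2n log 2n)/(n log n) = 2 log(2n)/log(n) times the steps) is easy to tabulate; note how it sits a bit above 2 and tends toward 2 as n grows:

```python
import math

def steps(n):
    """Idealized step count for an n log n algorithm."""
    return n * math.log2(n)

for n in [2 ** 10, 2 ** 20, 2 ** 30]:
    ratio = steps(2 * n) / steps(n)
    print(f"n = 2^{int(math.log2(n))}: doubling the input takes {ratio:.4f}x the steps")
```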

From the article:

*but what we will gain from P = NP will make the whole Internet look like a footnote in history ... Learning becomes easy by using the principle of Occam's razor—we simply find the smallest program consistent with the data. Near perfect vision recognition, language comprehension and translation and all other learning tasks become trivial. We will also have much better predictions of weather and earthquakes and other natural phenomenon.*

Maybe I'm missing something, but I've spent quite a bit of time reading and thinking about computation, P/NP, Turing machines, AI, etc and the above seems like bullshit to me. Am I missing something?

posted by Bort at 10:08 PM on August 27, 2009

Pyry's point applies equally well to polynomial-time algorithms with polynomials of large degree.

It might be true that an incredibly fast, larger-than-the-universe computer could run exponential algorithms reasonably quickly. But that still wouldn't be running them in polynomial time.

For lots of values of n, an algorithm that runs in 1.000000000000000001^n time will be able to be executed significantly faster than one that runs in n^1012 time, but the former is still exponential and the latter is still polynomial.

posted by kenko at 10:11 PM on August 27, 2009
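
kenko's two running times can't be compared by evaluating them directly, since the crossover happens at an astronomically large n, but comparing logarithms works. One implementation subtlety (an assumption of this sketch, not in the original comment): 1 + 1e-18 rounds to exactly 1.0 in double precision, so `math.log1p` is needed to keep the tiny exponent.

```python
import math

log_base = math.log1p(1e-18)   # log of 1.000000000000000001; math.log(1 + 1e-18)
                               # would collapse to log(1.0) == 0.0 in a double
degree = 1012

def exp_wins(n):
    """True once 1.000000000000000001^n exceeds n^1012, compared via logs."""
    return n * log_base > degree * math.log(n)

print(exp_wins(10 ** 6))    # False: the huge polynomial is still bigger
print(exp_wins(10 ** 25))   # True: the barely-exponential has overtaken it
```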

*Maybe I'm missing something, but I've spent quite a bit of time reading and thinking about computation, P/NP, Turing machines, AI, etc and the above seems like bullshit to me. Am I missing something?*

It's about the same thing I said.

*Assuming a small exponent*. If it turns out you can solve any problem in NP in n^256 steps, that's an intellectual curiosity but it would have no real impact. If we can do it in n^2 steps, that would be a huge deal. Most of the AI algorithms we currently use could be thrown out the window. Obviously over-fitting and noise could still be a problem, but that's about it.

Again, it's hard to say exactly because no one spends much time studying this (that is, machine learning algorithms on non-deterministic Turing machines), because it's all completely useless, and of course you can't really test or do experiments on it either.

posted by delmoi at 10:16 PM on August 27, 2009

*Maybe I'm missing something, but I've spent quite a bit of time reading and thinking about computation, P/NP, Turing machines, AI, etc and the above seems like bullshit to me. Am I missing something?*

No. First, proving P = NP doesn't guarantee your P algorithm won't have gigantic exponents (e.g., a O(n^100) algorithm is still not useful). Second, there are problems harder than NP. Two point fiveth: finding "the smallest program consistent with the data" is uncomputable. Third: the problem with AI at the moment isn't "this function is too hard to optimize" but rather "we have no clue what function to optimize" (arguably you can stack optimizations on top of each other, but good luck getting that to spit out anything useful).

posted by Pyry at 10:17 PM on August 27, 2009

*That's the nice thing about "equations and math", which is the only way to explain what it actually means.*

Sadly, this is not true. For if it were, no math text nor teacher would need to speak the dirty human language, and could instead rely only on the austere symbols alone. Of course, conveying what those symbols *meant* would then be up to the reader, since there would be no one to tell them otherwise.

*And there are other possible growth rates that are worse than exponential.*

If you like numbers and are interested in how fast they grow, check out the unfairly maligned Scott Aaronson's wonderfully accessible essay Who can name the bigger number?, mentioned previously.

posted by abc123xyzinfinity at 10:18 PM on August 27, 2009 [1 favorite]

Isn't it also the case, though, that the sorts of problems that come up in weather modelling are fairly far removed from this realm?

posted by kenko at 10:18 PM on August 27, 2009

*Sadly, this is not true. For if it were, no math text nor teacher would need to speak the dirty human language, and could instead rely only on the austere symbols alone.*

He said "equations and math", not "equations and symbols". "Math" is not symbols alone.

posted by kenko at 10:20 PM on August 27, 2009 [1 favorite]

Thanks for the feedback everyone. Didn't think I was missing something, but you never know. :)

*Isn't it also the case, though, that the sorts of problems that come up in weather modelling are fairly far removed from this realm?*

I agree. I always thought the issue with weather prediction was chaos theory - the butterfly effect.

posted by Bort at 10:28 PM on August 27, 2009

Pyry: "*the problem with AI at the moment...*"

QFT. When the AI guys can actually formalize a problem, then complexity analysis can be applied to it.

And on the day that the AI guys formalize a problem, they'll be too drunk to discuss it until the following week.

posted by Rat Spatula at 10:34 PM on August 27, 2009

This thread needs more examples of P and NP problems, because even those few offered so far have been disputed.

posted by rokusan at 10:48 PM on August 27, 2009

*My favorite NP-complete problem is the subset sum problem. Given n integers and a target value m, does there exist a subset of the given integers which add up to m?*

What we would like to find is a solution other than "add all possible subsets of integers and see if any give you the target value." What's wrong with that solution is that there are 2 to the n such subsets--i.e. exponential rather than polynomial. It is in NP since if we are given such a subset, we can verify it by adding it up and getting the value m.
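The contrast described above can be sketched in a few lines of Python. This is an illustrative toy, not anyone's posted code: the function names are made up, the brute-force search really does look at all 2^n subsets, while the certificate check is a simple linear-time sum.

```python
from itertools import combinations

def subset_sum_brute_force(nums, target):
    """Try every subset -- 2^n of them -- and return one that sums to target."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return list(combo)
    return None

def verify_certificate(nums, target, subset):
    """Verification is the easy direction: check membership and add up."""
    remaining = list(nums)
    for x in subset:
        if x not in remaining:
            return False
        remaining.remove(x)
    return sum(subset) == target

solution = subset_sum_brute_force([3, 34, 4, 12, 5, 2], 9)
print(solution)                                               # [4, 5]
print(verify_certificate([3, 34, 4, 12, 5, 2], 9, solution))  # True
```

Finding the subset took exponentially many candidate checks; confirming the handed-over subset took one pass. That asymmetry is exactly what puts the problem in NP.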

Obscure Reference:

Ok I think I get some of the basic concept, but I'm having problems understanding how the terms P and NP map onto my understanding.

I understand that brute force methods of trying to figure out a solution run into the problem of combinatorial explosion (i.e. systematically checking each and every single possibility). I also understand that given a solution to a problem, it may be easy to verify it despite not knowing whether there is a shortcut to deriving that solution.

But what I'm having difficulty with is this:

P refers to polynomial time

NP refers to the opposite? (i.e. need brute force)?

If those definitions are more or less on the right track, why would you have said that the subset problem is NP just because it's easy to verify. What does verification have to do with the nature of arriving at a solution?

And what does it mean to say that P = NP or P does not equal NP?

I think I'm misunderstanding something basic here. Someone help?

posted by spacediver at 11:21 PM on August 27, 2009

Oh my God, I only just woke up and this article is already sending me back to sleep.

posted by marmaduke_yaverland at 12:12 AM on August 28, 2009

*NP refers to the opposite? (i.e. need brute force)?*

The N stands for "Nondeterministic", so you have Nondeterministic Polynomial time. There are other N complexity classes like NL and NEXP.

A Nondeterministic Turing machine can be imagined in two different ways. One (the more difficult way) is to imagine a Turing machine that, when it wants to make a choice, can make two choices at once, so its "state" expands into a superstate. Then, when in one of those states it wants to make another choice, the number of states expands again.

So a Nondeterministic computer can check all the different "paths" at once, whereas a regular computer can only check one path at a time.

Another way to think about it, which is easier but (IMO) less interesting, is to imagine a regular computer with a special random number generator that always makes the best guess.

Wikipedia has a List of NP-complete problems, which apparently includes Tetris. But let's look at the Bin Packing Problem, which is where you try to fit a bunch of objects of various sizes into the smallest number of 'bins' (normally you think about this in just one dimension).

Let's restate that as a yes/no question: "Can you fit these items i_1...i_n into M bins?"

On a deterministic machine, you would pick one object, then pick another, and another until you had picked all of them; then you would have to start from scratch and try again, until you had exhausted all the permutations.

Using the second thought model (where an N machine always randomly guesses correctly) you would do the same thing, except you would randomly pick the correct object. Then on the next step, you would pick again and randomly pick the best object and so on. So you only need n steps to find the answer, where n is the number of objects.

Using the first thought model, at the first step you select *all* of the objects, branching into n separate states. Then at the second step, you pick *all* of the remaining objects *in each state*. So now you have n*(n-1) states. At the end you have n! states, and your function simply returns "yes" if any of them fit into M bins.

Now, to find the smallest number M, you simply start with M = 1, and keep increasing M until you find one that fits.

Does that make sense?

posted by delmoi at 12:37 AM on August 28, 2009 [1 favorite]
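The procedure delmoi describes can be sketched deterministically: where the nondeterministic machine branches into all placements at once, a regular computer tries the branches one at a time with backtracking. A minimal sketch (function names are invented for illustration):

```python
def fits_in_bins(items, num_bins, capacity):
    """Decision version of bin packing: can all items fit into num_bins bins
    of the given capacity? A nondeterministic machine would explore every
    branch simultaneously; deterministically we try them one after another."""
    bins = [0] * num_bins

    def place(i):
        if i == len(items):          # every item placed: this branch says "yes"
            return True
        for b in range(num_bins):    # branch point: try the item in each bin
            if bins[b] + items[i] <= capacity:
                bins[b] += items[i]
                if place(i + 1):
                    return True
                bins[b] -= items[i]  # undo, then try the next branch
        return False

    return place(0)

def min_bins(items, capacity):
    """Find the smallest M by asking the yes/no question for M = 1, 2, ..."""
    m = 1
    while not fits_in_bins(items, m, capacity):
        m += 1
    return m

print(min_bins([4, 8, 1, 4, 2, 1], 10))  # 2
```

The recursion mirrors the "superstate" picture: each level of `place` is one nondeterministic choice, and the deterministic machine pays for the branching by visiting the paths sequentially, which is why the worst case blows up exponentially.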

*Oh my God, I only just woke up and this article is already sending me back to sleep.*

The article is very badly written.

posted by delmoi at 12:37 AM on August 28, 2009

I enjoyed the article though, and this has been a really good thread IMO, causing rust to drop off some long-unused cogs in the back of my brain. I'm just glad there are people who understand this stuff better than I do - it restores my faith in... something or other.

posted by Phanx at 1:27 AM on August 28, 2009

*But lets look at the Bin Packing Problem, which is where you try to fit a bunch of objects of various sizes into the smallest number of 'bins'*

I knew this would overlap with the burning man thread eventually.

posted by flaterik at 1:50 AM on August 28, 2009 [1 favorite]

The awesomeness of the problem. In the old and true sense of the word. You're just not going to make that happen with equations and math.

*Except you did so in a way that makes little sense, and is wrong. That's the nice thing about "equations and math", which is the only way to explain what it actually means.*

No, equations and math are how you explain WHAT IT IS. English is far better for explaining WHAT IT MEANS.

Look, you still don't get it. Take my explanation and tell it to a child, or on a date. Then take the mathematical approach. Get all equationy on their ass. They'll *probably* start fidgeting when they're hit by all your correct little math.

When explaining an *unsolved mathematical problem*, you DO NOT kill layman's interest by worrying too much about mathematical detail - it is a disservice to both mathematics and to the listener! You'll just bury the central point, the importance, and as I said, its *awesomeness*.

posted by krilli at 2:20 AM on August 28, 2009 [2 favorites]

*No, equations and math are how you explain WHAT IT IS. English is far better for explaining WHAT IT MEANS.*

*Look, you still don't get it. Take my explanation and tell it to a child, or on a date. Then take the mathematical approach. Get all equationy on their ass. They'll probably start fidgeting when they're hit by all your correct little math.*

*When explaining an unsolved mathematical problem, you DO NOT kill layman's interest by worrying too much about mathematical detail - it is a disservice to both mathematics and to the listener! You'll just bury the central point, the importance, and as I said, its awesomeness.*

The problem here is that you aren't explaining the awesomeness of the "unsolved mathematical problem" because your analogy doesn't work.

You could claim that solving P = NP would let people move objects with the power of their mind and laymen would certainly think that was awesome, but it still wouldn't actually have anything to do with what it really means and wouldn't have aided their understanding in the slightest.

posted by Ruaidhri at 5:37 AM on August 28, 2009

I wish Charles Stross were here. He has a fabulously entertaining short story called "Antibodies" about the implications of proving `P = NP`.

posted by adipocere at 6:29 AM on August 28, 2009

*You could claim that solving P = NP would let people move objects with the power of their mind ...*

What does that have to do with anything? This isn't what I said, this isn't what anyone said, except you just said it :D

OK. Team Pedant, Ready, Set, GO! Come up with a correct and interesting analogy. Please invite me to your book signing.

posted by krilli at 6:42 AM on August 28, 2009

They need to spend more time showing that N = 1. Then P = NP would just be a corollary.

posted by King Bee at 6:55 AM on August 28, 2009

*A Nondeterministic Turing machine can be imagined in two different ways. One (the more difficult way) is to imagine a Turing machine that, when it wants to make a choice, can make two choices at once, so it's "state" expands into a superstate. Then, when, in one of those states it wants to make another choice, the number of states expands again.*

*So a Nondeterministic computer can check all the different "paths" at once, whereas a regular computer can only check one path at a time.*

*Another way to think about it, which is easier but (IMO) less interesting is to imagine a regular computer with a special random number generator that always makes the best guess.*

Thanks for the reply delmoi.

But I still don't understand what it means to say P = NP or P doesn't equal NP.

P refers to polynomial time, which loosely means a timeframe that is manageable.

NP somehow refers to this ability to examine permutations/combinations in parallel, right?

But ontologically speaking, is NP a property of a problem? Is it a property of a machine that is designed to solve a problem? Is it a type of timeframe?

I need to understand the ontological category of P and NP in order to understand what the fuck it means to say P equals NP or P does not equal NP.

posted by spacediver at 8:19 AM on August 28, 2009

spacediver, the question is formulated like this:

Given two classes of complexity, P and NP, defined as such:

The class P is all problems that can be solved by algorithms which take no more than k*n^x time to complete.

The class NP is all problems that can be solved by algorithms which take no more than n^k to complete.

We know that all the problems which are members of P are also members of NP. What we don't know is if *all* NP problems can be reformulated as problems that are members of P.

Furthermore, we suspect that any proof that all NP problems can be reformulated as P problems will have an empirical side effect of giving us an algorithm to transform an NP problem to a (potentially very large) P problem.

We also know that many NP problems are isomorphic - that is, they can be transformed into a different NP problem. What this means is that for many NP problems, if you can find a way to formulate the problem as a P problem, you have, as a side effect, transformed *many* NP problems into P problems. This property is why it would fundamentally change so many things.

There's a little bit of handwaving here to try and simplify it. Forgive me.

posted by atbash at 9:03 AM on August 28, 2009

I have a proof for P != NP that fits nicely in this comment box.

"NP" is not equal to "P" because it has the letter "N" in it.

posted by freecellwizard at 9:05 AM on August 28, 2009 [1 favorite]

thank you atbash!

So if I understand this correctly, if someone can actually prove that even one NP problem CANNOT be reformulated as a P problem, then the P vs NP problem has been solved?

(Although this proof would not (and should not) deter people from trying to reformulate some NP problems into P problems, because some of them may be reformulable.)

posted by spacediver at 9:13 AM on August 28, 2009

Spacediver, I believe that the proof would work both ways.

posted by krilli at 9:18 AM on August 28, 2009

krill, would you agree with the following propositions?

1:

If we can prove that just one NP problem CAN be reformulated as a P problem, then we have not solved the P vs NP problem (since we haven't proven that all NP problems can be reformulated as P problems, just this one particular instance).

2:

If we can prove that just one NP problem CANNOT be reformulated as a P problem, then we have solved the P vs NP problem (since we now know for sure that not ALL NP problems can be reformulated as P problems).

3:

If we can somehow prove in some ingenious manner that all NP problems can be reformulated as P problems, we have not only solved the P vs NP problem, but have achieved a holy grail status.

Is proposition 3 the one that people are trying to do (as well as trying to make intuitive inductions based on empirical experience with individual instances of P and NP problems)?

posted by spacediver at 9:23 AM on August 28, 2009


sorry, krilli not krill :)

posted by spacediver at 9:27 AM on August 28, 2009

My understanding is that (1) and (3) are flawed propositions:

(1) as you've stated it fails to account for the established isomorphism of NP problems, which implies that, since any NP problem can be rewritten as any other, the proof of a given NP problem being a member of P is by definition a proof that all NP problems are members of P.

(3) is therefore moot. (1) and (2) are the opposite sides of the Grail coin; no one's managed to successfully flip it yet, is all.

But I may be misremembering how thorough the isomorphism across NP is, which would throw a wrench in the above if it's incomplete.

posted by cortex at 9:38 AM on August 28, 2009

I see cortex. I didn't realize that the isomorphism proposition held true for all possible NP problems. Establishing this universal isomorphism (assuming it is true) must have been a great achievement in itself.

I'm actually intuitively skeptical of such an isomorphism, since any such proof (of isomorphism) must take into account the full nature of the NP problem.

And it seems to me that the NP-ness of a problem is only one out of its many relevant properties.

posted by spacediver at 9:42 AM on August 28, 2009

What I mean is that any such isomorphism proof must take into account the full nature of all possible NP problems.

And it seems to me that the NP-ness of a problem is only one of its many relevant properties.

Surely whether or not an NP problem can be reformulated into a P problem is a function of some of these other relevant properties, and not just the NP-ness of it.

Thus, it is hard to imagine how the isomorphism proof successfully exhausts all possible NP problems along with all their possible relevant properties.

posted by spacediver at 9:45 AM on August 28, 2009

I think, in a sense, all NP-complete problems are the same problem.

But I think NP-complete is a subset of NP, yeah?

posted by empath at 9:58 AM on August 28, 2009

What cortex just said, but with one additional caveat: any "reformulation" can't take too much time either. That is, being able to reformulate an NP problem as a P problem is only useful if the process of reformulation itself is in P.

*And it seems to me that the NP-ness of a problem is only one out of its many relevant properties.*

Definitely. There are all kinds of other properties to look at, too: how much space (memory) is required to solve the problem, whether or not there exists a fast algorithm that can give approximate solutions that are "close enough" to the true solution (for various values of "close enough"), etc. This site has lots of info about such things.

posted by equalpants at 10:00 AM on August 28, 2009

I wonder how analogue solutions to things like the traveling salesman problem shed light on this p vs np issue.

For example, I've heard that you can approximate a solution to the traveling salesman problem with a wooden board, some pegs, and a bowl of soapy water.

Place the pegs on the board in the positions of the destinations that the salesman has to visit, then dip the board into the soapy water, and due to things like surface tension and paths of least resistance, the strands of soapy residue will connect the pegs in the most efficient manner.

Or something along those lines.

In this case, it seems that the laws of physics themselves are acting as a computer, self organizing into the most efficient solution.

posted by spacediver at 10:09 AM on August 28, 2009 [1 favorite]


cortex, I think you've got the isomorphism slightly wrong there. The relationship that's been proven in terms of solvability is that if *any* NP-complete problem is solvable in P, then all NP problems must be. That doesn't translate directly to isomorphism - there won't **necessarily** be known ways to reformulate all of them. This is why a lot of CS work goes towards proving that one NP problem is actually isomorphic to another; the hope is that if P=NP is proven, then whatever proof is provided will luckily land on one of the problems that has proven isomorphisms.

posted by atbash at 10:11 AM on August 28, 2009

Okay -- so any NP problem can be reduced to an NP-complete problem and every NP-complete problem can be mapped to any other NP-complete problem, but not all NP problems are NP-complete themselves, do I understand that?

So, there are NP problems which you cannot map to each other.

posted by empath at 10:14 AM on August 28, 2009

*Place the pegs on the board in the positions of the destinations that the salesman has to visit, then dip the board into the soapy water, and due to things like surface tension and paths of least resistance, the strands of soapy residue will connect the pegs in the most efficient manner.*

You'll more likely get a Delaunay Triangulation or something as soap won't tell you what order to visit the stops in.

But yes, there are analog computers that can compute answers very quickly to problems that take a lot of computational power. I don't think it's a general solution to NP problems though.

posted by GuyZero at 10:15 AM on August 28, 2009

I wish I went on to grad school. I miss this crazy stuff.

posted by chairface at 10:20 AM on August 28, 2009 [1 favorite]

freecellwizard said: *"NP" is not equal to "P" because it has the letter "N" in it.*

For every problem there exists a solution which is simple, elegant, and wrong.

posted by atbash at 10:25 AM on August 28, 2009 [4 favorites]

*cortex, I think you've got the isomorphism slightly wrong there.*

Ah, that doesn't surprise me, yeah. Heh.

*For every problem there exists a solution which is simple, elegant, and wrong.*

I would buy a book that was a nice presentation of nothing but a series of such solutions.

posted by cortex at 10:28 AM on August 28, 2009 [1 favorite]

*Thus, it is hard to imagine how the isomorphism proof successively exhausts all possible NP problems along with all their possible relevant properties.*

I believe the isomorphism cortex referred to was NP-completeness. As empath said, NP-complete is a subset of NP. (Also note that P is a *subset* of NP.) A problem A is NP-complete if:

1. A is in NP

2. any problem in NP can be reduced to A

Note that #2 only goes one direction: any problem in NP can be reduced to A, not necessarily the other way around. All NP-complete problems can be reduced to each other, but not necessarily to other problems in NP. (In particular, most people think they cannot be reduced to those problems in NP which are also in P.) In a sense, the NP-complete problems are the "hardest" problems in NP.

So your point (1) becomes: if somebody can invent a way to reduce (in polynomial time) even a single *NP-complete* problem to P, then NP = P. The difference is that we're going from NP-complete to P, not just from NP to P--reducing *one* NP-complete problem implies that we can reduce *all* NP problems.

on preview: what empath just said.

posted by equalpants at 10:30 AM on August 28, 2009

I think I need to read up on what exactly NP-complete means relative to NP. arg...

posted by spacediver at 10:43 AM on August 28, 2009

spacediver: *For example, I've heard that you can approximate a solution to the traveling salesman problem with a wooden board, some pegs, and a bowl of soapy water.*

Scott Aaronson actually tried the soap bubbles for the Steiner tree problem and discovered that it may work for small instances, but will not consistently work for larger instances. The linked paper is really interesting and also discusses other potential approaches of using physics to get solutions for NP-complete problems.

posted by ltl at 1:36 PM on August 28, 2009

*No, equations and math are how you explain WHAT IT IS. English is far better for explaining WHAT IT MEANS.*

Sure, but it helps if you say what it actually means, rather than something else, which is what you did.

The irony is that this type of computer science is actually very textual. You don't have page after page of dense equations; you pretty much just have text plus some set notation, which is very easily convertible to normal English and vice versa. Probably because algorithms aren't really thought of as algebraic groups, which is what formulas and equations normally operate on.

In other words, 'doing' this kind of math and 'talking about' this kind of math are kind of like the same thing.

--

*P refers to polynomial time, which loosely means a timeframe that is managable.*

Yep.

*NP somehow refers to this ability to examine permutations/combinations in parallel, right?*

Not exactly. Think about N and P separately. The P in "NP" means the same thing as the P in "P" by itself. It means polynomial time. The N means the ability to look at permutations in parallel. So what do we mean when we put those two letters together?

We mean that an "N machine" can do the problem in "P Time"

The opposite of "N" is actually "D", which stands for Deterministic. When we're talking about computers, we leave the D off.

It would help if you understood the much simpler concept of Finite State Machines, of which we have DFAs (Deterministic Finite Automata) and NFAs (Nondeterministic Finite Automata). NFAs are like "N machines" in that they can enter multiple states at one time. DFAs are like normal computers in that they can only enter one state at a time.

So there is a set of problems that can be solved with a simple state machine. And here's the interesting bit: the problems we can solve with a DFA are **the same** as the problems we can solve with an NFA. We just have to, in the worst case, create a DFA with 1 state for each possible combination of states in the NFA.

So the question with regards to P = NP is: "Is it possible that we can somehow think up a really good algorithm for solving a problem in P time on a deterministic computer if we could prove it on a *Non*deterministic computer in P time?"

It's important to understand that NP is actually a **maximum** bound on how hard the problem is. Everything easier is in NP. Everything in P is also in NP, because if we can solve it with a deterministic machine, then obviously we can solve it with a non-deterministic machine as well.
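The DFA/NFA equivalence mentioned here can be shown in a few lines: a deterministic program simulates an NFA by tracking the *set* of states it could be in, which is the on-the-fly version of the subset construction. This is an illustrative sketch; the example NFA (accepting binary strings whose second-to-last symbol is 1) and all names are made up for the demo.

```python
def nfa_accepts(transitions, start, accepting, s):
    """Simulate an NFA deterministically by tracking the set of states it
    could currently be in -- one 'superstate' per step of the input."""
    current = {start}
    for ch in s:
        current = {nxt for state in current
                       for nxt in transitions.get((state, ch), ())}
    return bool(current & accepting)

# NFA accepting strings over {0,1} whose second-to-last symbol is '1'.
nfa = {
    ('q0', '0'): {'q0'},
    ('q0', '1'): {'q0', 'q1'},   # the nondeterministic choice: stay or commit
    ('q1', '0'): {'q2'},
    ('q1', '1'): {'q2'},
}
print(nfa_accepts(nfa, 'q0', {'q2'}, '0110'))  # True
print(nfa_accepts(nfa, 'q0', {'q2'}, '0101'))  # False
```

For finite automata the superstate trick costs at most 2^(number of states), a fixed constant per machine, which is why NFAs and DFAs recognize the same languages; for Turing machines the analogous branching can grow with the input, and that is exactly the gap P vs NP asks about.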

*But ontologically speaking, is NP a property of a problem? Is it a property of a machine that is designed to solve a problem? Is it a type of timeframe?*

NP is a property of the problem, the property being solvability on an imaginary type of machine in a certain amount of time.

*If we can prove that just one NP problem CAN be reformulated as a P problem, then we have not solved the P vs NP problem (since we haven't proven that all NP problems can be reformulated as P problems, just this one particular instance).*

All the NP problems can be reformulated into each other in P time. In other words, you can write a program that will convert one NP problem into another, and operate in P time. So if you solve one, you solve them all. The reverse is also true.

*I see cortex. I didn't realize that the isomorphism proposition held true for all possible NP problems. Establishing this universal isomorphism (assuming it is true) must have been a great achievement in itself.*

Not really. They are isomorphic by definition, so when you encounter a new problem, the way you determine if it's in NP is by finding an isomorphism to another NP problem.

posted by delmoi at 3:02 PM on August 28, 2009

*All the NP problems can be reformulated into each other in P time.*

Minor correction: all NP-complete problems can be reformulated into each other in P time. All NP problems can be reformulated into any NP-complete problem in P time.

Anything in NP can go *into* NP-complete land; what we don't know is whether anything can come *out*.

posted by equalpants at 3:53 PM on August 28, 2009 [1 favorite]

nonono, atbash:

*The class P is all problems that can be solved by algorithms which take no more than k*n^x time to complete. The class NP is all problems that can be solved by algorithms which take no more than n^k to complete.*

P contains problems that can be solved in O(n^k) for some integer k.

NP contains problems where a solution can be verified in O(n^k) for some integer k.

posted by lupus_yonderboy at 5:36 PM on August 28, 2009

Doesn't Godel's Incompleteness Theorem create a big problem for the P=NP problem - in that no formal system can solve all the problems created within its own domain? Help me understand how or why P=NP exists beyond the constraints of the Incompleteness Theorem.

posted by Vibrissae at 10:41 PM on August 28, 2009

*Doesn't Godel's Incompleteness Theorem create a big problem for the P=NP problem?*

I'm not sure what you mean. The incompleteness theorem allows for the possibility that there is no proof either way, but the same can be said of any open mathematical problem.

posted by Pyry at 12:51 AM on August 29, 2009

For those confusing NP vs. NP-Complete, it may help to remember **NP-Hard**. NP-Hard problems are those problems that are "at least as hard as any problem in NP." Note that not all NP-Hard problems are in NP. NP-Complete, however, is the overlap between NP and NP-Hard, that is, NP-problems which are at least as hard as all other NP-problems.

So, NP-Complete problems like 3-SAT are interesting because (1) we know they're in NP and (2) we know that other NP problems can be transformed into an equivalent 3-SAT problem. The most obvious way to prove that P=NP, then, is to show that an NP-Complete problem like 3-SAT has a polytime solution. This would imply that the hardest problems in NP are all in P, thus P=NP is a natural consequence of that fact.

posted by axiom at 6:07 PM on August 29, 2009
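axiom's 3-SAT example can be sketched in a few lines (a hypothetical encoding for illustration: each clause is a list of nonzero integers, with a negative sign meaning negation, roughly as in the DIMACS CNF convention). Verifying an assignment is one polynomial pass over the clauses; the obvious solver tries all 2^n assignments, which is the exponential part nobody knows how to avoid.

```python
from itertools import product

def check(clauses, assignment):
    """Polytime verification: every clause must contain at least one true literal.
    assignment maps variable number -> bool; literal -v means 'not v'."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

def brute_force_sat(clauses, n_vars):
    """Try all 2^n truth assignments -- exponential time."""
    for bits in product([False, True], repeat=n_vars):
        assignment = {v + 1: bits[v] for v in range(n_vars)}
        if check(clauses, assignment):
            return assignment
    return None  # unsatisfiable

# (x1 or x2 or not x3) and (not x1 or not x2 or x3) and (not x1 or x2 or x3)
clauses = [[1, 2, -3], [-1, -2, 3], [-1, 2, 3]]
```

A polytime algorithm playing the role of `brute_force_sat` for 3-SAT would, via the reductions axiom describes, collapse all of NP into P.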

Oh, and if you wanted a list of NP-Complete problems, you might check Karp's.

posted by axiom at 6:09 PM on August 29, 2009

*Doesn't Godel's Incompleteness Theorem create a big problem for the P=NP problem - in that no formal system can solve all the problems created within its own domain?*

No, it does not cause a problem.

Every problem in P and NP can be solved, for sure. The only question is how long it takes.

posted by delmoi at 8:44 PM on August 29, 2009

This thread has been archived and is closed to new comments

posted by effbot at 3:53 PM on August 27, 2009