February 19, 2001 3:18 PM Subscribe

NSA has lost the techno war. Or so it says. But do we believe them? Or is this merely intended to lull us into complacency?

posted by Steven Den Beste (25 comments total)

This is the sort of thing that I know even less about than most things. However, if they seem aware of what every other nation has that they lack, then they must be sufficiently equipped to know such things, no?

On the other hand, why would they tell us they are behind the 8-ball other than to want more money that we will not know about going into a budget that we will also not know about?

posted by Postroad at 4:32 PM on February 19, 2001

I don't doubt that it's real, or that it's a pitch for more money. Consider the thousands of people online who are technically capable of creating new ways of obscuring information, and the millions who are now connected to learn about and use old and new ways.

They'll stay in business... hard to imagine the executive branch dropping one of its tools -- and whether or not they're falling behind, the cost of not trying is too high. There are still people alive who experienced Pearl Harbor.

posted by Twang at 4:44 PM on February 19, 2001

Twang: do you mean alive then at the harbor to see the bombs drop, or rather alive during the time that FDR announced that we had been attacked? If the latter, count me in. Not all that many years ago, of course, NSA did not have computers, that is, e-mail etc. to contend with. The book that gave the insights into the agency is now very obsolete because of technological evolution. But then this note might also be monitored, right?

posted by Postroad at 5:26 PM on February 19, 2001

If it were true, why would they announce it? And if it is, so what? What are we supposed to do? Give up encryption? Fsck that. It sounds to me like a plea to the unwashed masses for legislation on weaker encryption schemes. The battle between the code makers and the code breakers has been going on for centuries. The makers have the upper hand at the moment; it won't last forever, though. Desperation is often the precursor to wondrous developments. A good book on this is "The Code Book" by Simon Singh.

posted by jbelshaw at 7:11 PM on February 19, 2001

c'mon, of course it's propaganda...didn't you see what they did to poor Will Smith and Gene Hackman last year??? If they can see into our shopping bags in a store, they're not gonna be afraid of a few petty terrorists...

posted by DiplomaticImmunity at 7:47 PM on February 19, 2001

Until the 1960's, there was only one type of encryption which was theoretically unbreakable: the one-time pad. The problem with one-time pads was that they couldn't handle much traffic and they required a secure courier.
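The mechanism is simple enough to sketch. Here is a minimal one-time pad in Python; the message and pad are a toy example of my own, not anything from the thread:

```python
import secrets

# A minimal one-time pad: XOR the message with a truly random pad of the same
# length. Decryption is the same XOR. The pad must be exchanged securely and
# never reused -- exactly the courier problem described above.
def otp(pad: bytes, data: bytes) -> bytes:
    assert len(pad) >= len(data), "pad must cover the whole message"
    return bytes(p ^ d for p, d in zip(pad, data))

message = b"meet at midnight"
pad = secrets.token_bytes(len(message))
ciphertext = otp(pad, message)
assert otp(pad, ciphertext) == message  # the same operation decrypts
```

Because every pad is equally likely, every plaintext of the same length is equally consistent with a given ciphertext, which is where the theoretical unbreakability comes from.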

Now, for the first time in history, we have ciphers which can be proved to be unbreakable, with current technology, except by luck. If you encipher with a cipher which is 1024 bits strong then it won't be broken within the next hundred years. It can be demonstrated that traditional approaches to solution don't work (e.g. guessing keywords) and that brute force is the only answer. And if computers get faster, all you do is make your key longer and you stay ahead. This is unique in human history, and cheap and powerful computers made it happen.
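The arithmetic behind "make your key longer and you stay ahead" can be sketched directly. The attacker's speed below (a trillion key trials per second) is an assumed figure of mine, purely for illustration:

```python
# Assumed attacker speed: 10**12 key trials per second (illustrative only).
SECONDS_PER_YEAR = 365 * 24 * 3600

def years_to_search(key_bits, keys_per_sec=10**12):
    """Expected years to brute-force half of a key_bits keyspace."""
    return (2 ** (key_bits - 1)) / keys_per_sec / SECONDS_PER_YEAR

# Each added bit doubles the attacker's work...
assert years_to_search(129) == 2 * years_to_search(128)
# ...so a 128-bit key is already absurdly far out of reach at this speed:
assert years_to_search(128) > 10**18  # over a billion billion years
```

Faster hardware shifts `keys_per_sec` linearly, while each extra key bit doubles the search, which is why the defender stays ahead.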

It's also permanent.

No less expert than David Kahn, who wrote what many consider the definitive history of cryptography (in the 1960's), says "The war of cryptographer against cryptanalyst has been won by the cryptographers. The only way properly encrypted messages can be read nowadays is by theft or betrayal -- that is, noncryptologic means."

posted by Steven Den Beste at 8:21 PM on February 19, 2001

Ironically, we had MAGIC intercepts of Japanese radio traffic and were able to decode key diplomatic traffic between Tokyo and Washington prior to Pearl Harbor. The fatal flaw was that we didn't have the resources to decode it rapidly, and we didn't have intelligence assets to assess how important the information was.

The lesson is that being able to read someone's mail isn't enough. You have to be able to understand what the messages mean, and determine what's critical information and what's chaff. NSA is certainly putting on a publicity campaign (they also convinced NBC's Dateline to do a piece on how blind they were). The question is, what are they lobbying for? Crypto keys? Money?

posted by darren at 5:47 AM on February 20, 2001

There was a comprehensive investigation by Congress on the Pearl Harbor attack to determine whether it was true that the President or any of his advisors knew the attack was coming and decided to let it land. There is no evidence at all that this took place.

MAGIC was the code name for the fact that the US had broken the top Japanese diplomatic cipher machine; MAGIC was used to refer to information derived from that. As a feat of cryptanalysis it tops the Enigma break, since the PURPLE system (the American code name for the Japanese machine) was more complex than Enigma and Purple was broken through straight analysis, whereas the Enigma break also involved espionage. On the other hand, Enigma was used far more than Purple and carried more important traffic.

MAGIC was the absolutely tippy-top secret in the US at the time, and in Washington only about 8 people outside the cipher organizations were permitted to read its results and even they weren't permitted to retain messages. (It's a measure of the depth of the alliance between the UK and the US after 12/7/41 that the US sent a reconstructed Purple machine to Great Britain so that they could do their own decipherments of intercepted Japanese diplomatic traffic. But it made sense, because the Japanese ambassador in Berlin was being shown a great deal by the Germans, and was radioing home what he saw. MAGIC permitted the US and UK to read that traffic, and it was extremely valuable. Enigma mostly carried lower level information. Purple carried overviews. They complemented each other nicely.)

The US received and deciphered numerous messages in Purple in the months leading up to the attack, but none of them contained explicit information about what would be attacked. The timing of the attack could be deduced pretty clearly about a day ahead of time from the final message sent to the Japanese Ambassador in Washington DC, who was instructed that he absolutely must deliver that message before a certain time and then to utterly destroy his machine. The message essentially broke off diplomatic relations. Because of incompetence at the Japanese embassy, the message was officially delivered late, after the bombing had already begun. (However, the US had actually received the message unofficially before the deadline, an interesting point. When the Japanese ambassador visited the Secretary of State to deliver the message, the Secretary read it but in fact already knew what it contained because he had read the decryption.) But no message has ever been identified as being decrypted which actually identified Pearl Harbor as the point of attack.

A message was sent to all Pacific Theater commanders informing them about the date/time which seemed to be critical but giving them no other information. In another of those historical accidents which seem inexplicable, the message sent to Pearl Harbor wasn't deciphered until after the attack, not that it's clear it would have made any difference.

Darren, I'm afraid you don't quite have the story correct. In fact there were plenty of resources available to decrypt and translate Purple messages, if for no other reason than because traffic levels in that cipher were not very high at the time (perhaps 1% of the amount carried in JN-25, which hadn't yet been cracked). But you can only read what is being sent. The reason that MAGIC didn't prevent the catastrophe at Pearl Harbor was because no message explicitly contained any information hinting at the target.

Although there was plenty of blame to spread around, Congress explicitly cleared the American signal intelligence people, and indeed commended them for the superb job they had done.

David Kahn's book devotes an entire chapter, the first in the book, to this subject and deals with this question in extreme detail. The evidence is completely unambiguous that Pearl Harbor was not a result of a failure in American cryptanalysis.

posted by Steven Den Beste at 8:02 AM on February 20, 2001

What's disturbing is that people in the intelligence community even think in cowboys-and-Indians terms of good guys and bad guys.

posted by Loudmax at 9:28 AM on February 20, 2001

Steven, thanks for the detailed post. I concur with your analysis of the value of MAGIC, Purple and Enigma, although I think it's clear that the long time required for decoding, translation and delivery of MAGIC traffic prior to Dec. 7, 1941 proves my point about the need for additional resources.

posted by darren at 9:57 AM on February 20, 2001

Earth to Loudmax: Terrorists who blow up or threaten to blow up buildings with innocent men, women & children = bad guys.

Glad to help out.

posted by darren at 9:58 AM on February 20, 2001

If you encipher with a cipher which is 1024 bits strong then it won't be broken within the next hundred years.

Stephen, assuming that you mean "If you encrypt a message with a 1024-bit key using a modern public key cryptosystem," it's still the case that the only provably secure code is a one-time pad. Ignoring pie-in-the-sky solutions (Aliens show up with super-algorithms! P = NP! We develop true quantum computing!), it's entirely possible that the underlying mathematics of a given system will fall to a novel solution. That's what happened to the first two iterations of the Merkle-Hellman knapsack cryptosystem -- the method of transformation they used to get from the hard version of the knapsack problem to the easy version was insecure once lattice reduction was turned against it. Incremental gains in cryptanalysis will also gradually whittle away the effectiveness of a cryptosystem. For instance, continuing improvements in the tools used to factor large pseudo-primes hurt RSA. Eventually a given level of security will require larger key sizes, and the whole system will be less functional. (RSA is computationally expensive as it is.) Similar developments could happen with ECC or any public-key system you care to name.
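The dependence on factoring is easy to demonstrate at toy scale. This uses the small classic textbook modulus n = 3233; real moduli are hundreds of digits long, which is the only thing standing between RSA and this attack:

```python
# Toy RSA break via trial-division factoring. n = 3233 (= 53 * 61) is the
# classic textbook modulus; real moduli are far too large for this approach.
def factor(n):
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    return None

n, e = 3233, 17                      # public key
p, q = factor(n)                     # instant at this size
d = pow(e, -1, (p - 1) * (q - 1))    # attacker derives the private exponent
assert pow(pow(65, e, n), d, n) == 65  # round trip: the recovered key works
```

A better factoring algorithm does to real-sized moduli exactly what trial division does here, which is the "hungry grad student" risk.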

(Side note: "Encipher with a cipher" sounds like an outtake from The Court Jester.)

posted by snarkout at 10:43 AM on February 20, 2001

Eek! Sorry for misspelling your name, Steven.

To keep this from being entirely content free, those of you interested in WWII-era cryptography might want to visit the NSA museum next time you're in the DC area.

posted by snarkout at 11:08 AM on February 20, 2001

Breaking a message encrypted with a cipher which has a strength of 2^1024 (which doesn't mean the key is 1024 bits long; it could be longer) requires a sufficient degree of computing that it probably could be solved within the lifetime of the human race, but not within the lifetime of anyone now alive *even postulating* that Moore's Law continues for the foreseeable future.

For all practical purposes that is secure now. (And it doesn't require a public key cipher. Symmetric ciphers which are well designed also can only be attacked by brute force.) The only way such a thing could be broken would be if someone found a weakness in the cipher later which caused the solution space to collapse. That's always possible, but not likely with most symmetric ciphers.

posted by Steven Den Beste at 2:42 PM on February 20, 2001

Breaking a message encrypted with a cipher which has a strength of 2^1024 (which doesn't mean the key is 1024 bits long; it could be longer) requires a sufficient degree of computing that it probably could be solved within the lifetime of the human race, but not within the lifetime of anyone now alive even postulating that Moore's Law continues for the foreseeable future...

All I can say is that you should look at the history of public-key encryption systems based on the knapsack problem after lattice reduction was applied to cryptanalysis. Or how Khafre fared after the discovery of differential cryptanalysis. (The S-boxes for DES were chosen in such a way as to make them more resistant to differential cryptanalysis than some other block ciphers that were around at the time; as the NSA was involved with the creation of DES, this has led some to conclude that the NSA knew about differential cryptanalysis well before the civilian world.) If someone comes up with a novel method of cryptanalysis that can be applied to the encryption method you're using, you're screwed -- the literature is rife with cryptographic schemes that are no longer used.

Sorry to harp on the lattice reduction thing, but I studied it briefly in college and thought it was a really impressive piece of mathematical work. As I said, were someone to come up with a really good method of factoring large pseudoprimes, RSA would be much more vulnerable, and there's nothing impossible about that (assuming the problem isn't NP-complete). Factoring pseudoprimes attracts enough attention that someone will do it sooner or later.

I'll readily admit that cryptography has advanced a great deal in the last ten to fifteen years, and it certainly seems *unlikely* that great leaps forward will be made anytime soon against, say, RSA, but I'd bet good American dollars against any widely used encryption scheme that's based on a non-NP-complete problem remaining secure two hundred years from now. (Maybe I just have more faith in mathematicians than you do!)

posted by snarkout at 4:41 PM on February 20, 2001

It's logical that any system that is widely used will have a lot of resources thrown against it. Hence, the probability that a widely-used system is secure is lower.

Their problem, I think, is that thousands of alternative systems are being explored and created. Defeating most of them is probably relatively trivial, but the sheer volume must be daunting.

Wired had an interesting story today about research into the use of steganography in WAVs and in image files.

posted by Twang at 7:41 PM on February 20, 2001

Postroad:

*Not all that many years ago, of course, NSA did not have computers, that is, e-mail etc. to contend with. The book that gave the insights into the agency is now very obsolete because of technological evolution. But then this note might also be monitored, right?*

For sure, all the books are ancient history. As for the Net, it was first developed by DARPA, and most of the original web browsers were developed with Fed funding. Considering that, and the topography of the net, I'm sure they have a very good handle on plaintext content.

I always assume that everything I write and send out this wire is potentially going to run through a big brother computer somewhere... or might be someday. This assumption does encourage me to select my syntax more carefully.

posted by Twang at 8:03 PM on February 20, 2001

<EXPLANATORY_TEXT>

For those who don't know what I'm going on about with P and NP, they're two classifications of mathematical problems. A problem is in P if it can be solved in polynomial time -- that is, the amount of time to solve expands as a polynomial function of the size of the input. A problem is in NP if a solution can be verified in polynomial time. There's a further property of some problems, "NP-complete"; the basic idea is that if any of these problems are solvable in polynomial time, they're all solvable in polynomial time. P is a subset of NP, but it's an unresolved question whether P = NP. If P = NP, a vast number of problems could be solved quickly, but many people think that this is not the case. The Knapsack Problem is an NP-complete problem and was used as the basis for the first public-key cryptosystem, but the key -- the method of transforming it from a hard problem to an easy problem -- proved vulnerable to attack.

RSA, however, is based on the computational difficulty of factoring the product of two large primes, and the factoring problem isn't NP complete -- in fact, it hasn't even been rigorously proved to be a difficult problem (though Steven's right that most people think it is).

The math optimist in me firmly believes that, until the factoring problem is proved to be difficult, its difficulty should not be taken for granted, because *some* hungry young grad student is eventually going to have a breakthrough and end RSA's usefulness.

</EXPLANATORY_TEXT>
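The verify-fast/search-slow asymmetry described above can be shown with a toy subset-sum instance (the knapsack flavor; the example numbers are my own):

```python
from itertools import combinations

# Subset-sum: checking a claimed solution is one pass over the subset, while
# finding one naively means trying up to 2**n subsets.
def verify(weights, subset, target):
    """Polynomial-time check of a proposed solution (indices into weights)."""
    return sum(weights[i] for i in subset) == target

def search(weights, target):
    """Exponential-time brute-force hunt over all subsets."""
    for r in range(len(weights) + 1):
        for subset in combinations(range(len(weights)), r):
            if verify(weights, subset, target):
                return subset
    return None

weights = [3, 7, 12, 25, 41]
solution = search(weights, 44)        # tries subsets until 3 + 41 = 44
assert solution is not None
assert verify(weights, solution, 44)  # the check itself is a single sum
```

Each extra weight doubles the search space, which is the gap public-key schemes try to exploit; the Merkle-Hellman failure was in the hard-to-easy transformation, not in subset-sum itself.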

posted by snarkout at 9:41 PM on February 20, 2001

Snark, DES has been in use for, I believe, 30 years, and so far no-one has found a backdoor or a way to collapse the solution space. DES fell to the EFF's "Deep Crack", but Deep Crack is a brute force solution. Public key ciphers are a different problem because they try to take advantage of what are known as "trap door" problems in mathematics, where going one way is easy but the other is not.

But symmetric ciphers don't have that issue; they're not limited to specific mathematical phenomena. They can be designed limited only by the abilities of computers. That's a much broader algorithmic space, and within that space it is indeed possible to create ciphers which are not public key but also which are exceedingly unlikely to ever be collapsed. Of course, nothing is certain, but the approach used by DES appears to be secure. DES itself is limited to 56 bits by design, but the approach used in DES is amenable to being made arbitrarily long and arbitrarily complex. There are certain ways of analyzing such a system to determine if it is implementing real complexity or illusory complexity.

Focusing on public key ciphers is a mistake. Design of a public key cipher has severe constraints imposed on it that are not present in a symmetric cipher.

Symmetric ciphers have enormous advantages over public-key ciphers and one gaping disadvantage; public key ciphers, conversely, have enormous problems and only one advantage. Public key ciphers are extremely expensive computationally per plaintext bit transmitted, but have the advantage of not requiring a trusted courier to carry the key. Symmetric ciphers are at least as secure as long as both sides have the same key, but they have the problem of needing a trusted courier to carry the key from one to the other.

The solution was to use the strengths of both. So a public-key cipher is used to transmit a small number of bits, that being what's known as a "session key". The public key cipher acts as a trusted courier. Then the system switches to the much more computationally efficient symmetric cipher using the key delivered by the public key cipher. (I'm not trying to be pedantic here; I assume you know most of this but I'm trying to lay complete groundwork for my conclusion.)
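The session-key handoff can be sketched end to end. Everything below is a deliberately insecure toy of my own: textbook RSA with tiny primes and no padding plays the "trusted courier", and a SHA-256 keystream stands in for a real symmetric cipher:

```python
import hashlib
import secrets

# Toy RSA key: n = p*q, e public, d private (real keys use enormous primes).
p, q, e = 1009, 1013, 65537
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))    # modular inverse (Python 3.8+)

def stream_cipher(key: bytes, data: bytes) -> bytes:
    """Cheap symmetric step: XOR with a hash-derived keystream; same call decrypts."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

# Sender: wrap a fresh session key with the public key, encrypt the bulk with it.
session_key = secrets.randbelow(n - 2) + 1
wrapped = pow(session_key, e, n)                  # public-key "courier"
ciphertext = stream_cipher(session_key.to_bytes(4, "big"), b"attack at dawn")

# Receiver: unwrap with the private exponent, then run the cheap symmetric step.
recovered = pow(wrapped, d, n)
assert recovered == session_key
assert stream_cipher(recovered.to_bytes(4, "big"), ciphertext) == b"attack at dawn"
```

Only a few dozen bits ever pass through the expensive public-key operation; everything else rides the fast symmetric cipher, which is the whole point of the hybrid design.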

Right now, the public key cipher used is the original one designed by RSA which uses primes and composites. Mathematicians have been trying to find efficient ways (in the sense that a computer scientist uses the term) to factor large composite numbers for nigh unto 2300 years and no-one has ever done so. Anything might happen in future, but that one looks mighty secure now. And symmetric ciphers are well understood and no-one expects any kind of breakthrough to collapse the solution space on the ones which are well designed.

Now the thing about this is that nothing really is certain. A dramatic improvement in the efficiency of parallel computing could bring the whole edifice down if the key being used is too short. (An interesting approach being tried is to actually use genetic engineering to program bacteria to try solutions. They're not very fast but you can apply trillions of them to a problem. It's an intriguing idea.)

In computer science we distinguish between truly impossible problems, ones which can be proved to require infinite time (usually by reduction from the halting problem), and infeasible problems, ones where the solution takes finite time but still takes too long. A useful number (if it has a name I don't know what it is) is one which assumes that every subatomic particle in the universe is a computer, and that each one tests a solution per "chronon", the smallest measurable time in the universe (it happens to be the time taken for a photon to cross the diameter of a proton). Then you estimate the life of the universe and multiply it all together, and you end up with a number of computing solutions which is the absolute theoretical maximum which can be accomplished in this universe. If the solution space is still large, then while a solution is theoretically possible it's not going to happen in this universe by brute force without luck. I don't remember the exact value of that number, but it's on the order of 10^85 or something like that.
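Plugging rough constants into that estimate is straightforward; the particular values below are assumptions of mine, so the exponent that comes out differs from the figure remembered above, but the shape of the argument is the same:

```python
import math

# Rough assumed constants: ~10**80 particles in the observable universe, a
# chronon of ~3e-24 s (light crossing a proton), ~1e18 s of universe lifetime.
PARTICLES = 10**80
CHRONON_SECONDS = 3e-24
UNIVERSE_SECONDS = 1e18

trials_per_particle = UNIVERSE_SECONDS / CHRONON_SECONDS
max_trials = PARTICLES * trials_per_particle  # every particle, every chronon

# With these constants the ceiling lands near 10**121; whatever constants you
# pick, any such ceiling is dwarfed by a 512-bit keyspace:
print(f"ceiling ~ 10^{int(math.log10(max_trials))}")
assert 10**100 < max_trials < 2**512
```

The interesting property is that the ceiling is fixed while keyspaces grow exponentially with key length, so the defender can always clear it with room to spare.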

This is an area of active research in computer science, in a field known as "theory of computing". The idea is to see how a given algorithm scales as the size of the data increases. The best algorithms (e.g. hash tables) require constant time irrespective of the number of items being manipulated. But those are rare. Nearly every algorithm known scales badly, and it's possible to establish upper limits on the data size any algorithm can handle by calculating the number of computations needed and comparing it to that number I just cited.

Brute force solution of encryption is such a problem, and it's well understood. If no way can be found to collapse the solution space, then you just have to plod through and try every possible solution. On average you have to try half of them. It's inelegant and scales exceedingly badly since each 1-bit increase in the key doubles the time needed for the solution. Exponential solutions are horrible.

And it's not too hard to make the key large enough to make the solution space vastly exceed that critical number. If the solution space is 20 orders of magnitude or more larger, then you can rule out luck. This is not too hard when using symmetric ciphers, because among their many virtues is that the normal enciphering and deciphering processes are really very efficient even when long keys are used. While brute force cracks scale exponentially, enciphering and deciphering with the key scale as (I believe) the square which is much more controllable. Going from 128 bits to 256 bits quadruples the encryption cost but increases the crack cost by 2^128 (which is a REALLY BIG NUMBER).
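Both halves of that claim can be checked directly, taking the quadratic encryption-cost scaling above as given (an assumption the comment itself hedges):

```python
# Cost models from the paragraph above: quadratic work for the legitimate
# parties, exponential work for the brute-force attacker.
def encrypt_cost(bits):
    return bits ** 2            # assumed quadratic scaling of encipherment

def crack_cost(bits):
    return 2 ** bits            # brute force: size of the whole keyspace

assert encrypt_cost(256) == 4 * encrypt_cost(128)     # "quadruples"
assert crack_cost(256) == crack_cost(128) * 2 ** 128  # "increases by 2^128"
```

The asymmetry between polynomial cost for the defender and exponential cost for the attacker is the entire engineering argument.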

So if you have a cipher whose characteristics are understood well enough to rule out a collapse of the solution space, and whose key is large enough to exceed that computational magic number, then it isn't mathematically precise to say that a break is "impossible" but from an engineering standpoint it's accurate. And that's the capability we now have. And that's why the game is now over, barring a breakthrough in theory.

Yes, such a breakthrough might happen. But all engineering is about computing probabilities and taking chances. This one is a very good risk.

posted by Steven Den Beste at 9:51 PM on February 20, 2001

But symmetric ciphers don't have that issue; they're not tied to specific mathematical phenomena. Their design is limited only by the abilities of computers. That's a much broader algorithmic space, and within that space it is indeed possible to create ciphers which are not public key but which are exceedingly unlikely ever to be collapsed. Of course, nothing is certain, but the approach used by DES appears to be secure. DES itself is limited to 56 bits by design, but its approach is amenable to being made arbitrarily long and arbitrarily complex. There are ways of analyzing such a system to determine whether it implements real complexity or illusory complexity.

Focusing on public key ciphers is a mistake. Design of a public key cipher has severe constraints imposed on it that are not present in a symmetric cipher.

Symmetric ciphers have enormous advantages over public-key ciphers and one gaping disadvantage; public-key ciphers have enormous problems and only one advantage. Public-key ciphers are extremely expensive computationally per plaintext bit transmitted, but they don't require a trusted courier to carry the key. Symmetric ciphers are at least as secure once both sides have the same key, but they need a trusted courier to carry the key from one side to the other.

The solution was to use the strengths of both. So a public-key cipher is used to transmit a small number of bits, that being what's known as a "session key". The public key cipher acts as a trusted courier. Then the system switches to the much more computationally efficient symmetric cipher using the key delivered by the public key cipher. (I'm not trying to be pedantic here; I assume you know most of this but I'm trying to lay complete groundwork for my conclusion.)
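That hybrid shape -- public-key cipher as the courier, symmetric cipher for the bulk traffic -- can be sketched in a few lines. Everything below is deliberately miniature and insecure (textbook RSA with tiny primes, an XOR keystream standing in for the symmetric cipher); it illustrates the protocol's structure, not a real implementation.

```python
import hashlib

# --- Toy "public key" half: textbook RSA with absurdly small primes. ---
# Real moduli are hundreds of digits; these fit on one line for illustration.
p, q = 61, 53
n = p * q                      # public modulus
e = 17                         # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)            # private exponent (modular inverse; Python 3.8+)

# --- Toy "symmetric" half: a keystream derived from the session key. ---
def symmetric(data: bytes, session_key: int) -> bytes:
    stream = hashlib.sha256(session_key.to_bytes(4, "big")).digest()
    stream = (stream * (len(data) // len(stream) + 1))[:len(data)]
    return bytes(a ^ b for a, b in zip(data, stream))  # XOR is its own inverse

# Sender: pick a session key and ship it under the public key...
session_key = 1234                         # must be < n for textbook RSA
wrapped = pow(session_key, e, n)           # the "trusted courier" step
ciphertext = symmetric(b"attack at dawn", session_key)

# Receiver: unwrap the session key, then decrypt the bulk traffic cheaply.
recovered = pow(wrapped, d, n)
plaintext = symmetric(ciphertext, recovered)
print(plaintext)  # b'attack at dawn'
```

The expensive modular exponentiation touches only the handful of session-key bits; every plaintext bit after that goes through the cheap symmetric half, which is the whole point of the arrangement.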

Right now, the public-key cipher used is the original one designed by Rivest, Shamir, and Adleman (RSA), which uses primes and composites. Mathematicians have been trying to find efficient ways (in the sense that a computer scientist uses the term) to factor large composite numbers for nigh unto 2300 years and no-one has ever done so. Anything might happen in future, but that one looks mighty secure now. And symmetric ciphers are well understood, and no-one expects any kind of breakthrough to collapse the solution space on the ones which are well designed.

Now the thing about this is that nothing really is certain. A dramatic improvement in the efficiency of parallel computing could bring the whole edifice down if the key being used is too short. (An interesting approach being tried is to actually use genetic engineering to program bacteria to try solutions. They're not very fast but you can apply trillions of them to a problem. It's an intriguing idea.)

In computer science we distinguish between truly impossible problems, ones which can be proved to require infinite time (usually by showing isomorphism to the "halting problem"), and infeasible problems, ones where the solution takes finite time but still takes too long. A useful number (if it has a name I don't know what it is) is one which assumes that every subatomic particle in the universe is a computer, and that each one tests a solution per "chronon", the smallest measurable time in the universe (it happens to be the time taken for a photon to cross the diameter of a proton). Then you estimate the life of the universe and multiply it all together and you end up with a number of computing solutions which is the absolute theoretical maximum which can be accomplished in this universe. If the solution space is still large, then while a solution is theoretically possible it's not going to happen in this universe by brute force without luck. I don't remember the exact value of that number, but it's on the order of 10^85 or something like that.
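The back-of-the-envelope bound is easy to reproduce. All the constants below are loose, order-of-magnitude assumptions; with these particular figures the bound lands nearer 10^120 than 10^85, but either way the conclusion is the same -- a keyspace comfortably beyond it cannot be searched in this universe.

```python
PARTICLES       = 1e80      # rough count of particles in the observable universe
PROTON_DIAMETER = 1.7e-15   # metres, approximate
SPEED_OF_LIGHT  = 3.0e8     # metres per second
AGE_OF_UNIVERSE = 4.3e17    # seconds, roughly 13.7 billion years

chronon = PROTON_DIAMETER / SPEED_OF_LIGHT        # time for light to cross a proton
trials_per_particle = AGE_OF_UNIVERSE / chronon   # solutions one particle could test
total_trials = PARTICLES * trials_per_particle    # ceiling for the whole universe

print(f"{total_trials:.1e}")  # on the order of 1e120 with these constants
```

A 256-bit keyspace holds about 1.2e77 times more keys than that ceiling, which is what "rule out luck" means in practice.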

This is an area of active research in computer science, in a field known as "theory of computing". The idea is to see how a given algorithm scales as the size of the data increases. The best algorithms (e.g. hash tables) require constant time irrespective of the number of items being manipulated. But those are rare. Nearly every algorithm known scales badly, and it's possible to establish upper limits on the data size any algorithm can handle by calculating the number of computations needed and comparing it to that number I just cited.

Brute force solution of encryption is such a problem, and it's well understood. If no way can be found to collapse the solution space, then you just have to plod through and try every possible solution. On average you have to try half of them. It's inelegant and scales exceedingly badly since each 1-bit increase in the key doubles the time needed for the solution. Exponential solutions are horrible.
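The "try half of them on average" behavior shows up even in a toy keyspace. This sketch searches a 16-bit space for a hidden key via an oracle; over many runs the average trial count sits near half the keyspace, and each extra key bit would double it.

```python
import random

def brute_force(oracle, bits):
    """Try keys in order until the oracle accepts one; return the trial count."""
    for trials, key in enumerate(range(2 ** bits), start=1):
        if oracle(key):
            return trials

BITS = 16                     # a toy keyspace: 65,536 keys
trials = []
for _ in range(100):
    secret = random.randrange(2 ** BITS)
    trials.append(brute_force(lambda k: k == secret, BITS))

# On average the search stops near half the keyspace, 2**(BITS-1) = 32768,
# and every additional key bit doubles that figure.
print(sum(trials) / len(trials))
```

At 16 bits this runs in a moment; at 128 bits the same loop outlives the universe, which is the entire argument of the surrounding paragraphs.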

And it's not too hard to make the key large enough to make the solution space vastly exceed that critical number. If the solution space is 20 orders of magnitude or more larger, then you can rule out luck. This is not too hard when using symmetric ciphers, because among their many virtues is that the normal enciphering and deciphering processes are really very efficient even when long keys are used. While brute force cracks scale exponentially, enciphering and deciphering with the key scale as (I believe) the square which is much more controllable. Going from 128 bits to 256 bits quadruples the encryption cost but increases the crack cost by 2^128 (which is a REALLY BIG NUMBER).
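The defender/attacker asymmetry in that paragraph works out numerically like this (taking the quadratic-cost estimate in the text as the assumption for legitimate use):

```python
# Defender's work: assume enciphering cost grows as the square of key length.
cost_128 = 128 ** 2
cost_256 = 256 ** 2
print(cost_256 / cost_128)        # 4.0 -- doubling the key quadruples the work

# Attacker's work: brute force grows as 2**bits, so the ratio is 2**128.
crack_ratio = 2 ** 256 // 2 ** 128
print(crack_ratio == 2 ** 128)    # True
print(len(str(crack_ratio)))      # 39 -- the attacker's cost grows by a 39-digit factor
```

Fourfold cost for the defender versus a 39-digit multiplier for the attacker: that lopsidedness is why long symmetric keys are cheap insurance.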

So if you have a cipher whose characteristics are understood well enough to rule out a collapse of the solution space, and whose key is large enough to exceed that computational magic number, then it isn't mathematically precise to say that a break is "impossible" but from an engineering standpoint it's accurate. And that's the capability we now have. And that's why the game is now over, barring a breakthrough in theory.

Yes, such a breakthrough might happen. But all engineering is about computing probabilities and taking chances. This one is a very good risk.

posted by Steven Den Beste at 9:51 PM on February 20, 2001

You posted your comments about "NP Complete" while I was composing my message; that's why the overlap in concepts. Had I read your message before writing mine, it would have come out different.

posted by Steven Den Beste at 10:00 PM on February 20, 2001

Not proven. The only thing that can be demonstrated is that no one has admitted to breaking it.

This is not a trivial problem -- The very people most likely {to break/ to have broken/ to break in the future} any given algorithm have a vested interest in not telling you so.

Which, I thought, was kinda-sorta what the scepticism you started this thread with was about, Steven.

But for me, this is crypto's biggest problem -- there's no way to show that it works.

posted by aurelian at 12:39 AM on February 21, 2001

Yes, such a breakthrough might happen. But all engineering is about computing probabilities and taking chances. This one is a very good risk.

I think this is a philosophical difference between Steven, trained as an engineer, and me, trained as a (rather sorry example of a) mathematician. Do I think that there's a magic back-door to DES? No. Do I think that it's *conceivable* that some techniques of cryptanalysis that haven't been discovered yet by academic or industry researchers might make it more vulnerable? Yes.

When differential cryptanalysis, which is a plaintext attack (that is, it's useful for recovering bits of a key when you have the original text and the encrypted version by making specific changes in the original and seeing how it changes the encrypted version; it's not useful for getting the original text *out* of the encrypted version), was invented, a number of schemes were shown to be vulnerable to it, but DES is fairly resistant. Why? IBM and the NSA knew about differential cryptanalysis twenty years before it was rediscovered by academics. It seems at least possible to me, if not likely, that a cryptanalysis technique that hasn't been independently discovered in industry or the academy will work against DES or whatever was chosen as the AES, possibly even as a ciphertext attack. As Aurelian suggests, if some of the mathematicians at the NSA knew of such a thing, they'd have no reason to tell the world.
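The core bookkeeping of differential cryptanalysis -- tabulating how chosen input differences map to output differences -- can be shown on a toy 4-bit S-box. The S-box below is an arbitrary permutation invented for this sketch; a well-designed one keeps the resulting table small and flat, which is exactly the resistance being described above.

```python
# A made-up 4-bit S-box: any permutation of 0..15 works for the demo.
SBOX = [5, 14, 2, 9, 11, 4, 15, 0, 13, 6, 1, 12, 7, 10, 8, 3]

# Difference distribution table: ddt[dx][dy] counts how many inputs x
# with input difference dx produce output difference dy.
ddt = [[0] * 16 for _ in range(16)]
for x in range(16):
    for dx in range(16):
        dy = SBOX[x] ^ SBOX[x ^ dx]
        ddt[dx][dy] += 1

# dx = 0 trivially gives dy = 0 for all 16 inputs; the interesting figure
# is the worst nonzero entry -- the attacker's single best differential.
worst = max(ddt[dx][dy] for dx in range(1, 16) for dy in range(16))
print(worst)  # smaller and flatter means more resistance to the attack
```

An attacker exploits the biggest table entry to learn key bits faster than brute force; designers (including, evidently, IBM and the NSA in the 1970s) pick S-boxes that deny them a big entry.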

Steven, you're right that for all real-world (engineering) purposes modern symmetric ciphers are totally secure. But that security isn't a (mathematical) certainty.

posted by snarkout at 8:16 AM on February 21, 2001

As you say, I'm not a mathematician. Engineers don't look for perfect solutions because they cost too much. I have to not only solve a problem, but solve it efficiently and on time.

It may or may not be the case that IBM knew about differential cryptanalysis. My understanding was that DES grew out of research into forward error correction.

For a forward error correction algorithm to work well, you want each raw bit to affect lots of bits on the backside of the encoder, because that way if one or two of the bits being transmitted/stored gets trashed you can still recover the original bits. (The output is slightly larger than the input.) Implicitly that's the same as differential cryptanalysis, but they got there by an entirely different path, attempting to perform reliable communication over an unreliable data path, or perform reliable storage on an unreliable storage medium.

A nearly perfect FEC algorithm makes it so that each raw bit affects all the output bits equally. This is, of course, unattainable but they can get damned close. DES is really a variation on an FEC algorithm. FEC implicitly is encryption; the only difference is that there's no key because the purpose of FEC isn't obfuscation.

But if DES accomplishes the goal of being an ideal FEC, then it also defeats differential cryptanalysis because any single-bit change in the plaintext results in lots of the output bits flipping.
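The avalanche property being described -- one flipped plaintext bit scrambling a large fraction of the output bits -- is easy to observe directly. DES itself isn't in Python's standard library, so this sketch uses SHA-256 as a stand-in primitive with the same design goal:

```python
import hashlib

def bits_flipped(a: bytes, b: bytes) -> int:
    """Count the differing bits between two equal-length byte strings."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

msg = b"attack at dawn"
# Flip a single bit of the input.
tweaked = bytes([msg[0] ^ 0x01]) + msg[1:]

d1 = hashlib.sha256(msg).digest()
d2 = hashlib.sha256(tweaked).digest()

# Of the 256 output bits, roughly half should flip if the avalanche is good.
print(bits_flipped(d1, d2))
```

A result hovering around 128 of 256 bits is precisely the "each raw bit affects all the output bits equally" ideal from the FEC discussion, and it's what starves differential cryptanalysis of usable patterns.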

It has to be understood that cryptography right now is essentially an engineering problem, not a mathematical problem. We're trying to find practical solutions, not ideal ones -- and that's the way engineers think. It's extremely rare for engineers to achieve mathematical certainty that their solutions are correct -- but that doesn't mean that they're useless.

posted by Steven Den Beste at 11:03 AM on February 21, 2001

Agreed. As I've been known to say, Zeno be damned, sooner or later you gotta get laid. :)

The problem is, crypto is not unlike a missile defense shield -- you can't really prove it works until someone attacks it in earnest, and by then it's too late to find out you were wrong.

This is why I treat everything on magnetic media as being public record. It's not possible to surprise me. {shrug} (For similar reasons, it's not possible to blackmail me, either... I'm pretty much a "Publish and be damned!" kind of guy.)

And before someone asks for my Social Security number, or my bank account number, or whatnot -- just because it's out there, a needle in a stack of needles, doesn't obligate me to help you. :)

Interesting sidelight: I've heard engineers utter both of the following: "Good enough is the enemy of the perfect." "The perfect is the enemy of good enough."

In both, "is" acts as the syntactic equivalent of an equals sign... but clearly the two readings have different implications.

posted by aurelian at 7:35 PM on February 22, 2001

Their problem was that starting around 1980 they were faced with the completely new potential of truly unbreakable encryption, readily accessible to nearly anyone owning a computer costing on the order of $1500. And that spelled doom for their way of doing business. They tried (and failed) to suppress publication of the original RSA paper on asymmetric ciphers, an important piece of the modern solution. While the RSA cipher is too computationally expensive to be practical for most traffic, it provides a way to exchange session keys without using a physical courier, thus permitting very rapid key changes in ciphers that would be vulnerable if the same key were used for too long.

All one has to do is to read a good history of cryptology to realize that the 1970's and 1980's were a complete revolution. 400 years of cryptographic experience went down the tubes in fifteen years.

It's been apparent to everyone in the field over the last twenty years that NSA viewed the progress being made with sheer horror. Their attempts to portray existing (vulnerable) encryption standards as secure were embarrassed by high-profile private demonstrations. Their attempts to derail it approached the laughable and set back US competitiveness on world markets without having any noticeable effect on the spread of strong cryptography.

So now they seem to be throwing in the towel, and I think it's genuine. I don't believe anything they say without confirmation, but the evidence is there to confirm this.

posted by Steven Den Beste at 3:42 PM on February 19, 2001