[math-fun] Why cryptosystems should be tiny
From: Warren D Smith <warren.wds@gmail.com>
To: math-fun <math-fun@mailman.xmission.com>
Sent: Monday, August 19, 2013 1:47 PM
Subject: [math-fun] Why cryptosystems should be tiny
The difficulty of breaking a typical secret-key cipher grows exponentially with key length N. For known public-key systems, the growth is instead exponential in N^(1/2) or N^(1/3), which is less useful, but still adequate. But meanwhile, there is another threat: your crypto-algorithm may not actually be doing what you think, because somebody in the KGB is circulating a spoof crypto-algorithm containing built-in "back doors", side effects, or whatever.
The difficulty of verifying that a program of length L does what it is supposed to do (and has no unwanted side effects) grows FAR FASTER than exponentially with L, in fact faster than any computable function, since in full generality such verification is undecidable.
Therefore the program length L is itself a security bottleneck. Typical algorithms papers in computer science purporting to prove correctness do so for programs at most 1 or 2 pages long. Reported bug rates in commercial programs range from 0 bugs in 500,000 lines of code (space shuttle) to 1 bug per 20 lines. Microsoft claims to have achieved 1 bug per 2000 lines of released code, which allegedly is comparable to the bug rates found in a survey of open-source projects, and which they claim is far better than the industry average of 1 bug per 20-100 lines. Further, rates of "exploitable" bugs per line of code are believed to exceed about 1 per 10000 lines, based on exploit counts and line counts for various systems. All this is for unintentional bugs, not our threat of intentionally introduced and intentionally hidden exploitable bugs, and all those rates were measured in the absence of intentional billion-dollar efforts to spoof and defeat the software. Published crypto-algorithms have historically often been broken.
All this suggests that:
1. it is completely unacceptable for a cryptosystem to exceed 10000 lines;
2. it is best if it fits in 1-2 pages (<200 lines, 10000 characters).
Tininess matters, even though the vast majority of the crypto community has regarded it as something that does not matter.
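For concreteness, here is a minimal sketch of how little code a complete, if primitive, cryptosystem needs: a one-time-pad encryptor/decryptor in roughly a dozen lines of Python. This is an illustration only; it assumes the key file was filled by a true hardware random source, is at least as long as the message, and is never reused, and the command-line file arguments are made up for the example.

    # One-time-pad sketch: the same program both encrypts and decrypts,
    # because XOR with the pad is its own inverse.
    # Assumes the key file came from a hardware RNG and is used exactly once.
    import sys

    def xor_bytes(data: bytes, key: bytes) -> bytes:
        return bytes(d ^ k for d, k in zip(data, key))

    if __name__ == "__main__":
        msg_file, key_file, out_file = sys.argv[1:4]
        with open(msg_file, "rb") as m, open(key_file, "rb") as k:
            data, key = m.read(), k.read()
        assert len(key) >= len(data), "one-time key must cover the whole message"
        with open(out_file, "wb") as out:
            out.write(xor_bytes(data, key))

Whatever one thinks of the one-time pad's key-management burden, the point is that the entire program fits in far under a page, so a reviewer can read every line in minutes.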
Compare: GNU Privacy Guard is a 3.6 megabyte download even in compressed form. If you attempted to verify their code, it would take you a couple of man-weeks at the very least just to read it. Very few people are going to perform that effort. Joe Schmoe, the cryptosystem user, would be completely at the mercy of any evil programmers trying to fool him.
-- Warren D. Smith http://RangeVoting.org <-- add your endorsement (by clicking "endorse" as 1st step)
These considerations argue in favor of simple cryptographic algorithms whose mathematical basis is understood and whose software implementation is easily coded and clearly verifiable. The two most obvious are RSA and the one-time key, with all needed random numbers generated by a physical hardware device.

Nothing about RSA requires it to be used in public-key mode; one can keep the modulus and encryption exponent secret in addition to the decryption exponent. This should raise the security level from N^(1/3) or N^(1/2) up to almost N. The biggest risk seems to be the assumption that factoring is difficult. New algorithms may be discovered, or may already be known to the NSA and KGB, or practical quantum computers might become a reality.

Here's a suggestion. Use a one-time key, then super-encrypt the one-time key and code-text with RSA, in secret-key mode with separate keys.

-- Gene
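A minimal sketch of that suggestion follows, again as an illustration only: os.urandom stands in for the physical hardware RNG, the helper names are made up, and real use would need genuinely large primes, with each encrypted block's integer value kept smaller than its modulus.

    # Sketch of the suggested scheme: XOR the message with a one-time key,
    # then super-encrypt the key and the XORed text with RSA used in
    # secret-key mode (modulus and both exponents kept secret), one RSA
    # key pair per piece.
    import os

    def make_rsa_key(p, q, e=65537):
        # Return (n, e, d); in secret-key mode ALL THREE are kept secret.
        n, phi = p * q, (p - 1) * (q - 1)
        d = pow(e, -1, phi)               # modular inverse (Python 3.8+)
        return n, e, d

    def encrypt(message: bytes, key_rsa, text_rsa):
        pad = os.urandom(len(message))    # one-time key; hardware RNG in practice
        xored = bytes(m ^ k for m, k in zip(message, pad))
        n1, e1, _ = key_rsa
        n2, e2, _ = text_rsa
        c_pad  = pow(int.from_bytes(pad,   "big"), e1, n1)
        c_text = pow(int.from_bytes(xored, "big"), e2, n2)
        return c_pad, c_text, len(message)

    def decrypt(c_pad, c_text, length, key_rsa, text_rsa):
        n1, _, d1 = key_rsa
        n2, _, d2 = text_rsa
        pad   = pow(c_pad,  d1, n1).to_bytes(length, "big")
        xored = pow(c_text, d2, n2).to_bytes(length, "big")
        return bytes(x ^ k for x, k in zip(xored, pad))

Even with key generation included, the whole thing stays well inside the 1-2 page budget argued for above.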
participants (2): Eugene Salamin, Warren D Smith