Warren wrote:
The time consumption here seems, with fast multiplication, to be about N^2 * log N ops per plaintext bit, which for an N-bit message and key would be N^3 * log N.
What?! Multiplication takes O(N log N log log N), not O(N^2 log N). There's a very big difference. (It only takes one multiplication per loop.)
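For concreteness, here is a minimal sketch of the kind of inner loop under discussion, assuming a Blum-Blum-Shub keystream that performs exactly one modular squaring of the N-bit state per output word; the function name and the 32-bit extraction are illustrative, not taken from the actual implementation.

    def bbs_keystream_words(seed, M, nwords):
        # Sketch only: Blum-Blum-Shub state update, one N-bit modular
        # squaring per 32-bit output word (illustrative, not the real code).
        r = seed % M
        words = []
        for _ in range(nwords):
            r = (r * r) % M               # the single multiplication per loop
            words.append(r & 0xFFFFFFFF)  # keep the low 32 bits of the state
        return words

With fast (Schonhage-Strassen style) multiplication, each squaring costs O(N log N log log N) bit operations, which is where the bound quoted above comes from.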
Also, by the way, I think that to get theoretical nirvana with Blum-Blum-Shub you must only use one bit of r at a time, not 32 bits (right?), costing a factor of 32 more work.
No, that's unnecessary. The last 32 bits will certainly be uniformly distributed (up to a ridiculously microscopic difference) amongst the 2^32 possibilities, and the significant bits of r will sufficiently mess up any possible patterns between consecutive words outputted by Blum-Blum-Shub.
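To make the factor-of-32 trade-off concrete, here is the conservative extraction Warren describes, in the same sketch style as before: emitting only the least significant bit per squaring means 32 times as many squarings for the same amount of keystream. (The classical security proof in fact allows extracting O(log log M) bits per step; one bit is simply the most conservative choice.)

    def bbs_keystream_bits(seed, M, nbits):
        # Sketch only: one parity bit per modular squaring, the extraction
        # rate assumed in the textbook BBS security argument, so 32x more
        # squarings than the 32-bits-per-word shortcut.
        r = seed % M
        bits = []
        for _ in range(nbits):
            r = (r * r) % M
            bits.append(r & 1)   # least significant bit only
        return bits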
Also, with Goucher's system the preprocessing time (to determine p, q from the key) is rather large, maybe of order N^4,
I can get O(N^3 log N log log N) by naive Miller-Rabin on every value in a range of length O(N), which will on average contain a prime by the prime number theorem. The constant factor can be heavily optimised by using trial division.
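As a sketch of the prime search being costed here, assuming the preprocessing scans upward from a key-derived N-bit starting value (the helper names below are mine): trial division by small primes rejects most candidates cheaply, and Miller-Rabin (one ~N-bit modular exponentiation per round, i.e. O(N^2 log N log log N) bit operations per candidate with fast multiplication) handles the survivors. By the prime number theorem the expected scan length is O(N) candidates, which gives the O(N^3 log N log log N) total.

    import random

    SMALL_PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]

    def miller_rabin(n, rounds=40):
        # Standard probabilistic Miller-Rabin test for odd n > 3.
        d, s = n - 1, 0
        while d % 2 == 0:
            d //= 2
            s += 1
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False
        return True

    def next_probable_prime(start):
        # Scan upward from a large starting value (assumed well above the
        # small primes).  Trial division filters out most composites before
        # the expensive Miller-Rabin test is run.
        n = start | 1                     # make the candidate odd
        while True:
            if all(n % p for p in SMALL_PRIMES) and miller_rabin(n):
                return n
            n += 2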
(And slower if you use slow multiplication.) The security level is around N^(1/3), considering the NFS integer factoring algorithm.
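For reference, the N^(1/3) figure comes from the heuristic running time of the general number field sieve on an N-bit modulus n (so ln n ≈ N ln 2), usually stated as

    L_n\left[\tfrac{1}{3},\ \sqrt[3]{64/9}\right]
      = \exp\left(\left(\sqrt[3]{64/9} + o(1)\right)(\ln n)^{1/3}(\ln\ln n)^{2/3}\right),

so the security level in bits grows roughly like N^(1/3) (log N)^(2/3), i.e. about N^(1/3) up to logarithmic factors.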
Are you saying that if I gave you a black box oracle capable of constant-time integer factorisation, you could break my cryptosystem? I highly doubt it, since there's no obvious way of determining M from the output of the BBS (which you can in turn only compute if you know both a sequence of ciphertext and corresponding plaintext).
So... asymptotics-wise, this sucks; it is even worse than everything I said before!
Well, you included an extra factor of N in the time it takes to multiply two numbers...
Elegance-wise, it is very neatly defined.
Thank you.

Sincerely,
Adam P. Goucher
http://cp4space.wordpress.com