The impact isn't in discovering a vuln in the software. The impact (as stated in the abstract) is a way to further fingerprint and de-anonymize Tor users and other users who are trying to maintain privacy.
This kind of inference can be interesting, because occasionally people try to convert a private key from one type to another; for example, there are scripts to convert a private key between PEM, OpenSSH, and GPG formats so that you can re-use it for different applications. (One application for this is to use GPG signatures on public keys to authenticate users and/or servers in SSH.)
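A conversion like that might look something like this with standard tooling (filenames are illustrative, and real workflows vary):

```shell
# Generate an RSA private key in traditional PEM format
# (illustrative; any existing PEM key works as input)
openssl genrsa -out key.pem 2048

# Re-encode the same private key as PKCS#8 PEM
openssl pkcs8 -topk8 -nocrypt -in key.pem -out key.pk8.pem

# Derive the corresponding public key in OpenSSH format
ssh-keygen -y -f key.pem > key.pub
```

Note that these commands re-encode the same underlying key material, which is exactly why a bias baked in at generation time follows the key across formats.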
So this research might allow someone who didn't already know to recognize more about where a key came from.
Verifying that TLS keys are mostly generated by OpenSSL and PGP keys by PGP/GPG is a sanity check that the method actually works.
If it turns out that a particular library has a vulnerability, one can quickly search for other vulnerable keys in large datasets like IPv4-wide TLS scans.
It's definitely not proven; we don't know for sure that integer factorization is NP-hard. If I remember correctly, it's generally suspected that factorization is not NP-complete (and, since it is in NP, thus not NP-hard either).
Wikipedia has it first on their "list of problems that might be NP-intermediate", i.e. problems that are in NP but not in P or NP-hard. [0]
Yes, this is an incorrect statement, sorry. It sneaked in after edits by a proofreading service, but that is no excuse on our part. It should just say NP. (I'm one of the authors of the paper.)
NP hard is defined with reference to a non-deterministic Turing machine which runs in polynomial time.
A quantum computer is not a Turing machine. Yes, it can solve things in polynomial time that a Turing machine cannot. But that doesn't mean a given problem is or isn't NP-hard.
That's not what I'm saying. I'm saying that if factoring were NP hard, by extension a quantum computer could solve NP complete problems in polynomial time (since every problem in NP is reducible to every NP hard problem, by definition [0]).
That's all I'm saying... they're implying that you could reduce, say, 3SAT to factoring, which would be very very surprising.
("BQP (bounded error quantum polynomial time) is the class of decision problems solvable by a quantum computer in polynomial time, with an error probability of at most 1/3 for all instances.")
Right, it’s not known. Although technically we don’t know for sure if classical computers can solve NP-hard problems in polynomial time either (that’s the P vs. NP problem).
Not to mention that being NP-hard wouldn't be a strong enough property to ensure the average case is hard, so I don't think the "considered" saves this from being incorrect...
You can also try an online tool that classifies keys based on the results from the paper: http://crcs.cz/rsapp/
Just paste in an encoded certificate or the URL of an HTTPS server, and the tool will tell you which library likely generated the key(s).
If you provide 5 keys all generated by the same library, the correct library should be among the top three most probable sources with high (>95%) probability.
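The reason more keys from the same source sharpen the answer is just accumulation of independent evidence. A toy sketch of the idea (the probability table below is made up for illustration; the paper's actual classifier builds its likelihoods from bit-level biases in the primes and moduli):

```python
from collections import defaultdict

def classify(keys, likelihoods):
    """Combine per-key likelihoods P(key features | library), assumed
    independent, into a normalized posterior over candidate libraries."""
    scores = defaultdict(lambda: 1.0)
    for key in keys:
        for lib, p in likelihoods[key].items():
            scores[lib] *= p
    total = sum(scores.values())
    return {lib: s / total for lib, s in scores.items()}

# Two keys whose (hypothetical) feature patterns each weakly favor OpenSSL:
likelihoods = {
    "key1": {"OpenSSL": 0.5, "GnuPG": 0.3, "mbedTLS": 0.2},
    "key2": {"OpenSSL": 0.6, "GnuPG": 0.2, "mbedTLS": 0.2},
}
posterior = classify(["key1", "key2"], likelihoods)
# Multiplying the evidence sharpens a weak 0.5/0.6 lead into a 0.75 posterior.
```

With five keys instead of two, the leading candidate pulls even further ahead, which is where the >95% top-three figure comes from.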
Yes, exactly. To generate an RSA key you need a complex mechanism wherein many vulnerabilities can hide. To generate an ECC key all you need is a source of entropy. To be sure vulnerabilities can hide in entropy sources as well, but it's a lot easier to swap out an entropy source than it is to swap out a random prime generator.
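The asymmetry is easy to see in code. A simplified sketch (real libraries add many more checks, and the bit sizes here are toy values): an ECC private key is a single random scalar, while RSA keygen needs a probable-prime search whose strategy differs per library.

```python
import secrets

# Group order of Curve25519's base point, used here as an example curve order.
N = 2**252 + 27742317777372353535851937790883648493

def ecc_private_key():
    # An ECC private key is essentially just entropy: a random scalar in [1, N-1].
    return 1 + secrets.randbelow(N - 1)

def is_probable_prime(n, rounds=40):
    # Miller-Rabin: the kind of machinery RSA keygen needs and ECC keygen doesn't.
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d, r = d // 2, r + 1
    for _ in range(rounds):
        a = 2 + secrets.randbelow(n - 3)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def rsa_prime(bits=512):
    # RSA keygen must search for random primes. Each library's search strategy
    # (top-bit forcing, rejection rules, candidate stepping, ...) is exactly
    # where the biases measured by the paper come from.
    while True:
        cand = secrets.randbits(bits) | (1 << (bits - 1)) | 1
        if is_probable_prime(cand):
            return cand
```

Swapping the entropy source changes one line of the ECC path; the RSA path is a whole algorithm with design choices that leave fingerprints.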
Could you be more specific about the "vulnerable numbers" we're talking about? The curve base point is a parameter for ECDSA as well.
I agree that any protocol that required you to generate curves or even base points on the fly would have similar concerns, but we generally don't use those kinds of protocols.
My err - rubbish wording as I was thinking primes and talking curves.
Still, while I'm punching a wee bit above my weight here - I should read FIPS 186-4 again - there are some tests to verify the strength of the prime. Is that enough for a similar classification? I don't know...
The prime field is part of the definition of the curve. Everyone interoperating on that curve shares it. It may have been generated weirdly --- the Curve25519 prime sure was! --- but all that tells you is what curve they're using.
It's true that the particular curves a system supports could be a kind of fingerprint, but that's a banal observation, equally true of the ciphers they support, or the compression algorithms.
Given the thousands of employees the NSA has working on all aspects of cryptography, there must be countless examples of this type of investigation. It's integral to traffic analysis and to fingerprinting of cryptosystems.
At least I hope that the NSA does lots of stuff like this. Because if they don't, what does that leave them doing? If the NSA is simply evil and/or incompetent, that's not enough ROI for the US taxpayers.
Unfortunately, NSA work probably remains highly classified for so long that an ex employee would never be able to write about it in technical detail. But I could be wrong? Are there any Inside Baseball books out there revealing the inner workings of NSA spooks?