zmap / zlint

X.509 Certificate Linter focused on Web PKI standards and requirements.
https://zmap.io
Apache License 2.0

Lint for RSA close prime Fermat factorization susceptibility #674

Closed christopher-henderson closed 2 years ago

christopher-henderson commented 2 years ago

@aarongable @vanbroup

Adds a lint that checks for RSA key pairs that are susceptible to Fermat's factorization method.

https://fermatattack.secvuln.info/
https://nvd.nist.gov/vuln/detail/CVE-2022-26320
https://en.wikipedia.org/wiki/Fermat%27s_factorization_method

What I find most tricky about this change is that we still seem a bit off from being able to merge support for configuring individual lints (https://github.com/zmap/zlint/pull/648). This makes the value for the number of rounds a hardcoded shot in the dark. My first stab is 1000, but I'm checking to see what the perf impact is on the test corpus. On the one hand, we don't want a hardcoded value exploding our scan time. On the other hand, we don't want a hardcoded value that is too low and utterly useless in detecting dangerous primes.
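
For reference, the heart of the check is a bounded run of Fermat's method against the certificate's RSA modulus. A minimal sketch of the technique (function name mine, not the exact code in this PR), with the round count as the knob in question:

```go
package lints

import "math/big"

// fermatFactorizable reports whether the modulus n can be factored by
// Fermat's method within the given number of rounds.
func fermatFactorizable(n *big.Int, rounds int) bool {
	one := big.NewInt(1)
	// Start at a = ceil(sqrt(n)).
	a := new(big.Int).Sqrt(n)
	if new(big.Int).Mul(a, a).Cmp(n) < 0 {
		a.Add(a, one)
	}
	b2 := new(big.Int)
	b := new(big.Int)
	for i := 0; i < rounds; i++ {
		// If a^2 - n is a perfect square b^2, then n = (a-b)(a+b).
		b2.Mul(a, a)
		b2.Sub(b2, n)
		b.Sqrt(b2)
		if new(big.Int).Mul(b, b).Cmp(b2) == 0 {
			return true // close primes: n factors after few rounds
		}
		a.Add(a, one)
	}
	return false
}
```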

EDIT: Indeed, this lint alone has exploded the runtime of the test corpus on my local by 16x :frowning: . CICD also appears to be choking on this setting.

Addresses #673

vanbroup commented 2 years ago

It should be fine to use 100 rounds, quoted from https://fermatattack.secvuln.info/:

> With common RSA key sizes (2048 bit) in our tests the Fermat algorithm with 100 rounds reliably factors numbers where p and q differ up to 2^517.
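
For intuition, a rough back-of-the-envelope is consistent with that figure:

```latex
% Fermat's method starts at a = ceil(sqrt(n)) and increments a once per
% round, succeeding when a reaches (p+q)/2. The rounds required are about
\frac{p+q}{2} - \sqrt{n}
  \;=\; \frac{\left(\sqrt{p}-\sqrt{q}\right)^{2}}{2}
  \;\approx\; \frac{(p-q)^{2}}{8\sqrt{n}},
% so 100 rounds cover gaps up to roughly
|p-q| \;\approx\; \sqrt{800}\, n^{1/4} \;\approx\; 2^{4.8} \cdot 2^{512} \;\approx\; 2^{517}
% for a 2048-bit modulus.
```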

This is the same value used by crt.sh, and I thought it was the default for Boulder.

zakird commented 2 years ago

I'm nervous about including this as a default-on given the performance implications (or without a major version release so that folks have the chance to disable it). Even with 100 rounds, it sounds like it will have a significant performance hit. I think we should consider how we note or segment these. Maybe there's a new category?

christopher-henderson commented 2 years ago

Well, bumping the rounds down to 100 at least makes this workable.

On my local without this lint it takes 85s. With the lint at 100 rounds it takes 163s.

We see an even more drastic difference in CICD, seeing as the test was timing out after 20 minutes but now appears to complete in ~6-7 minutes (which is a doubling of the 3-4 minutes it usually takes).

aarongable commented 2 years ago

Wow yeah, I see the same test-time explosion that you see. I had no idea the zlint integration test corpus was so large.

I think this check, or something very similar, is still worth adding. And I think the default should be 100 rounds, at least to match crt.sh and Boulder.

To me the only question is whether it is worth landing now, or whether it should wait until after configurable lints become a thing, and then the integration tests could configure this one to (e.g.) 10 rounds to save on test execution time.

zakird commented 2 years ago

I suspect it's actually a fair amount worse than that, because that cost includes all the parsing, which is non-negligible compared to the lints themselves. A long while back, someone ran https://github.com/zmap/zlint/blob/master/v3/benchmarks_test.go against all our lints. It may be worth running again. If this more than doubles the cost of running lints, that's going to be a tremendous burden for folks like Censys.

christopher-henderson commented 2 years ago

Howdy folks @aarongable @zakird

With our merge of the feature for configurable lints, we can now make this lint, in particular, configurable.
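
As a sketch of the pattern (type, field, and section names here are illustrative, not necessarily what landed):

```go
package lints

// The lint keeps its tunable setting in an exported field and hands
// itself to the config framework for deserialization from its own
// section of the config file.
type fermatFactorization struct {
	Rounds int // number of Fermat rounds to attempt; defaults to 100
}

// Configure returns the value into which this lint's config section is
// deserialized.
func (l *fermatFactorization) Configure() interface{} {
	return l
}

// A consumer that finds the default too costly could then lower it in
// their config file along these lines:
//
//	[e_rsa_fermat_factorization]
//	Rounds = 10
```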

I set the default to 100 as per Aaron's suggestion and added a benchmark test around that setting. It is worth noting that, at least in GitHub Actions, this does expand the time of running the test corpus by approximately 34% (last merge to master at 5m35s vs this PR at 7m29s).
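
The benchmark is pinned to that default; a sketch of its shape, reusing the fermatFactorizable sketch above (a healthy key exercises the worst case, since all rounds are exhausted):

```go
package lints

import (
	"crypto/rand"
	"crypto/rsa"
	"testing"
)

func BenchmarkFermatFactorization100Rounds(b *testing.B) {
	// A freshly generated key has well-separated primes, so the check
	// burns through all 100 rounds before giving up.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		b.Fatal(err)
	}
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		fermatFactorizable(key.N, 100)
	}
}
```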

@zakird 34% is not much of a burden on this repo, as it is relatively small in absolute terms. However, as you pointed out, there are those out there (namely Censys) that do indeed consume certs in mass quantities, for whom a 34% slowdown is not acceptable.

Fortunately, this lint is indeed configurable by any party that deems the default setting to be unacceptable. The question is - whose desired default should win out?

The way I see it, there are two options:

  1. The default is something like 100, and mass consumers such as Censys can configure it all the way down to 0 if they like.
     - Pro: It casts a wider net and gives the community at large a better shot at catching these bad key pairs.
     - Con: A potentially burdensome computation cost for a scenario that seems unlikely.
  2. The default is 0-10, and consumers that wish to take a deeper look (LE/Boulder) can set it to any desired value.
     - Pro: Still catches some obviously bad pairs while keeping mass lints fast.
     - Con: Intentionally reduces the odds of finding a bad key pair by an order of magnitude.

aarongable commented 2 years ago

The current CA/BF weak key ballot proposes enshrining the "100 rounds" language directly in the Baseline Requirements, so I'd lean toward having that be the default and letting mass certificate consumers adjust it downwards if they desire.

zakird commented 2 years ago

If CABF formalizes 100 rounds, I agree then that makes this a pretty easy decision.