We use zxcvbn to provide entropy calculations that take into account known attack patterns on passwords.
It appears you did not read what I wrote. zxcvbn provides an estimate of guessability (in terms of equivalent entropy). Entropy is something different.
The exact Shannon entropy of a sequence cannot be calculated with an open alphabet. Any "exact" calculation would be meaningless.
When generating a password, the exact Shannon entropy is simple to calculate; trivial for the computer, burdensome for a human.
I reiterate: this is specifically for passwords generated by KeepassXC. Since KeepassXC generated the password randomly from a known set of possible passwords, it can very easily show the entropy of the generated password.
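For illustration, a minimal sketch of that calculation in Python (the alphabet sizes below are illustrative examples, not KeePassXC's actual pick groups):

```python
import math

# Entropy of a password drawn uniformly at random: each of the `length`
# characters contributes log2(alphabet_size) bits.
def generated_password_entropy(length: int, alphabet_size: int) -> float:
    return length * math.log2(alphabet_size)

print(generated_password_entropy(20, 62))  # 20 alphanumeric chars -> ~119.08 bits
print(generated_password_entropy(4, 10))   # 4-digit PIN -> ~13.29 bits
```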
It is not unreasonable to also show the guessability of the generated password (as noted in the original post), but the entropy of the generated password is a much more useful value.
It absolutely is not a more useful value. The Shannon entropy is the upper bound, the maximum possible entropy. You want to know the lower bound, which is ostensibly what zxcvbn provides. Zxcvbn takes known patterns and other easily/frequently guessed scenarios into account by reducing the entropy value. This is because when you have those in your password, they are effectively worth ZERO entropy, since they are not random to the guesser. Which pick groups you use to generate the password is largely irrelevant; the end result is what is measured, and that is what matters.
zxcvbn is designed to estimate the guessability of user-generated passwords against attackers who target user-generated passwords. At some point, say around 56 bits or so (see section 2.1 for justification), guessability is out of reasonable reach for an attacker. I agree that for something as simple as a 4-digit PIN, guessability plays an important role, even when randomly generated (for user-chosen PINs, a blocklist of 10% of the space may best balance usability and security). Kerckhoffs's principle (as phrased by Shannon) states "the enemy knows the system". Thus, an attacker may be expected to know that a password was generated by a random process.
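As a rough illustration of why ~56 bits is out of reasonable reach (the guess rate below is my own assumed figure for an offline attack against a slow, memory-hard KDF, not a number from this thread):

```python
GUESSES_PER_SECOND = 1e4        # assumed rate against a memory-hard KDF
SECONDS_PER_YEAR = 365.25 * 86400

expected_guesses = 2 ** 56 / 2  # on average, half the space must be searched
years = expected_guesses / GUESSES_PER_SECOND / SECONDS_PER_YEAR
print(f"~{years:,.0f} years")   # ~114,000 years on average
```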
Additionally, both zxcvbn - which mimics attackers - and attackers themselves have little to no data regarding user-generated high-entropy passwords; it is possible that they do not exist in any meaningful form. I note that John the Ripper, once it runs out of dictionary words and has exhausted all dictionary permutation rules, defaults to random guessing. There is no better known attack strategy for zxcvbn to estimate for higher-entropy randomly generated passwords.
I agree that anything a user types in the box is suspect and should display only guessability per zxcvbn. However, for higher-entropy passwords the system generates, I find it much more useful to know the true entropy value (currently I have to calculate it myself), as at high entropy the best known attack strategy is essentially random guessing.
Yes, there is an infinitesimal chance of ~10^-36 that a 20 character randomly generated alphanumeric password (that's 119 bits) will be "00000000000000000000". For reference, your odds of dying to a meteorite are around 1/1,600,000 = 6.25*10^-7. 10^-36 is quite literally astronomically smaller. With high entropy passwords you are extrapolating by applying zxcvbn to data it was not designed to handle.
Put another way, if you have 6 billion users generating 1000 20-character alphanumeric passwords every second for 1000 years, you will have covered about 2^77 of the 2^119 search space. The odds that any of them generates the password "00000000000000000000" in all those attempts is around 3*10^-13; you are still around 2,000,000 times more likely to die from a meteorite.
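The arithmetic, for anyone who wants to check it:

```python
import math

attempts = 6e9 * 1000 * (1000 * 365.25 * 86400)  # users * rate * seconds in 1000 years
space = 62 ** 20                                  # 20-char alphanumeric search space

print(math.log2(attempts))   # ~77.3  -> about 2^77 attempts
print(math.log2(space))      # ~119.1 -> the 119-bit space
print(attempts / space)      # ~2.7e-13 chance a specific password ever comes up
```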
Sorry, but your conception of what these metrics mean is wrong. The Shannon entropy is absolutely not more useful: not for user-generated passwords and not for auto-generated passwords.
Entropy is not a property of the password itself (or in other words: its entropy, once generated, is exactly 0); it is a measure of the uncertainty involved in guessing the outcome of the generating source. And if you are talking at this level, the source is NOT the character alphabet we pick from, the source is /dev/urandom, and we can only estimate, not calculate, its entropy (it is pseudo-random after all).
But back to passwords: Assuming /dev/urandom is sufficiently random to assume a uniform distribution, the letter sequence "aaaa" is exactly as likely as the sequence "sldp", yet a somewhat intelligent password cracker will take significantly less time to guess the first option, and for that it is completely irrelevant whether the password was generated by a user or by /dev/urandom-based sampling from a character array. Both sequences have 4x6 bits of entropy if you assume they were sampled from printable ASCII characters. But what if they were sampled from the whole Unicode alphabet and we just got (un)lucky? Well, then the entropy would suddenly be 4x20.1 bits, yet the quality of the password hasn't changed and it would still be as easy to crack as "aaaa" sampled from ASCII, which seems paradoxical from a theory standpoint that focuses too much on the generating source.

A guessability measure will take these patterns into account, largely independently of the actual sample alphabet. Naively calculated Shannon entropy will not. The paradox is simply that even though (in theory) discarding passwords with obvious (and otherwise common) patterns in them reduces the entropy of the source, it will (in practice) create safer passwords, because password crackers rely on these patterns, so the average crack time will always be less than n/2 for a password with patterns (this is not to be confused with the urge of some users to eliminate all instances of repeating characters to create a more "randomly looking" password).

In any case, neither measure gives you any indication as to how long it would actually take an attacker to guess the password. Both will only give you a relative estimate of which one of two given passwords is stronger, and for that, an actual guessability metric is much, MUCH more useful.
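To illustrate (the last two lines assume the zxcvbn Python port is installed, e.g. via pip install zxcvbn):

```python
import math

# Naive Shannon entropy depends only on the assumed sampling alphabet,
# not on which password actually came out:
def naive_entropy(password: str, alphabet_size: int) -> float:
    return len(password) * math.log2(alphabet_size)

for pw in ("aaaa", "sldp"):
    print(pw,
          naive_entropy(pw, 95),         # 95 printable ASCII chars -> ~26.3 bits each
          naive_entropy(pw, 1_114_112))  # whole Unicode alphabet   -> ~80.3 bits each

# A guessability estimator, by contrast, rates the two very differently:
from zxcvbn import zxcvbn
print(zxcvbn("aaaa")["guesses"], zxcvbn("sldp")["guesses"])
```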
When talking about very short 4 character passwords, I approve of providing guessability and using blocklists. I am not generating 4 character passwords.
I am generating passwords with around 50 bits of entropy or more. While we could skew the generator to favor less "guessable" 50+ bit passwords, we would instead be giving the attacker an advantage by deviating from true randomness. "The attacker knows the system"; even if it is only that users will occasionally regenerate passwords until they get one with a high guessability score via zxcvbn, the attacker can adjust their attack generation algorithm to prefer those passwords instead. Our best possible defense at high values of entropy is random selection.
At lower values of entropy (such as 4 lowercase characters), we may reasonably be concerned about attacker guessing patterns, at which point we become the attacker who knows their system, and we lose relatively little against an attacker tailored to ours. At higher levels of entropy, our gain is negligible and our loss is not.
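A rough sketch of that bookkeeping (the 10% rejection fraction is illustrative):

```python
import math

# If users regenerate until a filter accepts the result, the effective source
# is uniform over the accepted subset only. Rejecting a fraction p of a
# uniform space costs log2(1 / (1 - p)) bits of entropy.
def entropy_after_filter(bits: float, rejected_fraction: float) -> float:
    return bits + math.log2(1 - rejected_fraction)

print(entropy_after_filter(13.29, 0.10))  # 4-digit PIN: ~13.14 bits, guessability floor raised
print(entropy_after_filter(50.0, 0.10))   # 50-bit password: ~49.85 bits, negligible gain for us
```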
By providing only the guessability estimate and not the entropy measure for high-entropy passwords, you are weakening security for your users.
Thank you for your opinion, but we aren't going to make any changes to the program when it comes to entropy or password generation.
Our best possible defense at high values of entropy is random selection.
Your best possible defence is password length. Generate a sufficiently long password and ignore the strength/entropy value.
By providing only the guessability estimate and not the entropy measure for high-entropy passwords, you are weakening security for your users.
No. Any sufficiently long password will be sufficiently secure and the zxcvbn metric reflects that. All the Shannon entropy gives you is an overall larger number, which is a) irrelevant and b) would lead to users choosing shorter passwords, so we would have to correct the good/excellent quality scores.
https://dl.acm.org/doi/abs/10.1145/3584318.3584324 - this has a few parts relevant to our conversation; I was thinking about this discussion as I wrote them. I wrote those parts for you, but it's up to you whether you want to read them.
Edited to add: author copy: https://iiridayn.info/papers/cryptwords.pdf
Overview
When generating a password, the entropy value changes every time a password is (re)generated with the same settings.
Steps to Reproduce
1. Open the password generator.
2. Generate a password with fixed settings and note the entropy value shown.
3. Regenerate with the same settings and compare the entropy values.
Expected Behavior
I expect the entropy value to be exactly the amount of entropy in the generated password, just as it is for generated passphrases.
Actual Behavior
The entropy instead appears to be an estimate of guessability, which is NOT the same thing as Shannon entropy. However, for generated passwords, the Shannon entropy is the same thing as the guessability (assuming the attacker uses the same distribution; if the attacker uses a Zipf's law distribution, the guessability may in extremely rare cases be notably lower, but should usually be around the same).
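A worked check of that claim, assuming a uniform generator and an attacker who knows the distribution:

```python
import math

# For a uniform space of size S, an optimal attacker needs (S + 1) / 2
# guesses on average, so log2(expected guesses) is about entropy - 1.
S = 62 ** 20                   # e.g. 20-char alphanumeric passwords
print(math.log2(S))            # ~119.08 bits of Shannon entropy
print(math.log2((S + 1) / 2))  # ~118.08 -- guessability tracks entropy
```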
Context
This is similar to #2061 and #3785.
Notably, the first proposes a reasonable solution to concerns that the password generator might generate something like "password" if set to 8 lowercase characters. The odds are 1/208,827,064,576, so I don't personally think that's a significant concern; though perhaps the odds of more easily guessed generated PINs (4 digits only) could justify a generator blocklist, or showing a guessability metric alongside the 13.29 bits of entropy in a random PIN (technically, adding a blocklist reduces entropy slightly in exchange for boosting minimum guessability).
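The figures above, for reference:

```python
import math

print(26 ** 8)             # 208,827,064,576 possible 8-char lowercase passwords
print(1 / 26 ** 8)         # ~4.8e-12 odds of any one of them, e.g. "password"
print(math.log2(10 ** 4))  # ~13.29 bits in a uniformly random 4-digit PIN
```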
KeePassXC - Version 2.7.1 Revision: 5916a8f
Operating System: Arch Linux
Desktop Env: i3
Windowing System: X11