Hi @calestyo,
Thanks for your hint!
Yes, `random.SystemRandom()` uses `/dev/urandom` - and it is the right way to go IMHO. `/dev/random` has been deprecated for a long time now, and the quality of `/dev/urandom` is by far enough for diceware.
See https://www.2uo.de/myths-about-urandom/ for some ongoing myths about the "insecureness" of `/dev/urandom`. A short version of this post can be found on Stack Exchange: https://unix.stackexchange.com/questions/324209/when-to-use-dev-random-vs-dev-urandom.

Also on https://crypto.stackexchange.com/questions/41595/when-to-use-dev-random-over-dev-urandom-in-linux you will find reasons why to prefer `/dev/urandom` over `/dev/random` in nearly all cases, including use on "low-entropy" systems.
So, the short answer is: no, sorry. I see no good reason to switch.
One could, however, consider using `os.getrandom`, which is a state-of-the-art source for randomness - but not with `/dev/random` as backend.
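For illustration, a minimal sketch of what that could look like (assuming Linux, where the `getrandom(2)` syscall is exposed as `os.getrandom()`; this is not diceware code):

```python
# Minimal sketch, not diceware code: os.getrandom() wraps the getrandom(2)
# syscall (Linux-only, Python 3.6+). With the default flags=0 it blocks only
# until the kernel CSPRNG has been seeded once, then never again -- i.e. it
# behaves like a seeded /dev/urandom, not like the old blocking /dev/random.
import os

raw = os.getrandom(32)   # 32 random bytes
print(raw.hex())
```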
I hope you agree, when diving deeper into that topic.
Hey.
> Yes, `random.SystemRandom()` uses `/dev/urandom` - and it is the right way to go IMHO. `/dev/random` has been deprecated for a long time now, and the quality of `/dev/urandom` is by far enough for diceware.
I'm pretty sure /dev/random is not deprecated... in fact it's rather the other way round. A few months ago there was a patch set in the Linux kernel which made `urandom` a true copy of `random`, "solving" the blocking issue by using jitter entropy. See e.g. https://lwn.net/Articles/884875/.

The change was, for the time being, only reverted because some architectures Linux may be built on don't support this method of getting entropy from CPU jitter.
> See https://www.2uo.de/myths-about-urandom/ for some ongoing myths about the "insecureness" of `/dev/urandom`.
IMO that article lacks an important point: references... I mean, anyone could claim something. At least some of what's written there doesn't seem to be fully the case. E.g.:
> Fact: /dev/urandom is the preferred source of cryptographic randomness on UNIX-like systems.
One surely cannot say that in general. Many applications will only use `/dev/random`, e.g. GnuPG, and do so for good reasons.

In the current state `urandom`, even on Linux, may simply give back bad randomness when the CSPRNG is not yet seeded, which is just the reason for the above-mentioned desire to eventually drop `urandom` altogether.
> Fact: /dev/random has a very nasty problem: it blocks.
And the next "Fact", where the author claims the opposite, seems plain wrong. Again, in a low-entropy situation, the random data will simply be bad. It also doesn't help if 256 bits of entropy are enough to seed the CSPRNG for `urandom`... if they aren't there, `/dev/urandom` would still give back bad entropy.
> Look, I don't claim that injecting entropy is bad. It's good. I just claim that it's bad to block when the entropy estimate is low.
Also seems plain wrong. One can argue whether one wants to allow something non-blocking, which `getrandom()` offers via a flag (and where hopefully the developer setting it knows what they're doing). But I think many security experts would say that using bad entropy in a situation where one wants entropy (otherwise one wouldn't request it) is always bad, as it defeats the purpose in the first place.
* Further down in "Two kinds of security, one that matters" his argument seems to be that since algorithms may be broken (and every one except the OTP can be) it also doesn't make sense to use best-quality randomness. That seems a bit strange to me, TBH… that e.g. AES gets broken (in a usable way) is probably far in the future... however, when I have a system with no HDD, no NIC, a CPU with no jitter and no keyboard... there will be little entropy fed into the system and my urandom could be predictable. So what has the one to do with the other? The remote possibility of some algorithm being broken vs. securing against giving back entropy in a low-entropy state?
* "What's wrong with blocking?" is IMO also quite misguided. The argument of the author seems to be that since people don't like the blocking they'll hack around it, and because that's even worse it would be better to not block in the first place. It's true that users tend to do that... but why punish all users who use the system properly (and simply wait until it unblocks, or realise that this may not be the right environment for key generation) just because there are some who desperately try to saw off the branch they're sitting on? Yes, people do not check the remote SSH server's fingerprints... but because some are stupid, should we just stop displaying them altogether and do some TOFU model? No! Because that would also break security for all those who do actually check the fingerprints.
* Several times he claims that the manpages would be wrong, without giving any real proper proof or reference for that. TBH, if someone makes unproven claims like "The man page is silly, that's all." and tries to ridicule some attack scenarios with things like the Bogeyman, it feels hard to take him seriously.
The author is of course also right in some aspects:

* `urandom` should in principle be secure for cryptographic use, but the critical bit here is "well seeded".
* `random` and `urandom` use the same CSPRNG in the back, but again that lacks the important detail that in the `urandom` case it will return data even if not properly seeded.

But still, I'd say that without any given proof or something peer-reviewed, the documentation of the authors should be taken as authoritative... and not some claims in some blogs.
Btw, I think the misconception of `/dev/random` being deprecated may come from random(4)'s:
> The /dev/random device is a legacy interface which dates back
But I think what's rather meant here is that `getrandom(2)` is now the proper choice.
In fact, the very same paragraph emphasises:
> /dev/random is suitable for applications that need high quality randomness, and can afford indeterminate delays.
The same is said in random(7):
> Unless you are doing long-term key generation (and most likely not even then), you probably shouldn't be reading from the /dev/random device or employing getrandom(2) with the GRND_RANDOM flag.
So the advice seems to be that for any normal crypto use, like your average TLS connection, urandom should be used, but anything that generates key material (and IMO passphrases as generated by diceware are comparable in that respect) should use `random`.
> Also on https://crypto.stackexchange.com/questions/41595/when-to-use-dev-random-over-dev-urandom-in-linux you will find reasons why to prefer `/dev/urandom` over `/dev/random` in nearly all cases, including use on "low-entropy" systems.
Well, the latter seems again plain wrong. While I agree that using `urandom` should be no issue on high-entropy systems... it's just the point that it's not safe to use on low-entropy systems, with that being a real threat to security and not just some hypothetical possibility.
> I hope you agree, when diving deeper into that topic.
I'm afraid, but not really :-)
A passphrase, especially a high-entropy one as generated with `diceware`, may likely be something for long-term use and something from which crypto keys are generated (e.g. via some KDF like Argon2, which of course already gives some protection against low entropy).

I see no good reason why it would make sense to possibly let such passphrases be created in a low-entropy situation, which would go unnoticed when using `urandom` (or the respective flags with `getrandom()`).
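To make the "passphrase as key material" point concrete, here is a rough sketch of that kind of derivation; `hashlib.scrypt` stands in for Argon2 (which is not in the Python standard library), and the passphrase and parameters are made up for illustration:

```python
# Rough sketch only: derive a symmetric key from a diceware-style passphrase
# via a KDF. scrypt is used here as a stand-in for Argon2.
import hashlib
import os

passphrase = "rodeo banjo carton mural ounce vastness"   # hypothetical diceware output
salt = os.urandom(16)                                     # store next to the derived key
key = hashlib.scrypt(passphrase.encode(), salt=salt, n=2**14, r=8, p=1, dklen=32)
```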
And since `diceware` is nothing like `ssh`, which e.g. "still" needs to start up (for better or worse) on possibly low-entropy systems like VMs… there is no such pressure for `diceware`.
Passphrases aren't created that often or that early in boot, thus even using `random` (respectively `GRND_RANDOM`) should in practice never block, but just add that little extra protection against generating passphrases in a low-entropy situation, which users of it likely would not want.

Yes, the entropy quality is the same in both cases if the CSPRNG is well seeded, but only with `GRND_RANDOM` would a non-well-seeded one get noticed.
That's also why e.g. `gnupg` doesn't even provide a switch to use `urandom`, and others like `cryptsetup` merely provide it for things like random keys for swap at boot.
Cheers, Chris
Well,

tl;dr:

* in case you haven't noticed: using `random.SystemRandom` implies using a seeded `/dev/urandom` output, calling `getrandom` syscall where available.
* the policy I would like to apply here: do not use your own crypto reasoning. There are people that are more into it. Like the kernel maintainers and the Python maintainers. I trust their decisions.
* I still see no reason why to switch to (yes) deprecated `/dev/random`.

**`random.SystemRandom`:**

For `diceware` I would generally prefer not to do my (or your) own crypto. Except if you are djb. Or Tanja Lange. Or your code was discussed and approved on the kernel mailing list. That basically also includes reasoning about the right RNG interface. Talking about that is fine. But you should be very convincing if you want to replace `SystemRandom` with a self-brewed solution.

Because there is more to it than pure theoretical entropy reasoning. Using `random.SystemRandom` does not only give us normally safe defaults concerning crypto options. It also takes the burden from us to manually check dozens of architectures and operating systems for RNG-related changes all the time. Decisions made by the Python maintainers might not in every respect be the best at all times, but in my (limited) experience bad decisions will not go unnoticed and will normally follow the more general lines drawn by more acknowledged crypto experts. Their decisions will overall be better than ours.

**`/dev/random`?**

> I'm pretty sure /dev/random is not deprecated...
Hm, here is what Theodore Ts'o wrote. In 2017:
"Practically no one uses /dev/random. It's essentially a deprecated interface; the primary interfaces that have been recommended for well over a decade is /dev/urandom, and now, getrandom(2)." (https://lkml.org/lkml/2017/7/20/993)
> in fact it's rather the other way round. A few months ago there was a patch set in the Linux kernel which made `urandom` a true copy of `random`, "solving" the blocking issue by using jitter entropy. See e.g. https://lwn.net/Articles/884875/.
I am afraid you skipped an important step in the development of `/dev/random`. In 2020 the blocking pool was removed from the Linux kernel; please see https://lwn.net/Articles/808575/ for details.

This left `/dev/random` as a non-blocking interface to `getrandom(2)`. It was not providing the "real random numbers" any more and de facto turned into some sort of `/dev/urandom`, with the only difference that it blocked until it got an initial seed. It does not block afterwards anymore.
If you say `/dev/urandom` was nearly made like `/dev/random`, I would reply that in fact you are comparing two types of `urandom`: one which blocks before being seeded and one that does not. Neither of them provides the "real random" numbers people were fighting for in former times. Neither of them blocks after being seeded initially. Therefore I would say: in fact `/dev/random` is no more. There is only some sort of `/dev/urandom` left that is now called `/dev/random`. And I think this is good.
As a sidenote, there are also some harsh words in the post linked above concerning apps insisting on using "good" random numbers:

> "This [introducing another blocking pool] doesn't solve the problem. If two different users run stupid programs like gnupg, they will starve each other.
>
> As I see it, there are two major problems with /dev/random right now: it's prone to DoS (i.e. starvation, malicious or otherwise), and, because no privilege is required, it's prone to misuse. Gnupg is misuse, full stop." (Andy Lutomirski on LWN: https://lwn.net/ml/linux-kernel/888017FA-06A1-42EF-9FC0-46629138DA9E@amacapital.net/)
> > See https://www.2uo.de/myths-about-urandom/ for some ongoing myths about the "insecureness" of `/dev/urandom`.
>
> IMO that article lacks an important point: references... I mean, anyone could claim something.

I just wanted to be helpful. The article expresses (better than I could) my opinion in this case. It convinced me. The article is BTW referenced and cited (affirmatively) in tons of discussions about the `/dev/random` problem. It was also updated after changes in the kernel and should not convince by authority but by plausibility. Anyway...
> At least some of what's written there doesn't seem to be fully the case. E.g.:
>
> > Fact: /dev/urandom is the preferred source of cryptographic randomness on UNIX-like systems.
>
> One surely cannot say that in general. Many applications will only use `/dev/random`, e.g. GnuPG, and do so for good reasons. In the current state `urandom`, even on Linux, may simply give back bad randomness when the CSPRNG is not yet seeded, which is just the reason for the above-mentioned desire to eventually drop `urandom` altogether.
I disagree, and I think the Theodore Ts'o quote above backs that. The article also shows that, besides older Linux versions, nearly no system is using the blocking `/dev/random` approach. Even GnuPG is considering switching (and already uses `urandom` for plenty of operations). OpenSSL did the switch already. I do not know of any important crypto library that by default still fetches random numbers via `/dev/random` (at least not the blocking version).
> > Fact: /dev/random has a very nasty problem: it blocks.
>
> And the next "Fact", where the author claims the opposite, seems plain wrong. Again, in a low-entropy situation, the random data will simply be bad. It also doesn't help if 256 bits of entropy are enough to seed the CSPRNG for `urandom`... if they aren't there, `/dev/urandom` would still give back bad entropy.
So you are talking now about the initial seed only? That is not the main problem of `/dev/random`, and it is also not what the article is mainly about.
> > Look, I don't claim that injecting entropy is bad. It's good. I just claim that it's bad to block when the entropy estimate is low.
>
> Also seems plain wrong. One can argue whether one wants to allow something non-blocking, which `getrandom()` offers via a flag (and where hopefully the developer setting it knows what they're doing). But I think many security experts would say that using bad entropy in a situation where one wants entropy (otherwise one wouldn't request it) is always bad, as it defeats the purpose in the first place.
You seem to mix things up again. The "entropy estimate" is something different from an initial one-time seed. It is an attempt to compute the "randomness" of the random sources supported by the kernel and to block (at any time) based on the result of that estimation. Recent Linux kernels do not suffer from that problem. The older ones do. And we have to serve both of them.
> Further down in "Two kinds of security, one that matters" his argument seems to be that since algorithms may be broken (and every one except the OTP can be) it also doesn't make sense to use best-quality randomness. That seems a bit strange to me, TBH… that e.g. AES gets broken (in a usable way) is probably far in the future... however, when I have a system with no HDD, no NIC, a CPU with no jitter and no keyboard... there will be little entropy fed into the system and my urandom could be predictable. So what has the one to do with the other? The remote possibility of some algorithm being broken vs. securing against giving back entropy in a low-entropy state?
What exactly do you mean by "best quality randomness"? Is it about the (one-time) seeding only or do you really think there are "good random bits" you can tell apart from "bad pseudo-random bits"?
In my understanding, the "two kinds of security" the author compares are not the ones provided by (old-style) `/dev/urandom` and (old-style) `/dev/random`. Instead he compares a hypothetical stream of "really random" bits with a stream of bits provided by a (seeded) CSPRNG.
> * "What's wrong with blocking?" is IMO also quite misguided. The argument of the author seems to be that since people don't like the blocking they'll hack around it, and because that's even worse it would be better to not block in the first place. It's true that users tend to do that... but why punish all users who use the system properly (and simply wait until it unblocks, or realise that this may not be the right environment for key generation) just because there are some who desperately try to saw off the branch they're sitting on? Yes, people do not check the remote SSH server's fingerprints... but because some are stupid, should we just stop displaying them altogether and do some TOFU model? No! Because that would also break security for all those who do actually check the fingerprints.
You continue to presume that the author favors unseeded CSPRNGs over seeded ones. But this is not the point.
> * Several times he claims that the manpages would be wrong, without giving any real proper proof or reference for that. TBH, if someone makes unproven claims like "The man page is silly, that's all." and tries to ridicule some attack scenarios with things like the Bogeyman, it feels hard to take him seriously.
The misleading man page texts were also critically discussed in various places. In the end the "documentation of the authors" was indeed changed, although it took some time. But there was no objection I know of. Cf. https://bugzilla.kernel.org/show_bug.cgi?id=71211
> But still, I'd say that without any given proof or something peer-reviewed, the documentation of the authors should be taken as authoritative... and not some claims in some blogs.
>
> Btw, I think the misconception of `/dev/random` being deprecated may come from random(4)'s:
>
> > The /dev/random device is a legacy interface which dates back
>
> But I think what's rather meant here is that `getrandom(2)` is now the proper choice. In fact, the very same paragraph emphasises:
>
> > /dev/random is suitable for applications that need high quality randomness, and can afford indeterminate delays.
>
> The same is said in random(7):
>
> > Unless you are doing long-term key generation (and most likely not even then), you probably shouldn't be reading from the /dev/random device or employing getrandom(2) with the GRND_RANDOM flag.
>
> So the advice seems to be that for any normal crypto use, like your average TLS connection, urandom should be used, but anything that generates key material (and IMO passphrases as generated by diceware are comparable in that respect) should use `random`.
... "and most likely not event then)"
> > Also on https://crypto.stackexchange.com/questions/41595/when-to-use-dev-random-over-dev-urandom-in-linux you will find reasons why to prefer `/dev/urandom` over `/dev/random` in nearly all cases, including use on "low-entropy" systems.
>
> Well, the latter seems again plain wrong. While I agree that using `urandom` should be no issue on high-entropy systems... it's just the point that it's not safe to use on low-entropy systems, with that being a real threat to security and not just some hypothetical possibility.
>
> > I hope you agree, when diving deeper into that topic.
>
> I'm afraid, but not really :-)
>
> A passphrase, especially a high-entropy one as generated with `diceware`, may likely be something for long-term use and something from which crypto keys are generated (e.g. via some KDF like Argon2, which of course already gives some protection against low entropy). I see no good reason why it would make sense to possibly let such passphrases be created in a low-entropy situation, which would go unnoticed when using `urandom` (or the respective flags with `getrandom()`).
>
> And since `diceware` is nothing like `ssh`, which e.g. "still" needs to start up (for better or worse) on possibly low-entropy systems like VMs… there is no such pressure for `diceware`.
>
> Passphrases aren't created that often or that early in boot, thus even using `random` (respectively `GRND_RANDOM`) should in practice never block, but just add that little extra protection against generating passphrases in a low-entropy situation, which users of it likely would not want.
Again: this is not a "low entropy" situation but a "go unseeded" situation, which makes a difference for me. Besides this, well, see below.
> Yes, the entropy quality is the same in both cases if the CSPRNG is well seeded, but only with `GRND_RANDOM` would a non-well-seeded one get noticed.
No, it won't. See below.
> That's also why e.g. `gnupg` doesn't even provide a switch to use `urandom`
Erm ...

> rndoldlinux
> Uses the operating system provided /dev/random and /dev/urandom devices. The /dev/gcrypt/random.conf config option only-urandom can be used to inhibit the use of the blocking /dev/random device.

(https://www.gnupg.org/documentation/manuals/gcrypt/Random_002dNumber-Subsystem-Architecture.html)
> and others like `cryptsetup` merely provide it for things like random keys for swap at boot.
FWIW, on my desktop computer, `cryptsetup` comes with `/dev/urandom` as default source.
Okay, you do not like that article and insist. That's a pity, but maybe there was also a misunderstanding between unseeded pools and an old-style blocking `/dev/random` with a pool that blocks because entropy was "depleted". In the beginning I thought you were asking to go back to `/dev/random` (well, the headline said so), which on older machines might include blocking again and again while getting nearly no benefit from that.

Therefore I have to ask: what is the issue you are complaining about? Do you want us to remove usage of `/dev/urandom`? Do you want us to use `os.getrandom()` (which is available on Linux only)? You do not necessarily seem to worry about random numbers that come straight from a CSPRNG, so is it only the initial seeding that worries you? No one said that running an unseeded CSPRNG is a good thing. Not the article I linked to, and hopefully I didn't either.
But have you ever had the time to look into the Python implementation details?
Basically, you are complaining that we use `random.SystemRandom`. Right? Because it uses `/dev/urandom`. It does. And this is right from my perspective. But there is more:
> The module also provides the SystemRandom class which uses the system function os.urandom() to generate random numbers from sources provided by the operating system.

(https://docs.python.org/3/library/random.html)
which is not exactly the same as using `/dev/urandom`, I admit. But what does `os.urandom` do? Let's see...
> On Linux, if the getrandom() syscall is available, it is used in blocking mode: block until the system urandom entropy pool is initialized (128 bits of entropy are collected by the kernel). See the PEP 524 for the rationale.
>
> On Linux, the getrandom() function can be used to get random bytes in non-blocking mode (using the GRND_NONBLOCK flag) or to poll until the system urandom entropy pool is initialized.

(https://docs.python.org/3/library/os.html#os.urandom)
Which means: by default and on Linux we already use a seeded `/dev/urandom` pool that only blocks for the first 8 bytes, if `getrandom(2)` is available. This is in line with recommendations from the kernel crypto people. No need for `/dev/random`. Minimal blocking. Isn't that what you wanted?
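To make that concrete, here is a rough illustration of the call chain just described (not diceware's actual code; the word list is made up):

```python
# Sketch: SystemRandom.choice() -> os.urandom() -> getrandom(2) on Linux,
# which blocks only until the kernel pool has been initialized once.
import random

wordlist = ["alpha", "bravo", "charlie", "delta", "echo", "foxtrot"]  # made-up list
rng = random.SystemRandom()                  # entropy comes from os.urandom()
phrase = " ".join(rng.choice(wordlist) for _ in range(6))
print(phrase)
```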
I still cannot see your problem.
Hey.

Well, I guess this whole discussion went a bit off... anyway:
> * in case you haven't noticed: using `random.SystemRandom` implies using a seeded `/dev/urandom` output, calling `getrandom` syscall where available.
This I don't understand… AFAICS from the documentation, it either uses `/dev/urandom`, which may however not be seeded (or how should Python know?). And even if `getrandom()` is available, it uses it via `os.urandom()`, which according to the documentation uses `GRND_NONBLOCK`, again meaning it would return data even if not seeded.
> * the policy I would like to apply here: do not use your own crypto reasoning. There are people that are more into it. Like the kernel maintainers and the Python maintainers. I trust their decisions.
Sure, and I don't say any change should be made just because some random dude like me proposed it, but there's the upstream documentation for at least Linux's random sources, which rather suggests using `random` for things like long-lived key material, and there are not-so-unimportant examples, e.g. gpg, doing the same.
> * I still see no reason why to switch to (yes) deprecated `/dev/random`.
I don't know why you keep repeating this. The patches I've mentioned clearly show that the `random` source of the kernel is not deprecated and actually the recommended one for key material. It's only the use of the device file for reading, rather than the syscall, which is not recommended (but this btw. for both `random` and `urandom`).
> For `diceware` I would generally prefer not to do my (or your) own crypto. Except if you are djb. Or Tanja Lange. Or your code was discussed and approved on the kernel mailing list. That basically also includes reasoning about the right RNG interface. Talking about that is fine. But you should be very convincing if you want to replace `SystemRandom` with a self-brewed solution.
None of this was proposed by me, or was it? `random.SystemRandom()` is already documented to use `os.urandom()`, which has its sibling `os.getrandom()` doing just the same, except that it can set `os.GRND_RANDOM` and `GRND_NONBLOCK`.

So from a Python POV there should be no difference other than the flags it sets in the syscall.
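For reference, a small sketch of what that flag difference looks like from Python (Linux-only; whether `os.GRND_RANDOM` is available depends on the platform and Python build):

```python
# Sketch of the getrandom(2) flag variants being discussed here.
import os

urandom_like = os.getrandom(32)                   # flags=0: block only until initially seeded
random_like = os.getrandom(32, os.GRND_RANDOM)    # draw from the random (GRND_RANDOM) source
# os.getrandom(32, os.GRND_NONBLOCK)              # fail with an error instead of blocking
```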
And the recommended use of the syscall is also no invention by myself or some arbitrary voodoo, but it's what the upstream documentation of the Linux kernel RNG suggests:
I've already quoted some sections stating that before, e.g. from random(4):

> /dev/random is suitable for applications that need high quality randomness, and can afford indeterminate delays.
> Using `random.SystemRandom` does not only give us normally safe defaults concerning crypto options. It also takes the burden from us to manually check dozens of architectures and operating systems for RNG-related changes all the time.
Does it do anything else to gather the randomness than calling `os.urandom()`?
> Decisions made by the Python maintainers might not in every respect be the best at all times, but in my (limited) experience bad decisions will not go unnoticed and will normally follow the more general lines drawn by more acknowledged crypto experts.
Wasn't there just recently that debacle with the security hole around bignums, which had been ignored for over 2 years? ;-)
"Practically no one uses /dev/random. It's essentially a deprecated interface; the primary interfaces that have been recommended for well over a decade is /dev/urandom, and now, getrandom(2)." (https://lkml.org/lkml/2017/7/20/993)
But isn't the main point here not-using-the-device-file, but rather `getrandom()`?!
Look at the article https://lwn.net/Articles/884875/ (respectively https://lwn.net/Articles/889452/ which explains why the former was reverted for now)... which describes the situation from early this year.
AFAIK, Ted Tso had recently only limited time for the kernel RNG, so Jason Donenfeld stepped up (and is now co-maintainer) and initiated some larger scale evolution of the driver.
The well-known situation is quite clear:

> /dev/urandom was always meant as the device for nearly everything to use, as it does not block; it simply provides the best random numbers that the kernel can provide at the time it is read. /dev/random, on the other hand, blocks whenever it does not have sufficient entropy to provide cryptographic-strength random numbers.
And the goal is:
> the plan to unite the two kernel devices that provide random numbers; /dev/urandom was to effectively just be another way to access the random numbers provided by /dev/random
So I'm not sure where you take your information from, but I think it's the other way round - eventually `urandom` (as a source, not the device file) will become `random`.
> I am afraid you skipped an important step in the development of `/dev/random`. In 2020 the blocking pool was removed from the Linux kernel; please see https://lwn.net/Articles/808575/ for details.
Are you sure this patch was ever merged? I couldn't find a commit "Rework random blocking" in a quick search… and the much more recent articles I quoted above also imply the opposite.
> If you say `/dev/urandom` was nearly made like `/dev/random`
AFAIU, it was only reverted for the time being, because some architectures lacked support to provide good entropy via the CPU jitter.
> Neither of them provides the "real random" numbers people were fighting for in former times. Neither of them blocks after being seeded initially.
Sure, none of this alone is a TRNG, unless one uses some hardware dongles or so, e.g. ChaosKey or EntropyKey.
> As a sidenote, there are also some harsh words in the post linked above concerning apps insisting on using "good" random numbers:
>
> > "This [introducing another blocking pool] doesn't solve the problem. If two different users run stupid programs like gnupg, they will starve each other.
Well, I cannot really tell what exactly was meant there back then, but "stupid" seems to be more in the sense that gpg just reads as much as it needs (thereby causing the starvation)... which shouldn't be a big problem with `random` anymore these days on most platforms (where CPU jitter can be used). So I'm not sure whether one can take this as a hint on whether or not gpg using the `random` source is considered stupid. The manpages still say it's suggested for long-term key-like material, so I guess this comment just referred to the fact that it reads as much as it needs, without looking at other consumers.
> > As I see it, there are two major problems with /dev/random right now: it's prone to DoS (i.e. starvation, malicious or otherwise), and, because no privilege is required, it's prone to misuse. Gnupg is misuse, full stop." (Andy Lutomirski on LWN: https://lwn.net/ml/linux-kernel/888017FA-06A1-42EF-9FC0-46629138DA9E@amacapital.net/)
At least the starvation issue seems to not be true [anymore] (on common architectures), which also makes the misuse/DoS point mostly go away... plus it's hard to count that as misuse/DoS anyway, since it's locally running stuff - nothing that an arbitrary remote attacker would control.
> I just wanted to be helpful. The article expresses (better than I could) my opinion in this case.
Sure, and I thankfully read it... but to me it felt like it contained a number of half-truths. Of course I'm no crypto/RNG expert either... so I basically also just follow what I read e.g. in the LWN articles or in the kernel manpages.
> It convinced me. The article is BTW referenced and cited (affirmatively) in tons of discussions about the `/dev/random` problem. It was also updated after changes in the kernel and should not convince by authority but by plausibility. Anyway...
Sure, but that's the difficulty when you have multiple plausible sources which (at least partially) contradict each other.
> Even GnuPG is considering switching (and already uses `urandom` for plenty of operations). OpenSSL did the switch already. I do not know of any important crypto library that by default still fetches random numbers via `/dev/random` (at least not the blocking version).
Well, it would be useless to use a non-blocking version, as then it's more or less `urandom`. I'm not aware of the status of OpenSSL... my expectation would have been, and that's what I read from the kernel manpages, that it makes sense to use `random` for any long-term key-like material, and `urandom` for anything else.
> > Also seems plain wrong. One can argue whether one wants to allow something non-blocking, which `getrandom()` offers via a flag (and where hopefully the developer setting it knows what they're doing). But I think many security experts would say that using bad entropy in a situation where one wants entropy (otherwise one wouldn't request it) is always bad, as it defeats the purpose in the first place.
>
> You seem to mix things up again. The "entropy estimate" is something different from an initial one-time seed. It is an attempt to compute the "randomness" of the random sources supported by the kernel and to block (at any time) based on the result of that estimation. Recent Linux kernels do not suffer from that problem. The older ones do. And we have to serve both of them.
I didn't write about the entropy estimate there, or did I? It's more about the fact that with the non-blocking/urandom version one will get lower quality randomness in low-entropy situations.
> What exactly do you mean by "best quality randomness"? Is it about the (one-time) seeding only or do you really think there are "good random bits" you can tell apart from "bad pseudo-random bits"?
At this point I had meant the seeding.
> The misleading man page texts were also critically discussed in various places. In the end the "documentation of the authors" was indeed changed, although it took some time. But there was no objection I know of. Cf. https://bugzilla.kernel.org/show_bug.cgi?id=71211
Well, I was quoting the most recent versions of them. If there were still anything misleading in them, it should probably be reported.
> > So the advice seems to be that for any normal crypto use, like your average TLS connection, urandom should be used, but anything that generates key material (and IMO passphrases as generated by diceware are comparable in that respect) should use `random`.
>
> ... "(and most likely not even then)"
Well, yes, but still, as quoted before and again above... the currently documented recommendation is the `random` source for long-term key material... the subclause you quote is of course in there, but it says "most likely"... admittedly, it's a misleading sentence; they should clearly suggest one or the other.
> Again: this is not a "low entropy" situation but a "go unseeded" situation, which makes a difference for me. Besides this, well, see below.
I always considered these the same... with "go unseeded" being "low entropy" from the beginning.

Anyway, I thought a distinct feature of the kernel CRNG was the constant re-seeding, which would make the life of an attacker trying to gather the state of the CRNG harder, because even if he does, it would have been re-seeded by the time this happens.

How this "guessing" is done is obviously beyond scope... but things like Meltdown/Spectre or Rowhammer would be just some ideas.

Maybe this is also an explanation for the suggestions from the manpage vs. e.g. what djb said in the quote you made earlier.

Sure, it's possible to make a CRNG yield unpredictable data, but that breaks down if its internal state is found out.
It was always my assumption that this would be harder with the `random` source, which constantly draws entropy from several sources... unless of course the re-seeding of the `urandom` source happens (which may, again, not take place in a low-entropy situation).
> > Yes, the entropy quality is the same in both cases if the CSPRNG is well seeded, but only with `GRND_RANDOM` would a non-well-seeded one get noticed.
>
> No, it won't. See below.
> > That's also why e.g. `gnupg` doesn't even provide a switch to use `urandom`
>
> Erm ...
>
> rndoldlinux: Uses the operating system provided /dev/random and /dev/urandom devices. The /dev/gcrypt/random.conf config option only-urandom can be used to inhibit the use of the blocking /dev/random device. (https://www.gnupg.org/documentation/manuals/gcrypt/Random_002dNumber-Subsystem-Architecture.html)
I said gnupg, not gcrypt. The former doesn't seem to export that, or at least I couldn't find it. The latter is only for developers, where one probably hopes that they know what they do.
> > and others like `cryptsetup` merely provide it for things like random keys for swap at boot.
>
> FWIW, on my desktop computer, `cryptsetup` comes with `/dev/urandom` as default source.
But only because some things wouldn't work out of the box anymore, at least back then: https://gitlab.com/cryptsetup/cryptsetup/-/issues/161#note_978521 :

> The main reason are automatic installs where you cannot do terminal inputs (yes, I know that it is problem itself).
>
> …
>
> so perhaps one day I will switch to /dev/random by default.
Anyway... I merely meant this as a small improvement, because passphrases might easily be in long-term use; that I'd have "complained" is perhaps a bit exaggerated, or at least it wasn't meant so.

Actually, I considered this merely a change that would at best do good and at worst cause no harm (since `diceware` isn't reading that much entropy and it's probably not used much on non-interactive systems)... I didn't expect it to cause so much controversy.
So if you do not want `random` as the source, respectively the blocking behaviour... or want to make it configurable as `cryptsetup` does... fine by me; it's not worth so much of an argument.
Best wishes, Chris.
btw: I've just been pointed to a recent talk by Donenfeld, and part of what I wrote above is actually already wrong for recent kernels. The change that makes `urandom` produce the same as the `random` source after initialisation seems to have already been merged.
Hey.
For security reasons it would probably make sense if `diceware` used `/dev/random` rather than `/dev/urandom`.

And since people probably don't use it to generate passphrases in batch mode on very low-entropy systems, it should default to the safer variant.

Python's `random.SystemRandom()` seems to always use `urandom`, but there's `os.getrandom()` which can be used with `os.GRND_RANDOM`.
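For the record, a hypothetical sketch of what such a change could look like (this is not existing diceware or stdlib code, just an illustration of the idea; Linux-only):

```python
# Hypothetical sketch: a SystemRandom-like class whose bytes come from
# os.getrandom(..., os.GRND_RANDOM) instead of os.urandom().
import os
import random

class GrndRandom(random.SystemRandom):
    def getrandbits(self, k):
        numbytes = (k + 7) // 8
        x = int.from_bytes(os.getrandom(numbytes, os.GRND_RANDOM), "big")
        return x >> (numbytes * 8 - k)                  # drop excess bits

    def random(self):
        # 53-bit float in [0.0, 1.0), mirroring how SystemRandom.random() works
        return (int.from_bytes(os.getrandom(7, os.GRND_RANDOM), "big") >> 3) * 2**-53

rng = GrndRandom()
print(rng.choice(["alpha", "bravo", "charlie"]))        # made-up word list
```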
Thanks, Chris.