eternaltyro / cryptsetup

Since Google code is shuttering...
http://code.google.com/p/cryptsetup
GNU General Public License v2.0

use /dev/random per default #161

Open · GoogleCodeExporter opened this issue 9 years ago

GoogleCodeExporter commented 9 years ago
Hi.

Why not use /dev/random by default?

Isn't it only used in luksFormat or when adding new keys anyway (which happens
only rarely)?

Sure, it could block in batch applications, but it's generally questionable
whether encryption makes that much sense there... and if they need it, they
could still use the command-line option for urandom.

And we should always default to the most secure settings.

Cheers,
Chris.

Original issue reported on code.google.com by calestyo@gmail.com on 28 Jun 2013 at 6:12
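
For reference: the per-invocation choice mentioned above already exists. The
luksFormat options --use-urandom and --use-random select which kernel RNG is
used to generate the master key, e.g.:

  cryptsetup luksFormat --use-urandom <device>   # non-blocking kernel RNG
  cryptsetup luksFormat --use-random <device>    # blocking kernel RNG

where <device> is only a placeholder.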

GoogleCodeExporter commented 9 years ago
Especially with things like haveged, this should be a non-issue on many systems
anyway.

Original comment by calestyo@gmail.com on 28 Jun 2013 at 6:16

GoogleCodeExporter commented 9 years ago
The main reason is automated installs where you cannot do terminal input
(yes, I know that is a problem in itself).

The former plan was to use the RNG from the backend crypto library (gcrypt's
strong/secure RNG mode); the reason not to do this was http://bugs.g10code.com/gnupg/issue1217
That now seems to be fixed, but we have more backends now...
And note that in FIPS mode it already uses a different RNG.

Anyway, there is some work to fix the depletion problem of /dev/random in the
kernel (mainly the use of the Intel RDRAND/RDSEED instructions), and having
rng-tools (or haveged, as you mentioned) feed the kernel RNG is becoming more
common, so perhaps one day I will switch to /dev/random by default.

But not yet. A distro maintainer can change the default at compile time (and I
would perhaps suggest this for hardened distros).

Well, let's keep this issue open. I will perhaps write some more text on this
in the coming months.

Original comment by gmazyl...@gmail.com on 28 Jun 2013 at 7:00

GoogleCodeExporter commented 9 years ago
Sure... but as I've said, with automated installs it's questionable anyway
whether they can really be made secure at all (at least some kernel/initrd
will usually stay unencrypted)... and as such you have considerable attack
vectors against these systems.
Also, with haveged there should always be plenty of entropy left.

Anyway... the default should always be the secure setting, and anything less
secure (as in automated installs) should have to be selected manually.
mkfs.ext4, for example, doesn't overwrite a pre-existing filesystem by default
and make you pass a --dont-screw-existing-fs option to prevent that...
it does it the other way round, and the more secure setting is the default.

And I don't quite see why the majority of users should suffer for the 
questionable case of batch installs.

Original comment by calestyo@gmail.com on 28 Jun 2013 at 7:22

GoogleCodeExporter commented 9 years ago
Hello,

I agree with the initial poster: cryptographic software should be secure by
default.

In low-entropy situations, it seems that both /dev/urandom and /dev/random have
their drawbacks: the former is insecure, whereas the latter will result in
(seemingly) non-deterministic hangs that may be difficult to debug.

Thus, I'd like to suggest a third option as the default: read from /dev/random
but give up after a short while (e.g. 20 seconds) and abort with a descriptive
error message. (This option could be called something like
--use-random-with-timeout.)

That way, even in low-entropy situations, everyone is happy: users concerned
about security learn that they need to improve their entropy sources, and batch
users learn that they can specify --use-urandom to never see that error again.
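
A minimal sketch of what such a timed read from /dev/random could look like;
the poll()-based loop, the error text and the helper name
read_random_with_timeout are only illustrations of the idea above, not an
existing cryptsetup interface:

#include <errno.h>
#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

/* Read 'len' bytes from /dev/random, giving up if no data arrives
 * within 'timeout_ms' milliseconds. Returns 0 on success, -1 on
 * error or timeout. */
static int read_random_with_timeout(void *buf, size_t len, int timeout_ms)
{
    int fd = open("/dev/random", O_RDONLY | O_NONBLOCK);
    if (fd < 0)
        return -1;

    size_t done = 0;
    while (done < len) {
        struct pollfd pfd = { .fd = fd, .events = POLLIN };
        int r = poll(&pfd, 1, timeout_ms);
        if (r <= 0) {            /* 0 = timeout, -1 = poll error */
            close(fd);
            fprintf(stderr, "Not enough entropy in /dev/random; "
                    "feed the entropy pool or use --use-urandom.\n");
            return -1;
        }
        ssize_t n = read(fd, (char *)buf + done, len - done);
        if (n < 0) {
            if (errno == EAGAIN || errno == EINTR)
                continue;
            close(fd);
            return -1;
        }
        done += (size_t)n;
    }
    close(fd);
    return 0;
}

A real implementation would presumably track the total elapsed time across
partial reads and hook into cryptsetup's existing RNG code rather than opening
the device directly.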

I'd volunteer to implement this approach in case it can be agreed upon. What
do you think?

Thank you and best regards,
Thiemo Nagel

Original comment by thiemo.n...@gmail.com on 9 Oct 2013 at 10:57

GoogleCodeExporter commented 9 years ago
http://www.2uo.de/myths-about-urandom/

Original comment by saltykre...@gmail.com on 14 Aug 2014 at 8:29