We need someone who knows cryptography to check the discussed key derivation scheme, and especially to answer the questions in this message: https://groups.google.com/d/msg/qubes-devel/CZ7WRwLXcnk/1N0sYf6lVvUJ
We may never get the assistance of a cryptographer on this. For now, we should at least pass the `-md` option to `openssl` and specify `sha256` (to match the key length of `aes256`). In other words:

```
openssl enc -md sha256 -e -aes-256 [...]
```

This would be better than nothing. Currently, `md5` is used by default, which actually weakens security for users who select passphrases larger than 128 bits (probably almost all Qubes users).
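A quick way to see what this changes: `openssl enc` can print the key and IV it derives without encrypting anything (the `-P` flag). The salt is pinned here so the two runs are comparable; the passphrase is a placeholder.

```
# With the old default of -md md5, the 256-bit AES key is assembled from chained
# 128-bit MD5 digests of (passphrase || salt); -md sha256 derives it in one digest.
openssl enc -aes-256-cbc -md md5    -pass pass:testpass -S 0011223344556677 -P
openssl enc -aes-256-cbc -md sha256 -pass pass:testpass -S 0011223344556677 -P
```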
On Thu, Oct 22, 2015 at 02:54:10AM -0700, Axon wrote:

> We may never get the assistance of a cryptographer on this. For now, we should at least pass the `-md` option to `openssl` and specify `sha256` (to match the key length of `aes256`). In other words: `openssl enc -md sha256 -e -aes-256 [...]`
>
> This would be better than nothing. Currently, `md5` is used by default, which actually weakens security for users who select passphrases larger than 128 bits (probably almost all Qubes users).

As the user can (theoretically) freely choose the encryption algorithm, sha256 alone may not be a universal option. Do you know any way to get the key length from an openssl cipher name? Or just search for some numbers in it?...
> As the user can (theoretically) freely choose the encryption algorithm, sha256 alone may not be a universal option.

Two ideas:

> Do you know any way to get the key length from an openssl cipher name? Or just search for some numbers in it?...

I'm not aware of any. That would be my guess too...
Based on this, and after further research, I think it would be best to go with GPG for encryption (but not for integrity-verification). Ideally, we would read the user-defined preferences in `~/.gnupg/gpg.conf` in dom0. This would allow users to set the following options (among others):
```
--s2k-cipher-algo name
       Use name as the cipher algorithm used to protect secret keys. The default
       cipher is CAST5. This cipher is also used for conventional encryption if
       --personal-cipher-preferences and --cipher-algo is not given.

--s2k-digest-algo name
       Use name as the digest algorithm used to mangle the passphrases. The
       default algorithm is SHA-1.

--s2k-mode n
       Selects how passphrases are mangled. If n is 0 a plain passphrase (which
       is not recommended) will be used, a 1 adds a salt to the passphrase and
       a 3 (the default) iterates the whole process a number of times (see
       --s2k-count). Unless --rfc1991 is used, this mode is also used for
       conventional encryption.

--s2k-count n
       Specify how many times the passphrase mangling is repeated. This value
       may range between 1024 and 65011712 inclusive. The default is inquired
       from gpg-agent. Note that not all values in the 1024-65011712 range are
       legal and if an illegal value is selected, GnuPG will round up to the
       nearest legal value. This option is only meaningful if --s2k-mode is 3.
```
Recommended defaults:

```
gpg --encrypt --symmetric --s2k-cipher-algo AES256 --s2k-digest-algo SHA512 --s2k-mode 3 --s2k-count 65011712 --compress-algo bzip2
```
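For reference, restoring such a file needs no special options, since the symmetric parameters are recorded in the OpenPGP packets (file names here are only illustrative):

```
gpg --output qubes-backup.tar --decrypt qubes-backup.tar.gpg
```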
This seems to be the best option. Some minor questions/improvements/thoughts:

- Should be no `--encrypt`, as it is for asymmetric encryption.
- We currently have compression handled independently of encryption; I think we should keep that and use `--compress-algo none`. Our current approach allows specifying any "filter" program to do the actual compression. Personally I like `pigz` (parallel gzip) for much better performance on an 8-core system :)
- Generally I'm fine with `--s2k-count 65011712`; on my system it needs 1.00s to encrypt just the "test" string (so mostly a key derivation benchmark), but it may be somewhat longer on older systems; do we care? On a Thinkpad T61 (Core 2 Duo T7300): 1.42s. Ok, not bad at all.
- What about integrity protection (openssl)? Is it a problem that the passphrase is used directly there (at least I think it is)? If yes, and if we want to change that, it would require some more work to make it compatible with older formats, because of the integrity protection of the backup header (only after its verification can we get properties like backup format version, encryption algorithm, etc.).
- Backup data will be integrity-protected twice: once by openssl, and additionally by gpg. I don't think it is a problem, but I'm not a cryptographer.
- I don't think we should parse `~/.gnupg/gpg.conf`. Either let it use the defaults (possibly changed by the user in that file), or override some options.

As for the most secure gpg settings, check this out. Quoting https://github.com/Whonix/anon-gpg-tweaks: "Security and privacy enhancements for gnupg's config file for user "user" in /home/user/.gnupg/gpg.conf." See also: https://raw.github.com/ioerror/torbirdy/master/gpg.conf and https://github.com/ioerror/torbirdy/pull/11.
> Backup data will be integrity-protected twice: once by openssl, and additionally by gpg. I don't think it is a problem, but I'm not a cryptographer.

Not a bad thought. In the past there were "known-plaintext attacks", where it was recommended not to encrypt known plaintext, as this would weaken the ciphertext. Any serious algorithm nowadays must pass this. Normally it should not be a problem; if it were, it would be a critical cipher weakness.

I am just worried about the extra complexity of double encryption with respect to manual backup recovery. But if you think it's fine, go for it.
> I don't think we should parse `~/.gnupg/gpg.conf`. Either let it use the defaults (possibly changed by the user in that file), or override some options.

Yes, programs should not rely on user settings in `~/.gnupg/gpg.conf` indeed. Configurable options is a pony.

gpg supports `--no-options` (skip `~/.gnupg/gpg.conf`) and `--homedir`.

By the way, in Qubes Q3 (and earlier) you cannot continue the backup if you uncheck the encryption box. [I personally use full disk encryption for all my disks, so the encryption of QVMM is nice, but I prefer to disable it to ease manual recovery (in case the backup is ever damaged, a version conflict, or whatever).]
> > Backup data will be integrity-protected twice: once by openssl, and additionally by gpg. I don't think it is a problem, but I'm not a cryptographer.
>
> Not a bad thought. [...] I am just worried about the extra complexity of double encryption with respect to manual backup recovery. But if you think it's fine, go for it.

Not encryption, just integrity protection (the .hmac files).

> By the way, in Qubes Q3 (and earlier) you cannot continue the backup if you uncheck the encryption box. [...]

Can you create a separate ticket for it? I was not aware of this problem...
@marmarek:

> Should be no `--encrypt`, as it is for asymmetric encryption.

Oops. Right.

> We currently have compression handled independently of encryption; I think we should keep that and use `--compress-algo none`. Our current approach allows specifying any "filter" program to do the actual compression. Personally I like `pigz` (parallel gzip) for much better performance on an 8-core system :)

Sounds good to me. :)

> Generally I'm fine with `--s2k-count 65011712`; on my system it needs 1.00s to encrypt just the "test" string (so mostly a key derivation benchmark), but it may be somewhat longer on older systems; do we care? On a Thinkpad T61 (Core 2 Duo T7300): 1.42s. Ok, not bad at all.

I suggested 65011712 as the default because it's the maximum. I think we should assume Qubes users want the "most secure" options by default. If they value backward compatibility more, they can override the defaults. (The security gain of the "most secure defaults" strategy is in some cases debatable, but the choice of defaults acts as an important reputational signal for a security-oriented OS.)
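For the record, a rough way to reproduce that benchmark (the passphrase and settings are placeholders; `--yes` lets gpg write over `/dev/null`):

```
time ( echo test | gpg --batch --yes --passphrase test --symmetric \
       --s2k-mode 3 --s2k-digest-algo SHA512 --s2k-count 65011712 --output /dev/null )
```

Since the input is tiny, nearly all of the measured time is the S2K key derivation.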
> What about integrity protection (openssl)? Is it a problem that the passphrase is used directly there (at least I think it is)?

Yes, this is the other part of the problem.

> If yes, and if we want to change that, it would require some more work to make it compatible with older formats, because of the integrity protection of the backup header (only after its verification can we get properties like backup format version, encryption algorithm, etc.).

True...

> Backup data will be integrity-protected twice: once by openssl, and additionally by gpg. I don't think it is a problem, but I'm not a cryptographer.

IANAC either. I take it integrity-protection from gpg can't be turned off?

> I don't think we should parse `~/.gnupg/gpg.conf`.

The only reason I suggested parsing `~/.gnupg/gpg.conf` is because that seemed like the most logical place for the user to put their custom settings, but certainly if it's dangerous to parse it (or undesirable for another reason), then let's not do it. I really only meant to suggest that users should be able to specify their own options for `s2k-cipher-algo`, `s2k-digest-algo`, `s2k-mode`, and `s2k-count` (and any others that should be available).

> Either let it use the defaults

Sorry, I don't understand. Can you rephrase?

> (possibly changed by the user in that file),

Which file?

> or override some options.

Would it be something like this?

```
qvm-backup --encrypt --s2k-count 1024 [...]
```
@adrelanos:

> Yes, programs should not rely on user settings in `~/.gnupg/gpg.conf` indeed.

I'm glad you and @marmarek pointed this out to me. As mentioned in my previous message, my suggestion about reading from `~/.gnupg/gpg.conf` was really just an expression of my desire for users to be able to choose their own custom settings for gpg to use.

However, now I'm curious: why is it such a bad idea for programs to read from/rely on `~/.gnupg/gpg.conf`?

> Configurable options is a pony.

What does this mean?
On Fri, Oct 23, 2015 at 07:16:42PM -0700, Axon wrote:

> > Backup data will be integrity-protected twice: once by openssl, and additionally by gpg. [...]
>
> IANAC either. I take it integrity-protection from gpg can't be turned off?

Yes, I think it can't be disabled.

> > I don't think we should parse `~/.gnupg/gpg.conf`.
>
> The only reason I suggested parsing `~/.gnupg/gpg.conf` is because that seemed like the most logical place for the user to put their custom settings [...] I really only meant to suggest that users should be able to specify their own options for `s2k-cipher-algo`, `s2k-digest-algo`, `s2k-mode`, and `s2k-count` (and any others that should be available).
>
> > Either let it use the defaults
>
> Sorry, I don't understand. Can you rephrase?
>
> > (possibly changed by the user in that file),
>
> Which file?

qvm-backup (one program) should not read `~/.gnupg/gpg.conf` (the configuration of another program). We have no control over it: its syntax can change, its location may change, some other options may be introduced (including another file?), etc.

So for each option we should either set it explicitly on the command line (always), or not set it at all. Do not try to guess whether the user set some custom value in `gpg.conf` or not.

> > or override some options.
>
> Would it be something like this?
>
> `qvm-backup --encrypt --s2k-count 1024 [...]`

Yes.
Axon:

> > Yes, programs should not rely on user settings in `~/.gnupg/gpg.conf` indeed.
>
> I'm glad you and @marmarek pointed this out to me. [...] However, now I'm curious: why is it such a bad idea for programs to read from/rely on `~/.gnupg/gpg.conf`?

It's not super bad, not a huge issue, just bad practice, because that file is for user settings, not for arbitrary tools; staying out of it avoids conflicts with whatever the user did. It's hard to modify a user config file, specifically if there is no `.d` folder, and especially in the home folder. By avoiding it, it gets easier to configure things tailored for the tool in question.

> > Configurable options is a pony.
>
> What does this mean?

An optional, non-essential, nice-to-have thing, if someone contributes it.
Axon:

> integrity-protection from gpg can't be turned off?

I don't know if it should be turned off (why?), but it can be configured: `--force-mdc` / `--disable-mdc`.
> qvm-backup (one program) should not read `~/.gnupg/gpg.conf` (the configuration of another program). We have no control over it: its syntax can change, its location may change, some other options may be introduced (including another file?), etc. So for each option we should either set it explicitly on the command line (always), or not set it at all. Do not try to guess whether the user set some custom value in `gpg.conf` or not.

Ok, that makes sense. Thank you for explaining it to me. :)

> It's not super bad, not a huge issue, just bad practice, because that file is for user settings, not for arbitrary tools. [...] By avoiding it, it gets easier to configure things tailored for the tool in question.

Right, makes sense.

> An optional, non-essential, nice-to-have thing, if someone contributes it.

Oh, I see. Thanks. :)

> I don't know if it should be turned off (why?), but it can be configured: [...]

Oh, ok. I guess the only reason to turn it off would be if we're worried about double integrity-protection causing problems, but it sounds like that's not really a concern.
FWIW, I wonder whether the minimum-effort variation would be to apply the key stretching just once, to encrypt a random master key. E.g.:

- A random master key, `MK`, of 512 bits is generated.
- `MK` is stored in the backup as `./master-key`, encrypted under the user's passphrase, using a key stretching method of some variety (I'll get back to the details of such later).
- An encryption key `EK` is derived as `EK = HMAC_SHA512(MK, "qubes_backup_encryption_key")`.
- `EK`, truncated to 256 bits, is used to encrypt files in the current manner, but it is supplied to `openssl` via the parameter `-K` (i.e., as a raw key); I suspect this also means that the IV will have to be generated by the backup program and be manually fed in to `openssl` via `-iv`.
- An authentication master key `AMK` is derived as `AMK = HMAC_SHA512(MK, "qubes_backup_authentication_key")`.
- A per-file authentication key `AK_<filepath>` is derived as `AK_<filepath> = HMAC_SHA512(AMK, "<filepath>")`.
- Each file's MAC goes in a `<blah>.hmac` file, in the current manner.

Using a unique key for each file's MAC resolves a possible issue where an attacker could swap files around in the archive, potentially migrating an infection from a lower-security VM to a higher-security VM.
This also avoids having an attacker brute-force the user's passphrase against one/any of the HMACs.
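A minimal sketch of these derivations in shell, assuming OpenSSL >= 1.0.0 (variable and file names are mine, not part of the proposal):

```
MK=$(openssl rand -hex 64)    # 512-bit random master key, hex-encoded
EK=$(printf '%s' qubes_backup_encryption_key |
     openssl dgst -sha512 -mac hmac -macopt hexkey:"$MK" | awk '{print $2}')
AMK=$(printf '%s' qubes_backup_authentication_key |
      openssl dgst -sha512 -mac hmac -macopt hexkey:"$MK" | awk '{print $2}')
IV=$(openssl rand -hex 16)
# EK truncated to 256 bits (its first 64 hex digits) is passed as a raw key via -K:
openssl enc -e -aes-256-cbc -K "${EK:0:64}" -iv "$IV" -in private.img -out private.img.enc
```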
A NIST-specified KDF could be used on the Master Key to derive the other keys instead, but none lend themselves to being specified in an `openssl` one-liner. Also, 2 of the 3 algos in NIST 800-108 more or less reduce to the above HMAC invocations if we don't need subkeys longer than the HMAC's block size.
Alas, this proposal requires more operations before verifying the `./backup-header` file, which is bad both for code-exposure concerns and for backwards compatibility. However, preserving the `./backup-header.hmac` in its current form will leave a method by which, as mentioned, an attacker can bypass the key stretching and brute-force the bare passphrase (HashCat has an "HMAC-SHA256 (key = $pass)" mode already that might be able to be dropped in to serve such a purpose, though I've not tested it).
I've not looked at the code yet, but if GnuPG does not do /too much/ parsing, maybe it's still okay to use it for key stretching?
There was actually a competition held recently to pick a good key stretching ("password hashing") algorithm. The winner was an entrant called Argon2, which beat out other entrants including yescrypt, an improved scrypt variant. Note that a key issue with scrypt is that it allows time/memory trade-offs to an extent that's undesirable in an optimal key stretching algorithm, though this issue can be mitigated by tuning the scrypt invocation to require sufficiently large memory. And all of these key stretching algorithms wipe the floor with GnuPG's S2K Mode 3, which is just an iterated hash, and therefore not memory-hard at all (i.e., GPUs and ASICs can attack it much faster than the CPUs that will be computing it under normal circumstances).

Of course, the problem with exotic key stretching algos is that it's hard to use them without making manual backup recovery dependent on non-ubiquitous software. And a million iterations of SHA512 is much better than no stretching at all.

I'll look into this a bit more later, and report back with any developments :)
On Sun, Oct 25, 2015 at 10:13:47AM -0700, pqg wrote:

> FWIW, I wonder whether the minimum-effort variation would be to apply the key stretching just once, to encrypt a random master key.

Generally we want to avoid doing too much low-level crypto handling, because we are not cryptographers.

> Using a unique key for each file's MAC resolves a possible issue where an attacker could swap files around in the archive, potentially migrating an infection from a lower-security VM to a higher-security VM.

This isn't a problem, because paths are stored inside the archive(s). The outer layer is only for convenience (streaming the backup while creating it, partial restore). Take a look at point 7 here: https://www.qubes-os.org/en/doc/backup-emergency-restore-v3/

> Alas, this proposal requires more operations before verifying the `./backup-header` file, which is bad both for code-exposure concerns and for backwards compatibility. However, preserving the `./backup-header.hmac` in its current form will leave a method by which, as mentioned, an attacker can bypass the key stretching and brute-force the bare passphrase [...]

Yes, IMO this is currently the main concern. Once we get to the `./backup-header`, we can have KDF parameters, salt, iteration count, or any other configuration needed for restoring the rest of the backup. But before that, we need something that works without such knowledge...

> I've not looked at the code yet, but if GnuPG does not do /too much/ parsing, maybe it's still okay to use it for key stretching?

I think we previously concluded that we don't want to expose gpg to any unverified input.

> Of course, the problem with exotic key stretching algos is that it's hard to use them without making manual backup recovery dependent on non-ubiquitous software. And a million iterations of SHA512 is much better than no stretching at all.

Also, using exotic algorithms isn't a good idea, because they haven't received that much review/analysis/testing. But generally, IMHO scrypt as a KDF (exactly the thing it was designed for, right?) should be ok. If we need to employ a KDF ourselves at all...
> There was actually a competition held recently to pick a good key stretching ("password hashing") algorithm. The winner was an entrant called Argon2, which beat out other entrants including yescrypt, an improved scrypt variant. [...] And all of these key stretching algorithms wipe the floor with GnuPG's S2K Mode 3, which is just an iterated hash, and therefore not memory-hard at all (i.e., GPUs and ASICs can attack it much faster than the CPUs that will be computing it under normal circumstances).
>
> Of course, the problem with exotic key stretching algos is that it's hard to use them without making manual backup recovery dependent on non-ubiquitous software. And a million iterations of SHA512 is much better than no stretching at all.
Exactly. I'm skeptical that the practical security benefits of using a newer KDF will outweigh the benefits of being able to do manual backup recovery in a variety of disaster scenarios. Security-conscious users (which almost by definition includes all Qubes users) ought to be using reasonably long, high-entropy passphrases (I'll refer to these as "good" passphrases for the sake of brevity).
The current system (in which openssl applies a single round of md5) punishes users who pick good passphrases by capping maximum entropy at a measly 128 bits. We should fix that. However, my understanding (correct me if I'm wrong) is that S2K or PBKDF2, applied to good passphrases, will be uncrackable for the foreseeable long-term future (i.e., even if we assume many quadrillions of guesses per second, it would still take at least millions of years). If that's correct, then it seems to me that we should instead select the most trusted, tested, ubiquitous tools available. My backup is no good to me if I can't recover the data because the only software available to me is whatever's on an old Ubuntu disc, for example. (We can't predict in advance what kinds of emergency scenarios Qubes users might find themselves in, and under what conditions they might need manual access to their data.)
Having said all that, if `scrypt` provides exactly what we're looking for, then it may be worthwhile to go with that. (And advise users to store a copy of `scrypt` with their backups, or something.) If anyone can get this right, it's Colin Percival.
> we should add a crypto tag to this (and any other similar issues) and I will create a thread with Cure53 (or NCC / iSec Partners) about this. They can provide free crypto advice to OTF-funded projects.

Great idea!
@axon-qubes
I concur with your position that, if at all possible, the data should be recoverable with an "old Ubuntu disc".
The scrypt binary and python-scrypt module don't seem to be currently packaged with Fedora anyway (they are available on Debian Jessie; I've not checked elsewhere). And keeping scrypt alongside the backup would be problematic if we need to invoke scrypt /before/ we can verify the integrity of anything.
As an aside, I think we should distinguish between 2 uses of "KDF": (1) deriving multiple subkeys from an existing high-entropy master key, and (2) stretching a (possibly weak) passphrase into a key.

For instance, scrypt could provide function 2, but not function 1. Function 1 is served by things like NIST 800-108, which in the special case of all subkeys being of equal or shorter length than the underlying hash function, reduces to `HMAC(MK, "\x01<subkey_purpose_specifier_string>\x00\x20")`, where the `\x01` and `\x00` are no longer serving a particular purpose in this reduced form, and `\x20` is the length of the resulting subkey. More conservative KDFs (purpose 1) use a single keystream that feeds back the result of the previous HMAC invocations or, more conservative still, they create one feedback pipeline whose output is passed to a second HMAC invocation to produce the actual keystream (TLSv1.2 does this; see section 5). However, all of these are prohibitively awkward in the "old Ubuntu disc" scenario. By contrast:
```
echo -n qubes_backup_authentication_key | openssl dgst -sha256 -hmac <Hex(Master Key)>
```

...should work almost anywhere, yield the desired different keys for different purposes, and be just as secure, as long as SHA-256 is a strong hash function. (The fancier derivation methods are just meant to give some additional protection if it should turn out that the underlying hash function is weaker than expected.) Though note that the Master Key (randomly generated or the result of scrypt or some other key stretching) will be a binary blob, which will be tough to pass via `-hmac` unless encoded as hex (as above) or base64, which is kind of unsightly protocol-wise. On OpenSSL >= 1.0.0 (Ubuntu since at least 2011) the following invocation could be used instead to pass the master key hex-encoded:

```
echo -n qubes_backup_authentication_key | openssl dgst -sha256 -mac hmac -macopt hexkey:<Hex(Master Key)>
```

so that the output is actually `HMAC_SHA256(MK, "qubes_backup_authentication_key")` rather than `HMAC_SHA256(Hex(MK), "qubes_backup_authentication_key")`, as it would have to be with `-hmac`.
> However, my understanding (correct me if I'm wrong) is that S2K or PBKDF2, applied to good passphrases, will be uncrackable for the foreseeable long-term future (i.e., even if we assume many quadrillions of guesses per second, it would still take at least millions of years).

With no key stretching (e.g., just taking the SHA-256 of the passphrase), an attacker capable of 10 quadrillion tries a second for 10 million years would exhaust a ~101-bit keyspace. This represents approximately an 8-word diceware passphrase (~103 bits). With ~1 million rounds of key stretching, as in S2K Mode 3 at max settings, ~81 bits of input keyspace could be searched, which requires a diceware passphrase of 7 words (~90 bits), though you would get pretty close with 6 words (~78 bits).
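Sanity-checking those figures: 10^16 guesses/sec × 10^7 years × ~3.16×10^7 sec/year ≈ 3.16×10^30 ≈ 2^101.3 total guesses. And ~10^6 stretching rounds ≈ 2^20, which shaves those same 20 bits off the searchable space: 101 - 20 = ~81 bits.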
A GPU running hashcat can currently trial on the order of 2 billion SHA-256 sums per second (2*10^9/sec) so 10^16/sec represents a powerful adversary indeed. 10 million years is of course just providing a robust margin of error, since in 30 years the crypto will be completely obsolete, and in 100 years we'll all be dead.
So 128 bits is still enough for anyone. Unless your threat model includes the arrival of quantum computers in the very near term, in which case you should double all your symmetric crypto key length requirements.
That said, as you originally advanced, even if nothing else changes from this discussion, `-md sha256` should be added to the `openssl enc` invocations. It's much more conservative.
None of this, of course, helps with the MACs, where we really want to salt and stretch the user's passphrase before doing anything, even if the same resulting key is used for both verification and encryption.
Adding a salt not only avoids precomputation attacks, but binds the files to the particular backup. While the renaming attack I previously feared is indeed impossible, I don't /think/ that there's currently any protection against shuffling in an identically-named VM from a different backup, as long as both backups were made with the same passphrase.
However, there then needs to be some way to store the salt and, if variable, any key stretching parameters, and load them before verification. Would it be an acceptable risk to parse just the key=values out of the `./backup-header` and interpret those necessary to establish the key, then check the `./backup-header.hmac`, and finally interpret the rest of the values (e.g., if, in the future, a timestamp is added to the header, it would be good to not have to parse that until after verification)? Setting hard line and file length limits might help limit the attack surface.
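For context, the v3 `backup-header` is already a small key=value file, roughly along these lines (the exact field set is from memory and may differ):

```
version=3
hmac-algorithm=SHA512
crypto-algorithm=aes-256-cbc
encrypted=True
compressed=True
```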
This certainly presents less attack surface than any way I can think of to use GPG for pre-verification stretching. Though I'm aware I haven't actually proposed another way to perform key stretching yet. Python has baked-in `hashlib.pbkdf2_hmac()`, but only since v3.4 and v2.7.8, which are far too recent (mid-2014). I'd hoped it might be possible to use `gpg --symmetric` on a fixed value, hash the resulting "file", and use that as the key, but I've not been able to find a way to pass a salt and/or IV to GPG, so it unfortunately generates a random one on each invocation. The scrypt binary has the same problem, assuming its availability issues could be overcome.
The bcrypt functionality in `python-passlib` may be a reasonable bet. I can't guarantee how ubiquitous it is, but it's in Debian >= 7 (with backports for 6), definitely in Ubuntu >= 12.04, and available on Fedora 21. bcrypt has been fairly heavily popularized and so should be common enough; I suspect that a working implementation should be more readily obtainable for our hypothetical frazzled power user than scrypt.
On the other hand, the scrypt binary is pretty easy to build if you've got gcc and autotools. As mentioned, it's only good for encrypting/decrypting files, not plain key stretching, so we'd be back to the `./master-key` idea or similar, but it still presents a considerably smaller attack surface than GnuPG.
> Adding a salt not only avoids precomputation attacks, but binds the files to the particular backup. While the renaming attack I previously feared is indeed impossible, I don't /think/ that there's currently any protection against shuffling in an identically-named VM from a different backup, as long as both backups were made with the same passphrase.

Yes, such an attack probably is possible.

> However, there then needs to be some way to store the salt and, if variable, any key stretching parameters, and load them before verification. Would it be an acceptable risk to parse just the key=values out of the `./backup-header` and interpret those necessary to establish the key, then check the `./backup-header.hmac`, and finally interpret the rest of the values [...]? Setting hard line and file length limits might help limit the attack surface.

This might be an option if nothing better comes to us. Those pre-verification options need to be clearly marked (some prefix in the name?) and reduced to the absolute minimum. If we have to go that way, I think this should include the `version` field, which would determine most of the parameters. For example, it would be a bad idea to include the hash algorithm in that pre-authentication set, because an attacker would be able to replace it with some weak value (vide "cipher_null" in the LUKS header [1]).

> The bcrypt functionality in `python-passlib` may be a reasonable bet. I can't guarantee how ubiquitous it is, but it's in Debian >= 7 (with backports for 6), definitely in Ubuntu >= 12.04, and available on Fedora 21. [...]

And according to previous comments, `bcrypt` is also a somewhat sensible choice, right?

> On the other hand, the scrypt binary is pretty easy to build if you've got gcc and autotools. As mentioned, it's only good for encrypting/decrypting files, not plain key stretching, so we'd be back to the `./master-key` idea or similar, but it still presents a considerably smaller attack surface than GnuPG.

I think we can simply have a hex-encoded, encrypted master key included in the backup header.

And as the backup format is going to be more and more complex, maybe it's a good idea to include an emergency restoration script as one of the plain-text, integrity-protected files (similar to backup-header)? So the emergency procedure would include manual verification of the backup header and that script, and then simply running the script.

[1] https://github.com/QubesOS/qubes-secpack/blob/master/QSBs/qsb-019-2015.txt
I notice that all of pqg's messages have disappeared from this thread. pqg, did you intentionally delete them? If so, may I ask why? (Feel free to send me an encrypted message if you prefer.) Note that this may create difficulties for people who try to read this thread in the future (e.g., cryptographers interested in helping with this issue).
> I notice that all of pqg's messages have disappeared from this thread.

As has the whole account...

> As has the whole account...

Ah. In light of that, I'm not sure what to make of his/her/their previous suggestions...

Apparently I tripped over some bot heuristics and was hellbanned, presumably because I am using Tor and posted some comments with some links in them. I trust that you can see me again?

> I trust that you can see me again?

Yes.
Hi. Use the scrypt source code for key derivation. It's simple, has a good license, and lets you create arbitrarily hard derivations. I'm using it for a project I'm working on, and it works just fine.
Take a look at https://github.com/QubesOS/qubes-issues/issues/971#issuecomment-151125927 and https://github.com/QubesOS/qubes-issues/issues/971#issuecomment-151005970, mostly about scrypt availability and its usage for sole key derivation. One of the important requirements for the backup format here is the ability to restore the data using commonly available tools, not necessarily on Qubes OS.
> Python has baked-in hashlib.pbkdf2_hmac(), but only since v3.4 and v2.7.8, which are far too recent (mid 2014).

There is a pbkdf2 implementation in the passlib python module (http://pythonhosted.org/passlib/lib/passlib.utils.pbkdf2.html), and it has been there for quite a long time. As for bcrypt in that library, I can see only an API to create a password hash; I'm not sure how it can be applied as key derivation...
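If I'm reading the linked passlib docs right, using it for key derivation would look something like this (salt, rounds, and passphrase are placeholders; Python 2 era):

```
python -c 'from passlib.utils.pbkdf2 import pbkdf2; print pbkdf2("passphrase", "salt1234", 1000000, keylen=64, prf="hmac-sha512").encode("hex")'
```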
What about including an implementation of a command-line program that simulates the six words of diceware using /dev/random? If that were a thing, then we could even have a backup option --diceware that generates and shows those six words on the standard output, telling the user to memorize them or back those words up somewhere safe. And that way you need no passphrase stretching like pbkdf2, because the entropy is good enough as is.
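The guts of such a tool could presumably be as small as this (word-list location assumed; a real diceware list has 7776 curated words, while a system dictionary gives even more entropy per word at the cost of memorability):

```
# pick six words, with coreutils shuf drawing randomness from the kernel RNG
shuf --random-source=/dev/random -n 6 /usr/share/dict/words | tr '\n' ' '; echo
```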
I think that would be a poor substitute for proper key derivation. It's basically forcing the user to choose between two bad options (three including the status quo):

> And that way you need no passphrase stretching like pbkdf2, because the entropy is good enough as is.

But remember that low-entropy passphrases are not the main problem here. We tell users that they ought to have reasonably high-entropy passphrases and assume they will. The problem is that the current implementation takes the high-entropy passphrase a user provides and, in many cases, reduces its entropy.
Thinking out loud:

- `openssl enc` (the symmetric encryption module we use to encrypt Qubes backups) has not received much developer attention.
- It seems that hardly anyone uses `openssl enc`, and that developers don't spend much time on the code.

Given all of this, I think it's not inconceivable (perhaps even likely) that there are more bad bugs lurking in `openssl enc` that could expose Qubes users' data in the future. If moving away from OpenSSL very soon isn't a viable option (due to lack of a crypto expert), we should at least consider alternatives like double-encryption schemes. For example, we initially backed away from using GPG because it did too much parsing of untrusted data before verifying authenticity, but what's preventing us from piping data through GPG, then through the current OpenSSL setup (perhaps even using two different algos, i.e., cascade encryption)? Presumably, we wouldn't be giving up anything on the authentication front, since the backup would look externally the same as it does now. We'd just be feeding `openssl` ciphertext instead of plaintext.
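In pipe form, that cascade would presumably look something like this (file names are placeholders, and the same passphrase is reused for both layers purely for illustration; `PASSPHRASE` is assumed to be exported in the environment):

```
tar -c vm-dir | gpg --batch --passphrase "$PASSPHRASE" --symmetric --output - \
  | openssl enc -e -aes-256-cbc -md sha256 -pass env:PASSPHRASE -out backup.enc
```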
tbh, if all you want is a wrapper for encryption with verification, I think you can do well with the plain scrypt command from the scrypt distribution. It encrypts and verifies, and I believe it does so in a streamable way (which is what you want); but if I am in error, you can always spit blocks of a specific size to the output file (or to multiple files) and only decrypt the first block to get to the header. The same file can then be scrypt --decrypt'ed in the same order. You don't have to deal with gruyeressl, you don't have to deal with monstrous different algorithms; check the source and see what it does, it's very simple. And it's the same shit used on the client side for the tarsnap service prior to sending data to the cloud, which is secure as far as we know.
Build a package, ship it in the Qubes repos, drama solved.
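For reference, the tool's interface is about as small as it gets (file names assumed; `-t` caps the time spent on key derivation, per its man page):

```
scrypt enc -t 5 backup.tar backup.tar.scrypt   # prompts twice for a passphrase
scrypt dec backup.tar.scrypt backup.tar        # refuses to decrypt if authentication fails
```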
> tbh, if all you want is a wrapper for encryption with verification, I think you can do well with the plain scrypt command from the scrypt distribution. [...]

Sounds promising. @marmarek, what do you think?
I'm curious to get others' opinions on some general questions:

- Do you use `qvm-backup` as your main backup solution?
- You already encrypt your data some other way before/after using `qvm-backup`?
- You don't store your backups in a vulnerable place (e.g., only on local external hard drives, not in the cloud)?
- Is `openssl enc` secure enough as is?

If it turns out that, e.g., hardly anyone else actually uses `qvm-backup`, then maybe this is a moot point. Or, even if people do use it but I'm the only one worried about this, maybe I'm just being too paranoid. :)
> - You already encrypt your data some other way before/after using `qvm-backup`?
> - You don't store your backups in a vulnerable place (e.g., only on local external hard drives, not in the cloud)?

For me - those two ;)
> For me - those two ;)

Ah, interesting. You encrypt before or after (or both)? (I'm guessing the whole external drive is LUKS-encrypted?)

> (I'm guessing the whole external drive is LUKS-encrypted?)

Exactly.

> (I'm guessing the whole external drive is LUKS-encrypted?)

In fact, this is an artifact of using the old backup format, which didn't support encryption itself...

> In fact, this is an artifact of using the old backup format, which didn't support encryption itself...

Ah, yes, I remember those days. :)

But what, if anything, do you do about offsite backups, then?

> But what, if anything, do you do about offsite backups, then?

Keep that disk far enough...

> Keep that disk far enough...

Oh, ok!

Should we just recommend that users do the same, I wonder? IOW, display a warning that says something like, "Do not rely on Qubes backup for strong encryption. Instead, encrypt the data yourself"?
> Sounds promising. @marmarek, what do you think?

AFAIR the whole point of using `openssl enc` was to use something broadly available, for easy emergency restore on a non-Qubes system. scrypt looks like it is available in Debian, but not in Fedora. So in theory it passes the "old Ubuntu" test.

While the tool is indeed quite simple, it is advertised as a demonstration of the scrypt key derivation function, so I'm not sure it is meant for real-world usage. But the more I read the code, the more I think it is a good idea to use it.

What we'd still need is some verification of the backup header, which IMO shouldn't be encrypted, for compatibility reasons (if we ever decide to change encryption again, we need the ability to detect that during restore). Maybe the `backup-header.hmac` file can simply be an `scrypt`-encrypted version of the header (which should handle authentication itself)?
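Concretely, presumably something like this (file names as in the current format; `scrypt` authenticates on decryption, so a successful decrypt that matches the plaintext copy verifies the header):

```
# backup side: ship the plaintext header plus an scrypt-encrypted copy of it
scrypt enc backup-header backup-header.hmac
# restore side: decrypt the copy and compare it against the plaintext header
scrypt dec backup-header.hmac header.tmp && cmp header.tmp backup-header
```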
> AFAIR the whole point of using openssl enc was to use something broadly available, for easy emergency restore on a non-Qubes system. scrypt looks like it is available in Debian, but not in Fedora. So in theory it passes the "old Ubuntu" test.

It seems like `scrypt` is prevalent enough that it strikes a reasonable balance between availability and security. To my mind, the "old Ubuntu test" is passed if the user can, e.g., store a small binary with her backups that can be installed on commonly available systems without Internet access. The point of the test is just to make sure that we give reasonable consideration to data availability across plausible disaster scenarios. I don't think we should take the requirement too literally.
Maybe one possibility is to give the user the option to create backups using the current system (e.g., `--version=3`), in case they require the ubiquity of OpenSSL.
> While the tool is indeed quite simple, it is advertised as a demonstration of the scrypt key derivation function, so I'm not sure it is meant for real-world usage.

Yeah, that part is a bit worrying... When it comes to something fundamental like backup encryption, I would prefer to see Qubes stay on the conservative side and employ widely-acknowledged best practices. Unfortunately, the consensus in this area seems to be "use GPG."
> What we'd still need is some verification of the backup header, which IMO shouldn't be encrypted, for compatibility reasons [...]. Maybe the backup-header.hmac file can simply be an scrypt-encrypted version of the header (which should handle authentication itself)?

Hm, I'm not sure I follow. Here's my understanding (tell me if I'm going wrong somewhere):

The current order:

- Verify `backup-header.hmac`.
- Read `backup-header`.
- [Continue...]

We do it this way because we don't want to parse `backup-header` before verifying that it's non-malicious.

So, suppose we decide to make `backup-header.hmac` by `scrypt`-encrypting `backup-header` (since that includes authentication). Then, sometime in the future, we decide to switch from `scrypt` to `foocrypt` (which also includes authentication). Won't FutureQubes have to try decrypting `backup-header.hmac` with `foocrypt`, then `scrypt` (or vice versa) until one works (authenticates), before reading `backup-header`?

If so, then we always have to decrypt `backup-header.hmac` before reading `backup-header`. But if `backup-header.hmac` is just the ciphertext of `backup-header`, then this first step will always yield the plaintext of `backup-header`. If that's correct, then why store a duplicate plaintext of `backup-header` alongside `backup-header.hmac`? What does that gain us?
> Maybe one possibility is to give the user the option to create backups using the current system (e.g., `--version=3`), in case they require the ubiquity of OpenSSL.

The current design is to always create the latest backup version, and be able to restore any of them. Support for creating multiple backup versions would make the code more error-prone (more paths to test, more corner cases, etc.). This is something we really don't want in backup-handling code...

> Hm, I'm not sure I follow. Here's my understanding (tell me if I'm going wrong somewhere):
>
> The current order:
>
> - Verify `backup-header.hmac`.
> - Read `backup-header`.
> - [Continue...]

Correct.

> Won't FutureQubes have to try decrypting backup-header.hmac with foocrypt, then scrypt (or vice versa) until one works (authenticates), before reading backup-header?

Yes, indeed.

> If that's correct, then why store a duplicate plaintext of backup-header alongside backup-header.hmac? What does that gain us?

To use the same header-extraction code for every backup version. If `backup-header.hmac` turns out to be the `scrypt`-ed backup header, the plaintext one can simply be deleted (or better, compared for authenticity). But if not, it may be the hmac of the actual header stored in the `backup-header` file (the current backup format). Also, having a plaintext `backup-header` will help with disaster recovery: the user may look at it (especially the `version` field) to see which instructions to use to manually restore the data (including header verification...).
> The current design is to always create the latest backup version, and be able to restore any of them. [...]

Ah, good point. (Especially when the potential benefit is so dubious.)

> To use the same header-extraction code for every backup version. [...]

Ok, that makes sense.

> Also, having a plaintext backup-header will help with disaster recovery [...]

True. Even though we advise against reading it before verifying authenticity, in many disaster scenarios authenticity may not matter much.
One reason not to use `scrypt enc` as-is: it relies on a homebrew implementation of AES. Granted, it was written by Colin Percival, so it is much more trustworthy than most roll-your-own AES implementations. But still, it is not as thoroughly tested and vetted as the AES implementations in, e.g., GPG and dm-crypt. So the safer way to go would be to use `scrypt` for the KDF and GPG for the crypto, IMHO.
See: https://groups.google.com/d/msg/qubes-devel/CZ7WRwLXcnk/u_rZPoVxL5IJ