apoelstra closed this pull request 2 years ago
Thanks @jgriffiths for your detailed review! I will accept all your suggestions, except that I will change `secret` to `private` even in the places where you used "secret".
Regarding the name `hash_to_sk`, what if I used `hash_to_private`? I don't want to use `scalar` because I'd like to indicate to the user that the output of this function is private key data, and that anyone who is able to compute it (i.e. anyone with access to the descriptor) can unblind outputs.
Also missing, I think, are a reference implementations section listing the elements-miniscript impl (until there are more impls), and a set of test vectors.
For test vectors for `hash_to_private` I think a mix of public and private keys out of public-key order would be good; my preference is the input string and the expected output private key encoded in WIF. Perhaps we want to specify that all `hash_to_private` keys must belong to the same network, and have an invalid test vector which mixes them. Also an invalid vector with an input that is not a public/private key, and one with an empty list.
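To make the WIF suggestion concrete, here is a minimal Python sketch of encoding a 32-byte private key in WIF. The 0x80 version byte and the compressed-key suffix are Bitcoin mainnet conventions used purely for illustration; the prefix appropriate for a given Elements network is exactly the kind of detail the test vectors would need to pin down.

```python
import hashlib

B58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def b58check_encode(payload: bytes) -> str:
    """Base58Check: append a 4-byte double-SHA256 checksum, then base58-encode."""
    checksum = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    data = payload + checksum
    n = int.from_bytes(data, "big")
    out = ""
    while n > 0:
        n, rem = divmod(n, 58)
        out = B58_ALPHABET[rem] + out
    # Leading zero bytes are encoded as leading '1' characters.
    pad = len(data) - len(data.lstrip(b"\x00"))
    return "1" * pad + out

def to_wif(privkey: bytes, version: int = 0x80, compressed: bool = True) -> str:
    """Encode a 32-byte private key in WIF; the version byte is network-dependent."""
    assert len(privkey) == 32
    suffix = b"\x01" if compressed else b""
    return b58check_encode(bytes([version]) + privkey + suffix)
```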
> 1. I strongly prefer to use 'private key' rather than 'secret key' for EC key pair terminology; this matches other BIPs better, and as noted here there may be confusion with 'secret' public keys. My suggested changes use 'secret' just because that's the minimum review delta, but I think it should be globally changed so the only reference to secrecy is the notion of confusion around secret public keys.
I don't agree but I don't have a strong opinion here. I think "secret" and "private" are interchangeable and both are good. It just should be consistent. libsecp256k1 uses "secret", and "secret" has the tiny advantage that you can write sk and pk.
> 2. Similarly, does the naming of `hash_to_sk` conflict with any current or proposed descriptor/miniscript operator that produces or takes private keys? I feel like `hash_to_scalar` or something other than `sk` would be a better name.
I think the name should make clear that the output is secret (or private). Or do you see a chance that this is reused in a context where the result is not secret?
> 3. There is a negligible but non-zero chance that the `hash_to_sk` construction will produce an invalid key; a resolution or suggested action should this occur should be listed for completeness, as done with BIP32 for example, despite it being for all intents and purposes cryptographically unreachable.
Actually, what's a valid secret blinding key? The same as a signing key, so 0 is invalid? Or is 0 actually allowed?
Assuming that 0 is the only scalar which is an invalid secret blinding key, this happens when the hash is 0 or the group order, so the probability of this happening is 2/2^256 = 2^-255 when SHA256 is modeled as a RO. I'm not strictly against specifying what to do in these cases, but discussions about this tend to be a waste of time, as this simply won't happen in practice. A good choice is aborting or incrementing by 1.
Actually the draft implementation does not match the spec, which says take the hash mod the group order. But we can also change the spec to simply forbid every hash which is too large. None of this really matters in the end.
I missed the hash being taken mod n; if that's the case then no further action is needed except to state that 0 is invalid.
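For concreteness, here is a minimal Python sketch of the mod-n reading discussed above. The function name and the idea that the input is some agreed serialization of the descriptor's keys are placeholders rather than the ELIP's normative text, and the sketch simply aborts in the cryptographically unreachable zero case.

```python
import hashlib

# secp256k1 group order
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def hash_to_private(data: bytes) -> bytes:
    """Hash an agreed serialization of the descriptor's keys to a scalar mod n.

    With the mod-n reduction, 0 is the only invalid result; it occurs exactly
    when the SHA256 output is 0 or n, i.e. with probability 2/2^256 = 2^-255.
    """
    k = int.from_bytes(hashlib.sha256(data).digest(), "big") % N
    if k == 0:
        # Unreachable in practice; aborting (or incrementing by 1) is fine.
        raise ValueError("hash reduced to zero, not a valid private key")
    return k.to_bytes(32, "big")
```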
Renamed `hash_to_sk` to `hash_to_private`.
At this point the remaining TODOs are
I don't know what's wrong with GitHub. It shows new commits on my branch but they aren't appearing on this PR.
I have a couple questions:

* when serializing descriptors as strings, should the keys be sorted at all? should we require they be sorted when parsing?
* should two descriptors compare equal if they differ only in the order of `hash_to_private` keys?

Bear in mind that the keys in the descriptor may be xpubs, potentially with wildcards, and therefore will not sort the same way as the EC keys that get hashed.
As you note, there is the question of sorted in what sense: the expressions or the resulting pubkeys. With Core descriptors you don't need to parse them or do EC math to produce and verify a checksum. You could require that the list items are sorted lexicographically, but it doesn't add much since you still have to sort the pubkeys when processing.
Really, we missed the boat here in Bitcoin Core because the checksum is produced on the input text and not a canonical representation. E.g. `sortedmulti` does not require that the items are sorted, and it produces different checksums for different orders. Even accepting different hardening characters `h` and `'` produces different checksums, and that misfeature can be used to swap the sort order of arbitrary key expressions if they share a common root path with a hardened element.

This is frustrating, but that's what we have to work with. Given this, I suspect it's not worth trying to make Elements-specific operators non-malleable, since you'd also have to fix Core malleability to make generalised expressions non-malleable, and that would result in most descriptor checksums mismatching between Core and Elements, which doesn't feel like a useful state to end up in.
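To illustrate the malleability point, below is a Python transcription of the descriptor checksum algorithm from BIP 380 (Bitcoin Core ships essentially the same reference code); the charsets and generator constants are reproduced here from memory, so check them against the BIP before relying on them. The demo at the end shows that `h` and `'` denote the same hardened step yet yield different checksums, because the checksum covers the raw text rather than a canonical form.

```python
INPUT_CHARSET = (
    "0123456789()[],'/*abcdefgh@:$%{}"
    "IJKLMNOPQRSTUVWXYZ&+-.;<=>?!^_|~"
    "ijklmnopqrstuvwxyzABCDEFGH`#\"\\ "
)
CHECKSUM_CHARSET = "qpzry9x8gf2tvdw0s3jn54khce6mua7l"
GENERATOR = [0xF5DEE51989, 0xA9FDCA3312, 0x1BAB10E32D, 0x3706B1677A, 0x644D626FFD]

def polymod(symbols):
    # BCH-style checksum over 5-bit symbols.
    chk = 1
    for value in symbols:
        top = chk >> 35
        chk = (chk & 0x7FFFFFFFF) << 5 ^ value
        for i in range(5):
            chk ^= GENERATOR[i] if ((top >> i) & 1) else 0
    return chk

def expand(s):
    # Map each character to a 5-bit symbol, plus one extra symbol per group
    # of three characters encoding their charset "class".
    groups, symbols = [], []
    for c in s:
        v = INPUT_CHARSET.index(c)
        symbols.append(v & 31)
        groups.append(v >> 5)
        if len(groups) == 3:
            symbols.append(groups[0] * 9 + groups[1] * 3 + groups[2])
            groups = []
    if len(groups) == 1:
        symbols.append(groups[0])
    elif len(groups) == 2:
        symbols.append(groups[0] * 3 + groups[1])
    return symbols

def descsum_create(s):
    # Append the 8-character checksum to a descriptor string.
    chk = polymod(expand(s) + [0] * 8) ^ 1
    return s + "#" + "".join(CHECKSUM_CHARSET[(chk >> (5 * (7 - i))) & 31] for i in range(8))

# Same key expression, different hardening markers: the checksums differ.
print(descsum_create("pkh([d34db33f/44h/0h/0h]xpub.../0/*)"))
print(descsum_create("pkh([d34db33f/44'/0'/0']xpub.../0/*)"))
```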
> I don't know what's wrong with GitHub. It shows new commits on my branch but they aren't appearing on this PR.
I think your fork doesn't even exist anymore (or it's private).
Ok @jgriffiths, makes sense. I won't try enforcing any sort order (or making any internal operations sort-order-independent).
For what it's worth, it's not that we "missed the boat", but that we simply could not come up with a canonical representation. `'` vs `h`, sort order, etc., is easy enough, but deciding when xpubs of different length are equivalent was not. (You can fingerprint the final derived path, but not when wildcards are present, and we technically support hardened wildcards and post-wildcard hardened steps which makes "derive at some fixed index" impossible, and so on ... and this is not even touching issues with taproot/partial descriptors that we knew we'd eventually have to deal with).
Good catch @real-or-random. I think this repo was originally private by accident, and changing the visibility of the two forks independently has caused some GitHub bug to appear. I fixed the visibility on my fork but that doesn't seem to have helped.
Neat, apparently GitHub does allow me to take actions that will permanently auto-close the PR without warning; it just prevents me from doing anything useful.
Moving to #2, sorry for the confusion.
> For what it's worth, it's not that we "missed the boat", but that we simply could not come up with a canonical representation.
Yes, I think you can have a canonical representation only for a given collection of conditions, excluding the actual keys. That allows you to say things like 'these are BIP44 compatible' or 'this is a BOLT-standard spending path', but not compare two expressions where, say, one is for public consumption and the other is used by the signer.
This ELIP describes CT descriptors, as needed to attach blinding keys to existing descriptors and therefore create confidential addresses.
It does not cover:
* `eltr` descriptors; these need a separate standard preceding this one