:+1: I'm also super eager to get to 1.0.0.
Personally, I don't think that 1.0 should be released if there are non-1.0 dependencies which are part of the public API, like the `digest` crate (which is pulled in via `sha2`), `failure`, and `clear_on_drop`. And I don't plan on doing a 1.0 release for RustCrypto crates until stabilization of const generics (and ideally the addition of MSRV as well).
In addition to having only 1.0 public dependencies, I believe there should be a longer "settling down" time (say, one year) after the last minor version update before releasing 1.0. For example, I think releasing `subtle` v2.0 just a few months after v1.0 was unfortunate.
@newpavlov I think I saw some PRs to remove the `sha2` bits from the public API? If not, they could be feature-gated under something which calls out the instability (e.g. `digest-preview`).
So long as it's not a part of the public API, I think it's fine for `ed25519-dalek` to use not-quite-1.0 gems behind the scenes, and even bump major versions of these in a point release.
> releasing `subtle` v2.0 just a few months after v1.0 was unfortunate.
Indeed it is unfortunate, but in the end I don't think it's really a big deal. There's nothing wrong with the 1.0 version; it's just that the traits are not as flexible as they should be, and fixing that required bumping the major version number -- even though the actual upgrade is pretty painless.
It's fine for other people to want to wait for various language changes before declaring their APIs "done", but I don't think that that's our plan.
I don't think `ed25519-dalek` is a widespread public dependency shared between projects. You could do a major release every 3 months and, from the user's perspective, it would not be a big deal. Major releases are typically a problem in crates that establish interfaces between other crates, especially ones that are API-heavy: `futures`, logging libraries, frameworks, trait crates (like `digest`, and I guess `subtle`), etc. With `ed25519-dalek` I just want to get some stuff signed, and I don't even expose it anywhere.
Also, from my perspective it doesn't matter if it's 1.0 or 0.9. I'm just looking for the `pre.0` to disappear and some certainty that things are OK.
I will chime in as well to say I don't care about major version bumps. They are realistically no different from how pre-1.0 versions are managed.
I also don't care at all about major version bumps. I do want a version with `rand` 0.6 and `curve25519-dalek` 1.0 as dependencies, though.
@dpc

> Also, from my perspective it doesn't matter if it's 1.0 or 0.9. I'm just looking for the `pre.0` to disappear and some certainty that things are OK.
Hi, I put out a 1.0.0-pre.1 release as well as a 0.9 series (0.9 is roughly what 1.0.0-pre.0 was, with some updated dependencies).
@isislovecruft So, should I just "downgrade" and go with 0.9 for now? Thanks.
@dpc Yes, that'll be the easiest, although switching to the new 1.0.0-pre.1 shouldn't be too hard. There's a few API changes in that the usage of SHA512 is now hardcoded, so you won't need to do e.g. `keypair.sign::<sha2::Sha512, rand::OsRng>(b"foo")` etc. (as in, the generic arguments should all be removed in the newer version).
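For example, signing in the de-generified API is just this (a minimal sketch assuming the 1.0-series API, where `Keypair::generate` takes any CSPRNG such as `rand` 0.7's `OsRng` and `sign` comes from the re-exported `Signer` trait):

```rust
use ed25519_dalek::{Keypair, Signature, Signer};
use rand::rngs::OsRng;

fn main() {
    let keypair = Keypair::generate(&mut OsRng);
    // SHA-512 and the CSPRNG are no longer type parameters:
    let signature: Signature = keypair.sign(b"foo");
    assert!(keypair.verify(b"foo", &signature).is_ok());
}
```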
The only thing left for 1.0.0 is that I'm not entirely satisfied with the API for `verify_batch()`. I'd like to try to find a way to make it easy to use for both […] (e.g. `core::iter::once(&public_key)` and pass that to `verify_batch`?)
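For context, here is roughly what the slice-based shape looks like with a single key repeated per signature (a sketch assuming the eventual 1.0-series API, where `verify_batch` takes parallel slices and sits behind a `batch` cargo feature):

```rust
use ed25519_dalek::{verify_batch, Keypair, PublicKey, Signature, Signer};
use rand::rngs::OsRng;

fn main() {
    let keypair = Keypair::generate(&mut OsRng);

    let messages: Vec<&[u8]> = vec![b"msg one", b"msg two"];
    let signatures: Vec<Signature> =
        messages.iter().map(|m| keypair.sign(m)).collect();

    // With parallel slices, a single signer means repeating the same
    // key once per signature; an iterator-based API could accept
    // core::iter::once(&public_key) instead.
    let public_keys: Vec<PublicKey> = vec![keypair.public; messages.len()];

    assert!(verify_batch(&messages, &signatures, &public_keys).is_ok());
}
```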
Just to make sure you're aware: I am on 1.0.0-pre.1 already, and there's nothing wrong with it, except that the `pre` tag scares people into thinking there might be something functionally wrong with that release.. :)
Ah nope, nothing wrong/changed with the crypto.. it's just me being fiddly about trying to make an easy-to-use, hard-to-mess-up API for as many use cases as possible.
I'll just wait until you make up your mind. :D
Having 0.9.0 is good, because ed25519-dalek 0.8.x was the last significant downstream crate depending on pre-1.0 versions of `curve25519-dalek`. I've been yanking old versions as their use declines, and I'm hoping to get rid of them entirely as soon as there's an upgrade path for everyone. So now that people can get onto 0.9.0, that can keep going independently of the batch verification work.
@isislovecruft: I've looked at the `pre.1` API. Why is it necessary to pass the whole hasher to `verify_prehashed`? Rather confusing. :) Please at least add an example: the docs do say "prehashed_message is an instantiated hash digest with 512-bits of output which has had the message to be signed previously fed into its state.", but it's easy to overlook, and the fact that the `Digest` is the hasher, and not the result of hashing, keeps tripping me up (and probably not only me).
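Something like the following would help (a minimal sketch written against the later 1.0-series API with `sha2` 0.9; the `pre.1` method signatures differed slightly):

```rust
use ed25519_dalek::{Keypair, Signature};
use rand::rngs::OsRng;
use sha2::{Digest, Sha512};

fn main() -> Result<(), ed25519_dalek::SignatureError> {
    let keypair = Keypair::generate(&mut OsRng);

    // The "prehashed message" is the *hasher*, with the message already
    // fed into its state -- not the 64-byte output of the hash.
    let mut prehashed = Sha512::new();
    prehashed.update(b"the message to sign");

    let signature: Signature = keypair.sign_prehashed(prehashed.clone(), None)?;
    keypair.public.verify_prehashed(prehashed, None, &signature)
}
```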
https://docs.rs/ed25519-dalek/1.0.0-pre.1/ed25519_dalek/struct.PublicKey.html#method.verify_prehashed states "Returns true if the signature was a valid signature created by this Keypair on the prehashed_message.", but the function does not return bool.
I did something wrong, as all my previous signatures are failing. BTW, "Verification equation was not satisfied" is a confusing way of saying "signature verification failed".
@isislovecruft I guess it doesn't work because of the above.
So, I used to use `Blake2b` for the whole thing. But now I see that `Sha512` will be used, no matter what the type of `prehashed_message` is. So, was using `Blake2b` a transgression in the first place (I hope not), or is it just a bug?
While we're at it: if `prehashed_message` is used just as a sequence of bytes, then I would rather prefer to pass it as such (or a `FixedResult` or something), and not the whole `Digest` object.
> The only thing left for 1.0.0 is that I'm not entirely satisfied with the API for `verify_batch()`

One option is to add a `batch-preview` cargo feature for batch verification which is explicitly exempted from 1.0 semver/stability guarantees. This would give you time to work on that while freezing the rest of the API.
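Concretely, the gate might look like this (purely illustrative; `batch-preview` is the name suggested above, not an existing feature):

```rust
// lib.rs: the batch API only compiles when users opt in with
// `--features batch-preview`, marking it exempt from the crate's
// 1.0 stability guarantees.
#[cfg(feature = "batch-preview")]
pub fn verify_batch(
    messages: &[&[u8]],
    signatures: &[Signature],
    public_keys: &[PublicKey],
) -> Result<(), SignatureError> {
    // ...batch verification as currently implemented...
    unimplemented!()
}
```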
Speaking as one of the few people who could potentially make use of a batch verification API, I imagine for most it is an exotic feature they will never use.
> So, I used to use `Blake2b` for the whole thing. But now I see that `Sha512` will be used, no matter what the type of `prehashed_message` is. So, was using `Blake2b` a transgression in the first place (I hope not), or is it just a bug?
It's not really a transgression since, in terms of security, it's fine to use any digest with 512 bits of output. However, I felt like `ed25519-dalek` should only implement RFC8032 and not try to do any generalisations over the signatures. I've been working on a more generalised Schnorr-like signature scheme over the Ristretto group in the davros repo.
@isislovecruft It leaves me in a weird position, as I already have a bunch of users that generated documents using `dalek` + `blake2b` signatures, and in the new `ed25519-dalek` version these signatures are different/invalid.

It would be great if, somehow, I were still able to use the old method for verification, so I can at least phase out the old signatures. Well.. I guess I could maybe import both the new and old versions of `ed25519-dalek`... though it's rather meh.
But what's even more important: I need to make sure that this won't happen again in the future. I need to establish a signature algorithm/format that people can rely on.
@isislovecruft I've submitted #79 to allow the previous signing method.
I can't figure out why anyone would want the `_prehashed` versions of these functions anyway.
Now ... I'm unsure what to do about `crev`. On one hand, I use `blake2` for everything already, so it makes sense for me to keep using it for signing too. On the other, if I am setting myself up for a world of pain by diverging from interoperability standards, then there's still room for changing to `Sha512`.
I think it might be worth considering using traits/types from the `signature` and `ed25519` (vaporware) crates. I opened an issue about that here:
https://github.com/dalek-cryptography/ed25519-dalek/issues/80
I think I'm just going to give up on `blake2`. One final question that I have: is RFC8032 (or rather `ed25519-dalek`'s implementation of it) fully interoperable? In essence:
@dpc short answer: yes.
Ed25519 is a concrete instantiation of the EdDSA signature framework, which is generic over both an Edwards curve and a hash function (`H`). Ed25519 instantiates EdDSA with the “edwards25519” (RFC8032 terminology) elliptic curve and SHA-512, whereas Ed448 instantiates it with the “edwards448” elliptic curve and SHAKE256.
Using BLAKE2 instantiates EdDSA with a different hash algorithm, and the resulting construction is no longer rightly described as “Ed25519” (confusing terminology, no doubt, but that’s how it works).
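For reference, the (cofactorless) verification equation — the one behind the "Verification equation was not satisfied" error above — is roughly `[S]B = R + [H(R ‖ A ‖ M)]A`, where `B` is the basepoint, `A` the public key, `M` the message, and `(R, S)` the signature. Swapping SHA-512 out for BLAKE2b changes `H`, which is why signatures from the two variants fail each other's checks.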
Ed25519 is in many ways a reaction to the proliferation of options and complexity in ECDSA, and is intended to be an opinionated algorithm with a simple interface. With one minor exception* all Ed25519 implementations are fully interoperable.
*There’s one case that matters when Ed25519 is used in a consensus critical manner for e.g. “blockchain” purposes but is irrelevant to general usage.
> With one minor exception* all Ed25519 implementations are fully interoperable.
That's the most important part. Thank you very much for the answer. In that case I want to use something as interoperable as possible, and using blake was just a bad idea.
Are we there yet? :)