skaunov opened this issue 9 months ago
Hey, we thought they were correct because JS and Rust outputs agreed. Can you add logging and check where in the process your numbers diverge? #24 should have resolved this.
Yes and no. The current implementations (and their output) were introduced before the hash_to_curve spec was finalized, IIRC. That's why it's the place I'm testing after verifying my inputs. So it wouldn't be a surprise if those implementations match and are "correct" w.r.t. some interim hash_to_curve spec.
If that's the case, it's possible to say PLUME will be using that hash_to_curve variant and call it a day; but I believe implementors and other participants/users won't be very happy with an approach like that.
That's also why I believe having a hash_to_curve unit test would be beneficial for the project.
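For concreteness, here's a minimal sketch of the kind of unit test I mean. It assumes the Rust side works with k256 types and the hex crate, hash_to_curve is a hypothetical stand-in for whatever helper the implementation actually exposes, and the placeholder constants would be filled once with values produced by a trusted reference run of RFC 9380's secp256k1_XMD:SHA-256_SSWU_RO_ suite over the same [msg, pk] input:

```rust
// Sketch only; names and signatures are assumptions, not the crate's API.
#[cfg(test)]
mod hash_to_curve_vector {
    use k256::elliptic_curve::sec1::ToEncodedPoint;

    // Example input; use the same bytes the JS implementation hashes.
    const MSG: &[u8] = b"example message";
    // Placeholders: paste the values from the agreed reference run.
    const PK_COMPRESSED_HEX: &str = "<33-byte SEC1-compressed pk, hex>";
    const EXPECTED_POINT_HEX: &str = "<expected compressed point, hex>";

    #[test]
    fn matches_reference_vector() {
        let pk = hex::decode(PK_COMPRESSED_HEX).expect("fill in the pk placeholder");
        // Hypothetical helper under test; expected to implement the
        // secp256k1_XMD:SHA-256_SSWU_RO_ suite over the concatenation [MSG, pk].
        let point: k256::AffinePoint = crate::hash_to_curve(MSG, &pk);
        // Compare the SEC1 compressed encoding with the reference value.
        let compressed = point.to_encoded_point(true);
        assert_eq!(hex::encode(compressed.as_bytes()), EXPECTED_POINT_HEX);
    }
}
```

Once a test like this passes against pinned reference values, the same constants double as a shared vector the JS side can test against too.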
Oh interesting, what changed between the draft spec and the final spec? I thought they were the same, but I might be mistaken. Yeah, it's possible we are not up to date or have an error; I'm happy to update to the latest version of things, and a unit test sounds great.
It seems that all the implementations use the same function at the end of the day. But I fail to see why they have so many layers before they actually call it (the arkworks flavor does a conversion, and there are still complex twists which make everything difficult to read and don't leave me with any sense of purpose).
I mean, I feel like the thing is likely OK, but it is exactly the kind of thing that demands unit testing. It should be OK, since it seems improbable that any conversion error would be reproduced identically in JS.
You said you saw an inconsistency; what exactly was that, in what program, and inconsistent with what result? The arkworks one was an early proof of concept; I thought the consensus was to set it aside and leave it alone?
hash_to_curve is OK, I checked that another way; so the point of this one is just the rather confusing approach, which would be much clearer with unit tests.
Oh... Then maybe it would be a good idea to move the arkworks one to its own repo and archive it.
Both implementations have overlays over hash_to_curve for some reason. I use plume_arkworks as the reference, and its overlay has compute_h(pk, msg), which basically does nothing but call another helper (with a more helpful order of arguments) that at the end of the day hashes [msg, pk]; so it takes the arguments in the correct order and the value checks out.
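For anyone reading along, this is roughly the wrapper shape being described; the signatures below are guesses for illustration only, not copied from plume_arkworks:

```rust
// Illustrative only; not the actual plume_arkworks signatures.
use ark_secp256k1::Affine; // assumed curve type for the sketch

// Inner helper: the real hash-to-curve work (an RFC 9380 suite over msg || pk)
// would live here.
fn hash_to_curve(_msg: &[u8], _pk: &[u8]) -> Affine {
    unimplemented!("delegates to the underlying hash-to-curve routine")
}

// The overlay: accepts (pk, msg) but forwards with the arguments swapped,
// so what ultimately gets hashed is [msg, pk].
pub fn compute_h(pk: &[u8], msg: &[u8]) -> Affine {
    hash_to_curve(msg, pk)
}
```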
OK, I'll just mark this issue the same as #24 then?
I don't think so, since I read that one as "we need a set of verified test vectors which any implementor would use, and which would show that their implementation is correct". At least I consider that practice the best one. It can also be read as the weaker "while we have no test vectors, let's have the different implementations confirm each other". But it doesn't say anything about internal testing, so at the extreme it could be read as "we don't need unit tests if the integration outputs of the different implementations match".
This one is about the fact that there are some places in the current implementation where working with the code would be really facilitated by unit tests.
It's kind of like https://github.com/plume-sig/zk-nullifier-sig/issues/72#issuecomment-1950386486, but it lets things move forward in the meantime.
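To make the "verified test vectors" idea concrete, here is one possible shape for a shared vector file, deserialized with serde; the field names and the exact set of intermediates are only a suggestion, not an agreed format:

```rust
use serde::Deserialize;

// Purely illustrative: one entry of a language-agnostic test-vector file.
// Pinning intermediates (the hash_to_curve output in particular) lets unit
// tests in every implementation target the same values, not only the
// final signature.
#[derive(Deserialize)]
struct PlumeTestVector {
    message_hex: String,
    secret_key_hex: String,
    public_key_hex: String,
    // SEC1-compressed result of hashing [message, public key] to the curve.
    hashed_to_curve_hex: String,
    nullifier_hex: String,
    // V1 challenge and response scalars.
    c_hex: String,
    s_hex: String,
}

#[derive(Deserialize)]
struct PlumeTestVectorFile {
    vectors: Vec<PlumeTestVector>,
}
```

A JSON file in this shape could be consumed by both the Rust and JS test suites, which is basically the "set of verified test vectors any implementor would use".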
I added a test which should show that hashing to curve isn't correct (at least on signing; before #84). As there are many conversions and different formats/endiannesses, the test isn't super readable, so it's quite possible I just messed up along the way. This is quite easy to check if anybody adds a hash_to_curve test which the current implementation would pass. Anyway, this/such a test is a good contribution to the code. Mine was born when my implementation yielded a signature (V1) different from the current one on the same inputs, whose correctness I double-checked. (Yep, it's quite possible I messed up both the implementation and that test; just two errors are enough for this. On the other hand, it's a good enough signal to dig into this.)