0xbok opened this issue 1 year ago
So, to implement this we need to add CI which would raise a warning when some consistency checks fail? Or is there a better approach?
If CI it is, is there any preference towards a particular system?
Yup, CI is great! All the tests should read from the same configuration file or something that has the inputs and outputs, so the different languages are all verified to match!
So, we need following?
- GitHub Actions workflows which would check a PR (based on which files it changes) against the three test sets included for each language; and
- introduce a (config) file with the data those test sets could share, so that they stay synchronized.
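One way the shared file could look is a single vectors document at the repo root that every implementation's tests load. A minimal sketch, assuming a hypothetical layout — the field names and hex values below are illustrative placeholders, not the real PLUME vectors:

```javascript
// Hypothetical shared test-vector format. Every value is a hex string so each
// language (js, rust, circom) can decode it into its own native types.
const vectors = [
  {
    name: "vector-1",            // illustrative placeholder, not a real PLUME vector
    message: "48656c6c6f",       // input message bytes, hex-encoded
    privateKey: "0a".repeat(32), // 32-byte secret scalar, hex-encoded
    expected: {
      nullifier: "00".repeat(33), // compressed-point placeholder
      c: "00".repeat(32),         // challenge scalar as hex, not a plain number
    },
  },
];

// Sanity-check the file so a malformed edit fails CI the same way everywhere.
function validate(vs) {
  for (const v of vs) {
    for (const hex of [v.message, v.privateKey, v.expected.nullifier, v.expected.c]) {
      if (!/^([0-9a-f]{2})+$/.test(hex)) {
        throw new Error(`${v.name}: not lowercase hex bytes: ${hex}`);
      }
    }
  }
  return vs.length;
}

console.log(validate(vectors)); // prints 1
```

Each implementation would then decode these strings into its native scalar/point types in its own test setup, which is exactly the parsing cost discussed below.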
Yup
I see no good way to provide a single source for automatically reading test vectors during test runs. It would degrade the development experience significantly: it introduces file reading while tests run, decoding of the values into each language's native format, and use of a file outside the working directory (referencing the top level).
There are a few strategies to tackle this one, though they're not simple.
> Yup, CI is great! All the tests should read from the same configuration file or something that has the inputs and outputs, so the different languages are all verified to match!
So, I added basic CI in https://github.com/plume-sig/zk-nullifier-sig/commit/9a068afafae55c25cffd5ff73b16ade57bc02030 and https://github.com/plume-sig/zk-nullifier-sig/commit/ccd26a8dbc9860a646c06c0ab6cd48c74847302f for all four entities the repo provides, as you suggested here and at https://github.com/plume-sig/zk-nullifier-sig/pull/30#issuecomment-1731694991. Let me unassign myself until a direction regarding the analysis in the previous comment is chosen.
> I see no good way to provide a single source for automatically reading test vectors during test runs. It would degrade the development experience significantly. [...]
> There are a few strategies to tackle this one, though they're not simple.
- Add reporting to test runs and a workflow that would analyze the reports. This requires adding test-reporting features to each implementation.
- Add a workflow which would run each implementation on the test values and check their outputs (a kind of additional integration testing done with GA). Such a workflow would likely require frequent maintenance whenever an implementation changes.
- Set up some build system in the repo. Though I've never focused on DevOps, so it's hard for me to recommend or compare the options.
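The first strategy could stay fairly lightweight: each implementation dumps its outputs as JSON, and a small CI step diffs the reports. A sketch of such an analyzer — the report shape (a map from vector name to hex-encoded outputs) is a hypothetical convention, not anything the repo defines:

```javascript
// Compare per-implementation report files (e.g. reports/rust.json,
// reports/javascript.json), each mapping a test-vector name to the
// hex-encoded outputs that implementation produced.
function compareReports(reports) {
  const names = Object.keys(reports); // e.g. ["rust", "javascript"]
  const [base, ...rest] = names;
  const mismatches = [];
  for (const [vector, expected] of Object.entries(reports[base])) {
    for (const other of rest) {
      const actual = reports[other][vector];
      // Stringify so structured outputs and missing entries both count as diffs.
      if (JSON.stringify(actual) !== JSON.stringify(expected)) {
        mismatches.push({ vector, [base]: expected, [other]: actual });
      }
    }
  }
  return mismatches;
}

// A CI step would read the JSON files from disk and fail on any mismatch;
// here the reports are inlined for illustration.
const mismatches = compareReports({
  rust:       { "vector-1": { nullifier: "02ab", c: "1f" } },
  javascript: { "vector-1": { nullifier: "02ab", c: "1f" } },
});
if (mismatches.length) {
  console.error(mismatches);
  process.exitCode = 1;
}
```

The nice property is that no implementation's test framework has to read anything outside its own working directory; only the analyzer sees all the reports.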
These are great points and ideas. I was thinking more of a test.env in each of the repos and a CI that verifies it is the same, rather than a config file per se, but your point regarding parsing etc. still stands. An 80/20 for now is independent CIs with comments that they should be kept consistent with the other files. I agree with you that the easiest thing is to add logging/reporting to a file and ensure the outputs are the same in all cases.
I hadn't really looked beneath Jest in circuits until now, and now I see it just runs circom; also I don't see any GA workflow for it (I guess it's too expensive). So with a light CI this is reduced to the crates and javascript testing workflows. (I'll remove the excessive stuff from <.github/workflows/npm.yml>.)
So I personally would tackle this one further along the following two streams.
So I'd propose to convert this issue into a test-vectors document with a section on known/potential inconsistencies. Then, based on that doc, start introducing sync automation, which could eventually be absorbed by a circom-capable CI.
Sweet, that sounds great. A circom test can use the circom-tester library to do witness generation, at least within tests, which is fast; proof generation works 99% of the time if witness generation works, so I think it's a pretty good substitute.
It's important to have consistency across these impls. A few examples: `c` is returned as a plain number in js and circom vs. a field element in rust. You should run a differential test on random values for message and private key (maybe pre-generate and freeze them).
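Since the representations differ (a plain number in js/circom vs. a field element in rust), the differential test can normalize every output to one canonical form before comparing, and the "pre-generate and freeze" step can use a seeded PRNG. A sketch — `normalize`, `mulberry32`, and `randomHexBytes` are hypothetical helpers for illustration, not existing repo code:

```javascript
// Normalize a scalar from any implementation's output (decimal string,
// 0x-prefixed hex, bare hex, or BigInt) to canonical 32-byte lowercase hex.
// NOTE: an all-digit string is treated as decimal first, so prefer emitting
// hex from every implementation's report to avoid that ambiguity.
function normalize(value) {
  let n;
  if (typeof value === "bigint") n = value;
  else if (/^0x[0-9a-fA-F]+$/.test(value)) n = BigInt(value);
  else if (/^[0-9]+$/.test(value)) n = BigInt(value);
  else if (/^[0-9a-fA-F]+$/.test(value)) n = BigInt("0x" + value);
  else throw new Error(`unrecognized scalar encoding: ${value}`);
  return n.toString(16).padStart(64, "0");
}

// Deterministic PRNG (mulberry32) so "random" message/private-key inputs can
// be generated once, frozen into the shared file, and regenerated identically.
function mulberry32(seed) {
  return function () {
    seed = (seed + 0x6d2b79f5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

function randomHexBytes(rand, byteLen) {
  let out = "";
  for (let i = 0; i < byteLen; i++) {
    out += Math.floor(rand() * 256).toString(16).padStart(2, "0");
  }
  return out;
}

// Freeze one differential-test input pair; committing this pins the inputs.
const rand = mulberry32(42);
const frozen = {
  message: randomHexBytes(rand, 32),
  privateKey: randomHexBytes(rand, 32),
};
```

With this, the comparison becomes `normalize(rustOutput.c) === normalize(jsOutput.c)` regardless of each language's native representation.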