_Closed by ghost 2 years ago_
Thanks for the PR! For the type parameters, can you use a `where` clause instead of declaring the type constraints inline at the declaration site? It's easier to read/change this way and keeps it consistent with the rest of the library.
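For illustration, here is a minimal sketch of the difference; the function names and bounds are hypothetical, not from this PR:

```rust
// Inline bounds at the declaration site; these get harder to scan as the
// bound list grows:
fn prove_inline<F: Clone + PartialEq + Default>(witness: &F) -> F {
    witness.clone()
}

// The same signature with a `where` clause; the bounds are easier to read
// and to extend later:
fn prove_where<F>(witness: &F) -> F
where
    F: Clone + PartialEq + Default,
{
    witness.clone()
}
```

Both compile to identical code; the `where` form only changes how the signature reads.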
Also, if you think it's not ready for review, you can mark it as a draft so we can see that from the PR list.
Hey @joebebel, this looks really promising! Looking forward to seeing it completed!
Also, as @bhgomes suggested, it would be nice if you marked it as Draft while the PR is not ready! :)
@joebebel
Since `FftField` does not imply `PrimeField`, how do we plan to ensure that the field is prime, and also extract lots of the associated trait methods' info for them?
Also, going this way will probably imply feature-gated verification & prover for IPA and KZG verifications, since the verification now always uses pairings and you don't want to enforce the `PairingEngine` bound.
> Since `FftField` does not imply `PrimeField`, how do we plan to ensure that the field is prime and also extract lots of the associated trait methods info for them?
We should just bind to `FftField` when we need those interfaces, and to `PrimeField` when we need that. For practical purposes, I don't think any circuit will be over a non-prime field. There is some theoretical possibility that non-FFT fields might be desirable, but that seems like an extra complication for now.
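A minimal sketch of that layering, using mock traits that mirror the ark-ff hierarchy (where, to my understanding, `PrimeField` implies `FftField` but not vice versa); the types and impls below are illustrative stand-ins only:

```rust
// Mock stand-in for ark_ff::FftField: only what FFT-based code needs.
trait FftField {
    fn two_adicity() -> u32;
}

// Mock stand-in for ark_ff::PrimeField: the stronger bound, which also
// implies FftField (mirroring the supertrait relationship in ark-ff).
trait PrimeField: FftField {
    fn modulus_bits() -> u32;
}

// A hypothetical scalar field implementing both.
struct MockScalar;

impl FftField for MockScalar {
    fn two_adicity() -> u32 {
        32
    }
}

impl PrimeField for MockScalar {
    fn modulus_bits() -> u32 {
        255
    }
}

// Bind to `FftField` only where FFT interfaces are needed...
fn domain_two_adicity<F: FftField>() -> u32 {
    F::two_adicity()
}

// ...and to `PrimeField` only where prime-field-specific data is needed.
// The supertrait bound still gives access to the FftField methods here.
fn field_bits<F: PrimeField>() -> u32 {
    F::modulus_bits() + F::two_adicity() - F::two_adicity()
}
```

Code that only runs FFTs compiles against any `FftField`, while circuit code that genuinely needs primality asks for `PrimeField` and gets both interfaces.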
> Also, going this way will probably imply feature-gated verification & prover for IPA and KZG verifications. As now the verification always uses pairings and you don't want to enforce the `PairingEngine`.
As long as we're generic over the field and the `PolynomialCommitment`, I think this should work. But it requires some moving around of the Rust generics.
OK, this is getting closer. There are some problems:

- For some reason, in `SonicKZG` and `MarlinKZG` the `open` operation on multiple polynomials is not as efficient as the random linear combination used before, so more powers of tau are required in the setup. This regression is unfortunate and might require some further attention; I had to increase the setup size to get the tests to pass.
- Rust type errors when using the `?` operator, as it is unable to convert `ark_poly_commit` errors to `ark_plonk` errors implicitly. I tried several ways but can't figure out how to get it to work; because the `ark_poly_commit` error is an associated type of `PolynomialCommitment`, it's difficult to debug. For now I just `.unwrap()` instead, but this isn't a great long-term solution.
- I added back `TEModelParameters` as a generic for `StandardComposer` and other circuit-related structs. It would be ideal to use `ModelParameters` instead (so that short Weierstrass inner curves can be used), but after spending some hours trying to figure this out, it's difficult to do in a fully generic way.

Other than the regression in the efficiency of the polynomial commitment and the error handling, I think it's ready to review for merge.
> Rust type errors when using the `?` operator, as it is unable to convert `ark_poly_commit` errors to `ark_plonk` errors implicitly. I tried several ways, I can't figure out how to get it to work, because the `ark_poly_commit` error is an associated type of `PolynomialCommitment`; it's difficult to debug. For now I just `.unwrap()` instead, but this isn't a great long-term solution.
Whenever an error type is an associated type of a trait, you cannot write a `From` impl, since it would actually have to be generic over every type (the compiler cannot prove that only some types can inhabit the associated type). So instead, I usually use this pattern:
```rust
let value = fn_returning_ark_poly_commit_error(x, y, z).map_err(ark_plonk::Error::CustomVariant)?;
```
It would be nice if the error could be converted with an `Into<ark_plonk::Error>` impl to remove the `map_err`, but as I said above, this is not possible in general (again, only because the error we want to map is an associated type; otherwise this does work).
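To make the pattern concrete, here is a minimal, self-contained sketch; `PolynomialCommitment`, `MockKzg`, `PlonkError`, and `prove` are mock/hypothetical stand-ins, not the actual ark-poly-commit or ark-plonk types:

```rust
use core::fmt;

// Mock of the situation in the PR: the commitment scheme's error is an
// associated type, so a blanket `impl From<PC::Error> for PlonkError` is
// impossible (it would have to be generic over every possible type).
trait PolynomialCommitment {
    type Error: fmt::Debug;
    fn open(&self) -> Result<u64, Self::Error>;
}

// Hypothetical ark_plonk-style error with a variant wrapping the PC error.
#[derive(Debug)]
enum PlonkError {
    PCError(String),
}

// A mock scheme whose `open` always fails, to exercise the conversion.
struct MockKzg;

impl PolynomialCommitment for MockKzg {
    type Error = String;
    fn open(&self) -> Result<u64, Self::Error> {
        Err("TooManyCoefficients".into())
    }
}

// The `map_err` pattern: convert the associated error type at the call
// site, so `?` can then propagate the crate's own error type.
fn prove<PC: PolynomialCommitment>(pc: &PC) -> Result<u64, PlonkError> {
    let eval = pc
        .open()
        .map_err(|e| PlonkError::PCError(format!("{:?}", e)))?;
    Ok(eval)
}
```

The `map_err` call is a bit noisy, but it keeps the conversion local and explicit, which is the best we can do without a `From` impl.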
> I added back `TEModelParameters` as a generic for `StandardComposer` and other circuit related structs. It would be ideal to use `ModelParameters` instead (so that short Weierstrass inner curves can be used) but after spending some hours trying to figure this out, it's difficult to do in a fully generic way.
I think that the inner curve abstractions in arkworks are lacking here. In general, there should be a trait that defines the addition and scalar multiplication laws for the curves directly, and the `StandardComposer` would just query that trait when implementing the circuit. Then, when constructing the circuit, you specify which instance of this trait you want, and the circuit is populated with the correct constraints. The arkworks abstractions are too low-level to be leveraged for this, since they are more about specifying the kind of curve (and performing the direct native computation) than specifying the algorithm itself at compile time.
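As a rough sketch of what such a trait could look like: all names below are hypothetical, and a toy additive group stands in for a real Edwards or short Weierstrass addition law:

```rust
// A toy "point"; a real version would hold field elements.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Point {
    x: i64,
    y: i64,
}

// Hypothetical trait expressing the group law directly, so a composer can
// query it to decide which constraints to emit, independent of curve shape.
trait GroupLaw {
    fn add(p: Point, q: Point) -> Point;
    fn scalar_mul(p: Point, k: u64) -> Point;
}

// A toy instance: componentwise addition, standing in for a concrete
// Edwards or Weierstrass formula.
struct ToyCurve;

impl GroupLaw for ToyCurve {
    fn add(p: Point, q: Point) -> Point {
        Point { x: p.x + q.x, y: p.y + q.y }
    }
    fn scalar_mul(p: Point, k: u64) -> Point {
        // Naive double-free repeated addition, for illustration only.
        (0..k).fold(Point { x: 0, y: 0 }, |acc, _| Self::add(acc, p))
    }
}

// A composer-like function generic over the law rather than over
// curve-shape parameters like TEModelParameters.
fn compose_double<L: GroupLaw>(p: Point) -> Point {
    L::add(p, p)
}
```

The point of the sketch is that `compose_double` never inspects the curve's shape; switching between curve families would mean swapping the `GroupLaw` instance, not rewriting the composer.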
> For some reason, in `SonicKZG` and `MarlinKZG` the `open` operation on multiple polynomials is not as efficient as the random linear combination before. So more powers of tau in the setup are required. This regression is unfortunate, might require some further attention. I had to increase the setup size to get the tests to pass.
This should not be the case unless you're using hiding and/or strict degree bounds (which PLONK doesn't need, AFAIK). I can hop on a call to help debug. BTW, you can turn on the `print-trace` feature of `ark-poly-commit`; that should give you a sense of why opening requires a larger SRS.
> This should not be the case unless you're using hiding and/or strict degree bounds (which plonk doesn't need, AFAIK). I can hop on a call to help debug. btw you can turn on the `print-trace` feature on `ark-poly-commit`; that should give you a sense of why opening requires a larger SRS.
I don't know what I changed to fix it, but now it functions correctly. It wasn't a problem with the aggregation, though; the error was `TooManyCoefficients { num_coefficients: 80, num_powers: 23 }`. In this case the largest-degree of the aggregated polynomials, `t_4_poly`, was correctly 80, but only a degree-23 setup was generated. Strangely, `t_4_poly` was degree 23 in the previous proof in the same test. So somehow the CRS was not correctly sized.
When this is ready for a review, even if still in draft, could you let us know? I'll give it a first round 😄 @joebebel
I believe this branch is now non-inferior to the main branch (it passes all tests, etc.), so it is "ready for review".
There is one main deficiency: it performs two `PC::check` operations (whereas the original effectively did just one), so performance is slightly worse.
Thanks @Pratyush, I figured out what you meant about the trait bounding and it seems to work well. Also thanks @bhgomes, the error handling is better now.
OK, ready to merge
Summary of changes:
In summary, this PR is not really ready to be merged, but I'm open to input and feedback.