jkwoods opened this issue 3 years ago
Ah, I think I may have just broken this, by breaking an invariant here: https://github.com/circify/compiler/blob/de7af39c6ab6022f2561420b97acb1b3813c810b/src/IR/SMT/ToPf.hs#L980.
I'll take a look later today and get this patched up.
Ah. We were getting the QEQ error (in a larger, more complicated example) before your most recent patch, but I suspect that is also related to the (bvudiv n 0) issue.
Hmm. I seem to be having trouble reproducing. Could you let me know which commit you are building, and which command you are running (with any configurations)?
I tried to reproduce in commit 6d1b451.
We were running on our fork, on the darpa_demo branch, commit 415b629, with this command:
C_no_overflow=True C_streams=smt::opt stack run -- c main qeq_error.c --prove -i <(echo;)
It's possible we introduced the error. I'll pull in the new changes and troubleshoot.
@alex-ozdemir what's the semantics of isInit in declareVar here? @jkwoods noticed that we use that in our gadgets framework with declareInitVar setting it to True, in order to set values in the vals dictionary in the Assert monad. But I'm not sure this is correct. I think we need a different way to declare witnesses and inputs. It has something to do with the QEQ error being reproducible, and I'm trying to figure it out; any intuition might help.
Good question. It's used to help the front-end lookup user-provided values for inputs to the circuit. I've just provided a bit of documentation here.
@jkwoods noticed that we use that in our gadgets framework with declareInitVar setting it to True, in order to set values in the vals dictionary in the Assert monad.
Yeah, that is probably not quite correct. Perhaps you could use setValue?
Thanks for this explanation; the bug was on our side and has been fixed.
Should we close this issue?
Sorry to reopen this - we fixed one of the things throwing the issue on our side. But I think there's a deeper issue as well. Here's a better example that doesn't work on your master branch when called in prove mode.
int main(void) {
    int x = (-1*-1 == 1);
    return x;
}
Command = C_no_overflow=True stack run -- c main qeq_error.c --prove -i <(echo;)
It throws a similar error, called from here:
The QEQ (0) * (0) = 1 "eqZero_v4" + -1 is not satisfied 0 0 -1
When called without the C_no_overflow flag, this error is not thrown. Maybe this has something to do with how overflow of negative one is handled?
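For intuition, here is a minimal standalone C check of that hypothesis (a sketch, assuming 32-bit int and the usual two's-complement conversions): viewed as an unsigned 32-bit multiplication, -1 * -1 really does overflow, even though the truncated result is still 1.

#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* The 32-bit pattern of (int)-1, read as unsigned, is 4294967295. */
    uint32_t x = (uint32_t)-1;
    /* The exact unsigned product needs 64 bits; it does not fit in 32. */
    uint64_t wide = (uint64_t)x * (uint64_t)x;
    printf("%u * %u = %llu\n", x, x, (unsigned long long)wide);
    /* Truncated back to 32 bits the product is 1, so -1 * -1 == 1 still
       holds, but a "no bits were dropped" assumption is violated. */
    printf("low 32 bits: %u\n", (uint32_t)wide);
    return 0;
}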
Sorry to reopen this - we fixed one of the things throwing the issue on our side. But I think there's a deeper issue as well.
Don't be! I appreciate your finding these bugs. I know it must be hard work, and it's helpful to have such well-refined bug reports.
... the bug ...
Ah, interesting. Upon reflecting on this example, I agree that there are some foundational issues with the definition of the "no-overflow" configuration. I never thought about it very carefully, and I see now that its current implementation rests on an ill-defined notion.
Let's make that more formal... How should "no-overflow" be defined? Well, probably the most reasonable definition would be based on the integer characterization of bit-vector arithmetic. However, there are really two such characterizations: one based on signed integers, and one based on unsigned naturals. The two characterizations, which define functions over bit-representations (so x, y are bit sequences), are as follows:
umul(x,y) = nat2bv(bv2nat(x) * bv2nat(y))    (this one is from the SMT standard)
smul(x,y) = int2bv(bv2int(x) * bv2int(y))
It turns out these are equivalent functions (fun exercise!), which is why the SMT standard doesn't bother to distinguish them (it just uses the unsigned variant). However, if the "no overflow" assumption is something like "the int2bv/nat2bv doesn't drop any bits", then that assumption is not equivalent for these two characterizations. There are inputs which satisfy that assumption for int2bv and falsify it for nat2bv (such as the two's-complement representation of x=-1, y=-1). There are also inputs which do the reverse (such as the two's-complement representation of x=SIGNED_INT_MAX, y=2).
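To make those two counterexamples concrete, here is a minimal self-contained check over 8-bit vectors, so the exact products fit in an int (a sketch; the int8_t cast assumes the usual two's-complement conversion):

#include <stdio.h>
#include <stdint.h>

static int bv2nat(uint8_t x) { return x; }          /* unsigned value of the bits */
static int bv2int(uint8_t x) { return (int8_t)x; }  /* two's-complement value of the bits */

/* nat2bv drops no bits iff the unsigned product fits in [0, 255] */
static int umul_ok(uint8_t x, uint8_t y) {
    return bv2nat(x) * bv2nat(y) <= 255;
}

/* int2bv drops no bits iff the signed product fits in [-128, 127] */
static int smul_ok(uint8_t x, uint8_t y) {
    int p = bv2int(x) * bv2int(y);
    return -128 <= p && p <= 127;
}

int main(void) {
    /* x = -1, y = -1 (bits 0xFF): the signed notion holds, the unsigned one fails */
    printf("x=-1,  y=-1: smul_ok=%d umul_ok=%d\n", smul_ok(0xFF, 0xFF), umul_ok(0xFF, 0xFF));
    /* x = 127 (SIGNED_INT_MAX for 8 bits), y = 2: the reverse */
    printf("x=127, y=2:  smul_ok=%d umul_ok=%d\n", smul_ok(0x7F, 0x02), umul_ok(0x7F, 0x02));
    return 0;
}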
This means that to make "no-overflow" well defined, we probably have to separate signed and unsigned arithmetic bit-vector operations.
However, I'd like to better understand your use of and need for the "no-overflow" feature before starting down that path. It's never been a feature that I've been very invested in (I just rigged it up to get some semblance of a fair comparison v. Pequin), and I want to make sure I understand the perspective of someone who's using it more before investing more effort in it.
However, I'd like to better understand your use of and need for the "no-overflow" feature before starting down that path. It's never been a feature that I've been very invested in (I just rigged it up to get some semblance of a fair comparison v. Pequin), and I want to make sure I understand the perspective of someone who's using it more before investing more effort in it.
I totally agree with you that no-overflow is super sketchy, since the prover can exploit overflow in many cases by providing inputs that could violate soundness (particularly when we have existential variables). The one place where no-overflow is helpful is when expressing crypto operations that use the same curve/field over which the underlying proof system is defined. However, having a flag called "no-overflow" that disables all overflow checks for all variables is perhaps not the way to go. Instead, we should have a special field_p integer type or something like that. Then, arithmetic operations for that type do not have overflow checks, while all other types do. We can handle bitwise operations via casting.
For example:
field_p x = 25412;
uint32_t y = 77;
if ((uint32_t) x > y) {
    // foo baz bar
}
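And, as a rough sketch of the casting idea for bitwise operations (field_p is still the hypothetical type proposed above, not existing C):

field_p x = 25412;              // field element: no overflow checks on arithmetic
uint32_t bits = (uint32_t) x;   // cast: field element -> bit-vector
bits = bits ^ 0xFF;             // bitwise ops happen on the bit-vector type
field_p y = (field_p) bits;     // cast back: re-embed the bits into the field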
Would something like this be more reasonable? Curious to hear your thoughts.
Instead, we should have a special field_p integer type or something like that. Then, arithmetic operations for that type do not have overflow checks, while all other types do. We can handle bitwise operations via casting.
This is extremely reasonable. We already support finite field elements (and converting from them to bit-vectors) in our IR. These parts of the IR are used by the Zokrates front-end. It would be easy enough to bubble them up in the C front-end.
Subtraction from variables is being done incorrectly. For example, something like this throws the error
The QEQ (0) * (0) = 1 "eqZero_v2" + -1 is not satisfied 0 0 -1
when it is run in --prove mode. If you turn off that enforceCheck error, you get this error instead:
The constraint (0) * (0) = -1 "f0_main_lex1__a_v0" + 1 "f0_main_lex1__x_v0" + 4294967295 evaluated to -4294967296 not 0
4294967295 is the unsigned version of a two's complement -1. That makes me think there might be an issue here during subtraction? Somewhere a signed number is being treated as an unsigned number.
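For reference, a minimal illustration of that reinterpretation (a sketch, assuming the usual two's-complement conversion for the int32_t cast):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* 4294967295 is UINT32_MAX, the unsigned reading of two's-complement -1. */
    uint32_t u = 4294967295u;
    int32_t s = (int32_t)u;   /* reinterpreted as signed: -1 */
    printf("%u reinterpreted as signed is %d\n", u, s);
    /* A coefficient of 4294967295 behaves like -1 only modulo 2^32; over the
       integers (or a large prime field) it does not, which is consistent with
       the constraint above evaluating to a huge nonzero value. */
    return 0;
}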