Closed DavePearce closed 4 weeks ago
@delehef I'm wondering if you can shed any light on why these two seemingly identical constraints are translated so differently?
UPDATE: I'm now thinking that this is actually related to #111. Is that right? It's starting to look like a bug to me.
There are two ways to prove a range inclusion, with different performance trade-offs: @prove generates the former, while definrange generates the latter. Perhaps there should have been convergence on a single solution, but as we never gathered performance data from the prover on this question, we were not able to decide.
Ta! So I figure these constraints are generated only once and then reused across all byte@prove annotations, because they are actually creating 16 columns' worth of data? So they are used for other things as well, I guess.
But I actually don't see where/how the column X is being constrained. There seems like a missing connection? For reference I attached the full output: test.txt
Ok, so I now see this:
AUX_255_HOOD ≜ ↻ X
INTRLD_AUX_255_HOOD ⪡ AUX_255_HOOD, X
Hmmmm, so what we actually have here is a permutation constraint stating that X must be a permutation of the 256-height column AUX_255_HOOD?
We prove that the sorted interleaving of X and AUX_255_HOOD must (i) be sorted, (ii) start at 0, and (iii) end at 255; you can check with @OlivierBBB for the design of these constraints.
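The argument above can be sketched as a plain-Python model (hypothetical code, not Corset output; the trivial permutation check at the start stands in for the real permutation argument, and `x`/`srt` are stand-ins for the actual trace columns):

```python
def verify(x, srt):
    """Check the sorted-interleaving range argument for a column x.

    srt models the prover-supplied sorted copy of the interleaving
    [X | AUX_255_HOOD], where AUX_255_HOOD is the fixed 0..255 column.
    """
    aux = list(range(256))                             # AUX_255_HOOD
    assert sorted(srt) == sorted(x + aux)              # permutation check (stand-in)
    assert srt[0] == 0 and srt[-1] == 255              # (ii) starts at 0, (iii) ends at 255
    assert all(a <= b for a, b in zip(srt, srt[1:]))   # (i) ascendingly sorted
    # Ascending from 0 to 255 pins every entry of srt, and hence every
    # value of x, into [0, 255].

# Usage: honest prover supplies the genuinely sorted interleaving.
verify([0, 17, 255, 42], sorted([0, 17, 255, 42] + list(range(256))))
```

An out-of-range value in `x` cannot be hidden: it must appear somewhere in `srt` (permutation check), but then `srt` can no longer both be sorted and end at 255.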
So, something still doesn't make sense here. There are 16 range constraints being generated of the form:
__SRT__Delta_0_c0d3bd-is-byte
__SRT__Delta_0_c0d3bd < 256
__SRT__Delta_1_c0d3bd-is-byte
__SRT__Delta_1_c0d3bd < 256
__SRT__Delta_2_c0d3bd-is-byte
__SRT__Delta_2_c0d3bd < 256
__SRT__Delta_3_c0d3bd-is-byte
__SRT__Delta_3_c0d3bd < 256
...
If we wanted AUX_255_HOOD to be within 0..255, we could use a single range constraint AUX_255_HOOD < 256, right? So these are being used for some other purpose ... surely?
There are two things at play here:
- AUX_255_HOOD is generated by Corset as a cyclic column, so it is not proved and is assumed to be in [[0; 255]];
- the Delta stuff that you see is not coming from the range check, but from the sorting proof (cf. @OlivierBBB for the concepts & details); it is there to prove that the sorted copy of the interleaved [X|AUX_255_HOOD] column is indeed ascendingly sorted. This requires proving that the difference of each pair of successive elements is non-negative, which in turn requires a byte decomposition over the whole underlying field (hence the 16 bytes).
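The byte-decomposition step can be sketched as follows (hypothetical Python, not the actual prover code; `decompose` and `verify_small` are illustrative names):

```python
# Sketch of the positivity argument: a difference d between successive
# sorted elements is shown to be "small" (hence not a field-sized
# negative, i.e. not p - k for small k) by committing to a 16-byte
# decomposition Delta_0 .. Delta_15 and checking that it recomposes to d.
# Each Delta_i is itself range-checked to be a byte -- these are the
# __SRT__Delta_*-is-byte / __SRT__Delta_* < 256 constraints above.

def decompose(d, n_bytes=16):
    """Little-endian byte decomposition of d (the prover's witness)."""
    return [(d >> (8 * i)) & 0xFF for i in range(n_bytes)]

def verify_small(d, deltas):
    assert all(0 <= b < 256 for b in deltas)                      # Delta_i-is-byte
    assert sum(b << (8 * i) for i, b in enumerate(deltas)) == d   # recomposition
    # Recomposing from 16 bytes bounds d below 2**128, far below the
    # field modulus, so d cannot encode a negative difference.

d = 12345
verify_small(d, decompose(d))
```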
and are there to prove that the sorted copy of the interleaved [X|AUX_255_HOOD] column is indeed ascendingly sorted
So, that suggests they are created afresh for each column marked byte@prove, right? Isn't that counter-productive? I mean, it's using 16 range constraints to avoid using 1 range constraint ... what did I miss here?
So, that suggests they are created afresh for each column marked byte@prove, right?
No, the whole mechanism is only triggered once per type being proven per module, and all the columns of this type are proved at once.
per type being proven per module
Ok, that makes sense. Does it have to be per module? If so, then you need at least 16 columns marked as byte@prove in a module to make this efficient?
Does it have to be per module?
Not really.
then you need at least 16 columns marked as byte@prove in a module to make this efficient?
The question is two-fold:
Hmmmm. It seems to me that a simpler translation like this makes sense:
- AUX < 256 (i.e. all values of AUX are bytes)
- X_AUX = X || AUX (i.e. the interleaving of X and AUX)
- X_AUX_SRT = perm X_AUX (i.e. X_AUX_SRT is a permutation of X_AUX)
- X_AUX_SRT[0] = 0
- X_AUX_SRT[i+1] = X_AUX_SRT[i] * (1 + X_AUX_SRT[i])
- X_AUX_SRT[N] = 255
How does that look?
X_AUX_SRT[i+1] - X_AUX_SRT[i] >= 0
Not sure where you get this one from?
Yeah, my translation looks broken; I just dumped it out there and then. It was supposed to be, in logic: X_AUX_SRT[i+1] = X_AUX_SRT[i] OR X_AUX_SRT[i+1] = X_AUX_SRT[i] + 1. So, the actual constraint should have been: (X_AUX_SRT[i+1] - X_AUX_SRT[i]) * (X_AUX_SRT[i+1] - (1 + X_AUX_SRT[i])) = 0. This forces the column to count up in steps of at most 1, from 0 to 255.
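The corrected constraint can be checked mechanically; here is a minimal sketch in plain Python (not Corset syntax), with `s` standing in for X_AUX_SRT:

```python
# The vanishing constraint (s[i+1] - s[i]) * (s[i+1] - (1 + s[i])) == 0
# holds exactly when each step is 0 or 1 (a product is zero iff one
# factor is zero). Combined with s[0] == 0 and s[N] == 255, it forces
# the column to count up from 0 to 255 in steps of at most 1.

def check_counting_column(s):
    assert s[0] == 0 and s[-1] == 255
    for a, b in zip(s, s[1:]):
        assert (b - a) * (b - (1 + a)) == 0   # step of exactly 0 or 1

# A sorted interleaving with repeats (steps of 0) still passes:
check_counting_column([0, 0, 1, 2] + list(range(2, 256)))
```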
(FYI, these are my interpretations of what Olivier said)
What about a lookup to a reference table going from 0 to 255? That seems like fewer constraints, no?
@letypequividelespoubelles Well, you definitely could use a lookup table. I don't have a good feel for what the trade-offs are here. We need more input from the prover side on the relative costs of the different approaches.
We should involve the prover team, and @AlexandreBelling in particular, to figure out what makes the most sense. I think I remember from discussions with Alex that the prover would produce a mega column (a concatenation of sorts of all :byte columns in the arithmetization) and have a single bytehood argument apply to that mega column.
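The mega-column idea can be sketched like this (hypothetical Python; how the prover actually concatenates the :byte columns is an assumption based on the recollected discussion, not a confirmed design):

```python
# Hypothetical sketch: concatenate every :byte column from the whole
# arithmetization into one mega column, then run a single bytehood
# argument over it, instead of one argument per type per module.

def mega_column(byte_columns):
    """Flatten all :byte columns into one column (order is irrelevant
    for a bytehood check)."""
    return [v for col in byte_columns for v in col]

def single_bytehood_check(mega):
    assert all(0 <= v < 256 for v in mega)   # one range argument for everything

cols = [[0, 255], [17, 42, 99]]              # stand-ins for :byte columns
single_bytehood_check(mega_column(cols))
```

The appeal is amortization: the fixed cost of the bytehood argument is paid once, regardless of how many :byte columns each module declares.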
Just use a range-check for bytehood. The only exception is when you want to restrict to 1 or 2 bytes.
Just use a range-check for bytehood.
Ok, great --- that's an easy option for me!
The following:
is translated directly as a range constraint for the prover:
However, in contrast, this is handled quite differently:
by translating into a much larger number of constraints / columns:
So, what is the difference here?