| Changes Missing Coverage | Covered Lines | Changed/Added Lines | % |
|---|---|---|---|
| src/chia_dialect.rs | 0 | 2 | 0.0% |
| src/f_table.rs | 0 | 3 | 0.0% |
| Total: | 37 | 42 | 88.1% |

| Totals | |
|---|---|
| Change from base Build 5284059658: | -0.02% |
| Covered Lines: | 5129 |
| Relevant Lines: | 5520 |
I updated the last commit to include the program that generated the test cases (`tools/generate-modpow-tests.py`), and extended it to include test cases where the numerator is larger than the denominator.
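For a sense of what such a generator looks like, here is a minimal sketch; the output format, sizes, and helper names are assumptions of mine, not the actual contents of `tools/generate-modpow-tests.py`:

```python
# Hypothetical sketch of a modpow test-vector generator.
# The "expr => expected" line format is an assumption, not the real op-tests format.
import random

random.seed(1337)  # deterministic test vectors

def gen_case(max_bytes: int) -> tuple[int, int, int]:
    base = random.getrandbits(max_bytes * 8)
    exponent = random.getrandbits(max_bytes * 8)
    # "| 1" keeps the modulus non-zero (modulus may not be 0)
    modulus = random.getrandbits(max_bytes * 8) | 1
    return base, exponent, modulus

for _ in range(20):
    base, exponent, modulus = gen_case(8)
    expected = pow(base, exponent, modulus)  # Python's built-in modular exponentiation
    print(f"modpow {base} {exponent} {modulus} => {expected}")
```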
I added a new commit that gives the benchmark program the ability to generate gnuplots of the measurements along with the line fittings.
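The benchmark itself is Rust; purely as an illustration of the idea (not the actual implementation), emitting a gnuplot script with the samples and the fitted line overlaid could look like this, with file names and layout hypothetical:

```python
# Hypothetical sketch: write (size, time) samples plus a fitted line as a gnuplot script.
def write_gnuplot(name: str, samples: list[tuple[int, float]],
                  slope: float, intercept: float) -> None:
    # raw measurements go in a data file gnuplot can read
    with open(f"{name}.dat", "w") as f:
        for size, nanos in samples:
            f.write(f"{size} {nanos}\n")
    # the script plots the samples and the fitted line on top of each other
    with open(f"{name}.gnuplot", "w") as f:
        f.write(f'set title "{name}"\n')
        f.write('set xlabel "argument size (bytes)"\n')
        f.write('set ylabel "time (ns)"\n')
        f.write('set terminal png\n')
        f.write(f'set output "{name}.png"\n')
        f.write(f'plot "{name}.dat" with points, {slope} * x + {intercept} with lines\n')
```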
I added a new commit where I changed the values used for the two other arguments to modpow, the ones not being measured.
I think you're right that the cost was a bit low; I'm adjusting it upwards. I'm updating the original commit, basically making these changes (a sketch of the resulting cost formula follows the list):

- `MODPOW_BASE_COST`: from 11000 to 17000
- `MODPOW_COST_PER_BYTE_BASE_VALUE`: from 15 to 38
- `MODPOW_COST_PER_BYTE_EXPONENT`: unchanged at 3. This is the cost per byte squared.
- `MODPOW_COST_PER_BYTE_MOD`: from 8 to 21. This is the cost per byte squared.
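Putting those constants together, the cost model being fitted is roughly the following; this is a sketch of my reading of the constants, not the actual Rust implementation, and exact rounding or scaling may differ:

```python
# Hypothetical sketch of the modpow cost model implied by the constants above.
MODPOW_BASE_COST = 17000
MODPOW_COST_PER_BYTE_BASE_VALUE = 38   # linear in the size of the base value
MODPOW_COST_PER_BYTE_EXPONENT = 3      # quadratic in the size of the exponent
MODPOW_COST_PER_BYTE_MOD = 21          # quadratic in the size of the modulus

def modpow_cost(base_len: int, exponent_len: int, mod_len: int) -> int:
    cost = MODPOW_BASE_COST
    cost += base_len * MODPOW_COST_PER_BYTE_BASE_VALUE
    cost += exponent_len ** 2 * MODPOW_COST_PER_BYTE_EXPONENT
    cost += mod_len ** 2 * MODPOW_COST_PER_BYTE_MOD
    return cost
```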
Here's the output from the updated benchmark program:

```
cost scale: 6.425771579565472
base cost scale: 2.3786823529411767
arg cost scale: 10.418449612403101
opcode: modpow (modulus cost) (60)
  time: per-byte: 3.32ns
  cost: per-byte: 21
  time: base: 7442.54ns
  cost: base: 17703
opcode: modpow (exponent cost) (60)
  time: per-byte: 0.36ns
  cost: per-byte: 2
  time: base: 7427.67ns
  cost: base: 17668
opcode: modpow (value cost) (60)
  time: per-byte: 5.85ns
  cost: per-byte: 38
  time: base: 7139.68ns
  cost: base: 16983
```
The model fits well for the sizes of the base value and the modulus. The measured time doesn't quite grow quadratically in the size of the exponent, but I think it's a close enough match; we also want to avoid spending too much resource on computing the cost itself. Note that on the plots where the x label ends with "log", the y-axis is the square root of the measured value. This is because the cost model for those arguments grows quadratically (cost per byte squared), and taking the square root brings them down to linear for the linear regression analysis.
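For illustration, here is a minimal sketch of that linearization, assuming a numpy-style fit; the numbers are made up and this is not the benchmark's actual fitting code:

```python
# Hypothetical sketch of the sqrt transform applied before the linear regression.
import numpy as np

sizes = np.array([64, 128, 256, 512, 1024])                # argument sizes in bytes
times = np.array([90.0, 350.0, 1400.0, 5600.0, 22400.0])   # measured ns (made up)

# the model is time ~ c * size^2 + base, so sqrt(time) is roughly linear in size
slope, intercept = np.polyfit(sizes, np.sqrt(times), 1)
per_byte_squared = slope ** 2   # squaring the slope recovers the quadratic coefficient
print(f"per-byte^2 time: {per_byte_squared:.4f} ns")
```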
These changes are best reviewed one commit at a time.
This introduces two new operators, under the same `softfork` extension as the BLS operators. Similar to those, these will also become available outside of the `softfork` operator after the hard-fork.

`modpow` computes `(base^exponent) % modulus`. `exponent` may not be negative. `modulus` may not be 0.

`%` computes the remainder of `numerator` divided by `denominator`. Its cost is the same as `/`: base cost: 998, cost per byte (sum of both operands): 4.

The fuzzing infrastructure is updated to generate an initial corpus for the new operators.
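As a quick sanity check of the semantics (my own illustration using Python's built-ins; negative-operand edge cases are not covered here):

```python
# modpow: modular exponentiation, checked against the naive computation
base, exponent, modulus = 7, 129, 1000003
assert pow(base, exponent, modulus) == (base ** exponent) % modulus

# %: the remainder of numerator divided by denominator
numerator, denominator = 100003, 7
assert numerator % denominator == numerator - (numerator // denominator) * denominator
```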
The benchmark program was refactored to more easily support new kinds of operator signatures, and extended to support the new operators.
Test cases, generated from Python, are added as their own test file under `op-tests`.