We can add "tests" that fail if they are not successful some percentage of the time (TBD if configuring this is desirable) by making the fuzzer monitor the success/fail rate. By encouraging a pattern like the following, they should get put in the corpus, but we could consider prioritizing them as well.
```solidity
function fuzz() external {
    bool result = ...;
    emit sometimesTrue(result); // fail if never true
    ...
}

function sometimesSucceeds() external { // fail if always reverts
    ...
}
```
This is effectively asserting that some path of the code is reached. Some test harnesses/fuzzers, for example, let you mark a line of code as expected to be reached by at least one input. That way it is clear the harness/test has not been invalidated or regressed to the point that it is no longer effective, serving as a "fuzz canary" that alerts devs of trouble in the mines.
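As a rough sketch of what the encouraged pattern could look like in a self-contained harness, consider the example below. The `Token` contract, the `fuzz_transfer`/`sometimesSucceeds` names, and the idea that the fuzzer watches `sometimesTrue` emissions and revert rates are all hypothetical/illustrative; how (and whether) the fuzzer tracks these across the campaign is exactly the open design question above.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical target used only for illustration.
contract Token {
    mapping(address => uint256) public balanceOf;

    constructor() {
        balanceOf[msg.sender] = 1_000_000 ether;
    }

    function transfer(address to, uint256 amount) external returns (bool) {
        if (balanceOf[msg.sender] < amount) return false;
        balanceOf[msg.sender] -= amount;
        balanceOf[to] += amount;
        return true;
    }
}

contract SometimesHarness {
    // The fuzzer would track how often `result` is true across the whole
    // campaign and fail this test only if it is never (or too rarely) true.
    event sometimesTrue(bool result);

    Token token = new Token();

    // "Sometimes true" canary: the campaign passes only if at least one
    // fuzzed `amount` produces a successful transfer.
    function fuzz_transfer(address to, uint256 amount) external {
        bool result = token.transfer(to, amount);
        emit sometimesTrue(result);
    }

    // "Sometimes succeeds" canary: fails the campaign only if every call
    // reverts, i.e. the end of this function is never reached.
    function sometimesSucceeds(address to, uint256 amount) external {
        require(token.transfer(to, amount), "transfer failed");
    }
}
```

If either canary never fires over a whole campaign, that is a strong hint the harness has regressed (e.g. the success path is no longer reachable), which is the kind of signal a plain always-true property cannot give.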
A related, common issue is not knowing whether some state is actually being reached and wanting greater insight into execution statistics (see https://github.com/crytic/medusa/issues/431 and a PR with some progress here: https://github.com/crytic/medusa/pull/364).
We can add "tests" that fail if they are not successful some percentage of the time (TBD if configuring this is desirable) by making the fuzzer monitor the success/fail rate. By encouraging a pattern like the following, they should get put in the corpus, but we could consider prioritizing them as well.
This is effectively asserting that some path of the code is reached. Some test harnesses/fuzzers will let you mark a line of code as expected to be reached by a given input, for example. That way it is clear the harness/test was not invalidated or regressed to a point that it's no longer effective thereby serving as a "fuzz canary", alerting devs of trouble in the mines.
See also Antithesis's "sometimes assertions": https://antithesis.com/docs/best_practices/sometimes_assertions.html