Open mds1 opened 1 year ago
Maybe https://blog.trailofbits.com/2023/02/27/reusable-properties-ethereum-contracts-echidna/ would also be useful, e.g. the bug found in ABDK
Hi guys, I'm taking this up.
Hi guys,
I think this issue is strictly dependent on
In fact, making Forge more configurable and richer in the metrics it collects would significantly help the benchmarking process.
I'll continue my investigation and try to resolve those dependencies.
Peace
Since this was opened, these have been created:
- https://github.com/Consensys/daedaluzz (results: https://consensys.io/diligence/blog/2023/04/benchmarking-smart-contract-fuzzers/)
- https://twitter.com/agfviggiano/status/1682631396702003200

Personally, I would close this and point to these benchmarks.
Sorry, the issue description is unclear here: the intent was to develop benchmarks and include them in CI, or to maintain the benchmark results somewhere. Equally important to comparing against Echidna and other tools is having a concrete way of measuring how changes to the Forge fuzzer impact its performance.
I agree we don't need to develop our own benchmarks now and can use these; we just need to actually integrate them into CI or persist the results somehow, for comparison when we make future fuzzer changes.
These benchmarks should not be taken seriously as more than a superficial comparison.
These are really great guidelines for designing a benchmark: https://github.com/fuzz-evaluator/guidelines
Component
Forge
Describe the feature you would like
There are currently no benchmarks for the fuzz and invariant test features, so it's hard to know how well Forge actually performs compared to other tools like Echidna and Harvey.
Echidna has a lot of test cases in its repo; maybe those could be reused? https://github.com/crytic/echidna/tree/master/tests/solidity
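For context, a shared benchmark case could look something like the sketch below. `TokenLike` and the test names are hypothetical, and a bare `require` stands in for forge-std assertions so the snippet is self-contained; the point is only that the same contract can carry both a Forge-style fuzz test and an Echidna-style boolean property.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical minimal token whose total supply should stay constant.
contract TokenLike {
    mapping(address => uint256) public balanceOf;
    uint256 public totalSupply;

    constructor() {
        totalSupply = 1_000_000e18;
        balanceOf[msg.sender] = totalSupply;
    }

    function transfer(address to, uint256 amount) external {
        balanceOf[msg.sender] -= amount; // reverts on underflow in 0.8+
        balanceOf[to] += amount;
    }
}

contract TokenFuzzTest {
    TokenLike token = new TokenLike();

    // Forge treats a `test`-prefixed function with parameters as a fuzz
    // test and calls it with randomized inputs.
    function testFuzz_transferPreservesSupply(address to, uint256 amount) external {
        // Bound `amount` so the call does not revert on underflow.
        amount = amount % (token.balanceOf(address(this)) + 1);
        token.transfer(to, amount);
        require(token.totalSupply() == 1_000_000e18, "supply changed");
    }

    // The Echidna-style equivalent is an `echidna_`-prefixed boolean
    // property that the fuzzer tries to falsify.
    function echidna_supply_constant() external view returns (bool) {
        return token.totalSupply() == 1_000_000e18;
    }
}
```

A benchmark built from cases like this (with a known, planted bug) would let both tools be scored on time-to-find-bug rather than raw throughput.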
Additional context
No response