I've also tested this change to verify that the same coverage is reported before and after the change:
```console
$ rm fuzz/coverage/fuzz_target_1/coverage.profdata
$ cargo fuzz coverage -O fuzz_target_1
$ cargo cov -- report fuzz/target/x86_64-unknown-linux-gnu/release/fuzz_target_1 -instr-profile fuzz/coverage/fuzz_target_1/coverage.profdata
src/
```
This approach creates a dummy corpus and merges the main corpus into it; once the merge completes, the dummy corpus is deleted. This lets us calculate code coverage significantly faster than running the target once per corpus file, since the merge can run in parallel and avoids the startup/initialisation cost of spawning a new process for each entry in the corpus.
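For illustration, here's a rough shell sketch of the equivalent manual steps (the paths, directory layout, and direct use of `llvm-profdata` are assumptions for the example; the actual change drives libFuzzer's `-merge=1` from within cargo-fuzz):

```sh
# Sketch only: approximate the dummy-corpus merge by hand.
TARGET_BIN=fuzz/target/x86_64-unknown-linux-gnu/release/fuzz_target_1
CORPUS=fuzz/corpus/fuzz_target_1
DUMMY=$(mktemp -d)   # throwaway merge destination
mkdir -p fuzz/coverage/fuzz_target_1/raw

# A single -merge=1 run replays the whole corpus in one libFuzzer
# process tree instead of spawning a fresh process per input file.
LLVM_PROFILE_FILE="fuzz/coverage/fuzz_target_1/raw/%m.profraw" \
  "$TARGET_BIN" -merge=1 "$DUMMY" "$CORPUS"

# The dummy corpus only existed as a merge destination.
rm -rf "$DUMMY"

# Fold the raw profiles into the coverage.profdata that llvm-cov reads.
llvm-profdata merge -sparse fuzz/coverage/fuzz_target_1/raw/*.profraw \
  -o fuzz/coverage/fuzz_target_1/coverage.profdata
```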
This was mainly inspired by the approach taken in google/oss-fuzz.
This should also fix #254, since there will only be one set of profile data per corpus directory rather than a set of profile data per file across all the corpus directories. I can't confirm this myself, though, as I don't own a Mac :)
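To make the difference concrete, here is a hedged sketch of the two output shapes, reusing `$TARGET_BIN` and `$DUMMY` from the sketch above (the loop illustrates the old per-file behaviour; it is not the actual cargo-fuzz code):

```sh
# Old shape (sketch): one invocation, and one .profraw, per corpus
# entry -- tens of thousands of profile files for a large corpus.
for input in fuzz/corpus/fuzz_target_1/*; do
  LLVM_PROFILE_FILE="raw/$(basename "$input").profraw" \
    "$TARGET_BIN" "$input"
done

# New shape (sketch): one -merge=1 run per corpus directory, so the
# number of .profraw files no longer scales with the corpus size.
LLVM_PROFILE_FILE="raw/%m.profraw" \
  "$TARGET_BIN" -merge=1 "$DUMMY" fuzz/corpus/fuzz_target_1
```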
Performance diff:

Testing performance on the fuzzers in quick-xml, with a corpus of 38,780 files:

| Metric | Before → After |
|--------|----------------|
| real   | 101x faster    |
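For reference, the `real` figure is the wall-clock field reported by `time`; a measurement along these lines (target name hypothetical) reproduces the comparison:

```sh
# Run once on the old code and once on this branch, then compare
# the `real` lines that `time` prints.
time cargo fuzz coverage fuzz_target_1
```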