Open ggrieco-tob opened 1 year ago
It sounds interesting to explore opportunities to leverage libFuzzer here. Just for more context, what do we mean by "react quickly"? Our actual fuzzing strategy, i.e. how we feed values from new corpus sequences back into the fuzzer, or something else?
Also, what are we thinking in terms of priority here? I figure some of the other tasks we have marked medium priority should probably come first, so maybe low, or very-low if it's more of an experimental idea? Or would you prefer we take this more seriously and explore it in parallel with #28?
libFuzzer will start re-using an input as soon as it triggers new coverage, while Echidna puts it in a queue and mutates it at some indeterminate point in the future. libFuzzer also performs frequent minimization, so it can re-use smaller inputs, which could be useful in this case (assuming we define some reasonable minimization function in medusa).
ah, gotcha. and for sure!
```
edge = hash(pc, jumpdest)
global map: edge -> hit count, sorted low to high
per-input map: input -> set(edge)
queue by rarity: sort inputs by sum(idx), where idx = position of edge in the hit-count ordering
```
Echidna's coverage-based feedback loop is not great, for a number of reasons. The most important: it is too simple, and it is unclear whether it is good enough to react quickly when there is new coverage in the corpus. I suggest testing, even with some naive approach, the libFuzzer port to golang to see if it works better.