risicle / cpytraceafl

CPython bytecode instrumentation and forkserver tools for fuzzing pure python and mixed python/c code using AFL

Expected Performance #3

Closed · bit-twidd1er closed this 3 years ago

bit-twidd1er commented 4 years ago

Are there any expected performance metrics when fuzzing mixed python/c code? I attempted to run pillow_pcx_example.py. In order to do that, I downloaded Pillow, built it with CC=afl-gcc and CXX=afl-g++, and then installed it.

When running the fuzzer, a single instance always seems to execute at 3-4 execs/second. That is slower than I expected; are there any numbers you can provide for what you typically see?

Additionally, is there a way to verify that the instrumentation is working? I provided multiple files in my input directory that should cause the code to take different paths within the C portion of the Pillow code; however, when starting the fuzzer I received a message that no new instrumentation was detected with the additional test cases. Is there something I could be doing wrong to cause that?

Thank you.

risicle commented 4 years ago

When I have a target that isn't giving me any new paths, most of the time it turns out the reader is aborting early due to some dumb mistake. The forkserver.err and forkserver.out files produced by dummy-afl-qemu-trace may be illuminating here - though you may have to replace pillow_pcx_example.py's exception-ignoring clause with one that prints the exception before those files become useful.
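For reference, the change is just making the harness's broad except clause report what it caught, so early aborts show up in forkserver.err instead of being swallowed. A minimal sketch, assuming the harness follows the usual Pillow open-and-decode pattern - the function name and structure here are illustrative, not the actual contents of pillow_pcx_example.py:

```python
import io
import sys

from PIL import Image


def test_one_input(data: bytes):
    try:
        img = Image.open(io.BytesIO(data))
        img.load()  # force full decoding, which is where the C code runs
    except Exception as exc:
        # previously the equivalent of `except Exception: pass`, which
        # silently hides early aborts; printing to stderr makes the
        # failure visible in forkserver.err.
        print(f"harness caught: {exc!r}", file=sys.stderr)
```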

For pillow_pcx_example.py I get about 25 execs/s on a fairly old machine. Note that it's important to use small examples as inputs (typically <1k). I also tend to only use afl-clang-fast for binary instrumentation - it's significantly better than the older gcc hack. But this is all just standard AFL stuff.
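A quick corpus-hygiene check for the <1k guideline, as a sketch - the input directory name here is a placeholder for whatever you pass to afl-fuzz with -i:

```python
import os

INPUT_DIR = "in"  # illustrative: your AFL input directory
MAX_SIZE = 1024   # ~1 KiB guideline mentioned above

# flag any seed file large enough to drag down execs/s
for name in sorted(os.listdir(INPUT_DIR)):
    path = os.path.join(INPUT_DIR, name)
    size = os.path.getsize(path)
    if size > MAX_SIZE:
        print(f"{name}: {size} bytes - consider shrinking this seed")
```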