googleprojectzero / Jackalope

Binary, coverage-guided fuzzer for Windows, macOS, Linux and Android
Apache License 2.0

Question about OOM in GrammarMutator #33

Open Kiprey opened 2 years ago

Kiprey commented 2 years ago

Hello! I would like to ask whether OOM is considered in GrammarMutator. There seems to be no limit on the size of interesting_trees and the various *_candidates in GrammarMutator, which can lead to OOM during long fuzzing runs.

If it is a bug, could you please fix it? Or am I just missing something?

Thanks.

ifratric commented 2 years ago

Hi! It's certainly possible to hit OOM if the samples are large and the corpus keeps growing. However, I'm curious if you can share more about your setup (either here or privately), because I used the grammar engine to fuzz reasonably complex targets (such as the v8 JavaScript engine) and didn't encounter such errors in practice (IIRC the fuzzing workers had 8 or 16GB of RAM each, not sure anymore).

Jackalope has some mechanisms to decrease RAM usage, such as -keep_samples_in_memory=0; however, this might not help much in grammar mode. The interesting_trees structure you mentioned keeps all samples in the corpus in tree form, so that those trees can be combined with the currently fuzzed sample to generate new interesting samples. To save memory, you could simply comment out this line https://github.com/googleprojectzero/Jackalope/blob/bd49b2c447be91d9f7f33bd4e176e4390f4148d0/mutators/grammar/grammarmutator.cpp#L85, but those types of mutations would no longer work.
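To illustrate the trade-off, the pattern looks roughly like the standalone sketch below. This is not the actual Jackalope code; the type and method names are placeholders, and only the idea of retaining every interesting tree comes from the discussion above.

```cpp
#include <vector>

// Standalone illustration (not the actual Jackalope code) of the trade-off
// described above: every interesting sample's parse tree is retained so it
// can later be spliced into new mutations, so the container only ever grows.
struct Tree;  // stand-in for Jackalope's grammar tree type

struct GrammarCorpusSketch {
  std::vector<Tree *> interesting_trees;

  void OnNewCoverage(Tree *tree) {
    // Keeping the tree enables cross-sample (splice) mutations later, but
    // memory grows with the corpus; commenting this out frees that memory
    // at the cost of disabling only those mutation types.
    interesting_trees.push_back(tree);
  }
};
```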

However, there are several other, easier things you can do to reduce memory usage in grammar fuzzing mode.

Having said all of the above, if there is a memory leak somewhere, of course this would be considered a bug.

Kiprey commented 2 years ago

Thank you for your reply! I combined the Jackalope grammar mutation strategy with AFL (which I call afl-jackalope) to better fuzz binaries with available source code. So I'm sorry I can't provide the OOM setup, as I'm not using Jackalope directly.

For Jackalope itself, simply removing interesting_trees.push_back(context->tree) may not reduce memory consumption, because context is still held by other data structures, such as sample_queue.

For afl-jackalope, once interesting_trees reaches a certain size, I simply have it delete half of its nodes to prevent OOM; this works quite well now.
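Roughly, the idea is something like the sketch below (simplified, not the exact afl-jackalope code; the threshold is an arbitrary placeholder and freeing the dropped trees is handled separately):

```cpp
#include <cstddef>
#include <vector>

// Simplified sketch of the trimming described above, not the exact
// afl-jackalope code: kMaxInterestingTrees is an arbitrary placeholder
// threshold, and freeing the dropped trees is handled elsewhere.
constexpr std::size_t kMaxInterestingTrees = 10000;

template <typename TreePtr>
void TrimInterestingTrees(std::vector<TreePtr> &interesting_trees) {
  if (interesting_trees.size() < kMaxInterestingTrees) return;
  // Drop the older half of the retained trees to bound memory use.
  interesting_trees.erase(
      interesting_trees.begin(),
      interesting_trees.begin() +
          static_cast<std::ptrdiff_t>(interesting_trees.size() / 2));
}
```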

I read the source code of Jackalope carefully. Although there are quite a few structures that never release memory, their lifecycles are correct, so there is no indication of a memory issue inside.

I have solved the OOM problem. However, I still have some questions about the corpus:

- Can jackalope run without inputs?
- Is there any way to convert from common JS to serialized tree (jackalope input format)?
- Would the input of jackalope have an impact on its mutation process? And if so, what is the approximate impact?

Please excuse my rough English. Best wishes to you! :)

ifratric commented 2 years ago

Thanks for reviewing the code :)

"Can jackalope run without inputs?" Not sure what you mean, did you mean with inputs. Because in the grammar fuzzing mode, it's actually expected that Jackalope will start without inputs; and the initial inputs will be generated using the provided grammar. See more info in this thread https://github.com/googleprojectzero/Jackalope/issues/26

"Is there any way to convert from common JS to serialized tree (jackalope input format)?" As noted in the other thread, there is no guaranteed unique way to parse a sample generated by a context-free grammar back into its grammar representation. We could maybe do a "best effort", but this is not implemented in Jackalope currently.

"Would the input of jackalope have an impact on its mutation process? And if so, what is the approximate impact?" Sorry, not sure what you mean here - can you give me an example?

Kiprey commented 2 years ago

Thanks for your kind reply. The last question asks whether the corpus matters for the fuzzing process or the coverage results. For example, in an AST fuzzer, good corpus inputs can significantly improve the fuzzer's performance, while junk corpus inputs contribute little to the overall fuzzing process.

Now I can probably deduce the answer to the last question from your kind answer and the code implementation. Thank you for your great work and patient replies. I have no other questions.

Best wishes.