fuzzware-fuzzer / fuzzware

Fuzzware's main repository. Start here to install.
Apache License 2.0

pipeline.py - Too many fuzzer sessions died #35

Closed: B03901108 closed this issue 9 months ago

B03901108 commented 10 months ago

Hi, I am trying to use Fuzzware with MMIO models that are provided up front. First, in P2IM/Steering_Control, I ran "fuzzware pipeline" for 15 minutes with the default config.yml (fuzzer: AFL++, #fuzzers: 2) to obtain the "fuzzware-project" directory, in particular the mmio_config.yml inside it. Then, I merged the default config.yml and the mmio_config.yml into a new config.yml and re-ran "fuzzware pipeline" with this new config.yml (all other settings unchanged). In the re-run, the supposedly 15-minute fuzzing run terminated within 4 minutes with the ERROR message "pipeline.py - Too many fuzzer sessions died, exiting. Check for bogus MMIO accesses created from fuzzer-controlled firmware execution."

I tried 1) switching between AFL and AFL++ and 2) adjusting the number of fuzzers, but neither helped. Do you have any suggestions for this situation? Thank you.

Scepticz commented 10 months ago

What config did you use? Re-using MMIO models is a common and expected use case, and I have not had issues with it. Instead of manually merging the MMIO config, I would recommend using the include syntax in config.yml. In that scenario you would first copy the MMIO models from the latest fuzzware-project next to the original config.yml and then include them in the other config, e.g.:

include:
- mmio_config.yml

# Original contents starting from here...

That way less can go wrong in the process of merging the configs.
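
As a minimal sketch of that copy step, assuming the learned models end up in an mmio_config.yml inside the latest mainXXX directory (the directory name main001 below is hypothetical; locate the file in your own fuzzware-project first):

# Run from the target directory that holds the original config.yml,
# e.g. P2IM/Steering_Control. Copy the learned MMIO models from the
# latest pipeline run next to the config so the include can pick them up.
cp fuzzware-project/main001/mmio_config.yml ./mmio_config.yml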

B03901108 commented 10 months ago

@Scepticz Over the weekend, I played with the latest version of Fuzzware. I narrowed down the cause of my fuzzer deaths to an extra C-level basic-block hook I added (in native_hook.c) for checking whether an incoming block indicates entering or leaving an ISR. Still, I could not find the root cause.

Does Fuzzware log the errors/causes of a fuzzer's death? If not, do you have any recommended ways of diagnosing fuzzer deaths? Also, are there any Fuzzware-specific mechanisms that force a fuzzer death? Thank you very much.

Scepticz commented 10 months ago

Often, when the fuzzer dies, it dies on its latest input, which for AFL is still available in the .cur_input file. What you can try is to run this input and see whether an issue is indicated:

for in_path in fuzzware-project/main*/fuzzers/fuzzer*/.cur_input; do fuzzware emu -v $in_path; done

A second option is to run a single AFL instance instead of the full pipeline. Here you would re-use the latest config from one of the erroring fuzzware-project/mainXXX directories and run AFL via the fuzzware fuzz utility (see fuzzware fuzz -h for its options).

If you are using AFL++, there are additional environment variables that you can set for debugging, such as AFL_DEBUG_CHILD (see https://github.com/AFLplusplus/AFLplusplus/blob/stable/docs/env_variables.md).
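
As a rough sketch, assuming the variable is exported in the shell that starts the fuzzer and is passed through to the spawned AFL++ process (the actual arguments of the fuzz subcommand are omitted here; check fuzzware fuzz -h):

# AFL_DEBUG_CHILD makes AFL++ forward the target's stdout/stderr instead of
# suppressing it, which helps to see why a fuzzer session dies.
export AFL_DEBUG_CHILD=1
fuzzware fuzz -h   # list the options, then start a single instance with the erroring mainXXX config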

B03901108 commented 9 months ago

The probabilistic crashes were caused by my changes to Fuzzware's test-harness code. I did not notice that in fork-server mode, the test harness first starts a child process to run part of the given test case, then re-runs the test case itself up to the point where a snapshot is taken, and finally runs the rest of the test case multiple times starting from that snapshot. My previous modifications led to memory errors because they assumed that the test harness runs each test case only once.

I have now taken the above into account when playing with Fuzzware's code. The fuzzing now works fine. Thank you.