google / fuzzbench

FuzzBench - Fuzzer benchmarking as a service.
https://google.github.io/fuzzbench/
Apache License 2.0

Experiment request for custom benchmarks #2026

Open ardier opened 3 months ago

ardier commented 3 months ago

Description

Add mutant-based benchmarks and update the experiment data in the YAML file. This experiment introduces only new benchmarks, since we want to address the saturated seed corpus problem through corpus reduction techniques.

We have decided to use AFL and AFL++ for this experiment to observe whether the choice of fuzzer affects the outcomes.

We use four benchmarks:

  1. The original seed corpus from the lcms_cms_transform_fuzzer benchmark
  2. An unfiltered seed corpus drawn from our saturated corpus
  3. The seed corpus with filtering strategy one applied
  4. The seed corpus with filtering strategy two applied
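For context, each FuzzBench benchmark lives in its own directory under benchmarks/, defined by a benchmark.yaml plus a Dockerfile and build script. A minimal sketch of what one of the new variants might look like, assuming the same field layout as the existing lcms_cms_transform_fuzzer benchmark (the exact schema should be verified against the repo):

```yaml
# benchmarks/lcms_cms_transform_fuzzer_all_seeds/benchmark.yaml
# Sketch only: field names mirror existing OSS-Fuzz-based benchmarks;
# check the FuzzBench repo for the exact required fields.
project: lcms
fuzz_target: cms_transform_fuzzer
```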
google-cla[bot] commented 3 months ago

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View this failed invocation of the CLA check for more information.

For the most up to date status, view the checks section at the bottom of the pull request.

ardier commented 3 months ago

Updated the Description.

ardier commented 2 months ago

@DonggeLiu @jonathanmetzman Could you please have a look?

DonggeLiu commented 2 months ago

Hi @ardier, we are happy to run experiments for you, but could you please:

  1. Move the seeds directory in this PR to cloud storage (or e.g. a separate GitHub repo) and download it in the Dockerfile? Otherwise, the 'Files changed' tab becomes too slow or crashes.
  2. Would you mind making a trivial modification to service/gcbrun_experiment.py? This will allow me to launch experiments in this PR before merging. For example, you could add a dummy comment : )
  3. In addition, could you please write your experiment request in this format? You can swap the --experiment-name, --fuzzers, --benchmarks parameters with your values:
    /gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name <YYYY-MM-DD-NAME>  --fuzzers <FUZZERS> --benchmarks <BENCHMARKS>

We would really appreciate that.
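Regarding item 1 above, the seed corpus can be fetched during the image build instead of being committed to the PR. A hedged Dockerfile sketch (the URL is a placeholder, and $OUT follows the OSS-Fuzz output-directory convention; adjust to wherever the benchmark expects its seeds):

```dockerfile
# Sketch: download the seed corpus at build time rather than
# checking the files into the PR. The URL below is a placeholder.
RUN wget -q https://example.com/seed-corpus.zip -O /tmp/seeds.zip && \
    unzip -q /tmp/seeds.zip -d $OUT/seeds && \
    rm /tmp/seeds.zip
```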

ardier commented 2 months ago

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-09-03-afl-mutants --fuzzers afl aflplusplus --benchmarks lcms_cms_transform_fuzzer lcms_cms_transform_fuzzer_all_seeds lcms_cms_transform_fuzzer_minimized_mutants lcms_cms_transform_fuzzer_dominator_mutants

ardier commented 2 months ago

@DonggeLiu, apologies that this took a while for me to get to. I have applied the changes you asked for. Please let me know if I should be taking any additional steps.

DonggeLiu commented 2 months ago

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-09-03-afl-mutants --fuzzers afl aflplusplus --benchmarks lcms_cms_transform_fuzzer lcms_cms_transform_fuzzer_all_seeds lcms_cms_transform_fuzzer_minimized_mutants lcms_cms_transform_fuzzer_dominator_mutants

ardier commented 2 months ago

Hello. I don't see the results of this experiment anywhere. Am I missing something, or do I need to take other steps to generate the reports?

DonggeLiu commented 2 months ago

Sorry @ardier, it appears Cloud Build failed to pick up the previous experiment request command.

Let me retry this.

DonggeLiu commented 2 months ago

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-09-11-afl-mutants --fuzzers afl aflplusplus --benchmarks lcms_cms_transform_fuzzer lcms_cms_transform_fuzzer_all_seeds lcms_cms_transform_fuzzer_minimized_mutants lcms_cms_transform_fuzzer_dominator_mutants

ardier commented 2 months ago

No problem. Thank you for looking into this.

DonggeLiu commented 2 months ago

Hi @ardier, the experiment request failed again for the same reason, and the cloud logs provide no further details.

Let's do it again, and I will spend time debugging it if it fails again.

DonggeLiu commented 2 months ago

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-09-12-afl-mutants --fuzzers afl aflplusplus --benchmarks lcms_cms_transform_fuzzer lcms_cms_transform_fuzzer_all_seeds lcms_cms_transform_fuzzer_minimized_mutants lcms_cms_transform_fuzzer_dominator_mutants

DonggeLiu commented 2 months ago

Experiment 2024-09-12-afl-mutants data and results will be available later at: the experiment data, the experiment report, and the experiment report (experimental).

DonggeLiu commented 2 months ago

A quick update on this:

  1. The experiment launched successfully this time (finally).
  2. No report was generated because of a known issue with llvm-profdata coverage measurement: coverage could not be measured, so the report could not be generated.
  3. According to the cloud log, so far the issue comes from the lcms_cms_transform_fuzzer_dominator_mutants benchmark only.
  4. I will rerun the experiment without that benchmark.
  5. Unfortunately, the error occurred on all benchmarks; I reckon that is because they are all based on lcms.
  6. Would you mind using other benchmarks? If not, I can run some other benchmarks and let you know which ones work.
  7. We will look into ways to fix this, but I am currently fully occupied with other tasks, and it may take weeks before I can get back to this.