Closed tokatoka closed 3 months ago
The command is
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-05-12-libafl --fuzzers libafl_saturation
Hi @DonggeLiu, this is the longer fuzzer experiment I was talking about last month. For now, can we check whether this fuzzer survives the 24-hour run?
Sure! It's actually 23 hours : )
Experiment 2024-05-12-libafl
data and results will be available later at:
The experiment data.
The experiment report.
The experiment report(experimental).
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-05-12-libafl --fuzzers libafl_saturation
Looks like it was not built. Was something wrong?
Never mind 😅 I think I just forgot to refresh the webpage before checking the result.
Hello @DonggeLiu, I checked the log and I think the run was successful. Can we start the 48-hour run that we discussed last month?
Sure! Would you mind modifying experiment-config.yaml as discussed? Change this to 2 days: https://github.com/google/fuzzbench/blob/master/service/experiment-config.yaml#L6 Change this to false: https://github.com/google/fuzzbench/blob/master/service/experiment-config.yaml#L14
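For reference, the first of the two edits might look like the following, assuming the duration field at the linked line is `max_total_time` and is given in seconds (verify against the actual file; the second edit is just flipping the boolean at the other linked line to false):

```yaml
# Hypothetical excerpt of service/experiment-config.yaml.
# 2 days = 2 * 24 * 60 * 60 = 172800 seconds.
max_total_time: 172800
```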
@jonathanmetzman please let us know if I missed anything. E.g., Shall we run a separate 48-hour exp for base fuzzers beforehand? I reckon we only have their 24-hour results.
done
E.g., Shall we run a separate 48-hour exp for base fuzzers beforehand?
yeah i'm interested to see that too :)
Sorry this took so long; @jonathanmetzman and I were extremely busy last week. We will start this tomorrow (if not today).
BTW, may I know which baseline fuzzers you are interested in comparing against? Here are the options, but I presume not all of them are useful (e.g., some have not been updated in years).
i'd like to see,
please!
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-05-22-base --fuzzers afl aflfast aflplusplus centipede libafl libfuzzer
Experiment 2024-05-22-bases
data and results will be available later at:
The experiment data.
The experiment report.
The experiment report(experimental).
Hi @tokatoka, while we are waiting for the base fuzzers experiment, would you like to run yours in parallel?
This can save some waiting time (particularly if some benchmarks fail), but it requires you to manually combine the two results when both are ready. In addition, the report won't include the Unique code coverage plots section under each benchmark.
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-05-22-bases --fuzzers afl aflfast aflplusplus centipede libafl libfuzzer
For me, I can wait; it's better for me to see the combined results.
It seems they are stuck after 35 hours..?
But well, it's fine. Can we start the experiment for our fuzzer too? @DonggeLiu
Sorry, I was traveling this week and did not check email frequently. @jonathanmetzman, could you please have a look at this? It appears to be stuck at around 35 hours.
can we start the experiment for our fuzzer too? @DonggeLiu
We might have to understand why it got stuck first.
I can see a lot of errors related to requesting metadata; maybe they are related?
network error when requesting metadata
Is there anything I can help with? 😃
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-06-04-bases --fuzzers afl
@jonathanmetzman, gentle ping : )
I suspect that this is the measurement bottleneck again, probably because the experiment doubles the duration.
Let me restart the experiment with afl only.
If that works, I will restart the experiments with one fuzzer each.
Experiment 2024-06-04-bases
data and results will be available later at:
The experiment data.
The experiment report.
The experiment report(experimental).
For me to copy and paste later:
gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-05-22-bases --fuzzers aflfast
gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-05-22-bases --fuzzers aflplusplus
gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-05-22-bases --fuzzers centipede
gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-05-22-bases --fuzzers libafl
gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-05-22-bases --fuzzers libfuzzer
Hi @tokatoka, thanks for waiting. The report above confirms that the previous failure was caused by the measurement bottleneck, so I will run only one fuzzer per experiment below. Once they finish, I will run yours in another experiment.
We can merge the statistics manually later. I don't think we can get unique coverage for each fuzzer this way, but it should give us the overall coverage info as usual. Hope that's OK.
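Since the base-fuzzer runs and ours end up in separate experiments, the combined statistics have to be assembled by hand. A minimal, self-contained sketch of that merge step, assuming a data.csv-style layout with fuzzer/benchmark/trial_id/time/edges_covered columns (the columns in the real experiment data download may differ):

```python
import csv
import io
import statistics

# Stand-ins for the per-experiment CSV downloads; in practice these
# would be read from the two experiments' data files.
base_csv = """fuzzer,benchmark,trial_id,time,edges_covered
afl,libpng,1,86400,2100
afl,libpng,2,86400,2150
"""
ours_csv = """fuzzer,benchmark,trial_id,time,edges_covered
libafl_saturation,libpng,1,86400,2300
libafl_saturation,libpng,2,86400,2280
"""

def load(text):
    """Parse one experiment's CSV into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(text)))

# Concatenate the rows from both experiments into one table.
merged = load(base_csv) + load(ours_csv)

# Median final coverage per fuzzer, the headline number a combined
# report would show (unique-coverage plots cannot be recovered this way).
by_fuzzer = {}
for row in merged:
    by_fuzzer.setdefault(row["fuzzer"], []).append(int(row["edges_covered"]))
medians = {f: statistics.median(v) for f, v in by_fuzzer.items()}
print(medians)  # → {'afl': 2125.0, 'libafl_saturation': 2290.0}
```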
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-06-07-bases-aflfast --fuzzers aflfast
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-06-07-bases-aflpp --fuzzers aflplusplus
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-05-22-bases-centipede --fuzzers centipede
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-06-07-bases-libaf --fuzzers libafl
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-06-07-bases-libfuzzer --fuzzers libfuzzer
Experiment 2024-06-07-bases-aflfast
data and results will be available later at:
The experiment data.
The experiment report.
Experiment 2024-06-07-bases-aflpp
data and results will be available later at:
The experiment data.
The experiment report.
Experiment 2024-05-22-bases-centipede
data and results will be available later at:
The experiment data.
The experiment report.
Experiment 2024-06-07-bases-libaf
data and results will be available later at:
The experiment data.
The experiment report.
Experiment 2024-06-07-bases-libfuzzer
data and results will be available later at:
The experiment data.
The experiment report.
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-06-07-bases-libfuzzer --fuzzers libfuzzer
@DonggeLiu Thank you. They seem to be working. Next, can you run my fuzzer?
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-05-12-libafl --fuzzers libafl_saturation
Sure, I presume this is the right fuzzer?
gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-06-11-libafl-sat --fuzzers libafl_saturation
BTW, I noticed that the afl++ exp failed, likely due to measurement issues again.
Is that an important benchmark for you? I can further split the experiment by benchmarks if that's necessary.
Yes that's it.
No, that is not important, so I don't need an extra experiment for aflpp.
Thank you @DonggeLiu
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-06-13-libafl-sat --fuzzers libafl_saturation
Thanks for the confirmation and waiting, @tokatoka.
Experiment 2024-06-13-libafl-sat
data and results will be available later at:
The experiment data.
The experiment report.
The experiment report(experimental).
Thank you 👍 We can close this