Closed tokatoka closed 5 months ago
This PR is ready. The php benchmark is broken, but that is due to clang 17; it is not used by FuzzBench anyway, so it's fine.
@DonggeLiu Could you run the experiment? The command is
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-03-12-libafl --fuzzers libafl_pre012_0 libafl_pre012_1 libafl_pre012_2 libafl_pre012_3
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-03-13-libafl --fuzzers libafl_pre012_0 libafl_pre012_1 libafl_pre012_2 libafl_pre012_3
Hmm... there is a typo in the fuzzer name: libafl_pre012_0 should be libafl_pre_012_0.
I will rerun.
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-03-14-libafl --fuzzers libafl_pre_012_0 libafl_pre_012_1 libafl_pre_012_2 libafl_pre_012_3
oops, sorry 😓
It seems like they are still not working yet
Ah, this is due to a problem with gcbrun.

After a comment triggers an experiment, gcloud will parse the latest comment to run an experiment. In this case, your comment was posted immediately after mine, hence gcloud incorrectly parsed your comment and reported 'Experiment not requested.'

This is not your fault; it's a problem in gcloud's comment parsing. Previously, we looped through the comments backwards until reaching a comment that contains /gcbrun, but that had other problems too. A simple mitigation is to avoid leaving comments until the experiment CI finishes. We will come up with more sophisticated solutions later.
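For illustration, a minimal sketch of the backwards-scan approach mentioned above. The function name and comment structure here are hypothetical, not FuzzBench's actual trigger code:

```python
# Hypothetical sketch of scanning PR comments newest-first for a /gcbrun
# command, instead of blindly taking the latest comment (which may be
# unrelated chatter posted after the real request).

def find_experiment_command(comments):
    """Return the newest comment body that requests an experiment, or None.

    `comments` is assumed to be ordered oldest-first, so we iterate in
    reverse to find the most recent /gcbrun request.
    """
    for body in reversed(comments):
        if body.lstrip().startswith('/gcbrun'):
            return body
    return None  # corresponds to reporting 'Experiment not requested.'


comments = [
    '/gcbrun run_experiment.py -a --experiment-name 2024-03-14-libafl',
    'oops, sorry',  # unrelated chatter posted after the command
]
print(find_experiment_command(comments))
```

As noted, this still misfires in some cases (e.g. an old /gcbrun comment shadowing a newer non-command reply is handled, but edited or deleted comments are not), which is why it was only a partial fix.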
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-03-14-libafl --fuzzers libafl_pre_012_0 libafl_pre_012_1 libafl_pre_012_2 libafl_pre_012_3
Experiment 2024-03-14-libafl data and results will be available later at:
The experiment data.
The experiment report.
Obviously there are some bugs in the recent version of LibAFL. 😞 I made more fuzzer variants that check out different commits between the recent and the previous version to debug these.
Could you help by running an additional test? I selected 5 benchmark programs that showed a large difference.
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-03-12-libafl --benchmarks sqlite3_ossfuzz stb_stbi_read_fuzzer libxml2_xml harfbuzz_hb-shape-fuzzer bloaty_fuzz_target --fuzzers libafl_231002 libafl_231012 libafl_231116 libafl_231201 libafl_231226 libafl_240123 libafl_240219 libafl_240312
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-03-15-libafl --benchmarks sqlite3_ossfuzz stb_stbi_read_fuzzer libxml2_xml harfbuzz_hb-shape-fuzzer bloaty_fuzz_target --fuzzers libafl_231002 libafl_231012 libafl_231116 libafl_231201 libafl_231226 libafl_240123 libafl_240219 libafl_240312
Experiment 2024-03-15-libafl data and results will be available later at:
The experiment data.
The experiment report.
Hello. I think we fixed the problem in 2 possible ways. Can you rerun to see if the problem is really fixed, and which fix is better, @DonggeLiu?
Thank you. The command is
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-03-12-libafl --fuzzers libafl_240312 libafl_fix1 libafl_fix2
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-03-19-libafl --fuzzers libafl_240312 libafl_fix1 libafl_fix2
Experiment 2024-03-19-libafl data and results will be available later at:
The experiment data.
The experiment report.
May I ask why proj4_proj_crs_to_crs_fuzzer was removed recently?
https://www.fuzzbench.com/reports/experimental/2024-03-19-libafl/index.html: this result does not contain proj4 for our updated fuzzer, and https://storage.googleapis.com/fuzzbench-data/index.html?prefix=2024-03-19-libafl/build-logs/ does not show a build log for it. So, if I understand correctly, this benchmark was simply not built, right?
Hello, thanks for the previous run. After reviewing the result, we suspect there's another bug, so we would like to run another test. The command is
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-03-21-libafl --benchmarks sqlite3_ossfuzz freetype2_ftfuzzer lcms_cms_transform_fuzzer bloaty_fuzz_target --fuzzers libafl_231002 libafl_231005 libafl_231006 libafl_231011 libafl_231012 libafl_latest
Ping :) @DonggeLiu
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-03-25-libafl --benchmarks sqlite3_ossfuzz freetype2_ftfuzzer lcms_cms_transform_fuzzer bloaty_fuzz_target --fuzzers libafl_231002 libafl_231005 libafl_231006 libafl_231011 libafl_231012 libafl_latest
Experiment 2024-03-25-libafl data and results will be available later at:
The experiment data.
The experiment report.
Ping :) @DonggeLiu
Sorry! Done above.
Thanks, we confirmed the root cause is the 10/06 commit. I have now pushed a potential fix for it. Could you run it again to see if it works?
The command is
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-03-21-libafl --benchmarks sqlite3_ossfuzz freetype2_ftfuzzer lcms_cms_transform_fuzzer bloaty_fuzz_target --fuzzers libafl_231005 libafl_231006 libafl_inlined
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-03-26-libafl --benchmarks sqlite3_ossfuzz freetype2_ftfuzzer lcms_cms_transform_fuzzer bloaty_fuzz_target --fuzzers libafl_231005 libafl_231006 libafl_inlined
Experiment 2024-03-26-libafl data and results will be available later at:
The experiment data.
The experiment report.
Hi, we pushed the last possible fix, ready for the experiment. Could you run this?
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-03-27-libafl --fuzzers libafl_231002 libafl_231005 libafl_latest libafl_latest2
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-03-28-libafl --fuzzers libafl_231002 libafl_231005 libafl_latest libafl_latest2
Experiment 2024-03-28-libafl data and results will be available later at:
The experiment data.
The experiment report.
Hello @DonggeLiu. We have another potential improvement, but we don't know if it's good. Could you test this too?
The command is
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-04-04-libafl --fuzzers libafl_latest2 libafl_latest3
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-04-05-libafl --fuzzers libafl_latest2 libafl_latest3
Experiment 2024-04-05-libafl data and results will be available later at:
The experiment data.
The experiment report.
I have updated the libafl fuzzer. This PR is ready to be merged.
We will soon release LibAFL 0.12. Before doing so, we want to do a FuzzBench run, since this update includes lots of changes to the core part of LibAFL.
Each of libafl_pre012_0, libafl_pre012_1, libafl_pre012_2, and libafl_pre012_3 uses the commit right after one of these changes.