tokatoka closed this pull request 3 months ago.
@DonggeLiu Could you run the CI?
@DonggeLiu Ping
Done! I was on leave last week.
it looks like every time I update it needs additional approval 😅 can you run it again?
Do you happen to know any way to allow certain users (like you) to always be able to run CIs?
I think you can make me "Collaborator".
Oh we will have to discuss this with other owners of this repo. Is there a more lightweight alternative?
I think all the options are written here, but it looks like there's no functionality to allow specific users to run CI.
but it's strange, because previously you didn't have to manually run it for me, right?
I am not sure, maybe I did.
I'm still debugging it :)
I think I resolved the problem, could you run it again?
/gcbrun
I've changed things so we shouldn't need to approve every time actions wants to run
thank you!
@DonggeLiu The CI looks good; can we run the experiment? The command is
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-05-14-libafl-pruner --fuzzers libafl libafl_latest libafl_r120_force_10 libafl_r120_force_50 libafl_r120_last_10 libafl_r120_last_50 libafl_r30_force_10 libafl_r30_force_50 libafl_r30_last_10 libafl_r30_last_50
Sure! We are still resolving the bottleneck in measurement so we cannot run too many fuzzers in one experiment. Ideally let's keep ~5 fuzzers in each. How would you like to group them?
ok
This is group A.
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-05-14-libafl-pruner --fuzzers libafl libafl_latest libafl_r120_force_10 libafl_r120_force_50 libafl_r120_last_10
This is group B.
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-05-14-libafl-pruner --fuzzers libafl_r120_last_50 libafl_r30_force_10 libafl_r30_force_50 libafl_r30_last_10 libafl_r30_last_50
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-08-02-libafl-pruner --fuzzers libafl libafl_latest libafl_r120_force_10 libafl_r120_force_50 libafl_r120_last_10
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-08-02-libafl-pruner --fuzzers libafl_r120_last_50 libafl_r30_force_10 libafl_r30_force_50 libafl_r30_last_10 libafl_r30_last_50
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-08-02-libafl-pruner-1 --fuzzers libafl_r120_last_50 libafl_r30_force_10 libafl_r30_force_50 libafl_r30_last_10 libafl_r30_last_50
it looks like it didn't run, unfortunately
Hi @tokatoka, not my PR so sorry to intrude; it looks like your experiment did start, as the experiment data was created and the logs indicate it's running here. I've had the same thing happen on the last 2 runs of my PR here; the coverage sub-directory in the data never gets created, even though the fuzzer is running.
I wonder if there's anything obvious in the logs? (I guess one of the FB team can see these?)
thanks for the info! it looks like all the experiments that began today are affected..
This is likely due to `no space left on device` errors.
@gustavogaldinoo could you please look into this? Thanks! I've removed all running experiments since none of them produced any results.
Also noticed many `Profdata files merging failed` errors in the cloud log, which may block the experiment report generation. Related: https://github.com/google/fuzzbench/pull/2011#issuecomment-2270197163.
BTW, will this PR generate a large corpus? This may explain the tons of no space left on device errors.
Yes, I'm thinking about a fix for it now.
Any chance you ran this somewhere in the end? It would be interesting to see the results, even if it's only a subset of the available benchmarks that don't use much storage (e.g. open_h264 looks bad for storage, as do proj4 and woff2).
No, I didn't run this in the end.
This PR tries a new idea from https://mschloegel.me/paper/schiller2023fuzzerrestarts.pdf
I implemented a fuzzer that periodically resets the corpus every 30/120 minutes, either unconditionally or only when no novelty has been found in that window.
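The actual LibAFL integration isn't shown in this thread, but the variant names above (`r30`/`r120` for the interval, `force`/`last` for the trigger, `10`/`50` for what's kept) suggest a policy like the following minimal sketch. Everything here (`ResetPolicy`, its fields and methods) is a hypothetical illustration, not code from the PR; in particular, the assumption that `10`/`50` means "fraction of the corpus kept" and that entries are pruned oldest-first is mine.

```rust
use std::time::{Duration, Instant};

/// Hypothetical corpus-reset policy illustrating the idea described above:
/// every `interval`, either always reset ("force" variants) or reset only
/// when no new coverage was found during the interval ("last" variants).
/// On reset, only a fraction of the corpus is kept (assumed here to be
/// what the 10/50 suffix means).
struct ResetPolicy {
    interval: Duration,
    force: bool,              // "force": reset unconditionally at each interval
    keep_fraction: f64,       // e.g. 0.10 or 0.50
    last_reset: Instant,
    novelty_since_reset: bool,
}

impl ResetPolicy {
    fn new(interval: Duration, force: bool, keep_fraction: f64) -> Self {
        Self {
            interval,
            force,
            keep_fraction,
            last_reset: Instant::now(),
            novelty_since_reset: false,
        }
    }

    /// Call whenever the fuzzer discovers new coverage.
    fn on_novelty(&mut self) {
        self.novelty_since_reset = true;
    }

    /// Decide whether the corpus should be pruned now; also restarts the
    /// interval timer and clears the novelty flag once the interval elapses.
    fn should_reset(&mut self, now: Instant) -> bool {
        if now.duration_since(self.last_reset) < self.interval {
            return false;
        }
        let reset = self.force || !self.novelty_since_reset;
        self.last_reset = now;
        self.novelty_since_reset = false;
        reset
    }

    /// Prune the corpus down to `keep_fraction`, dropping the oldest entries.
    fn prune(&self, corpus: &mut Vec<String>) {
        let keep = ((corpus.len() as f64) * self.keep_fraction).ceil() as usize;
        let drop = corpus.len().saturating_sub(keep);
        corpus.drain(..drop);
    }
}
```

In a real LibAFL fuzzer this check would sit in the fuzz loop (or an event hook), with `on_novelty` driven by the feedback that reports new coverage.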