Closed DonggeLiu closed 3 months ago
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-08-12-dg --fuzzers aflplusplus centipede honggfuzz libfuzzer --benchmarks stb_stbi_read_fuzzer openh264_decoder_fuzzer
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-08-12-2023 --fuzzers aflplusplus centipede honggfuzz libfuzzer --benchmarks stb_stbi_read_fuzzer openh264_decoder_fuzzer
Experiment 2024-08-12-2023 data and results will be available later at:
- The experiment data.
- The experiment report.
- The experiment report (experimental).
This likely failed because both fuzz targets failed to generate coverage reports, e.g.:
Not sure if this is related: OSS-Fuzz's build status page shows openh264_decoder_fuzzer failed.
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-08-13-2023-libfuzzer-1 --fuzzers libfuzzer
Experiment 2024-08-13-2023-libfuzzer-1 data and results will be available later at:
- The experiment data.
- The experiment report.
- The experiment report (experimental).
Report is back : ) @addisoncrump I will wait a bit longer before merging this to ensure the report stays alive. Once I merge this to master, could you please update your PR and bring back the changes you added? Thanks!
Sure, I'll rebase.
@DonggeLiu I am able to build both the openh264 and stb_stbi fuzzers as in master locally with no issue. Like #2021, I think this is a cache issue.
I see, thanks for the info! Given that you are investigating this, is there any help I can provide? For example, if you think some more cloud build logs can save you time debugging, please feel free to add them and request experiments. I can run them for you and send you the related logs.
The report on this PR is still not ready, likely because some VMs were preempted. I will give it one more day just to be 100% safe.
Given that you are investigating this, is there any help I can provide?
Ah, I was investigating the specific issue with the bug benchmark. I don't think I can offer much help with the CI or the fuzzbench infra directly. I can say, however, that the coverage benchmarks you removed do work as expected locally with test-run. I need to check if the coverage measurer works as anticipated; maybe this needs to be updated instead.
Ah, @DonggeLiu, try running `make test-run-coverage-all`. It complains that it can't find `bloaty_fuzz_target` on master :eyes:
@DonggeLiu I am able to build both openh264 and stb_stbi fuzzers as in master locally with no issue. Like https://github.com/google/fuzzbench/pull/2021, I think this is a cache issue.
Same for me, they are working. I don't think they should be removed.
I see, thanks @addisoncrump and @tokatoka . I've brought them back.
The experiment is about to finish; I will merge this tomorrow morning.
I confirmed the coverage measurers build locally as well. Will test when everything has finished building.
Yup, I tested openh264 and stb benchmarks locally and they do perform measurements as anticipated. The issue is with the GCP runs, I would presume a build cache issue.
I see, I reckon this could be due to an incompatible GCP VM environment and LLVM? I will look into this once I finish the other tasks at hand.
Just to double-check @addisoncrump : When you test them locally, did you remove their old local images beforehand?
Thanks for the information again, @addisoncrump!
TBR by @jonathanmetzman.
The experiment proving this works: https://github.com/google/fuzzbench/pull/2023#issuecomment-2285147301
When you test them locally, did you remove their old local images beforehand?
Yes, I do a `docker system prune --all` before every experiment.
I see, thanks for confirming. I will merge this then.
I thought I had seen this long ago. Déjà vu?
The same bug happened 1 year ago https://github.com/google/fuzzbench/pull/1886
Thanks for noticing this, let me see if @jonathanmetzman has more insight once he is back.
Just to reiterate, this is a major threat to validity, especially when cached data is used. The cache completely overwrites the report, so the final report simply shows the last successful experiment. This effectively invalidates all future FuzzBench reports until this issue is resolved.
I think the report generation issue indicates that safeguards should be put in place to simply terminate the experiment in such degenerate cases, since the results are effectively guaranteed to be invalid.
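One way such a safeguard could look, as a minimal sketch: the function name, parameters, and abort condition are all assumptions for illustration, not existing FuzzBench code.

```python
def check_experiment_validity(measured_snapshots, expected_trials):
    """Abort an experiment early when no trial produced any coverage
    measurement, instead of letting report generation fall back to
    stale cached data.

    Hypothetical safeguard sketched from this discussion; real
    FuzzBench code paths differ.
    """
    if expected_trials and not measured_snapshots:
        raise RuntimeError(
            'No trial produced coverage measurements; the report would '
            'only reflect stale cached data. Terminating experiment.')
```

The key design choice is to fail loudly at the measurement stage rather than emit a report that silently reuses old results.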
Fix `TypeError: expected str, bytes or os.PathLike object, not NoneType` in 2024-08-10-test. This happens on many benchmarks+fuzzers. To be investigated later:
- Why `fuzz_target_path` is `None`.
- Why this did not happen in other recent experiments.
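For reference, a `None` fuzz target reaching `os.path.join()` produces exactly this `TypeError`. A minimal defensive check could look like the sketch below; the function name and arguments are hypothetical, not FuzzBench's actual API:

```python
import os


def get_coverage_binary_path(coverage_binaries_dir, benchmark, fuzz_target):
    """Return the path to a benchmark's coverage binary, failing with a
    clear error instead of letting a None fuzz target reach
    os.path.join() and raise an opaque TypeError.

    Hypothetical helper; the real cause of fuzz_target being None is
    still under investigation in this PR.
    """
    if fuzz_target is None:
        raise ValueError(
            f'No fuzz target resolved for benchmark {benchmark!r}.')
    return os.path.join(coverage_binaries_dir, benchmark, fuzz_target)
```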
I thought I had seen this long ago. Déjà vu?
Fixing `No such file or directory: '/work/measurement-folders/<benchmark>-<fuzzer>/merged.json'`:
- Remove incompatible benchmarks: `openh264_decoder_fuzzer`, `stb_stbi_read_fuzzer`.
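A hedged sketch of tolerating the missing `merged.json` instead of crashing: the helper name, the return-`None` convention, and the folder layout (taken from the error message above) are assumptions, not FuzzBench's actual measurer code.

```python
import json
import os


def read_merged_coverage(measurement_folder):
    """Load the merged.json produced by the coverage measurer.

    Returns None with a diagnostic message when measurement never
    completed, instead of raising FileNotFoundError and aborting the
    whole report. Hypothetical sketch based on the error in this PR.
    """
    merged_json = os.path.join(measurement_folder, 'merged.json')
    if not os.path.exists(merged_json):
        print(f'Missing {merged_json}: coverage measurement likely failed '
              'for this benchmark-fuzzer pair.')
        return None
    with open(merged_json) as f:
        return json.load(f)
```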