fmeum opened this issue 3 years ago
Weird. This doesn't happen with the helper.py script.
@jonathanmetzman The issue is more widespread and also affects other projects, but non-deterministically. Coverage builds for these projects also failed in the last week due to this issue:
https://oss-fuzz-build-logs.storage.googleapis.com/index.html#apache-commons
https://oss-fuzz-build-logs.storage.googleapis.com/index.html#javaparser
Is it possible that the GET request for the target list in the GCloud bucket has an unexpected response, perhaps due to unspecified character encoding, redirects or rate limiting? It could be helpful to log the full HTTP response here.
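For reference, a minimal sketch of the kind of logging I mean, using Python's urllib against the targets.list URL from this issue (the function name is just for illustration, not code from build_lib):

```python
import urllib.request

# The URL from this issue; build_lib constructs it from the project name.
TARGETS_LIST_URL = ('https://storage.googleapis.com/clusterfuzz-builds/'
                    'jackson-dataformat-xml/targets.list.address')


def fetch_and_log_targets_list(url=TARGETS_LIST_URL):
    """Fetches the targets list and logs status, headers and raw body."""
    with urllib.request.urlopen(url) as response:
        body = response.read()
        print('Status:', response.status)
        print('Headers:', dict(response.headers))
        # Print the raw bytes first so any encoding problem is visible,
        # then the UTF-8 decoded text.
        print('Raw body:', repr(body))
        print('Decoded:', body.decode('utf-8', errors='replace'))
        return body


if __name__ == '__main__':
    fetch_and_log_targets_list()
```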
Hmm... I can't seem to trigger this by manually scheduling a coverage job. Maybe it is a rate limiting issue.
This issue also affects non-Java projects (e.g. uwebsockets) and started to appear around July 21.
@asraa Since you're the current sheriff, could you please look into this? 7 days ago is when we started fixing the issue with preserving state in HOME: https://github.com/google/oss-fuzz/pull/6079 and https://github.com/google/oss-fuzz/pull/6069. But I don't see how this is related.
We're having the same issue with the Zydis coverage build: https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=36433
@asraa Did you have a chance to look into this?
Taking a look now
> Is it possible that the GET request for the target list in the GCloud bucket has an unexpected response, perhaps due to unspecified character encoding, redirects or rate limiting? It could be helpful to log the full HTTP response here.
I tried locally to slam the gcloud bucket with the same method as used in build_lib, and I don't think this is the issue. (Regardless, it might not hurt to explicitly add an Accept-Encoding header.) Still tracing this down.
(https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=36429)
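Roughly, the kind of local check described above looks like this. It is only a sketch with the requests library, not the exact script that was run; the attempt count and delay are arbitrary, and the explicit Accept-Encoding header is the addition suggested above:

```python
import time

import requests

URL = ('https://storage.googleapis.com/clusterfuzz-builds/'
       'jackson-dataformat-xml/targets.list.address')


def hammer_targets_list(url=URL, attempts=50, delay=0.1):
    """Repeatedly fetches the targets list and reports any anomalies."""
    baseline = None
    for i in range(attempts):
        # Ask for the identity encoding explicitly so GCS never compresses
        # the body behind our back.
        response = requests.get(url, headers={'Accept-Encoding': 'identity'})
        if response.status_code != 200:
            print(f'attempt {i}: unexpected status {response.status_code}')
        elif baseline is None:
            baseline = response.content
        elif response.content != baseline:
            print(f'attempt {i}: body differs from the first response')
        time.sleep(delay)


if __name__ == '__main__':
    hammer_targets_list()
```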
Without any changes to its build setup and no recent changes to the upstream repo, the coverage build for jackson-dataformat-xml is now failing in step 5 (full log):

The coverage build works flawlessly locally, and the target list at https://storage.googleapis.com/clusterfuzz-builds/jackson-dataformat-xml/targets.list.address also looks correct.
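For anyone reproducing: a rough sketch of how the published list can be cross-checked against a local build. The build/out/<project> path follows helper.py's convention and is an assumption here, as is treating every executable file in it as a fuzz target:

```python
import pathlib
import urllib.request

REMOTE_LIST = ('https://storage.googleapis.com/clusterfuzz-builds/'
               'jackson-dataformat-xml/targets.list.address')
# Assumed local output directory produced by infra/helper.py build_fuzzers.
LOCAL_OUT = pathlib.Path('build/out/jackson-dataformat-xml')


def compare_targets():
    """Compares the published targets list against locally built targets."""
    with urllib.request.urlopen(REMOTE_LIST) as response:
        remote = set(response.read().decode('utf-8', errors='replace').split())
    # Treat executable regular files in the out directory as fuzz targets.
    local = {path.name for path in LOCAL_OUT.iterdir()
             if path.is_file() and path.stat().st_mode & 0o111}
    print('only in remote list:', sorted(remote - local))
    print('only in local build:', sorted(local - remote))


if __name__ == '__main__':
    compare_targets()
```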
I tried to decode that garbled filename with various encodings, but still have no clue which part of the stack is responsible for this issue.
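For illustration, this is roughly what that brute-force decoding attempt looks like (the byte string below is only a placeholder, not the actual garbled name from the log):

```python
# Placeholder bytes standing in for the garbled filename from the build log.
garbled = b'\xc3\xa9\xc2\x9c\xc2\x80example_fuzzer'

# Try a handful of common codecs and print whatever decodes without error.
for encoding in ('utf-8', 'latin-1', 'cp1252', 'utf-16', 'shift_jis'):
    try:
        print(f'{encoding}: {garbled.decode(encoding)!r}')
    except UnicodeDecodeError as error:
        print(f'{encoding}: failed ({error})')
```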