google / oss-fuzz

OSS-Fuzz - continuous fuzzing for open source software.
https://google.github.io/oss-fuzz
Apache License 2.0

mpv fixed bug does not get closed #11958

Open kasper93 opened 1 month ago

kasper93 commented 1 month ago

Hi,

Initially I thought it was due to excessive timeouts, but those have been fixed now. Some of the testcases are stuck: all I see is a Pending status and a progression task that starts but never finishes.

oss-fuzz-linux-zone8-host-scn6-11: Progression task started.

Sure enough, after searching similar issues I found #11490, which was related to disk space issues on the runners. And now it is my fault, because we were leaking files in /tmp... oops, sorry, I thought it would be one file per process, not that much data. It has now been fixed and rewritten to use memfd_create: https://github.com/mpv-player/mpv/commit/6ede7890925f75c90987e79da8a427db4d4a233c
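
For reference, the shape of the fix is roughly the following. This is only a minimal sketch of the technique, not the actual mpv code, and consume_file() is just a placeholder for the real entry point:

```c
// Minimal sketch of the technique (not the actual mpv change): back the
// fuzz input with an anonymous in-memory file instead of a file in /tmp,
// so nothing can be left behind on disk if the process dies.
#define _GNU_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    // memfd_create() returns an fd to an anonymous file that is destroyed
    // automatically once the last descriptor to it is closed.
    int fd = memfd_create("fuzz-input", MFD_CLOEXEC);
    if (fd < 0)
        return 0;

    if (write(fd, data, size) != (ssize_t)size) {
        close(fd);
        return 0;
    }

    // The code under test can open the data through the /proc path.
    char path[64];
    snprintf(path, sizeof(path), "/proc/self/fd/%d", fd);

    // consume_file(path);  // placeholder for the real mpv entry point

    close(fd);  // the in-memory file disappears here
    return 0;
}
```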

I'm creating this issue because there is not much visibility into the runners. Currently I don't see many of the fuzz binaries running, stats and logs are missing, and the coverage build is failing. So I presume /tmp is persistent and that is what is failing?

Could you take a look and see if a runner rebuild is needed, similar to #11490?

EDIT:

One more general question: what are the limits on concurrent jobs? The FAQ says

Fuzzing machines only have a single core and fuzz targets should not use more than 2.5GB of RAM.

Say we have N fuzzing targets multiplied by sanitizers and fuzzing engines: is each target allowed its own fuzz runner, or are they queued, and what is the limit?
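
For example (just to illustrate the question, the numbers are made up): 30 targets x 3 sanitizers x 2 fuzzing engines would be 180 distinct target/sanitizer/engine combinations, and it is not obvious to me how many of those actually run concurrently versus sit in a queue.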

EDIT2: I think I found the root cause https://github.com/google/oss-fuzz/pull/11965 (will close this issue if this helps after merge)

EDIT3: Nothing changed; there is still no progression.

EDIT4: Example of completely stuck testcase https://oss-fuzz.com/testcase-detail/4875501058457600

Thanks, Kacper

kasper93 commented 1 month ago

Sorry to bother you again. Is there anything I can do to help resolve this situation? Currently there seem to be no jobs running at all. So far the only clue I have is that the disk quota is exceeded and this somehow leaves the runners stuck. Is the /tmp storage persistent? In libFuzzer fork mode (which seems to be used) it would indeed have leaked some files there previously, but I have no way to validate that this is the problem. I don't think the fuzzers themselves are big enough to cause it.

Everything is working fine locally and with the CIFuzz workflow; only ClusterFuzz (oss-fuzz) seems to be stuck completely.

oliverchang commented 1 month ago

Sorry for the delay. It doesn't appear to be a disk space issue, and I'm not sure why they're stuck. I'll kick off a restart of all the machines to see if that resolves it.

kasper93 commented 1 month ago

Thank you. Unfortunately nothing has moved. On the fuzzer statistics page I get "Got error with status: 404", and on the testcase(s) I see "[2024-05-24 13:08:05 UTC] oss-fuzz-linux-zone8-host-lt79-0: Progression task started." and a Pending status.

In fairness, it never fully worked. Since the initial integration we got some crash reports, and some of them were detected as fixed. So far so good, but we never got a corpus saved, and the coverage build has been failing since the beginning with

Step #5: Failed to unpack the corpus for fuzzer_load_config_file. This usually means that corpus backup for a particular fuzz target does not exist. If a fuzz target was added in the last 24 hours, please wait one more day. Otherwise, something is wrong with the fuzz target or the infrastructure, and corpus pruning task does not finish successfully.

I thought it needed time to stabilize, but now it doesn't seem to give any sign of life: no logs, no reports.

I've tested the full infra/helper.py pipeline locally and I can generate a coverage report without issue, so the build and the fuzzers seem to be fine. I'd appreciate any help on this matter. I had plans to improve things and add an initial corpus, but first we need to stabilize things. There is no rush, but if you need anything changed or updated on my side, let me know.
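
(By the full pipeline I mean roughly the documented flow: infra/helper.py build_image, then build_fuzzers with the coverage sanitizer, then the coverage command.)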

kasper93 commented 1 month ago

Friendly ping. Any pointers on how we can resolve this? It works with CIFuzz and locally. Thanks!

kasper93 commented 3 weeks ago

> Sorry for the delay. It doesn't appear to be a disk space issue, and I'm not sure why they're stuck. I'll kick off a restart of all the machines to see if that resolves it.

@oliverchang: Sorry for the direct ping. Are you sure about that? I disabled half of our fuzzing targets and things seem to have unblocked. I now get logs and the corpus is saved.

I've based my assumptions on the documentation:

Our builders have a disk size of 250GB. In addition, please keep the size of the build (everything copied to $OUT) small (<10GB uncompressed).

which should fit our case. Our statically linked binaries are not that small (~200MB each), but that still leaves room for about 50 of them within the 10GB $OUT limit, which is well above what we actually ship. And yet we hit the limits, recently even during the build; that's why I disabled some targets.

I still see some stubborn cases not closing; I will monitor, but things seem to be moving now, at least I see the logs from the fuzzers being saved.

Keeping it open, because I would like to understand what the limit is and whether we can enable more targets. There are a few protocols and demuxers that are better tested separately.

maflcko commented 3 weeks ago

cross-ref to https://github.com/google/oss-fuzz/issues/11993#issuecomment-2148208135

kasper93 commented 2 weeks ago

> I still see some stubborn cases not closing; I will monitor, but things seem to be moving now, at least I see the logs from the fuzzers being saved.

It has been over a week and things seem to work for new issues, now that the build itself is smaller. Old ones are still stuck, though.

Specifically these:
https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=68817
https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=68832
https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=68837
https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=68843
https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=68844
https://oss-fuzz.com/testcase-detail/6265069141819392
https://oss-fuzz.com/testcase-detail/5128934898335744
https://oss-fuzz.com/testcase-detail/6637317872222208

I suspect it is trying to use an old build that somehow exceeds the disk quota, and things are still stuck there.