kasper93 opened 2 months ago
I suspect that libvips might be experiencing a similar issue, see:
https://introspector.oss-fuzz.com/project-profile?project=libvips
https://oss-fuzz-build-logs.storage.googleapis.com/log-c652af70-8e81-4bfc-93c1-8cbf2665529e.txt
This problem seems to occur after commit https://github.com/libvips/libvips/commit/65a1371fb6e999ffab2ae5fc2132e851d5485a7f, which added a couple more fuzzers. I was unable to reproduce this issue locally using:
$ python infra/helper.py build_image libvips
$ python infra/helper.py build_fuzzers --sanitizer introspector --engine libfuzzer --architecture x86_64 libvips
@kleisauke: your issue occurs during linking. With LTO, and especially with full LTO, linking needs a lot of memory. Reducing the number of concurrent link jobs should fix this. I see libvips uses Meson, so backend_max_links
will do the job. I don't know what the RAM quota is on the builder machines, so you will have to experiment to find a job count that doesn't fail. For mpv I use -Dbackend_max_links=4;
otherwise, ninja would start linking all the fuzzer binaries at the same time.
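For reference, here is a rough sketch of how capping link jobs could look in a Meson-based OSS-Fuzz build script. The backend_max_links option is a real Meson built-in; the surrounding configure flags and directory names are assumptions for illustration, not libvips' actual build.sh:

```shell
# Hypothetical build.sh fragment: limit concurrent link jobs so that
# (full) LTO linking of several fuzzer binaries does not exhaust
# the builder machine's RAM.
meson setup build \
  --default-library=static \
  -Dbackend_max_links=4   # ninja will run at most 4 link jobs in parallel
ninja -C build
```

The value 4 is just the number that worked for mpv; the right number depends on the builder's memory quota and how heavy each link step is.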
@kasper93 Thanks for the hat tip, I'll try building with -Dbackend_max_links=4 to see if this fixes it.
FWIW, it seems that libvips' compile tests are also failing. However, since none of these tests are mandatory, the issue went unnoticed. I should probably use a similar workaround to the one described in https://github.com/google/oss-fuzz/pull/7583#issuecomment-1104011067.
Ah, indeed, the linker issue with Meson. I did this for mpv: https://github.com/google/oss-fuzz/pull/12081
EDIT: For reference, here is the issue about it: https://github.com/google/oss-fuzz/issues/12167
Hi,
This is mostly a question about whether there is something we could do to work around this. I tried making the targets smaller and excluding some files from the introspector, but it still fails.
See: https://oss-fuzz-build-logs.storage.googleapis.com/log-49c8a121-941c-4911-be95-5f06c0c7bc8a.txt
It works locally, so it is likely an OOM on the build machine. I wonder if the introspector itself could be optimized to be less resource-heavy?
Not highly important, but I figured I'd report it in case there is some low-hanging fruit to grab :)
Thanks, Kacper