bazelbuild / bazel

a fast, scalable, multi-language and extensible build system
https://bazel.build
Apache License 2.0

(zstd compression?) com.google.common.base.VerifyException: java.io.IOException: attempt to write to a closed Outputstream backed by a native file #22930

Open · rbeasley-avgo opened this issue 2 months ago

rbeasley-avgo commented 2 months ago

Description of the bug:

We're observing sporadic build failures where the Bazel daemon crashes with the following:

240630 09:50:38.941:XT 2149 [com.google.devtools.build.lib.bugreport.BugReport.handleCrash] Handling crash with CrashContext{haltJvm=true, args=[], sendBugReport=true, extraOomInfo=, eventHandler=com.google.devtools.build.lib.events.Reporter@2a243856}
com.google.common.base.VerifyException: java.io.IOException: attempt to write to a closed Outputstream backed by a native file
        at com.google.devtools.build.lib.remote.GrpcCacheClient$1.onNext(GrpcCacheClient.java:433)
        at com.google.devtools.build.lib.remote.GrpcCacheClient$1.onNext(GrpcCacheClient.java:414)
        at io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onMessage(ClientCalls.java:474)
        at io.grpc.ForwardingClientCallListener.onMessage(ForwardingClientCallListener.java:33)
        at io.grpc.ForwardingClientCallListener.onMessage(ForwardingClientCallListener.java:33)
        at com.google.devtools.build.lib.remote.logging.LoggingInterceptor$LoggingForwardingCall$1.onMessage(LoggingInterceptor.java:138)
        at io.grpc.internal.DelayedClientCall$DelayedListener$2.run(DelayedClientCall.java:457)
        at io.grpc.internal.DelayedClientCall$DelayedListener.drainPendingCallbacks(DelayedClientCall.java:507)
        at io.grpc.internal.DelayedClientCall$1DrainListenerRunnable.runInContext(DelayedClientCall.java:296)
        at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
        at java.base/java.lang.Thread.run(Thread.java:1583)
Caused by: java.io.IOException: attempt to write to a closed Outputstream backed by a native file
        at com.google.devtools.build.lib.unix.UnixFileSystem$NativeFileOutputStream.write(UnixFileSystem.java:562)
        at com.google.devtools.build.lib.remote.common.LazyFileOutputStream.write(LazyFileOutputStream.java:44)
        at com.google.devtools.build.lib.remote.RemoteCache$ReportingOutputStream.write(RemoteCache.java:550)
        at com.google.devtools.build.lib.remote.util.DigestOutputStream.write(DigestOutputStream.java:58)
        at com.google.common.io.CountingOutputStream.write(CountingOutputStream.java:54)
        at com.google.devtools.build.lib.remote.zstd.ZstdDecompressingOutputStream.write(ZstdDecompressingOutputStream.java:61)
        at com.google.devtools.build.lib.remote.zstd.ZstdDecompressingOutputStream.write(ZstdDecompressingOutputStream.java:54)
        at com.google.protobuf.ByteString$LiteralByteString.writeTo(ByteString.java:1459)
        at com.google.devtools.build.lib.remote.GrpcCacheClient$1.onNext(GrpcCacheClient.java:430)
        ... 12 more
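
For readers following the trace: the chunk arrives in GrpcCacheClient's onNext callback and is written through a chain of delegating streams (zstd decompression, digest and byte counting) down to a file-backed stream that has already been closed, and the resulting IOException is wrapped in a VerifyException that crashes the server. A minimal, self-contained sketch of that failure mode, using hypothetical class names rather than Bazel's actual implementation:

    import java.io.IOException;
    import java.io.OutputStream;

    // Hypothetical stand-in for the bottom of the delegating chain in the trace
    // (ZstdDecompressingOutputStream -> ... -> NativeFileOutputStream).
    final class ClosedStreamRaceSketch {

      // Simplified file-backed stream that rejects writes after close.
      static final class FileBackedStream extends OutputStream {
        private volatile boolean closed = false;

        @Override
        public void write(int b) throws IOException {
          if (closed) {
            // Mirrors "attempt to write to a closed Outputstream backed by a native file".
            throw new IOException("attempt to write to a closed OutputStream");
          }
          // ... would write to the native file descriptor here ...
        }

        @Override
        public void close() {
          closed = true;
        }
      }

      public static void main(String[] args) throws Exception {
        FileBackedStream out = new FileBackedStream();

        // Thread A: the action's output is torn down and the stream is closed.
        out.close();

        // Thread B: a late gRPC onNext() callback still tries to flush a chunk.
        try {
          out.write(new byte[] {1, 2, 3});
        } catch (IOException e) {
          // In Bazel this IOException is wrapped in a VerifyException and halts the JVM.
          System.err.println("write after close: " + e.getMessage());
        }
      }
    }

The sketch only shows the write-after-close ordering; the open question in this issue is what closes the stream while the download callback is still running.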

Which category does this issue belong to?

Remote Execution

What's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.

No response

Which operating system are you running Bazel on?

Linux

What is the output of bazel info release?

release 7.2.0-vmware

If bazel info release returns development version or (@non-git), tell us how you built Bazel.

No response

What's the output of git remote get-url origin; git rev-parse HEAD ?

No response

If this is a regression, please try to identify the Bazel commit where the bug was introduced with bazelisk --bisect.

No response

Have you found anything relevant by searching the web?

No. I couldn't find any matches for "closed outputstream".

Any other information, logs, or outputs that you want to share?

We're using dynamic RBE (impl: bazelbuild/bazel-buildfarm) and remote cache compression.

build:remote --extra_execution_platforms=//:rbe_platform
build:remote --remote_executor=grpcs://<endpoint>
build:remote --jobs=HOST_CPUS*10
build:remote --remote_retries=5
build:remote --experimental_remote_cache_eviction_retries=5
build:remote --verbose_failures
build:remote --remote_cache=
build:remote --disk_cache=
build:remote --noremote_upload_local_results
build:remote --experimental_remote_cache_async
build:remote --experimental_remote_merkle_tree_cache
build:remote --remote_local_fallback
build:remote --remote_local_fallback_strategy=sandboxed
build:remote --experimental_remote_downloader_local_fallback
build:remote --remote_cache_compression

build:rbe_dynamic \
    --config=remote \
    --internal_spawn_scheduler \
    --spawn_strategy=dynamic \
    --dynamic_local_strategy=worker,sandboxed,local

build --experimental_debug_spawn_scheduler
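
For context on the dynamic-execution part of this setup: --spawn_strategy=dynamic races a local and a remote branch of each action and cancels whichever loses. A rough, self-contained sketch of that race, hypothetical and not Bazel's actual implementation:

    import java.util.concurrent.*;

    // Conceptual sketch of dynamic execution: run a local and a remote branch of
    // the same action, keep the first one to finish, and cancel the other.
    final class DynamicExecutionSketch {
      public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        CompletionService<String> race = new ExecutorCompletionService<>(pool);

        Future<String> local = race.submit(() -> { Thread.sleep(50);  return "local result"; });
        Future<String> remote = race.submit(() -> { Thread.sleep(500); return "remote result"; });

        // Take whichever branch finishes first...
        String winner = race.take().get();
        System.out.println("winner: " + winner);

        // ...and cancel the loser. If the cancelled branch is the remote one and it
        // still has a cache download in flight, that download may keep writing to an
        // output stream that is closed as part of the branch teardown.
        local.cancel(true);
        remote.cancel(true);
        pool.shutdown();
      }
    }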
tjgq commented 1 month ago

@rbeasley-avgo is this reproducible without dynamic execution? My theory at the moment is that the write happens after the remote branch gets canceled (because we're not propagating the cancellation to the write thread properly).
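
A sketch of the suspected gap, purely illustrative of the theory above (hypothetical names, not Bazel's code): the dynamic scheduler cancels the remote branch, but a cache-download callback running on a separate (gRPC-style) executor never observes that cancellation unless it is forwarded explicitly.

    import java.util.concurrent.*;
    import java.util.concurrent.atomic.AtomicBoolean;

    final class CancellationPropagationSketch {
      public static void main(String[] args) throws Exception {
        ExecutorService grpcCallbackPool = Executors.newSingleThreadExecutor();
        AtomicBoolean cancelled = new AtomicBoolean(false);

        // A download chunk ("onNext") that arrives a little later on the callback pool.
        grpcCallbackPool.submit(() -> {
          try {
            Thread.sleep(100);
          } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return;
          }
          if (cancelled.get()) {
            System.out.println("cancellation observed; skipping write");
          } else {
            // Without propagation this still runs; in Bazel it would write to an
            // output stream that the winning branch has already caused to be closed.
            System.out.println("late chunk written after the remote branch was cancelled");
          }
        });

        // The remote branch loses the dynamic race and is cancelled here, but nothing
        // sets `cancelled` or aborts the underlying gRPC call, so the callback above
        // keeps going. A fix would forward the cancellation to the download before
        // the output stream is closed.

        grpcCallbackPool.shutdown();
      }
    }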

rbeasley-avgo commented 1 month ago

> @rbeasley-avgo is this reproducible without dynamic execution? My theory at the moment is that the write happens after the remote branch gets canceled (because we're not propagating the cancellation to the write thread properly).

@tjgq I'll try it out and get back to you.

rbeasley-avgo commented 1 week ago

@tjgq In order to establish a baseline, I updated one of our canary pipelines to re-enable --remote_cache_compression while still using dynamic execution. I was hoping to encounter the failure described in this issue, after which I'd switch off dynamic execution and re-observe. However, I haven't seen these failures. (FWIW, they coincided with a window where our RBE instance was unhealthy. Brief summary of that here: https://github.com/bazelbuild/bazel/issues/22854#issuecomment-2256166384.)

Unless anyone else can corroborate this, I guess we'll just need to close as not planned. :(