sthornington opened 1 month ago
I'm going to look into this myself on Monday - if anyone wants to drop any hints about how to schedule retries of async actions in the bazel http downloader code, that would be welcome.
on a filesystem that does not guarantee atomic delete/unlink visibility
Out of curiosity (and also because it's an assumption we rely upon elsewhere), which filesystem have you observed this on?
It’s most noticeable on NFS (I know I know) specifically on some scale-out caching appliances.
On NFS we tracked down the root cause on our system. It's not so much atomic delete/unlink visibility; it's that if you have an open file inside that directory and ask NFS to delete it, NFS will keep it around as a hidden .nfs-<some hash> file inside the directory until you close that file reference. That makes it impossible to remove the directory containing that hidden file, so bazel's downloader fails, saying it tried to remove a non-empty directory. But by the time you go check whether the file is still around, bazel has closed its open reference to it, and so the directory is indeed empty.
On non-NFS filesystems, you can rely on the kernel to keep the file's resources around until the file handle is closed, so you can delete a file even while it's still open somewhere (that's how tempfiles work on Linux). But NFS uses that hidden-file technique because multiple kernels might hold open handles to deleted files (I haven't looked too deeply at the kernel code, though; maybe this could be fixed).
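The local-filesystem behavior described above can be demonstrated with a short sketch (Python here rather than Bazel's Java, purely for illustration): on a local Linux filesystem, unlinking an open file succeeds immediately and the data remains readable through the open handle; on NFS the server would instead silly-rename the file to a hidden .nfs-XXXX entry until the handle closes.

```python
# Sketch (not Bazel code): POSIX delete-while-open semantics on a local
# Linux filesystem. On NFS the unlink would leave a .nfs-XXXX file behind.
import os
import tempfile

def unlink_while_open_demo() -> bytes:
    d = tempfile.mkdtemp()
    path = os.path.join(d, "scratch.bin")
    with open(path, "wb") as w:
        w.write(b"still here")
    f = open(path, "rb")
    os.unlink(path)                      # the file name is gone immediately...
    assert not os.path.exists(path)
    data = f.read()                      # ...but the open handle still works
    f.close()
    os.rmdir(d)                          # directory is genuinely empty now
    return data
```

Running this on NFS instead would be exactly the failure mode above: the `os.rmdir` would fail with "directory not empty" while the `.nfs-XXXX` file lingers.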
Possibly my patch would solve your situation too, if it’s a race between Bazel and itself.
it's actually that if you have an open file inside that directory and ask NFS to delete that file, NFS will keep it around as a .nfs-<some hash> hidden file inside the directory until you close that file reference.
That NFS behaves in that way makes sense to me. But if it's a race between Bazel and itself, why can't we arrange for the file to be closed before we attempt the cleanup? Or is there some other process holding the file open?
I can't speak to @mark64's problem, but in my case a file is correctly removed, yet that removal is not visible (to bazel, for example) for a few tens or sometimes hundreds of milliseconds while the deletion is socialized to all of the cache cluster members. So the bazel sequence of "remove all the files in a subdirectory" then "remove the subdirectory" throws an exception because, at step 2, the subdirectory is sometimes not yet empty.
My patch above simply retries step 2 until it succeeds, and only re-throws the exception and aborts if the removal keeps failing for five seconds.
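The retry described above can be sketched roughly as follows (a hypothetical Python rendering of the idea; the actual patch is against Bazel's Java tree, and the `timeout`/`poll` parameters here are illustrative):

```python
# Hypothetical sketch of "retry step 2 until it succeeds": keep retrying
# the rmdir for up to `timeout` seconds, and only re-raise the final error
# if the directory never becomes removable.
import errno
import os
import time

def rmdir_with_retry(path: str, timeout: float = 5.0, poll: float = 0.05) -> None:
    deadline = time.monotonic() + timeout
    while True:
        try:
            os.rmdir(path)
            return
        except OSError as e:
            # Only "directory not empty" is worth retrying; anything else,
            # or running out of time, aborts as before.
            if e.errno != errno.ENOTEMPTY or time.monotonic() >= deadline:
                raise
            time.sleep(poll)     # the deletion may still be propagating
```

The design point is that the retry is bounded: a genuinely non-empty directory still fails the build after the timeout, so the retry only absorbs short visibility delays.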
as the deletion is socialized to all of the cache cluster members
Thanks, that was the missing detail. I just wanted to make sure the retries weren't simply papering over a Bazel bug (failure to issue operations in the right order). I am a bit concerned that this pattern does exist elsewhere in the codebase (we generally assume the filesystem is POSIX-compliant) but I guess we can cross that bridge when we get to it.
Something that occurred to me yesterday is that "clean --expunge" never seems to have this issue, so however that is implemented, it is doing something different.
Description of the bug:
Downloading artifacts using http_archive, as the rust_rules do for downloading cargo crates, can sometimes run into issues where bazel fails to delete a temporary directory because it is not yet empty:
This error seems to originate from https://github.com/bazelbuild/bazel/blob/ce64b1aef273f559bf4d7a08be005bbf5ae9b33c/src/main/java/com/google/devtools/build/lib/bazel/repository/starlark/StarlarkBaseExternalContext.java#L1061 and, if one's filesystem behaves this way, there does not seem to be any possible mitigation. Perhaps temporary downloads could go to a real temporary directory, not one inside the output_base, before being emplaced in the final spot?
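The mitigation suggested here can be sketched as "stage, then emplace" (a Python sketch under the assumption that the staging area lives on a local filesystem such as /tmp while the final destination may be on the slower, eventually-consistent one; `write_download` is a hypothetical callback standing in for the actual fetch):

```python
# Sketch of the proposed mitigation: download into a real local temporary
# directory, then move the finished tree into its final spot in one step,
# so no cleanup ever has to happen on the eventually-consistent filesystem.
import shutil
import tempfile

def stage_then_emplace(write_download, final_dir: str) -> None:
    staging = tempfile.mkdtemp(prefix="fetch-")   # local tmp, not output_base
    try:
        write_download(staging)                   # caller fills the staging dir
        # shutil.move renames when possible and copies across filesystems;
        # either way the scratch directory never lives under final_dir's tree.
        shutil.move(staging, final_dir)
    except BaseException:
        shutil.rmtree(staging, ignore_errors=True)
        raise
```

Cleanup of a failed download then happens on the local filesystem, where delete visibility is immediate, sidestepping the "directory not empty" race entirely.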
Which category does this issue belong to?
Starlark Interpreter
What's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.
Build a rust project with lots of crate dependencies and an output_user_root on a filesystem that does not guarantee atomic delete/unlink visibility.
Which operating system are you running Bazel on?
linux
What is the output of bazel info release?
release 7.3.1
If bazel info release returns development version or (@non-git), tell us how you built Bazel.
No response
What's the output of git remote get-url origin; git rev-parse HEAD?
No response
If this is a regression, please try to identify the Bazel commit where the bug was introduced with bazelisk --bisect.
No response
Have you found anything relevant by searching the web?
https://github.com/bazelbuild/bazel/issues/20013 looks like the same problem. It seems to me that exceptions marked TRANSIENT that come from cleaning up things like temporary scratch directories should be retried instead of killing the entire build.
Any other information, logs, or outputs that you want to share?
No response