Open · jamesmunns opened this issue 7 months ago
Maybe a dupe of https://github.com/rust-lang/rust/issues/109879 - tho this is on Linux instead of Windows, and at a different worker stage (not LTO).
I did have this happen again, though it is very rare. One potential note is that it seems to occur when I am using `cargo watch`, and I hit save multiple times quickly.

It's possible what is happening is:

- `cargo watch` notices a new save was made
- it kills the in-flight `cargo`/`rustc` invocation and starts a new one

Does incremental compilation have any notion of "transactions"? e.g. what if the compilation is killed?
In the run BEFORE the error occurred, I did get this warning:
```
warning: variable does not need to be mutable
  --> src/main.rs:23:13
   |
23 |         let mut listeners = Listeners::new();
   |             ----^^^^^^^^^
   |             |
   |             help: remove this `mut`
   |
   = note: `#[warn(unused_mut)]` on by default

warning: error copying object file `/mnt/share/vmshare/contracts/isrg/river/source/river/target/debug/deps/river-f96f9f6f9262ae0d.26wqoch9hrpxldmx.rcgu.o` to incremental directory as `/mnt/share/vmshare/contracts/isrg/river/source/river/target/debug/incremental/river-1mpzbb0wp27rk/s-guwsu8i3hc-1glc1l2-working/26wqoch9hrpxldmx.o`: No such file or directory (os error 2)

[Running 'cargo check && cargo test && cargo run -- --config-toml ./assets/test-config.toml']
    Checking river v0.2.0 (/mnt/share/vmshare/contracts/isrg/river/source/river)
warning: unused variable: `listeners`
  --> src/main.rs:23:17
   |
23 |         let mut listeners = Listeners::new();
   |                 ^^^^^^^^^ help: if this is intentional, prefix it with an underscore: `_listeners`
   |
```
And then, in the subsequent run, the error:
```
thread 'cpy 26wqoch9hrpxldmx' panicked at /rustc/aedd173a2c086e558c2b66d3743b344f977621a7/compiler/rustc_codegen_ssa/src/back/write.rs:917:44:
no saved object file in work product
stack backtrace:
   0: 0xffff7fd439ec - std::backtrace_rs::backtrace::libunwind::trace::ha909211bcabbe3ac
```
> I did have this happen again, though it is very rare. One potential note is that it seems to occur when I am using `cargo watch`, and I hit save multiple times quickly.
This is great info. Maybe I can craft another test harness that provokes this crash.
> Does incremental compilation have any notion of "transactions"? e.g. what if the compilation is killed?
To be clear: There is supposed to be a mechanism to prevent the scenario you're reporting here. We write all our output to a staging directory, then `fs::rename` the staging directory to the name that the next compilation is actually going to look for. The implementation was designed with this "what if the compiler is killed?" scenario in mind. It's probably just buggy.
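As a rough illustration of that commit-by-rename pattern, here's a minimal sketch (the function and directory names are made up; this is not rustc's actual code):

```rust
use std::fs;
use std::io;
use std::path::Path;

/// Finalize an incremental-compilation session by atomically renaming
/// the staging directory to the name later compilations will look for.
/// `finish_session` is a hypothetical name, not a rustc API.
fn finish_session(staging_dir: &Path, final_dir: &Path) -> io::Result<()> {
    // All object files for this session have already been written
    // into `staging_dir` (the `s-...-working` directory in the logs).

    // rename(2) is atomic on POSIX filesystems: if the compiler is
    // killed at any point, the next compilation sees either the old
    // finalized directory or the new one, never a half-written mix.
    fs::rename(staging_dir, final_dir)
}
```

If the filesystem backing the shared folder doesn't give rename those guarantees (or delays visibility of the renamed directory's contents), a later compilation could see a directory whose files aren't readable yet, which would look a lot like the "No such file or directory" errors above.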
Also potentially worth mentioning: I'm using a VM with a shared folder for this, with `cargo watch` running inside the VM. IMO it shouldn't be THAT different than having an IDE with R-A and running cargo-watch at the same time, but just wanted to note.
> VM with a shared folder

Hmmmmmmmmmmmmmmmmmmmmmmm
HMMMM indeed. For MORE detail on my setup:

- `virtiofs` for the shared folder
- `btrfs` for the `/` and `/home` mounts

I will try two things today (tho it might not proc/repro today) to see if I can retrigger the issue (it only happens once every day or two, but maybe I can try to trigger it intentionally):

- building only on Linux, on the btrfs mounts
- building only on macOS, on apfs

In both of those cases I eliminate the "shared" folder; if I can get it to trigger in either of those cases it's probably not virtiofs' fault, and if it is, then it is. I'll also peek at some of the other issues to see if they are also potentially using virtualization or Weird Filesystems.
So, I wrote this crude stress testing script, sharing in case it is useful:
```bash
#!/bin/bash
set -euxo pipefail

# Append a trivial comment to a watched source file 1000 times, with a
# delay that grows from 0.01s to 10s, so that cargo-watch restarts the
# build at many different points in its lifecycle.
for i in {1..1000}
do
    stime=`bc <<< "scale=2; $i/100"`
    echo "// hehe" >> ./toml.rs
    sleep $stime
done
```
I was able to get it to repro once (out of a couple of runs) using the VM + Host setup where it repro'd before, and it acted weird one other time (errors about "can't copy, folder doesn't exist", but no ICE).
I wasn't able to get it to repro at all in the other setups (Linux only on btrfs, macOS only on apfs). I'll try it again a couple of times; it doesn't proc reliably, so there's still some chance the timing is just different in the other configs.
Still definitely could be virtiofs not handling file locking or hard linking correctly.
Let me know if there's anything I can do to help with my occasionally reproing setup.
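As a cheap sanity check on the hard-linking theory, a probe along these lines could be run inside the shared folder (a hypothetical sketch, not anything from rustc; the `probe-*` filenames are made up):

```rust
use std::fs;
use std::io::Write;

// Create a file, hard-link it, delete the original, and confirm the
// data is still readable through the link. A filesystem that
// mishandles hard links could fail the final read.
fn main() -> std::io::Result<()> {
    // Directory to probe (e.g. the virtiofs shared folder); defaults to cwd.
    let dir = std::env::args().nth(1).unwrap_or_else(|| ".".into());
    let orig = format!("{dir}/probe-orig");
    let link = format!("{dir}/probe-link");

    fs::File::create(&orig)?.write_all(b"hello")?;
    fs::hard_link(&orig, &link)?;
    fs::remove_file(&orig)?;

    assert_eq!(fs::read(&link)?, b"hello");
    fs::remove_file(&link)?;
    println!("hard links look OK");
    Ok(())
}
```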
Edit: here are the "weird" errors I mentioned in the not-quite-repro case above:
```
   Compiling river v0.2.0 (/mnt/share/vmshare/contracts/isrg/river/source/river)
error: unable to copy /mnt/share/vmshare/contracts/isrg/river/source/river/target/debug/incremental/river-u700q7cd53gp/s-guxhx8y84l-96tqgp-working/5axxxb4yfcfqc0p3.o to /mnt/share/vmshare/contracts/isrg/river/source/river/target/debug/deps/river-df648408b139a4a9.5axxxb4yfcfqc0p3.rcgu.o: No such file or directory (os error 2)
error: could not compile `river` (bin "river" test) due to 1 previous error
[Finished running. Exit status: 101]
[Running 'cargo check && cargo test']
    Checking river v0.2.0 (/mnt/share/vmshare/contracts/isrg/river/source/river)
    Finished dev [unoptimized + debuginfo] target(s) in 0.52s
   Compiling river v0.2.0 (/mnt/share/vmshare/contracts/isrg/river/source/river)
error: unable to copy /mnt/share/vmshare/contracts/isrg/river/source/river/target/debug/incremental/river-u700q7cd53gp/s-guxhx9ybn5-bxtj5l-working/3c55jdxwnbidcvho.o to /mnt/share/vmshare/contracts/isrg/river/source/river/target/debug/deps/river-df648408b139a4a9.3c55jdxwnbidcvho.rcgu.o: No such file or directory (os error 2)
error: could not compile `river` (bin "river" test) due to 1 previous error
[Finished running. Exit status: 101]
[Running 'cargo check && cargo test']
    Checking river v0.2.0 (/mnt/share/vmshare/contracts/isrg/river/source/river)
    Finished dev [unoptimized + debuginfo] target(s) in 0.51s
   Compiling river v0.2.0 (/mnt/share/vmshare/contracts/isrg/river/source/river)
error: unable to copy /mnt/share/vmshare/contracts/isrg/river/source/river/target/debug/incremental/river-u700q7cd53gp/s-guxhxaykhm-1eu4zaf-working/3tluzuoovoqa9saq.o to /mnt/share/vmshare/contracts/isrg/river/source/river/target/debug/deps/river-df648408b139a4a9.3tluzuoovoqa9saq.rcgu.o: No such file or directory (os error 2)
error: could not compile `river` (bin "river" test) due to 1 previous error
[Finished running. Exit status: 101]
```
I slapped together a little flock stress tester using the `flock` implementation in rustc that's used on non-Linux unices, which I think is what you have: https://github.com/saethlin/flock-stress

Can you run it in your virtiofs?
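For context, the BSD-style `flock(2)` call that implementation is built around boils down to something like this sketch (illustrative only, not the actual rustc or flock-stress code; it uses the `libc` crate, and `lock-probe` is a made-up filename):

```rust
use std::fs::File;
use std::io;
use std::os::unix::io::AsRawFd;

/// Take an exclusive advisory lock on an open file via flock(2).
/// Illustrative sketch; not rustc's actual lock implementation.
fn lock_exclusive(file: &File) -> io::Result<()> {
    // LOCK_EX blocks until the lock is granted. A filesystem that
    // mishandles flock() might grant it to two processes at once,
    // which is the kind of bug a stress tester can surface.
    if unsafe { libc::flock(file.as_raw_fd(), libc::LOCK_EX) } == 0 {
        Ok(())
    } else {
        Err(io::Error::last_os_error())
    }
}

fn main() -> io::Result<()> {
    let file = File::create("lock-probe")?;
    lock_exclusive(&file)?;
    println!("acquired exclusive lock on lock-probe");
    Ok(())
}
```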
@saethlin is there a specific way you want me to run it?
Trying either of these, it doesn't seem to crash.
You did everything I would have suggested. I suppose exposing a bug with such a simple test would have been too easy.
I have access to a Mac; I'll see if I can recreate your setup.
Code
This commit: https://github.com/memorysafety/river/commit/5ff346dd17524dff03a560517312caa7840569b2
Meta
This didn't seem to reproduce after a `cargo clean`, so I can't provide too much more wrt what caused this to happen. So far I've only seen this once.
This was running in an aarch64 fedora VM running on an M2 Mac inside of UTM.
Error output