saethlin closed this issue 2 years ago
Some hacky investigation pointed out that two outdated values were read in this test (I'll add proper logging to Miri's weak memory emulation at some point, so people can debug with MIRI_LOG alone):
= note: inside `std::sync::atomic::atomic_load::<u32>` at /home/cbeuw/.rustup/toolchains/miri/lib/rustlib/src/rust/library/core/src/sync/atomic.rs:2586:24
= note: inside `std::sync::atomic::AtomicU32::load` at /home/cbeuw/.rustup/toolchains/miri/lib/rustlib/src/rust/library/core/src/sync/atomic.rs:1712:26
= note: inside `std::sys::unix::futex::futex_wait` at /home/cbeuw/.rustup/toolchains/miri/lib/rustlib/src/rust/library/std/src/sys/unix/futex.rs:33:12
= note: inside `std::sys_common::thread_parker::futex::Parker::park` at /home/cbeuw/.rustup/toolchains/miri/lib/rustlib/src/rust/library/std/src/sys_common/thread_parker/futex.rs:52:13
= note: inside `std::thread::park` at /home/cbeuw/.rustup/toolchains/miri/lib/rustlib/src/rust/library/std/src/thread/mod.rs:929:9
= note: inside `std::sync::mpsc::blocking::WaitToken::wait` at /home/cbeuw/.rustup/toolchains/miri/lib/rustlib/src/rust/library/std/src/sync/mpsc/blocking.rs:67:13
= note: inside `std::sync::mpsc::oneshot::Packet::<()>::recv` at /home/cbeuw/.rustup/toolchains/miri/lib/rustlib/src/rust/library/std/src/sync/mpsc/oneshot.rs:140:21
= note: inside `std::sync::mpsc::Receiver::<()>::recv` at /home/cbeuw/.rustup/toolchains/miri/lib/rustlib/src/rust/library/std/src/sync/mpsc/mod.rs:1161:49
note: inside `imp::tests::wait_for_force_to_finish` at src/imp_std.rs:361:9
19: 0x55a515f9c64a - miri::shims::unix::linux::sync::futex::hda7315750bbb03b7
at /home/cbeuw/rust/miri/src/shims/unix/linux/sync.rs:141:29
...
= note: inside `std::sys::unix::futex::futex_wait` at /home/cbeuw/.rustup/toolchains/miri/lib/rustlib/src/rust/library/std/src/sys/unix/futex.rs:61:21
= note: inside `std::sys_common::thread_parker::futex::Parker::park` at /home/cbeuw/.rustup/toolchains/miri/lib/rustlib/src/rust/library/std/src/sys_common/thread_parker/futex.rs:52:13
= note: inside `std::thread::park` at /home/cbeuw/.rustup/toolchains/miri/lib/rustlib/src/rust/library/std/src/thread/mod.rs:929:9
= note: inside `std::sync::mpsc::blocking::WaitToken::wait` at /home/cbeuw/.rustup/toolchains/miri/lib/rustlib/src/rust/library/std/src/sync/mpsc/blocking.rs:67:13
= note: inside `std::sync::mpsc::oneshot::Packet::<()>::recv` at /home/cbeuw/.rustup/toolchains/miri/lib/rustlib/src/rust/library/std/src/sync/mpsc/oneshot.rs:140:21
= note: inside `std::sync::mpsc::Receiver::<()>::recv` at /home/cbeuw/.rustup/toolchains/miri/lib/rustlib/src/rust/library/std/src/sync/mpsc/mod.rs:1161:49
note: inside `imp::tests::wait_for_force_to_finish` at src/imp_std.rs:361:9
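For context on the "outdated values" mentioned above (my own sketch, not from the thread): under Miri's weak-memory emulation, a Relaxed load may legally return a stale value from an atomic's modification history unless a happens-before edge forces it to see the latest store. A minimal illustration of where such an edge comes from:

```rust
use std::sync::atomic::{AtomicU32, Ordering};
use std::thread;

static FLAG: AtomicU32 = AtomicU32::new(0);

fn main() {
    let t = thread::spawn(|| {
        FLAG.store(1, Ordering::Relaxed);
    });
    t.join().unwrap();
    // join() establishes a happens-before edge with the spawned
    // thread, so this load must observe 1. Without that edge, a
    // Relaxed load would be allowed to return the stale 0 under
    // the C++11 memory model, which is what weak-memory emulation
    // in Miri can expose.
    assert_eq!(FLAG.load(Ordering::Relaxed), 1);
}
```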
I'm not familiar with once_cell, so I don't know whether these pointers are helpful to you. I will look into this further.
I don't know once_cell or atomics very well, actually; I'm just here because I wanted to patch the int-to-ptr casts out of it, then I ran the tests and they reported a deadlock.
In any case, what I'm curious about here is whether this is a bug in once_cell or in Miri. If you're confident this looks like a once_cell issue, I'll report it.
I just remembered to check with ThreadSanitizer, and it reports a data race here, so I'm more confident in reporting this as a bug. It is odd, though, that we get a deadlock reported rather than a data race. Operator error: I forgot that you need to pass -Zbuild-std, or TSan misbehaves.
Looking into the tests a bit closer, the deadlock appears to happen on lines 357 and 361. Inserting a panic! before line 357 still makes it deadlock (without any spans, which is fun), but commenting out the recv after the panic! stops it, which is quite weird.
Can confirm this is not a once_cell issue. This program deadlocks too:
use std::{sync::mpsc::channel, thread};

fn test() {
    let (tx1, rx1) = channel();
    let (tx2, rx2) = channel();
    let t1 = thread::spawn(move || {
        tx1.send(()).unwrap();
        rx2.recv().unwrap();
    });
    rx1.recv().unwrap();
    tx2.send(()).unwrap();
    assert!(t1.join().is_ok());
}

pub fn main() {
    for i in 0..1000 {
        println!("iter {i}");
        test();
    }
}
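The backtraces above show the hang is inside std::thread::park, which mpsc's WaitToken uses to block via a futex. A minimal sketch (mine, not from the thread) of the park/unpark handshake that reproducer exercises:

```rust
use std::thread;

// unpark() deposits a wakeup token; park() consumes it, or blocks
// until one is deposited. Unpark-before-park is explicitly allowed,
// so this handshake cannot lose the wakeup regardless of ordering.
fn handshake() -> u32 {
    let main_thread = thread::current();
    let t = thread::spawn(move || {
        main_thread.unpark();
        42
    });
    // Blocks until the token is available (modulo spurious wakeups,
    // which real callers must tolerate by re-checking a condition).
    thread::park();
    t.join().unwrap()
}

fn main() {
    assert_eq!(handshake(), 42);
}
```

Since this handshake is wakeup-safe by construction, a hang here points at the underlying synchronization (futex emulation or weak-memory handling in Miri) rather than at the channel logic itself.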
$ cargo miri --version
miri 0.1.0 (4d6eca1 2022-06-08)
Closing this because it has served its purpose (see the linked Miri issue above).
I'm trying to do some hacking locally, and Miri is detecting a deadlock. So I'm putting up this PR just to see if this reproduces in your CI, in which case this is probably related to new Miri behavior.
Yup, deadlock: