Open kyrias opened 1 year ago
Probably a duplicate of #112723, though that one is only about try_recv.
They look related, but the actual spinning case from their backtrace is different.
Ultimately it's not a big problem for us if `try_recv` blocks for a short while in certain cases, but it is a big problem if it's a spinloop, because under an RTOS that means lower-priority threads never get to run, and so the whole system hangs.
It seems like they're slightly different instances of the general symptom: something is spinning during a priority inversion. With RT scheduling the inversion can last forever. With less strict scheduling it merely takes a while until it resolves.
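To illustrate the failure mode described above, here is a minimal sketch (names and structure are ours, not from the reporter's firmware): a waiter spinning on a flag that another thread must set. On a desktop OS the kernel eventually schedules the setter, so the loop resolves; under strict priority scheduling, a pure spin in a higher-priority thread would starve the setter forever, which is exactly the hang being reported.

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;

/// Spin until `flag` becomes true, counting iterations.
/// The `yield_now` is what gives a lower-priority setter a chance to run
/// at all under strict priority scheduling; remove it and this loop is
/// the starvation scenario described in the comment above.
pub fn wait_for(flag: &AtomicBool) -> u64 {
    let mut spins = 0;
    while !flag.load(Ordering::Acquire) {
        spins += 1;
        thread::yield_now();
    }
    spins
}

fn main() {
    let flag = Arc::new(AtomicBool::new(false));
    let setter = {
        let flag = Arc::clone(&flag);
        thread::spawn(move || flag.store(true, Ordering::Release))
    };
    let spins = wait_for(&flag);
    setter.join().unwrap();
    println!("resolved after {spins} spins");
}
```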
@kyrias can you try running your code with the https://github.com/crossbeam-rs/crossbeam/pull/1105 branch of crossbeam to see if that fixes your issue?
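For anyone wanting to try that branch, a Cargo `[patch]` override is one way to do it. This is a sketch: the `rev` value below is a placeholder, not the actual commit from crossbeam-rs/crossbeam#1105.

```toml
# Sketch: override the crates.io crossbeam-channel with the PR branch.
# Replace the rev placeholder with the actual commit from PR #1105.
[patch.crates-io]
crossbeam-channel = { git = "https://github.com/crossbeam-rs/crossbeam", rev = "<commit-from-pr-1105>" }
```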
We have a system built around an Espressif ESP32-S3 MCU using ESP-IDF/FreeRTOS. Recently, when we updated our Rust toolchain, we started having issues where under certain conditions the watchdog timer would constantly trigger and reset the system. We've managed to track it down to the new `crossbeam-channel`-based `Channel` spinlooping in `try_send` and `try_recv`.
Many parts of our system communicate using `Channel`s, and the highest-priority threads are the ones that read measurements from sensors and then send those measurements to various channels for further processing using `try_send`. These threads also read from command channels using `try_recv`.

Our expectation with this approach is that sending and receiving on these channels should never block waiting for other threads to run, and that if we can't read/send anything right then, the methods should immediately return an `Err`, which we ignore.

Through some judicious `println!`-debugging I've found that when we call `try_send` or `try_recv` we sometimes end up in a situation where `start_send`/`start_recv` performs the following `spin_light` calls multiple thousands of times:

https://github.com/rust-lang/rust/blob/eb26296b556cef10fb713a38f3d16b9886080f26/library/std/src/sync/mpmc/array.rs#L186
https://github.com/rust-lang/rust/blob/eb26296b556cef10fb713a38f3d16b9886080f26/library/std/src/sync/mpmc/array.rs#L277

This then leads to our idle task never getting to run, so the watchdog timer times out and resets the system. Disabling the watchdog timer doesn't seem to let it ever get unstuck on its own.
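The usage pattern described above can be sketched as follows (function and channel names are ours, purely illustrative; the real firmware uses its own types): every send and receive goes through `try_send`/`try_recv`, and a full or empty channel is treated as an ignorable error rather than something to wait on.

```rust
use std::sync::mpsc::{sync_channel, Receiver, SyncSender};

/// Publish a measurement without ever blocking: if the channel is full
/// or disconnected, the sample is simply dropped. The issue is that
/// `try_send` can still spin internally despite this intent.
fn publish(tx: &SyncSender<u32>, sample: u32) {
    let _ = tx.try_send(sample); // ignore TrySendError::{Full, Disconnected}
}

/// Poll a command channel without blocking.
fn poll(rx: &Receiver<u8>) -> Option<u8> {
    rx.try_recv().ok() // ignore TryRecvError::{Empty, Disconnected}
}

fn main() {
    let (meas_tx, meas_rx) = sync_channel::<u32>(4);
    let (cmd_tx, cmd_rx) = sync_channel::<u8>(4);

    publish(&meas_tx, 42);
    cmd_tx.try_send(7).unwrap();

    assert_eq!(meas_rx.try_recv().ok(), Some(42));
    assert_eq!(poll(&cmd_rx), Some(7));
    assert_eq!(poll(&cmd_rx), None); // empty: returns immediately, no block
}
```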
I've tried switching to `crossbeam-channel` as well, and while it seems harder to reproduce with that crate, it's still happening.

Meta

`rustc --version --verbose`:

Backtrace
```
 0x420304b2 - as core::iter::range::RangeIteratorImpl>::spec_next
at ??:??
0x3fcd9bd0 - _btdm_bss_end
at ??:??
0x4203038e - std::sync::mpmc::array::Channel::start_send
at ??:??
0x3fcd9c00 - _btdm_bss_end
at ??:??
0x42029852 - std::sync::mpmc::Sender::try_send
at ??:??
0x3fcd9c50 - _btdm_bss_end
at ??:??
0x42038706 - std::sync::mpsc::SyncSender::try_send
at /home/remmy/.rustup/toolchains/esp/lib/rustlib/src/rust/library/std/src/sync/mpsc/mod.rs:739
0x3fcd9c70 - _btdm_bss_end
at ??:??
0x4200b3e3 - std::thread::Builder::spawn_unchecked_::{{closure}}::{{closure}}
at /home/remmy/.rustup/toolchains/esp/lib/rustlib/src/rust/library/std/src/thread/mod.rs:529
0x3fcd9d50 - _btdm_bss_end
at ??:??
0x420bd223 - as core::ops::function::FnOnce>::call_once
at /home/remmy/.rustup/toolchains/esp/lib/rustlib/src/rust/library/alloc/src/boxed.rs:1985
0x3fcd9de0 - _btdm_bss_end
at ??:??
0x420c3735 - as core::ops::function::FnOnce>::call_once
at /home/remmy/.rustup/toolchains/esp/lib/rustlib/src/rust/library/alloc/src/boxed.rs:1985
0x3fcd9e00 - _btdm_bss_end
at ??:??
0x420ef854 - pthread_task_func
at /home/remmy/src/i/elofleet/firmware/elobox/.embuild/espressif/esp-idf/v5.0.3/components/pthread/pthread.c:196
0x3fcd9e20 - _btdm_bss_end
at ??:??
```