noahzarro opened this issue 1 year ago
Hi, it's not possible to have two `inner_dequeue` calls race against each other, as dequeuing is guarded by a `Consumer` that requires `&mut self` access.
What can happen is that there is a window of inconsistency in which the queue can be considered empty/full while there is 1 element in it, or 1 less than full - and this is by design. This way we don't need to use CAS operations, which makes the queue work on more Cortex-M targets.
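For illustration, a minimal sketch of that single-producer/single-consumer split (assuming the heapless 0.7 `spsc` API):

```rust
use heapless::spsc::Queue;

fn main() {
    // One producer endpoint and one consumer endpoint; both `enqueue`
    // and `dequeue` take `&mut self`, so two dequeues cannot overlap.
    let mut queue: Queue<u8, 4> = Queue::new();
    let (mut producer, mut consumer) = queue.split();

    producer.enqueue(42).ok();
    assert_eq!(consumer.dequeue(), Some(42));
}
```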
Hi @korken89 @newAM
Sorry to bring this old topic up again, but I've got a similar question. Say if:
Then in this case, can I make these two queues lock_free? i.e. I put them in the struct like this:
```rust
struct Shared {
    #[lock_free]
    tx_queue: Queue<u8, 1024>,
    #[lock_free]
    rx_queue: Queue<u8, 1024>,
}
```
Is it safe to do so? Or should I wrap them with `lock()` instead?
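(For reference, a hypothetical sketch of how I imagine declaring this in an RTIC 1.x app; the device crate, dispatcher, and task names below are placeholders. As far as I understand, `lock_free` requires all accessing tasks to run at the same priority, which RTIC checks at compile time.)

```rust
// Hypothetical RTIC 1.x sketch; `some_pac` and the dispatcher/task
// names are placeholders, not real identifiers.
#[rtic::app(device = some_pac, dispatchers = [EXTI0])]
mod app {
    use heapless::spsc::Queue;

    #[shared]
    struct Shared {
        #[lock_free]
        tx_queue: Queue<u8, 1024>,
        #[lock_free]
        rx_queue: Queue<u8, 1024>,
    }

    #[local]
    struct Local {}

    #[init]
    fn init(_cx: init::Context) -> (Shared, Local, init::Monotonics) {
        (
            Shared {
                tx_queue: Queue::new(),
                rx_queue: Queue::new(),
            },
            Local {},
            init::Monotonics(),
        )
    }

    // Both tasks run at the same priority, so the lock_free resources
    // can be accessed directly, without calling `lock()`.
    #[task(priority = 1, shared = [tx_queue])]
    fn fill_tx(cx: fill_tx::Context) {
        cx.shared.tx_queue.enqueue(0xAA).ok();
    }

    #[task(priority = 1, shared = [tx_queue])]
    fn drain_tx(cx: drain_tx::Context) {
        let _byte = cx.shared.tx_queue.dequeue();
    }
}
```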
Jackson
I am developing on RISC-V, and in my understanding the Queue is not thread or interrupt safe. Imagine the following scenario where an item is dequeued:
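Roughly, the dequeue does something like this (a deliberately simplified sketch for illustration only, not the actual heapless source; position (1) marks the point I mean):

```rust
use core::sync::atomic::{AtomicUsize, Ordering};

// Simplified stand-in for the real queue, only to show the interleaving
// described below.
struct QueueSketch<const N: usize> {
    head: AtomicUsize,
    tail: AtomicUsize,
    buffer: [u8; N],
}

impl<const N: usize> QueueSketch<N> {
    fn inner_dequeue(&self) -> Option<u8> {
        let head = self.head.load(Ordering::Acquire);
        // (1) if another context runs a dequeue here, `head` is advanced,
        //     but this context keeps working with the stale value
        let tail = self.tail.load(Ordering::Acquire);

        if head == tail {
            None // queue looks empty
        } else {
            let value = self.buffer[head % N];
            self.head.store(head.wrapping_add(1), Ordering::Release);
            Some(value)
        }
    }
}
```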
Imagine that the `inner_dequeue` function is interrupted at position (1), right between the load of `head` and the load of `tail`, and that in the other context (either an interrupt or a different thread) `inner_dequeue` is executed. Now the `head` has already been incremented by the second context, but the original context still uses the old value. Like this, one value is returned/dequeued twice.

As far as I understand, the atomics do not prevent this behavior, or at least not for the single-core `riscv32imc` target. Here, interrupts get disabled and re-enabled just for the two load instructions, but they are enabled in between the two loads.

So I am not sure if thread/interrupt safety is guaranteed, but if it is supposed to be, I would suggest using a critical section around the whole dequeue process.
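A minimal sketch of that suggestion, assuming the `critical-section` crate (with an implementation provided elsewhere, e.g. by the HAL); the wrapper name and capacity are made up:

```rust
use core::cell::RefCell;

use critical_section::Mutex;
use heapless::spsc::Queue;

// Hypothetical queue shared with an interrupt handler.
static QUEUE: Mutex<RefCell<Queue<u8, 16>>> = Mutex::new(RefCell::new(Queue::new()));

/// Runs the whole dequeue inside a critical section, so no interrupt
/// can run between the head and tail loads.
fn dequeue_in_cs() -> Option<u8> {
    critical_section::with(|cs| QUEUE.borrow_ref_mut(cs).dequeue())
}
```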