rust-embedded / heapless

Heapless, `static` friendly data structures
Apache License 2.0
1.55k stars 182 forks

Queue not thread/interrupt safe? #355

Open noahzarro opened 1 year ago

noahzarro commented 1 year ago

I am developing in RISC-V and in my understanding the Queue is not thread or interrupt safe. Imagine the following scenario where an item is dequeued:

    unsafe fn inner_dequeue(&self) -> Option<T> {
        let current_head = self.head.load(Ordering::Relaxed); // holds head before second dequeue
        // (1) interrupt or thread switch happens here, inner_dequeue is executed in second context
        // -> head is incremented by second context

        // current head is no longer up to date and still holds the not yet incremented value
        if current_head == self.tail.load(Ordering::Acquire) { 
            None
        } else {
            // the same element is returned twice:
            // once here and once in the second context
            let v = (self.buffer.get_unchecked(current_head).get() as *const T).read();  

            self.head
                .store(Self::increment(current_head), Ordering::Release);

            Some(v)
        }
    }

Imagine that the inner_dequeue function is interrupted at position (1), right between the load of the head and the load of the tail, and that inner_dequeue is then executed in the other context (either an interrupt handler or a different thread). The head has now been incremented by the second context, but the original context still uses the old value. This way, one element is returned/dequeued twice.

As far as I understand, the atomics do not prevent this behavior, or at least not on the single-core riscv32imc target. There, interrupts get disabled and re-enabled around each load instruction individually, so they are enabled in between the two loads:

// disable interrupts
let current_head = self.head.load(Ordering::Relaxed);
// enable interrupts

// disable interrupts
if current_head == self.tail.load(Ordering::Acquire) { 
// enable interrupts
...
}

So I am not sure whether thread/interrupt safety is guaranteed, but if it is meant to be, I would suggest putting a critical section around the whole dequeue process.
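The suggested fix could be sketched as follows. This is purely an illustrative assumption, not heapless code: `std::sync::Mutex` stands in for a real interrupt-disabling critical section (on embedded targets one would typically use the `critical-section` crate), and the type name `CsQueue` is made up for the sketch. The point is that the head load, the tail comparison, and the head store all happen inside one critical section, so a second context can never observe the stale head.

```rust
use std::sync::Mutex;

// Hypothetical queue protected by a single critical section per operation.
pub struct CsQueue<const N: usize> {
    // (head, tail, buffer) are guarded together
    state: Mutex<(usize, usize, [u8; N])>,
}

impl<const N: usize> CsQueue<N> {
    pub fn new() -> Self {
        CsQueue { state: Mutex::new((0, 0, [0; N])) }
    }

    pub fn enqueue(&self, v: u8) -> Result<(), u8> {
        let mut s = self.state.lock().unwrap(); // enter "critical section"
        let next = (s.1 + 1) % N;
        if next == s.0 {
            Err(v) // full (one slot is sacrificed to tell full from empty)
        } else {
            let tail = s.1;
            s.2[tail] = v;
            s.1 = next;
            Ok(())
        } // guard drops here: exit "critical section"
    }

    pub fn dequeue(&self) -> Option<u8> {
        let mut s = self.state.lock().unwrap(); // enter "critical section"
        if s.0 == s.1 {
            None // empty
        } else {
            let v = s.2[s.0];
            // the head update is published before the section ends, so no
            // other context can ever see the stale head from the report
            s.0 = (s.0 + 1) % N;
            Some(v)
        }
    }
}
```

The cost of this approach is that every enqueue/dequeue pays for entering and leaving the critical section, which is exactly what the lock-free design below avoids.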

korken89 commented 1 year ago

Hi, it's not possible for two inner_dequeue calls to race against each other, because dequeuing is guarded by a Consumer that requires &mut self access.

What can happen is that there is a window of inconsistency in which the queue can be considered empty or full while it actually holds one element, or one element less than its capacity; this is by design. In exchange, we don't need CAS operations, which makes the queue work on more Cortex-M targets.
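The ownership guard described above can be sketched with a toy stand-in. This is a simplified illustration of the heapless::spsc design, not the real crate code (the names, the fixed capacity of 4, and the `u8` element type are all assumptions): split() requires exclusive access to the queue and hands out exactly one Producer and one Consumer, and because dequeue takes &mut self, a second racing dequeue would need a second &mut Consumer, which the borrow checker rejects at compile time.

```rust
use std::cell::UnsafeCell;
use std::sync::atomic::{AtomicUsize, Ordering};

// Toy 4-slot SPSC ring buffer, a simplified stand-in for heapless::spsc.
struct Queue {
    buffer: [UnsafeCell<u8>; 4],
    head: AtomicUsize, // only ever written by the single Consumer
    tail: AtomicUsize, // only ever written by the single Producer
}

// Sharing is sound because each index has exactly one writer.
unsafe impl Sync for Queue {}

struct Producer<'a>(&'a Queue);
struct Consumer<'a>(&'a Queue);

impl Queue {
    const fn new() -> Self {
        Queue {
            buffer: [UnsafeCell::new(0), UnsafeCell::new(0),
                     UnsafeCell::new(0), UnsafeCell::new(0)],
            head: AtomicUsize::new(0),
            tail: AtomicUsize::new(0),
        }
    }

    // `&mut self` guarantees the caller has exclusive access, so at most
    // one Producer/Consumer pair can exist for this queue at a time.
    fn split(&mut self) -> (Producer<'_>, Consumer<'_>) {
        (Producer(self), Consumer(self))
    }
}

impl Producer<'_> {
    fn enqueue(&mut self, v: u8) -> Result<(), u8> {
        let tail = self.0.tail.load(Ordering::Relaxed);
        let next = (tail + 1) % 4;
        if next == self.0.head.load(Ordering::Acquire) {
            return Err(v); // "full" (one slot sacrificed, by design)
        }
        unsafe { *self.0.buffer[tail].get() = v };
        self.0.tail.store(next, Ordering::Release);
        Ok(())
    }
}

impl Consumer<'_> {
    // `&mut self` is what rules out the double-dequeue from the report:
    // two simultaneous dequeues would need two `&mut Consumer`s.
    fn dequeue(&mut self) -> Option<u8> {
        let head = self.0.head.load(Ordering::Relaxed);
        if head == self.0.tail.load(Ordering::Acquire) {
            return None; // empty
        }
        let v = unsafe { *self.0.buffer[head].get() };
        self.0.head.store((head + 1) % 4, Ordering::Release);
        Some(v)
    }
}

fn demo() -> Vec<Option<u8>> {
    let mut q = Queue::new();
    let (mut p, mut c) = q.split();
    p.enqueue(1).unwrap();
    p.enqueue(2).unwrap();
    vec![c.dequeue(), c.dequeue(), c.dequeue()]
}
```

With only one writer per index, plain load/store with acquire/release ordering is enough; no CAS loop is needed.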

huming2207 commented 2 months ago

Hi @korken89 @newAM

Sorry to bring this old topic up again, but I've got a similar question. Say if:

In that case, can I make these two queues lock_free, i.e. put them in the shared struct like this:

struct Shared { 
  #[lock_free]
  tx_queue: Queue<u8, 1024>,

  #[lock_free]
  rx_queue: Queue<u8, 1024>,
}

Is it safe to do so? Or should I wrap them with lock() instead?

Jackson