Closed — DXist closed this 1 year ago
Different drivers use the queue for different reasons, but it has to be Send + Sync: if it were a RefCell or a Mutex, it may panic or deadlock. In short, it is all for the signals. Maybe the SegQueue could be replaced with a RefCell<VecDeque> when the signal feature is disabled. Are you interested in implementing that?
On Linux, signal handling could use the first available thread that didn't mask the signal if the main thread can't be used.
On Linux, signaling could be done via eventfd: the thread that owns the queue issues a blocking read of the eventfd descriptor via io_uring, while the signal-handling thread writes the signal number to the event file descriptor.
So we would increase the overhead of signal handling but optimize the normal I/O path.
I've added a Queue wrapper type in this PR. When the "signal" feature is disabled, Queue is implemented through an UnsafeCell<VecDeque>. Otherwise, "signal" enables the "sync-queue" feature, which depends on crossbeam_queue::SegQueue.
I've included a benchmark that I ran inside a container in the Docker Desktop VM.
The benchmark creates a Driver with the default capacity and posts 1024 completions.
Median latency for the VecDeque-based queue is 93 microseconds; for the SegQueue-based queue, 130 microseconds.
The PR is merged, closing.
The README mentions a thread-per-core architecture, but the driver code uses a queue with thread synchronization.
Wouldn't it be better to give users a choice?
I'm personally interested in driving IO from a single thread.