snoopcatt opened this issue 3 years ago
I don't know the cqueues API, so I'm not quite sure what that FIFO implementation does to make it "non-blocking". How do you make a queue/stack non-blocking? Is it the access to it that you're controlling? Could you show an example usage?
I will try to explain. Let's look at an example: https://github.com/luvit/luv/blob/master/examples/uvbook/tcp-echo-server.lua
Imagine that we have a FIFO with some objects inside. Let's say there are 5 objects.
In the callback function for read_start, to build a response we :pop() an object from the FIFO, and after we send the response and close the connection we :push() the object back into the FIFO.
If we have 6 simultaneous requests, the sixth request will find the FIFO empty. It will invoke the fifo:empty() function (which throws an error by default).
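Roughly, the pattern looks like the sketch below. This is just my own illustration, not code from the thread: it assumes a fifo object whose pop() suspends the calling coroutine when the queue is empty, like the implementations discussed later in this issue, and the pool/worker names are made up.

local uv = require('luv')

local pool = fifo()                 -- hypothetical queue pre-filled with 5 worker objects
for i = 1, 5 do pool.push({ id = i }) end

local server = uv.new_tcp()
server:bind("127.0.0.1", 8080)
server:listen(128, function(err)
  assert(not err, err)
  local client = uv.new_tcp()
  server:accept(client)
  client:read_start(function(err, chunk)
    assert(not err, err)
    if not chunk then
      client:close()
      return
    end
    -- run the handler in a coroutine so pop() can suspend when the pool is empty
    coroutine.wrap(function()
      local worker = pool.pop()     -- a 6th concurrent request would wait here
      client:write("response built with worker " .. worker.id)
      client:close()
      pool.push(worker)             -- hand the object back for the next request
    end)()
  end)
end)

uv.run()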
Let's look at cqueues-fifo.lua. There is a modified fifo:empty() function:
fifo:setempty(function(self)
  cond:wait()
  return fifo:pop()
end)
And also fifo:push():
function methods:push(...)
  self.fifo:insert(...)
  self.pollfd:signal(1)
end
So, when we try to :pop() an object from an empty FIFO, it will wait for a signal, which occurs when someone :push()es into the FIFO.
Let's return to our example. When we receive 6 simultaneous requests, the sixth one will find the FIFO empty.
With cqueues, the sixth request will invoke fifo:empty(), which waits for a signal.
When one of the first five requests finishes and :push()es its object back into the FIFO, the procedure in the sixth request is woken by the signal and completes normally.
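For reference, here is a minimal self-contained sketch of the same mechanism using plain cqueues primitives. This is my own illustration, not the gist code: a condition variable lets pop() wait cooperatively until push() signals that a value has arrived.

local cqueues = require "cqueues"
local condition = require "cqueues.condition"

local cq = cqueues.new()
local queue, cond = {}, condition.new()

local function push(v)
  queue[#queue + 1] = v
  cond:signal(1)              -- wake one waiting pop()
end

local function pop()
  while #queue == 0 do
    cond:wait()               -- suspends this cqueues coroutine; other I/O keeps running
  end
  return table.remove(queue, 1)
end

cq:wrap(function()
  print("got", pop())         -- parks here until the push below happens
end)

cq:wrap(function()
  cqueues.sleep(1)
  push("hello")
end)

assert(cq:loop())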
How can I do the same with luv?
I tried playing around with timers and pipes, but it does not work.
When the sixth request invokes fifo:empty() (implemented with a timer/pipe), it blocks all I/O, so none of the first five requests can complete and :push() their object back into the FIFO, and the program just hangs.
Hope I explained it clearly enough.
@daurnimator do you understand what is being asked here? When I hear fifo, I think of the linux file objects created by mkfifo(3), but I don't think that's what @snoopcatt means.
@snoopcatt, do you just want a pure Lua library that creates a queue of arbitrary Lua values, or do you want something related to actual linux fifos? I don't understand what the empty thing that throws an error by default means.
> @daurnimator do you understand what is being asked here? When I hear fifo, I think of the linux file objects created by mkfifo(3), but I don't think that's what @snoopcatt means.
A first-in-first-out queue interface. https://en.wikipedia.org/wiki/FIFO_(computing_and_electronics)#Computer_science
i.e. you can push in one end and pop out the other. It stores objects waiting to be popped in an internal buffer.
The integration between a fifo and luv would be: "when I pop and the internal buffer is empty, the current coroutine should be paused, and the next push would wake it up".
@daurnimator, thanks. I understand the general concept of FIFO queues, but the title "with luv" made me think maybe it was related to linux syscalls.
For starters, here is a simple FIFO queue with no max buffer size:
local function fifo()
  local queue = {}   -- holds buffered values, or coroutines waiting for values
  local reader = 0
  local writer = 0

  local function pop()
    if reader < writer then
      -- a value is buffered: hand it out immediately
      local value = queue[reader]
      queue[reader] = nil
      reader = reader + 1
      return value
    end
    -- queue is empty: park the current coroutine until push() wakes it
    local current = coroutine.running()
    queue[reader] = current
    reader = reader + 1
    return coroutine.yield()
  end

  local function push(value)
    if reader > writer then
      -- a coroutine is already waiting: resume it with the value
      local waiting = queue[writer]
      queue[writer] = nil
      writer = writer + 1
      return coroutine.resume(waiting, value)
    end
    -- nobody waiting: buffer the value
    queue[writer] = value
    writer = writer + 1
  end

  return {
    pop = pop,
    push = push,
  }
end
It will block the current coroutine on queue.pop() if there is nothing waiting in the queue and resume when data appears. But it will never block the writer because there is no max queue size. I'm guessing you want some kind of max queue size that blocks the writer if that max size is reached? What if there are multiple concurrent writers and a full buffer, should we then queue all the writers without a limit, or throw an error after the first blocked writer? And the same thing for readers, is there a limit on the number of enqueued readers allowed or is it just the one?
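For example (a small usage sketch I'm adding for illustration, not code from the thread), a consumer coroutine simply parks inside pop() until someone pushes:

local q = fifo()

local consumer = coroutine.create(function()
  local value = q.pop()       -- queue is empty, so the coroutine parks here
  print("consumer got:", value)
end)
coroutine.resume(consumer)    -- runs until pop() yields

-- later, e.g. from a luv callback on the main stack:
q.push("hello")               -- resumes the parked consumer with "hello"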
> return coroutine.resume(waiting, value)
This of course wouldn't handle errors well; nor do you want random 'poppers' resumed from the context of the 'pusher': you want the popper to be resumed next time the main loop is hit. I guess in luv you'd use a uv_async_t for that?
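One way that might look with luv (my assumption based on luv's uv.new_async, not code from this thread): collect the coroutines that are ready to run and let the async callback resume them from the event loop, so they start on a clean stack.

local uv = require('luv')

local ready = {}   -- list of { co = <coroutine>, value = <any> } waiting to be resumed

local wakeup = uv.new_async(function()
  -- runs on the event loop after wakeup:send()
  while #ready > 0 do
    local item = table.remove(ready, 1)
    local ok, err = coroutine.resume(item.co, item.value)
    if not ok then print("resume failed:", err) end
  end
end)

-- inside push(), instead of resuming the waiting popper directly:
local function resume_from_loop(co, value)
  ready[#ready + 1] = { co = co, value = value }
  wakeup:send()    -- the actual resume happens on the next loop iteration
end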
> I'm guessing you want some kind of max queue size that blocks the writer if that max size is reached?
Maybe; the example I wrote (back in 2015) that the OP linked doesn't have that feature though.
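For what it's worth, here is a rough sketch of how a bounded variant could look (my own illustration, not from the gist or this thread), where push() also parks the calling coroutine once max items are buffered; it assumes both readers and writers run inside coroutines:

local function bounded_fifo(max)
  local items, readers, writers = {}, {}, {}

  local function pop()
    if #items > 0 then
      local value = table.remove(items, 1)
      if #writers > 0 then
        -- a slot freed up: wake the oldest blocked writer
        coroutine.resume(table.remove(writers, 1))
      end
      return value
    end
    -- nothing buffered: park this reader
    readers[#readers + 1] = coroutine.running()
    return coroutine.yield()
  end

  local function push(value)
    if #readers > 0 then
      -- hand the value straight to the oldest waiting reader
      return coroutine.resume(table.remove(readers, 1), value)
    end
    if #items >= max then
      -- buffer is full: park this writer until a pop() frees a slot
      writers[#writers + 1] = coroutine.running()
      coroutine.yield()
    end
    items[#items + 1] = value
  end

  return { pop = pop, push = push }
end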
True, I've often been in a context where that was the only option, but I like the idea of using async or something to keep them on clean stacks.
Hello. Looking for a solution (or even the possibility) to implement a non-blocking FIFO queue with libuv.
Just like this: https://gist.github.com/daurnimator/88e942022d4f3cabf57c but for luv instead of cqueues.
I'm ready to pay a bounty (like $100 in USDT or LTC) if someone suggests an answer.