nagua opened this issue 2 years ago
Hi,
We should probably clarify some terms so the things we speak about become less ambiguous. First of all, when you say:

> images are always only generated once you call `io::mmap::stream::dequeue`

what really happens is that captured frames are constantly written to the buffers which sit in the incoming queue of the driver. Once we call `dequeue()`, a buffer is removed from the driver's incoming queue and given to the application. If no buffer is available, the call will block, which is the expected behavior right now.
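For reference, here is a minimal sketch of that blocking path, roughly following the crate's README example (the device index and buffer count are arbitrary, and the names assume a recent crate version):

```rust
use v4l::buffer::Type;
use v4l::io::traits::CaptureStream;
use v4l::prelude::*;

fn main() -> std::io::Result<()> {
    // Open /dev/video0.
    let mut dev = Device::new(0)?;
    // All buffers are queued into the driver's incoming queue when the
    // stream starts, so capturing can begin immediately.
    let mut stream = MmapStream::with_buffers(&mut dev, Type::VideoCapture, 4)?;

    loop {
        // Blocks until the driver has filled a buffer, then hands it to us.
        let (buf, meta) = stream.next()?;
        println!("frame: {} bytes, sequence {}", buf.len(), meta.sequence);
    }
}
```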
Just using `NONBLOCK` as a flag when opening the file descriptor will not magically alter that behavior. Instead, it will make the `dequeue()` call nonblocking: if no filled buffer is available, the call returns immediately with `EAGAIN` instead of waiting.
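As a sketch of what that nonblocking behavior looks like from the application side (assuming the trait's `dequeue` returns the buffer index, as it does in the current sources):

```rust
use std::io::{self, ErrorKind};
use v4l::io::traits::CaptureStream;
use v4l::prelude::*;

/// Assumes the device fd was opened with O_NONBLOCK: VIDIOC_DQBUF then fails
/// with EAGAIN (ErrorKind::WouldBlock) when no filled buffer is available.
fn try_dequeue(stream: &mut MmapStream<'_>) -> io::Result<Option<usize>> {
    match stream.dequeue() {
        Ok(index) => Ok(Some(index)), // a filled buffer is ready
        Err(e) if e.kind() == ErrorKind::WouldBlock => Ok(None), // nothing yet
        Err(e) => Err(e), // genuine I/O error
    }
}
```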
Right now, we queue up all the buffers on stream start in the `next()` function: https://github.com/raymanfx/libv4l-rs/blob/master/src/io/mmap/stream.rs#L166. This makes sure the stream is not starved of buffers right after it has been created.
I agree that a non-blocking API is required if you cannot afford to have the `queue()` and `dequeue()` (and thus `next()`) calls block you. There is an existing PR to do this: https://github.com/raymanfx/libv4l-rs/pull/48, but I think I prefer `poll()` over `select()` (unlike `select()`, `poll()` is not limited by `FD_SETSIZE`). It's been sitting there for long enough though; I will probably just rework it into a polling mechanism after merging it.
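For illustration, a minimal sketch of such a polling mechanism using `libc::poll` directly; how the raw fd is obtained from the `Device` is left out here:

```rust
use std::os::unix::io::RawFd;

/// Wait until the capture fd is readable, i.e. a filled buffer can be
/// dequeued without blocking, or until `timeout_ms` expires.
fn wait_readable(fd: RawFd, timeout_ms: i32) -> std::io::Result<bool> {
    let mut pfd = libc::pollfd {
        fd,
        events: libc::POLLIN,
        revents: 0,
    };
    match unsafe { libc::poll(&mut pfd, 1, timeout_ms) } {
        -1 => Err(std::io::Error::last_os_error()),
        0 => Ok(false), // timed out: no frame ready yet
        _ => Ok(pfd.revents & libc::POLLIN != 0), // dequeue() will not block
    }
}
```

Once `wait_readable` returns `true`, a subsequent `dequeue()` can be expected to succeed without blocking.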
Does this help you? I had a look at the changes you linked and they seem to do similar things to what the PR is doing, but using `poll`. There are various smaller issues with the code in the repo you linked, so I would rather do it a bit differently.
Hi,
first off, you are right, I remembered that part incorrectly. `dequeue()` should indeed return directly when a buffer was filled while our code was processing the last image. But for some reason that didn't work for us: we always needed to wait at least `1/FPS` seconds to get a new image whenever we reached the `dequeue` call. Maybe that was another error with the old v4l-rs or with our code, but at least it brought me back to reconsidering the `poll`-based solution with `NONBLOCK`. So it should still be valuable to have both ways.
At the very least we need the `NONBLOCK` behavior, so we should concentrate on that problem. What problems do you see with our current implementation, and what can we do to get the code upstreamed? Also, what should we do about the now-missing `get` and `get_meta` functions? Without them it is impossible to get the contents of an image after calling `dequeue`. How can we bring them back, or introduce a better way of getting the image contents and metadata while using the `dequeue` function?
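For the sake of discussion, here is one purely hypothetical shape such accessors could take; the trait name and signatures below are illustrative, not the crate's actual old API:

```rust
use std::io;
use v4l::buffer::Metadata;

/// Hypothetical trait, not part of libv4l-rs: read access to a dequeued
/// buffer by the index that `dequeue()` returned.
pub trait FrameAccess {
    /// Borrow the mapped image bytes of the buffer at `index`.
    fn get(&self, index: usize) -> io::Result<&[u8]>;

    /// Borrow the driver-provided metadata (sequence, timestamp, ...) of
    /// the buffer at `index`.
    fn get_meta(&self, index: usize) -> io::Result<&Metadata>;
}
```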
Have a nice day!
Could you please check out the `next` branch [1] and see whether that is sufficient for your purposes?
If the `next` API of the `Stream` trait is not enough to satisfy your needs, we can probably restore the `get` and `get_meta` functions.
First off: I still don't understand why `dequeue` and `queue` are still part of the public API, as I currently cannot do anything with their result.
On the usability of the `next` API in the `next` branch: it is nice that there is a poll function now for a `Device`, which makes my life a bit easier, but if I supply multiple events to the API I still cannot tell which event source fired once the waiting is done. Also, your `queue`, `dequeue` and `next` calls are still completely synchronous. My use case is this: our software can handle the camera's 30 fps in 95% of the cases, but sometimes we fall behind. If that happens we most probably cannot catch up and need to skip the images that were taken while our computation took longer than expected. Then I want to call `next` until I receive the newest available image. With the current API I could probably look at the metadata and compute whether an image is the newest one. However, my old approach was to simply call `dequeue` until it reported that no buffer is available and then use the buffer I had received from the previous `dequeue` call (see the sketch below). That approach is no longer possible, and I'm not sure there is a good solution for this currently.
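To make the old approach concrete, here is a sketch of that drain-to-newest loop. It assumes a device opened with `O_NONBLOCK`, so `dequeue()` fails with `WouldBlock` once the driver's outgoing queue is empty, and it assumes the `dequeue`/`queue` signatures from the current sources:

```rust
use std::io::{self, ErrorKind};
use v4l::io::traits::CaptureStream;
use v4l::prelude::*;

/// Drain the outgoing queue and return the index of the newest filled
/// buffer; every stale frame is immediately handed back to the driver.
fn newest_frame(stream: &mut MmapStream<'_>) -> io::Result<usize> {
    // Get at least one frame first. This call can also report WouldBlock
    // if nothing has been captured yet; a caller could poll() beforehand.
    let mut latest = stream.dequeue()?;
    loop {
        match stream.dequeue() {
            Ok(index) => {
                // A newer frame arrived in the meantime: recycle the old one.
                stream.queue(latest)?;
                latest = index;
            }
            // Queue drained: `latest` is the newest frame available right now.
            Err(e) if e.kind() == ErrorKind::WouldBlock => return Ok(latest),
            Err(e) => return Err(e),
        }
    }
}
```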
Another problem we are currently facing is implementing a profiling tool. This tool raises `SIGPROF` at regular intervals. The signals cause the kernel to interrupt the blocking `poll`/`ioctl` calls with `EINTR`, and we want to be able to recover from that condition.
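The recovery itself is straightforward; a minimal sketch (the helper name is ours, not part of the crate):

```rust
use std::io::{self, ErrorKind};

/// Retry an operation that may be interrupted by a signal such as SIGPROF.
/// EINTR surfaces in Rust as ErrorKind::Interrupted.
fn retry_on_eintr<T>(mut op: impl FnMut() -> io::Result<T>) -> io::Result<T> {
    loop {
        match op() {
            Err(e) if e.kind() == ErrorKind::Interrupted => continue, // EINTR: try again
            other => return other,
        }
    }
}

// Usage, e.g. with a capture stream:
//     let index = retry_on_eintr(|| stream.dequeue())?;
```

(For `poll` with a timeout, the remaining time would additionally have to be recomputed before retrying.)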
I hope that this helps you understand our use cases better.
Hello @raymanfx,
first off, I want to thank you for creating this crate. Unfortunately, I'm wondering about some of the decisions you made in it.
You implemented an `mmap` interface where you can queue multiple buffers, but in the way you are using it, images are always only generated once you call `io::mmap::stream::dequeue`, so you will never have images queued up ready to use. For that to work, you need to open the file descriptor as `NONBLOCK`.

Also, with your current iteration I'm not sure why `queue` and `dequeue` are still public, as you cannot do anything with them. You can `dequeue` a new image and you get a `buffer_index`, but you cannot get the image from this index, as the arena is private and you don't have any `get` and `get_meta` functions anymore.

We are currently using it in this way: we run an analysis on an image, get the next image, and skip all images that were asynchronously taken in the meantime (see `NONBLOCK` above) until we reach the newest image that is available at this time, without waiting for a new one. This isn't possible with the `next` method, as it will always wait for a new image, regardless of how long that takes. For this to work you also need a method to poll the file descriptor so you know whether there are images currently waiting.

For the changes we made to do that, you can look at our fork here: https://github.com/HULKs/libv4l-rs/tree/hulksChanges
If you want to, we can come to an understanding on how to integrate these changes properly into your crate. It would be nicer to have one good v4l wrapper instead of forking and maintaining a second one.
Have a nice day!