alliedvision / gst-vimbasrc

Official vimbasrc element for use of Vimba with GStreamer

Losing frames when recording #2

Open iliesaya opened 2 years ago

iliesaya commented 2 years ago

I am trying to record a video with a Jetson Nano and a USB Alvium 1800 U-508. I am using BayerRG8 because RGB8 can't get more than 20 fps. Here is my command line:

gst-launch-1.0 -e vimbasrc camera=DEV_1AB22C00C5A8 settingsfile=/home/jetson/CAMUSB.xml ! \
video/x-bayer,format=rggb ! bayer2rgb ! \
videoconvert ! video/x-raw ! nvvidconv ! 'video/x-raw(memory:NVMM)' ! \
omxh265enc bitrate=20000000 ! matroskamux ! filesink location=outrec.mkv

Here is the result: https://youtu.be/iOeARhV2G0I. The recording is lagging, missing frames, and jumping.

Here is the camera XML config file: CAMUSB.xml.zip. Disabling Device Link Throughput Limit or changing its value doesn't change anything. usbfs_memory_mb is set to 1000. The Jetson CPU is set to MAXN.

The Jetson Nano doesn't seem to be able to handle full resolution, so I use 1848x1542 at 60 fps. There is no issue with Vimba Viewer, so I believe it's vimbasrc that can't keep up. I have no issue recording this way with any other webcam; even grabbing 4032x3040 frames at 30 fps with a cheap Arducam and nvarguscamerasrc or v4l2src is perfectly smooth.

Any ideas?

Thanks a lot!

EDIT: removing the line video/x-bayer,format=rggb ! bayer2rgb gives a smooth black-and-white video. Any ideas of other formats I can use? YcbCr8_CbYCr is also maxing out at 22 fps ...

NiklasKroeger-AlliedVision commented 2 years ago

I believe you might be running into a problem I have also seen myself, one that is listed in the known issues in the README (though there it is mentioned as a problem with high-fps situations and image display):

  • In situations where cameras submit many frames per second, visualization may slow down the pipeline and lead to a large number of incomplete frames. For incomplete frames warnings are logged. The user may select whether they want to drop incomplete frames (default behavior) or to submit them into the pipeline for processing. Incomplete frames may contain pixel intensities from old acquisitions or random data. The behavior is selectable with the incompleteframehandling property.

Unfortunately there is currently no clear plan on how to tackle this problem. The following is just speculation on my part, but my guess is that too much processing time is being spent per frame to guarantee that all camera frames are transferred successfully. From my understanding the entire pipeline seems to run in the same process. The typical job is to call the registered create function of the source element and move that buffer through the pipeline until it arrives at the sink. This processing takes time depending on how complex the pipeline is. After the buffer is handled by the sink, the next buffer is requested by again calling the create function of the source. During this processing time the camera continues to record images and transfers them via Vimba's asynchronous transfer methods.

A small explanation of how the asynchronous transfer in gst-vimbasrc works: for every received frame a previously registered frame callback is called, and Vimba expects that callback to perform the image processing as quickly as possible and then hand the buffer that contained the image back to Vimba so it may be used for the next image transfer. In the case of gst-vimbasrc the frame callback simply takes the received frame and puts it into a small queue of filled frames, from which the first element is taken whenever a create call is performed. The frame is not handed back to Vimba at that point, since the contained image data has not yet been pushed out to the GStreamer pipeline. The frame buffer is actually returned to Vimba in the create function, where the image data is copied into a separate GStreamer buffer and that buffer is passed down the pipeline. After the image data is copied out, it is safe to return the frame to Vimba for further transfers, because a copy of the image data now travels down the pipeline and the frame buffer's memory may be changed.

This has the benefit that the frame callback finishes very quickly, because it simply has to add a pointer to a queue. It therefore does not block further internal Vimba tasks from running. However, it also means that the filled_frame_queue might be sitting there with filled frames that are not being taken out quickly enough, leading to a shortage of frame buffers for Vimba to transfer data with. I believe this is what is happening: the "fps processing speed of the pipeline" is smaller than the "fps recording speed of the camera", and there are not enough buffers present to work around this.
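To illustrate the idea, here is a toy model of that buffer cycle (hypothetical numbers, not the actual gst-vimbasrc code): a fixed pool of frame buffers is shared between a 60 fps camera and a pipeline that only keeps up with 40 fps. As soon as the pool is starved, every frame that arrives while no buffer is free is lost:

```python
from collections import deque

NUM_BUFFERS = 3   # assumption: a small fixed buffer pool, as in gst-vimbasrc
                  # camera delivers at 60 fps, pipeline only processes at 40 fps

free_buffers = deque(range(NUM_BUFFERS))  # buffers Vimba may fill
filled_frame_queue = deque()              # filled frames awaiting a create call

lost = delivered = 0
for tick in range(600):  # simulate 10 seconds in 1/60 s steps
    # camera side: one frame arrives every tick (60 fps)
    if free_buffers:
        filled_frame_queue.append(free_buffers.popleft())
    else:
        lost += 1  # no free buffer -> frame is dropped/incomplete
    # pipeline side: create() only runs on 2 of every 3 ticks (40 fps)
    if tick % 3 != 0 and filled_frame_queue:
        buf = filled_frame_queue.popleft()
        delivered += 1            # image data copied into a GStreamer buffer...
        free_buffers.append(buf)  # ...and the Vimba buffer is handed back

print(delivered, lost)  # roughly 2/3 of the frames survive, ~1/3 are lost
```

The pool size barely matters here; once the pipeline is persistently slower than the camera, the pool drains and the steady-state loss rate is simply the fps difference.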

The problem could (if my assumptions above are correct) in theory be solved by having some sort of buffer element that calls the create function of gst-vimbasrc as quickly as possible and stores the result internally. The rest of the pipeline could then take images out of that buffer at a slower pace without any frames being dropped. Unfortunately I am no real GStreamer expert (yet), but perhaps something from this buffering strategy website might be helpful. Maybe it is even as simple as adding a queue right after the vimbasrc element, though I would guess some configuration is necessary, because by default a queue will only buffer about 10 MB, which is not a lot if uncompressed images are transferred. There are properties to adjust the size of the buffer provided by queue (max-size-buffers, max-size-bytes, max-size-time). This would of course run the risk of eating up quite a bit of memory as the buffer grows to hold more and more frames...
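The decoupling idea can be sketched like this (a hypothetical model, not GStreamer code): one thread drains the source as fast as frames arrive, and an unbounded queue lets the slower downstream stage take frames at its own pace without any being dropped:

```python
import queue
import threading

frames_from_camera = list(range(120))  # stand-in for 2 s of a 60 fps stream

unbounded = queue.Queue()  # plays the role of a large GStreamer `queue` element

def reader():
    # drains the source ("calls create") as quickly as possible,
    # so the source's small buffer pool is never starved
    for frame in frames_from_camera:
        unbounded.put(frame)
    unbounded.put(None)  # end-of-stream marker

t = threading.Thread(target=reader)
t.start()

processed = []
while True:
    frame = unbounded.get()  # downstream consumes at its own (slower) pace
    if frame is None:
        break
    processed.append(frame)  # slow encoding/muxing would happen here

t.join()
print(len(processed))  # all 120 frames arrive, none dropped
```

The trade-off is exactly the one mentioned above: the queue absorbs the fps mismatch by growing, so memory use increases as long as the consumer lags behind.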

If you want, you can of course also try to adjust the internal behaviour of gst-vimbasrc itself. Perhaps you have better ideas on how filled frames should be handled. One simple first thing you may try is to increase the number of Vimba frame buffers that gst-vimbasrc uses to transfer image data from the camera. The number is defined here in the header file, though from my experience it does not have any significant impact.
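As a quick sanity check of that last remark, a tiny tick-based toy model (hypothetical numbers, not the real code) shows why: when the pipeline is persistently slower than the camera, a larger buffer pool only delays the starvation, it does not remove the steady-state frame loss:

```python
from collections import deque

def lost_frames(num_buffers, ticks=600):
    """Count lost frames for a 60 fps camera and a 40 fps pipeline."""
    free = deque(range(num_buffers))
    filled = deque()
    lost = 0
    for tick in range(ticks):
        if free:
            filled.append(free.popleft())  # camera fills a buffer (60 fps)
        else:
            lost += 1                      # pool exhausted -> frame lost
        if tick % 3 != 0 and filled:       # pipeline keeps up with only 40 fps
            free.append(filled.popleft())  # buffer handed back after the copy
    return lost

# A 10x larger pool just pushes the first loss out by a second or two;
# after that, the loss rate is the same fps difference in both cases.
print(lost_frames(3), lost_frames(30))
```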

removing the line video/x-bayer,format=rggb ! bayer2rgb give a smooth black and white video,

That sounds like you have identified the bottleneck in your pipeline. That is good!

any ideas of other format I can use ? YcbCr8_CbYCr is also maxing at 22 fps ...

I am not entirely sure where the fps limitation in the color formats comes from. It might be the fact that the camera itself needs time to perform the debayering, which results in the fps drop, or the increase in transferred data. In the bayer image the amount of data that needs to be transferred is roughly number_of_pixels * 8bit, whereas for color images it is closer to number_of_pixels * number_of_color_channels * 8bit (unless there is some color subsampling). This shows that you can expect about 3 times the amount of data from an RGB image (or a non-subsampled YCbCr image) as from a bayered image. That would be in line with the ~1/3 fps performance you see compared to the bayered image...
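To put rough numbers on this for the resolution used in this issue (1848x1542 at 60 fps; back-of-the-envelope only): the 3-channel rate lands in the region where real-world USB 3.0 throughput (commonly quoted around 350-400 MB/s) becomes a plausible limit, while the bayer rate sits comfortably below it.

```python
# Illustrative bandwidth estimate for the resolution from this issue.
width, height, fps = 1848, 1542, 60
pixels = width * height

bayer_bytes_per_s = pixels * 1 * fps  # 8-bit bayer: 1 byte per pixel
rgb_bytes_per_s = pixels * 3 * fps    # RGB8: 3 bytes per pixel

print(bayer_bytes_per_s / 1e6)              # ~171 MB/s
print(rgb_bytes_per_s / 1e6)                # ~513 MB/s
print(rgb_bytes_per_s / bayer_bytes_per_s)  # exactly 3x the data
```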

Another thing that might help is to also contact our support via the form on our website. They generally have far more experience in getting the last bit of performance out of our cameras, especially in combination with certain boards. Feel free to also link them to this GitHub issue when you contact them.

PS: Thanks for the high quality issue reporting! Also: sorry for the wall of text...