gwappa / python-lab-grab

a grabber for ImagingSource cameras equipped with the optional NVenc encoder.
MIT License

Possible frame drops #19

Closed · gwappa closed this issue 2 years ago

gwappa commented 2 years ago

The number of strobe triggers often does not match the number of frames in the saved video.

Some of the most recent figures:

| Frame rate (FPS) | # strobe triggers | # video frames | Difference |
|---|---|---|---|
| 30 | 111319 | 111315 | 4 |
| 30 | 109556 | 109552 | 4 |
| 30 | 83774 | 83770 | 4 |
| 160 | 592876 | 592863 | 13 |
| 160 | 583540 | 583531 | 9 |
| 160 | 445965 | 445952 | 13 |

Based on the discussion in https://github.com/TheImagingSource/IC-Imaging-Control-Samples/issues/37, frame drops must be strictly avoided by minimizing the overhead of processing each individual frame (i.e. the work done during callbacks).

Possible bottlenecks (those I can think of currently):

There are two possibilities:

  1. Use a FIFO to buffer frames before they are stored (a minimal sketch of this scheme follows below).
  2. Directly link ffmpeg-related libraries, cf.
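For reference, a minimal sketch of what option 1 could look like (all names here are made up for illustration, this is not the actual lab-grab code): the driver callback only pushes the frame into a bounded FIFO, and a separate thread drains it and hands the frames to whatever does the storage.

```python
import queue
import threading

frame_fifo = queue.Queue(maxsize=256)   # bounded FIFO between callback and writer

def on_frame(frame):
    """Called from the driver's callback context: do as little as possible."""
    try:
        frame_fifo.put_nowait(frame)
    except queue.Full:
        pass  # an explicit drop counter would go here

def writer_loop(encode):
    """Runs in its own thread; `encode` is whatever actually stores the frame."""
    while True:
        frame = frame_fifo.get()
        if frame is None:        # sentinel used to stop the thread
            break
        encode(frame)

writer = threading.Thread(target=writer_loop, args=(print,), daemon=True)
writer.start()
```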
gwappa commented 2 years ago

migrated to use PyAV.

What was observed during testing 1f77fc3:

  1. the storage procedure freezes from time to time (and I had to restart GRAB every time).
  2. there still seem to be frame drops.

Possible cause: the additional frame-copy step. Some rough checks on a decent CPU show that a single-frame copy() (640 x 480, RGB24) can take ~3 ms on average! On the other hand, just copying the content (without allocating a new array) took ~0.3 ms/frame, suggesting that most of the latency comes from memory allocation.
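For reference, a rough way to reproduce this kind of measurement (a sketch only; the numbers depend on the machine, and numpy is assumed as the array type):

```python
import time
import numpy as np

frame = np.zeros((480, 640, 3), dtype=np.uint8)   # one 640 x 480 RGB24 frame
target = np.empty_like(frame)                      # pre-allocated destination
N = 1000

t0 = time.perf_counter()
for _ in range(N):
    _ = frame.copy()            # allocates a new array every time
t1 = time.perf_counter()
for _ in range(N):
    np.copyto(target, frame)    # copies into the existing buffer
t2 = time.perf_counter()

print(f"copy() with allocation : {(t1 - t0) / N * 1e3:.2f} ms/frame")
print(f"copy into pre-allocated: {(t2 - t1) / N * 1e3:.2f} ms/frame")
```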

Temporary solutions:

  1. Use a double-buffered scheme so that the callback and the encoding contexts do not share a single mutex (322709a).
  2. Create buffers of frame objects in BufferThread so that array allocation does not occur in the callback context (3fa2c1f); a rough sketch of the combined idea follows below.
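To illustrate the idea behind solutions 1-2 (a hypothetical sketch, not the code in 322709a or 3fa2c1f): frame arrays are allocated up front, the callback only copies into a free slot, and the encoder side works on the filled slots, with separate locks so the two contexts rarely contend.

```python
import threading
from collections import deque
import numpy as np

class FramePool:
    def __init__(self, shape, count=16, dtype=np.uint8):
        self._free = deque(np.empty(shape, dtype=dtype) for _ in range(count))
        self._ready = deque()
        self._lock_free = threading.Lock()    # separate locks so the callback
        self._lock_ready = threading.Lock()   # and the encoder rarely collide

    def push(self, frame):
        """Callback side: copy into a pre-allocated slot, no allocation here."""
        with self._lock_free:
            if not self._free:
                return False                  # pool exhausted -> counts as a drop
            slot = self._free.popleft()
        np.copyto(slot, frame)
        with self._lock_ready:
            self._ready.append(slot)
        return True

    def pop(self):
        """Encoder side: take the oldest filled slot (or None if empty)."""
        with self._lock_ready:
            return self._ready.popleft() if self._ready else None

    def recycle(self, slot):
        """Encoder side: return a slot to the free pool once it has been written out."""
        with self._lock_free:
            self._free.append(slot)
```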
gwappa commented 2 years ago

tested 5226735, and found the results to be even worse than when I used bare ffmpeg...

The (relatively heavy) use of Python in the callback context seems to be the culprit. It may be necessary to refine the PyAV-based encoder code and migrate completely to a Cython-based implementation.

The use of bare ffmpeg (possibly with buffers?) could at least be better than the PyAV-based code. I will try this approach for the time being.

gwappa commented 2 years ago

1e42336: back to the original figures after migrating to an ffmpeg pipe + buffering.
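For reference, a stripped-down sketch of the ffmpeg-pipe approach (the exact command line, pixel format, and encoder name below are assumptions for illustration, not what 1e42336 does): raw frames are written to ffmpeg's stdin, and ffmpeg performs the encoding in its own process.

```python
import subprocess

WIDTH, HEIGHT, FPS = 640, 480, 30

proc = subprocess.Popen(
    [
        "ffmpeg", "-y",
        "-f", "rawvideo", "-pix_fmt", "rgb24",
        "-s", f"{WIDTH}x{HEIGHT}", "-r", str(FPS),
        "-i", "-",                     # read raw frames from stdin
        "-c:v", "h264_nvenc",          # hand the work to the NVenc encoder
        "output.mp4",
    ],
    stdin=subprocess.PIPE,
)

def write_frame(frame_bytes):
    """Called from the storage thread with one raw frame (WIDTH*HEIGHT*3 bytes)."""
    proc.stdin.write(frame_bytes)

# after acquisition finishes:
# proc.stdin.close()
# proc.wait()
```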

In theory, further steps could be taken:

  1. Write a Cython module that buffers frames without holding the GIL.
  2. Write a Cython module that writes to a pipe without holding the GIL.
  3. Write a Cython module that directly calls e.g. libavcodec without holding the GIL.

But I have no idea how long it could take for me to implement them...

gwappa commented 2 years ago

Come to think of it, buffering frames as soon as they are acquired (in addition to buffering at the storage step) would be an even better solution.

I would need to modify ks-tisgrabber then...

gwappa commented 2 years ago

as of 2b1e938

gwappa commented 2 years ago

It does not seem to be a limit of GPU encoding, but rather an issue with task prioritization (over which I probably do not have much control).

gwappa commented 2 years ago

Now everything is clear:

gwappa commented 2 years ago

The problem now is reworking the system of trigger modes... (see #22)