Closed — Cybis320 closed this issue 8 months ago
Thanks! Would you mind submitting a pull request for this change, so you can be credited for the contribution?
Could you also submit a pull request for your changes to the capture script? It can then be reviewed and tested.
Cheers, Denis
Regarding the updated frame capture script.
The current approach to timestamping video frames is susceptible to variable-latency issues, because frames are retrieved downstream of the buffers. On slower systems the buffers fill up (as indicated by dropped frames), so old frames are incorrectly timestamped as new, with errors of up to several seconds. Aging microSD cards are an additional factor: as a card slows down over time, a system that was previously fast enough may start to exhibit this issue.
I don't know of a way of timestamping upstream of the buffers on these inexpensive IP cameras (is there a way?). So I'm experimenting with ways to keep the buffers nearly empty by retrieving frames more reliably and quickly. That would reduce latency variability and make it easier to calibrate.
One method is to move frame retrieval into a separate thread or process, isolating it from the main code. A separate thread improves timestamp accuracy, but it still suffers latency issues because Python's Global Interpreter Lock (GIL) prevents threads from executing Python bytecode in parallel.
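The threaded approach can be sketched roughly as follows. This is a minimal illustration, not the actual capture script: the class, queue size, and the `read()` interface of the capture object are all assumptions.

```python
# Sketch of a grabber thread that timestamps frames the moment they are
# retrieved, keeping the camera buffer drained. A small bounded queue
# drops the oldest frame rather than letting latency build up.
import threading
import time
from queue import Queue, Empty

class ThreadedGrabber:
    def __init__(self, capture, maxsize=2):
        # `capture` is any object with a blocking read() -> frame method,
        # e.g. a wrapper around cv2.VideoCapture (assumed interface).
        self.capture = capture
        self.frames = Queue(maxsize=maxsize)  # small queue bounds latency
        self.running = False

    def start(self):
        self.running = True
        threading.Thread(target=self._loop, daemon=True).start()

    def _loop(self):
        while self.running:
            frame = self.capture.read()   # blocks until a frame arrives
            ts = time.time()              # stamp immediately after retrieval
            if self.frames.full():
                try:
                    self.frames.get_nowait()  # drop the oldest, keep draining
                except Empty:
                    pass
            self.frames.put((ts, frame))

    def stop(self):
        self.running = False
```

Even with this structure, a long GIL-holding operation in the main thread can delay the grabber loop, which is the accuracy limit described above.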
Implementing a separate process with multiprocessing addresses the accuracy issue by circumventing the GIL. However, the multiprocessing Queue introduces different performance issues, and I had to reduce the frame rate to 15 fps on a Raspberry Pi 4. This seems related to serialization: every frame put on the Queue is pickled and unpickled. The benefit is consistent, accurate timestamps to within about 25 ms.
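A minimal sketch of the multiprocessing variant, assuming a dummy frame source in place of the real camera reads; the function names and sentinel-based shutdown are illustrative. The per-frame `Queue.put()` is where the pickling cost lands.

```python
# Sketch: the grab loop runs in a child process, so the GIL in the main
# process can no longer delay frame retrieval. Each (timestamp, frame)
# tuple put on the Queue is pickled, which is the throughput bottleneck
# observed at higher frame rates.
import time
import multiprocessing as mp

def grab_loop(frame_queue, n_frames, frame_size):
    # Stand-in for the camera read loop; replace with real capture calls.
    for _ in range(n_frames):
        frame = bytes(frame_size)      # dummy frame payload
        ts = time.time()               # stamp at the moment of retrieval
        frame_queue.put((ts, frame))   # pickled here: the slow part
    frame_queue.put(None)              # sentinel: capture finished

def run_capture(n_frames=10, frame_size=640 * 480):
    q = mp.Queue()
    p = mp.Process(target=grab_loop, args=(q, n_frames, frame_size))
    p.start()
    stamps = []
    while True:
        item = q.get()
        if item is None:
            break
        ts, _frame = item
        stamps.append(ts)
    p.join()
    return stamps

if __name__ == "__main__":
    run_capture(n_frames=5, frame_size=100)
```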
I'm now considering a shared memory solution for both accuracy and speed. However, this approach is incompatible with Python 3.7 (and Debian 10): it requires Python 3.8 or newer, which would break many existing installs.
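For reference, a sketch of what the shared-memory route could look like using `multiprocessing.shared_memory` (the Python 3.8+ module in question). The single-slot design, class name, and frame size are assumptions; in a real cross-process setup the `Lock` and `Value` would be passed to the child process at spawn time.

```python
# Sketch: one shared buffer holds the latest frame, avoiding per-frame
# pickling. A Lock guards the buffer and a double Value carries the
# timestamp taken at write time.
import time
from multiprocessing import Lock, Value, shared_memory

FRAME_SIZE = 640 * 480  # bytes per frame (example size)

class LatestFrameSlot:
    """Single-slot store: the writer overwrites, the reader copies out."""

    def __init__(self):
        self.shm = shared_memory.SharedMemory(create=True, size=FRAME_SIZE)
        self.lock = Lock()
        self.timestamp = Value('d', 0.0)

    def write(self, frame_bytes):
        with self.lock:
            self.shm.buf[:len(frame_bytes)] = frame_bytes
            self.timestamp.value = time.time()  # stamp at write time

    def read(self):
        with self.lock:
            # Copy out so the writer can immediately reuse the buffer.
            return self.timestamp.value, bytes(self.shm.buf)

    def close(self):
        self.shm.close()
        self.shm.unlink()
```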
Next, I'll try multiprocessing manager, which might resolve the Queue's performance issues. This could also be a fallback option if shared memory isn't viable on a machine.
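The Manager fallback might look something like this; the function names and the use of a managed list are illustrative assumptions. Note that data still crosses the process boundary through proxy objects, so this mainly relocates the serialization rather than eliminating it.

```python
# Sketch: a Manager server process hosts a shared list, and the grabber
# process appends (timestamp, frame) pairs to it. Works on Python 3.7,
# unlike shared_memory, so it could serve as the fallback path.
import time
import multiprocessing as mp

def grab_into(shared_frames, n_frames):
    # Stand-in for the capture loop; replace with real camera reads.
    for i in range(n_frames):
        shared_frames.append((time.time(), b'frame-%d' % i))

def run_with_manager(n_frames=5):
    with mp.Manager() as manager:
        frames = manager.list()
        p = mp.Process(target=grab_into, args=(frames, n_frames))
        p.start()
        p.join()
        return list(frames)  # copy out before the manager shuts down
```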
I'll let you know what I find out. Suggestions are welcome!
Cheers, Luc
StartCapture assigns bc.dropped_frames to dropped_frames after the value has already gone out of scope due to bc.stopCapture(). Therefore dropped_frames is always reported as zero in the log. Assigning it just before stopCapture() fixes the problem for me.
I changed this section in startCapture.py to solve the problem:
To:
Note that dropped frames are a symptom of the system not keeping up. This can significantly affect timestamp accuracy, as old frames are retrieved from the buffer and stamped as fresh frames. I find that lowering the fps to reduce or eliminate dropped frames improves timestamp accuracy.
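The ordering bug described above can be illustrated with a minimal sketch; `BufferedCapture` here is a stand-in, not the real class from the capture script.

```python
# Sketch of the read-after-cleanup bug: reading a counter from an object
# after its stop call has reset it yields a stale value of zero.

class BufferedCapture:
    def __init__(self):
        self.dropped_frames = 0

    def capture(self):
        self.dropped_frames = 7   # pretend 7 frames were dropped

    def stopCapture(self):
        self.dropped_frames = 0   # cleanup resets the counter

bc = BufferedCapture()
bc.capture()

# Buggy order: reading after stopCapture() always logs zero.
#   bc.stopCapture()
#   dropped_frames = bc.dropped_frames   # -> 0

# Fixed order: copy the value out before stopping.
dropped_frames = bc.dropped_frames
bc.stopCapture()
print(dropped_frames)  # -> 7
```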