Hello,

A key parameter you must set is `processingQueueSize`, perhaps to 100 or 500 (depending on the IMU frame rate; each sample takes one slot). If it is left at the default value of 0, the sensor input and VIO processing run synchronously and deterministically in the same thread, no matter how fast or slow your computer is. When the processing queue size is non-zero, the `addFrame()`, `addGyro()`, etc. API functions return quickly and the VIO processing is done in a separate thread.
However, `src/commandline/main.cpp` is not designed for that kind of real-time simulation. What will likely happen is that it reads the input data from disk much faster than VIO can process it, resulting either in dropped frames if `-sampleSyncSmartFrameRateLimiter=true`, or in the processing queue filling up, which again makes the system synchronous. The `main.cpp` would need to be modified so that it looks at the input sensor sample timestamps, only sends them to VIO in real time, and otherwise sleeps.
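To illustrate, here is a minimal sketch of such a pacing loop. The `Sample` struct and the `sendToVio()` helper are hypothetical placeholders for this example (the helper would wrap the actual `addFrame()`/`addGyro()` calls); this is not the existing `main.cpp` code:

```cpp
#include <chrono>
#include <thread>
#include <vector>

// Hypothetical sample type: only the timestamp (seconds) matters for pacing,
// the frame/IMU payload is omitted here.
struct Sample { double timestamp; };

// Hypothetical helper that forwards one sample to VIO via addFrame(),
// addGyro(), etc.; replace with the real API calls.
void sendToVio(const Sample &) {}

// Replay recorded samples at wall-clock speed: sleep until each sample's
// timestamp (relative to the first one) has elapsed, then feed it to VIO.
void replayInRealTime(const std::vector<Sample> &samples) {
    if (samples.empty()) return;
    using clock = std::chrono::steady_clock;
    const auto start = clock::now();
    const double t0 = samples.front().timestamp;
    for (const Sample &sample : samples) {
        const auto offset = std::chrono::duration<double>(sample.timestamp - t0);
        std::this_thread::sleep_until(
            start + std::chrono::duration_cast<clock::duration>(offset));
        sendToVio(sample);
    }
}
```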
If you are using a `main.cpp` fixed in this way for simulation, or you are using the `vio.hpp`/`internal.hpp` API from a system that inputs sensor data in real time, then setting `-processingQueueSize=500 -sampleSyncSmartFrameRateLimiter=true` might be sufficient for what you ask. The other `SampleSync` parameters mainly have to do with reordering the samples and handling the camera-IMU input time offset.
The frame limiter implementation looks complex, so if you want the kind of simple frame drop logic you described, it might be easiest to replace that code with your own.
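As a rough illustration of that kind of drop-oldest logic, here is a minimal sketch of a single-slot frame buffer (a standalone class written for this example, not taken from the `SampleSync` code): the input thread always overwrites the slot with the newest frame, so when the processing thread falls behind, older unprocessed frames are simply dropped.

```cpp
#include <mutex>
#include <optional>
#include <utility>

// Single-slot buffer: holding at most one pending frame means that if
// processing falls behind, older unprocessed frames are overwritten
// (dropped) and only the newest one is handed to VIO.
template <typename Frame>
class LatestFrameSlot {
public:
    // Called from the input thread for every arriving frame.
    void push(Frame frame) {
        std::lock_guard<std::mutex> lock(mutex);
        slot = std::move(frame); // silently drops any unprocessed frame
    }

    // Called from the processing thread when it is ready for the next frame.
    std::optional<Frame> pop() {
        std::lock_guard<std::mutex> lock(mutex);
        std::optional<Frame> out = std::move(slot);
        slot.reset();
        return out;
    }

private:
    std::mutex mutex;
    std::optional<Frame> slot;
};
```

A real version would also need some signalling (e.g. a condition variable) so the processing thread can wait for a new frame instead of polling.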
Note that for best performance, routinely dropping frames might not be optimal. For example, feature tracking might fail if multiple frames are dropped in succession. For that reason it's better to first tune the input and algorithm parameters so that the processing is roughly real-time and frame dropping rarely occurs. Some easy parameters to modify here would be the camera and IMU frame rates, the camera resolution, `maxTracks`, and `cameraTrailLength`.
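For example, a run combining the real-time flags with reduced tracking load could look something like the line below; the parameter names come from this thread, but the values are only illustrative assumptions, not tuned recommendations:

`-processingQueueSize=500 -sampleSyncSmartFrameRateLimiter=true -maxTracks=100 -cameraTrailLength=10`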
Thank you for your detailed response! I really appreciate you!
I will give your suggestions a try and see what I can come up with.
I'm trying to run HybVIO on euroc/tum-vi as if it were a realtime system where camera frames are processed as they arrive. If there are ever multiple frames that haven't been processed yet, I'd like to discard the old frame and use the new one. For example:

1. HybVIO gets frame 100 and starts processing
2. Frame 101 arrives
3. Frame 102 arrives
4. HybVIO completes processing of frame 100
5. HybVIO discards frame 101 and starts processing frame 102
Do the parameters I've chosen accurately model this situation? I'm particularly unsure about the values of `sampleSyncLag` and `sampleSyncFrameBufferSize`:

`-sampleSyncFrameCount=1 -sampleSyncLag=9 -sampleSyncSmartFrameRateLimiter=true -sampleSyncFrameBufferSize=1`
Any help would be greatly appreciated. Thank you!