Hi Lukas, thanks for your great work! We have recently been deploying your work on an arm64 platform. When we validate with the TUM-VI dataset, the runtime sometimes spikes to 500 ms in low-texture scenes, while it normally stays around 100 ms. We investigated the time consumption of each step and found the most time-consuming part is the function fh->makeImages(image->image, &Hcalib);
Do you have any suggestions for how to optimize the speed?
Lowering the resolution and the number of points (e.g. to 300 or below) are indeed the first things to try.
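If your build inherits DSO-style settings globals, lowering the point count could look like the sketch below. The global names are taken from DSO's settings.h and the values are only illustrative; please double-check that your version uses the same names and defaults.

```cpp
// Hedged sketch: reduce point density before starting the system.
// setting_desiredPointDensity / setting_desiredImmatureDensity are
// DSO settings globals; verify them against your version's settings.h.
setting_desiredPointDensity  = 300;  // active points kept in the optimization
setting_desiredImmatureDensity = 600;  // candidate (immature) points per keyframe
```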
fh->makeImages just builds the image pyramid and the gradients. Maybe this code could be sped up by reducing cache misses, parallelizing it, or rewriting it with SIMD instructions, but I am not sure.
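As a rough idea of the parallelization route, here is a minimal sketch of a gradient pass with the row loop parallelized via OpenMP. The function name and layout are illustrative, not DSO's actual internals; it computes central-difference dx/dy for one pyramid level, which is only part of what makeImages does.

```cpp
#include <vector>

// Illustrative sketch (not DSO's real API): central-difference gradients
// for one pyramid level. The outer row loop is embarrassingly parallel,
// so OpenMP can split it across cores when available.
static void computeGradients(const std::vector<float>& img, int w, int h,
                             std::vector<float>& dx, std::vector<float>& dy)
{
    dx.assign(w * h, 0.0f);
    dy.assign(w * h, 0.0f);
#ifdef _OPENMP
#pragma omp parallel for
#endif
    for (int y = 1; y < h - 1; ++y)
        for (int x = 1; x < w - 1; ++x)
        {
            int i = y * w + x;
            // Central differences; borders are left at zero.
            dx[i] = 0.5f * (img[i + 1] - img[i - 1]);
            dy[i] = 0.5f * (img[i + w] - img[i - w]);
        }
}
```

On arm64 the same loop could additionally be vectorized with NEON intrinsics, but whether that pays off depends on where the cache misses actually are, so profiling first would be advisable.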
You could also reduce the number of keyframes, but this will result in less accuracy.
You could change the marginalization order to only keep the newest IMU velocity and bias, but that would only make sense if the total time is not dominated by other parts of the system anyway. For the settings I have used this was not necessary, but maybe with fewer points, etc., it will become more important.
(The last two tips are mostly for making the bundle adjustment thread faster.)