PetWorm / LARVIO

A lightweight, accurate and robust monocular visual inertial odometry based on Multi-State Constraint Kalman Filter.

Trying to get LARVIO to run in "realtime" #13

Open MatthewFehl365 opened 3 years ago

MatthewFehl365 commented 3 years ago

I have been attempting to modify the provided example code to run in "real time" with an IMU and Raspberry Pi camera on an NVIDIA Jetson Nano dev board. So far I have managed to fill the IMU buffer and gather images properly, but after some digging in the code it turns out the algorithm returns the error "not enough features; move device around". The IMU and camera are attached to the same device, so their movement is correlated, but it is not able to track features across frames.

As there are no examples of how to get the algorithm running in "real time", I was wondering if anyone could help with how to structure the data and feed it to the algorithm correctly!

Thank you for any help; I can share any additional information that would be useful.
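
For reference, this is roughly how the feed loop is structured at the moment; the types and the `feedImu`/`feedImage` calls below are placeholders for our own wrapper, not the actual LARVIO interface:

```cpp
#include <deque>
#include <opencv2/core.hpp>

// Placeholder type; not a LARVIO struct.
struct ImuSample { double t, ax, ay, az, gx, gy, gz; };

std::deque<ImuSample> imu_buffer;   // filled continuously from the IMU

void onImu(const ImuSample& s) {
  imu_buffer.push_back(s);
}

void onImage(double stamp, const cv::Mat& img) {
  // Hand the buffered IMU samples and the new frame to the estimator.
  // feedImu(imu_buffer);       // placeholder call into our wrapper
  // feedImage(stamp, img);     // placeholder call into our wrapper
  imu_buffer.clear();
}
```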

itaouil commented 3 years ago

Hi @MatthewFehl365,

I had a similar issue when trying to run the LARVIO framework on an NVIDIA Jetson XS as well as on my PC, and it turned out that after the whole frontend process (RANSAC, descriptors, etc.) there were not enough features left for the backend to process.

I am not sure how many features you are using, but I would suggest allowing a higher number of detectable features (e.g. 200).

MatthewFehl365 commented 3 years ago

We are currently using parameters based on the EuRoC configuration provided in the repo, so the max feature count is set to 300. Based on the image viewer it seems like it is detecting a large number of features, but they tend to flash in and out since they are not being tracked across many frames.

How did you go about assigning the correct number of IMU measurements per frame? We just set it to gather a fixed 10 IMU measurements for each frame.
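
For what it's worth, we have been wondering whether we should instead take every buffered sample stamped at or before the frame, rather than a fixed count, something along the lines of this sketch (`ImuSample` is just a placeholder struct for our own readings):

```cpp
#include <deque>
#include <vector>

// Placeholder for a single IMU reading; not a LARVIO type.
struct ImuSample { double t, ax, ay, az, gx, gy, gz; };

// Pull every IMU sample stamped at or before the frame, instead of a fixed 10.
std::vector<ImuSample> takeImuUpTo(std::deque<ImuSample>& buffer, double frame_stamp) {
  std::vector<ImuSample> batch;
  while (!buffer.empty() && buffer.front().t <= frame_stamp) {
    batch.push_back(buffer.front());
    buffer.pop_front();
  }
  return batch;
}
```

Is that closer to what you did, or did you also keep a fixed number per frame?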

MatthewFehl365 commented 3 years ago

I've narrowed it down to something in the feature tracker. I went through and uncommented the feature-tracking debug statements, and it is correctly identifying features in the first received frame; then, when it tries to propagate them to the next frame, it loses them and resets the estimate.

Not sure how to continue, any insight?

itaouil commented 3 years ago

Hi,

I have not touched the IMU buffer part. I left it as it was in the original code.

What do you mean by it losing the features and resetting the estimate? Do you maybe have a log?

Can you also check at what frequency the odometry topic is published? I know that for EuRoC the images are received at 30 Hz and the IMU at 200 Hz, so maybe check whether the topic is published at around 30 Hz.

So if frames are being skipped due to resource constraints, LARVIO may not be able to track recurring features, hence your error.
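
A quick way to check from the command line is `rostopic hz` on the image and odometry topics. In code it is just a counter in a callback, something like the sketch below; note the topic name here is only a guess, so point it at whatever the wrapper actually publishes:

```cpp
#include <ros/ros.h>
#include <nav_msgs/Odometry.h>

// Prints the average rate of an odometry topic roughly once per second.
int count = 0;
ros::Time window_start;

void odomCallback(const nav_msgs::Odometry::ConstPtr&) {
  if (count == 0) window_start = ros::Time::now();
  ++count;
  double dt = (ros::Time::now() - window_start).toSec();
  if (dt >= 1.0) {
    ROS_INFO("odometry rate: %.1f Hz", count / dt);
    count = 0;
  }
}

int main(int argc, char** argv) {
  ros::init(argc, argv, "rate_check");
  ros::NodeHandle nh;
  // "/larvio/odom" is a guess; substitute the real topic name.
  ros::Subscriber sub = nh.subscribe("/larvio/odom", 10, odomCallback);
  ros::spin();
  return 0;
}
```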

MatthewFehl365 commented 3 years ago

So it turns out I kept redefining the imgPtr, so it wasn't properly tracking the features. It now seems to be tracking features correctly and the IMU buffer is filling correctly. I am using a Raspberry Pi camera and the BNO055 IMU. I manually fill the buffer with 10 IMU measurements and then grab an image to be used by the algorithm. It appears now that something is not correct with the pose estimate.
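
In case anyone else hits this, the essence of the fix was keeping the previous and current images as separate buffers so the tracker always sees two distinct frames, roughly like this plain OpenCV sketch (not the LARVIO internals):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

cv::Mat prev_img, curr_img;
std::vector<cv::Point2f> prev_pts, curr_pts;

void onFrame(const cv::Mat& raw) {
  curr_img = raw.clone();  // take our own copy of the new frame
  if (!prev_img.empty() && !prev_pts.empty()) {
    // Track last frame's points into the current frame.
    std::vector<unsigned char> status;
    std::vector<float> err;
    cv::calcOpticalFlowPyrLK(prev_img, curr_img, prev_pts, curr_pts, status, err);
  }
  // ... (re)detect features on curr_img as needed ...
  // Roll the buffers forward; curr_img owns its own data, so prev_img
  // is not overwritten when the next frame arrives.
  prev_img = curr_img;
  prev_pts = curr_pts;
}
```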

I will keep updated!

MatthewFehl365 commented 3 years ago

UPDATE:

I've been able to get everything running, but the results I'm seeing are far from what I expected. My camera feed is extremely laggy, which leads me to believe that is the cause of the poor state estimation. I'm using a CSI camera with the Jetson Nano; does anyone have any ideas on how to properly capture video (or stills) and timestamp the frames correctly?
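
The direction I'm currently looking at is an `nvarguscamerasrc` GStreamer pipeline into OpenCV, with `drop=true max-buffers=1` on the appsink so only the latest frame is kept, and a timestamp taken right at the grab; the resolution and framerate below are just the values I happen to be trying:

```cpp
#include <opencv2/opencv.hpp>
#include <chrono>
#include <string>

int main() {
  // nvarguscamerasrc is the Jetson CSI camera source; appsink hands BGR frames to OpenCV.
  // drop=true / max-buffers=1 prevents stale frames from piling up (the "lag").
  std::string pipeline =
      "nvarguscamerasrc ! video/x-raw(memory:NVMM), width=640, height=480, "
      "framerate=30/1, format=NV12 ! nvvidconv ! video/x-raw, format=BGRx ! "
      "videoconvert ! video/x-raw, format=BGR ! appsink drop=true max-buffers=1";

  cv::VideoCapture cap(pipeline, cv::CAP_GSTREAMER);
  if (!cap.isOpened()) return 1;

  cv::Mat frame;
  while (cap.read(frame)) {
    // Stamp as close to the grab as possible, using the same clock as the IMU readings.
    auto stamp = std::chrono::steady_clock::now();
    // ... hand (stamp, frame.clone()) to the VIO feed loop ...
    (void)stamp;
  }
  return 0;
}
```

Does that look like a reasonable approach, or is there a better way to get accurate timestamps out of the CSI pipeline?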