jimaldon opened 5 years ago
Hi @jimaldon and @NikolausDemmel I think the -O2 or -O3 is missing in the cmakelist which probably causes poor performance when running the examples.
@gaoxiang12, that should not be the issue, since the default build type is Release, so unless you specify Debug explicitly, it should be an optimized build.
I think this is about odometry performance, not runtime speed.
@jimaldon From a quick glance at the webpage it looks like the car bonnet is in view. You might have to crop that out, or implement something like a mask. Otherwise, if points on the bonnet are selected, they won't be consistent with the motion of the static scene.
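To illustrate the mask idea: a minimal sketch of rejecting candidate feature points that fall on a static occluder such as the bonnet. The function names and the row-threshold mask are invented for illustration; this is not LDSO's actual code.

```python
# Hypothetical sketch: a binary mask that is True above the bonnet,
# and a filter that drops candidate points landing on masked pixels.

def make_bonnet_mask(width, height, bonnet_top_row):
    """Mask is True where pixels are usable (rows above the bonnet)."""
    return [[row < bonnet_top_row for _ in range(width)]
            for row in range(height)]

def filter_candidates(candidates, mask):
    """Keep only (x, y) candidates that land on usable pixels."""
    return [(x, y) for (x, y) in candidates if mask[y][x]]

mask = make_bonnet_mask(width=8, height=6, bonnet_top_row=4)
pts = [(1, 1), (3, 5), (7, 2)]          # (3, 5) lies on the bonnet rows
print(filter_candidates(pts, mask))     # -> [(1, 1), (7, 2)]
```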
@NikolausDemmel Ah, that could be it - I'll try that!
I tried ORB-SLAM on the dataset without a mask and it seemed to perform okay - it must be robust to static occlusion somehow.
Yes, for ORB-SLAM, I guess RANSAC both during initialization and localization helps in this case compared to DSO.
PS: Would be great to see your results if you manage to get it to work (or not...)
So, in a preliminary attempt to implement a "mask" over the image, I restricted grid construction for corner feature detection to exclude all pixel rows below a threshold containing the car bonnet. It mostly involved changing `FeatureDetector.cc::DetectCorners`.
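The change described above might look roughly like this: when laying out the detection grid, skip any cell that would overlap the bonnet rows. This is only an illustrative sketch under assumed names, not the actual `DetectCorners` code.

```python
# Illustrative sketch: build a feature-detection grid, but stop before
# any cell would cross a row cutoff that contains the bonnet.

def grid_cells(img_h, img_w, cell, bonnet_top_row):
    """Return (row0, col0) origins of grid cells fully above the cutoff."""
    cells = []
    for r in range(0, img_h, cell):
        if r + cell > bonnet_top_row:   # this cell would overlap the bonnet
            break
        for c in range(0, img_w, cell):
            cells.append((r, c))
    return cells

cells = grid_cells(img_h=480, img_w=640, cell=80, bonnet_top_row=400)
print(len(cells))   # 5 rows x 8 cols of cells -> 40
```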
While I can get LDSO to initialize, and the performance is a lot better than before, it still doesn't compare with ORB-SLAM. For one, the process can't cope and quits about midway, and the generated map has a lot of inconsistencies - mostly from not being planar and from overestimating the pitch on mild inclines and declines of the road.
Here's a video of the visualization: https://drive.google.com/open?id=1am3xgN_RBEWQg8QnMUQJOtQ_8GogA7WU
Hi jimaldon,
from the video it seems LDSO is still not running properly.
Another, easier way to remove the effect of the engine cover is to simply crop the image to remove the bottom part; then you don't need to modify any code. Could you try this?
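A minimal sketch of the crop suggestion, assuming row-major images and a pinhole calibration. One convenient property: cropping only the *bottom* rows leaves the intrinsics (fx, fy, cx, cy) unchanged, because the image origin at the top-left corner does not move; only the reported image height changes.

```python
# Sketch: drop the bottom rows of a row-major image (illustrative only).

def crop_bottom(image, keep_rows):
    """Return only the top `keep_rows` rows of a row-major image."""
    return image[:keep_rows]

img = [[r * 10 + c for c in range(4)] for r in range(6)]  # 6x4 toy image
cropped = crop_bottom(img, keep_rows=4)
print(len(cropped), len(cropped[0]))  # -> 4 4
```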
I suggest you do what Rui did to determine if the masking is the issue or something else.
Other things that might lead to bad performance include bad calibration, both geometric and photometric, lack of known exposure times, etc...
For the mask, changing the feature detection is not enough. You also have to ensure there are no observations in target frames that fall on the mask (or hope the outlier detection catches it...). Moreover, if the images are distorted, DSO will do undistortion as a preprocessing step, so you need to ensure you also undistort the mask.
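On the last point: if undistortion is done via a precomputed remap (for each undistorted pixel, the distorted source coordinate), the mask has to go through the same remap, typically with nearest-neighbor lookup so it stays binary. The function and the tiny identity remap below are invented for illustration; they are not DSO's actual undistorter.

```python
# Hedged sketch: apply the same remap used for images to a binary mask.
# map_rc[r][c] gives the (row, col) in the distorted source image;
# anything mapping out of bounds is treated as masked (0).

def remap_nearest(src, map_rc):
    h, w = len(map_rc), len(map_rc[0])
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            sr, sc = map_rc[r][c]
            if 0 <= sr < len(src) and 0 <= sc < len(src[0]):
                out[r][c] = src[sr][sc]
    return out

mask = [[1, 1], [1, 0]]              # 0 = bonnet pixel, distorted image
identity = [[(0, 0), (0, 1)], [(1, 0), (1, 1)]]
print(remap_nearest(mask, identity))  # -> [[1, 1], [1, 0]]
```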
I was trying to open a similar issue, but just found out that you have posted this. XD
The Oxford RR dataset was captured with a global shutter camera, as is the KITTI dataset.
Without resetting at about the 1000-frame mark, it never initializes. When it does initialize, it loses track at random intervals (even when the car is moving forward at constant velocity without rotation) and requires resetting again.
What could explain this behaviour? Is there anything peculiar about this dataset that sets it apart from KITTI? Neither has photometric calibration, and both were recorded with a global shutter camera.
I ran LDSO on the rectified, undistorted images from the stereo/centre Bumblebee camera and used the following calibration:
Here's the radar robotcar dataset: https://dbarnes.github.io/radar-robotcar-dataset/datasets