urbste / MultiCol-SLAM

This repository contains a multi-fisheye camera SLAM system. The underlying SLAM system is based on ORB-SLAM.

tracking lost #10

Closed zhangpj closed 7 years ago

zhangpj commented 7 years ago

Hi Steffen Urban and your group, I have trouble when I run MultiCol-SLAM. The problem is that when I run it on my own indoor dataset, I get very few Multi-KeyFrames, and as time goes on, tracking is lost. I am using three cameras and I have calibrated the camera parameters. I suspect this is related to the relative placement of the three cameras. So when you run MultiCol-SLAM, what is the angle between the three cameras? Or is there anything else I should pay attention to? Cheers, xpenu

yuyou commented 7 years ago

@xpenu could you please share more information about your setup, particularly how you calibrated your own camera rig? Thanks.

zhangpj commented 7 years ago

@yuyou I calibrated my own cameras as the README.md says in steps 3 and 4. In step 3 I used Improved OcamCalib to calibrate each camera, and in step 4 I used MCPTAM. Then I replaced the parameters in the Examples/Lafida folder with my own camera parameters and changed mult_col_slam_lafida.cpp to use my own dataset.

yuyou commented 7 years ago

@xpenu Thanks. I will check out MCPTAM. BTW, do you have any benchmark comparing this one and MCPTAM, assuming you have both systems built and tested?

antithing commented 7 years ago

Hi all, did you get this working properly? I am having the same issue (tracking lost, no keyframes). I have calibrated using:

https://sites.google.com/site/prclibo/toolbox

then converted the resulting 4x4 matrix into the Cayley parametrization using

cv::Matx<T, 6, 1> hom2cayley(const cv::Matx<T, 4, 4>& M)

in Utils.h.
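For reference, this is roughly what such a conversion does. It is only a sketch of the standard Cayley transform, not the repository's exact implementation of hom2cayley, and it assumes the rotation has no 180-degree component (so R + I is invertible):

```cpp
#include <opencv2/core.hpp>

// Sketch: convert a 4x4 homogeneous transform into a 6x1 vector
// [Cayley rotation; translation], analogous to hom2cayley in Utils.h.
template <typename T>
cv::Matx<T, 6, 1> hom2cayleySketch(const cv::Matx<T, 4, 4>& M)
{
    // Extract rotation R and translation t from the homogeneous matrix.
    cv::Matx<T, 3, 3> R(M(0,0), M(0,1), M(0,2),
                        M(1,0), M(1,1), M(1,2),
                        M(2,0), M(2,1), M(2,2));
    cv::Matx<T, 3, 1> t(M(0,3), M(1,3), M(2,3));

    // Cayley transform: S = (R - I)(R + I)^-1 is skew-symmetric.
    const cv::Matx<T, 3, 3> I = cv::Matx<T, 3, 3>::eye();
    const cv::Matx<T, 3, 3> S = (R - I) * (R + I).inv();

    // Stack the three independent entries of S and the translation.
    return cv::Matx<T, 6, 1>(S(2,1), S(0,2), S(1,0),
                             t(0,0), t(1,0), t(2,0));
}
```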

Using my own images and this calibration data, I run the system. I get very few tracked points, no keyframes, and tracking is lost very quickly.

If anyone has any thoughts/assistance, it would be much appreciated!

urbste commented 7 years ago

Hi and sorry for my late reply.

I am currently traveling and do not work at the university anymore. In addition, I cannot test anything, as I do not have access to a computer.

What you did so far sounds quite ok. Check that the transformation chains are correct, i.e. that the calibrated transformations from MCPTAM are not inverted, and so on.

The initialization might be a problem. The way I implemented it assumes that there is a slight overlap between the FoVs of the cameras. If this is not the case, you might have to think about a different initialization strategy. This is not a trivial problem. You could, for example, try direct multi-camera motion algorithms, e.g. the seventeen-point or the 6-point method implemented in OpenGV. For small initial movements this one might work best: https://github.com/jonathanventura/multi-camera-motion. Actually I tried a couple but did not have time to get them to work properly.
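Roughly, with OpenGV such an initialization could look like the sketch below. This is only an illustration; the variable contents are placeholders and the adapter constructor should be checked against the OpenGV headers:

```cpp
#include <vector>
#include <opengv/types.hpp>
#include <opengv/relative_pose/NoncentralRelativeAdapter.hpp>
#include <opengv/relative_pose/methods.hpp>

// Sketch: estimate the relative motion of a multi-camera rig between two
// frames with OpenGV's non-central 17-point solver.
opengv::transformation_t estimateRigMotion(
    const opengv::bearingVectors_t& bearings1,    // unit rays in frame 1
    const opengv::bearingVectors_t& bearings2,    // matched unit rays in frame 2
    const std::vector<int>& camCorrespondences1,  // which camera saw each ray
    const std::vector<int>& camCorrespondences2,
    const opengv::translations_t& camOffsets,     // camera positions in the rig frame
    const opengv::rotations_t& camRotations)      // camera rotations in the rig frame
{
    opengv::relative_pose::NoncentralRelativeAdapter adapter(
        bearings1, bearings2,
        camCorrespondences1, camCorrespondences2,
        camOffsets, camRotations);

    // Linear 17-point solver for the non-central relative pose (rig motion).
    return opengv::relative_pose::seventeenpt(adapter);
}
```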

Concerning the number of keyframes: have a look at the NeedNewKeyFrame function and maybe change the minimum baseline to the current reference keyframe. In addition, you could try changing the minimum and maximum number of frames that must have passed since the last keyframe insertion.
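To illustrate which knobs I mean, here is an ORB-SLAM-style sketch of such a check. It is not the actual MultiCol-SLAM code, and the parameter names are illustrative only:

```cpp
// Sketch of the frame-count and baseline conditions typically used to decide
// whether to insert a new keyframe in ORB-SLAM-style trackers.
bool NeedNewKeyFrameSketch(long currentFrameId, long lastKeyFrameId,
                           int minFrames, int maxFrames,
                           bool localMapperIdle,
                           double baselineToRefKF, double minBaseline)
{
    // Too many frames have passed since the last keyframe was inserted.
    const bool tooLong = currentFrameId >= lastKeyFrameId + maxFrames;
    // Enough frames have passed and the local mapper can accept a new keyframe.
    const bool longEnough = (currentFrameId >= lastKeyFrameId + minFrames) && localMapperIdle;
    // The rig has moved far enough away from the current reference keyframe.
    const bool movedEnough = baselineToRefKF > minBaseline;

    return (tooLong || longEnough) && movedEnough;
}
```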

And so on. I am afraid I cannot give you a conclusive answer. All of this is research code and far from ready to use out of the box.

Cheers and good luck in improving my work ;-) Steffen

antithing commented 7 years ago

Thanks Steffen! I very much appreciate your time. :)

I have it running with live input now, using a 5-sensor 360-degree camera (Occam Omni 60). The calibration seems to be ok: in the viewer, the cameras are in the right place relative to each other. There is about 15% overlap between the sensors on each side (the cameras are arranged in a 5-pointed star formation).

The issue I am seeing now is that when I start the system, it moves to 'TRACKING' successfully, but there are only blue keypoints in one camera frame, or a decent number in one view and only one or two in another (the camera that has a good amount changes each time). When I rotate the cameras, the system has a hard time finding new keypoints, and tracking is easily lost.

Sorry to bother you again, but can you recommend anything to help me improve this?

Thank you again!

antithing commented 7 years ago

Aha! After checking the point cloud relative to the camera cluster in the viewer, I established that my camera positions were reversed. I swapped the coordinates so that cameras 0-4 went in a clockwise circle instead of anti-clockwise. And... success! Tracking is looking good in all 5 sensors.

Thank you once again for the code, and for your help. :)

urbste commented 7 years ago

Cool!! I hope it helps you to push your research/work!

yuyou commented 7 years ago

@antithing I almost have the same issue as you. Could you please elaborate a bit on what you meant by "swapped the coordinates"? Thanks.

antithing commented 7 years ago

Sure. I mean that the calibration I ran gave the cameras in an anti-clockwise circle, so instead of 0, 1, 2, 3, 4 the coordinates were ordered 0, 4, 3, 2, 1. In the multi-camera calibration file, I simply swapped:

CameraSystem.cam2 for CameraSystem.cam5, and CameraSystem.cam3 for CameraSystem.cam4.

The only issue I have now is that the coordinate values returned from Track are not in world space; I think this is because the approach is based on monocular SLAM, so the scale is arbitrary. I am looking at recovering the scale from a third-party source. Hope this helps you!
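For context, the kind of scaling I mean is a single global factor once one real-world distance is known from another source. A rough sketch, with illustrative names only:

```cpp
#include <vector>
#include <opencv2/core.hpp>

// Sketch: rescale an up-to-scale trajectory to metric units, given one
// distance measured both in SLAM units and in metres.
void rescaleTrajectory(std::vector<cv::Point3d>& positions,
                       double estimatedDistance,   // distance in SLAM units
                       double trueDistanceMeters)  // same distance in metres
{
    const double scale = trueDistanceMeters / estimatedDistance;
    for (cv::Point3d& p : positions)
        p *= scale;  // bring the whole trajectory into metric units
}
```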

yuyou commented 7 years ago

@antithing Thanks. I did not know the order mattered. I think I have to calibrate my setup with the toolbox you used. Currently I use existing extrinsics, i.e. the yaw/roll/pitch and translations of my setup, to build the 4x4 transformation matrix (those rotations are relative to the centre of the rig). I suppose they do not work out of the box and need to be re-calibrated, right?
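For reference, this is roughly how I build the 4x4 from those values. It is only a sketch and assumes a ZYX (yaw-pitch-roll) convention with angles in radians; the conventions of my actual rig data may differ:

```cpp
#include <cmath>
#include <opencv2/core.hpp>

// Sketch: build a 4x4 homogeneous transform from yaw/pitch/roll and a
// translation (e.g. as input to a hom2cayley-style conversion).
cv::Matx44d eulerToHom(double yaw, double pitch, double roll,
                       double tx, double ty, double tz)
{
    const double cy = std::cos(yaw),   sy = std::sin(yaw);
    const double cp = std::cos(pitch), sp = std::sin(pitch);
    const double cr = std::cos(roll),  sr = std::sin(roll);

    // R = Rz(yaw) * Ry(pitch) * Rx(roll)
    const cv::Matx33d R(cy*cp, cy*sp*sr - sy*cr, cy*sp*cr + sy*sr,
                        sy*cp, sy*sp*sr + cy*cr, sy*sp*cr - cy*sr,
                        -sp,   cp*sr,            cp*cr);

    return cv::Matx44d(R(0,0), R(0,1), R(0,2), tx,
                       R(1,0), R(1,1), R(1,2), ty,
                       R(2,0), R(2,1), R(2,2), tz,
                       0.0,    0.0,    0.0,    1.0);
}
```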

lusj commented 7 years ago

I run MultiCol-SLAM on a 2-camera system (a 2-sensor 360-degree camera) and encounter the same problems as you, @yuyou: as time goes on, tracking is lost. I used the same method to calibrate my cameras.

However, I find that when I use data from only one camera (the data provided by @urbste), tracking is fine; when I use my own data, tracking is still easily lost. So when you run MultiCol-SLAM, what is the FoV of the cameras? Should I pay attention to FoV problems? Does the tracking environment matter, i.e. can it be open or must it be closed? Or is there anything else I should pay attention to? Looking forward to your reply, thanks! @urbste @yuyou

ns15417 commented 5 years ago

@antithing Thanks for your answers. I am now running the MultiCol project on my own dataset, but I want to run it live from my cameras, not only on a recorded dataset. Could you tell me how to set up this part? Should I use ROS, or can I achieve it with OpenCV? How did you do it? Thanks!!

Varun-Haris commented 4 years ago

@ns15417 I'm using 3 cameras with the MultiCol-SLAM code. I wrote a ROS driver node which fires up the cameras (30 fps) and publishes the image pointers to their respective topics. I also wrote a counterpart to the MultiCol-SLAM driver code (mult_col_slam_lafida.cpp) which sets up subscribers to these image topics and feeds the images into the SLAM algorithm. Although I'm facing a lot of issues with tracking being lost, there is no lag or loss of frame rate.
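Roughly, the driver node is structured like the sketch below. The topic names and the trackImageSet call are placeholders, not MultiCol-SLAM's actual API; the real entry point from mult_col_slam_lafida.cpp should be plugged in instead:

```cpp
#include <ros/ros.h>
#include <cv_bridge/cv_bridge.h>
#include <message_filters/subscriber.h>
#include <message_filters/time_synchronizer.h>
#include <sensor_msgs/Image.h>
#include <opencv2/core.hpp>
#include <boost/bind.hpp>
#include <vector>

// Placeholder for the SLAM interface; replace with the actual call used in
// the MultiCol-SLAM driver (mult_col_slam_lafida.cpp).
void trackImageSet(const std::vector<cv::Mat>& images, double timestamp);

// One synchronized callback per multi-camera frame: convert the three ROS
// images to cv::Mat and hand the set to the SLAM system.
void imageCallback(const sensor_msgs::ImageConstPtr& im0,
                   const sensor_msgs::ImageConstPtr& im1,
                   const sensor_msgs::ImageConstPtr& im2)
{
    std::vector<cv::Mat> images;
    images.push_back(cv_bridge::toCvShare(im0, "mono8")->image);
    images.push_back(cv_bridge::toCvShare(im1, "mono8")->image);
    images.push_back(cv_bridge::toCvShare(im2, "mono8")->image);
    trackImageSet(images, im0->header.stamp.toSec());
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "multicol_slam_driver");
    ros::NodeHandle nh;

    // Subscribe to the (hypothetical) camera topics and synchronize them by
    // timestamp so each SLAM update sees exactly one image per camera.
    message_filters::Subscriber<sensor_msgs::Image> sub0(nh, "/cam0/image_raw", 1);
    message_filters::Subscriber<sensor_msgs::Image> sub1(nh, "/cam1/image_raw", 1);
    message_filters::Subscriber<sensor_msgs::Image> sub2(nh, "/cam2/image_raw", 1);
    message_filters::TimeSynchronizer<sensor_msgs::Image, sensor_msgs::Image,
                                      sensor_msgs::Image> sync(sub0, sub1, sub2, 10);
    sync.registerCallback(boost::bind(&imageCallback, _1, _2, _3));

    ros::spin();
    return 0;
}
```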