Closed mx-pan closed 1 year ago
It depends a bit on what you want to do. Do you need to make a map and then localize it? Do you want poses for a data sequence in post processing? Maybe a few points, if you let me know I can give better advice:
Thank you for your reply! I would like to try using multiple cameras to perform odometry first. It seems that ROVIOLI does not support a pure multi-camera setup without an IMU, according to your answer. Does ROVIOLI support multiple cameras with IMU data, like 6 cameras? BTW, do you know of any other open-source multi-camera SLAM methods?
ROVIOLI, like ROVIO, is just a single camera with an IMU. There's some experimental multicamera support, but from what I've heard, I wouldn't try it as it's not very stable. For stereo, there are multiple options, for example, the two I've listed above (DSO has a stereo version in another repo).
More than 2 cameras, I don't know of anything in particular. I know some people have done setups where they run separate instances of the same estimator on individual cameras (or camera pairs with overlap), and afterward fuse the multiple odometry estimates using a filter (e.g. https://github.com/ethz-asl/ethzasl_msf). You would have the calibrations to do that, and the idea is that, if one camera diverges or drifts more, the rest should still be fine. But it's a bit of a patchwork and not a nice "unified" solution.
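The core idea behind that fusion step can be sketched as follows. This is just an illustrative toy example of inverse-variance weighting, not the actual ethzasl_msf API (msf implements a full EKF over poses); the function name and data layout here are made up for illustration:

```python
# Toy sketch: fuse several independent per-camera odometry position
# estimates by inverse-variance weighting. A drifting camera reports a
# large variance, so it contributes little to the fused estimate.

def fuse_estimates(estimates):
    """estimates: list of (position_xyz, variance) pairs, one per camera.
    Returns (fused_position_xyz, fused_variance)."""
    weights = [1.0 / var for _, var in estimates]
    total_w = sum(weights)
    fused = [
        sum(w * pos[i] for (pos, _), w in zip(estimates, weights)) / total_w
        for i in range(3)
    ]
    # Combining independent estimates: fused variance is 1 / sum(1/var_i).
    return fused, 1.0 / total_w

# Two cameras agree; a third has drifted but carries a high variance,
# so it barely affects the result.
est = [([1.00, 0.0, 0.0], 0.01),
       ([1.02, 0.0, 0.0], 0.01),
       ([3.00, 0.5, 0.0], 1.00)]
pos, var = fuse_estimates(est)
```

In the example above, the fused x stays close to 1.01 despite the outlier, which is exactly the robustness argument made in the answer: one bad camera should not drag down the combined estimate.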
THANKS!
Hi! After reviewing the 2.0 paper, I noticed that maplab 2.0 supports the use of multiple cameras (5 cameras are mentioned in the paper) for mapping and localization. I want to try mapping and localization with multiple cameras, like 4, 5, or more. But I couldn't find how to run the related functionality in the documentation or code. Can you advise me on how to work on it?
In addition, the paper mentions the possibility of not using an IMU for localization and mapping. Can I perform localization and mapping using only multi-camera image data, without an IMU? (I only have multi-camera data in my recordings.)
Thanks!