COMO is a real-time monocular odometry and mapping system based on a compact 3D scene representation.
In the base Anaconda environment, run
```bash
source install.sh
```
which creates and activates a new environment, installs the dependencies, and builds the backend.
We provide dataloaders for Replica, TUM, and ScanNet. For example, to run the single-threaded version of our system on a TUM sequence, call
```bash
python como/como_dataset.py --dataset_type=tum --dataset_dir=<path_to>/tum/rgbd_dataset_freiburg2_desk/
```
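For reference, a TUM RGB-D sequence directory contains an `rgb.txt` index mapping timestamps to image files. The sketch below shows how a monocular dataloader can walk that index; the helper `load_tum_rgb` is hypothetical and does not reproduce the actual loader in this repository.

```python
import os
import cv2  # pip install opencv-python

def load_tum_rgb(dataset_dir):
    """Yield (timestamp, image) pairs from a TUM RGB-D sequence.

    Illustrative sketch only; the dataloader shipped with COMO may differ.
    """
    index = os.path.join(dataset_dir, "rgb.txt")
    with open(index) as f:
        for line in f:
            if line.startswith("#"):  # header comments in the TUM format
                continue
            timestamp, rel_path = line.strip().split()
            image = cv2.imread(os.path.join(dataset_dir, rel_path))
            yield float(timestamp), image

for t, img in load_tum_rgb("<path_to>/tum/rgbd_dataset_freiburg2_desk/"):
    print(t, img.shape)
    break
```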
Specify the dataset type with `replica`, `tum`, `scannet`, or `realsense`.
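As a rough sketch of how such a flag typically selects a loader (the stub classes below are placeholders, not the dataset classes used in this repository):

```python
import argparse

# Stub loaders for illustration only; the real dataset classes are
# defined in the COMO codebase and are not reproduced here.
class ReplicaDataset:
    def __init__(self, dataset_dir):
        self.dataset_dir = dataset_dir

class TumDataset(ReplicaDataset):
    pass

class ScanNetDataset(ReplicaDataset):
    pass

class RealSenseStream(ReplicaDataset):
    pass

DATALOADERS = {
    "replica": ReplicaDataset,
    "tum": TumDataset,
    "scannet": ScanNetDataset,
    "realsense": RealSenseStream,
}

parser = argparse.ArgumentParser()
parser.add_argument("--dataset_type", required=True, choices=sorted(DATALOADERS))
parser.add_argument("--dataset_dir", default=None)
args = parser.parse_args()

dataset = DATALOADERS[args.dataset_type](args.dataset_dir)
```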
Tracking and mapping can be configured to run on different devices; please see `config/como.yml`. For example, tracking can be moved to `cpu` while mapping runs on `cuda:0`.
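As a hedged illustration of what such a device split amounts to in code (the keys `tracking_device` and `mapping_device` below are hypothetical and may not match the actual field names in `config/como.yml`):

```python
import yaml   # pip install pyyaml
import torch

with open("config/como.yml") as f:
    cfg = yaml.safe_load(f)

# Hypothetical keys; check config/como.yml for the real field names.
tracking_device = torch.device(cfg.get("tracking_device", "cpu"))
mapping_device = torch.device(cfg.get("mapping_device", "cuda:0"))

# Tensors owned by each component are then allocated on its device, e.g.:
pose = torch.eye(4, device=tracking_device)
depth_map = torch.zeros(480, 640, device=mapping_device)
```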
We provide a dataloader for using the RGB stream from a RealSense camera. Plug in the camera and run our multiprocessing version:
```bash
python como/como_demo.py --dataset_type=realsense
```
To initialize the system, it is usually best to provide a small translational motion until the geometry shows up in the GUI.
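For reference, grabbing RGB frames from a RealSense camera with the official `pyrealsense2` bindings looks roughly like the sketch below. The demo above wraps this kind of loop inside its own dataloader, so this is only an illustration.

```python
import numpy as np
import pyrealsense2 as rs  # pip install pyrealsense2

# Configure the color stream only; COMO is monocular, so no depth is needed.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

try:
    while True:
        frames = pipeline.wait_for_frames()
        color_frame = frames.get_color_frame()
        if not color_frame:
            continue
        image = np.asanyarray(color_frame.get_data())  # HxWx3 uint8 BGR
        # ... hand the image to the tracking front-end ...
except KeyboardInterrupt:
    pipeline.stop()
```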
We leverage the depth covariance function from DepthCov.
We would also like to thank the authors of the open-source repositories we build on.
If you find this code or work useful in your own research, please consider citing the following:
```bibtex
@inproceedings{dexheimer2024como,
  title={{COMO}: Compact Mapping and Odometry},
  author={Dexheimer, Eric and Davison, Andrew J.},
  booktitle={Proceedings of the European Conference on Computer Vision ({ECCV})},
  year={2024}
}

@inproceedings{dexheimer_depthcov_2023,
  title={Learning a Depth Covariance Function},
  author={Dexheimer, Eric and Davison, Andrew J.},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition ({CVPR})},
  year={2023}
}
```