facebookresearch / consistent_depth

We estimate dense, flicker-free, geometrically consistent depth from monocular video, for example, hand-held cell phone video.
MIT License
1.61k stars · 236 forks

Application to KITTI Dataset #37

Open cpauling opened 3 years ago

cpauling commented 3 years ago

Hi,

Please could you explain the steps / configuration required for application to the Eigen split of the KITTI dataset and evaluation of the results, as mentioned in the paper?

Thanks.
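Not an answer from the authors, but for context: KITTI Eigen-split evaluations in the monocular-depth literature conventionally report abs rel, sq rel, RMSE, log RMSE, and δ-threshold accuracies over ground-truth depths capped at 80 m, typically after median scaling for scale-ambiguous methods. A sketch of those standard metrics (my sketch, not this repo's evaluation code):

```python
import numpy as np

def eigen_depth_metrics(pred, gt, min_depth=1e-3, max_depth=80.0):
    """Standard Eigen-split depth metrics over valid ground-truth pixels.

    pred, gt: NumPy arrays of predicted / ground-truth depth in meters.
    Note: scale-ambiguous methods usually median-scale first, e.g.
    pred *= np.median(gt) / np.median(pred).
    """
    mask = (gt > min_depth) & (gt < max_depth)   # keep valid GT depths only
    pred, gt = pred[mask], gt[mask]
    pred = np.clip(pred, min_depth, max_depth)   # clamp predictions to the eval range

    thresh = np.maximum(gt / pred, pred / gt)    # per-pixel max ratio
    a1 = (thresh < 1.25).mean()                  # delta < 1.25
    a2 = (thresh < 1.25 ** 2).mean()             # delta < 1.25^2
    a3 = (thresh < 1.25 ** 3).mean()             # delta < 1.25^3

    abs_rel = np.mean(np.abs(gt - pred) / gt)
    sq_rel = np.mean((gt - pred) ** 2 / gt)
    rmse = np.sqrt(np.mean((gt - pred) ** 2))
    rmse_log = np.sqrt(np.mean((np.log(gt) - np.log(pred)) ** 2))
    return dict(abs_rel=abs_rel, sq_rel=sq_rel, rmse=rmse,
                rmse_log=rmse_log, a1=a1, a2=a2, a3=a3)
```

This says nothing about how to feed KITTI sequences through the pipeline itself, only how the reported numbers are usually computed.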

lkosh commented 3 years ago

This is a good question; I'd like to know the answer too. Could you also share the script for testing the algorithm on TUM RGB-D? I'm specifically interested in using the ground-truth poses in the pipeline. Do you apply any transformations to the ground-truth poses before passing them to COLMAP? I've been trying to compute camera poses for the TUM1 dataset with ORB_SLAM2 and pass them to the algorithm following the instructions in the README, but that leads to very poor dense reconstruction, and after fine-tuning the depth maps are just blank.
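Not an answer from the authors, but one common pitfall worth ruling out: TUM ground-truth trajectories store camera-to-world poses with the quaternion ordered qx qy qz qw, while COLMAP's `images.txt` expects world-to-camera poses with the quaternion ordered QW QX QY QZ. A minimal sketch of that conversion, assuming unit quaternions (the function names are mine, not from this repo):

```python
import numpy as np

def quat_to_rot(qx, qy, qz, qw):
    """Unit quaternion (x, y, z, w order, as in TUM trajectory files)
    -> 3x3 rotation matrix."""
    return np.array([
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qz*qw),     2*(qx*qz + qy*qw)],
        [2*(qx*qy + qz*qw),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qx*qw)],
        [2*(qx*qz - qy*qw),     2*(qy*qz + qx*qw),     1 - 2*(qx*qx + qy*qy)],
    ])

def tum_to_colmap(tx, ty, tz, qx, qy, qz, qw):
    """Convert one TUM ground-truth pose (camera-to-world) to COLMAP's
    images.txt convention (world-to-camera, quaternion as QW QX QY QZ)."""
    R_wc = quat_to_rot(qx, qy, qz, qw)           # camera-to-world rotation
    t_wc = np.array([tx, ty, tz], dtype=float)   # camera center in world frame
    R_cw = R_wc.T                                # world-to-camera rotation
    t_cw = -R_cw @ t_wc                          # COLMAP translation
    # the inverse of a unit quaternion is its conjugate; COLMAP orders it w, x, y, z
    return (qw, -qx, -qy, -qz, *t_cw)
```

If the poses are passed to COLMAP without this inversion and reordering, the dense reconstruction will be badly misregistered, which could produce exactly the kind of failure described above. Blank depth maps after fine-tuning would then follow from the broken geometry rather than from the network itself.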