SfM-TTR: Using Structure from Motion for Test-Time Refinement of Single-View Depth Networks
GNU General Public License v3.0

Code for test-time refinement of single-view depth estimation networks using COLMAP sparse reconstructions.

Setup

Install the dependencies required by SfM-TTR (for model-specific dependencies, check the corresponding repositories):

conda install pytorch==1.12 torchvision -c pytorch
conda install -c conda-forge statsmodels matplotlib yacs
conda install tqdm
pip install pytorch-lightning
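As an optional sanity check after installing, a small shell loop (a sketch, not part of SfM-TTR itself) can report which of these packages fails to import in the active environment; the package list simply mirrors the commands above:

```shell
# Optional check: report any of the dependencies above that cannot be
# imported in the active conda environment (prints nothing if all are found).
for pkg in torch torchvision statsmodels matplotlib yacs tqdm pytorch_lightning; do
  python3 -c "import importlib.util, sys; sys.exit(importlib.util.find_spec('${pkg}') is None)" \
    || echo "missing: ${pkg}"
done
```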

This repository ships with nested copies of AdaBins, ManyDepth, CADepth and DIFFNet.

We provide the DIFFNet weights so you can quickly test our method. For the other networks all code is included, but you need to download their weights manually. Once downloaded, place them in SfM-TTR/sfmttr/models/{model_name}/weights/.
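For example, the expected layout can be created as below. This is a sketch: the directory names assume the model folders match the repository names, so check SfM-TTR/sfmttr/models/ for the exact {model_name} spelling, and substitute the real checkpoint file names:

```shell
# Sketch: create the weights folder for each bundled network, then move a
# downloaded checkpoint into place. Directory names are assumptions; verify
# the exact {model_name} spelling under SfM-TTR/sfmttr/models/.
for model_name in AdaBins ManyDepth CADepth DIFFNet; do
  mkdir -p "SfM-TTR/sfmttr/models/${model_name}/weights/"
done
# e.g.: mv ~/Downloads/checkpoint.pth SfM-TTR/sfmttr/models/AdaBins/weights/
```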

Data

To quickly test our method, we include the input images, ground truth and sparse reconstruction of one scene in this repository (SfM-TTR/example_sequence/).

To run and evaluate SfM-TTR with the complete KITTI dataset, please download the KITTI raw data and the KITTI ground truth. You also need to run COLMAP on each sequence to obtain a sparse reconstruction.
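As a sketch, one sequence can be reconstructed with the standard COLMAP command-line interface roughly as below. The image sub-path (left colour camera) and the output layout are assumptions, so adapt them to match the folder you pass as --reconstruction-path:

```shell
# Sketch: sparse COLMAP reconstruction for one KITTI raw sequence.
# Assumes colmap is on PATH; image directory and output layout are
# assumptions and should be adapted to your dataset location.
SEQ=2011_09_26_drive_0002_sync
IMG=./kitti_raw/2011_09_26/${SEQ}/image_02/data
OUT=./colmap_reconstructions/${SEQ}
mkdir -p "${OUT}/sparse"
if command -v colmap >/dev/null 2>&1; then
  colmap feature_extractor --database_path "${OUT}/database.db" --image_path "${IMG}"
  colmap sequential_matcher --database_path "${OUT}/database.db"  # frames form an ordered video
  colmap mapper --database_path "${OUT}/database.db" --image_path "${IMG}" --output_path "${OUT}/sparse"
fi
```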

Running

You can run the provided example of SfM-TTR with:

python3 main.py \
  --kitti-raw-path ./example_sequence/kitti_raw/ \
  --kitti-gt-path ./example_sequence/kitti_gt \
  --reconstruction-path ./example_sequence/colmap_reconstructions/ \
  --sequence 2011_09_26_drive_0002_sync