GASDA

This is the PyTorch implementation for our CVPR'19 paper:

S. Zhao, H. Fu, M. Gong and D. Tao. Geometry-Aware Symmetric Domain Adaptation for Monocular Depth Estimation. [PAPER] [POSTER]

Framework

Environment

  1. Python 3.6
  2. PyTorch 0.4.1
  3. CUDA 9.0
  4. Ubuntu 16.04
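
A minimal setup sketch for the environment above, assuming a conda environment (the environment name and the torchvision/Pillow versions are assumptions, not pinned by the repository):

conda create -n gasda python=3.6
conda activate gasda
pip install torch==0.4.1 torchvision==0.2.1 pillow numpy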

Datasets

KITTI

vKITTI

Prepare the two datasets according to the datalists (the *.txt files in datasets/); the expected directory layout is shown below.

datasets
  |----kitti
         |----2011_09_26
         |----2011_09_28
         |----......
  |----vkitti
         |----rgb
               |----0006
               |----......
         |----depth
               |----0006
               |----......
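
As a quick sanity check, the expected folders can be listed from the repository root (a minimal sketch; the paths follow the tree above):

ls datasets/kitti         # raw KITTI drive folders, e.g., 2011_09_26
ls datasets/vkitti/rgb    # vKITTI RGB scene folders, e.g., 0006
ls datasets/vkitti/depth  # matching vKITTI depth folders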

Training (Tesla V100, 16GB)
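
Purely as a hypothetical sketch of how training might be launched, assuming train.py follows the same option style as the test command below (the script name and all flags are assumptions, not the authors' documented usage):

python train.py --model gasda --gpu_ids 0 --batchSize 1 --loadSize 192 640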

Test

Download the pre-trained MODELS, copy them to GASDA/checkpoints/vkitti2kittigasda/, and rename them with the prefix 1_ (e.g., 1_net_D_Src.pth), as in the sketch below.
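
A minimal renaming sketch, assuming the downloaded checkpoints follow a net_*.pth naming (the original file names are an assumption, not confirmed by the repository):

cd GASDA/checkpoints/vkitti2kittigasda/
for f in net_*.pth; do mv "$f" "1_$f"; done   # e.g., net_D_Src.pth -> 1_net_D_Src.pth

Then run the test: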

python test.py --test_datafile 'test.txt' --which_epoch 1 --model gasda --gpu_ids 0 --batchSize 1 --loadSize 192 640

Citation

If you use this code for your research, please cite our paper.

@inproceedings{zhao2019geometry,
  title={Geometry-Aware Symmetric Domain Adaptation for Monocular Depth Estimation},
  author={Zhao, Shanshan and Fu, Huan and Gong, Mingming and Tao, Dacheng},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={9788--9798},
  year={2019}
}

Acknowledgments

Code is inspired by T^2Net and CycleGAN.

Contact

Shanshan Zhao: szha4333@uni.sydney.edu.au or sshan.zhao00@gmail.com