This repository implements the training and testing of T2Net for "T2Net: Synthetic-to-Realistic Translation for Solving Single-Image Depth Estimation Tasks" by Chuanxia Zheng, Tat-Jen Cham and Jianfei Cai at NTU. A video is available on YouTube. The repository offers a PyTorch implementation of the paper and can be used for both training and testing.
This code was tested with PyTorch 0.4.0, CUDA 8.0, Python 3.6 and Ubuntu 16.04.
pip install visdom dominate
git clone https://github.com/lyndonzheng/Synthetic2Realistic
cd Synthetic2Realistic
The indoor synthetic dataset is rendered from SUNCG and the indoor realistic dataset comes from NYUv2. The outdoor synthetic dataset is vKITTI and the outdoor realistic dataset is KITTI.
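The train/test scripts read plain-text file lists (e.g. the trainA_SYN.txt used in the training command below). A minimal sketch for building such a list, assuming one image path per line (the directory names, the helper make_list and the one-path-per-line format are assumptions, not part of the released code):

import glob, os

def make_list(image_dir, list_path, ext='png'):
    # Collect every image under image_dir and write one absolute path per line.
    paths = sorted(glob.glob(os.path.join(image_dir, '**', '*.' + ext), recursive=True))
    with open(list_path, 'w') as f:
        f.write('\n'.join(paths))

# Hypothetical example: build the synthetic RGB list passed to --img_source_file.
make_list('/dataset/Image2Depth31_KITTI/vkitti_rgb', '/dataset/Image2Depth31_KITTI/trainA_SYN.txt')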
Warning: the input sizes need to be multiples of 64, and the feature GAN model needs to be changed for different scales.
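If your images do not already satisfy this, they can be resized beforehand. A minimal sketch using Pillow, assuming each side is rounded down to the nearest multiple of 64 (the repository's own dataloader may handle resizing differently):

from PIL import Image

def to_multiple_of_64(in_path, out_path):
    img = Image.open(in_path)
    w, h = img.size
    # Round each side down to the nearest multiple of 64.
    new_w, new_h = (w // 64) * 64, (h // 64) * 64
    img.resize((new_w, new_h), Image.BILINEAR).save(out_path)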
python train.py --name Outdoor_nyu_wsupervised --model wsupervised
--img_source_file /dataset/Image2Depth31_KITTI/trainA_SYN.txt
--img_target_file /dataset/Image2Depth31_KITTI/trainA.txt
--lab_source_file /dataset/Image2Depth31_KITTI/trainB_SYN.txt
--lab_target_file /dataset/Image2Depth31_KITTI/trainB.txt
--shuffle --flip --rotation
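Since the dependencies include visdom and the code is based on Pytorch-CycleGAN, training progress is presumably visualised through a visdom server; if so, start it in a separate terminal before launching training:

python -m visdom.server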
python test.py --name Outdoor_nyu_wsupervised --model test
--img_source_file /dataset/Image2Depth31_KITTI/testA_SYN80
--img_target_file /dataset/Image2Depth31_KITTI/testA
python evaluation.py --split eigen --file_path ./datasplit/
--gt_path "your path"/KITTI/raw_data_KITTI/
--predicted_depth_path "your path"/result/KITTI/predicted_depth_vk
--garg_crop
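The metrics reported are assumed to be the standard Eigen-split depth errors (abs rel, sq rel, RMSE, log RMSE and the delta < 1.25^k accuracies). A self-contained sketch of how they are usually computed, not the repository's exact code:

import numpy as np

def compute_depth_errors(gt, pred):
    # gt and pred are 1-D arrays of valid (cropped, clipped) depths in metres.
    thresh = np.maximum(gt / pred, pred / gt)
    a1 = (thresh < 1.25).mean()
    a2 = (thresh < 1.25 ** 2).mean()
    a3 = (thresh < 1.25 ** 3).mean()
    abs_rel = np.mean(np.abs(gt - pred) / gt)
    sq_rel = np.mean(((gt - pred) ** 2) / gt)
    rmse = np.sqrt(np.mean((gt - pred) ** 2))
    rmse_log = np.sqrt(np.mean((np.log(gt) - np.log(pred)) ** 2))
    return abs_rel, sq_rel, rmse, rmse_log, a1, a2, a3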
The pretrained model for the indoor scene (weakly supervised).
The pretrained model for the outdoor scene (weakly supervised).
Note: the original model in the paper was trained on a single GPU; the pretrained model released here is the multi-GPU version.
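To load the multi-GPU checkpoint into a single-GPU model, the 'module.' prefix that nn.DataParallel adds to parameter names usually has to be stripped first. A minimal sketch (the checkpoint filename and the variable net are placeholders):

import torch

state_dict = torch.load('pretrained_multi_gpu.pth', map_location='cpu')
# nn.DataParallel stores parameters as 'module.<name>'; drop the prefix for single-GPU use.
state_dict = {k.replace('module.', '', 1): v for k, v in state_dict.items()}
# net.load_state_dict(state_dict)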
If you use this code for your research, please cite our paper:
@inproceedings{zheng2018t2net,
title={T2Net: Synthetic-to-Realistic Translation for Solving Single-Image Depth Estimation Tasks},
author={Zheng, Chuanxia and Cham, Tat-Jen and Cai, Jianfei},
booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
pages={767--783},
year={2018}
}
The code is inspired by Pytorch-CycleGAN.