TRI-ML / packnet-sfm

TRI-ML Monocular Depth Estimation Repository
https://tri-ml.github.io/packnet-sfm/
MIT License

How to split the Cityscapes for pretraining? #17

Closed · UltronAI closed this issue 3 years ago

UltronAI commented 4 years ago

Is there a .txt file containing the training files, like the eigen_zhou_files you provide in the README?

I'd appreciate it if you could share the dataset file for Cityscapes (CS) and its training file list. Thanks!

VitorGuizilini-TRI commented 4 years ago

I can look into that, will keep you informed!

UltronAI commented 4 years ago

@VitorGuizilini-TRI Thanks for the quick reply! Another question: how did you preprocess the images from Cityscapes, since they have a different aspect ratio from KITTI's images? Did you crop or simply resize? I think this matters for fair comparison in future work along this line. Thanks!

VitorGuizilini-TRI commented 4 years ago

For that particular model, we resize the Cityscapes images to the same resolution used for the KITTI images.
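
For anyone replicating this, a minimal preprocessing sketch is below. It assumes Pillow is installed and uses 640x192 (width x height) only as a placeholder target resolution; the actual resolution should be taken from your KITTI training config, not from this snippet.

```python
# Minimal sketch: resize Cityscapes frames to a KITTI-style training
# resolution. The 640x192 target is an assumption for illustration;
# adjust it to match the resolution in your own config.
from pathlib import Path
from PIL import Image

TARGET_SIZE = (640, 192)  # (width, height) -- assumed, check your config

def resize_cityscapes(src_dir: str, dst_dir: str) -> None:
    """Resize every PNG under src_dir, mirroring the folder layout in dst_dir."""
    src, dst = Path(src_dir), Path(dst_dir)
    for img_path in src.rglob("*.png"):
        out_path = dst / img_path.relative_to(src)
        out_path.parent.mkdir(parents=True, exist_ok=True)
        Image.open(img_path).resize(TARGET_SIZE, Image.LANCZOS).save(out_path)

if __name__ == "__main__":
    resize_cityscapes("cityscapes/leftImg8bit_sequence/train",
                      "cityscapes_resized/train")
```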

Mrils commented 4 years ago

By the way, is the archive you used for training on the Cityscapes dataset leftImg8bit_sequence_trainvaltest.zip?

MingYang-buaa commented 4 years ago

@UltronAI @VitorGuizilini-TRI Have you got the split files for the Cityscapes dataset?

VitorGuizilini-TRI commented 3 years ago

We are using the standard 2975 training images with their corresponding 30-frame context for self-supervised depth learning.
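
A hedged sketch of how such a split could be assembled, assuming the public Cityscapes layout (leftImg8bit/train holds the 2975 annotated frames, leftImg8bit_sequence/train holds their temporal neighbours, filenames of the form city_seq_frame_leftImg8bit.png). The output format and the +/-1 context used here are illustrative assumptions, not the repository's official split format.

```python
# Sketch: start from the standard 2975 annotated training images in
# leftImg8bit/train and, for each, check that its temporal neighbours
# exist in leftImg8bit_sequence/train. The +/-1 context is an
# illustrative assumption, not the repo's confirmed setting.
from pathlib import Path

def context_path(seq_root: Path, name: str, offset: int) -> Path:
    """Shift the frame index in a Cityscapes filename by `offset`."""
    city, seq, frame, suffix = name.split("_", 3)
    return seq_root / city / f"{city}_{seq}_{int(frame) + offset:06d}_{suffix}"

def build_split(image_root: str, sequence_root: str, out_file: str,
                context: int = 1) -> None:
    """Write one relative image path per line for the train split."""
    img_root = Path(image_root) / "train"
    seq_root = Path(sequence_root) / "train"
    lines = []
    for target in sorted(img_root.rglob("*_leftImg8bit.png")):
        neighbours = [context_path(seq_root, target.name, o)
                      for o in (-context, context)]
        if all(p.exists() for p in neighbours):
            lines.append(str(target.relative_to(img_root)))
    Path(out_file).write_text("\n".join(lines) + "\n")

if __name__ == "__main__":
    build_split("cityscapes/leftImg8bit",
                "cityscapes/leftImg8bit_sequence",
                "cs_train_files.txt")
```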

hemangchawla commented 1 year ago

@VitorGuizilini-TRI Just to be on the same page, could you please confirm whether you use triplets for training on Cityscapes, or the full set of 30 images (or rather 29 images, i.e. +/-14 context frames around each target)?