TRI-ML / packnet-sfm

TRI-ML Monocular Depth Estimation Repository
https://tri-ml.github.io/packnet-sfm/
MIT License

A question regarding the number of images used for training and evaluation #84

Closed hdjang closed 3 years ago

hdjang commented 4 years ago

Hi

I have a simple question regarding the number of images used for training and evaluation.

When I train the model following the "train_kitti.yaml" script, I found that the number of images used for training (eigen_zhou_files.txt) and evaluation (eigen_test_files.txt) slightly differs from the number of images listed in each split file. The actual differences are below. Why does this happen?

train: 39810 (eigen_zhou_files.txt) -> 39840 (during training)

test: 697 (eigen_test_files.txt) -> 704 (during evaluation in training)

VitorGuizilini-TRI commented 4 years ago

How many GPUs are you using? That's because some samples are duplicated by the distributed sampler, so that each GPU processes the same number of batches of the same size. During evaluation we account for that, so each sample is only counted once.
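The padding arithmetic can be sketched as follows. This is a simplified illustration, not the repo's code; the per-GPU batch size of 4 for training is an assumption inferred from the numbers above (39810 -> 39840 with 8 GPUs), and evaluation is assumed to pad to a multiple of the number of GPUs only:

```python
import math

def padded_size(num_samples, world_size, batch_size):
    """Smallest multiple of world_size * batch_size that covers the dataset.

    A distributed sampler duplicates samples up to this size so every GPU
    sees the same number of full batches.
    """
    step = world_size * batch_size
    return math.ceil(num_samples / step) * step

# 8 GPUs, assumed per-GPU batch size of 4: 39810 train images -> 39840
print(padded_size(39810, 8, 4))  # 39840
# Evaluation, padded per GPU (batch size 1): 697 test images -> 704
print(padded_size(697, 8, 1))  # 704
```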

https://github.com/TRI-ML/packnet-sfm/blob/2698f1fb27785275ef847f3dbbd550cf8fff1799/packnet_sfm/utils/reduce.py#L76
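Conceptually, the evaluation-time correction amounts to something like the sketch below (my own simplified illustration, not the actual code in `reduce.py`): each prediction gathered from the GPUs carries its dataset index, and only the first occurrence of each index is kept, so padded duplicates don't affect the metrics:

```python
def deduplicate(gathered):
    """Drop duplicated samples introduced by sampler padding.

    gathered: list of (dataset_index, prediction) pairs collected
    from all GPUs. Keeps only the first occurrence of each index.
    """
    seen = set()
    unique = []
    for idx, pred in gathered:
        if idx not in seen:
            seen.add(idx)
            unique.append((idx, pred))
    return unique

# Example: index 0 was duplicated to fill out the last batch.
print(deduplicate([(0, "a"), (1, "b"), (0, "a")]))  # [(0, 'a'), (1, 'b')]
```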

hdjang commented 4 years ago

I'm using 8 GPUs, following the configuration stated in the paper. Now I understand. Thanks for your reply :)