Closed killwy closed 2 years ago
Hi @killwy
Glad you liked our work and thanks a lot for the appreciation.
Since the KITTI dataset doesn't have a validation set and only a limited number of submissions can be made to the KITTI test benchmark, it is advisable to create a validation split to identify the best model and learning parameters.
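Creating such a split is straightforward: shuffle the training indices with a fixed seed and hold a subset out. A minimal sketch (the counts 394/40 come from the discussion above; the function itself is illustrative, not the authors' actual code):

```python
import random

def make_split(num_images=394, num_val=40, seed=0):
    """Hold out `num_val` images from a KITTI training set for validation.

    Illustrative sketch: shuffles image indices with a fixed seed so the
    split is reproducible, then reserves the first `num_val` for validation.
    """
    indices = list(range(num_images))
    random.Random(seed).shuffle(indices)  # fixed seed -> same split every run
    val = sorted(indices[:num_val])
    train = sorted(indices[num_val:])
    return train, val

train_idx, val_idx = make_split()  # 354 train / 40 val indices
```

Saving the two index lists to disk once and reusing them keeps every hyper-parameter run comparable on the same held-out images.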
I do not fully recall, but we used the validation set (of 40 held-out images) to determine the best set of hyper-parameters for training our model (which could be model size, number of disparity samples in different stages, learning/optimization parameters, etc.).
The metrics used to select the best model were simply the standard disparity error metrics (as mentioned in the paper), computed on the validation set.
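For reference, the standard KITTI disparity metrics are end-point error (EPE, the mean absolute disparity error over valid pixels) and D1 (the fraction of pixels whose error exceeds both 3 px and 5% of the ground-truth disparity). A hedged sketch of how one might compute them (the function name and `max_disp` cutoff are my assumptions, not from the thread):

```python
import numpy as np

def disparity_errors(pred, gt, max_disp=192):
    """Compute EPE and D1 over valid ground-truth pixels.

    KITTI ground truth is sparse: a value of 0 marks pixels without a
    measurement, so those are masked out before averaging.
    """
    mask = (gt > 0) & (gt < max_disp)          # valid-pixel mask
    err = np.abs(pred[mask] - gt[mask])        # per-pixel absolute error
    epe = err.mean()                           # end-point error
    d1 = ((err > 3.0) & (err > 0.05 * gt[mask])).mean()  # D1 outlier rate
    return epe, d1
```

Tracking these two numbers on the 40 held-out images is enough to rank hyper-parameter settings without spending test-benchmark submissions.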
Once we identified the best set of parameters (using the validation set), we then trained the chosen model on all the available (train + val) KITTI images before running it on the test benchmark.
I hope this helps!
Best Regards, Shivam
It really helped me a lot. Thank you for your answer!
Hi, it's really nice work! But I still have a question about the training process on KITTI 2015. In your paper, you mentioned that "We reserved out 40 images from the total 394 images for validation", and then you said "For submission to the KITTI test benchmark, we re-trained the model on all the 394 training images for 1040 epochs." So, what is the purpose of the first training run with validation? And how did you choose the model without a validation set? I hope you can answer my questions, as this is very important for my work. Thank you again for your excellent work!