irolaina / FCRN-DepthPrediction

Deeper Depth Prediction with Fully Convolutional Residual Networks (FCRN)
BSD 2-Clause "Simplified" License

How did you manage the detection of overfitting during training? #59

Closed nicolasrosa closed 6 years ago

nicolasrosa commented 6 years ago

During training, how many images did you use as validation set?

Since I'm still coding the training framework, I'm working with the plain NYU Depth dataset (795 images in total: 639 for training and 159 for validation). Right now, the network takes ~100 ms for a single training step. However, evaluating all 159 validation images would take a considerable amount of time, and all of that would correspond to just one training step, right? I would like to know how you managed this, since you later increased the dataset from 12k to 95k images. How many validation images did you evaluate from that?
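(For reference, the hold-out split described above can be sketched as follows — a minimal, hypothetical example, not the repository's actual data loader; the `split_dataset` name and the fixed seed are assumptions for illustration.)

```python
import random

def split_dataset(indices, val_fraction=0.2, seed=0):
    """Shuffle a list of sample indices and split it into train/validation.

    A fixed seed keeps the split reproducible across runs, so the
    validation set stays the same while hyperparameters change.
    """
    rng = random.Random(seed)
    shuffled = indices[:]
    rng.shuffle(shuffled)
    n_val = round(len(shuffled) * val_fraction)
    return shuffled[n_val:], shuffled[:n_val]

# Roughly the 80/20 split described above, over 795 labeled NYU Depth images.
train_idx, val_idx = split_dataset(list(range(795)), val_fraction=0.2)
print(len(train_idx), len(val_idx))  # 636 636/159 train/val images
```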

chrirupp commented 6 years ago

As usual, we validate per epoch, not per gradient step. Validation always uses just the validation set of NYU Depth, with no augmentations.
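(The per-epoch schedule described here can be sketched as below — a hypothetical outline, not the authors' actual training code; `fit`, `train_step`, and `evaluate` are placeholder names for illustration.)

```python
def fit(model, train_batches, val_images, n_epochs, train_step, evaluate):
    """Train for n_epochs, scoring the fixed (un-augmented) validation
    set once per epoch rather than after every gradient step."""
    history = []
    for epoch in range(n_epochs):
        for batch in train_batches:
            train_step(model, batch)             # one gradient step
        val_error = evaluate(model, val_images)  # full validation pass, once per epoch
        history.append(val_error)
        # A validation error that rises while training error keeps falling
        # is the usual signal of overfitting.
    return history
```

This keeps the validation cost at one full pass per epoch, so a slow evaluation over all 159 images amortizes over the whole epoch instead of stalling every training step.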

nicolasrosa commented 6 years ago

I learned to hold out a percentage of the training set (10%–20%) as the validation set and to use the test set (your validation set, the one with 654 images, right?) only after training is finished. Anyway, thank you!