Closed poincarelee closed 2 years ago
Hi, I followed the train/val split from FlowNet2 (https://github.com/lmb-freiburg/flownet2/issues/141) for a fair comparison. You can freely change it if you want. If there are appearance overlaps between the train and val splits, it may not be easy to find a good early-stopping point.
Thanks a lot. It seems that when much more data is used, the validation indices are hard to keep as fixed values because the train/val proportion changes. I was wondering whether these indices need to be consecutive, or more precisely, taken from within a single clip. Would it affect the model's performance if the indices were chosen randomly?
In that case, how about using k-fold cross-validation instead of a fixed validation set? In the end, what matters is finding a stopping point that generalizes well to the test set, not to the validation set.
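The k-fold idea above could be sketched roughly as follows. This is a minimal illustration, not code from the repository; the function name and the clip count are hypothetical, and it simply partitions integer clip indices into k train/val folds.

```python
# Hypothetical sketch: split integer clip indices into k folds,
# yielding (train_indices, val_indices) for each fold.
def k_fold_splits(num_clips, k):
    indices = list(range(num_clips))
    fold_size = num_clips // k
    for fold in range(k):
        start = fold * fold_size
        # The last fold absorbs any remainder.
        end = num_clips if fold == k - 1 else start + fold_size
        val = indices[start:end]
        train = indices[:start] + indices[end:]
        yield train, val

# Example: 23 clips, 5 folds (numbers chosen for illustration only).
splits = list(k_fold_splits(23, 5))
```

Each clip then serves as validation data exactly once, so the early-stopping estimate does not hinge on one arbitrary fixed split.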
Ok, you are right.
Hi, in sintel.py you fixed the VALIDATION_INDICES as follows:
I was wondering whether these values were chosen meaningfully. Do they correspond to five clips? What would happen if I chose the indices randomly?
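If one did want a random validation split, a common precaution is to fix the random seed so the split stays reproducible across runs. A minimal sketch, assuming clips are addressed by integer indices (this is illustrative code, not the actual VALIDATION_INDICES logic in sintel.py):

```python
import random

# Hypothetical helper: sample num_val validation clip indices out of
# num_clips, with a fixed seed so the split is reproducible.
def random_validation_indices(num_clips, num_val, seed=0):
    rng = random.Random(seed)
    return sorted(rng.sample(range(num_clips), num_val))

val_indices = random_validation_indices(23, 5, seed=0)
```

With the same seed, every run selects the same validation clips, which keeps results comparable even though the split itself is random.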