pixelite1201 / BEDLAM


Questions regarding train/val split #19

Closed twehrbein closed 10 months ago

twehrbein commented 10 months ago

Hi! First, I'd like to thank you for releasing all the code and data :)

I have a few questions regarding the train/val split of BEDLAM. Following fetch_training_data.sh, there are 28 training tar files, including AGORA. From each file, the first 80% is used for training, resulting in 847621 unique training crops; in the code, the remaining 20% is held out for validation. However, the BEDLAM website lists 4 additional validation sequences (https://bedlam.is.tue.mpg.de/imagesgt.html).

- Why are these sequences not used for validation in the published code? Looking at config.py, the 4 validation sequences should be placed under the training images/labels directory. So were the validation sequences actually used for training rather than validation?
- How many unique training and validation crops did you use?
- Wouldn't it be better to use 100% of the training split for training and the 4 validation sequences for validation, instead of 80% of the train&val split for training and the remaining 20% for validation?

It would be great if you could clarify the exact BEDLAM train/val split you used for the experiments.
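For reference, the split I understand from the code works roughly like this (a minimal sketch; `split_crops` and the crop names are illustrative, not actual BEDLAM code):

```python
# Sketch of the per-file 80/20 split as I read it from the code:
# the first 80% of each tar file's crops go to training, the
# remaining 20% are held out for validation.

def split_crops(crop_names, train_frac=0.8):
    """Split an ordered list of crop names into train/val by position."""
    n_train = int(len(crop_names) * train_frac)
    return crop_names[:n_train], crop_names[n_train:]

# Illustrative crop names only.
crops = [f"crop_{i:06d}" for i in range(100)]
train, val = split_crops(crops)
# train holds the first 80 crops, val the last 20
```

Note that this split is positional rather than random, so it depends on the ordering of crops within each tar file.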

Additionally, could you share for how many epochs the models were trained and how you selected the final models?

pixelite1201 commented 10 months ago

Hello, sorry for causing the confusion.

You are correct: we use the first 80% of the BEDLAM training set for training and the remaining 20% for validation. The 4 additional validation sequences on the BEDLAM website are extra and were not used for either training or validation in the published results. We released them as part of the validation set because we can't modify the training data after publishing the results. I will update the website with this information.

Regarding selecting the final model, we use the checkpoint with the best MVE on the 3DPW validation set. The number of epochs was around 50-60 for the best model.
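In other words, model selection here amounts to picking the checkpoint with the lowest validation MVE. A minimal sketch (the epoch numbers and MVE values below are made up for illustration, not results from the paper):

```python
# Hedged sketch of the model-selection rule described above:
# keep the checkpoint whose MVE (mean vertex error, mm) on the
# 3DPW validation set is lowest.

def select_best_checkpoint(mve_per_epoch):
    """Return (epoch, mve) for the checkpoint with the lowest MVE."""
    best_epoch = min(mve_per_epoch, key=mve_per_epoch.get)
    return best_epoch, mve_per_epoch[best_epoch]

# Illustrative numbers only, not actual BEDLAM results.
scores = {50: 84.2, 55: 82.7, 60: 83.1}
epoch, mve = select_best_checkpoint(scores)
```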