Closed ffabi closed 4 years ago
It's not part of the validation set; we evaluate both (validation and test) at the same time, but independently (individual metrics are reported for each).
I set cfg.checkpoint.monitor to 'abs_rel', but I missed cfg.checkpoint.monitor_index = 0, which selects the first split in the list as the one actually used for selecting the best model during training.
Other splits in the list are "just evaluated".
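For reference, those checkpoint settings might look like this in the YAML config (key names taken from this thread; the exact schema may differ from the repo's):

```yaml
checkpoint:
  # metric used to pick the best checkpoint
  monitor: abs_rel
  # index of the validation split the monitored metric is read from
  monitor_index: 0
```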
Am I right?
Yes, it's just an easy way to track the progress of multiple splits during training. If you do the same for the training dataset (i.e., set multiple splits), they are concatenated and become a single dataset.
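A toy illustration of that difference, using plain Python lists as stand-ins for real dataset objects (the actual packnet-sfm data classes differ):

```python
from itertools import chain

# Two hypothetical splits, stand-ins for real dataset objects.
split_a = list(range(100))
split_b = list(range(50))

# Training: multiple splits are concatenated into one dataset.
train_dataset = list(chain(split_a, split_b))
print(len(train_dataset))  # 150

# Validation: splits stay separate; each one is evaluated on its own,
# and metrics are reported per split.
val_splits = [split_a, split_b]
print([len(s) for s in val_splits])  # [100, 50]
```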
That's clear and straightforward! Do you usually stop training runs manually? I see no early stopping implemented.
No early stopping for now, but it should be easy to implement and very useful, so it will probably be added in the near future. Or, if you manage to make it work, PRs are welcome. :)
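A minimal sketch of what such a contribution could look like: a patience-based early-stopping helper that could be called once per epoch with the monitored metric. The class name and hook point are assumptions; nothing like this exists in the repo yet.

```python
class EarlyStopping:
    """Stop training when a monitored metric stops improving.

    Hypothetical helper, not part of packnet-sfm; mode='min' suits
    error metrics such as abs_rel.
    """

    def __init__(self, patience=5, mode="min", min_delta=0.0):
        assert mode in ("min", "max")
        self.patience = patience
        self.mode = mode
        self.min_delta = min_delta
        self.best = None
        self.bad_epochs = 0

    def step(self, value):
        """Record this epoch's metric; return True when training should stop."""
        if self.best is None:
            self.best = value
            return False
        if self.mode == "min":
            improved = value < self.best - self.min_delta
        else:
            improved = value > self.best + self.min_delta
        if improved:
            self.best = value
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience
```

The training loop would then break out when `stopper.step(metric)` returns True, e.g. after two epochs without improvement for `patience=2`.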
I am going to explore the codebase first, then might make my first small contribution to the project. Thank you for your time!
Why is the test set part of the validation set?
https://github.com/TRI-ML/packnet-sfm/blob/master/configs/train_kitti.yaml#L37
It may be just an incorrect sample config file.
Using the test set for validation would bias the reported performance upward, since the model checkpoint would be selected on the same data it is later evaluated on. Would this performance gain be noticeable?
Thanks, Fabian