Closed: wuyujack closed this issue 5 years ago
Hi, we never "tuned" the batch size; it was decided solely by GPU memory. Also, the best model is selected on a validation set, following Tinghui's practice (https://github.com/tinghuiz/SfMLearner/issues/12). If you train the model too long on the KITTI dataset, there is a high risk of overfitting.
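The selection practice described above (pick the checkpoint with the lowest validation error rather than the last one) can be sketched as follows. This is only an illustration in plain Python, not the actual GeoNet/SfMLearner code; the function name and the error values are hypothetical.

```python
# Illustrative sketch of validation-based checkpoint selection
# (hypothetical helper, not the actual GeoNet code).

def select_best_checkpoint(val_errors):
    """Given {iteration: validation_error}, return the (iteration, error)
    pair with the lowest error, instead of simply taking the last one."""
    best_iter = min(val_errors, key=val_errors.get)
    return best_iter, val_errors[best_iter]

# Hypothetical abs-rel validation errors recorded every 5k iterations:
errors = {330000: 0.160, 335000: 0.155, 340000: 0.158, 345000: 0.162}
best_iter, best_err = select_best_checkpoint(errors)
print(best_iter, best_err)  # here the 335k checkpoint wins, not the last one
```

Note how a later checkpoint (345k) can be worse than an earlier one, which is exactly the overfitting risk mentioned above.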
Hi Zhichao,
Thank you for sharing the code; I am training GeoNet from scratch. One question: did you tune the hyperparameters only for a batch size (denoted bs below) of 4? When I change to bs = 2 or bs = 8 and keep the other parameters the same, the results at the same iteration are worse, so it seems bs = 4 is what you fine-tuned for.
The other thing is that:
I followed the training guide, and the results I got after 345000 iterations with bs = 4 are:
They seem close to what you report in the paper and the best performance you show on GitHub. Did you save the results from every iteration and pick the best one to report in the paper and on GitHub? By default, only the nineteen most recent checkpoints are kept, and older ones are overwritten to save disk space.
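The rolling-checkpoint behavior in question, where only the N most recent checkpoints survive and earlier ones are deleted, works like TensorFlow's `tf.train.Saver(max_to_keep=N)`. Below is a minimal plain-Python sketch of that rotation (a hypothetical helper, not the actual GeoNet saving code; the step values are illustrative).

```python
# Sketch of a rolling checkpoint window, mimicking the effect of
# tf.train.Saver(max_to_keep=N): once the window is full, the oldest
# checkpoint is dropped and can no longer be evaluated afterwards.
# (Hypothetical helper, not the actual GeoNet code.)

from collections import deque

class CheckpointKeeper:
    def __init__(self, max_to_keep):
        self.max_to_keep = max_to_keep
        self.kept = deque()  # iteration numbers of surviving checkpoints

    def save(self, step):
        self.kept.append(step)
        if len(self.kept) > self.max_to_keep:
            self.kept.popleft()  # oldest checkpoint is overwritten/deleted

keeper = CheckpointKeeper(max_to_keep=3)
for step in range(0, 50000, 5000):  # checkpoints every 5k iterations
    keeper.save(step)
print(list(keeper.kept))  # only the three most recent steps survive
```

This is why, unless every checkpoint's validation score is logged as it is produced, an early best-performing model can be lost by the time training finishes.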