Closed by davideCremona 3 years ago
Hi @davideCremona, it's a bit tedious to check the performance during training because you need to create a new codebook every time. Nevertheless, I did this experiment with an old version of the code; you can find it in Figure 9b of our ECCV paper: https://openaccess.thecvf.com/content_ECCV_2018/papers/Martin_Sundermeyer_Implicit_3D_Orientation_ECCV_2018_paper.pdf
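For context, here is a minimal sketch of why this evaluation is costly: each validation pass needs a fresh codebook built from the current encoder, followed by a nearest-neighbour lookup and a geodesic rotation error. The helpers `encode` and `render_view`, and the set of reference rotations, are assumptions for illustration, not names from this repository.

```python
# Minimal sketch (not the repository's code) of codebook-based validation:
# every evaluation rebuilds the codebook from the current encoder weights.
import numpy as np

def build_codebook(encode, rotations, render_view):
    """Encode one synthetic rendering per reference rotation (the expensive step)."""
    z = np.stack([encode(render_view(R)) for R in rotations])   # (N, D) latent codes
    return z / np.linalg.norm(z, axis=1, keepdims=True)         # unit-normalize rows

def predict_rotation(encode, img, codebook, rotations):
    """Nearest-neighbour lookup by cosine similarity in latent space."""
    q = encode(img)
    q = q / np.linalg.norm(q)
    return rotations[int(np.argmax(codebook @ q))]

def geodesic_error_deg(R_pred, R_gt):
    """Rotational error in degrees between two 3x3 rotation matrices."""
    cos = np.clip((np.trace(R_pred.T @ R_gt) - 1.0) / 2.0, -1.0, 1.0)
    return float(np.degrees(np.arccos(cos)))
```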
Yes, if one wants to validate over the rotational error, that is the case. But have you validated / selected the best model by checking the loss function (reconstruction loss, or L2)?
No, I have always validated on the downstream task performance, i.e. the rotational error, because that is what we care about in the end. Sure, we could validate the reconstruction loss on real images much more easily during training; it could be a nice measure of the sim2real gap. On the other hand, we don't know how well it correlates with orientation estimation.
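For illustration, a hedged sketch of the cheaper alternative mentioned above: tracking the L2 reconstruction loss on a small set of real validation crops during training. `encode_decode` and `real_val_batch` are hypothetical names, not part of this codebase.

```python
# Sketch: mean L2 reconstruction error on real (non-synthetic) validation crops.
# Cheaper than rebuilding a codebook, but its correlation with the final
# orientation error is unknown, as discussed above.
import numpy as np

def reconstruction_l2(encode_decode, real_val_batch):
    """Mean per-image L2 reconstruction error over a batch of real crops."""
    recon = encode_decode(real_val_batch)                    # (B, H, W, C) in [0, 1]
    per_image = np.sum((recon - real_val_batch) ** 2, axis=(1, 2, 3))
    return float(np.mean(per_image))
```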
It should not be hard to add. I would be happy to accept a PR :)
Hi, sorry I have not replied sooner; I was quite busy in the meantime.
I've conducted some experiments in which I modified the dataset.py script to use an offline-rendered dataset (so that I could reuse the same validation set across experiments). Using a validation set to save the "best" checkpoint does not show any significant difference compared to simply using the last checkpoint. On the other hand, in my experiments I have observed that training is not finished even after 50,000 iterations, and there are no signs of overfitting.
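A rough sketch of the checkpoint-selection loop I mean (my paraphrase under assumptions, not the actual dataset.py changes); `train_step`, `validation_loss`, and `save_checkpoint` are hypothetical helpers:

```python
# Evaluate on a fixed, offline-rendered validation set every K iterations and
# keep the checkpoint with the lowest validation loss alongside the last one.
best_val = float("inf")
EVAL_EVERY = 1000

for it in range(50000):
    train_step()
    if it % EVAL_EVERY == 0:
        val = validation_loss()          # same offline-rendered val set in every run
        if val < best_val:
            best_val = val
            save_checkpoint(tag="best")  # "best" checkpoint selected by validation
save_checkpoint(tag="last")              # regular last checkpoint for comparison
```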
Hi, I have a question for you: why is there no mechanism to validate the performance on unseen poses during the training process? Have you run such experiments?
Thank you, Davide.