Open gch opened 2 years ago
Hello, I'm not the author of this paper, but may I ask whether performance improved when training for more epochs? I have the same problem: I cannot reproduce the performance reported in this paper.
Performance kept improving, aside from the oddity that the depth loss went negative (which I still need to debug). But, for example, my AP for Car Easy 3D @ 0.7 IoU was 23.8, while the reported performance for the pretrained model is closer to 26.
There's a general concern with this repo about reproducing the pretrained model reliably. I'm not sure whether it simply takes many retries to get the best performance.
Thank you for your reply. I am also making many attempts to reproduce, and if I find a solution, I will share it.
Setting `cfg.SEED = 1903919922` reproduces the official result.
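For the seed to matter, it has to be propagated to every RNG the training loop touches. A minimal sketch of what that usually looks like (function name and structure are my own, not this repo's; in a PyTorch project you would also call `torch.manual_seed(seed)` and `torch.cuda.manual_seed_all(seed)`, and disable cuDNN benchmarking for full determinism):

```python
import random

import numpy as np

SEED = 1903919922  # the seed reported above

def set_seed(seed: int) -> None:
    """Seed the Python and NumPy RNGs so repeated runs draw identical values."""
    random.seed(seed)
    np.random.seed(seed)

def seeded_samples(seed: int, n: int = 3) -> list:
    """Draw n values from a freshly seeded RNG, to check reproducibility."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

# Two runs with the same seed produce identical draws;
# a different seed produces different ones.
set_seed(SEED)
assert seeded_samples(SEED) == seeded_samples(SEED)
assert seeded_samples(SEED) != seeded_samples(1)
```

Even with all RNGs seeded, some GPU ops are nondeterministic, so a seed narrows run-to-run variance rather than eliminating it.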
To try to reproduce the paper's results (like other reporters, I get worse results from a stock training run than the authors report), I left my training running far past the typical 200 epochs. At around epoch ~325, I found that the `loss_depth` value went negative. I'm assuming this is an error. Have you observed this in practice?
I will spend some time probing into why this is happening and update this ticket as needed.
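One possible explanation worth checking before treating this as a bug: many monocular 3D detectors use a Laplacian aleatoric-uncertainty depth loss, and that form is unbounded below. Whether this repo uses it is an assumption on my part; the sketch below just shows how the `log(sigma)` term can legitimately drive the loss negative once the network becomes confident and accurate:

```python
import math

def uncertainty_depth_loss(pred_depth: float, gt_depth: float, log_sigma: float) -> float:
    """Laplacian uncertainty depth loss: sqrt(2)/sigma * |d - d_hat| + log(sigma).

    The log(sigma) term has no lower bound, so when the predicted
    uncertainty is small (sigma < 1) and the depth error is small,
    the total loss dips below zero. A negative value here is expected
    behavior of the formula, not necessarily a training error.
    """
    sigma = math.exp(log_sigma)
    return math.sqrt(2.0) / sigma * abs(pred_depth - gt_depth) + log_sigma

# Confident (sigma = e^-2 ~ 0.135) and accurate prediction: negative loss.
confident = uncertainty_depth_loss(10.0, 10.05, log_sigma=-2.0)

# Unconfident or inaccurate prediction: positive loss.
sloppy = uncertainty_depth_loss(10.0, 15.0, log_sigma=0.0)
```

If the repo's `loss_depth` matches this shape, a late-training negative value may just mean the depth head has converged, and the thing to debug is the AP gap rather than the sign of the loss.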