Xianpeng919 / MonoCon

Learning Auxiliary Monocular Contexts Helps Monocular 3D Object Detection (AAAI'22)

Depth loss goes negative during long training run #13

Open gch opened 2 years ago

gch commented 2 years ago

To try to reproduce the paper results (like other bug reporters, I get worse results from a stock training run than the authors report), I left my training running far past the typical 200 epochs. At around ~325 epochs, the loss_depth value went negative. I'm assuming this is an error. Have you observed this in practice?

I will spend some time probing into why this is happening and update this ticket as needed.
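One possible explanation: if the depth head uses a Laplacian aleatoric-uncertainty loss of the form sqrt(2)/σ · |d_pred − d_gt| + log σ (a common choice in monocular 3D detectors; I have not verified that this repo uses exactly this form), then the log σ term is unbounded below, so a confident prediction with a small depth error can legitimately drive loss_depth negative. A minimal sketch under that assumption (the function name is illustrative, not the repo's actual API):

```python
import torch

def laplacian_uncertainty_depth_loss(depth_pred: torch.Tensor,
                                     depth_gt: torch.Tensor,
                                     log_sigma: torch.Tensor) -> torch.Tensor:
    """Uncertainty-weighted L1 depth loss: sqrt(2)/sigma * |pred - gt| + log(sigma).

    The log(sigma) term is unbounded below, so the loss is not clamped at zero:
    a confident prediction (sigma < 1) with a small depth error gives a negative value.
    """
    sigma = torch.exp(log_sigma)          # predicted aleatoric uncertainty
    return (2.0 ** 0.5 / sigma) * torch.abs(depth_pred - depth_gt) + log_sigma

# A confident, nearly correct prediction pushes the loss below zero.
pred = torch.tensor([20.05])              # predicted depth in metres
gt = torch.tensor([20.00])                # ground-truth depth
log_sigma = torch.tensor([-1.0])          # sigma ~= 0.37
print(laplacian_uncertainty_depth_loss(pred, gt, log_sigma))  # ~ tensor([-0.8078])
```

If that is the formulation in use here, a negative loss_depth late in training may just reflect the network becoming more confident rather than an actual bug.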

2gunsu commented 2 years ago

Hello, I'm not an author of this paper, but may I ask whether the performance improved when training for more epochs? I have the same problem: I cannot reproduce the paper's reported performance.

gch commented 2 years ago

Performance kept improving, aside from the oddity of the depth loss going negative (which I still need to debug). But, for example, my AP for car easy 3D @ 0.7 was 23.8, while the reported performance for the pretrained model is more like 26.

There's definitely a general concern with this repo about the ability to reproduce the pretrained model reliably. I'm not sure whether it simply takes many retries to hit the best performance.

2gunsu commented 2 years ago

Thank you for your reply. I am also making many attempts to reproduce, and if I find a solution, I will share it.

FlyingAnt2018 commented 11 months ago

> Performance kept improving, aside from the oddity of the depth loss going negative (which I still need to debug). But, for example, my AP for car easy 3D @ 0.7 was 23.8, while the reported performance for the pretrained model is more like 26.
>
> There's definitely a general concern with this repo about the ability to reproduce the pretrained model reliably. I'm not sure whether it simply takes many retries to hit the best performance.

Setting `cfg.SEED = 1903919922` reproduces the official result for me.
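For anyone else trying this, a minimal sketch of how such a global seed might be applied before training (the helper name is illustrative and `cfg.SEED` is assumed to be the config field used by this repo; adapt to the actual entry point):

```python
import random

import numpy as np
import torch

def set_global_seed(seed: int, deterministic: bool = True) -> None:
    """Seed Python, NumPy, and PyTorch (CPU + all GPUs) for repeatable runs."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    if deterministic:
        # Deterministic cuDNN kernels trade some speed for run-to-run stability.
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False

# Seed reported above as reproducing the official result.
set_global_seed(1903919922)
```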