Closed philleer closed 2 years ago
@philleer Hi, thanks for pointing out the bug in our code. As you mentioned, gt_mean and gt_std should be calculated after filtering the gt values. Also, please delete this line during training: https://github.com/aim-uofa/AdelaiDepth/blob/c5370f150c10fd17761c835fca9b5956c3bff9fe/LeReS/Train/lib/models/ILNR_loss.py#L23. We will double-check the training code soon and retrain our model to ensure its correctness.
Thank you for your reply. I'll try it.
Hi, thanks for the great work.
I am trying to train the code on my machine but ran into the tensor-size runtime error shown in the title above.
It seems that some data are filtered out by the invalid-depth threshold, while gt_mean and gt_std are still computed from the original, unfiltered gt, as shown in ILNR_loss.py.
Has anyone else run into this? Is it a bug, or did I make a mistake in my training setup?
Any suggestions would be appreciated. Thank you in advance.
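For reference, a minimal sketch of the fix being discussed: compute gt_mean and gt_std only over the valid (unfiltered) depth values, so the normalized tensor shapes stay consistent. This is a hypothetical helper using NumPy for illustration, not the actual ILNR_loss.py code; the function name, the `invalid_value` threshold, and the epsilon are assumptions.

```python
import numpy as np

def normalize_valid_depth(gt, invalid_value=-1.0, eps=1e-8):
    """Normalize ground-truth depth using stats from valid pixels only.

    Sketch of the fix discussed in this issue: filter out invalid depth
    values FIRST, then compute gt_mean and gt_std over the remaining
    values, so the statistics match the filtered tensor.
    """
    valid = gt > invalid_value        # mask of valid depth pixels
    gt_valid = gt[valid]              # drop invalid entries before stats
    gt_mean = gt_valid.mean()
    gt_std = gt_valid.std()
    gt_norm = (gt_valid - gt_mean) / (gt_std + eps)
    return gt_norm, gt_mean, gt_std

# Example: the invalid value (-1.0) is excluded from the statistics.
gt = np.array([1.0, 2.0, 3.0, -1.0])
gt_norm, gt_mean, gt_std = normalize_valid_depth(gt)
```

Computing the statistics before filtering (as in the original code) yields a mean/std biased by invalid sentinel values and a shape mismatch between the filtered predictions and the unfiltered normalization, which is consistent with the runtime error reported here.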