Closed: ewrfcas closed this issue 3 years ago.
In the reference code, the input (log_prediction_d) to the GradientLoss is the log depth. In our method, however, we compute the losses in inverse-depth space, and the gradient interval is 1 at each scale.
Thanks for the reply! Here are still some questions.
Should all depths be clipped to be larger than some minimum depth, such as 1e-3? (But 1/1e-3 = 1000, which I think is still a harmful value for training.)
I would appreciate it if you could provide or explain some necessary data preprocessing.
Sorry for the late reply. As mentioned in our paper, we only compute the losses over valid pixels (gt_depth > 0). Note that we compute the losses in inverse-depth space, NOT in log space.
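The reply above suggests masking out invalid pixels rather than clipping depth to a minimum value. A small sketch of that preprocessing, with a hypothetical helper name of my own choosing:

```python
import torch

def to_inverse_depth(gt_depth):
    """Convert ground-truth depth to inverse depth with a validity mask.

    Instead of clamping depth to a minimum (which would create huge
    inverse-depth values like 1/1e-3 = 1000), pixels with
    gt_depth <= 0 are simply marked invalid and excluded from the loss.
    """
    mask = gt_depth > 0
    inv = torch.zeros_like(gt_depth)
    inv[mask] = 1.0 / gt_depth[mask]
    return inv, mask
```

The returned mask is then passed to the loss so that gradients touching invalid pixels contribute nothing.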
Thanks. Since disparity is already in inverse-depth space, should I invert the disparity again for the gradient loss?
No, you don't have to.
Thanks for your kind advice!
Thanks for the good work! I have some questions about the multi-scale scale-invariant gradient matching loss in inverse-depth space. The code in ref[22] seems different from the grad_loss used in this method.
The log does not seem to be used here, and what exactly is the "inverse depth space"? Also, what is the gradient interval? In the code above, the interval is 2 at all scales. Thanks!
Update: this loss does not work well in my re-implementation; it reduces performance.