[Closed] seb-le closed this issue 1 year ago
Hi, indeed you are right!
Thanks for catching this! If you submit a PR I'd happily merge it.
Thank you for your quick reply!
OK, I see. I will submit a PR to your repo.
I also noticed the same issue in loss_functions.py, shown below: https://github.com/ClementPinard/SfmLearner-Pytorch/blob/master/loss_functions.py
```python
@torch.no_grad()
def compute_depth_errors(gt, pred, crop=True):
    # ( ... )
    valid_pred = current_pred[valid].clamp(1e-3, 80)
    valid_pred = valid_pred * torch.median(valid_gt) / torch.median(valid_pred)
```
Should it also be fixed as below, clamping only after the median scaling?
```python
@torch.no_grad()
def compute_depth_errors(gt, pred, crop=True):
    # ( ... )
    valid_pred = current_pred[valid]
    valid_pred = (valid_pred * torch.median(valid_gt) / torch.median(valid_pred)).clamp(1e-3, 80)
```
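To illustrate why the order matters, here is a minimal NumPy sketch (the values are hypothetical, not from the repo): clamping before the median scaling shifts the prediction median whenever outliers fall outside the valid depth range, which changes the scale factor, and the scaled result can then leave the valid range again.

```python
import numpy as np

# Hypothetical depth values with outliers outside the valid range [1e-3, 80].
valid_gt   = np.array([1.0, 40.0, 60.0, 70.0, 85.0])
valid_pred = np.array([0.0005, 50.0, 90.0, 100.0, 120.0])

# Original order: clamp first, then median-scale (no re-clamp afterwards).
clamped = np.clip(valid_pred, 1e-3, 80)
buggy = clamped * np.median(valid_gt) / np.median(clamped)

# Proposed order: median-scale first, then clamp.
fixed = np.clip(valid_pred * np.median(valid_gt) / np.median(valid_pred),
                1e-3, 80)

print(np.median(clamped))     # 80.0 -- clamping shifted the median
print(np.median(valid_pred))  # 90.0 -- the unclamped median
print(buggy.min())            # ends up below the 1e-3 floor after scaling
print(fixed.min())            # stays at exactly 1e-3
```

Note how in the original order the final values are no longer guaranteed to lie in `[1e-3, 80]`, while the proposed order both uses the true prediction median and enforces the range on the final result.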
Thank you!
Yes, the problem is there as well. I will validate this change in your PR. :)
Hi! Thank you for your nice implementation.
I have a question about clipping the depth value in test_disp.py. https://github.com/ClementPinard/SfmLearner-Pytorch/blob/master/test_disp.py
Currently, the predicted depth is clipped to the min/max depth values before the scale factor is applied.
But shouldn't the clipping be done after applying the scale factor?
In particular, the sq_rel and rms metrics are sensitive to this issue.
Thank you.
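To see how sensitive sq_rel and rms are to the clipping order, here is a small NumPy sketch with hypothetical values (not from the actual KITTI evaluation); the metric formulas follow the standard depth-evaluation definitions.

```python
import numpy as np

# Hypothetical ground-truth and predicted depths with out-of-range outliers.
gt   = np.array([1.0, 40.0, 60.0, 70.0, 85.0])
pred = np.array([0.0005, 50.0, 90.0, 100.0, 120.0])

def sq_rel_and_rmse(p):
    # Standard depth metrics: squared relative error and root-mean-square error.
    sq_rel = np.mean((gt - p) ** 2 / gt)
    rmse = np.sqrt(np.mean((gt - p) ** 2))
    return sq_rel, rmse

# Order reported in the issue: clip first, then apply the median scale factor.
p_before = np.clip(pred, 1e-3, 80)
p_before = p_before * np.median(gt) / np.median(p_before)

# Proposed order: apply the scale factor first, then clip.
p_after = np.clip(pred * np.median(gt) / np.median(pred), 1e-3, 80)

print(sq_rel_and_rmse(p_before))
print(sq_rel_and_rmse(p_after))
```

With these numbers both metrics come out noticeably larger under clip-then-scale, because the clipped median distorts the scale factor applied to every prediction.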