dontLoveBugs / DORN_pytorch

PyTorch implementation of Deep Ordinal Regression Network for Monocular Depth Estimation

Regression layer mismatch on train and test times #30

Open evinpinar opened 3 years ago

evinpinar commented 3 years ago

It seems that during training, the regression prediction tensor is reshaped into [B, C, H, W], whereas at test time only half of the output tensor is kept, resulting in a [B, C/2, H, W] tensor (as in here). Is there a reason for that, or is it a bug? I would like to compute the validation loss, but that is not possible with the current setup. Should it be changed from `ord_prob = F.softmax(x, dim=1)[:, 0, :, :, :]` to `prob = F.log_softmax(x, dim=1).view(N, C, H, W)`?
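For context, a minimal sketch of what a unified forward pass could look like, keeping the log-probabilities needed for a validation loss alongside the test-time outputs. The `OrdinalRegressionLayer` name and the 2 * ord_num channel layout are assumptions based on the description above, not the repository's exact code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OrdinalRegressionLayer(nn.Module):
    # Sketch only: assumes the input has 2 * ord_num channels,
    # i.e. one (<= k, > k) logit pair per ordinal threshold k.
    def forward(self, x):
        N, C, H, W = x.size()
        ord_num = C // 2
        # group the channels into binary pairs: (N, 2, ord_num, H, W)
        x = x.view(N, 2, ord_num, H, W)
        # log-probabilities in the [B, C, H, W] layout used by the training loss
        log_prob = F.log_softmax(x, dim=1).view(N, C, H, W)
        # probability that the label exceeds each threshold k: [B, C/2, H, W]
        ord_prob = F.softmax(x, dim=1)[:, 0, :, :, :]
        # decoded ordinal label, as suggested later in this thread
        ord_label = torch.sum((ord_prob > 0.5), dim=1)
        return log_prob, ord_prob, ord_label
```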

dontLoveBugs commented 3 years ago

yes

WBS-123 commented 2 years ago


Does that mean the code should be changed to:

```python
prob = F.log_softmax(x, dim=1).view(N, C, H, W)
ord_label = torch.sum((prob > 0.5), dim=1)
return prob, ord_label
```

In other words, does that mean there is no need to distinguish between `self.training` and `not self.training`?
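One caveat with thresholding the log-softmax output directly: log-probabilities are never greater than 0.5, so `prob > 0.5` would always be false; the `> 0.5` decoding only makes sense on the softmax probabilities. If the training/eval distinction were dropped along the lines of the sketch above, a quick smoke test (sizes are illustrative assumptions) might look like:

```python
# Reusing the OrdinalRegressionLayer sketch from above; the sizes are made up.
layer = OrdinalRegressionLayer()
x = torch.randn(2, 2 * 68, 45, 60)           # B=2, ord_num=68 thresholds
log_prob, ord_prob, ord_label = layer(x)
print(log_prob.shape)    # torch.Size([2, 136, 45, 60]) -> feed to the ordinal loss
print(ord_prob.shape)    # torch.Size([2, 68, 45, 60])  -> test-time probabilities
print(ord_label.shape)   # torch.Size([2, 45, 60])      -> decoded depth bin indices
```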