irfanICMLL / structure_knowledge_distillation

The official code for the paper 'Structured Knowledge Distillation for Semantic Segmentation'. (CVPR 2019 ORAL) and extension to other tasks.
BSD 2-Clause "Simplified" License
708 stars 103 forks

Example training method for Depth Estimation VNL network #47

Closed atrah22 closed 3 years ago

atrah22 commented 4 years ago

Hello,

The code shows how to train for the semantic segmentation task. Is there any demo or code available for the depth estimation task?

BRs, Atul

irfanICMLL commented 4 years ago

I used this code base and added the distillation loss on top of it.

https://github.com/YvanYin/VNL_Monocular_Depth_Prediction

Rui-Zhou-2 commented 2 years ago

Hello, when I apply the pixel distillation to the depth estimation task,

```python
class CriterionPixelWise(nn.Module):
    def __init__(self, ignore_index=255, use_weight=True, reduction=True):
        super(CriterionPixelWise, self).__init__()
        self.ignore_index = ignore_index
        self.criterion = torch.nn.CrossEntropyLoss(ignore_index=ignore_index, reduction=reduction)

    def forward(self, preds_S, preds_T):
        assert preds_S.shape == preds_T.shape, 'the output dim of teacher and student differ'
        N, C, W, H = preds_S.shape
        softmax_pred_T = F.softmax(preds_T.permute(0, 2, 3, 1).contiguous().view(-1, C), dim=1)
        logsoftmax = nn.LogSoftmax(dim=1)
        loss = (torch.sum(-softmax_pred_T * logsoftmax(preds_S.permute(0, 2, 3, 1).contiguous().view(-1, C)))) / W / H
        return loss / N
```

the loss always comes out as zero after the softmax.
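The collapse can be reproduced in isolation: a depth network outputs a single channel (C = 1), so softmax over that channel is identically 1 and log-softmax identically 0, which makes the cross-entropy-style distillation term vanish. A minimal standalone check (not code from the repository, just an illustration of the symptom):

```python
import torch
import torch.nn.functional as F

# Toy student/teacher depth maps in (N, C, H, W) layout with C = 1.
preds_S = torch.randn(2, 1, 4, 4)
preds_T = torch.randn(2, 1, 4, 4)

C = preds_S.shape[1]
# Same flattening as CriterionPixelWise: (N*H*W, C) rows, softmax over C.
p_T = F.softmax(preds_T.permute(0, 2, 3, 1).reshape(-1, C), dim=1)
log_p_S = F.log_softmax(preds_S.permute(0, 2, 3, 1).reshape(-1, C), dim=1)

# With a single channel: softmax(x) = 1 and log_softmax(x) = x - x = 0.
print(p_T.unique())      # tensor([1.])
print(log_p_S.unique())  # tensor([0.])

loss = (-p_T * log_p_S).sum()
print(loss.item())       # 0.0
```

So the zero loss is not a bug in the implementation but a property of softmax over a one-channel output.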


I am confused about how to implement this loss for the depth estimation task. Could you give some advice?

Best regards

> I use this code base, and add the distillation loss on it.
>
> https://github.com/YvanYin/VNL_Monocular_Depth_Prediction
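Since the softmax-based pixel-wise loss degenerates for a one-channel regression output, one possible workaround is to treat the teacher's depth map as a regression target and match it per pixel. The sketch below is hypothetical (the class name and the choice of L1 are assumptions, not the repository's implementation or the loss used in the paper):

```python
import torch
import torch.nn as nn


class CriterionPixelWiseDepth(nn.Module):
    """Illustrative per-pixel distillation loss for depth estimation.

    Depth is a single-channel regression target, so instead of a
    softmax cross-entropy (which collapses to zero when C = 1) this
    penalizes the L1 distance between student and teacher depth maps.
    """

    def __init__(self):
        super().__init__()
        self.criterion = nn.L1Loss()

    def forward(self, preds_S, preds_T):
        assert preds_S.shape == preds_T.shape, \
            'student and teacher outputs must have the same shape'
        # The teacher output acts as a fixed target, so detach it
        # to keep gradients out of the teacher network.
        return self.criterion(preds_S, preds_T.detach())


# Usage with toy (N, 1, H, W) depth maps:
student_depth = torch.rand(2, 1, 4, 4, requires_grad=True)
teacher_depth = torch.rand(2, 1, 4, 4)
loss = CriterionPixelWiseDepth()(student_depth, teacher_depth)
loss.backward()  # gradients flow into the student only
```

An L2 penalty or a scale-invariant depth loss could be substituted for `nn.L1Loss` depending on how the depth network is trained.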