isht7 / pytorch-deeplab-resnet

DeepLab resnet v2 model in pytorch
MIT License

Why is loss for multiple scales calculated in a strange way? #30

Closed omkar13 closed 6 years ago

omkar13 commented 6 years ago

I would like to know why you upsampled the output of the 0.75x scale to the same size as the output of scale 1 (the unscaled branch, 1/8 of the input size), while the output of the 0.5x scale is not rescaled at that point. The relevant piece of code is in deeplab_resnet.py in class MS_Deeplab:

def forward(self,x):
    input_size = x.size()[2]
    self.interp1 = nn.UpsamplingBilinear2d(size = (  int(input_size*0.75)+1,  int(input_size*0.75)+1  ))
    self.interp2 = nn.UpsamplingBilinear2d(size = (  int(input_size*0.5)+1,   int(input_size*0.5)+1   ))
    self.interp3 = nn.UpsamplingBilinear2d(size = (  outS(input_size),   outS(input_size)   ))
    out = []
    x2 = self.interp1(x)
    x3 = self.interp2(x)
    out.append(self.Scale(x))   # for original scale
    ####################################################
    out.append(self.interp3(self.Scale(x2)))    # for 0.75x scale
    out.append(self.Scale(x3))  # for 0.5x scale
    ####################################################

    # element-wise max fusion across scales; the 0.5x output is only
    # upsampled to outS(input_size) here, at fusion time
    x2Out_interp = out[1]
    x3Out_interp = self.interp3(out[2])
    temp1 = torch.max(out[0], x2Out_interp)
    out.append(torch.max(temp1, x3Out_interp))
    return out
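For concreteness, here is a minimal sketch of the size bookkeeping for a typical 321x321 DeepLab crop. The `outS` below is a hypothetical stand-in for the repo's actual helper, assuming it approximates the 1/8-resolution backbone output as ceil(i/8); the real helper may differ by an offset.

```python
import math

def outS(i):
    # hypothetical stand-in for the repo's outS helper: approximate
    # spatial size after the 1/8-resolution backbone (assumption)
    return math.ceil(i / 8)

input_size = 321                    # typical DeepLab crop size
s075 = int(input_size * 0.75) + 1   # 241: input to the 0.75x branch
s050 = int(input_size * 0.5) + 1    # 161: input to the 0.5x branch

print(outS(input_size))  # 41: size of out[0], and of out[1] after interp3
print(outS(s075))        # 31: size of the 0.75x branch before interp3
print(outS(s050))        # 21: size of out[2]; only upsampled at fusion time
```

So the asymmetry the question points at is visible in the numbers: the 0.75x output is resized from 31 to 41 immediately, while the 0.5x output is stored at 21 and only brought to 41 when the element-wise max is taken.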

Thank you!

isht7 commented 6 years ago

This is a reimplementation of the original paper mentioned in the README, and this is how the authors did it in their released Caffe code. I am not sure why they chose to do this; please contact the authors of the paper with your query.

omkar13 commented 6 years ago

Thanks for the quick reply.