qinenergy / corda

[ICCV 2021] Code for our paper Domain Adaptive Semantic Segmentation with Self-Supervised Depth Estimation
71 stars 12 forks source link

deeplabv2_synthia.py may have an extra indentation before `return` #12

Closed yuheyuan closed 1 year ago

yuheyuan commented 1 year ago

In the forward code, the `return out` statement has an extra level of indentation, which places it inside the loop:

   def forward(self, x):
        out = self.conv2d_list[0](x)
        for i in range(len(self.conv2d_list)-1):
            out += self.conv2d_list[i+1](x)
            return out

This is the code in your repository:

class Classifier_Module(nn.Module):

    def __init__(self, dilation_series, padding_series, num_classes):
        super(Classifier_Module, self).__init__()
        self.conv2d_list = nn.ModuleList()
        for dilation, padding in zip(dilation_series, padding_series):
            self.conv2d_list.append(nn.Conv2d(256, num_classes, kernel_size=3, stride=1, padding=padding, dilation=dilation, bias = True))

        for m in self.conv2d_list:
            m.weight.data.normal_(0, 0.01)

    def forward(self, x):
        out = self.conv2d_list[0](x)
        for i in range(len(self.conv2d_list)-1):
            out += self.conv2d_list[i+1](x)
            return out

I think this forward is buggy, because `conv2d_list` contains four layers; with `return out` indented inside the loop, the function returns after the first iteration, so only the first two layers are applied and the remaining branches are skipped. Dedenting the `return` fixes it:

   def forward(self, x):
        out = self.conv2d_list[0](x)
        for i in range(len(self.conv2d_list)-1):
            out += self.conv2d_list[i+1](x)
        return out
    self._make_pred_layer(Classifier_Module, [6, 12, 18, 24], [6, 12, 18, 24], NUM_OUTPUT[task])

    def _make_pred_layer(self, block, dilation_series, padding_series, num_classes):
        return block(dilation_series, padding_series, num_classes)
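The effect can be checked with a small sketch that uses plain-Python stand-ins for the conv layers (the `CountingLayer` class is hypothetical, just for counting how many branches actually run):

```python
class CountingLayer:
    """Stand-in for nn.Conv2d that records how often it is called."""
    def __init__(self):
        self.calls = 0

    def __call__(self, x):
        self.calls += 1
        return x

def forward_buggy(layers, x):
    # `return` indented inside the loop: exits after the first iteration,
    # so only layers[0] and layers[1] are ever applied.
    out = layers[0](x)
    for i in range(len(layers) - 1):
        out += layers[i + 1](x)
        return out

def forward_fixed(layers, x):
    # `return` after the loop: all branches contribute to the sum.
    out = layers[0](x)
    for i in range(len(layers) - 1):
        out += layers[i + 1](x)
    return out

buggy = [CountingLayer() for _ in range(4)]
fixed = [CountingLayer() for _ in range(4)]
forward_buggy(buggy, 1)
forward_fixed(fixed, 1)
print(sum(l.calls for l in buggy))  # 2 -> only two of four branches used
print(sum(l.calls for l in fixed))  # 4 -> all four branches used
```

So with four dilation branches configured, the buggy version silently discards half of them.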
qinenergy commented 1 year ago

This is a known characteristic inherited from AdaptSegNet and an early version of DeepLabv2. You can find more discussion here: https://github.com/wasidennis/AdaptSegNet/issues/4. Most DA models inherited this and intentionally did not fix it, to keep comparisons fair.

Following common practice, and to allow a fair comparison with previous DA models (AdaptSegNet, DACS, and most other DA methods), we did not fix the DeepLabv2 model here.