Closed ShifuShen closed 3 years ago
Hi ShifuShen,

I'm not part of the author team, but I think I can answer your question.

The feature is extracted right before the last layer of `Classifier_Module2` in `deeplabv2.py`. If you look at the forward function of `Classifier_Module2`, you can see that `out['feat']` is the input to the layer `nn.Conv2d(256, num_classes, kernel_size=1, padding=0, dilation=1, bias=False)`, so the feature has 256 channels.
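To make the shapes concrete, here is a minimal sketch (not the repo's exact code; the module and layer names are placeholders) of how a `Classifier_Module2`-style head reduces the 2048-channel `layer4` output to a 256-channel feature before the final 1x1 classifier conv, with `out['feat']` taken right before that last layer:

```python
import torch
import torch.nn as nn

class ClassifierSketch(nn.Module):
    """Hypothetical stand-in for Classifier_Module2's channel flow."""

    def __init__(self, in_channels=2048, feat_channels=256, num_classes=19):
        super().__init__()
        # stand-in for the ASPP / bottleneck part: 2048 -> 256 channels
        self.bottleneck = nn.Sequential(
            nn.Conv2d(in_channels, feat_channels, kernel_size=3,
                      padding=1, bias=False),
            nn.ReLU(inplace=True),
        )
        # the last layer quoted above: 256 -> num_classes
        self.head = nn.Conv2d(feat_channels, num_classes, kernel_size=1,
                              padding=0, dilation=1, bias=False)

    def forward(self, x):
        out = {}
        x = self.bottleneck(x)
        out['feat'] = x            # 256-channel feature used for prototypes
        out['out'] = self.head(x)  # per-class logits
        return out

# layer4 output: 2048 channels
x = torch.randn(1, 2048, 33, 33)
out = ClassifierSketch()(x)
print(out['feat'].shape)  # torch.Size([1, 256, 33, 33])
print(out['out'].shape)   # torch.Size([1, 19, 33, 33])
```

So the 256 in `objective_vectors` matches the channel count of the feature just before the classifier, not the backbone's 2048.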
Got it, thanks!
Thanks for your great work and code sharing. Here is one thing I found confusing.
I saw that you define `objective_vectors` in `models/adaptation_modelv2.py` with a size of 256:

```python
self.objective_vectors = torch.zeros([self.class_numbers, 256])
self.objective_vectors_num = torch.zeros([self.class_numbers])
```
but I found that the feature maps in `layer4` have 2048 channels:

```python
self.layer4 = self._make_layer(block, 512, layers[3], stride=1, dilation=4, BatchNorm=BatchNorm)
self.layer5 = self._make_pred_layer(Classifier_Module2, 2048, [6, 12, 18, 24], [6, 12, 18, 24], num_classes)
```
and in your code, you just use the output feature of `layer4` as `out['feat']`:

```python
def forward(self, x, ssl=False, lbl=None):
    _, _, h, w = x.size()
    x = self.conv1(x)
    x = self.bn1(x)
    x = self.relu(x)
    x = self.maxpool(x)
    x = self.layer1(x)
    x = self.layer2(x)
    x = self.layer3(x)
    x = self.layer4(x)
    if self.bn_clr:
        x = self.bn_pretrain(x)
```
So which is the correct prototype size? And if it is 256, how is that feature obtained?