HqWei / Distillation-of-Faster-rcnn

Distillation for Faster R-CNN at the classification level, regression level, feature level, and feature level + mask

stu_feature_adap #2

Open · chumingqian opened this issue 4 years ago

chumingqian commented 4 years ago

@HqWei Hello, Wei: could you explain how `stu_feature_adap = model_adap(stu_feature)` is implemented? Is it a 1 * 1 convolution, and with what padding? I looked at the code for the fine-grained feature imitation distillation paper and am a bit confused; in the original GitHub code (shown in the attached image), the channel count is unchanged before and after the adaptation.

[image: stu_adap]
HqWei commented 3 years ago

Mine is also a single convolutional layer that changes the channel count and the feature-map size; once the student and teacher features have the same shape, the similarity can be computed:

```python
import torch.nn as nn


class Stu_Feature_Adap(nn.Module):
    """Adaptation layer: maps the student feature map to the teacher's
    channel count (and spatial size) so their similarity can be computed."""

    def __init__(self, input_channel=256, output_channel=1024,
                 kernel_size=2, padding=0):
        super(Stu_Feature_Adap, self).__init__()
        # Single conv changes channels 256 -> 1024; with kernel_size=2,
        # stride 1, and padding=0 it also shrinks each spatial dim by 1.
        self.conv1 = nn.Conv2d(input_channel, output_channel,
                               kernel_size=kernel_size, padding=padding)
        self.relu = nn.ReLU()

    def forward(self, x):
        x = self.conv1(x)
        x = self.relu(x)
        return x
```
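For context, here is a minimal usage sketch of how such an adaptation layer is typically applied; it is not from this repository, and the feature shapes and the `imitation_loss` name are illustrative assumptions. It adapts the student feature to the teacher's shape and computes an L2 loss between them:

```python
import torch
import torch.nn.functional as F

# Hypothetical shapes: the student backbone outputs 256 channels, the
# teacher 1024; the spatial sizes are chosen so the 2x2 conv (stride 1,
# no padding) makes the adapted student map match the teacher's.
stu_feature = torch.randn(1, 256, 38, 38)    # student feature map
tea_feature = torch.randn(1, 1024, 37, 37)   # teacher feature map

model_adap = Stu_Feature_Adap(input_channel=256, output_channel=1024,
                              kernel_size=2, padding=0)

stu_feature_adap = model_adap(stu_feature)   # -> (1, 1024, 37, 37)

# L2 (MSE) imitation loss between the adapted student feature and the
# teacher feature; gradients flow back through the adaptation layer.
imitation_loss = F.mse_loss(stu_feature_adap, tea_feature)
imitation_loss.backward()
```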