Open saladjay opened 5 years ago
An error occurs when converting `torch.nn.AdaptiveAvgPool2d()`. Multiplication also cannot be converted, for example:

```python
module_input = x
x = self.avg_pool(x)
x = self.fc1(x)
x = self.relu(x)
x = self.fc2(x)
x = self.sigmoid(x)
return module_input * x
```

The last step fails; likewise, the key-value pair cannot be found in `log.blobs`.
The error occurs wherever PyTorch's automatic broadcasting kicks in. Example:

```python
a = torch.randn([1, 3, 5, 5])
b1 = torch.ones([1, 3, 1, 1])
b2 = torch.ones([1, 3, 5, 5])
```

`a * b2` is correctly converted by `_mul` into an Eltwise (elementwise product) layer, but `a * b1` is not; it fails with:

```
PytorchToCaffe-master\pytorch_to_caffe.py in _mul(input, *args)
    506     top_blobs = log.add_blobs([x], name='mul_blob')
    507     layer = caffe_net.Layer_param(name=layer_name, type='Eltwise',
--> 508                                   bottom=[log.blobs(input), log.blobs(args[0])], top=top_blobs)
    509     layer.param.eltwise_param.operation = 0  # product is 1
    510     log.cnet.add_layer(layer)

~\PytorchToCaffe-master\pytorch_to_caffe.py in blobs(self, var)
     89     var = id(var)
     90     if self.debug:
---> 91         print("{}:{} getting".format(var, self._blobs[var]))
     92     try:
     93         return self._blobs[var]

~\PytorchToCaffe-master\pytorch_to_caffe.py in __getitem__(self, key)
     30     def __getitem__(self, key):
     31         #if key in self.data.keys():
---> 32         return self.data[key]
     33         #else:
     34         #    return None

KeyError: 1639445810200
```
That key is not present in the blobs dict.
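A minimal sketch (with hypothetical names, not the converter's actual classes) of why the KeyError happens: the converter keys traced tensors by `id()`, so a tensor that was never produced by a hooked layer, such as a hand-made constant like `b1`, has no entry when `_mul` looks it up.

```python
# Simplified stand-in for the blob registry in pytorch_to_caffe.py.
# Class and method names are illustrative, not the tool's real API.
class BlobLog:
    def __init__(self):
        self._blobs = {}          # id(tensor) -> caffe blob name

    def add(self, tensor_like, name):
        # Called by the layer hooks for every *traced* output tensor.
        self._blobs[id(tensor_like)] = name

    def blobs(self, tensor_like):
        # Raises KeyError if the tensor never passed through a hook.
        return self._blobs[id(tensor_like)]

log = BlobLog()
a = object()    # stands in for a tensor produced by a traced layer
b1 = object()   # stands in for a constant tensor created by hand
log.add(a, "mul_input")

print(log.blobs(a))   # found: registered by a hook
# log.blobs(b1)       # would raise KeyError: b1 was never registered
```

This is the same failure mode as the traceback above: `log.blobs(args[0])` receives a tensor whose `id()` was never added to `_blobs`.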
@saladjay How did you solve the `a * b1` problem?
@xiezheng-cs How do you solve it? Could you explain?
Problem:

```python
a = torch.randn([1, 3, 5, 5])
b1 = torch.ones([1, 3, 1, 1])
b2 = torch.ones([1, 3, 5, 5])
```

`a * b2` can be converted by `_mul` into an Eltwise product layer, but `a * b1` cannot.

Idea: see https://zhuanlan.zhihu.com/p/65459972. The corresponding Python code is:

```python
def forward(self, x):
    b, c, _, _ = x.size()
    y = self.avg_pool(x).view(b, c)
    y = self.fc(y).view(b, c, 1, 1)
    return x * y.expand_as(x)
```

So the idea is: first expand `b1` to the same size as `a`, then do the elementwise product.

Implementation: the corresponding operation is Caffe's Scale layer.
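A small numerical sketch of why the Scale layer is the right target: Caffe's Scale layer with two bottoms multiplies `x` of shape `(N, C, H, W)` by a per-channel factor of shape `(N, C)`, broadcasting over `H` and `W`, which is exactly what `x * y.expand_as(x)` computes in PyTorch. The check below uses NumPy to stay independent of torch.

```python
import numpy as np

N, C, H, W = 1, 3, 5, 5
x = np.random.randn(N, C, H, W)
y = np.random.randn(N, C)            # per-channel factors, as Scale expects

# Caffe Scale semantics (two bottoms, axis=0): broadcast y over H and W.
scale_out = x * y[:, :, None, None]

# PyTorch expand_as semantics: explicitly tile y to x's shape, then multiply.
expand_out = x * np.broadcast_to(y[:, :, None, None], x.shape)

assert np.allclose(scale_out, expand_out)
print("Scale layer and expand_as produce identical results")
```

So instead of emitting an Eltwise layer (which requires equal shapes), the broadcasting multiply can be mapped to a Scale layer whose second bottom is the `(N, C)` vector.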
However, the `expand_as` operation in that code does not seem to be supported for expanding from `(b, c, 1, 1)` to `x.size()`.
@VIRGoMerz I tried writing an `_expand_as` conversion modeled on `_add`, but it still errors out. How should the `bottom` field be set?
@ShockeenLee @174614361 These layers have no weights, so I simply skipped them; after generating the caffemodel and prototxt, I edited the prototxt by hand.
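For reference, one possible shape such a hand edit could take (all layer and blob names below are illustrative, not ones the converter emits): flatten the `(N, C, 1, 1)` attention tensor to `(N, C)` with a Reshape layer, then feed it as the second bottom of a Scale layer, which broadcasts it over `H` and `W`.

```
# Hypothetical prototxt fragment; blob names must be replaced with the
# ones actually present in the generated prototxt.
layer {
  name: "se_reshape"
  type: "Reshape"
  bottom: "sigmoid_blob"            # y: (N, C, 1, 1)
  top: "se_vector"                  # y: (N, C)
  reshape_param { shape { dim: 0 dim: -1 } }   # keep N, flatten the rest
}
layer {
  name: "se_scale"
  type: "Scale"
  bottom: "feature_blob"            # x: (N, C, H, W)
  bottom: "se_vector"               # per-channel factors
  top: "se_scale_out"
  scale_param { axis: 0 }           # match y against x starting at axis 0
}
```

The Reshape is needed because Caffe's Scale layer matches the second bottom's axes against a contiguous range of the first bottom's axes, so a trailing `(1, 1)` would fail the shape check.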
It gets stuck at the `out = model.forward(inputs)` step:

```
layer0.conv1
conv14 was added to layers
2925302473856:conv_blob14 was added to blobs
Add blob conv_blob14 : torch.Size([1, 64, 75, 75])
2925302216384:blob3 getting
2925302473856:conv_blob14 getting
layer0.bn1
2925302473856:conv_blob14 getting
batch_norm11 was added to layers
2925302475944:batch_norm_blob11 was added to blobs
Add blob batch_norm_blob11 : torch.Size([1, 64, 75, 75])
bn_scale11 was added to layers
layer0.relu1
2925302475944:batch_norm_blob11 getting
relu7 was added to layers
2925302475944:relu_blob7 was added to blobs
Add blob relu_blob7 : torch.Size([1, 64, 75, 75])
2925302475944:relu_blob7 getting
layer0.pool
max_pool3 was added to layers
2925302479320:max_pool_blob3 was added to blobs
Add blob max_pool_blob3 : torch.Size([1, 64, 37, 37])
2925302475944:relu_blob7 getting
layer1.0.conv1
conv15 was added to layers
2925302509640:conv_blob15 was added to blobs
Add blob conv_blob15 : torch.Size([1, 128, 37, 37])
2925302479320:max_pool_blob3 getting
2925302509640:conv_blob15 getting
layer1.0.bn1
2925302509640:conv_blob15 getting
batch_norm12 was added to layers
2925302197992:batch_norm_blob12 was added to blobs
Add blob batch_norm_blob12 : torch.Size([1, 128, 37, 37])
bn_scale12 was added to layers
layer1.0.relu
2925302197992:batch_norm_blob12 getting
relu8 was added to layers
2925302197992:relu_blob8 was added to blobs
Add blob relu_blob8 : torch.Size([1, 128, 37, 37])
2925302197992:relu_blob8 getting
layer1.0.conv2
conv16 was added to layers
2925302511584:conv_blob16 was added to blobs
Add blob conv_blob16 : torch.Size([1, 128, 37, 37])
2925302197992:relu_blob8 getting
2925302511584:conv_blob16 getting
layer1.0.bn2
2925302511584:conv_blob16 getting
batch_norm13 was added to layers
2925302513456:batch_norm_blob13 was added to blobs
Add blob batch_norm_blob13 : torch.Size([1, 128, 37, 37])
bn_scale13 was added to layers
layer1.0.relu
2925302513456:batch_norm_blob13 getting
relu9 was added to layers
2925302513456:relu_blob9 was added to blobs
Add blob relu_blob9 : torch.Size([1, 128, 37, 37])
2925302513456:relu_blob9 getting
layer1.0.conv3
conv17 was added to layers
2925588200688:conv_blob17 was added to blobs
Add blob conv_blob17 : torch.Size([1, 256, 37, 37])
2925302513456:relu_blob9 getting
2925588200688:conv_blob17 getting
layer1.0.bn3
2925588200688:conv_blob17 getting
batch_norm14 was added to layers
2925588226840:batch_norm_blob14 was added to blobs
Add blob batch_norm_blob14 : torch.Size([1, 256, 37, 37])
bn_scale14 was added to layers
layer1.0.downsample.0
conv18 was added to layers
2925588226912:conv_blob18 was added to blobs
Add blob conv_blob18 : torch.Size([1, 256, 37, 37])
2925302479320:max_pool_blob3 getting
2925588226912:conv_blob18 getting
layer1.0.downsample.1
2925588226912:conv_blob18 getting
batch_norm15 was added to layers
2925302510648:batch_norm_blob15 was added to blobs
Add blob batch_norm_blob15 : torch.Size([1, 256, 37, 37])
bn_scale15 was added to layers
layer1.0.se_module.fc1
conv19 was added to layers
2925302217248:conv_blob19 was added to blobs
Add blob conv_blob19 : torch.Size([1, 16, 1, 1])

KeyError                                  Traceback (most recent call last)
```