catsglasses opened 6 years ago
Hi, do you know how to implement the Upsample operator? I ran into some problems when converting a PyTorch model containing an upsample layer. Thanks a lot.
Hi,
The upsampling operation is difficult to implement here; I am trying to figure it out.
I have written a transfer function in pytorch_to_caffe.py, but the output of the transferred layer in Caffe does not reproduce PyTorch's result. Could you help me find the problem?
```python
def _interpolate(raw, input, size=None, scale_factor=None, mode='nearest', align_corners=None):
    # Run the original PyTorch op first so the trace still produces the real output.
    if mode == 'bilinear':
        x = raw(input, size, scale_factor, mode, align_corners)
    else:
        raise NotImplementedError()
    name = log.add_layer(name='interpolate')
    log.add_blobs([x], name='interpolate_blob')
    # Emulate bilinear upsampling in Caffe with a grouped Deconvolution
    # whose weights are a fixed bilinear kernel.
    layer = caffe_net.Layer_param(name=name, type='Deconvolution',
                                  bottom=[log.blobs(input)], top=[log.blobs(x)])

    def bilinear_weight(shape):
        weight = np.zeros(np.prod(shape), dtype='float32')
        f = np.ceil(shape[3] / 2.)
        c = (2 * f - 1 - f % 2) / (2. * f)
        for i in range(np.prod(shape)):
            x = i % shape[3]
            y = (i // shape[3]) % shape[2]  # integer division; `/` breaks this on Python 3
            weight[i] = (1 - abs(x / f - c)) * (1 - abs(y / f - c))
        return weight.reshape(shape)

    kernel_size = 2 * scale_factor - scale_factor % 2
    stride = scale_factor
    pad = int(np.ceil((scale_factor - 1) / 2))
    channels = x.size(1)
    # One kernel per channel (groups == channels), so no cross-channel mixing.
    weight = bilinear_weight([channels, 1, kernel_size, kernel_size])
    layer.conv_param(channels, kernel_size, stride=stride, pad=pad,
                     bias_term=False, groups=channels)
    layer.add_data(weight)
    log.cnet.add_layer(layer)
    return x
```
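Not the author, but here is a minimal 1-D NumPy sketch of what the Deconvolution layer above computes (same kernel formula, same stride/pad choices), which may help locate the mismatch. Interior output samples match bilinear interpolation, but the border samples do not, because the transposed convolution has no kernel taps falling outside the padded input. All names here (`bilinear_kernel_1d`, `deconv_1d`) are my own illustrations, not part of the repo.

```python
import numpy as np

def bilinear_kernel_1d(scale_factor):
    # 1-D version of bilinear_weight(): same kernel size, f, and c formulas.
    kernel_size = 2 * scale_factor - scale_factor % 2
    f = np.ceil(kernel_size / 2.0)
    c = (2 * f - 1 - f % 2) / (2.0 * f)
    x = np.arange(kernel_size)
    return 1 - np.abs(x / f - c)

def deconv_1d(signal, kernel, stride, pad):
    # Naive transposed convolution: scatter each input sample, scaled by
    # the kernel, at output offset i*stride, then crop `pad` from each side.
    out_len = (len(signal) - 1) * stride - 2 * pad + len(kernel)
    out = np.zeros(out_len + 2 * pad)
    for i, v in enumerate(signal):
        out[i * stride : i * stride + len(kernel)] += v * kernel
    return out[pad : pad + out_len]

scale = 2
k = bilinear_kernel_1d(scale)        # array([0.25, 0.75, 0.75, 0.25])
pad = int(np.ceil((scale - 1) / 2))  # 1
y = deconv_1d(np.array([1.0, 3.0]), k, stride=scale, pad=pad)
print(y)  # [0.75 1.5  2.5  2.25]
```

For the input `[1, 3]`, PyTorch's `F.interpolate(..., mode='bilinear', align_corners=False)` gives `[1.0, 1.5, 2.5, 3.0]`: the two middle values agree with the deconvolution, the two border values do not. So the discrepancy you see may be concentrated at the image borders rather than being a bug in the kernel itself.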
Hi, have you found a solution for converting the upsample operation? I have run into this problem as well.
Great work! I see that your code supports PyTorch 0.3, and your implementation is very elegant. Do you plan to implement the Upsample operator?
Thanks a lot.