xxradon / PytorchToCaffe

Converts a PyTorch model to a Caffe model. Supports PyTorch 0.3, 0.3.1, 0.4, 0.4.1, 1.0, 1.0.1, 1.2, and 1.3. Note that only PyTorch 1.1 has some bugs.
MIT License
783 stars · 224 forks

File "../Caffe/layer_param.py", line 161, in upsample_param upsample_param.upsample_h = size[0] * scale_factor TypeError: unsupported operand type(s) for *: 'int' and 'NoneType' #62

Open yangchengtest opened 4 years ago

yangchengtest commented 4 years ago
    if size:
        if isinstance(size, int):
            upsample_param.upsample_h = size
        else:
            upsample_param.upsample_h = size[0] * scale_factor
            upsample_param.upsample_w = size[1] * scale_factor

When size is not None, scale_factor is None, so the multiplication in the else branch raises the TypeError above.

Can I change the code to the following?

    else:
        if not scale_factor:
            scale_factor = 1
        upsample_param.upsample_h = size[0] * scale_factor
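
Put together, a minimal standalone sketch of the whole branch with that guard applied (the wrapper function fill_upsample_param is hypothetical; the upsample_h/upsample_w fields come from the snippet above):

    def fill_upsample_param(upsample_param, size=None, scale_factor=None):
        # Hypothetical helper mirroring the branch in Caffe/layer_param.py.
        if size:
            if isinstance(size, int):
                upsample_param.upsample_h = size
            else:
                # size and scale_factor are mutually exclusive in
                # torch.nn.functional.interpolate, so scale_factor can be
                # None here; default it to 1 before multiplying.
                if not scale_factor:
                    scale_factor = 1
                upsample_param.upsample_h = size[0] * scale_factor
                upsample_param.upsample_w = size[1] * scale_factor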

YFCYFC commented 4 years ago

Hi @yangchengtest. I met the same problem. It seems that only one of size and scale_factor in torch.nn.functional.interpolate should be defined, so I believe the author forgot this case. Indeed, when size is given, scale_factor must not be used, so your implementation is right. The best way to handle scale_factor, I assume, is to delete it entirely, like this:

    if size:
        if isinstance(size, int):
            upsample_param.upsample_h = size
        else:
            # scale_factor = 1
            upsample_param.upsample_h = size[0]
            upsample_param.upsample_w = size[1]

I'm not familiar with PyTorch, so please point out any mistakes.
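
For reference, torch.nn.functional.interpolate really does accept only one of size / scale_factor (passing both raises a ValueError), which is why scale_factor is None whenever size is set during conversion. A quick check:

    import torch
    import torch.nn.functional as F

    x = torch.randn(1, 3, 8, 8)
    y1 = F.interpolate(x, size=(16, 16))     # scale_factor stays None
    y2 = F.interpolate(x, scale_factor=2.0)  # size stays None
    print(y1.shape, y2.shape)  # both torch.Size([1, 3, 16, 16])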

happyday-lkj commented 4 years ago

> The best way to handle scale_factor, I assume, is to delete it entirely

In fact, it gives the same result.
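
A quick sanity check that the two fixes agree (using SimpleNamespace as a stand-in for the Caffe protobuf message, purely for illustration):

    from types import SimpleNamespace

    def fix_default_scale(p, size, scale_factor):
        # yangchengtest's version: default scale_factor to 1 when missing.
        if not scale_factor:
            scale_factor = 1
        p.upsample_h = size[0] * scale_factor
        p.upsample_w = size[1] * scale_factor

    def fix_drop_scale(p, size):
        # YFCYFC's version: ignore scale_factor when size is given.
        p.upsample_h = size[0]
        p.upsample_w = size[1]

    a, b = SimpleNamespace(), SimpleNamespace()
    fix_default_scale(a, (16, 32), None)
    fix_drop_scale(b, (16, 32))
    assert (a.upsample_h, a.upsample_w) == (b.upsample_h, b.upsample_w)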