CaoWGG / CenterNet-CondInst

Instance Segmentation based on CenterNet and CondInst
MIT License

backbone mobilenet_light export to onnx #6

Closed haiyang-tju closed 4 years ago

haiyang-tju commented 4 years ago

When I export the backbone of mobilenet_light to ONNX, I get some errors:

onnx.export(model, input, "./aa.onnx", export_params=True)

---> 10 onnx.export(model, input, "./aa.onnx", export_params=True)
2 frames
/usr/local/lib/python3.6/dist-packages/torch/onnx/utils.py in _export(model, args, f, export_params, verbose, training, input_names, output_names, operator_export_type, export_type, example_outputs, propagate, opset_version, _retain_param_name, do_constant_folding, strip_doc_string, dynamic_axes, keep_initializers_as_inputs, fixed_batch_size)
    392             proto, export_map = graph._export_onnx(
    393                 params_dict, opset_version, dynamic_axes, defer_weight_export,
--> 394                 operator_export_type, strip_doc_string, val_keep_init_as_ip)
    395         else:
    396             proto, export_map = graph._export_onnx(

RuntimeError: ONNX export failed: Couldn't export operator aten::upsample_bilinear2d

Then I added opset_version=11:

torch.onnx.export(model, input, onnx_file_path, export_params=True, opset_version=11)

---> 10 torch.onnx.export(model, input, onnx_file_path, export_params=True, opset_version=11)

7 frames
/usr/local/lib/python3.6/dist-packages/torch/onnx/symbolic_helper.py in symbolic_fn(*args, **kwargs)
    173         raise RuntimeError("ONNX export failed on {}, which is not implemented for opset {}. "
    174                            "Try exporting with other opset versions."
--> 175                            .format(name, _export_onnx_opset_version))
    176     return symbolic_fn
    177 

RuntimeError: ONNX export failed on hardtanh, which is not implemented for opset 11. Try exporting with other opset versions.

I did not find anything called hardtanh or tanh in the model code.

edit: ReLU6 is a child of Hardtanh...

# torch/nn/modules/activation.py
class ReLU6(Hardtanh):
    def __init__(self, inplace=False):
        super(ReLU6, self).__init__(0., 6., inplace)

Do you have any idea about that?
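One workaround that might sidestep the hardtanh symbolic entirely (my own sketch, not something from this repo): since ReLU6(x) is just clamp(x, 0, 6), the ReLU6 modules can be swapped for an explicit clamp before export; torch.clamp has an ONNX symbolic (it exports as Clip). The ClampReLU6 / replace_relu6 names are hypothetical:

import torch
import torch.nn as nn

class ClampReLU6(nn.Module):
    # Drop-in stand-in for nn.ReLU6 that avoids the hardtanh symbolic.
    def forward(self, x):
        return torch.clamp(x, 0.0, 6.0)  # exports as onnx::Clip

def replace_relu6(module):
    # Recursively swap every nn.ReLU6 in the model for the clamp version.
    for name, child in module.named_children():
        if isinstance(child, nn.ReLU6):
            setattr(module, name, ClampReLU6())
        else:
            replace_relu6(child)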

CaoWGG commented 4 years ago

@haiyang-tju Maybe you need to fix the input image size; then you can change it here: replace *.size() with [int, int], or use scale_factor.
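In code, the suggestion amounts to something like this (the real file/line is behind the "here" link above, so the tensor names and sizes are illustrative):

import torch
import torch.nn.functional as F

x = torch.randn(1, 64, 64, 64)  # illustrative feature map

# Before: the target size is read from another tensor at trace time, which is
# what produces aten::upsample_bilinear2d with dynamic (tensor-valued) sizes:
#   out = F.interpolate(x, size=skip.size()[2:], mode='bilinear')

# After, for a fixed input resolution, hard-code the size as Python ints...
out = F.interpolate(x, size=(128, 128), mode='bilinear', align_corners=False)
# ...or, for plain 2x upsampling, pass a constant scale factor instead:
out = F.interpolate(x, scale_factor=2.0, mode='bilinear', align_corners=False)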

haiyang-tju commented 4 years ago

Thanks for your reply. @CaoWGG

I searched and tried many options, and there is no good solution. One painful way is to set align_corners=False and export with the default opset_version. This gives two UserWarnings; I don't know whether they actually have an impact:

torch.onnx.export(model, input, "./bb.onnx", export_params=True)

UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.

***/lib/python3.6/site-packages/torch/onnx/symbolic_helper.py:198: UserWarning: You are trying to export the model with onnx:Upsample for ONNX opset version 9. This operator might cause results to not match the expected results by PyTorch.
ONNX's Upsample/Resize operator did not match Pytorch's Interpolation until opset 11. Attributes to determine how to transform the input were added in onnx:Resize in opset 11 to support Pytorch's behavior (like coordinate_transformation_mode and nearest_mode).
We recommend using opset 11 and above for models using this operator.

And maybe this can solve the error. https://github.com/pytorch/pytorch/issues/22906#issuecomment-518317265
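On whether the opset-9 Upsample mismatch actually matters for this model, one way to check (my own sketch: onnxruntime is an assumption, and the exported module is assumed to return a single tensor) is to run the same input through both and compare:

import numpy as np
import onnxruntime as ort
import torch

model.eval()  # model: the module exported to ./bb.onnx above
x = torch.randn(1, 3, 512, 512)  # fixed input size; adjust to your model
with torch.no_grad():
    ref = model(x).numpy()

sess = ort.InferenceSession("./bb.onnx")
out = sess.run(None, {sess.get_inputs()[0].name: x.numpy()})[0]
print(np.abs(ref - out).max())  # large values mean the warning does bite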

haiyang-tju commented 4 years ago

Following the reference here, I made a custom layer that does 2x bilinear upsampling, and it can be successfully exported to ONNX.

import torch
import torch.nn as nn


class resize_bilinear(nn.Module):
    # 2x bilinear upsampling with all indices and weights precomputed for a
    # fixed input size, so the exported graph only contains index_select and
    # elementwise ops instead of Upsample/Resize.
    def __init__(self, in_shape):
        super(resize_bilinear, self).__init__()
        self.in_shape = in_shape                 # input H (== W)
        self.rhw = 2                             # upsampling factor
        self.out_shape = self.in_shape * self.rhw
        self.cal_parameters()

    def cal_parameters(self):
        # Source coordinate of each output pixel; this is the
        # align_corners=False mapping ty = (y + 0.5) / scale - 0.5.
        y = torch.arange(self.out_shape, dtype=torch.float32)
        ty = (y + 1) / self.rhw + 0.5 * (1 - 1.0 / self.rhw) - 1
        ty = torch.max(ty, torch.zeros([1]))     # clamp negatives at the border

        ty_floor = ty.floor()
        ty_ceil = ty.ceil()
        dy = ty - ty_floor                       # fractional offset in [0, 1)
        dydy = dy.view(-1, 1) * dy.view(-1)      # outer product dy_i * dy_j

        iy0 = ty_floor.long()
        iy1 = torch.clamp(ty_ceil, 0, self.in_shape - 1).long()

        # Frozen parameters, so the exporter bakes them in as constants.
        self.iy0 = nn.Parameter(iy0, requires_grad=False)
        self.iy1 = nn.Parameter(iy1, requires_grad=False)
        self.dy = nn.Parameter(dy, requires_grad=False)
        self.dydy = nn.Parameter(dydy, requires_grad=False)

    def forward(self, x):
        if x is None:
            return x
        im_iy0 = x.index_select(2, self.iy0)     # floor rows
        im_iy1 = x.index_select(2, self.iy1)     # ceil rows
        # Per-axis fractional weights: wy applies along H, wx along W,
        # and dydy == wy * wx, so the four bilinear coefficients
        # (1-wy)(1-wx), wy(1-wx), (1-wy)wx and wy*wx expand as below.
        wy = self.dy.view(-1, 1)
        wx = self.dy.view(1, -1)
        d = im_iy0.index_select(3, self.iy0) * (1 - wy - wx + self.dydy) + \
            im_iy1.index_select(3, self.iy0) * (wy - self.dydy) + \
            im_iy0.index_select(3, self.iy1) * (wx - self.dydy) + \
            im_iy1.index_select(3, self.iy1) * self.dydy
        return d
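Usage sketch (my own; the sizes are hypothetical): the layer is built for a known feature-map size, so the input resolution has to be fixed:

import torch

up = resize_bilinear(in_shape=128)       # 128x128 feature map entering the upsample
x = torch.randn(1, 64, 128, 128)
y = up(x)
print(y.shape)                           # torch.Size([1, 64, 256, 256])

# index_select and elementwise ops all have ONNX symbolics, so this goes through:
torch.onnx.export(up, x, "./resize_bilinear.onnx", export_params=True)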