Russzheng / ONNX_custom_layer

Guide on how to convert custom PyTorch layers when using ONNX.
MIT License

How to add convolution in symbolic function? #1

Open FenixFly opened 4 years ago

FenixFly commented 4 years ago

Hello! I see you have experience with symbolic functions. Can you help me? I'm trying to make an ONNX implementation of a simple PyTorch block. I made a custom block with a custom forward function.

class MyConvBlockFunction(Function):

    @staticmethod
    def symbolic(g, input, conv1):
        from torch.onnx.symbolic_opset9 import _convolution, relu

        conv = _convolution(g, input, conv1.weight, False, 1, 1, 1, False, (), 1, None, None, None)
        output = relu(g, conv)

        return output

    @staticmethod
    def forward(ctx, input, conv1):
        conv = conv1(input)
        relu1 = nn.ReLU()
        res = relu1(conv)
        return res

class MyConvBlock(nn.Module):

    def __init__(self):
        super(MyConvBlock, self).__init__()
        self.conv1 = nn.Conv2d(in_channels = 3, out_channels = 2, kernel_size=3, stride=1, padding=1, bias=False)
        self.relu = nn.ReLU()
        #self.weight = torch.tensor(self.conv1.weight, requires_grad=False)

    def forward(self, input):
        return MyConvBlockFunction.apply(input, self.conv1)

But when I run the export to ONNX, I get an error saying the weight is not a valid tensor.

weight_size = weight.type().sizes()
AttributeError: 'str' object has no attribute 'sizes'

I found that my input tensor is a proper ONNX graph value, but 'conv1.weight' is a PyTorch tensor, not an ONNX value.

input.1 defined in (%input.1 : Float(1, 3, 4, 4), %conv1.weight : Float(2, 3, 3, 3) = prim::Param()
2 defined in (%2 : Tensor = onnx::Constant[value=<Tensor>](), scope: MyConvBlock)

How can I send weights to onnx _convolution operation?
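For reference, the usual way around this error is to pass the weight tensor itself through `Function.apply` (instead of the whole `nn.Conv2d` module), so the tracer records it as a graph value that the symbolic function can use. Below is a minimal sketch of that pattern using generic `g.op` calls with standard ONNX `Conv`/`Relu` ops rather than the internal `_convolution` helper; it is an illustrative workaround, not the repository's own solution.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Function

class MyConvBlockFunction(Function):

    @staticmethod
    def symbolic(g, input, weight):
        # `weight` arrives here as an ONNX graph value because it was
        # passed through apply(); emit standard Conv and Relu nodes.
        conv = g.op("Conv", input, weight,
                    kernel_shape_i=[3, 3],
                    pads_i=[1, 1, 1, 1],
                    strides_i=[1, 1])
        return g.op("Relu", conv)

    @staticmethod
    def forward(ctx, input, weight):
        # Mirror the traced graph: 3x3 conv (stride 1, padding 1), then ReLU.
        return F.relu(F.conv2d(input, weight, stride=1, padding=1))

class MyConvBlock(nn.Module):

    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels=3, out_channels=2,
                               kernel_size=3, stride=1, padding=1, bias=False)

    def forward(self, input):
        # Pass the weight tensor, not the module, so tracing captures it.
        return MyConvBlockFunction.apply(input, self.conv1.weight)

block = MyConvBlock()
out = block(torch.randn(1, 3, 4, 4))
print(tuple(out.shape))  # (1, 2, 4, 4)
```

With the weight passed this way, the exporter no longer sees a bare Python attribute in the symbolic function, so the `'str' object has no attribute 'sizes'` failure does not occur.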

Thank you for your help!

Yuzz1020 commented 4 years ago

I am facing similar issues, have you solved this?

FenixFly commented 4 years ago

> I am facing similar issues, have you solved this?

Unfortunately, no. I didn't find any information on how to solve this problem.