onnx / onnx-coreml

ONNX to Core ML Converter
MIT License

Issue with constant padding operator #568

Open xzimg opened 4 years ago

xzimg commented 4 years ago

🐞Describe the bug

The constant padding operator does not seem to be handled correctly by the onnx-coreml converter. When the resulting mlmodel is imported into Xcode, it fails to compile because the padding axes are misinterpreted.

Trace

coremlc: Error: compiler error: Espresso exception: "Invalid blob shape": generic_elementwise_kernel: cannot broadcast [7, 8, 2, 1, 1] and [8, 8, 1, 1, 1]

Command CoreMLModelCompile failed with a nonzero exit code
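For reference, the intended shapes: padding the (1, 1, 8, 6) input by one column on each side should give (1, 1, 8, 8), which adds elementwise with the second input with no broadcasting at all. The error shapes ([7, 8, 2, 1, 1] vs [8, 8, 1, 1, 1]) suggest the pad amounts ended up on the wrong axes of Core ML's internal rank-5 layout. A minimal NumPy sketch of the intended computation (NumPy used here only for illustration):

```python
import numpy as np

# Intended computation: pad the last (width) axis by 1 on each side, then add.
t1 = np.random.rand(1, 1, 8, 6)   # (B, N, H, W-2)
t2 = np.random.rand(1, 1, 8, 8)   # (B, N, H, W)
padded = np.pad(t1, ((0, 0), (0, 0), (0, 0), (1, 1)), constant_values=0.0)
y = padded + t2
print(y.shape)  # (1, 1, 8, 8)
```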

[Screenshot: Xcode build log showing the compiler error, 2020-05-07 11:03:59]

To Reproduce

Below is the code used to create the mlmodel (a padding followed by an addition). The ONNX file is first created from PyTorch with torch.onnx.export.

import torch
import torch.nn.functional as F

## -- Script to reproduce the Core ML issue
class net_pad_add(torch.nn.Module):
    def __init__(self):
        super(net_pad_add, self).__init__()

    def forward(self, x):
        t = F.pad(x[0], pad=(1, 1, 0, 0), value=0.)
        y = t + x[1]        
        return y

## -- Test in pytorch
N, B, H, W = 1, 1, 8, 8
t1 = torch.rand((B, N, H, W-2))
t2 = torch.rand((B, N, H, W))
model = net_pad_add()
y = model([t1, t2])
print(y.size())

## -- Export onnx then coreml (mlmodel)
print(model)
fn_onnx = "test-bug-ios.onnx"
torch.onnx.export(
    model,
    [t1, t2],
    fn_onnx,
    verbose=True)

from onnx_coreml import convert
mlmodel = convert(
    fn_onnx,
    minimum_ios_deployment_target='13')
mlmodel.save("test-bug-ios.mlmodel")
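One way to narrow the bug down is to check the pads attribute of the exported ONNX Pad node. PyTorch's F.pad tuple pads the last dimensions first (w_left, w_right, h_top, h_bottom, ...), while ONNX Pad stores all begin values followed by all end values. The expected mapping can be sketched like this (torch_pad_to_onnx_pads is a hypothetical helper written for illustration, not part of either library):

```python
def torch_pad_to_onnx_pads(pad, rank):
    # torch F.pad pads the last dims first: (w_left, w_right, h_top, h_bottom, ...)
    # ONNX Pad expects [x1_begin, ..., xn_begin, x1_end, ..., xn_end]
    begins = [0] * rank
    ends = [0] * rank
    for i in range(len(pad) // 2):
        axis = rank - 1 - i
        begins[axis] = pad[2 * i]
        ends[axis] = pad[2 * i + 1]
    return begins + ends

# pad=(1, 1, 0, 0) on a 4D tensor: one element on each side of the width axis
print(torch_pad_to_onnx_pads((1, 1, 0, 0), 4))  # [0, 0, 0, 1, 0, 0, 0, 1]
```

If the exported Pad node carries those pads, the ONNX side is correct and the axis mix-up happens inside the onnx-coreml conversion.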

The ONNX model can't be attached here, but it can be recreated with the code above.
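As a possible workaround (untested against the converter), the same padding can be expressed with torch.cat and an explicit zero tensor, so the exported graph contains a Concat node instead of a Pad node:

```python
import torch

class NetPadAddWorkaround(torch.nn.Module):
    # Hypothetical workaround: build the constant padding with torch.cat
    # instead of F.pad, avoiding the Pad operator in the exported graph.
    def forward(self, x):
        t, other = x
        zeros = torch.zeros(t.shape[0], t.shape[1], t.shape[2], 1)
        t = torch.cat([zeros, t, zeros], dim=-1)  # pad width by 1 on each side
        return t + other

model = NetPadAddWorkaround()
t1 = torch.rand(1, 1, 8, 6)
t2 = torch.rand(1, 1, 8, 8)
y = model([t1, t2])
print(y.shape)  # torch.Size([1, 1, 8, 8])
```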

System environment (please complete the following information):