Closed: 3DTOPO closed this issue 3 years ago
It's been a long time. Any progress on this?
I am getting the following error when convert the traced PyTorch model to CoreML with coremltools:
RuntimeError: PyTorch convert function for op 'reflection_pad2d' not implemented.
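For context, this op appears whenever a traced model uses nn.ReflectionPad2d, as style-transfer models like TransformerNet do. A minimal sketch that traces such a layer (the module below is my own illustration, not the model.py from this thread):

```python
import torch
import torch.nn as nn

class ConvLayer(nn.Module):
    """Conv block in the TransformerNet style: reflection padding
    followed by a convolution, so spatial size is preserved."""
    def __init__(self, in_ch, out_ch, kernel_size, stride):
        super().__init__()
        self.pad = nn.ReflectionPad2d(kernel_size // 2)
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, stride)

    def forward(self, x):
        return self.conv(self.pad(x))

layer = ConvLayer(3, 32, 9, 1).eval()
traced = torch.jit.trace(layer, torch.rand(1, 3, 64, 64))
out = traced(torch.rand(1, 3, 64, 64))
print(out.shape)  # torch.Size([1, 32, 64, 64])
```

Passing `traced` to `ct.convert` is what triggers the `reflection_pad2d` not-implemented error on older coremltools versions.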
I honestly can't believe this hasn't been addressed yet. It is one of the most important pieces of my iOS development toolchain.
Anyhow, there is a workaround for reflection_pad2d; see https://github.com/apple/coremltools/issues/855
@3DTOPO It looks to me like you weren't able to write a MIL operator for Reflection Padding 2D, right?
No, mushipand's solution works. The coremltools team said they would add it, and I was just bummed that it still hasn't been added.
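For reference, the workaround in issue #855 follows coremltools' custom torch-op pattern: register a `reflection_pad2d` translation that lowers to MIL's `pad` op with `mode="reflect"`. A sketch under that assumption (the pad-reordering helper name is mine, and the registration is wrapped in a broad try/except so the snippet also runs where coremltools is missing or already registers the op itself):

```python
import numpy as np

def torch_pad_to_mil_pad(torch_pad, rank):
    """PyTorch lists pads starting at the last axis as (left, right, top, bottom);
    MIL's pad op wants (before, after) pairs starting at the first axis."""
    pairs = np.asarray(torch_pad).reshape(-1, 2)[::-1]   # reverse the axis order
    pad = pairs.flatten().tolist()
    return [0, 0] * (rank - len(pad) // 2) + pad         # no padding on leading axes

try:
    from coremltools.converters.mil import Builder as mb
    from coremltools.converters.mil.frontend.torch.ops import _get_inputs
    from coremltools.converters.mil.frontend.torch.torch_op_registry import register_torch_op

    @register_torch_op
    def reflection_pad2d(context, node):
        x, pad = _get_inputs(context, node, expected=2)
        context.add(mb.pad(x=x, pad=torch_pad_to_mil_pad(pad.val, x.rank),
                           mode="reflect"), node.name)
except Exception:
    pass  # coremltools unavailable, or newer versions already implement the op

print(torch_pad_to_mil_pad([1, 1, 2, 2], 4))  # [0, 0, 0, 0, 2, 2, 1, 1]
```

Running this registration before `ct.convert` is what lets the converter handle the traced `reflection_pad2d` node.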
@3DTOPO Ran your given code and got the following dims mismatch error:
ValueError: Dimension mismatch in concat ("x.28"): shapes [1, 32, 0, 0] vs. (1, 32, -128, -128)
Which PyTorch code works for converting the TransformerNet model to CoreML?
I was able to get it working. Sorry I can't recall the details or I would share. Try asking mushipand since it is his solution.
Looks like your conversion script isn't right.
This is irritating. Why can't Apple just do it for us!
They provide the tools, but it's up to us to use them properly. There is documentation, and there are places to find help.
I'm trying to wrap up development of an update that I've spent 2 years working on. Is this glaring bug ever going to be addressed?
Otherwise I am facing shipping a product with a horrendous workaround for a feature that is supposed to be supported. I can't express how frustrating this issue is; it affects one of the most critical toolchains for my app development.
Thanks for the tip, but it used to be possible, and according to the docs it should be possible; the docs even show examples of how to do it.
I've tried that method (and so have others, as reported in this thread) and it doesn't work for me; it was the first thing I tried.
Are you using the model.py I have defined in the original post?
@3DTOPO Yeah, I just got the same error. It turns out there is likely a bug with flexible shapes for the image input type. A quick workaround is to use a TensorType input instead:
import coremltools as ct
import numpy as np

mlmodel = ct.convert(
    traced_model,
    inputs=[ct.TensorType(name="input_1", shape=(1, channels, ct.RangeDim(256, 3072), ct.RangeDim(256, 3072)))],
    # outputs must not be specified for PyTorch
)

np_input = np.random.rand(1, 3, 2500, 2500)
output = mlmodel.predict({"input_1": np_input})
print(output)
This code snippet works fine on my local machine. But for the image input type with flexible shapes, we still need to investigate the issue.
Yeah, that is the whole point: flexible image inputs are not possible. My workaround is to use a flexible array input and convert the image to an array using the Accelerate framework. It works, but it is a ridiculous workaround compared to using an image input, which is supposed to be supported.
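On the Python side, the equivalent of that array workaround is just reshaping pixel data into the NCHW float layout a flexible TensorType input expects; a minimal sketch (the helper name is mine):

```python
import numpy as np

def hwc_to_nchw(pixels):
    """Convert an H x W x 3 uint8 image array into the 1 x 3 x H x W
    float32 layout that a flexible TensorType input expects."""
    assert pixels.ndim == 3 and pixels.shape[2] == 3
    return pixels.astype(np.float32).transpose(2, 0, 1)[np.newaxis, ...]

# Random "pixels" for illustration; with a real image you would load it
# first, e.g. np.asarray(Image.open(path).convert("RGB")) via PIL.
pixels = np.random.randint(0, 256, (512, 768, 3), dtype=np.uint8)
tensor = hwc_to_nchw(pixels)
print(tensor.shape)  # (1, 3, 512, 768)
```

The resulting array can be fed straight to `mlmodel.predict({"input_1": tensor})`; in the shipped app the same conversion has to happen natively, hence the Accelerate-based code mentioned above.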
There was a bug in the Core ML framework on macOS Big Sur when using image inputs with RangeDim shapes. This has been fixed in macOS Monterey. Please see #1263 for unit tests.
But it's not working for me on Monterey. In fact, it is now much worse for me: I used to be able to use a flexible array input with a flexible output image, but now even that doesn't work: https://github.com/apple/coremltools/issues/1244
Just so it's more clear, what specifically needs to happen on Monterey?
Does the model have to be compiled on Monterey? What about linux?
Or does the app have to be compiled in Xcode on Monterey?
Can you provide specific versions (Xcode, macOS, coremltools, python, PyTorch, etc) for the complete env where you have verified them to work?
Are you able to run TestFlexibleInputShapes from this PR: #1263 and see if those tests pass for you? That is, run:
pytest -v coremltools/converters/mil/test_flexible_shape_inputs.py::TestFlexibleInputShapes
python: 3.7 or 3.8
coremltools: 5.0b2
macOS: Monterey
Xcode: Xcode 13
PyTorch: 1.9
Hello, I encountered the same problem while converting a different model. Did you resolve it?
🐞Describe the bug
I made changes to my model so I could use the recommended unified converter. Conversion succeeds without issue and shows that flexible shapes are supported (in both Python and Xcode).
Running prediction with a shape inside the supported ranges (that is, any shape other than the fixed shape) fails with an error. The fixed-shape input works as expected. I've tried both GPU and CPU-only execution.
Trace
To Reproduce
The source code and model is in the attached archive.
model.py:
System environment (please complete the following information):
Additional context
This issue severely restricts deploying MLModels across my workflow.
repo-pythorch-conversion.zip