apple / coremltools

Core ML tools contain supporting tools for Core ML model conversion, editing, and validation.
https://coremltools.readme.io
BSD 3-Clause "New" or "Revised" License

4.0b2: Flexible shapes not working but enumerated image sizes work #840

Open 3DTOPO opened 4 years ago

3DTOPO commented 4 years ago

🐞Bug

I have successfully converted a model to use images instead of multi-arrays (with 4.0b2).

I can add flexible shapes and the model compiles/exports fine. But if I run predict on the resulting model with an input image whose dimensions differ from the fixed dimensions (while still inside the flexible shape range), I get the trace below:

Trace

Traceback (most recent call last):
  File "/Users/jeshua/produceImageWithMLModel.py", line 36, in <module>
    main()
  File "/Users/jeshua/produceImageWithMLModel.py", line 30, in main
    outputImage = model.predict({args.inputLayer: image})[args.outputLayer]
  File "/Users/jeshua/coremltools4/lib/python3.8/site-packages/coremltools/models/model.py", line 329, in predict
    return self.proxy.predict(data, useCPUOnly)
RuntimeError: {
    NSLocalizedDescription = "Error binding image input buffer input.";

Code snippet

I added the flexible shapes like this:

from coremltools.models.neural_network import flexible_shape_utils

# Allow any height/width from 64 to 4096 on both the input and output images.
img_size_ranges = flexible_shape_utils.NeuralNetworkImageSizeRange()
img_size_ranges.add_height_range((64, 4096))
img_size_ranges.add_width_range((64, 4096))

flexible_shape_utils.update_image_size_range(spec, feature_name='input', size_range=img_size_ranges)
flexible_shape_utils.update_image_size_range(spec, feature_name='output', size_range=img_size_ranges)

If I add enumerated shapes instead, the enumerated shapes work (and only those):

image_sizes = [
    flexible_shape_utils.NeuralNetworkImageSize(512, 512),
    flexible_shape_utils.NeuralNetworkImageSize(1024, 1024),
    flexible_shape_utils.NeuralNetworkImageSize(2048, 2048),
]
flexible_shape_utils.add_enumerated_image_sizes(spec, feature_name='input', sizes=image_sizes)
flexible_shape_utils.add_enumerated_image_sizes(spec, feature_name='output', sizes=image_sizes)
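Since enumerated sizes do work, one interim workaround (my own suggestion, not an official fix) is to resize each input to the nearest enumerated size before calling predict, then scale the output back. A minimal sketch of the size-selection step, assuming the square enumerated sizes above; `nearest_enumerated_size` is a hypothetical helper, not a coremltools API:

```python
# Pick the enumerated (width, height) closest to the actual input size,
# so the image can be resized to a shape the compiled model accepts.
# These sizes mirror the ones registered on the model above; adjust
# the list if the model declares different enumerated sizes.
ENUMERATED_SIZES = [(512, 512), (1024, 1024), (2048, 2048)]

def nearest_enumerated_size(width, height, sizes=ENUMERATED_SIZES):
    """Return the enumerated size minimizing total absolute difference."""
    return min(sizes, key=lambda s: abs(s[0] - width) + abs(s[1] - height))
```

The caller would then resize the image to the returned size (e.g. with PIL's `Image.resize`), run predict, and resize the result back to the original dimensions if needed.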

System environment:

leovinus2001 commented 4 years ago

Possibly related #756

3DTOPO commented 4 years ago

I've tried every workaround I can think of, and I get the same error every time. This is a critical issue for me; are there any plans to fix it for the 4.0 release?

Are there any known workarounds? The information in issue #756 didn't really help me, because I have to convert via ONNX first, and that thread is about the unified converter.

Note I get a similar message trying to run predict with Swift:

MyApp[14739:5337178] [espresso] [Espresso::handle_ex_plan] exception=Invalid X-dimension 1/814 status=-7
MyApp[14739:5337178] [coreml] Error binding image input buffer input: -7
MyApp[14739:5337178] [coreml] Failure in bindInputsAndOutputs.
prediction error: Error Domain=com.apple.CoreML Code=0 "Error binding image input buffer input." UserInfo={NSLocalizedDescription=Error binding image input buffer input.}
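The `Invalid X-dimension 1/814` line suggests the runtime rejected the width 814 itself. One guess, and it is purely my assumption rather than anything documented, is that networks with downsampling/upsampling layers only accept dimensions divisible by their total stride; if that's the case, snapping each dimension to the nearest acceptable multiple before predicting might sidestep the error. A sketch (`snap_to_multiple` and the stride of 32 are placeholders I made up, not values from the model):

```python
def snap_to_multiple(value, multiple=32, minimum=64, maximum=4096):
    """Round a dimension to the nearest multiple of `multiple`,
    clamped to the flexible range declared on the model (64..4096).
    The stride of 32 is a placeholder; the real constraint (if any)
    depends on the network's architecture."""
    snapped = round(value / multiple) * multiple
    return max(minimum, min(maximum, snapped))
```

For example, a width of 814 would be resized to 800 before calling predict, which a stride-constrained network could accept where 814 fails.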

3DTOPO commented 4 years ago

So that the issue can easily be reproduced and the model inspected, I've uploaded it here:

FlexibleModel.mlmodel.zip

I would super appreciate any and all help.

3DTOPO commented 3 years ago

I see many posts about flexible shapes not working, especially with PyTorch.

Can anyone offer any insight why the above model's flexible shapes don't work?

This really is a critical issue for me, and apparently for others. Is there anything that can be done? Any workarounds?

All the amazing features of coremltools are useless to me if I can't deploy the model with flexible shapes.