apple / coremltools

Core ML tools contain supporting tools for Core ML model conversion, editing, and validation.
https://coremltools.readme.io
BSD 3-Clause "New" or "Revised" License

Question re. ANE Usage with Flexible Input Shapes #1764

Open rsomani95 opened 1 year ago

rsomani95 commented 1 year ago

❓ Question

Not sure if this is a framework issue or one with coremltools. My hunch is the latter, so I'm asking here.

I've exported a model that requires a flexible input shape, with the default shape set to 1. This model doesn't use the ANE at all and runs only on the CPU.

Out of curiosity, I fixed the input shape to 1 to see whether the model would run faster. That version uses the GPU / ANE and is significantly faster. Does this mean ANE usage is off the table with flexible input shapes, or is there scope to redefine the model so it can use the ANE with flexible shapes too?

Unfortunately, I cannot share the model definition publicly.
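For reference, here is a minimal sketch of the two conversion paths being compared; the model and shapes below are placeholders, not the actual (private) model:

```python
import torch
import coremltools as ct

# Placeholder model standing in for the real one.
model = torch.nn.Conv2d(3, 8, kernel_size=3, padding=1).eval()
traced = torch.jit.trace(model, torch.rand(1, 3, 224, 224))

# Fixed input shape: batch dimension pinned to 1.
fixed_mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="input", shape=(1, 3, 224, 224))],
)

# Flexible input shape: batch dimension is a RangeDim defaulting to 1.
flexible_mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(
        name="input",
        shape=ct.Shape(shape=(ct.RangeDim(1, 8, default=1), 3, 224, 224)),
    )],
)
```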

Fixed input shape:

[Screenshot: CleanShot 2023-02-09 at 18 46 12]

Flexible input shape:

[Screenshot: CleanShot 2023-02-09 at 18 46 15]

TobyRoseman commented 1 year ago

> Not sure if this is a framework issue or one with coremltools. My hunch is the latter, so I'm asking here.

I think this is much more likely to be an issue with the Core ML Framework. At a high level, the coremltools package takes a source model (i.e., a TensorFlow or PyTorch model) and converts it to MIL ops. The Core ML Framework then decides which device (CPU, GPU, or ANE) runs each op.

For help with the Core ML Framework, you could post in, or search previous posts on, the Apple Developer Forums. Submitting this issue via Feedback Assistant would also be good.

Without steps to reproduce this issue, I don't think there is much we can do here.
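One way to probe where the time goes from Python (a rough sketch, assuming a converted `.mlpackage` at a hypothetical path and a hypothetical input name; requires macOS, and Xcode's performance report is more precise) is to load the same model under different `compute_units` and compare latency:

```python
import time

import numpy as np
import coremltools as ct

# Hypothetical path to the converted model package.
MODEL_PATH = "MyModel.mlpackage"

# Hypothetical input name and shape; match your model's actual interface.
x = {"input": np.random.rand(1, 3, 224, 224).astype(np.float32)}

for units in (ct.ComputeUnit.CPU_ONLY, ct.ComputeUnit.ALL):
    model = ct.models.MLModel(MODEL_PATH, compute_units=units)
    model.predict(x)  # warm-up run
    start = time.perf_counter()
    for _ in range(20):
        model.predict(x)
    avg_ms = (time.perf_counter() - start) / 20 * 1000
    print(f"{units}: {avg_ms:.1f} ms per prediction")
```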

vade commented 1 year ago

Filed internal report FB12038163

aseemw commented 1 year ago

As discussed in #1763, the model should continue to use the ANE with EnumeratedShapes, unless the flexible input shapes cause some layers to become dynamic, in which case those layers might not be supported on the neural engine. If the ops are exactly the same between the static and flexible models (say, a fully convolutional model) and the static model runs on the NE but the enumerated-shape flexible model does not, then it's likely a bug.
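For reference, a minimal sketch of converting with EnumeratedShapes instead of a RangeDim (placeholder model and shapes; adapt to the real one):

```python
import torch
import coremltools as ct

# Placeholder fully convolutional model (its ops are shape-independent).
model = torch.nn.Conv2d(3, 8, kernel_size=3, padding=1).eval()
traced = torch.jit.trace(model, torch.rand(1, 3, 224, 224))

# A fixed set of allowed shapes keeps every op static, which gives
# the Neural Engine the best chance of running the whole model.
enumerated = ct.EnumeratedShapes(
    shapes=[(1, 3, 224, 224), (2, 3, 224, 224), (4, 3, 224, 224)],
    default=(1, 3, 224, 224),
)
mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="input", shape=enumerated)],
)
```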