apple / coremltools

Core ML tools contain supporting tools for Core ML model conversion, editing, and validation.
https://coremltools.readme.io
BSD 3-Clause "New" or "Revised" License

Converted PyTorch model can't support flexible Input/Output Shape for ImageType #1075

Open eric573 opened 3 years ago

eric573 commented 3 years ago

🐞Describe the bug

Trace/Error Message

Xcode

Code:

```swift
let output = try? model!.prediction(image: pixelBuffer!)
```

Trace:

```
[espresso] [Espresso::handle_ex_plan] exception=Espresso exception: "Invalid argument": generic_general_concat_kernel: axis out of bounds status=-6
[coreml] Failure dynamically resizing for sequence length.
[coreml] Failure in resetSizes.
```

Using coremltools API

Code:

```python
output = model.predict({'image': img})
```

Trace:

```
Traceback (most recent call last):
  File "/Users/user/Documents/fast_nst/iphone_test.py", line 23, in <module>
    out_dict = model.predict({'image': img})
  File "/Users/user/miniconda3/envs/NST/lib/python3.8/site-packages/coremltools/models/model.py", line 329, in predict
    return self.__proxy__.predict(data, useCPUOnly)
RuntimeError: {
    NSLocalizedDescription = "Failure dynamically resizing for sequence length.";
}
```

To Reproduce [FB8985338 with Source File]

  1. Convert a PyTorch model into a Core ML model and configure it for flexible input/output [CODE BELOW]
  2. Run prediction with the coremltools API on the generated .mlmodel (in this case, make sure to test with an image whose size differs from the one the model was traced with).

Configure spec to support flexible input & output image

```python
def convert_flexible(spec):
    img_size_ranges = flexible_shape_utils.NeuralNetworkImageSizeRange()
    img_size_ranges.add_height_range((64, -1))
    img_size_ranges.add_width_range((64, -1))
    flexible_shape_utils.update_image_size_range(
        spec, feature_name=INPUT_NAME, size_range=img_size_ranges)
    flexible_shape_utils.update_image_size_range(
        spec, feature_name=OUTPUT_NAME, size_range=img_size_ranges)
```

```python
net = Load_Model().eval()  # Set in eval mode!
net.load_state_dict(torch.load(MODEL_WEIGHTS_PATH, map_location=torch.device(device)))
net = net.to(device)

example_input = torch.rand(1, 3, HEIGHT, WIDTH)  # Figure out the proper size
traced_model = torch.jit.trace(net, example_input)

model_from_torch = ct.convert(
    traced_model,
    source="pytorch",
    inputs=[ct.ImageType(name=INPUT_NAME, shape=example_input.shape)],
)
# model_spec was not defined in the original snippet; presumably the spec
# is extracted and patched for flexible sizes here before rebuilding:
model_spec = model_from_torch.get_spec()
convert_flexible(model_spec)
model_from_torch = ct.models.MLModel(model_spec)
model_from_torch.save("model.mlmodel")
```



## System environment (please complete the following information):
 - coremltools version  (e.g., 3.0b5): 4.0
 - OS (e.g., MacOS, Linux): macOS Big Sur
 - macOS version (if applicable): 11.1
 - Xcode version (if applicable): 12.4
 - How you install python (anaconda, virtualenv, system): miniconda
 - python version (e.g. 3.7): 3.8.5
 - any other relevant information:
     - torch==1.6.0

## Additional context
1. Submitted bug report with source file at FB8985338.
TobyRoseman commented 1 year ago

Is this still an issue with the latest macOS and Xcode?