glenn-jocher opened 5 years ago
In the mlmodel spec file, simply change the datatype of that multiarray to DOUBLE. I don’t have a code snippet handy, but it’s explained in my book Core ML Survival Guide.
Ah, thank you! I found that this code modifies the data types successfully, and the error went away. Yes, I might have a go at your book!
import coremltools

spec = coreml_model.get_spec()
for i in range(2):
    spec.description.output[i].type.multiArrayType.dataType = \
        coremltools.proto.FeatureTypes_pb2.ArrayFeatureType.ArrayDataType.Value('DOUBLE')
# get_spec() returns a copy, so rebuild the model from the edited spec
coreml_model = coremltools.models.MLModel(spec)
I noticed my array shapes were off compared to your format as well, so I added a squeeze op (to remove the first dimension) and a transpose op (both in PyTorch) to align to your example. Now my mlmodel looks healthier:
Glad to hear you managed to fix it. You don't actually need to use the squeeze, you can simply drop the first dimension in the spec. (But using a squeeze will work fine too, of course.)
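To illustrate dropping that first dimension in the spec, here is a hypothetical sketch using a plain Python list as a stand-in for the repeated `shape` field at `spec.description.output[i].type.multiArrayType.shape` (that protobuf field supports `del` like a list; in a real script you would rebuild the model with `coremltools.models.MLModel(spec)` afterwards):

```python
def drop_leading_unit_dim(shape):
    """Remove a leading batch dimension of 1, e.g. [1, 507, 85] -> [507, 85].

    Stand-in for editing spec.description.output[i].type.multiArrayType.shape
    directly, instead of adding a squeeze op on the PyTorch side.
    """
    if len(shape) > 2 and shape[0] == 1:
        del shape[0]  # mutate in place, as you would with the protobuf field
    return shape

shape = [1, 507, 85]
drop_leading_unit_dim(shape)
print(shape)  # [507, 85]
```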
Hello, thanks for the great repo. I'm exporting a YOLOv3-tiny model from PyTorch > ONNX > CoreML using the instructions from your article.
The correctly-compiled mlmodel is here: https://storage.googleapis.com/ultralytics/yolov3-tiny-float32.mlmodel
Everything succeeds except that my mlmodel outputs Float32, and when I attempt to tie its outputs to the NMS inputs I get the following error.
Casting the output to torch.float64 in PyTorch unfortunately produces an error during the CoreML conversion (NotImplementedError: Unsupported ONNX ops of type: Cast), so the conversion needs to be done after the existing mlmodel (but before NMS) in CoreML somehow. What's the easiest way to do this? Thanks! (BTW, I did all the box decoding natively in PyTorch; it's only NMS that I need to pipeline.)