The output of your .mlpackage should be the same as your PyTorch model's, so I would suggest checking the documentation for your PyTorch model.
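One way to sanity-check that is to run the converted model directly from Python and compare the raw tensors against the PyTorch output. A rough sketch — the input name and the 800×800 size here are assumptions based on your conversion setup:

```python
import coremltools as ct
import numpy as np
from PIL import Image

# Placeholder input; the size must match what the model was converted with
img = Image.open("test.jpg").resize((800, 800))

# Running predict() requires macOS
mlmodel = ct.models.MLModel("yolox.mlpackage")
coreml_out = mlmodel.predict({"image": img})  # "image" = input name used in ct.convert

# Inspect the raw outputs; for the same preprocessed input they should
# line up with what the PyTorch model returns
for name, value in coreml_out.items():
    print(name, np.asarray(value).shape)
```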
Since this isn't really a coremltools question, I'm going to close this issue. You can ask Xcode-related questions in our developer forum: https://developer.apple.com/forums/.
I have a YOLOX Object Detection model I successfully converted to a .mlpackage using coremltools.
My issue is interpreting the output. When I open the .mlpackage in Xcode, I can see the model output is a MultiArray (Float32 1 x 13125 x 15). How do I interpret that output, or get easy-to-use outputs like I do from a model trained with CreateML? I'd like to have a model Preview tab, a confidence output, and a coordinates output. This is roughly how I've converted my model:
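(A sketch of the conversion — the model loading and the 800×800 input size are placeholders for my actual setup:)

```python
import torch
import coremltools as ct

# Placeholder: the real code builds the model from the YOLOX repo
# and loads my trained checkpoint
model = load_yolox_model()  # hypothetical helper
model.eval()

# Trace with a fixed input size (800x800 is an assumption; it matches the
# 13125 predictions: 100^2 + 50^2 + 25^2 grid cells at strides 8/16/32)
example_input = torch.rand(1, 3, 800, 800)
traced_model = torch.jit.trace(model, example_input)

# Convert the traced model to an ML Program and save as .mlpackage
mlmodel = ct.convert(
    traced_model,
    inputs=[ct.ImageType(name="image", shape=example_input.shape)],
    convert_to="mlprogram",
)
mlmodel.save("yolox.mlpackage")
```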
Is there a guide similar to https://apple.github.io/coremltools/docs-guides/source/classifiers.html but for object detection models? It doesn't appear that there is an ObjectDetectionConfig. How do I set the class names for object detection? Any help/guidance would be appreciated!
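For reference, the closest thing I've found to a ClassifierConfig equivalent is what other exporters (e.g. YOLOv5's Core ML export) do: wrap the detector in a pipeline whose last stage is a NonMaximumSuppression model, and attach the class labels to that stage. A rough sketch — the feature names and thresholds are placeholders, and the spec still needs its input/output descriptions plus a decode step from the raw 1 x 13125 x 15 tensor:

```python
import coremltools as ct

class_names = ["car", "person", "dog"]  # placeholder labels

# Build a bare NonMaximumSuppression model spec
nms_spec = ct.proto.Model_pb2.Model()
nms_spec.specificationVersion = 5
nms = nms_spec.nonMaximumSuppression

# Names of the decoded outputs feeding NMS (placeholders):
# raw_confidence: (num_boxes, num_classes), raw_coordinates: (num_boxes, 4)
nms.confidenceInputFeatureName = "raw_confidence"
nms.coordinatesInputFeatureName = "raw_coordinates"
nms.confidenceOutputFeatureName = "confidence"
nms.coordinatesOutputFeatureName = "coordinates"
nms.iouThreshold = 0.45
nms.confidenceThreshold = 0.25

# This appears to be what stands in for ClassifierConfig in detection models
nms.stringClassLabels.vector.extend(class_names)
```

As far as I can tell, a pipeline ending in NMS (combined via ct.models.pipeline.Pipeline) is also what makes Xcode show the Preview tab with confidence and coordinates outputs, the way a CreateML-trained detector does.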