ultralytics / ultralytics

Ultralytics YOLO11 🚀
https://docs.ultralytics.com
GNU Affero General Public License v3.0

New ONNX format #18963

Open ankhafizov opened 1 week ago

ankhafizov commented 1 week ago

Search before asking

Question

Hello! I am converting a YOLO model with this code:

from ultralytics import YOLO

OPSET = 14
best_model = YOLO("runs/detect/train/weights/best.pt")
best_model.model.eval()

# first way of conversion
best_model.export(format="onnx", opset=OPSET, dynamic=True)

In earlier YOLO versions this gave me (screenshot from the Netron app):

Image

and I loaded this model into a Triton server using this config:

name: "person_detector"
platform: "onnxruntime_onnx"
max_batch_size: 1
input [
  {
    name: "images"
    data_type: TYPE_FP32
    format: FORMAT_NCHW
    dims: [3, 640, 640]
  }
]
output [
  {
    name: "output0"
    data_type: TYPE_FP32
    dims: [-1, -1]
  }
]

but after the recent updates, the model output changed to:

Image

and the Triton config above no longer works.

Why?

Additional

No response

UltralyticsAssistant commented 1 week ago

👋 Hello @ankhafizov, thank you for your interest in Ultralytics 🚀! We recommend visiting the Docs for guidance on exporting models and troubleshooting, including examples for Python and CLI usage.

If this is a 🐛 Bug Report, please provide a minimum reproducible example (MRE) to help us debug it. This ensures we can reproduce any issues you're encountering, especially concerning changes to ONNX export behavior in recent updates.

If this is a ❓ Question, please provide more context about your setup.

It seems the ONNX model output structure has changed between versions. We suggest you first check if you're running the latest ultralytics release by upgrading all dependencies in a clean Python>=3.8 environment with PyTorch>=1.8:

pip install -U ultralytics

Resources for Debugging and Community Support

Join us in the Ultralytics community for discussions and support.

Verified Environments for Testing

To verify ONNX export changes, we recommend testing in one of the verified environments, which are pre-configured with all dependencies.

If you suspect your issue relates to Triton configuration, it's possible the ONNX model's output dimensions or format have changed. Ensure your Triton configuration aligns with the model outputs, and feel free to share any relevant error logs, configs, or details here for further clarification.

An Ultralytics engineer will review this issue and provide additional insights soon. Thank you for being part of our community 😊!

Y-T-G commented 1 week ago

You can check the guide on how to load an ONNX model into Triton:

https://docs.ultralytics.com/guides/triton-inference-server/

You don't need to specify the input or output sections; Triton's ONNX Runtime backend can derive them from the model.
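As a minimal sketch of that advice (reusing the model name from the question; not a definitive config), the `config.pbtxt` can drop the input and output blocks entirely and let Triton auto-complete them from the ONNX file:

```
# Minimal config.pbtxt; with auto-complete enabled, Triton fills in the
# input/output tensors (names, dtypes, dims) from the ONNX model itself.
name: "person_detector"
platform: "onnxruntime_onnx"
```

This way the config stays valid even when a new ultralytics release changes the exported output shapes.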