ankhafizov opened 1 week ago
👋 Hello @ankhafizov, thank you for your interest in Ultralytics 🚀! We recommend visiting the Docs for guidance on exporting models and troubleshooting, including examples for Python and CLI usage.
If this is a 🐛 Bug Report, please provide a minimum reproducible example (MRE) to help us debug it. This ensures we can reproduce any issues you're encountering, especially concerning changes to ONNX export behavior in recent updates.
If this is a ❓ Question, please provide more context about your setup, such as the version of the `ultralytics` package you are using (`pip show ultralytics` can help confirm this).

It seems the ONNX model output structure has changed between versions. We suggest you first check whether you're running the latest `ultralytics` release by upgrading all dependencies in a clean Python>=3.8 environment with PyTorch>=1.8:
pip install -U ultralytics
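As a quick sanity check after upgrading, you can also read the installed version from Python itself. This is a minimal sketch using only the standard library; it assumes nothing beyond the package name:

```python
# Look up the installed ultralytics version without importing the package
from importlib.metadata import version, PackageNotFoundError

try:
    v = version("ultralytics")
except PackageNotFoundError:
    v = None  # package is not installed in this environment

print(v)
```

If this prints an older version than the latest release, the export behavior described below may differ from current documentation.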
Join us in the Ultralytics community for discussions and support.
To verify ONNX export changes, we recommend testing in one of our pre-configured environments, which include all dependencies.
If you suspect your issue relates to Triton configuration, it's possible the ONNX model's output dimensions or format have shifted. Ensure your Triton configuration aligns with the model outputs, and feel free to share any relevant error logs, configs, or details here for further clarification.
An Ultralytics engineer will review this issue and provide additional insights soon. Thank you for being part of our community 😊!
You can check the guide on how to load an ONNX model in Triton:
https://docs.ultralytics.com/guides/triton-inference-server/
You don't need to specify input or output.
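For context, Triton's ONNX Runtime backend can auto-complete the model configuration from the ONNX file itself, so a minimal `config.pbtxt` can omit the `input` and `output` sections entirely. The model name below is illustrative, not from the original thread:

```protobuf
# Minimal config.pbtxt sketch: Triton reads input/output names, dtypes,
# and shapes directly from model.onnx, so they are omitted here.
name: "yolov8_onnx"
backend: "onnxruntime"
max_batch_size: 0
```

Depending on your Triton version, auto-completion may require starting the server with `--strict-model-config=false` (older releases) or is enabled by default unless `--disable-auto-complete-config` is passed.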
Search before asking
Question
Hello! I am converting a YOLO model with this code:
In earlier YOLO versions it gave me this output structure (screenshot from the Netron app):
and I loaded this model into a Triton server using this config:
but with later updates, the model output changed to:
and the above Triton config no longer works.
Why?
Additional
No response