z0rimo closed this issue 6 months ago
👋 Hello @z0rimo, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.
If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.
If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.
Python>=3.8.0 with all requirements.txt installed including PyTorch>=1.8. To get started:
git clone https://github.com/ultralytics/yolov5 # clone
cd yolov5
pip install -r requirements.txt # install
YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):
If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training, validation, inference, export and benchmarks on macOS, Windows, and Ubuntu every 24 hours and on every commit.
We're excited to announce the launch of our latest state-of-the-art (SOTA) object detection model for 2023 - YOLOv8 🚀!
Designed to be fast, accurate, and easy to use, YOLOv8 is an ideal choice for a wide range of object detection, image segmentation and image classification tasks. With YOLOv8, you'll be able to quickly and accurately detect objects in real-time, streamline your workflows, and achieve new levels of accuracy in your projects.
Check out our YOLOv8 Docs for details and get started with:
pip install ultralytics
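For anyone evaluating it, here is a minimal, purely illustrative sketch (not part of the original announcement) of loading a pretrained YOLOv8 model with the ultralytics package, running inference, and exporting to ONNX:
from ultralytics import YOLO

# Load a pretrained YOLOv8 nano detection model (weights download on first use)
model = YOLO("yolov8n.pt")

# Run inference on a sample image and inspect the detected boxes
results = model("https://ultralytics.com/images/bus.jpg")
print(results[0].boxes)

# Optionally export to ONNX for use outside of Python
model.export(format="onnx")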
@z0rimo hello! It seems like you're encountering an issue with exporting a YOLOv5 model to ONNX and subsequently using it with Unity Barracuda. The error message you're seeing, "Must have input rank for /model.24/Expand_1_output_0 in order to convert axis for Unsqueeze," suggests there might be a compatibility issue between the exported ONNX model and the Barracuda importer.
Here are a few steps you can take to troubleshoot and potentially resolve this issue:
Ensure Compatibility: Verify that the version of YOLOv5 you're using is compatible with the version of Unity Barracuda. Sometimes, newer features or layers in YOLOv5 might not be fully supported by Barracuda.
ONNX Export Parameters: Double-check the parameters used during the ONNX export process. The --simplify flag can help reduce model complexity, but it's important to ensure that the --opset version is compatible with Barracuda. You might want to experiment with different --opset versions if possible.
Model Inspection: Use ONNX tools (like Netron) to inspect the exported ONNX model. This can help you identify whether the issue lies within a specific layer or operation that might not be supported or correctly interpreted by Barracuda (see the inspection sketch after this list).
Barracuda Version: Ensure you're using the latest version of Unity Barracuda, as newer versions might have improved support for ONNX models and operations.
Community and Documentation: Check the Ultralytics Docs (https://docs.ultralytics.com/yolov5/) and Unity Barracuda forums or documentation for similar issues or guidance. Sometimes, specific workarounds or solutions might be available for known issues.
Simplify the Model: If possible, try simplifying your model architecture or reducing the complexity of certain operations that might be causing the issue.
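As a concrete starting point for the export-parameter and inspection points above, here is a minimal sketch using the onnx Python package (the best.onnx path is just a placeholder for your exported file). It prints the opset the model was exported with and the operator types it contains, which makes potentially unsupported layers easier to spot:
import onnx

model = onnx.load("best.onnx")  # placeholder path to your exported model

# Print the opset version(s) the model declares
for opset in model.opset_import:
    print(opset.domain or "ai.onnx", opset.version)

# List the unique operator types in the graph; ops like Unsqueeze, Expand or
# Resize are worth cross-checking against Barracuda's supported-operator list
print(sorted({node.op_type for node in model.graph.node}))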
If after trying these steps you're still facing issues, please provide more detailed information about your model architecture and the specific versions of YOLOv5 and Unity Barracuda you're using. This can help in diagnosing the problem more effectively.
Remember, the YOLO community and the Ultralytics team are here to support you. However, the complexity of integrating models with third-party platforms like Unity can sometimes lead to challenges that are outside our direct control. We'll do our best to assist you based on the information provided.
@glenn-jocher Hello! Thank you for your response.
The YOLOv5 model I used is yolov5s. How can I check whether that model is compatible with my Barracuda version? I couldn't find this information on the official website.
I tried creating a new ONNX file after removing the --simplify flag, but I got the same error. For opset, I got the following error when using 8 or lower; if I use 10 or higher and import it, the same Unsqueeze problem occurs.
Stacktrace
export: data=data/coco128.yaml, weights=['../drive/MyDrive/best.pt'], imgsz=[415], batch_size=1, device=cpu, half=False, inplace=False, keras=False, optimize=False, int8=False, per_tensor=False, dynamic=True, simplify=True, opset=8, verbose=False, workspace=4, nms=False, agnostic_nms=False, topk_per_class=100, topk_all=100, iou_thres=0.45, conf_thres=0.25, include=['onnx']
YOLOv5 🚀 v7.0-290-gb2ffe055 Python-3.10.12 torch-2.1.0+cu121 CPU
Fusing layers...
YOLOv5s summary: 157 layers, 7180036 parameters, 0 gradients, 16.3 GFLOPs
WARNING ⚠️ --img-size 415 must be multiple of max stride 32, updating to 416
WARNING ⚠️ --img-size 415 must be multiple of max stride 32, updating to 416
PyTorch: starting from ../drive/MyDrive/best.pt with output shape (1, 10647, 68) (14.0 MB)
ONNX: starting export with onnx 1.15.0...
ONNX: export failure ❌ 0.3s: Unsupported: ONNX export of operator upsample_nearest2d, torch._C.Value (output_size) indexing. Please feel free to request support or submit a pull request on PyTorch GitHub: https://github.com/pytorch/pytorch/issues
import onnx

# Load the exported model and run the ONNX structural checker on it
onnx_model_path = '/content/gdrive/MyDrive/best.onnx'
onnx_model = onnx.load(onnx_model_path)
onnx.checker.check_model(onnx_model)
print('The model is checked and valid!')
I am using version 3.0.0 of Barracuda. As far as I know, this is the most recent version.
I've looked up several similar issues, but haven't found any helpful answers other than to set opset to 9.
I would like to simplify the model a bit more, but it is difficult to change the structure since I have just started learning.
My current guess is that the problem is caused by the Unsqueeze operation requiring an input rank that Unity Barracuda cannot determine automatically. I defined the input manually for /model.24/Expand_1_output_0, but I still get the same error.
Thanks.
Hello again @z0rimo, and thank you for the detailed follow-up. It seems you've done a thorough job trying to troubleshoot the issue. Let's address your points:
Compatibility Check: Unfortunately, there isn't a straightforward way to check compatibility between YOLOv5 models and Unity Barracuda versions directly. This often involves trial and error or checking the Barracuda release notes for any mentions of ONNX opset version support or specific layer support improvements.
ONNX Export and Opset Version: The error you encountered with opset 8 and the issues with --simplify and higher opset versions suggest that the problem might indeed be related to specific operations not being supported or handled differently in Barracuda. The upsample_nearest2d error with opset 8 indicates that this version is not suitable for your model's architecture.
Model Verification: Your approach to verifying the model with ONNX's checker is correct and a good practice. It ensures that the model is structurally sound and adheres to the ONNX specifications. Unfortunately, it doesn't guarantee compatibility with specific frameworks like Unity Barracuda.
Barracuda Version: Using the latest version of Barracuda is the best practice. However, as you've noticed, even the latest versions may have limitations regarding ONNX support.
Similar Issues: It's not uncommon to encounter unique challenges when working with cutting-edge tools and models. The community and documentation can sometimes lag behind the latest developments.
Model Simplification: I understand that modifying the model architecture might not be feasible at your current learning stage. However, it's worth noting that the complexity of certain operations, like Unsqueeze, can indeed cause compatibility issues with frameworks like Barracuda that may expect explicit input dimensions.
Given your current situation, here are a few additional suggestions:
Manual Modification: Since you've identified the Unsqueeze operation as a potential source of the issue, consider manually editing the ONNX model to specify the input dimensions explicitly. Tools like Netron (for visualization) and ONNX GraphSurgeon (for modification) might help, though this approach requires a deep understanding of the model's architecture and ONNX format. A minimal sketch of this idea follows this list.
Community and Support: Continue to seek insights from both the YOLOv5 and Unity Barracuda communities. Specific issues like yours might have been encountered and solved by others.
Barracuda Support: Consider reaching out to the Unity Barracuda team or community forums. They might offer insights or workarounds specific to Barracuda's handling of ONNX models.
Alternative Approaches: If the issue persists and you're unable to find a solution, exploring alternative deployment or model conversion strategies might be necessary. For instance, running the model in a different environment or using a different model format compatible with Unity.
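To make the manual-modification suggestion above more concrete, here is a purely illustrative sketch using the onnx Python package (the path and the 416x416 NCHW input shape are assumptions based on your export settings). It pins the dynamic input dimensions to fixed values and re-runs shape inference so that intermediate tensors such as /model.24/Expand_1_output_0 end up with a known rank, which is what the Barracuda importer appears to be missing. Re-exporting without the --dynamic flag should achieve a similar result at the source.
import onnx
from onnx import shape_inference

model = onnx.load("best.onnx")  # placeholder path to your exported model

# Pin the dynamic input dimensions left by --dynamic to fixed values
# (assumed NCHW shape for a 416x416 export) so downstream ops have a known rank
input_shape = model.graph.input[0].type.tensor_type.shape
for dim, value in zip(input_shape.dim, [1, 3, 416, 416]):
    dim.ClearField("dim_param")  # drop the symbolic dimension name, if any
    dim.dim_value = value

# Propagate shapes through the graph, re-validate, and save the modified model
model = shape_inference.infer_shapes(model)
onnx.checker.check_model(model)
onnx.save(model, "best_static.onnx")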
Your dedication to resolving this issue is commendable, and I hope these suggestions provide some avenues for you to explore. Remember, challenges like these are part of the learning process and contribute to the broader knowledge base of the community.
👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.
For additional resources and information, please see the links below:
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!
Thank you for your contributions to YOLO 🚀 and Vision AI ⭐
Search before asking
Question
Hello. I'm working on creating an ONNX model using YOLOv5 and want to use it with Barracuda. The source code I used is a Jupyter notebook built in the Colab environment.
The command that created the ONNX file is shown below.
!python export.py --weights ../drive/MyDrive/best.pt --img-size 415 --batch-size 1 --device cpu --simplify --dynamic --simplify --opset 9 --include onnx
ONNX Download Link
When I import my model into Unity as an asset, I get an error like the one below, and the model is not readable.
Unity version is 2022.3.12f1.
Stacktrace
Thanks.
Additional