ultralytics / yolov5

YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
https://docs.ultralytics.com
GNU Affero General Public License v3.0

Exception thrown: Input rank is required for Unsqueeze #12794

z0rimo closed this issue 6 months ago

z0rimo commented 8 months ago

Search before asking

Question

Hello. I'm working on exporting an ONNX model from YOLOv5 and want to use it with Barracuda. The source code I used is a Jupyter notebook built in the Colab environment.

The command that created the ONNX is shown below:

!python export.py --weights ../drive/MyDrive/best.pt --img-size 415 --batch-size 1 --device cpu --simplify --dynamic --opset 9 --include onnx
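
For reference, the declared inputs of the exported graph can be checked with the onnx Python package; a small sketch (the path below is the file produced by the command above):

import onnx

model = onnx.load("best.onnx")  # file produced by export.py above

# Print each graph input with its declared dimensions; a dim_param string
# means the axis is dynamic, a dim_value means it is fixed.
for inp in model.graph.input:
    dims = [d.dim_param or d.dim_value for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)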

ONNX Download Link

When I import my model into Unity as an asset, I get an error like the one below, and the model is not readable.

The Unity version is 2022.3.12f1.

Stacktrace

Exception: Must have input rank for /model.24/Expand_1_output_0 in order to convert axis for Unsqueeze
Unity.Barracuda.Compiler.Passes.NCHWToNHWCPass.<InstantiateRewriterNCHWToNHWC>b__4_4 (Unity.Barracuda.Layer layer, Unity.Barracuda.ModelBuilder net) (at ./Library/PackageCache/com.unity.barracuda@3.0.0/Barracuda/Runtime/Core/Compiler/Passes/NCHWToNHWC/RewriterNCHWToNHWC.cs:208)
Unity.Barracuda.Compiler.Passes.NCHWToNHWCPass.Rewrite (Unity.Barracuda.Model& model) (at ./Library/PackageCache/com.unity.barracuda@3.0.0/Barracuda/Runtime/Core/Compiler/Passes/NCHWToNHWCPass.cs:139)
Unity.Barracuda.Compiler.Passes.NCHWToNHWCPass.Run (Unity.Barracuda.Model& model) (at ./Library/PackageCache/com.unity.barracuda@3.0.0/Barracuda/Runtime/Core/Compiler/Passes/NCHWToNHWCPass.cs:39)
Unity.Barracuda.Compiler.Passes.IntermediateToRunnableNHWCPass.Run (Unity.Barracuda.Model& model) (at ./Library/PackageCache/com.unity.barracuda@3.0.0/Barracuda/Runtime/Core/Compiler/Passes/IntermediateToRunnableNHWCPass.cs:38)
Unity.Barracuda.ONNX.ONNXModelConverter.Convert (Google.Protobuf.CodedInputStream inputStream) (at ./Library/PackageCache/com.unity.barracuda@3.0.0/Barracuda/Runtime/ONNX/ONNXModelConverter.cs:188)
Unity.Barracuda.ONNX.ONNXModelConverter.Convert (System.String filePath) (at ./Library/PackageCache/com.unity.barracuda@3.0.0/Barracuda/Runtime/ONNX/ONNXModelConverter.cs:98)
Unity.Barracuda.ONNXModelImporter.OnImportAsset (UnityEditor.AssetImporters.AssetImportContext ctx) (at ./Library/PackageCache/com.unity.barracuda@3.0.0/Barracuda/Editor/ONNXModelImporter.cs:65)
UnityEditor.AssetImporters.ScriptedImporter.GenerateAssetData (UnityEditor.AssetImporters.AssetImportContext ctx) (at <53ddbed73faf4fe3b980a493ab4e6639>:0)
UnityEditorInternal.InternalEditorUtility:ProjectWindowDrag(HierarchyProperty, Boolean)
UnityEngine.GUIUtility:ProcessEvent(Int32, IntPtr, Boolean&)

Asset import failed, "Assets/Models/best.onnx" > Exception: Must have input rank for /model.24/Expand_1_output_0 in order to convert axis for Unsqueeze

Thanks.

Additional

(screenshot attached)

github-actions[bot] commented 8 months ago

👋 Hello @z0rimo, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Requirements

Python>=3.8.0 with all requirements.txt installed including PyTorch>=1.8. To get started:

git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

YOLOv5 CI

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training, validation, inference, export and benchmarks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

Introducing YOLOv8 🚀

We're excited to announce the launch of our latest state-of-the-art (SOTA) object detection model for 2023 - YOLOv8 🚀!

Designed to be fast, accurate, and easy to use, YOLOv8 is an ideal choice for a wide range of object detection, image segmentation and image classification tasks. With YOLOv8, you'll be able to quickly and accurately detect objects in real-time, streamline your workflows, and achieve new levels of accuracy in your projects.

Check out our YOLOv8 Docs for details and get started with:

pip install ultralytics

glenn-jocher commented 8 months ago

@z0rimo hello! It seems like you're encountering an issue with exporting a YOLOv5 model to ONNX and subsequently using it with Unity Barracuda. The error message you're seeing, "Must have input rank for /model.24/Expand_1_output_0 in order to convert axis for Unsqueeze," suggests there might be a compatibility issue between the exported ONNX model and the Barracuda importer.

Here are a few steps you can take to troubleshoot and potentially resolve this issue:

  1. Ensure Compatibility: Verify that the version of YOLOv5 you're using is compatible with the version of Unity Barracuda. Sometimes, newer features or layers in YOLOv5 might not be fully supported by Barracuda.

  2. ONNX Export Parameters: Double-check the parameters used during the ONNX export process. The --simplify flag can help reduce the model complexity, but it's important to ensure that the --opset version is compatible with Barracuda. You might want to experiment with different --opset versions if possible.

  3. Model Inspection: Use ONNX tools (like Netron) to inspect the exported ONNX model. This can help you identify whether the issue lies within a specific layer or operation that might not be supported or correctly interpreted by Barracuda (a small inspection sketch follows this list).

  4. Barracuda Version: Ensure you're using the latest version of Unity Barracuda, as newer versions might have improved support for ONNX models and operations.

  5. Community and Documentation: Check the Ultralytics Docs (https://docs.ultralytics.com/yolov5/) and Unity Barracuda forums or documentation for similar issues or guidance. Sometimes, specific workarounds or solutions might be available for known issues.

  6. Simplify the Model: If possible, try simplifying your model architecture or reducing the complexity of certain operations that might be causing the issue.
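
Following up on point 3, here is a minimal inspection sketch using the onnx Python package (with opset 9, as used in your export, Unsqueeze still carries its axes as a node attribute rather than an input); Netron gives the same view graphically:

import onnx

model = onnx.load("best.onnx")  # path of the exported model, adjust as needed

# List every Unsqueeze node with its inputs and its "axes" attribute so the
# node around /model.24/Expand_1_output_0 can be located without Netron.
for node in model.graph.node:
    if node.op_type == "Unsqueeze":
        axes = [list(a.ints) for a in node.attribute if a.name == "axes"]
        print(node.name, "inputs:", list(node.input), "axes:", axes)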

If after trying these steps you're still facing issues, please provide more detailed information about your model architecture and the specific versions of YOLOv5 and Unity Barracuda you're using. This can help in diagnosing the problem more effectively.

Remember, the YOLO community and the Ultralytics team are here to support you. However, the complexity of integrating models with third-party platforms like Unity can sometimes lead to challenges that are outside our direct control. We'll do our best to assist you based on the information provided.

z0rimo commented 8 months ago

@glenn-jocher Hello! Thank you for your response.

  1. The YOLOv5 model I used is YOLOv5s. How can I check whether that model is compatible with the Barracuda version? I couldn't find this on the official website.

  2. I tried to create a new ONNX after deleting the --simplify flag, but I got the same error. For opset, I got the following error when using 8 or lower. If I set it to 10 or higher and import the result, the same Unsqueeze problem occurs.

Stacktrace

export: data=data/coco128.yaml, weights=['../drive/MyDrive/best.pt'], imgsz=[415], batch_size=1, device=cpu, half=False, inplace=False, keras=False, optimize=False, int8=False, per_tensor=False, dynamic=True, simplify=True, opset=8, verbose=False, workspace=4, nms=False, agnostic_nms=False, topk_per_class=100, topk_all=100, iou_thres=0.45, conf_thres=0.25, include=['onnx']
YOLOv5 🚀 v7.0-290-gb2ffe055 Python-3.10.12 torch-2.1.0+cu121 CPU

Fusing layers... 
YOLOv5s summary: 157 layers, 7180036 parameters, 0 gradients, 16.3 GFLOPs
WARNING ⚠️ --img-size 415 must be multiple of max stride 32, updating to 416
WARNING ⚠️ --img-size 415 must be multiple of max stride 32, updating to 416

PyTorch: starting from ../drive/MyDrive/best.pt with output shape (1, 10647, 68) (14.0 MB)

ONNX: starting export with onnx 1.15.0...
ONNX: export failure ❌ 0.3s: Unsupported: ONNX export of operator upsample_nearest2d, torch._C.Value (output_size) indexing. Please feel free to request support or submit a pull request on PyTorch GitHub: https://github.com/pytorch/pytorch/issues
  3. Here's how I verified the model, but if you have a better way, let me know and I'll try it:
import onnx

onnx_model_path = '/content/gdrive/MyDrive/best.onnx'
onnx_model = onnx.load(onnx_model_path)

onnx.checker.check_model(onnx_model)
print('The model is checked and valid!')
  4. I am using version 3.0.0 of Barracuda. As far as I know, this is the most recent version.

  5. I've looked up several similar issues, but haven't found any helpful answers other than to set opset to 9.

  6. I would like to simplify the model a bit more, but it is difficult to change the structure since I have just started learning.

My current guess is that the problem is caused by the Unsqueeze operation needing an input dimension (rank) that Unity Barracuda cannot determine automatically. I have tried defining the input for /model.24/Expand_1_output_0 manually, but I still get the same error.
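
To illustrate what I mean by adding rank information programmatically, this is the kind of thing I have in mind, using ONNX shape inference (a rough sketch; whether Barracuda actually picks up the annotations is exactly what I'm unsure about):

import onnx
from onnx import shape_inference

model = onnx.load("best.onnx")

# Propagate shapes through the graph so intermediate tensors such as
# /model.24/Expand_1_output_0 carry explicit rank/shape information,
# then save the annotated model and re-import it into Unity.
inferred = shape_inference.infer_shapes(model)
onnx.save(inferred, "best_shaped.onnx")  # arbitrary output name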

Thanks.

glenn-jocher commented 8 months ago

Hello again @z0rimo, and thank you for the detailed follow-up. It seems you've done a thorough job trying to troubleshoot the issue. Let's address your points:

  1. Compatibility Check: Unfortunately, there isn't a straightforward way to check compatibility between YOLOv5 models and Unity Barracuda versions directly. This often involves trial and error or checking the Barracuda release notes for any mentions of ONNX opset version support or specific layer support improvements.

  2. ONNX Export and Opset Version: The error you encountered with opset 8 and the issues with --simplify and higher opset versions suggest that the problem might indeed be related to specific operations not being supported or handled differently in Barracuda. The upsample_nearest2d error with opset 8 indicates that this version is not suitable for your model's architecture.

  3. Model Verification: Your approach to verifying the model with ONNX's checker is correct and a good practice. It ensures that the model is structurally sound and adheres to the ONNX specifications. Unfortunately, it doesn't guarantee compatibility with specific frameworks like Unity Barracuda.

  4. Barracuda Version: Using the latest version of Barracuda is the best practice. However, as you've noticed, even the latest versions may have limitations regarding ONNX support.

  5. Similar Issues: It's not uncommon to encounter unique challenges when working with cutting-edge tools and models. The community and documentation can sometimes lag behind the latest developments.

  6. Model Simplification: I understand that modifying the model architecture might not be feasible at your current learning stage. However, it's worth noting that the complexity of certain operations, like Unsqueeze, can indeed cause compatibility issues with frameworks like Barracuda that may expect explicit input dimensions.

Given your current situation, here are a few additional suggestions:
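
For example, one quick sanity check is to run the exported file through onnxruntime outside Unity; if it executes there, the problem is isolated to the Barracuda importer rather than the export itself. A minimal sketch, assuming the CPU execution provider and the 1x3x416x416 input shape implied by your export log:

import numpy as np
import onnxruntime as ort

# Load the exported model on the CPU execution provider
sess = ort.InferenceSession("best.onnx", providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]

# Dummy input matching the shape reported during export (1x3x416x416)
x = np.zeros((1, 3, 416, 416), dtype=np.float32)
outputs = sess.run(None, {inp.name: x})
print(inp.name, "->", [o.shape for o in outputs])

Another avenue worth trying is re-exporting with a fixed input shape (dropping --dynamic), since static shapes tend to be easier for Barracuda's NCHW-to-NHWC conversion pass to handle.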

Your dedication to resolving this issue is commendable, and I hope these suggestions provide some avenues for you to explore. Remember, challenges like these are part of the learning process and contribute to the broader knowledge base of the community.

github-actions[bot] commented 7 months ago

👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.

For additional resources and information, please see the Ultralytics Docs at https://docs.ultralytics.com.

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLO 🚀 and Vision AI ⭐