cj-mills / christianjmills

My personal blog
https://christianjmills.com/

posts/unity-onnxruntime-cv-plugin-tutorial/unity-integration/ #47

Open utterances-bot opened 8 months ago

utterances-bot commented 8 months ago

Christian Mills - Real-Time Object Detection in Unity with ONNX Runtime and DirectML

Learn how to integrate a native plugin within the Unity game engine for real-time object detection using ONNX Runtime.

https://christianjmills.com/posts/unity-onnxruntime-cv-plugin-tutorial/unity-integration/

saydek217 commented 8 months ago

Hello, what splendid work! First of all, I would like to thank you for sharing your knowledge in such a detailed manner; you, sir, are truly a blessing.

I would like to ask you for some advice: if I want to adapt your project to work with custom models (or other models like YOLOv8), where should I make changes? Thank you in advance!

cj-mills commented 8 months ago

Hi @saydek217,

You can follow the first two tutorials in the following series to learn how to train a YOLOX model and export it to ONNX format:

Once you have your custom model and the associated JSON colormap file, you can add them to the Unity project as discussed in this section:

The annotations for your custom dataset will likely be in a different format than the HaGRID dataset in the training tutorial, so I also have another tutorial series showing how to work with bounding box annotations in a few formats in PyTorch:

As for using a different model, like YOLOv8, you would still need to generate your own JSON colormap file to use the bounding box visualizer used in this tutorial's Unity project (or at least to have labels associated with the model predictions).
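For reference, a colormap file of this kind might look something like the following. This is a hypothetical example; the exact field names and color format depend on the loader in the tutorial's Unity project, so match whatever schema that code expects:

```json
{
  "items": [
    { "label": "person",  "color": [0.0, 0.9, 0.17] },
    { "label": "bicycle", "color": [0.87, 0.05, 0.42] },
    { "label": "car",     "color": [0.1, 0.3, 0.95] }
  ]
}
```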

Beyond that, you might need to change the input preprocessing and output post-processing steps used in the tutorial project. For example, the YOLOX ONNX model used in the tutorial includes the input normalization steps but does not include some of the post-processing steps.

You'd need to determine the preprocessing and post-processing steps included in the exported YOLOv8 ONNX model (or other object detection model) and adjust the Unity project code accordingly. You can do so by examining the code used to export the model to ONNX or with a tool like Netron.
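To illustrate what that missing post-processing can involve, below is a rough pure-Python sketch of the YOLOX box-decoding step (adding grid-cell offsets and scaling by stride). The flat output layout and the helper itself are assumptions for illustration, not code from the tutorial project:

```python
import math

def decode_yolox_outputs(raw, img_h, img_w, strides=(8, 16, 32)):
    """Decode raw YOLOX head outputs into image-space boxes.

    `raw` is assumed to be a flat list with one entry per grid cell
    across all strides, each entry being
    [x, y, w, h, objectness, *class_scores].
    Returns (x0, y0, width, height, objectness, class_scores) tuples.
    """
    boxes = []
    i = 0
    for stride in strides:
        grid_h, grid_w = img_h // stride, img_w // stride
        for gy in range(grid_h):
            for gx in range(grid_w):
                x, y, w, h, obj, *scores = raw[i]
                i += 1
                cx = (x + gx) * stride     # center x in pixels
                cy = (y + gy) * stride     # center y in pixels
                bw = math.exp(w) * stride  # width in pixels
                bh = math.exp(h) * stride  # height in pixels
                boxes.append((cx - bw / 2, cy - bh / 2, bw, bh, obj, scores))
    return boxes
```

If the exported model already bakes this decoding in (as the tutorial's model does), applying it a second time in Unity would produce garbage boxes, which is why checking the export code or Netron matters.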

saydek217 commented 8 months ago

Thank you very much for your comprehensive and detailed guidance. I greatly appreciate the effort you've put into helping not just me but many others with your thorough explanations and resources. I've started to delve into the tutorials and materials you've recommended to ensure I'm on the right track.

In the meantime, I attempted to integrate a YOLOX model that has been pre-trained on the COCO dataset (as per the documentation available at https://yolox.readthedocs.io/en/latest/demo/onnx_readme.html) into my Unity project. I also crafted a custom JSON colormap based on the guidelines and replaced both the model and the colormap as per the instructions in your Unity project setup. However, I've hit a snag - upon pressing play in Unity, even though my webcam activates and appears to function correctly, no object detections are displayed on the screen.

I've double-checked the integration steps to ensure that the model and colormap were correctly added, but it appears that there might be an issue that I'm not able to pinpoint. Could there be specific settings or configurations within Unity or the project that I need to adjust to accommodate the COCO-trained YOLOX model and my custom JSON colormap? Any insights or suggestions you might have to help troubleshoot this issue would be incredibly valuable.

Thank you once again for your time and assistance.

saydek217 commented 8 months ago

To follow up on my last question: based on your experience, how does the performance of the ONNX Runtime plugin compare to Barracuda or Sentis for running object detection models in Unity? I'm curious whether one might offer significant advantages in terms of speed or efficiency for real-time object detection scenarios.

cj-mills commented 8 months ago

Hi @saydek217,

There are probably multiple contributing factors, and I would need to investigate further to identify the issue. However, the ONNX models available from the official YOLOX resources do not include the input normalization steps, so you must apply that step separately before performing inference. Also, those ONNX models may have a fixed input resolution.

Or, there may be some weird bug or compatibility issue that I'd need to track down and fix. It might be a bit before I have time to do that.
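As a concrete illustration of applying that normalization step yourself, the pure-Python sketch below normalizes a CHW image channel by channel. The ImageNet statistics shown are only an example; check what your particular YOLOX export actually expects before inference:

```python
def normalize_chw(pixels, mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)):
    """Normalize a CHW float image with values in [0, 255].

    `pixels` is a list of 3 channel planes, each a flat list of floats.
    Each value is scaled to [0, 1], then shifted and scaled by the
    per-channel mean and std (ImageNet statistics shown as an example).
    """
    out = []
    for plane, m, s in zip(pixels, mean, std):
        out.append([((p / 255.0) - m) / s for p in plane])
    return out
```

In the Unity project, the equivalent step would run on the input texture before the tensor is handed to the inference session.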

As for your follow-up question, the main performance benefit of using ONNX Runtime over Unity's Barracuda or Sentis libraries is the specialized Execution Providers available for ONNX Runtime.

The Execution Providers provide interfaces to hardware-specific libraries like Intel's OpenVINO and NVIDIA's TensorRT. These libraries allow you to get the most out of the hardware and support lower-precision inference. The drawback is that you need separate code for each hardware platform you want your Unity application to support.
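That per-platform trade-off often comes down to choosing an execution provider at session creation time. The tiny sketch below shows one way to express a preference order with a CPU fallback; the provider names follow ONNX Runtime's naming conventions, but the helper itself is hypothetical:

```python
def pick_execution_provider(available, preferred):
    """Return the first preferred execution provider that is available.

    `available` is the set of providers the local ONNX Runtime build
    supports; `preferred` is an ordered list of providers to try.
    Falls back to the CPU provider, which is always present.
    """
    for ep in preferred:
        if ep in available:
            return ep
    return "CPUExecutionProvider"
```

A Windows build might prefer `["TensorrtExecutionProvider", "DmlExecutionProvider"]`, while an Android build would list the NNAPI provider instead, which is where the separate per-platform code paths come from.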

The main benefit of Unity's libraries is that they are cross-platform.

TadDiDio commented 7 months ago

Hello, this is amazing work! I followed this post with a custom trained YOLOv5 model and it worked great on Windows, and now I want to upload it to a Magic Leap 2 device which is Android. Am I correct in assuming that to do this I would need to replace the DirectML.dll with another one (TensorFlow Lite?)? If so would I need to also rewrite the custom dll from your other tutorial to support a different execution provider? Thanks!

cj-mills commented 7 months ago

Hi @TadDiDio,

That is correct. You can replace the DirectML Execution Provider with one that supports Android in the Visual Studio project from Part 1.

There is an Android Neural Networks API (NNAPI) execution provider, so you might be able to use that if the default CPU execution provider is too slow.

I have not used the Magic Leap 2, so I can't say how well it will work with the NNAPI execution provider.

TadDiDio commented 7 months ago

Thanks for the response! This project helped me a lot as an introduction to real time object detection!

connorhoehninfra commented 7 months ago

Thank you. I have been running into hiccups getting the DLL set up with VS Code on a Mac. I can build a DLL without any dependencies.

One issue is that NuGet is not available. Being new to C/C++, how do you recommend building on macOS with the M3 architecture?

cj-mills commented 7 months ago

@connorhoehninfra Hi Connor,

I can't provide much specific guidance for adapting this tutorial to Arm Macs as I don't have one to verify anything.

You can check the platform-specific installation instructions on ONNX Runtime's Get Started page and the documentation for the CoreML execution provider:

Unity has a little bit of documentation and a simple (but old) example project you can take a look at: