torinos-yt / NNOnnx

Using CUDA for Faster Machine Learning Inference on Unity
MIT License

NNOnnx

NNOnnx is an alternative to Barracuda that provides even faster machine learning inference in Unity, in limited situations, by using ONNX Runtime and the CUDA API.

By using CUDA's Graphics Interoperability feature, NNOnnx uses GPU resources such as GraphicsBuffer and Texture directly as CUDA resources, without copying them through the CPU. This is especially useful for models that require high-resolution image input.
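As a rough illustration of this interop path, the sketch below registers a Direct3D 11 buffer (the kind of native resource that backs a Unity GraphicsBuffer on Windows) and maps it as a CUDA device pointer. This is a hedged sketch of the general CUDA graphics-interop pattern, not NNOnnx's actual internals; the function and variable names are illustrative.

```cpp
#include <cuda_runtime.h>
#include <cuda_d3d11_interop.h>
#include <d3d11.h>

// Hypothetical helper: expose a D3D11 buffer (e.g. the native pointer behind
// a Unity GraphicsBuffer) to CUDA without any CPU-side copy.
float* MapUnityBufferToCuda(ID3D11Buffer* d3dBuffer,
                            cudaGraphicsResource_t* resource,
                            size_t* mappedBytes)
{
    // Register the D3D11 resource with CUDA (typically done once, up front).
    cudaGraphicsD3D11RegisterResource(resource, d3dBuffer,
                                      cudaGraphicsRegisterFlagsNone);

    // Map it for this frame; the underlying GPU memory is shared, not copied.
    cudaGraphicsMapResources(1, resource);

    void* devPtr = nullptr;
    cudaGraphicsResourceGetMappedPointer(&devPtr, mappedBytes, *resource);

    // devPtr can now be bound as a CUDA input tensor for inference.
    return static_cast<float*>(devPtr);
}

// After inference completes: cudaGraphicsUnmapResources(1, resource);
```

The key point is that mapping hands ONNX Runtime a device pointer into memory the renderer already owns, which is what removes the GPU-to-CPU round trip for large image inputs.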

Compared to Unity Barracuda, this enables inference on a wider range of ONNX models and faster runtime speeds on PC platforms where CUDA is available.

NNOnnx is not intended to be a full wrapper around ONNX Runtime for Unity. If you want the full functionality of ONNX Runtime, use the Microsoft.ML.OnnxRuntime NuGet package.

System Requirements

For more information on CUDA, cuDNN, and TensorRT version compatibility, please check here and here. The easiest way to get these is to install the latest Azure Kinect Body Tracking SDK.

Install

NNOnnx uses the Scoped registry feature of Package Manager for installation. Open the Package Manager page in the Project Settings window and add the following entry to the Scoped Registries list:
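The exact registry name, URL, and scope come from the project's own instructions; as a placeholder illustration only, a scoped registry entry in `Packages/manifest.json` has this general shape (the values below are examples, not NNOnnx's actual registry):

```json
{
  "scopedRegistries": [
    {
      "name": "Example Registry",
      "url": "https://registry.example.com",
      "scopes": [ "com.example" ]
    }
  ],
  "dependencies": {}
}
```

Editing this list through Project Settings > Package Manager writes the same entries into `Packages/manifest.json`, so either route works.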

Now you can install the package from the My Registries page in the Package Manager window.

Related Project