microsoft / HoloLens2ForCV

Sample code and documentation for using the Microsoft HoloLens 2 for Computer Vision research.
MIT License

Is there any plan to support machine learning on HoloLens 2? #49

Open IkbeomJeon opened 4 years ago

IkbeomJeon commented 4 years ago

Hello,

Do you have any existing work or plans to support a machine learning library that uses the GPU on the HoloLens 2?

Recently, I have tried to run an ONNX model on the HoloLens 2. I was able to build and run it on the CPU (ARM architecture) using the onnxruntime (or WinML) library. But to use the GPU, these libraries require an additional GPU execution provider such as CUDA, DirectML, or OpenCL.

As you know, the CPU of the HoloLens 2 is a Snapdragon 850 (ARM-based) and the GPU is a Qualcomm Adreno 630.

And according to the following documentation, the GPU supports OpenCL 2.0, DX12, and so on (see: https://www.qualcomm.com/products/snapdragon-850-mobile-compute-platform).

So I have been looking for a way to use these APIs from ML libraries. In onnxruntime and WinML, it is possible to use the 'DirectML' execution provider, which uses DX12. Unfortunately, it supports only x64 and x86, not ARM (so it seems it could only run on the HoloLens 1).

So I would like to know whether there is any idea or plan to run an ML library on the HoloLens 2 using its GPU.

Thank you for your contributions.

kysucix commented 4 years ago

hi,

did you try to use the official Qualcomm library that supports ONNX inference?

IkbeomJeon commented 4 years ago

No. I only tried to build the 'DirectML Execution Provider' in onnxruntime (https://github.com/microsoft/onnxruntime/blob/master/docs/execution_providers/DirectML-ExecutionProvider.md). I thought it would work if it could be built for the ARM architecture, but that build configuration (onnxruntime + DML provider) is currently supported only on x86 and x64.

zc-alexfan commented 3 years ago

I have a similar question. For example, I have scene understanding models written in PyTorch, but I am not sure how hard it would be to run such a model on the HoloLens for real-time inference.