microsoft / onnxruntime

ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
https://onnxruntime.ai
MIT License

Publish the providers with the release build #7628

Open · arpadbarta-Accesa opened this issue 3 years ago

arpadbarta-Accesa commented 3 years ago

Issue/Inconvenience
Currently only "onnxruntime.dll" is published from the release build (location: {package}\runtimes\win-x64\native); no execution providers are included. Since we do need some of the providers, we have to do a build of our own.

System information
Any

Proposed improvement
It would be a great help if the above-mentioned providers could be included in the published release package(s). This would likely reduce or simplify CI/CD in many consuming projects.
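For illustration, here is a minimal C# sketch of what this would enable for a consumer of the NuGet package. It assumes the Microsoft.ML.OnnxRuntime managed API (the AppendExecutionProvider_CUDA call on SessionOptions) and a placeholder model path; the point is that such a call can only succeed when the matching native provider binaries ship next to onnxruntime.dll in runtimes\win-x64\native.

```csharp
using Microsoft.ML.OnnxRuntime;

// Sketch: registering a non-default execution provider from managed code.
// Today the published package contains only onnxruntime.dll, so this call
// fails unless the consumer has built and deployed the CUDA provider
// binaries themselves; shipping them in the package would make it work
// out of the box.
class ProviderExample
{
    static void Main()
    {
        using var options = new SessionOptions();
        options.AppendExecutionProvider_CUDA(0); // CUDA device 0

        // "model.onnx" is a placeholder path.
        using var session = new InferenceSession("model.onnx", options);
        // ... create inputs and call session.Run(...) as usual.
    }
}
```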

I hope it can be considered.

faxu commented 3 years ago

Which providers specifically are you most interested in? Since ORT supports many, we try to prioritize published packages based on popularity of use.

arpadbarta-Accesa commented 3 years ago

In our specific case we would love to have the win-x64 native bits for the following providers:

Thanks for your consideration.

dkloving commented 3 years ago

I also support this.

We would also like to see the WinML provider. Since this is a Microsoft project, I would guess you would have a much easier time getting a working build with WinML than I would, and could probably sort out any licensing headaches more easily than with TensorRT. It would be great to see ONNX Runtime become the default pathway for efficient inference on Windows, and that of course means strong support for WinML.

We have been using ONNX Runtime for deployment on Windows for a while, but we are performance-limited and need to move to a provider that can use fp16 tensor cores. Unfortunately, it looks right now like we might have to move to Linux to get this working. If ONNX Runtime distributed a working build with WinML support, it would make a huge difference for us.
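As an illustration of the path being asked for, here is a hedged C# sketch of fp16-capable GPU inference on Windows through the DirectML execution provider, the same technology that underpins WinML. It assumes the Microsoft.ML.OnnxRuntime.DirectML package, the AppendExecutionProvider_DML call on SessionOptions, and a model already converted to fp16 ("model_fp16.onnx" is a placeholder).

```csharp
using Microsoft.ML.OnnxRuntime;

// Sketch: Windows GPU inference via the DirectML execution provider,
// which can exploit fp16 tensor cores on supporting hardware.
// Assumes the Microsoft.ML.OnnxRuntime.DirectML package is referenced;
// "model_fp16.onnx" is a hypothetical model exported in fp16.
class DirectMlExample
{
    static void Main()
    {
        using var options = new SessionOptions();
        // Settings commonly recommended for the DirectML provider:
        options.EnableMemoryPattern = false;
        options.ExecutionMode = ExecutionMode.ORT_SEQUENTIAL;
        options.AppendExecutionProvider_DML(0); // default GPU adapter

        using var session = new InferenceSession("model_fp16.onnx", options);
        // ... session.Run(...) with fp16 inputs/outputs.
    }
}
```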