microsoft / vcpkg

C++ Library Manager for Windows, Linux, and MacOS
MIT License

[New Port Request] ONNX Runtime #20548

Open horenmar opened 3 years ago

horenmar commented 3 years ago

Library name: ONNX Runtime

Library description: ONNX Runtime is a cross-platform inference and training machine-learning accelerator.

Source repository URL: https://github.com/microsoft/onnxruntime

Project homepage (if different from the source repository):

Anything else that is useful to know when adding (such as optional features the library may have that should be included): There are a bunch of different backends and targets, e.g. CUDA, CPU, and mobile, and starting the port with all of them is unlikely to be a good idea, so the initial port could provide just the CPU one (a rough sketch of how this could map onto port features follows below).

There was a PR, #14903, which fizzled out, and related issues #16509, #14257, and #14635, which all ended up being closed for one reason or another, so I am opening this issue to have an open port request for this package.
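To make the "CPU-only first, optional backends later" idea concrete, here is a rough sketch of what a from-source portfile could look like, assuming the standard vcpkg helper functions; the REF, SHA512, and upstream option names are illustrative placeholders, not a tested recipe:

```cmake
# Hypothetical portfile.cmake sketch for a CPU-only initial port with an
# optional "cuda" feature. REF/SHA512 are placeholders; upstream option
# names would need to be checked against the pinned version.
vcpkg_from_github(
    OUT_SOURCE_PATH SOURCE_PATH
    REPO microsoft/onnxruntime
    REF v1.17.1   # placeholder tag
    SHA512 0      # placeholder, must be filled in
)

# Map vcpkg features onto upstream CMake options.
vcpkg_check_features(OUT_FEATURE_OPTIONS FEATURE_OPTIONS
    FEATURES
        cuda onnxruntime_USE_CUDA
)

# ONNX Runtime's CMake project lives in the cmake/ subdirectory.
vcpkg_cmake_configure(
    SOURCE_PATH "${SOURCE_PATH}/cmake"
    OPTIONS
        -Donnxruntime_BUILD_SHARED_LIB=ON
        ${FEATURE_OPTIONS}
)

vcpkg_cmake_install()
vcpkg_install_copyright(FILE_LIST "${SOURCE_PATH}/LICENSE")
```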

JackBoosY commented 2 years ago

We now have a port named onnxruntime-gpu; is this what you need?

nxt007 commented 2 years ago

One main issue is that this port doesn't support Linux at all. As @horenmar mentioned, some useful features are not implemented, such as CUDA and CPU execution.

horenmar commented 2 years ago

> We now have a port named onnxruntime-gpu; is this what you need?

No. It downloads old released binaries, doesn't support most platforms, is aimed at GPU (I need CPU), and so on.

JackBoosY commented 2 years ago

I'm trying to refactor this port and add a CPU port.

Dr-Electron commented 6 months ago

@JackBoosY would it now make sense to work on top of your work in #23768 again? I wrote down that I have to wait for cccl 2.2.0, but tbh I don't remember why. Probably because they planned for that version to contain all the migrated repos like cub and so on. And now they are at version 2.3. Maybe worth a try again ❤️. I would offer to help, but I probably wouldn't be of much use 😅

horenmar commented 6 months ago

There is an in-progress PR for a from-source build at #36850.

cjsdurj commented 3 months ago

Hi. Besides the CPU and CUDA EPs, other ONNX Runtime execution providers may also be needed, such as OpenVINO (to run ONNX models on Intel NPUs) and DirectML (developed by Microsoft, supporting many kinds of GPUs: NVIDIA, AMD, Intel, ...).
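If the from-source port ends up using a feature-to-option mapping like the sketch in the opening post, these EPs could presumably be exposed the same way. A minimal sketch, assuming the upstream option names below (taken from onnxruntime's cmake/CMakeLists.txt, but they would need verifying against the pinned version):

```cmake
# Hypothetical extension of the earlier feature mapping with more execution
# providers; upstream option names are assumptions, not verified.
vcpkg_check_features(OUT_FEATURE_OPTIONS FEATURE_OPTIONS
    FEATURES
        cuda      onnxruntime_USE_CUDA
        openvino  onnxruntime_USE_OPENVINO
        directml  onnxruntime_USE_DML
)
```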