microsoft / onnxruntime

ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
https://onnxruntime.ai
MIT License

How to build for multiple execution providers? #9756

Open shauryagoel opened 2 years ago

shauryagoel commented 2 years ago

Describe the bug I am trying to build onnxruntime with the TensorRT and OpenVINO execution providers together, so that I can select at run time which EP to use. However, the build fails with the following error:

```
/opt/onnxruntime/onnxruntime/core/providers/shared_library/provider_interfaces.h:8:10: fatal error: cuda_runtime.h: No such file or directory
 #include <cuda_runtime.h>
```

I am, however, able to successfully build each of the EPs separately.

Urgency None

System information

To Reproduce

Expected behavior Both EPs should be built together by a single build command.
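
For reference, an untested sketch of what a combined invocation might look like (all paths are placeholders). Since the TensorRT EP depends on CUDA, passing `--use_cuda` with valid `--cuda_home`/`--cudnn_home` alongside `--use_tensorrt` is plausibly what the reported `cuda_runtime.h` error is asking for:

```sh
# Untested sketch of a combined build; all paths are placeholders.
# The TensorRT EP depends on CUDA, so --use_cuda with valid
# --cuda_home/--cudnn_home is likely required alongside --use_tensorrt.
./build.sh --config Release --build_shared_lib --parallel \
    --use_cuda --cuda_home /usr/local/cuda --cudnn_home /usr/local/cuda \
    --use_tensorrt --tensorrt_home /opt/tensorrt \
    --use_openvino CPU_FP32
```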

hariharans29 commented 2 years ago

Not sure if anyone has ever tried building the TensorRT EP and the OpenVINO EP into a single build. Tagging @jywu-msft to see if he has some thoughts on this matter.

stale[bot] commented 2 years ago

This issue has been automatically marked as stale due to inactivity and will be closed in 7 days if no further activity occurs. If further support is needed, please provide an update and/or more details.

wanduoz commented 2 years ago

I met the same problem. I installed onnxruntime using `pip install onnxruntime-gpu`.


Then I installed the ORT OpenVINO package using `pip install onnxruntime-openvino`.


Can I build them together and change the hardware acceleration by choosing a different provider?
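
Assuming a single build or package that actually contains both EPs, per-session selection would look something like this minimal sketch (the model path is a placeholder):

```python
import onnxruntime as ort

# Show which EPs are compiled into the installed package.
print(ort.get_available_providers())

# Choose the EP per session; later entries act as fallbacks.
sess = ort.InferenceSession(
    "model.onnx",  # placeholder path
    providers=["OpenVINOExecutionProvider", "CPUExecutionProvider"],
)
```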

venki-thiyag commented 2 years ago

On Windows, is it possible to build with both DirectML and OpenVINO? Also, with OpenVINO, the CPU_FP32, GPU_FP32, and GPU_FP16 targets all need to be present.

Any ideas or suggestions on this?
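
If those flags can be combined at all, a Windows invocation might look like the sketch below. This is an untested assumption; note also that `--use_openvino` takes a single hardware target per invocation, so one binary covering CPU_FP32, GPU_FP32, and GPU_FP16 may not be possible with this script:

```bat
:: Untested sketch of a combined Windows build.
:: --use_openvino takes one hardware target per invocation,
:: so a single binary covering all three targets may not be possible.
.\build.bat --config Release --build_shared_lib --parallel ^
    --use_dml ^
    --use_openvino CPU_FP32
```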

2catycm commented 1 year ago

Very good issue. That's the problem I am facing now; I want all EPs together.

Fafa87 commented 1 year ago

The same on my side: to do GPU inference and handle both NVIDIA and Intel Xe GPUs efficiently, you need both DirectML and OpenVINO. But if you install the latter, you can no longer select the former.

amblamps commented 2 months ago

Same here, I would like to build with CUDA, ROCm, and DirectML support.

senstar-hsoleimani commented 1 month ago

I would like to build with CUDA and OpenVINO support. Is there any way to do this yet? I have an NVIDIA GPU and an Intel CPU, and I would like to switch between the two when running my ONNX model.
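
A minimal sketch of that kind of switching, assuming a build that exposes both EPs (`make_session` and `prefer_gpu` are hypothetical names introduced here for illustration):

```python
import onnxruntime as ort

def make_session(model_path: str, prefer_gpu: bool) -> ort.InferenceSession:
    """Pick an EP at run time based on what this build exposes."""
    available = ort.get_available_providers()
    if prefer_gpu and "CUDAExecutionProvider" in available:
        # Run on the NVIDIA GPU, falling back to CPU if needed.
        providers = [("CUDAExecutionProvider", {"device_id": 0}),
                     "CPUExecutionProvider"]
    elif "OpenVINOExecutionProvider" in available:
        # Run through OpenVINO on the Intel CPU.
        providers = ["OpenVINOExecutionProvider", "CPUExecutionProvider"]
    else:
        providers = ["CPUExecutionProvider"]
    return ort.InferenceSession(model_path, providers=providers)
```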

wwguy commented 3 weeks ago

Same here. I would like to use both DirectML and OpenVINO and make a runtime decision about which one runs my ONNX model. Currently only one can be installed, not both.