Closed: weiji14 closed this issue 2 years ago.
Can you try `!mamba install -y -c conda-forge onnxruntime` to see if that does the trick? If that's successful, I'll get it added to the `gpu-pytorch` image.
> Can you try `!mamba install -y -c conda-forge onnxruntime` to see if that does the trick?
Nope, doesn't work. The conda-forge onnxruntime build seems to be CPU-only for now; need to wait for https://github.com/conda-forge/onnxruntime-feedstock/pull/7 to be merged.
I did manage to get it to work by updating cudatoolkit from 10.2 to 11.6 like so:

```
!mamba update -y cudatoolkit
!pip install onnxruntime-gpu
```
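One quick way to confirm the workaround took effect is to ask onnxruntime which execution providers its build registers; a minimal sketch (the `try`/`except` is only so it degrades gracefully where onnxruntime-gpu isn't installed):

```python
# List the execution providers registered by the installed onnxruntime build.
# On a working onnxruntime-gpu install, "CUDAExecutionProvider" should appear.
try:
    import onnxruntime as ort
    providers = ort.get_available_providers()
except ImportError:  # onnxruntime not installed in this environment
    providers = []

print(providers)
print("CUDA provider registered:", "CUDAExecutionProvider" in providers)
```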
i.e. this line in the lockfile needs to change:
Is the plan to stick with CUDA 10.2, or can the next container update use a newer CUDA version (11.x or above)?
Thanks.
We should be able to update to CUDA 11.x. I'll take a look at that this week.
Hi again, just trying to use `onnxruntime` to run a neural network, as a follow-up from https://github.com/microsoft/planetary-computer-containers/issues/32#issuecomment-1100211839. CPU execution works fine, but for some reason GPU execution isn't working.

Steps to reproduce on the `gpu-pytorch` container (restart the kernel before running the below):

so it seems to know there is a CUDA-capable GPU. But when I try to get an onnxruntime session going, it only picks up the CPU. Get a sample .onnx file, e.g. from https://media.githubusercontent.com/media/onnx/models/main/vision/object_detection_segmentation/tiny-yolov2/model/tinyyolov2-7.onnx
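The session-creation step is presumably along these lines; a hedged sketch, assuming the model file above was saved locally as `tinyyolov2-7.onnx` (the `except` branch is only so the sketch degrades gracefully where onnxruntime or the model file is absent):

```python
# Create an onnxruntime session that prefers the GPU, falling back to CPU,
# then inspect which providers the session was actually assigned.
try:
    import onnxruntime as ort

    session = ort.InferenceSession(
        "tinyyolov2-7.onnx",
        providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
    )
    active_providers = session.get_providers()
except Exception as exc:  # onnxruntime or the model file may be missing here
    active_providers = []
    print("could not create session:", exc)

print("active providers:", active_providers)
```

On a working GPU setup, `get_providers()` should list `CUDAExecutionProvider` first; seeing only `CPUExecutionProvider` matches the symptom described here.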
produces a warning:
Looking at the output of `nvidia-smi` though, the CUDA version is 11.0, which should be OK if I understand https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements correctly:

So I'm wondering if there's some other library that needs to be added to the container to make onnxruntime's GPU execution work. Maybe related to https://github.com/microsoft/onnxruntime/issues/11092
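One way to narrow down a missing-library problem is to ask the dynamic loader which CUDA-related shared libraries it can actually find; a stdlib-only sketch (the exact set of libraries the CUDA execution provider needs, e.g. cuDNN, is an assumption here):

```python
# Ask the dynamic loader which CUDA-related shared libraries are visible.
# A missing one (commonly cuDNN) would explain onnxruntime silently
# falling back to CPU execution despite a CUDA-capable GPU being present.
import ctypes.util

libs = ["cudart", "cublas", "cufft", "curand", "cudnn"]
found = {name: ctypes.util.find_library(name) for name in libs}

for name, path in found.items():
    print(f"{name}: {path or 'NOT FOUND'}")
```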
Another thing I'd like to ask: is there room to get `onnxruntime` into the `gpu-pytorch` image? Happy to submit a pull request to add it in.