offchan42 opened this issue 1 year ago
This seems to be a problem with onnxruntime:
microsoft/onnxruntime#11092
microsoft/onnxruntime#13264
microsoft/onnxruntime#16128
I use a GTX 1650 GPU but I cannot use CUDA. What should I do? onnx==1.15.0, onnxruntime==1.17.0, onnxruntime-gpu==1.17.0, insightface==0.7.3
NVIDIA-SMI 536.23 Driver Version: 536.23 CUDA Version: 12.2
The problem is that ONNX Runtime (which powers InsightFace) doesn't know where to find the CUDA libraries. PyTorch does know how to find them and loads them into the process, so InsightFace can later use them.
The bug/issue is with the ONNX Runtime library. I have coded a workaround here:
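The workaround itself isn't quoted above, so as a hedged sketch only: one common approach is to make the CUDA library directories discoverable before onnxruntime is imported. The helper below (`cuda_bin_dirs` is a hypothetical name, not from the thread) just collects candidate directories from common environment variables; the commented lines show where they would be registered on Windows.

```python
import os

def cuda_bin_dirs(env):
    """Collect candidate CUDA library directories from common env vars.

    Purely illustrative: the thread's actual workaround is not shown, so
    this only demonstrates the general idea of making the CUDA DLLs
    discoverable before onnxruntime is imported.
    """
    dirs = []
    cuda_path = env.get("CUDA_PATH")  # set by the CUDA toolkit installer on Windows
    if cuda_path:
        dirs.append(os.path.join(cuda_path, "bin"))
    # Also pick up any PATH entries that look like CUDA installs.
    for p in env.get("PATH", "").split(os.pathsep):
        if "cuda" in p.lower():
            dirs.append(p)
    return dirs

# On Windows with Python 3.8+, these would be registered before
# importing onnxruntime, e.g.:
# for d in cuda_bin_dirs(os.environ):
#     if os.path.isdir(d):
#         os.add_dll_directory(d)
```

Importing torch before insightface has the same effect in practice, because torch loads its bundled CUDA libraries into the process.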
Running without pytorch
If I run the above code, only the CPU models are loaded. Here is the output:
Running with pytorch
Here is the output. Notice that the GPU is loaded properly.
Import FaceAnalysis before importing torch
Only the CPU is loaded:
What causes these three different outcomes? Is this expected?
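One way to narrow down which case you are in is to ask onnxruntime directly which execution providers it can see. This is a hedged diagnostic sketch, not from the thread: it degrades gracefully when onnxruntime is not installed, and you can run it with and without importing torch first to compare.

```python
def detect_cuda_provider():
    """Report whether onnxruntime exposes CUDAExecutionProvider.

    Returns True/False when onnxruntime is importable, or None when it
    is not installed, so the check never crashes the calling script.
    """
    try:
        import onnxruntime as ort
    except ImportError:
        return None
    return "CUDAExecutionProvider" in ort.get_available_providers()

if __name__ == "__main__":
    # Optionally `import torch` here first, then compare the result.
    print(detect_cuda_provider())
```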
Environment
I run my code on a RunPod cloud instance.
I can't replicate this issue on my laptop, which always loads the GPU versions, but I can replicate it on RunPod. It started happening fairly recently; it didn't happen a few days earlier, for reasons I can't explain.