Karl48071 opened 1 year ago
This may be related to https://github.com/microsoft/onnxruntime/issues/13576. If you haven't tried that already, I'd suggest installing it in a conda environment with Python, CUDA and cuDNN already installed.
I've recently tested deface on Windows and found that the new DirectML execution provider also works quite well. It should work on any GPU since it is based on Windows' built-in Direct3D 12 API. I don't know how it compares to CUDA in terms of speed, but if you are still having issues with CUDA it may be worth a shot. You can install it with
$ pip install onnx onnxruntime-directml
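If you want to confirm that the DirectML provider is actually picked up after installing, here is a minimal check sketch (it only assumes a standard onnxruntime install, nothing specific to deface):

```python
import onnxruntime as ort

# List the execution providers this onnxruntime build can use.
providers = ort.get_available_providers()
print(providers)

# With onnxruntime-directml installed, "DmlExecutionProvider" should be listed.
print("DirectML available:", "DmlExecutionProvider" in providers)
```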
I had the same issue (Windows). I tried multiple versions and environment setups. I only got it running by installing a new Anaconda environment and using Anaconda's supplied versions of cuDNN and CUDA.
After doing the install on Windows (CUDA, cuDNN, onnxruntime-gpu), I get the following error when running deface: "CUDA_PATH is set but CUDA wasn't able to be loaded."
I've watched countless videos on setting correct PATHs and so forth and am continuing to have the issue.
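For what it's worth, a quick diagnostic sketch (not a fix) is to print what onnxruntime itself reports alongside CUDA_PATH, which usually narrows down where the CUDA loading fails:

```python
import os
import onnxruntime as ort

# Basic environment/runtime info for CUDA loading problems.
print("onnxruntime version:", ort.__version__)
print("CUDA_PATH:", os.environ.get("CUDA_PATH"))
print("device:", ort.get_device())                  # "GPU" for the onnxruntime-gpu package
print("providers:", ort.get_available_providers())  # should include "CUDAExecutionProvider"
```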
python -m pip install --upgrade setuptools pip
python -m pip install deface
pip install nvidia-pyindex
pip install tf2onnx
pip install onnx onnxruntime-gpu  ## if a GPU with CUDA exists
pip install onnxruntime-openvino  ## Intel's acceleration
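Once those packages are in place, a small sketch of requesting CUDA explicitly when opening an ONNX Runtime session (model.onnx is just a placeholder path, not a file deface ships under that name):

```python
import onnxruntime as ort

# Ask for CUDA first and fall back to CPU if it cannot be loaded.
session = ort.InferenceSession(
    "model.onnx",  # placeholder: any ONNX model file you have locally
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

# The first entry shows which provider the session actually ended up using.
print(session.get_providers())
```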
Replace the v11.4 with the version you have installed. IIRC 11.8 is the max this tool can use at the moment.
python -m pip install "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.4\graphsurgeon\graphsurgeon-0.4.5-py2.py3-none-any.whl" python -m pip install "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.4\uff\uff-0.6.9-py2.py3-none-any.whl" python -m pip install "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.4\onnx_graphsurgeon\onnx_graphsurgeon-0.3.12-py2.py3-none-any.whl" python -m pip install "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.4\python\tensorrt-8.2.3.0-cp39-none-win_amd64.whl"
To verify that TensorRT can be used, start Python and run:
import tensorrt
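If the import succeeds, you can go one step further and check the TensorRT version plus whether onnxruntime registered its TensorRT provider (a sketch, assuming an onnxruntime-gpu build with TensorRT support):

```python
import tensorrt
import onnxruntime as ort

print("TensorRT:", tensorrt.__version__)
# onnxruntime only lists this provider if it was installed with TensorRT support.
print("TensorrtExecutionProvider available:",
      "TensorrtExecutionProvider" in ort.get_available_providers())
```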
You need CUDA 11.x; cuDNN 8.9.4 works for me so far: https://developer.nvidia.com/rdp/cudnn-archive
If you want TensorRT as well, the 8.6.16 GA version works: https://developer.nvidia.com/nvidia-tensorrt-8x-download
You will also need the latest CUDA 11.x toolkit (it's large, about 3 GB): https://developer.nvidia.com/cuda-toolkit-archive
> I've recently tested deface on Windows and found that the new DirectML execution provider also works quite well. It should work on any GPU since it is based on Windows' built-in Direct3D 12 API. I don't know how it compares to CUDA in terms of speed, but if you are still having issues with CUDA it may be worth a shot. You can install it with
> $ pip install onnx onnxruntime-directml
Thanks, this worked.