w-okada / voice-changer

リアルタイムボイスチェンジャー Realtime Voice Changer

[ISSUE]: “CUDA_PATH is set but CUDA wasn't able to be loaded” #1110

Open 12EE3WFWEFDS opened 7 months ago

12EE3WFWEFDS commented 7 months ago

Voice Changer Version

MMVCServerSIO_win_onnxgpu-cuda_v.1.5.3.17b.zip

Operational System

Windows 11

GPU

intel(r) iris(r) xe graphics

Read carefully and check the options

Model Type

RVC

Issue Description

I've read through the tutorial and watched many tutorial videos on YouTube, but none of them cover this type of error, which prevents me from accessing the application at all. It starts with "Init provider bridge failed" (which already looks like an error), then it reports that CUDA cannot be loaded, and the application crashes. I'm extremely new and inexperienced at this and completely clueless about what to do; I haven't seen anyone else report this error. I've been investigating the issue for hours and reinstalled Python, PyTorch, and CUDA just for this. I tried both CUDA 12.1 and the latest CUDA version, as well as the latest version of Python. I don't know what I've done wrong.

(Screenshot 2024-02-10 225550)

On top of that, the application won't load and will always display "[VCClient] wait web server...0 http://127.0.0.1:18888/" unless I delete stored_settings.json.

Application Screenshot

(Screenshot 2024-02-10 225530)

Logs on console

C:\Users\sling\Downloads\新建文件夹\MMVCServerSIO>MMVCServerSIO.exe -p 18888 --https false --content_vec_500 pretrain/checkpoint_best_legacy_500.pt --content_vec_500_onnx pretrain/content_vec_500.onnx --content_vec_500_onnx_on true --hubert_base pretrain/hubert_base.pt --hubert_base_jp pretrain/rinna_hubert_base_jp.pt --hubert_soft pretrain/hubert/hubert-soft-0d54a1f4.pt --nsf_hifigan pretrain/nsf_hifigan/model --crepe_onnx_full pretrain/crepe_onnx_full.onnx --crepe_onnx_tiny pretrain/crepe_onnx_tiny.onnx --rmvpe pretrain/rmvpe.pt --model_dir model_dir --samples samples.json
2024-02-10 22:55:03.9851784 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:1641 onnxruntime::python::CreateInferencePybindStateModule] Init provider bridge failed.
Booting PHASE :main
PYTHON:3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Activating the Voice Changer.
[Voice Changer] download sample catalog. samples_0004_t.json
[Voice Changer] download sample catalog. samples_0004_o.json
[Voice Changer] download sample catalog. samples_0004_d.json
[Voice Changer] model_dir is already exists. skip download samples.
Internal_Port:18888
protocol: HTTP


Please open the following URL in your browser.
http://<IP>:<PORT>/
In many cases, it will launch when you access any of the following URLs.
http://127.0.0.1:18888/

[VCClient] Access http://127.0.0.1:18888/
[VCClient] wait web server...0 http://127.0.0.1:18888/
[VCClient] wait web server... done 200
[2024-02-10 22:55:11] connet sid : q2zKPEGqwxW7Pqe0AAAB
[2024-02-10 22:55:11] connet sid : AMXP0WUquPSS3yf0AAAD
[Voice Changer] update configuration: modelSlotIndex 1707634514001
[Voice Changer] exception! loading inferencer D:\a_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:574 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded. Please install the correct version of CUDA and cuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/reference/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.

[Voice Changer] update configuration: modelSlotIndex 1707634514002
Traceback (most recent call last):
  File "voice_changer\RVC\pipeline\PipelineGenerator.py", line 22, in createPipeline
  File "voice_changer\RVC\inferencer\InferencerManager.py", line 25, in getInferencer
  File "voice_changer\RVC\inferencer\InferencerManager.py", line 55, in loadInferencer
  File "voice_changer\RVC\inferencer\OnnxRVCInferencer.py", line 17, in loadModel
  File "onnxruntime\capi\onnxruntime_inference_collection.py", line 347, in __init__
  File "onnxruntime\capi\onnxruntime_inference_collection.py", line 395, in _create_inference_session
RuntimeError: D:\a_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:574 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded. Please install the correct version of CUDA and cuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/reference/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.

[Voice Changer] update configuration: modelSlotIndex 1707634516003

(The same "CUDA_PATH is set but CUDA wasn't able to be loaded" exception and identical traceback repeat for this slot as well.)

[Voice Changer] update configuration: modelSlotIndex 1707634517000 Beatrice-JVS
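For anyone debugging this error, here is a minimal stdlib-only sketch (not from the logs above) that checks whether the CUDA runtime DLL the error complains about is actually findable via CUDA_PATH or PATH. The DLL name below is the usual CUDA 12.x default and is an assumption; adjust it for your install:

```python
# Diagnostic sketch: find out whether the CUDA runtime DLL is reachable,
# which is what "CUDA_PATH is set but CUDA wasn't able to be loaded" is about.
import os


def cuda_runtime_findable(dll_name="cudart64_12.dll"):
    """Return the first directory (CUDA_PATH\\bin first, then PATH entries)
    that contains dll_name, or None if it is nowhere to be found."""
    candidates = []
    cuda_path = os.environ.get("CUDA_PATH")
    if cuda_path:
        candidates.append(os.path.join(cuda_path, "bin"))
    candidates += os.environ.get("PATH", "").split(os.pathsep)
    for directory in candidates:
        if directory and os.path.isfile(os.path.join(directory, dll_name)):
            return directory
    return None


print(cuda_runtime_findable())  # None means the CUDA runtime is not on PATH
```

Note that on a machine with no NVIDIA GPU (e.g. Intel Iris Xe only), installing CUDA cannot make this provider load; the onnxgpu-cuda build needs a supported NVIDIA card.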

12EE3WFWEFDS commented 7 months ago

Bump. Can someone help me with this?

SinLucyd commented 7 months ago

I got rid of that problem page by closing Spotify.

I'm having that CUDA_PATH problem as well.

Currently running:

WIN 10

RTX 2070 SUPER AMD Ryzen 9 3900X 12-Core Processor

VOICE CHANGER v.1.5.3.17b

ANACONDA PROMPT

PYTHON 3.11.5

PYTORCH 2.1.0

CUDA 11.8

CUDNN 8.7.0

ONNX RUNTIME 1.17.0
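The requirements page linked in the error message lists which CUDA and cuDNN DLLs onnxruntime-gpu expects to find on PATH on Windows (for the 1.17 / CUDA 11.8 combination above, CUDA 11 and cuDNN 8 libraries). A small sketch to check whether they are discoverable; the DLL names are the usual CUDA 11.x / cuDNN 8 defaults and are an assumption, so adjust them to match your install:

```python
# Sketch: probe for the libraries the CUDA execution provider tries to load.
# ctypes.util.find_library returns None when a library cannot be found.
import ctypes.util

for name in ("cudart64_110", "cublas64_11", "cudnn64_8"):
    print(name, "->", ctypes.util.find_library(name))
```

If any of these print None, that matches the "make sure they're in the PATH" part of the error, independent of what PyTorch reports.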