KwaiVGI / LivePortrait

Bring portraits to life!
https://liveportrait.github.io

[ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 #337

Open ZeroCool22 opened 3 months ago

ZeroCool22 commented 3 months ago
Microsoft Windows [Version 10.0.19045.4780]
(c) Microsoft Corporation. All rights reserved.

C:\Users\ZeroCool22\Desktop\LivePortrait\LivePortrait>conda activate LivePortrait

(LivePortrait) C:\Users\ZeroCool22\Desktop\LivePortrait\LivePortrait>python app.py
C:\Users\ZeroCool22\Desktop\LivePortrait\LivePortrait\src\utils\helper.py:170: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
  model.load_state_dict(torch.load(ckpt_path, map_location=lambda storage, loc: storage))
[13:59:48] Load appearance_feature_extractor from C:\Users\ZeroCool22\Desktop\LivePortrait\LivePortrait\pretrained_weights\liveportrait\base_models\appearance_feature_extractor.pth done. (live_portrait_wrapper.py:46)
           Load motion_extractor from C:\Users\ZeroCool22\Desktop\LivePortrait\LivePortrait\pretrained_weights\liveportrait\base_models\motion_extractor.pth done. (live_portrait_wrapper.py:49)
           Load warping_module from C:\Users\ZeroCool22\Desktop\LivePortrait\LivePortrait\pretrained_weights\liveportrait\base_models\warping_module.pth done. (live_portrait_wrapper.py:52)
[13:59:49] Load spade_generator from C:\Users\ZeroCool22\Desktop\LivePortrait\LivePortrait\pretrained_weights\liveportrait\base_models\spade_generator.pth done. (live_portrait_wrapper.py:55)
C:\Users\ZeroCool22\Desktop\LivePortrait\LivePortrait\src\utils\helper.py:145: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
  checkpoint = torch.load(ckpt_path, map_location=lambda storage, loc: storage)
           Load stitching_retargeting_module from C:\Users\ZeroCool22\Desktop\LivePortrait\LivePortrait\pretrained_weights\liveportrait\retargeting_models\stitching_retargeting_module.pth done. (live_portrait_wrapper.py:59)
2024-08-19 13:59:49.1165523 [E:onnxruntime:Default, provider_bridge_ort.cc:1744 onnxruntime::TryGetProviderInfo_CUDA] C:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1426 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "C:\Users\ZeroCool22\anaconda3\envs\LivePortrait\lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"

*************** EP Error ***************
EP Error C:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:866 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasnt able to be loaded. Please install the correct version of CUDA andcuDNN as mentioned in the GPU requirements page  (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements),  make sure they're in the PATH, and that your GPU is supported.
 when using ['CUDAExecutionProvider']
Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.
****************************************
2024-08-19 13:59:49.1337290 [E:onnxruntime:Default, provider_bridge_ort.cc:1744 onnxruntime::TryGetProviderInfo_CUDA] C:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1426 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "C:\Users\ZeroCool22\anaconda3\envs\LivePortrait\lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"

Traceback (most recent call last):
  File "C:\Users\ZeroCool22\anaconda3\envs\LivePortrait\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 419, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "C:\Users\ZeroCool22\anaconda3\envs\LivePortrait\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 483, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
RuntimeError: C:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:866 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasnt able to be loaded. Please install the correct version of CUDA andcuDNN as mentioned in the GPU requirements page  (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements),  make sure they're in the PATH, and that your GPU is supported.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Users\ZeroCool22\Desktop\LivePortrait\LivePortrait\app.py", line 48, in <module>
    gradio_pipeline = GradioPipeline(
  File "C:\Users\ZeroCool22\Desktop\LivePortrait\LivePortrait\src\gradio_pipeline.py", line 42, in __init__
    super().__init__(inference_cfg, crop_cfg)
  File "C:\Users\ZeroCool22\Desktop\LivePortrait\LivePortrait\src\live_portrait_pipeline.py", line 39, in __init__
    self.cropper: Cropper = Cropper(crop_cfg=crop_cfg)
  File "C:\Users\ZeroCool22\Desktop\LivePortrait\LivePortrait\src\utils\cropper.py", line 63, in __init__
    self.face_analysis_wrapper = FaceAnalysisDIY(
  File "C:\Users\ZeroCool22\Desktop\LivePortrait\LivePortrait\src\utils\face_analysis_diy.py", line 37, in __init__
    super().__init__(name=name, root=root, allowed_modules=allowed_modules, **kwargs)
  File "C:\Users\ZeroCool22\Desktop\LivePortrait\LivePortrait\src\utils\dependencies\insightface\app\face_analysis.py", line 33, in __init__
    model = model_zoo.get_model(onnx_file, **kwargs)
  File "C:\Users\ZeroCool22\Desktop\LivePortrait\LivePortrait\src\utils\dependencies\insightface\model_zoo\model_zoo.py", line 96, in get_model
    model = router.get_model(providers=providers, provider_options=provider_options)
  File "C:\Users\ZeroCool22\Desktop\LivePortrait\LivePortrait\src\utils\dependencies\insightface\model_zoo\model_zoo.py", line 40, in get_model
    session = PickableInferenceSession(self.onnx_file, **kwargs)
  File "C:\Users\ZeroCool22\Desktop\LivePortrait\LivePortrait\src\utils\dependencies\insightface\model_zoo\model_zoo.py", line 25, in __init__
    super().__init__(model_path, **kwargs)
  File "C:\Users\ZeroCool22\anaconda3\envs\LivePortrait\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 432, in __init__
    raise fallback_error from e
  File "C:\Users\ZeroCool22\anaconda3\envs\LivePortrait\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 427, in __init__
    self._create_inference_session(self._fallback_providers, None)
  File "C:\Users\ZeroCool22\anaconda3\envs\LivePortrait\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 483, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
RuntimeError: C:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:866 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasnt able to be loaded. Please install the correct version of CUDA andcuDNN as mentioned in the GPU requirements page  (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements),  make sure they're in the PATH, and that your GPU is supported.

(LivePortrait) C:\Users\ZeroCool22\Desktop\LivePortrait\LivePortrait>
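Note that the traceback shows ONNX Runtime's own retry also failing: its fallback list is ['CUDAExecutionProvider', 'CPUExecutionProvider'], so the broken CUDA EP is attempted a second time before anything runs on CPU. While debugging, one can sidestep the hard crash by retrying with CPU only. A minimal sketch (the `make_session` helper is mine, not part of LivePortrait or insightface):

```python
# Sketch of a defensive session factory: try the CUDA execution provider,
# and on any load failure (e.g. LoadLibrary error 126) retry with CPU only.
# `make_session` is a hypothetical helper, not part of LivePortrait.

def make_session(model_path, ort):
    """Create an InferenceSession, degrading to CPU if the CUDA EP fails.

    `ort` is the imported onnxruntime module, passed in explicitly so the
    fallback logic can be exercised without a GPU.
    """
    try:
        return ort.InferenceSession(
            model_path,
            providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
        )
    except Exception:
        # CUDA EP DLLs missing or incompatible: run on CPU instead of crashing.
        return ort.InferenceSession(model_path, providers=["CPUExecutionProvider"])

# Usage:
#   import onnxruntime as ort
#   session = make_session("det_10g.onnx", ort)
```

This only masks the symptom (inference will be slow on CPU); the real fix is matching the CUDA/cuDNN versions discussed below.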

(Screenshots attached: Screenshot_7, Screenshot_5, Screenshot_6)

In the install step "Then, install the corresponding torch version. Here are examples for different CUDA versions", I used:

pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124

I also downloaded the corresponding cuDNN 9.3.0 and placed the files in the correct folders inside C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.6.

Everything is where it should be, so why is it giving me that error?

In requirements.txt the pinned version is onnxruntime-gpu==1.18.0.

Is that the cause of the problem?

The onnxruntime website says onnxruntime-gpu==1.18.0 is compatible with cuDNN 8.x:

https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements

Should I install cuDNN v8.9.7 (December 5th, 2023) for CUDA 12.x?

https://developer.nvidia.com/rdp/cudnn-archive

Or should I try updating onnxruntime to version 1.18.1?
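If I'm reading the ONNX Runtime requirements table right, the expected cuDNN major version tracks the onnxruntime-gpu release: 1.18.0 was built against cuDNN 8.x on CUDA 12, with cuDNN 9 support arriving in 1.18.1. Treat the exact cutoff as my assumption; the linked requirements page is authoritative. As a sketch:

```python
# Sketch: map an onnxruntime-gpu version to the cuDNN major version its
# CUDA 12 wheels were built against. The cutoff (cuDNN 9 from 1.18.1) is my
# reading of the ONNX Runtime CUDA EP requirements page, so double-check it.

def required_cudnn_major(ort_version: str) -> int:
    parts = tuple(int(x) for x in ort_version.split("."))
    return 9 if parts >= (1, 18, 1) else 8

# required_cudnn_major("1.18.0") -> 8  (so the installed cuDNN 9.3.0 would mismatch)
# required_cudnn_major("1.18.1") -> 9
```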

EDIT:

Tried both; still the same error.

zzzweakman commented 3 months ago

@ZeroCool22 Regarding this issue, I have two potential solutions:

  1. Given that you are using a Windows system, you might consider using the Windows one-click installation package. We have bundled the CUDA and cuDNN environments in it, and you can start it just like you would with conda.

  2. You might need to consider downgrading your CUDA version. The highest version we have tested so far is 12.1, and your version (12.6) is newer than that.
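The version check in point 2 can be made explicit by comparing the installed toolkit version (e.g. from the `v12.6` folder name or `nvcc --version`) against the highest tested release. A small sketch (the helper name is mine, not part of the LivePortrait codebase):

```python
# Sketch: check whether an installed CUDA toolkit version is within the
# range the project has tested (12.1 per the comment above). Hypothetical
# helper, not part of LivePortrait.

def cuda_within_tested(installed: str, max_tested: str = "12.1") -> bool:
    def as_tuple(v: str):
        return tuple(int(x) for x in v.split("."))
    return as_tuple(installed) <= as_tuple(max_tested)

# cuda_within_tested("12.6") -> False  (newer than tested; consider downgrading)
# cuda_within_tested("12.1") -> True
```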