Gourieff / sd-webui-reactor

Fast and Simple Face Swap Extension for StableDiffusion WebUI (A1111 SD WebUI, SD WebUI Forge, SD.Next, Cagliostro)
GNU Affero General Public License v3.0

ReActor won't work because of CUDA / onnxruntime #388

Open GIadstone opened 2 months ago

GIadstone commented 2 months ago


What happened?

ReActor stops working as soon as it starts analyzing the source image: onnxruntime fails to load onnxruntime_providers_cuda.dll (LoadLibrary error 126), the CUDAExecutionProvider cannot be created ("CUDA_PATH is set but CUDA wasn't able to be loaded"), and the fallback session raises the same RuntimeError. The full log and traceback are in the "Relevant console log" section below.


I've added the right CUDA directories to PATH and set CUDA_HOME, and I've updated onnxruntime for CUDA 12.3. This bug appeared after I updated to SD 1.8.

Any help is welcome.
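For reference, a quick check like the sketch below (run with the venv's own Python, e.g. venv\Scripts\python.exe; it is a generic onnxruntime snippet, not part of ReActor) shows whether the CUDAExecutionProvider is even visible to the installed build:

```python
# Generic diagnostic sketch (not ReActor code): report what the installed
# onnxruntime build exposes. Run it with the webui venv's Python.
import onnxruntime as ort

print("onnxruntime version:", ort.__version__)
print("default device     :", ort.get_device())              # "GPU" for onnxruntime-gpu builds
print("available providers:", ort.get_available_providers())
# Note: a provider can be listed here and still fail at session creation time
# (as in the log above) if the CUDA/cuDNN DLLs it depends on are missing from
# PATH or were built for a different CUDA version.
```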

Steps to reproduce the problem

  1. Go to ....
  2. Press ....
  3. ...

Sysinfo

Windows 10 - NVIDIA RTX 2070 - Intel Core i5 (6th gen) - Stable Diffusion WebUI - ReActor

Relevant console log

16:06:31 - ReActor - STATUS - Working: source face index [0], target face index [0]████| 29/29 [00:17<00:00,  1.52it/s]
16:06:31 - ReActor - STATUS - Analyzing Source Image...
2024-03-08 16:06:31.5076937 [E:onnxruntime:Default, provider_bridge_ort.cc:1548 onnxruntime::TryGetProviderInfo_CUDA] D:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1209 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "C:\stable-diffusion-webui-1.8.0\venv\lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"

*************** EP Error ***************
EP Error D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:857 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasnt able to be loaded. Please install the correct version of CUDA andcuDNN as mentioned in the GPU requirements page  (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements),  make sure they're in the PATH, and that your GPU is supported.
 when using ['CUDAExecutionProvider']
Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.
****************************************
2024-03-08 16:06:31.6020997 [E:onnxruntime:Default, provider_bridge_ort.cc:1548 onnxruntime::TryGetProviderInfo_CUDA] D:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1209 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "C:\stable-diffusion-webui-1.8.0\venv\lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"

*** Error running postprocess_image: C:\stable-diffusion-webui-1.8.0\extensions\sd-webui-reactor\scripts\reactor_faceswap.py
    Traceback (most recent call last):
      File "C:\stable-diffusion-webui-1.8.0\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 419, in __init__
        self._create_inference_session(providers, provider_options, disabled_optimizers)
      File "C:\stable-diffusion-webui-1.8.0\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 483, in _create_inference_session
        sess.initialize_session(providers, provider_options, disabled_optimizers)
    RuntimeError: D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:857 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasnt able to be loaded. Please install the correct version of CUDA andcuDNN as mentioned in the GPU requirements page  (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements),  make sure they're in the PATH, and that your GPU is supported.

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
      File "C:\stable-diffusion-webui-1.8.0\modules\scripts.py", line 856, in postprocess_image
        script.postprocess_image(p, pp, *script_args)
      File "C:\stable-diffusion-webui-1.8.0\extensions\sd-webui-reactor\scripts\reactor_faceswap.py", line 391, in postprocess_image
        result, output, swapped = swap_face(
      File "C:\stable-diffusion-webui-1.8.0\extensions\sd-webui-reactor\scripts\reactor_swapper.py", line 515, in swap_face
        source_faces = analyze_faces(source_img)
      File "C:\stable-diffusion-webui-1.8.0\extensions\sd-webui-reactor\scripts\reactor_swapper.py", line 274, in analyze_faces
        face_analyser = copy.deepcopy(getAnalysisModel())
      File "C:\stable-diffusion-webui-1.8.0\extensions\sd-webui-reactor\scripts\reactor_swapper.py", line 118, in getAnalysisModel
        ANALYSIS_MODEL = insightface.app.FaceAnalysis(
      File "C:\stable-diffusion-webui-1.8.0\extensions\sd-webui-reactor\scripts\console_log_patch.py", line 48, in patched_faceanalysis_init
        model = model_zoo.get_model(onnx_file, **kwargs)
      File "C:\stable-diffusion-webui-1.8.0\venv\lib\site-packages\insightface\model_zoo\model_zoo.py", line 96, in get_model
        model = router.get_model(providers=providers, provider_options=provider_options)
      File "C:\stable-diffusion-webui-1.8.0\extensions\sd-webui-reactor\scripts\console_log_patch.py", line 21, in patched_get_model
        session = PickableInferenceSession(self.onnx_file, **kwargs)
      File "C:\stable-diffusion-webui-1.8.0\venv\lib\site-packages\insightface\model_zoo\model_zoo.py", line 25, in __init__
        super().__init__(model_path, **kwargs)
      File "C:\stable-diffusion-webui-1.8.0\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 432, in __init__
        raise fallback_error from e
      File "C:\stable-diffusion-webui-1.8.0\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 427, in __init__
        self._create_inference_session(self._fallback_providers, None)
      File "C:\stable-diffusion-webui-1.8.0\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 483, in _create_inference_session
        sess.initialize_session(providers, provider_options, disabled_optimizers)
    RuntimeError: D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:857 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasnt able to be loaded. Please install the correct version of CUDA andcuDNN as mentioned in the GPU requirements page  (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements),  make sure they're in the PATH, and that your GPU is supported.

Additional information

No response

Gourieff commented 2 months ago

> I've updated onnxruntime for CUDA 12.3

Please roll back to CUDA 12.2 or 12.1; the requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements) lists no CUDA 12.3 support for ORT-GPU.
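After rolling back, something like the sketch below (generic, nothing ReActor-specific; the values printed depend entirely on your install) can confirm which toolkit the environment actually resolves:

```python
# Generic sketch: report which CUDA toolkit CUDA_PATH and PATH resolve to,
# so the rollback to 12.2/12.1 can be verified before restarting the webui.
import os
import shutil
import subprocess

print("CUDA_PATH:", os.environ.get("CUDA_PATH"))   # should point at the 12.2/12.1 install
nvcc = shutil.which("nvcc")
print("nvcc on PATH:", nvcc)
if nvcc:
    # "nvcc --version" prints the toolkit release, e.g. "release 12.2"
    print(subprocess.run([nvcc, "--version"], capture_output=True, text=True).stdout)
```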

GIadstone commented 2 months ago

Thank you for the advice


dislive commented 2 months ago

Same problem after the new update to 1.8, with CUDA 12.2.

drdancm commented 2 months ago

I've had tons of trouble with A1111 ever since the update from 1.6 (I'm running inside Stability Matrix), even though ReActor was working fine with 1.6. Then I installed Stable Diffusion WebUI Forge, which solved a lot of problems, so now ReActor works as well as ever, possibly even better.

https://github.com/lllyasviel/stable-diffusion-webui-forge