Gourieff / sd-webui-reactor

Fast and Simple Face Swap Extension for StableDiffusion WebUI (A1111 SD WebUI, SD WebUI Forge, SD.Next, Cagliostro)
GNU Affero General Public License v3.0

Reactor Force + Onnxruntime issues....? #200

Closed: strouder closed this issue 11 months ago

strouder commented 1 year ago


What happened?

Issue Description

I followed Sarikas's tutorial for ReActor on A1111 (https://www.youtube.com/watch?v=jNmOGVFQwaY) and I also went through the troubleshooting steps on ReActor's GitHub, but I still get these errors. It seems to be something to do with onnxruntime not having the right version for my graphics card, or missing requirements. Can someone help?

100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [00:12<00:00,  2.49it/s]
01:01:37-674013 STATUS   Working: source face index [0], target face index [0]
01:01:37-687022 STATUS   Analyzing Source Image...
2023-11-16 01:01:38.0384906 [E:onnxruntime:Default, provider_bridge_ort.cc:1480 onnxruntime::TryGetProviderInfo_CUDA] D:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1193 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "D:\AI\Vladimir\venv\lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"

EP Error D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:743 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded. Please install the correct version of CUDA and cuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.
 when using ['CUDAExecutionProvider']
Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.
2023-11-16 01:01:38.3542468 [E:onnxruntime:Default, provider_bridge_ort.cc:1480 onnxruntime::TryGetProviderInfo_CUDA] D:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1193 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "D:\AI\Vladimir\venv\lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"

01:01:38-360791 ERROR    Running script postprocess image: extensions\sd-webui-reactor\scripts\reactor_faceswap.py:
                         RuntimeError
╭───────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────╮
│ D:\AI\Vladimir\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py:419 in __init__           │
│                                                                                                                      │
│   418 │   │   try:                                                                                                   │
│ ❱ 419 │   │   │   self._create_inference_session(providers, provider_options, disabled_optimiz                       │
│   420 │   │   except (ValueError, RuntimeError) as e:                                                                │
│                                                                                                                      │
│ D:\AI\Vladimir\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py:463 in                    │
│ _create_inference_session                                                                                            │
│                                                                                                                      │
│   462 │   │   # initialize the C++ InferenceSession                                                                  │
│ ❱ 463 │   │   sess.initialize_session(providers, provider_options, disabled_optimizers)                              │
│   464                                                                                                                │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
RuntimeError: D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:743
onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded. Please install
the correct version of CUDA and cuDNN as mentioned in the GPU requirements page
(https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the
PATH, and that your GPU is supported.

The above exception was the direct cause of the following exception:

╭───────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────╮
│ D:\AI\Vladimir\modules\scripts.py:585 in postprocess_image                                                           │
│                                                                                                                      │
│   584 │   │   │   │   args = p.per_script_args.get(script.title(), p.script_args[script.args_f                       │
│ ❱ 585 │   │   │   │   script.postprocess_image(p, pp, *args)                                                         │
│   586 │   │   │   except Exception as e:                                                                             │
│                                                                                                                      │
│ D:\AI\Vladimir\extensions\sd-webui-reactor\scripts\reactor_faceswap.py:433 in postprocess_image                      │
│                                                                                                                      │
│   432 │   │   │   │   image: Image.Image = script_pp.image                                                           │
│ ❱ 433 │   │   │   │   result, output, swapped = swap_face(                                                           │
│   434 │   │   │   │   │   self.source,                                                                               │
│                                                                                                                      │
│ D:\AI\Vladimir\extensions\sd-webui-reactor\scripts\reactor_swapper.py:333 in swap_face                               │
│                                                                                                                      │
│   332 │   │   │   │   logger.status("Analyzing Source Image...")                                                     │
│ ❱ 333 │   │   │   │   source_faces = analyze_faces(source_img)                                                       │
│   334 │   │   │   │   SOURCE_FACES = source_faces                                                                    │
│                                                                                                                      │
│ D:\AI\Vladimir\extensions\sd-webui-reactor\scripts\reactor_swapper.py:225 in analyze_faces                           │
│                                                                                                                      │
│   224 │   logger.info("Applied Execution Provider: %s", PROVIDERS[0])                                                │
│ ❱ 225 │   face_analyser = copy.deepcopy(getAnalysisModel())                                                          │
│   226 │   face_analyser.prepare(ctx_id=0, det_size=det_size)                                                         │
│                                                                                                                      │
│ D:\AI\Vladimir\extensions\sd-webui-reactor\scripts\reactor_swapper.py:92 in getAnalysisModel                         │
│                                                                                                                      │
│    91 │   if ANALYSIS_MODEL is None:                                                                                 │
│ ❱  92 │   │   ANALYSIS_MODEL = insightface.app.FaceAnalysis(                                                         │
│    93 │   │   │   name="buffalo_l", providers=PROVIDERS, root=os.path.join(models_path, "insig                       │
│                                                                                                                      │
│                                               ... 2 frames hidden ...                                                │
│                                                                                                                      │
│ D:\AI\Vladimir\extensions\sd-webui-reactor\scripts\console_log_patch.py:21 in patched_get_model                      │
│                                                                                                                      │
│    20 def patched_get_model(self, **kwargs):                                                                         │
│ ❱  21 │   session = PickableInferenceSession(self.onnx_file, **kwargs)                                               │
│    22 │   inputs = session.get_inputs()                                                                              │
│                                                                                                                      │
│ D:\AI\Vladimir\venv\lib\site-packages\insightface\model_zoo\model_zoo.py:25 in __init__                              │
│                                                                                                                      │
│   24 │   def __init__(self, model_path, **kwargs):                                                                   │
│ ❱ 25 │   │   super().__init__(model_path, **kwargs)                                                                  │
│   26 │   │   self.model_path = model_path                                                                            │
│                                                                                                                      │
│ D:\AI\Vladimir\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py:430 in __init__           │
│                                                                                                                      │
│   429 │   │   │   │   except Exception as fallback_error:                                                            │
│ ❱ 430 │   │   │   │   │   raise fallback_error from e                                                                │
│   431 │   │   │   # Fallback is disabled. Raise the original error.                                                  │
│                                                                                                                      │
│ D:\AI\Vladimir\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py:425 in __init__           │
│                                                                                                                      │
│   424 │   │   │   │   │   print(f"Falling back to {self._fallback_providers} and retrying.")                         │
│ ❱ 425 │   │   │   │   │   self._create_inference_session(self._fallback_providers, None)                             │
│   426 │   │   │   │   │   # Fallback only once.                                                                      │
│                                                                                                                      │
│ D:\AI\Vladimir\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py:463 in                    │
│ _create_inference_session                                                                                            │
│                                                                                                                      │
│   462 │   │   # initialize the C++ InferenceSession                                                                  │
│ ❱ 463 │   │   sess.initialize_session(providers, provider_options, disabled_optimizers)                              │
│   464                                                                                                                │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
RuntimeError: D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:743
onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded. Please install
the correct version of CUDA and cuDNN as mentioned in the GPU requirements page
(https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the
PATH, and that your GPU is supported.

01:01:38-636041 INFO     Processed: images=1 time=21.98 its=1.36 memory={'ram': {'used': 7.3, 'total': 31.92}, 'gpu':
                         {'used': 3.99, 'total': 10.0}, 'retries': 0, 'oom': 0}

Version Platform Description

Using VENV: D:\AI\Vladimir\venv
00:59:33-526183 INFO     Starting SD.Next
00:59:33-529184 INFO     Python 3.10.9 on Windows
00:59:33-649708 INFO     Version: app=sd.next updated=2023-11-13 hash=f1862579
                         url=https://github.com/vladmandic/automatic//tree/master
00:59:34-212508 INFO     Platform: arch=AMD64 cpu=AMD64 Family 23 Model 113 Stepping 0, AuthenticAMD system=Windows
                         release=Windows-10-10.0.22621-SP0 python=3.10.9
00:59:34-216514 INFO     nVidia CUDA toolkit detected: nvidia-smi present
00:59:34-296027 INFO     Extensions: disabled=[]
00:59:34-298027 INFO     Extensions: enabled=['Lora', 'sd-extension-chainner', 'sd-extension-system-info',
                         'sd-webui-agent-scheduler', 'sd-webui-controlnet', 'stable-diffusion-webui-images-browser',
                         'stable-diffusion-webui-rembg'] extensions-builtin
00:59:34-301028 INFO     Extensions: enabled=['a1111-sd-webui-lycoris', 'adetailer', 'sd-webui-infinite-image-browsing',
                         'sd-webui-reactor'] extensions
00:59:34-304029 INFO     Startup: standard
00:59:34-305027 INFO     Verifying requirements
00:59:34-318033 INFO     Verifying packages
00:59:34-320032 INFO     Verifying submodules
01:00:07-169111 INFO     Extension installed packages: sd-webui-reactor ['onnxruntime-gpu==1.16.2']
01:00:07-170110 INFO     Extensions enabled: ['Lora', 'sd-extension-chainner', 'sd-extension-system-info',
                         'sd-webui-agent-scheduler', 'sd-webui-controlnet', 'stable-diffusion-webui-images-browser',
                         'stable-diffusion-webui-rembg', 'a1111-sd-webui-lycoris', 'adetailer',
                         'sd-webui-infinite-image-browsing', 'sd-webui-reactor']
01:00:07-172111 INFO     Verifying requirements
01:00:07-191113 INFO     Extension preload: {'extensions-builtin': 0.0, 'extensions': 0.0}
01:00:07-193112 INFO     Command line args: []
01:00:13-527368 INFO     Load packages: torch=2.1.1+cu121 diffusers=0.23.0 gradio=3.43.2
01:00:14-032282 INFO     Engine: backend=Backend.ORIGINAL compute=cuda mode=no_grad device=cuda
                         cross-optimization="Scaled-Dot-Product"
01:00:14-082806 INFO     Device: device=NVIDIA GeForce RTX 3080 n=1 arch=sm_90 cap=(8, 6) cuda=12.1 cudnn=8801
                         driver=546.01
01:00:14-764556 INFO     Available VAEs: path="models\VAE" items=0
01:00:14-766556 INFO     Disabling uncompatible extensions: backend=Backend.ORIGINAL []
01:00:14-774556 INFO     Available models: path="models\Stable-diffusion" items=3 time=0.01
01:00:16-848453 INFO     Extension: script='extensions-builtin\sd-webui-agent-scheduler\scripts\task_scheduler.py' Using
                         sqlite file: extensions-builtin\sd-webui-agent-scheduler\task_scheduler.sqlite3
01:00:17-298539 INFO     Extension: script='extensions-builtin\sd-webui-controlnet\scripts\controlnet.py' Warning:
                         ControlNet failed to load SGM - will use LDM instead.
01:00:17-300541 INFO     Extension: script='extensions-builtin\sd-webui-controlnet\scripts\controlnet.py' ControlNet
                         preprocessor location:
                         D:\AI\Vladimir\extensions-builtin\sd-webui-controlnet\annotator\downloads
01:00:17-310539 INFO     Extension: script='extensions-builtin\sd-webui-controlnet\scripts\hook.py' Warning: ControlNet
                         failed to load SGM - will use LDM instead.
01:00:20-187447 INFO     Extension: script='extensions\adetailer\scripts\!adetailer.py' [-] ADetailer initialized.
                         version: 23.11.0, num models: 9
01:00:20-639085 INFO     Extensions time: 5.64 { Lora=1.02 sd-extension-chainner=0.10 sd-extension-system-info=0.05
                         sd-webui-agent-scheduler=0.66 sd-webui-controlnet=0.47
                         stable-diffusion-webui-images-browser=0.16 stable-diffusion-webui-rembg=0.85
                         a1111-sd-webui-lycoris=0.20 adetailer=1.66 sd-webui-infinite-image-browsing=0.14
                         sd-webui-reactor=0.31 }
01:00:20-908637 INFO     Load UI theme: name="black-teal" style=Auto base=style.css
01:00:23-389251 INFO     Local URL: http://127.0.0.1:7860/
01:00:23-390250 INFO     Initializing middleware
01:00:23-883830 INFO     [AgentScheduler] Task queue is empty
01:00:23-886830 INFO     [AgentScheduler] Registering APIs
01:00:24-043355 INFO     Startup time: 16.85 { torch=5.63 gradio=0.67 libraries=1.23 extensions=5.64 face-restore=0.22
                         upscalers=0.14 ui-extra-networks=0.26 ui-txt2img=0.08 ui-img2img=0.12 ui-models=0.25
                         ui-settings=0.21 ui-extensions=1.29 ui-defaults=0.07 launch=0.27 api=0.09 app-started=0.56 }
01:00:31-162951 INFO     MOTD: N/A
01:00:36-890294 INFO     Browser session: client=127.0.0.1 agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0)
                         Gecko/20100101 Firefox/119.0
01:01:06-008983 INFO     Select: model="Model15-photon_v1 [ec41bd2a82]"
Loading weights: D:\AI\Vladimir\models\Stable-diffusion\Model15-photon_v1.safetensors ━━━━━━━━━━━━━━━ 0.0/2.1 GB -:--:--
01:01:06-064986 INFO     Setting Torch parameters: device=cuda dtype=torch.bfloat16 vae=torch.bfloat16
                         unet=torch.bfloat16 context=no_grad fp16=False bf16=True

URL link of the extension

https://github.com/Gourieff/sd-webui-reactor

URL link of the issue reported in the extension repository

Reactor Force does not have an Issues page... strange.


Steps to reproduce the problem

Ran vladmandic's SD.Next and updated. Created a folder in models called "insightface" and added the insightface model there. Installed ReActor via Extensions (https://github.com/Gourieff/sd-webui-reactor). Ran SD with no issues. When using ReActor, got the error above.

Sysinfo

Using VENV: D:\AI\Vladimir\venv
09:22:53-891138 INFO     Starting SD.Next
09:22:53-891138 INFO     Python 3.10.9 on Windows
09:22:54-047780 INFO     Version: app=sd.next updated=2023-11-13 hash=f1862579
                         url=https://github.com/vladmandic/automatic//tree/master
09:22:54-905298 INFO     Platform: arch=AMD64 cpu=AMD64 Family 23 Model 113 Stepping 0, AuthenticAMD system=Windows
                         release=Windows-10-10.0.22621-SP0 python=3.10.9
09:22:54-905298 INFO     nVidia CUDA toolkit detected: nvidia-smi present
09:22:55-014678 INFO     Extensions: disabled=[]
09:22:55-014678 INFO     Extensions: enabled=['Lora', 'sd-extension-chainner', 'sd-extension-system-info',
                         'sd-webui-agent-scheduler', 'sd-webui-controlnet', 'stable-diffusion-webui-images-browser',
                         'stable-diffusion-webui-rembg'] extensions-builtin
09:22:55-030297 INFO     Extensions: enabled=['a1111-sd-webui-lycoris', 'adetailer', 'sd-webui-infinite-image-browsing',
                         'sd-webui-reactor'] extensions
09:22:55-045924 INFO     Startup: standard
09:22:55-045924 INFO     Verifying requirements
09:22:55-061548 INFO     Verifying packages
09:22:55-061548 INFO     Verifying submodules
09:23:16-605651 INFO     Extensions enabled: ['Lora', 'sd-extension-chainner', 'sd-extension-system-info',
                         'sd-webui-agent-scheduler', 'sd-webui-controlnet', 'stable-diffusion-webui-images-browser',
                         'stable-diffusion-webui-rembg', 'a1111-sd-webui-lycoris', 'adetailer',
                         'sd-webui-infinite-image-browsing', 'sd-webui-reactor']
09:23:16-621276 INFO     Verifying requirements
09:23:16-637312 INFO     Extension preload: {'extensions-builtin': 0.0, 'extensions': 0.0}
09:23:16-637312 INFO     Command line args: []
09:23:29-461084 INFO     Load packages: torch=2.1.1+cu121 diffusers=0.23.0 gradio=3.43.2
09:23:30-461377 INFO     Engine: backend=Backend.ORIGINAL compute=cuda mode=no_grad device=cuda
                         cross-optimization="Scaled-Dot-Product"
09:23:30-523877 INFO     Device: device=NVIDIA GeForce RTX 3080 n=1 arch=sm_90 cap=(8, 6) cuda=12.1 cudnn=8801
                         driver=546.01
09:23:32-120417 INFO     Available VAEs: path="models\VAE" items=0
09:23:32-120417 INFO     Disabling uncompatible extensions: backend=Backend.ORIGINAL []
09:23:32-136043 INFO     Available models: path="models\Stable-diffusion" items=3 time=0.02
09:23:34-808330 INFO     Extension: script='extensions-builtin\sd-webui-agent-scheduler\scripts\task_scheduler.py' Using
                         sqlite file: extensions-builtin\sd-webui-agent-scheduler\task_scheduler.sqlite3
09:23:35-480628 INFO     Extension: script='extensions-builtin\sd-webui-controlnet\scripts\controlnet.py' Warning:
                         ControlNet failed to load SGM - will use LDM instead.
09:23:35-480628 INFO     Extension: script='extensions-builtin\sd-webui-controlnet\scripts\controlnet.py' ControlNet
                         preprocessor location:
                         D:\AI\Vladimir\extensions-builtin\sd-webui-controlnet\annotator\downloads
09:23:35-496244 INFO     Extension: script='extensions-builtin\sd-webui-controlnet\scripts\hook.py' Warning: ControlNet
                         failed to load SGM - will use LDM instead.
09:23:38-949773 INFO     Extension: script='extensions\adetailer\scripts\!adetailer.py' [-] ADetailer initialized.
                         version: 23.11.0, num models: 9
09:23:40-121638 INFO     Extensions time: 7.30 { Lora=0.98 sd-extension-chainner=0.12 sd-extension-system-info=0.06
                         sd-webui-agent-scheduler=0.80 sd-webui-controlnet=0.70
                         stable-diffusion-webui-images-browser=0.16 stable-diffusion-webui-rembg=1.31
                         a1111-sd-webui-lycoris=0.23 adetailer=1.73 sd-webui-infinite-image-browsing=0.14
                         sd-webui-reactor=1.03 }
09:23:40-887262 INFO     Load UI theme: name="black-teal" style=Auto base=style.css
09:23:43-623234 INFO     Local URL: http://127.0.0.1:7860/
09:23:43-638860 INFO     Initializing middleware
09:23:44-157745 INFO     [AgentScheduler] Task queue is empty
09:23:44-173371 INFO     [AgentScheduler] Registering APIs
09:23:44-346442 INFO     Model metadata saved: file="metadata.json" items=1 time=0.02
09:23:44-346442 INFO     Startup time: 27.71 { torch=11.44 gradio=1.33 libraries=2.66 extensions=7.30 face-restore=0.69
                         upscalers=0.17 extra-networks=0.47 ui-extra-networks=0.30 ui-txt2img=0.08 ui-img2img=0.12
                         ui-models=0.25 ui-settings=0.34 ui-extensions=1.39 ui-defaults=0.06 launch=0.28 api=0.08
                         app-started=0.61 }
09:24:12-276919 INFO     Browser session: client=127.0.0.1 agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0)
                         Gecko/20100101 Firefox/119.0
09:24:12-493021 INFO     MOTD: N/A

Relevant console log

Please see the original post.

Additional information

No response

Gourieff commented 1 year ago

Load packages: torch=2.1.1+cu121 diffusers=0.23.0 gradio=3.43.2
Device: device=NVIDIA GeForce RTX 3080 n=1 arch=sm_90 cap=(8, 6) cuda=12.1 cudnn=8801

Try to downgrade CUDA to 11.8 (see https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements). Activate the VENV and run:

 pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118
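After reinstalling, one quick way to confirm what ended up in the VENV (a minimal check added here for illustration, not a step from this thread) is to run, inside the activated VENV:

 python -c "import torch, onnxruntime; print(torch.__version__, torch.version.cuda); print(onnxruntime.get_available_providers())"

This prints the installed torch build (which should now report 11.8) and the execution providers the installed onnxruntime wheel was built with (which should include CUDAExecutionProvider if onnxruntime-gpu is the active package). Whether the CUDA DLLs actually load is only tested when a session is created, so running a ReActor swap remains the final check.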
Thanh1122kkll commented 11 months ago

I encountered this issue this morning on my Windows 11 Pro machine. No matter how I installed it, ReActor wouldn't run and showed the same error. I downloaded the ComfyUI portable zip from the official ComfyUI website (which now defaults to cu121 instead of 11.8). So I want to ask: what is the installation command for cu121 to fix this issue? I tried "pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu122" but it didn't work at all. Thank you, and I look forward to your reply.

Thanh1122kkll commented 11 months ago

I had to uninstall the current installation, reinstall the cu11.8 package, and start over from the beginning in order to make it work. However, I'm concerned that installing version 11.8 might leave me on something outdated. If there are no issues with it, then it should be okay.
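For reference, a minimal sketch of the uninstall-and-reinstall sequence described above for a standard VENV setup (the onnxruntime-gpu pin is taken from the startup log earlier in this issue; treat the exact versions as assumptions rather than confirmed instructions):

 pip uninstall -y torch torchvision torchaudio onnxruntime onnxruntime-gpu
 pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118
 pip install onnxruntime-gpu==1.16.2

For a ComfyUI portable build, the same commands would need to be run with the pip of its bundled Python interpreter rather than a VENV.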