vladmandic / automatic

SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models
https://github.com/vladmandic/automatic
GNU Affero General Public License v3.0

[Extension]: Reactor Force + Onnxruntime issues #2515

Closed: strouder closed this issue 11 months ago

strouder commented 11 months ago

Issue Description

I followed Sarikas's tutorial for Reactor in A1111 (https://www.youtube.com/watch?v=jNmOGVFQwaY) and I also went to Reactor's GitHub and tried troubleshooting, but I still get these errors. It seems to be something to do with onnxruntime not having the right graphics card/CUDA version, or missing requirements. Can someone help?
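(As a first check, not part of the original report: a minimal sketch, run inside the SD.Next venv, that prints what torch and onnxruntime actually report, so the versions can be compared against the onnxruntime GPU requirements page cited in the error below.)

# diagnostic sketch, assuming the SD.Next venv is active
import torch
import onnxruntime as ort

print("torch:", torch.__version__, "cuda:", torch.version.cuda)
print("onnxruntime:", ort.__version__)
print("available providers:", ort.get_available_providers())  # should include 'CUDAExecutionProvider'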

100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [00:12<00:00,  2.49it/s]
01:01:37-674013 STATUS   Working: source face index [0], target face index [0]
01:01:37-687022 STATUS   Analyzing Source Image...
2023-11-16 01:01:38.0384906 [E:onnxruntime:Default, provider_bridge_ort.cc:1480 onnxruntime::TryGetProviderInfo_CUDA] D:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1193 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "D:\AI\Vladimir\venv\lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"

EP Error D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:743 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded. Please install the correct version of CUDA and cuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.
 when using ['CUDAExecutionProvider']
Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.
2023-11-16 01:01:38.3542468 [E:onnxruntime:Default, provider_bridge_ort.cc:1480 onnxruntime::TryGetProviderInfo_CUDA] D:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1193 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "D:\AI\Vladimir\venv\lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"

01:01:38-360791 ERROR    Running script postprocess image: extensions\sd-webui-reactor\scripts\reactor_faceswap.py:
                         RuntimeError
╭───────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────╮
│ D:\AI\Vladimir\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py:419 in __init__           │
│                                                                                                                      │
│   418 │   │   try:                                                                                                   │
│ ❱ 419 │   │   │   self._create_inference_session(providers, provider_options, disabled_optimiz                       │
│   420 │   │   except (ValueError, RuntimeError) as e:                                                                │
│                                                                                                                      │
│ D:\AI\Vladimir\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py:463 in                    │
│ _create_inference_session                                                                                            │
│                                                                                                                      │
│   462 │   │   # initialize the C++ InferenceSession                                                                  │
│ ❱ 463 │   │   sess.initialize_session(providers, provider_options, disabled_optimizers)                              │
│   464                                                                                                                │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
RuntimeError: D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:743
onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded. Please install
the correct version of CUDA and cuDNN as mentioned in the GPU requirements page
(https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the
PATH, and that your GPU is supported.

The above exception was the direct cause of the following exception:

╭───────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────╮
│ D:\AI\Vladimir\modules\scripts.py:585 in postprocess_image                                                           │
│                                                                                                                      │
│   584 │   │   │   │   args = p.per_script_args.get(script.title(), p.script_args[script.args_f                       │
│ ❱ 585 │   │   │   │   script.postprocess_image(p, pp, *args)                                                         │
│   586 │   │   │   except Exception as e:                                                                             │
│                                                                                                                      │
│ D:\AI\Vladimir\extensions\sd-webui-reactor\scripts\reactor_faceswap.py:433 in postprocess_image                      │
│                                                                                                                      │
│   432 │   │   │   │   image: Image.Image = script_pp.image                                                           │
│ ❱ 433 │   │   │   │   result, output, swapped = swap_face(                                                           │
│   434 │   │   │   │   │   self.source,                                                                               │
│                                                                                                                      │
│ D:\AI\Vladimir\extensions\sd-webui-reactor\scripts\reactor_swapper.py:333 in swap_face                               │
│                                                                                                                      │
│   332 │   │   │   │   logger.status("Analyzing Source Image...")                                                     │
│ ❱ 333 │   │   │   │   source_faces = analyze_faces(source_img)                                                       │
│   334 │   │   │   │   SOURCE_FACES = source_faces                                                                    │
│                                                                                                                      │
│ D:\AI\Vladimir\extensions\sd-webui-reactor\scripts\reactor_swapper.py:225 in analyze_faces                           │
│                                                                                                                      │
│   224 │   logger.info("Applied Execution Provider: %s", PROVIDERS[0])                                                │
│ ❱ 225 │   face_analyser = copy.deepcopy(getAnalysisModel())                                                          │
│   226 │   face_analyser.prepare(ctx_id=0, det_size=det_size)                                                         │
│                                                                                                                      │
│ D:\AI\Vladimir\extensions\sd-webui-reactor\scripts\reactor_swapper.py:92 in getAnalysisModel                         │
│                                                                                                                      │
│    91 │   if ANALYSIS_MODEL is None:                                                                                 │
│ ❱  92 │   │   ANALYSIS_MODEL = insightface.app.FaceAnalysis(                                                         │
│    93 │   │   │   name="buffalo_l", providers=PROVIDERS, root=os.path.join(models_path, "insig                       │
│                                                                                                                      │
│                                               ... 2 frames hidden ...                                                │
│                                                                                                                      │
│ D:\AI\Vladimir\extensions\sd-webui-reactor\scripts\console_log_patch.py:21 in patched_get_model                      │
│                                                                                                                      │
│    20 def patched_get_model(self, **kwargs):                                                                         │
│ ❱  21 │   session = PickableInferenceSession(self.onnx_file, **kwargs)                                               │
│    22 │   inputs = session.get_inputs()                                                                              │
│                                                                                                                      │
│ D:\AI\Vladimir\venv\lib\site-packages\insightface\model_zoo\model_zoo.py:25 in __init__                              │
│                                                                                                                      │
│   24 │   def __init__(self, model_path, **kwargs):                                                                   │
│ ❱ 25 │   │   super().__init__(model_path, **kwargs)                                                                  │
│   26 │   │   self.model_path = model_path                                                                            │
│                                                                                                                      │
│ D:\AI\Vladimir\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py:430 in __init__           │
│                                                                                                                      │
│   429 │   │   │   │   except Exception as fallback_error:                                                            │
│ ❱ 430 │   │   │   │   │   raise fallback_error from e                                                                │
│   431 │   │   │   # Fallback is disabled. Raise the original error.                                                  │
│                                                                                                                      │
│ D:\AI\Vladimir\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py:425 in __init__           │
│                                                                                                                      │
│   424 │   │   │   │   │   print(f"Falling back to {self._fallback_providers} and retrying.")                         │
│ ❱ 425 │   │   │   │   │   self._create_inference_session(self._fallback_providers, None)                             │
│   426 │   │   │   │   │   # Fallback only once.                                                                      │
│                                                                                                                      │
│ D:\AI\Vladimir\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py:463 in                    │
│ _create_inference_session                                                                                            │
│                                                                                                                      │
│   462 │   │   # initialize the C++ InferenceSession                                                                  │
│ ❱ 463 │   │   sess.initialize_session(providers, provider_options, disabled_optimizers)                              │
│   464                                                                                                                │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
RuntimeError: D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:743
onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded. Please install
the correct version of CUDA and cuDNN as mentioned in the GPU requirements page
(https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the
PATH, and that your GPU is supported.

01:01:38-636041 INFO     Processed: images=1 time=21.98 its=1.36 memory={'ram': {'used': 7.3, 'total': 31.92}, 'gpu':
                         {'used': 3.99, 'total': 10.0}, 'retries': 0, 'oom': 0}
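(Note, not from the original log: LoadLibrary error 126 when opening onnxruntime_providers_cuda.dll usually means a DLL it depends on, such as the CUDA or cuDNN runtime, cannot be found on PATH. A minimal sketch to reproduce this outside the webui, using the DLL path from the log above:)

# sketch: load the CUDA provider DLL directly with ctypes (Windows only)
import ctypes

dll = r"D:\AI\Vladimir\venv\lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"
try:
    ctypes.WinDLL(dll)
    print("provider DLL loaded")
except OSError as e:
    # error 126 here points at a missing dependent CUDA/cuDNN DLL, not at the extension itself
    print("provider DLL failed to load:", e)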

Version Platform Description

Using VENV: D:\AI\Vladimir\venv
00:59:33-526183 INFO     Starting SD.Next
00:59:33-529184 INFO     Python 3.10.9 on Windows
00:59:33-649708 INFO     Version: app=sd.next updated=2023-11-13 hash=f1862579
                         url=https://github.com/vladmandic/automatic//tree/master
00:59:34-212508 INFO     Platform: arch=AMD64 cpu=AMD64 Family 23 Model 113 Stepping 0, AuthenticAMD system=Windows
                         release=Windows-10-10.0.22621-SP0 python=3.10.9
00:59:34-216514 INFO     nVidia CUDA toolkit detected: nvidia-smi present
00:59:34-296027 INFO     Extensions: disabled=[]
00:59:34-298027 INFO     Extensions: enabled=['Lora', 'sd-extension-chainner', 'sd-extension-system-info',
                         'sd-webui-agent-scheduler', 'sd-webui-controlnet', 'stable-diffusion-webui-images-browser',
                         'stable-diffusion-webui-rembg'] extensions-builtin
00:59:34-301028 INFO     Extensions: enabled=['a1111-sd-webui-lycoris', 'adetailer', 'sd-webui-infinite-image-browsing',
                         'sd-webui-reactor'] extensions
00:59:34-304029 INFO     Startup: standard
00:59:34-305027 INFO     Verifying requirements
00:59:34-318033 INFO     Verifying packages
00:59:34-320032 INFO     Verifying submodules
01:00:07-169111 INFO     Extension installed packages: sd-webui-reactor ['onnxruntime-gpu==1.16.2']
01:00:07-170110 INFO     Extensions enabled: ['Lora', 'sd-extension-chainner', 'sd-extension-system-info',
                         'sd-webui-agent-scheduler', 'sd-webui-controlnet', 'stable-diffusion-webui-images-browser',
                         'stable-diffusion-webui-rembg', 'a1111-sd-webui-lycoris', 'adetailer',
                         'sd-webui-infinite-image-browsing', 'sd-webui-reactor']
01:00:07-172111 INFO     Verifying requirements
01:00:07-191113 INFO     Extension preload: {'extensions-builtin': 0.0, 'extensions': 0.0}
01:00:07-193112 INFO     Command line args: []
01:00:13-527368 INFO     Load packages: torch=2.1.1+cu121 diffusers=0.23.0 gradio=3.43.2
01:00:14-032282 INFO     Engine: backend=Backend.ORIGINAL compute=cuda mode=no_grad device=cuda
                         cross-optimization="Scaled-Dot-Product"
01:00:14-082806 INFO     Device: device=NVIDIA GeForce RTX 3080 n=1 arch=sm_90 cap=(8, 6) cuda=12.1 cudnn=8801
                         driver=546.01
01:00:14-764556 INFO     Available VAEs: path="models\VAE" items=0
01:00:14-766556 INFO     Disabling uncompatible extensions: backend=Backend.ORIGINAL []
01:00:14-774556 INFO     Available models: path="models\Stable-diffusion" items=3 time=0.01
01:00:16-848453 INFO     Extension: script='extensions-builtin\sd-webui-agent-scheduler\scripts\task_scheduler.py' Using
                         sqlite file: extensions-builtin\sd-webui-agent-scheduler\task_scheduler.sqlite3
01:00:17-298539 INFO     Extension: script='extensions-builtin\sd-webui-controlnet\scripts\controlnet.py' Warning:
                         ControlNet failed to load SGM - will use LDM instead.
01:00:17-300541 INFO     Extension: script='extensions-builtin\sd-webui-controlnet\scripts\controlnet.py' ControlNet
                         preprocessor location:
                         D:\AI\Vladimir\extensions-builtin\sd-webui-controlnet\annotator\downloads
01:00:17-310539 INFO     Extension: script='extensions-builtin\sd-webui-controlnet\scripts\hook.py' Warning: ControlNet
                         failed to load SGM - will use LDM instead.
01:00:20-187447 INFO     Extension: script='extensions\adetailer\scripts\!adetailer.py' [-] ADetailer initialized.
                         version: 23.11.0, num models: 9
01:00:20-639085 INFO     Extensions time: 5.64 { Lora=1.02 sd-extension-chainner=0.10 sd-extension-system-info=0.05
                         sd-webui-agent-scheduler=0.66 sd-webui-controlnet=0.47
                         stable-diffusion-webui-images-browser=0.16 stable-diffusion-webui-rembg=0.85
                         a1111-sd-webui-lycoris=0.20 adetailer=1.66 sd-webui-infinite-image-browsing=0.14
                         sd-webui-reactor=0.31 }
01:00:20-908637 INFO     Load UI theme: name="black-teal" style=Auto base=style.css
01:00:23-389251 INFO     Local URL: http://127.0.0.1:7860/
01:00:23-390250 INFO     Initializing middleware
01:00:23-883830 INFO     [AgentScheduler] Task queue is empty
01:00:23-886830 INFO     [AgentScheduler] Registering APIs
01:00:24-043355 INFO     Startup time: 16.85 { torch=5.63 gradio=0.67 libraries=1.23 extensions=5.64 face-restore=0.22
                         upscalers=0.14 ui-extra-networks=0.26 ui-txt2img=0.08 ui-img2img=0.12 ui-models=0.25
                         ui-settings=0.21 ui-extensions=1.29 ui-defaults=0.07 launch=0.27 api=0.09 app-started=0.56 }
01:00:31-162951 INFO     MOTD: N/A
01:00:36-890294 INFO     Browser session: client=127.0.0.1 agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0)
                         Gecko/20100101 Firefox/119.0
01:01:06-008983 INFO     Select: model="Model15-photon_v1 [ec41bd2a82]"
Loading weights: D:\AI\Vladimir\models\Stable-diffusion\Model15-photon_v1.safetensors ━━━━━━━━━━━━━━━ 0.0/2.1 GB -:--:--
01:01:06-064986 INFO     Setting Torch parameters: device=cuda dtype=torch.bfloat16 vae=torch.bfloat16
                         unet=torch.bfloat16 context=no_grad fp16=False bf16=True

URL link of the extension

https://github.com/Gourieff/sd-webui-reactor-force

URL link of the issue reported in the extension repository

Reactor Force does not have an Issues page... strange.

Acknowledgements

brknsoul commented 11 months ago

Perhaps you missed this? [image]

https://github.com/Gourieff/sd-webui-reactor

strouder commented 11 months ago

> Perhaps you missed this? [image]
>
> https://github.com/Gourieff/sd-webui-reactor

Sorry, I will post the issue there. That link is what I used to install Reactor in SD.Next.

vladmandic commented 11 months ago

this is deep inside onnx. first, make sure it's correctly installed (and the right version), as something may have changed it: run webui --reinstall. but other than that, i don't have much in the way of suggestions - troubleshooting the onnxruntime used by reactor-force is not something i can do.
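(Hedged follow-up, not from the thread: after running webui --reinstall, the CUDA execution provider can be verified independently of the extension by creating a session with it explicitly; "model.onnx" below is a placeholder for any small ONNX file, not a file from this report.)

# sketch: confirm the CUDA execution provider initializes after reinstalling
import onnxruntime as ort

try:
    sess = ort.InferenceSession(
        "model.onnx",  # placeholder path
        providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
    )
    print("active providers:", sess.get_providers())  # CUDAExecutionProvider should be listed first
except Exception as e:
    print("session creation failed:", e)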