Gourieff / sd-webui-reactor

Fast and Simple Face Swap Extension for StableDiffusion WebUI (A1111 SD WebUI, SD WebUI Forge, SD.Next, Cagliostro)
GNU Affero General Public License v3.0

Intel Arc A770 + SD.Next: getting ValueError: This ORT build has ['OpenVINOExecutionProvider', 'CPUExecutionProvider'] enabled. #334

Open redrum-llik opened 9 months ago

redrum-llik commented 9 months ago


What happened?

Hello, I have installed the extension on SD.Next as per the instructions in the README.md. When attempting to use the extension, I get the following exception:

E:\Automatic\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py:451 in                     │
│ _create_inference_session                                                                                           │
│                                                                                                                     │
│    450 │   │   │   self.disable_fallback()                                                                          │
│ >  451 │   │   │   raise ValueError(                                                                                │
│    452 │   │   │   │   f"This ORT build has {available_providers} enabled. "                                        │
└─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
ValueError: This ORT build has ['OpenVINOExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are
required to explicitly set the providers parameter when instantiating InferenceSession. For example,
onnxruntime.InferenceSession(..., providers=['OpenVINOExecutionProvider', 'CPUExecutionProvider'], ...)

I have tried re-installing the dependency (onnxruntime) and force-reinstalling insightface, with no luck. As a quick workaround, I amended site-packages\onnxruntime\capi\onnxruntime_inference_collection.py as follows:

        if not providers and len(available_providers) > 1:
            # self.disable_fallback()
            # raise ValueError(
                # f"This ORT build has {available_providers} enabled. "
                # "Since ORT 1.9, you are required to explicitly set "
                # "the providers parameter when instantiating InferenceSession. For example, "
                # f"onnxruntime.InferenceSession(..., providers={available_providers}, ...)"
            # )
            providers=['CPUExecutionProvider']

The commented-out section is the original code. This change allows the extension to be used. The only reference to this issue I was able to find was posted here: https://github.com/neuralchen/SimSwap/issues/176, in the issues for the SimSwap tool.
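
As an aside, the same fallback can be achieved without editing files inside site-packages (which any package upgrade will silently revert) by wrapping the session constructor and injecting a default providers list. A minimal sketch of the pattern — the `InferenceSession` class here is a stand-in that mimics the ORT 1.9+ behaviour, not the real onnxruntime class, and `with_default_providers` is a hypothetical helper name:

```python
import functools

class InferenceSession:
    """Stand-in mimicking onnxruntime.InferenceSession's ORT 1.9+ check."""
    def __init__(self, model_path, providers=None):
        if not providers:
            raise ValueError("providers must be set explicitly since ORT 1.9")
        self.model_path = model_path
        self.providers = providers

def with_default_providers(cls, default=("CPUExecutionProvider",)):
    """Patch a session class so an unspecified providers falls back to CPU."""
    original_init = cls.__init__

    @functools.wraps(original_init)
    def patched_init(self, *args, providers=None, **kwargs):
        # Inject the default only when the caller passed nothing
        original_init(self, *args, providers=providers or list(default), **kwargs)

    cls.__init__ = patched_init
    return cls

with_default_providers(InferenceSession)
session = InferenceSession("inswapper_128.onnx")
print(session.providers)  # → ['CPUExecutionProvider']
```

An explicit providers argument still wins, so callers that already pass `['OpenVINOExecutionProvider', ...]` are unaffected; only the bare `InferenceSession(model_path)` calls fall back to CPU.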

Please let me know if you would like any further details from me.

Steps to reproduce the problem

  1. Install extension
  2. Select single source image, leave the rest of settings as default
  3. Generate an image
  4. Navigate to console and witness the exception

Sysinfo

Windows 11, Chrome (latest), Intel Arc A770 + SD.Next (latest as of 21.01.2024). Other extensions: sd-webui-civbrowser.

Relevant console log

17:43:24-513499 INFO     Starting SD.Next
17:43:24-520012 INFO     Logger: file="E:\Automatic\sdnext.log" level=INFO size=1342668 mode=append
17:43:24-523011 INFO     Python 3.10.6 on Windows
17:43:24-724536 INFO     Version: app=sd.next updated=2024-01-13 hash=5fb290f4
                         url=https://github.com/vladmandic/automatic.git/tree/master
17:43:25-257492 INFO     Platform: arch=AMD64 cpu=Intel64 Family 6 Model 158 Stepping 10, GenuineIntel system=Windows
                         release=Windows-10-10.0.19045-SP0 python=3.10.6
17:43:25-312312 INFO     Using CPU-only Torch
17:43:25-386503 WARNING  Deleted files: ['modules/lora', '"blonde girl smiling at camera minima2023 minima2023
                         style.jpg"']
17:43:25-460470 INFO     Extensions: disabled=[]
17:43:25-463471 INFO     Extensions: enabled=['Lora', 'sd-extension-chainner', 'sd-extension-system-info',
                         'sd-webui-agent-scheduler', 'sd-webui-controlnet', 'stable-diffusion-webui-images-browser',
                         'stable-diffusion-webui-rembg'] extensions-builtin
17:43:25-471979 INFO     Extensions: enabled=['sd-webui-civbrowser', 'sd-webui-reactor'] extensions
17:43:25-475978 INFO     Startup: standard
17:43:25-476979 INFO     Verifying requirements
17:43:25-482978 INFO     Verifying packages
17:43:25-486979 INFO     Verifying submodules
17:43:44-047437 INFO     Extensions enabled: ['Lora', 'sd-extension-chainner', 'sd-extension-system-info',
                         'sd-webui-agent-scheduler', 'sd-webui-controlnet', 'stable-diffusion-webui-images-browser',
                         'stable-diffusion-webui-rembg', 'sd-webui-civbrowser', 'sd-webui-reactor']
17:43:44-053437 INFO     Verifying requirements
17:43:44-091437 INFO     Extension preload: {'extensions-builtin': 0.02, 'extensions': 0.0}
17:43:44-095438 INFO     Command line args: []
17:43:54-254274 INFO     Load packages: torch=2.0.0a0+gite9ebda2 diffusers=0.25.0 gradio=3.43.2
17:43:55-491822 INFO     Engine: backend=Backend.ORIGINAL compute=ipex mode=no_grad device=xpu
                         cross-optimization="Sub-quadratic"
17:43:55-498821 INFO     Device: device=Intel(R) Arc(TM) A770 Graphics n=1 ipex=2.0.110+gitc6ea20b
17:44:03-220400 INFO     Available VAEs: path="models\VAE" items=0
17:44:03-224391 INFO     Disabled extensions: []
17:44:03-233008 INFO     Available models: path="models\Stable-diffusion" items=6 time=0.01
17:44:08-871314 INFO     Extension: script='extensions-builtin\sd-webui-agent-scheduler\scripts\task_scheduler.py'
                         Using sqlite file: extensions-builtin\sd-webui-agent-scheduler\task_scheduler.sqlite3
17:44:09-325670 INFO     Extension: script='extensions-builtin\sd-webui-controlnet\scripts\api.py' ControlNet
                         preprocessor location: E:\Automatic\extensions-builtin\sd-webui-controlnet\annotator\downloads
17:44:09-656671 INFO     Extension: script='extensions-builtin\sd-webui-controlnet\scripts\controlnet.py' Warning:
                         ControlNet failed to load SGM - will use LDM instead.
17:44:09-678671 INFO     Extension: script='extensions-builtin\sd-webui-controlnet\scripts\hook.py' Warning: ControlNet
                         failed to load SGM - will use LDM instead.
17:44:10-362007 INFO     Extensions time: 6.04 { Automatic=1.87 Lora=1.96 sd-extension-chainner=0.09
                         sd-webui-agent-scheduler=0.58 sd-webui-controlnet=0.82
                         stable-diffusion-webui-images-browser=0.43 stable-diffusion-webui-rembg=0.07
                         sd-webui-civbrowser=0.07 sd-webui-reactor=0.10 }
17:44:11-888461 INFO     Load UI theme: name="invoked" style=Auto base=sdnext.css
17:44:17-616775 INFO     Local URL: http://127.0.0.1:7860/
17:44:17-622776 INFO     Initializing middleware
17:44:17-903057 INFO     [AgentScheduler] Task queue is empty
17:44:17-909058 INFO     [AgentScheduler] Registering APIs
17:44:18-165017 INFO     Startup time: 34.06 { torch=8.19 gradio=1.88 diffusers=0.08 libraries=8.96 extensions=6.04
                         face-restore=1.08 upscalers=0.19 extra-networks=1.34 ui-extra-networks=0.78 ui-txt2img=0.21
                         ui-img2img=0.20 ui-train=0.05 ui-models=0.05 ui-interrogate=0.08 ui-settings=0.49
                         ui-extensions=2.76 ui-defaults=0.47 launch=0.62 api=0.15 app-started=0.39 }
17:44:59-867787 INFO     MOTD: N/A
17:45:06-906897 INFO     Browser session: user=None client=127.0.0.1 agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64)
                         AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36
17:46:03-675740 INFO     Select: model="cyberrealistic_v41BackToBasics [925bd947d7]"
Loading model: E:\Automatic\models\Stable-diffusion\cyberrealistic_v41BackToBasics.safetensors -------- 0.0/4.3 -:--:--
                                                                                                        GB
17:46:03-889505 INFO     Setting Torch parameters: device=xpu dtype=torch.bfloat16 vae=torch.bfloat16
                         unet=torch.bfloat16 context=no_grad fp16=False bf16=True
17:46:10-164458 INFO     LDM: LatentDiffusion: mode=eps
17:46:10-170462 INFO     LDM: DiffusionWrapper params=859.52M
17:46:10-173460 INFO     Autodetect: model="Stable Diffusion" class=StableDiffusionPipeline
                         file="E:\Automatic\models\Stable-diffusion\cyberrealistic_v41BackToBasics.safetensors"
                         size=4068MB
17:46:16-361664 INFO     Applied IPEX Optimize.
17:46:16-364665 INFO     Cross-attention: optimization=Sub-quadratic options=[]
17:46:56-388834 INFO     Load embeddings: loaded=12 skipped=0 time=40.02
17:46:56-394833 INFO     Model loaded in 52.71 { load=0.07 config=0.16 create=6.26 apply=1.27 vae=3.79 move=1.07
                         hijack=0.06 embeddings=40.02 }
17:46:56-732548 INFO     Model load finished: {'ram': {'used': 6.49, 'total': 31.94}, 'gpu': {'used': 2.03, 'total':
                         15.56}, 'retries': 0, 'oom': 0} cached=0
Progress 1.56it/s ---------------------------------------- 100% 0:00:00 0:00:12
17:47:17-172606 INFO     Processed: images=1 time=20.16 its=0.99 memory={'ram': {'used': 3.95, 'total': 31.94}, 'gpu':
                         {'used': 2.83, 'total': 15.56}, 'retries': 0, 'oom': 0}
17:47:17-185605 TRACE    Working: source face index [0], target face index [0]
17:47:17-222605 TRACE    Analyzing Source Image 0...
17:47:18-843278 ERROR    Running script postprocess: extensions\sd-webui-reactor\scripts\reactor_faceswap.py:
                         ValueError
┌───────────────────────────────────────── Traceback (most recent call last) ─────────────────────────────────────────┐
│ E:\Automatic\modules\scripts.py:551 in postprocess                                                                  │
│                                                                                                                     │
│   550 │   │   │   │   args = p.per_script_args.get(script.title(), p.script_args[script.args_f                      │
│ > 551 │   │   │   │   script.postprocess(p, processed, *args)                                                       │
│   552 │   │   │   except Exception as e:                                                                            │
│                                                                                                                     │
│ E:\Automatic\extensions\sd-webui-reactor\scripts\reactor_faceswap.py:306 in postprocess                             │
│                                                                                                                     │
│   305 │   │   │   │   │   │   │   logger.status("Swap in %s", i)                                                    │
│ > 306 │   │   │   │   │   │   result, output, swapped = swap_face(                                                  │
│   307 │   │   │   │   │   │   │   self.source,                                                                      │
│                                                                                                                     │
│ E:\Automatic\extensions\sd-webui-reactor\scripts\reactor_swapper.py:415 in swap_face                                │
│                                                                                                                     │
│   414 │   │   │   │   │   │   │   logger.status(f"Analyzing Source Image {i}...")                                   │
│ > 415 │   │   │   │   │   │   │   source_faces = analyze_faces(source_image)                                        │
│   416 │   │   │   │   │   │   │   SOURCE_FACES_LIST = [source_faces]                                                │
│                                                                                                                     │
│ E:\Automatic\extensions\sd-webui-reactor\scripts\reactor_swapper.py:274 in analyze_faces                            │
│                                                                                                                     │
│   273 │   logger.info("Applied Execution Provider: %s", PROVIDERS[0])                                               │
│ > 274 │   face_analyser = copy.deepcopy(getAnalysisModel())                                                         │
│   275 │   face_analyser.prepare(ctx_id=0, det_size=det_size)                                                        │
│                                                                                                                     │
│ C:\Users\Rumi\AppData\Local\Programs\Python\Python38\lib\copy.py:172 in deepcopy                                    │
│                                                                                                                     │
│   171 │   │   │   │   else:                                                                                         │
│ > 172 │   │   │   │   │   y = _reconstruct(x, memo, *rv)                                                            │
│   173                                                                                                               │
│                                                                                                                     │
│                                              ... 11 frames hidden ...                                               │
│                                                                                                                     │
│ E:\Automatic\venv\lib\site-packages\insightface\model_zoo\model_zoo.py:33 in __setstate__                           │
│                                                                                                                     │
│   32 │   │   model_path = values['model_path']                                                                      │
│ > 33 │   │   self.__init__(model_path)                                                                              │
│   34                                                                                                                │
│                                                                                                                     │
│ E:\Automatic\venv\lib\site-packages\insightface\model_zoo\model_zoo.py:25 in __init__                               │
│                                                                                                                     │
│   24 │   def __init__(self, model_path, **kwargs):                                                                  │
│ > 25 │   │   super().__init__(model_path, **kwargs)                                                                 │
│   26 │   │   self.model_path = model_path                                                                           │
│                                                                                                                     │
│ E:\Automatic\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py:432 in __init__            │
│                                                                                                                     │
│    431 │   │   │   # Fallback is disabled. Raise the original error.                                                │
│ >  432 │   │   │   raise e                                                                                          │
│    433                                                                                                              │
│                                                                                                                     │
│ E:\Automatic\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py:419 in __init__            │
│                                                                                                                     │
│    418 │   │   try:                                                                                                 │
│ >  419 │   │   │   self._create_inference_session(providers, provider_options, disabled_optimiz                     │
│    420 │   │   except (ValueError, RuntimeError) as e:                                                              │
│                                                                                                                     │
│ E:\Automatic\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py:451 in                     │
│ _create_inference_session                                                                                           │
│                                                                                                                     │
│    450 │   │   │   self.disable_fallback()                                                                          │
│ >  451 │   │   │   raise ValueError(                                                                                │
│    452 │   │   │   │   f"This ORT build has {available_providers} enabled. "                                        │
└─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
ValueError: This ORT build has ['OpenVINOExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are
required to explicitly set the providers parameter when instantiating InferenceSession. For example,
onnxruntime.InferenceSession(..., providers=['OpenVINOExecutionProvider', 'CPUExecutionProvider'], ...)

Additional information

No response

Gourieff commented 9 months ago

ReActor doesn't support OpenVINO yet. Try to uninstall onnxruntime-openvino and install onnxruntime instead (latest 1.16.3)
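
Assuming pip is run from the WebUI's own virtual environment (e.g. `E:\Automatic\venv`), the package swap above would look like this; the exact version pin is taken from the comment and may need adjusting for your setup:

```shell
# Activate the WebUI venv first, then replace the OpenVINO build of ORT
pip uninstall -y onnxruntime-openvino
pip install onnxruntime==1.16.3
```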

azamet90 commented 9 months ago

onnxruntime-openvino

In my case, if I uninstall it, Stable Diffusion won't work at all (AMD GPU)