Is there an existing issue for this?
[X] I have searched the existing issues and checked the recent builds/commits
What happened?
After installing the Inpaint Anything extension and restarting WebUI, WebUI can no longer start and an error is displayed in the console.
Steps to reproduce the problem
Go to the Extensions tab and install the Inpaint Anything extension
Restart WebUI via the button in the Extensions tab
WebUI can no longer start, and an error is displayed in the console log:
RuntimeError: Failed to import optimum.onnxruntime.modeling_ort because of the following error (look up to see its traceback):
Failed to import optimum.exporters.onnx.main because of the following error (look up to see its traceback):
We found an older version of diffusers 0.16.1 but we require diffusers to be >= 0.18.0. Please update diffusers by running pip install --upgrade diffusers
Stable diffusion model failed to load
Applying attention optimization: InvokeAI... done.
Drücken Sie eine beliebige Taste . . . ("Press any key . . .")
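For reference, the version gate that raises this error is an ordinary package-version comparison. Below is a minimal, illustrative sketch of checking the installed diffusers version against the 0.18.0 minimum before launching; `parse` and `diffusers_ok` are hypothetical helpers, not the actual check optimum performs:

```python
from importlib.metadata import PackageNotFoundError, version

REQUIRED = (0, 18, 0)  # minimum diffusers version optimum demands

def parse(v: str) -> tuple:
    # Naive "X.Y.Z" parser for illustration; ignores pre-release suffixes.
    return tuple(int(p) for p in v.split(".")[:3] if p.isdigit())

def diffusers_ok(required: tuple = REQUIRED) -> bool:
    """True if an installed diffusers package meets the minimum version."""
    try:
        return parse(version("diffusers")) >= required
    except PackageNotFoundError:
        return False
```

In the environment above, `parse("0.16.1")` gives `(0, 16, 1)`, which fails the `>= (0, 18, 0)` comparison. Note that blindly running `pip install --upgrade diffusers`, as the message suggests, may in turn conflict with the diffusers version this fork installs on purpose.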
What should have happened?
Restarting should work. After restarting WebUI, the Inpaint Anything tab should be available and usable.
Sysinfo
Please note: the sysinfo is from before installing the Inpaint Anything extension.
Console logs
(Automatic1111) D:\AI\A1111_dml\stable-diffusion-webui-directml>webui.bat --onnx --backend directml --medvram
venv "D:\AI\A1111_dml\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.6 | packaged by conda-forge | (main, Oct 24 2022, 16:02:16) [MSC v.1916 64 bit (AMD64)]
Version: 1.6.0
Commit hash: 7b0b721837e4e5324d64f4073b64bbd3da0755e7
Installing onnxruntime
Installing onnxruntime-directml
WARNING! Because Olive optimization does not support torch 2.0, some packages will be downgraded and it can occur version mismatches between packages. (Strongly recommend to create another virtual environment to run Olive)
Installing Olive
Launching Web UI with arguments: --onnx --backend directml --medvram
no module 'xformers'. Processing without...
No SDP backend available, likely because you are running in pytorch versions < 2.0. In fact, you are using PyTorch 1.13.1+cpu. You might want to consider upgrading.
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
==============================================================================
You are running torch 1.13.1+cpu.
The program is tested to work with torch 2.0.0.
To reinstall the desired version, run with commandline flag --reinstall-torch.
Beware that this will cause a lot of large files to be downloaded, as well as
there are reports of issues with training tab on the latest version.
Use --skip-version-check commandline argument to disable this check.
==============================================================================
2023-09-26 12:47:48,282 - ControlNet - INFO - ControlNet v1.1.410
ControlNet preprocessor location: D:\AI\A1111_dml\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\annotator\downloads
2023-09-26 12:47:48,412 - ControlNet - INFO - ControlNet v1.1.410
Model Photon [Optimized] loaded.
Applying attention optimization: InvokeAI... done.
D:\AI\A1111_dml\stable-diffusion-webui-directml\modules\ui.py:1665: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
with gr.Row().style(equal_height=False):
D:\AI\A1111_dml\stable-diffusion-webui-directml\modules\ui.py:1788: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
with gr.Row().style(equal_height=False):
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 14.3s (prepare environment: 5.6s, import torch: 3.9s, import gradio: 2.3s, setup paths: 2.3s, initialize shared: 2.5s, other imports: 0.3s, load scripts: 1.4s, create ui: 0.7s, gradio launch: 0.2s).
Installing requirements for segment_anything
Installing requirements for lama_cleaner
Installing requirements for ultralytics
fatal: No names found, cannot describe anything.
Python 3.10.6 | packaged by conda-forge | (main, Oct 24 2022, 16:02:16) [MSC v.1916 64 bit (AMD64)]
Version: 1.6.0
Commit hash: 7b0b721837e4e5324d64f4073b64bbd3da0755e7
Installing requirements
Installing onnxruntime
Installing onnxruntime-directml
WARNING! Because Olive optimization does not support torch 2.0, some packages will be downgraded and it can occur version mismatches between packages. (Strongly recommend to create another virtual environment to run Olive)
Installing Olive
Launching Web UI with arguments: --onnx --backend directml --medvram
no module 'xformers'. Processing without...
No SDP backend available, likely because you are running in pytorch versions < 2.0. In fact, you are using PyTorch 1.13.1+cpu. You might want to consider upgrading.
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
==============================================================================
You are running torch 1.13.1+cpu.
The program is tested to work with torch 2.0.0.
To reinstall the desired version, run with commandline flag --reinstall-torch.
Beware that this will cause a lot of large files to be downloaded, as well as
there are reports of issues with training tab on the latest version.
Use --skip-version-check commandline argument to disable this check.
==============================================================================
2023-09-26 12:49:54,843 - ControlNet - INFO - ControlNet v1.1.410
ControlNet preprocessor location: D:\AI\A1111_dml\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\annotator\downloads
2023-09-26 12:49:54,946 - ControlNet - INFO - ControlNet v1.1.410
Traceback (most recent call last):
File "D:\AI\A1111_dml\stable-diffusion-webui-directml\launch.py", line 48, in <module>
main()
File "D:\AI\A1111_dml\stable-diffusion-webui-directml\launch.py", line 44, in main
start()
File "D:\AI\A1111_dml\stable-diffusion-webui-directml\modules\launch_utils.py", line 480, in start
webui.webui()
File "D:\AI\A1111_dml\stable-diffusion-webui-directml\webui.py", line 64, in webui
shared.demo = ui.create_ui()
File "D:\AI\A1111_dml\stable-diffusion-webui-directml\modules\ui.py", line 1662, in create_ui
from modules.sd_onnx_ui import download_from_huggingface, save_device_map
File "D:\AI\A1111_dml\stable-diffusion-webui-directml\modules\sd_onnx_ui.py", line 4, in <module>
from modules.sd_onnx import device_map
ImportError: cannot import name 'device_map' from 'modules.sd_onnx' (D:\AI\A1111_dml\stable-diffusion-webui-directml\modules\sd_onnx.py)
loading stable diffusion model: RuntimeError
Traceback (most recent call last):
File "D:\AI\A1111_dml\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\utils\import_utils.py", line 1130, in _get_module
return importlib.import_module("." + module_name, self.__name__)
File "C:\Users\Radek\miniconda3\envs\Automatic1111\lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "D:\AI\A1111_dml\stable-diffusion-webui-directml\venv\lib\site-packages\optimum\exporters\onnx\__main__.py", line 32, in <module>
from .convert import export_models, validate_models_outputs
File "D:\AI\A1111_dml\stable-diffusion-webui-directml\venv\lib\site-packages\optimum\exporters\onnx\convert.py", line 41, in <module>
from .utils import PickableInferenceSession, recursive_to_device, recursive_to_dtype
File "D:\AI\A1111_dml\stable-diffusion-webui-directml\venv\lib\site-packages\optimum\exporters\onnx\utils.py", line 41, in <module>
raise ImportError(
ImportError: We found an older version of diffusers 0.16.1 but we require diffusers to be >= 0.18.0. Please update diffusers by running `pip install --upgrade diffusers`
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "D:\AI\A1111_dml\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\utils\import_utils.py", line 1130, in _get_module
return importlib.import_module("." + module_name, self.__name__)
File "C:\Users\Radek\miniconda3\envs\Automatic1111\lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "D:\AI\A1111_dml\stable-diffusion-webui-directml\venv\lib\site-packages\optimum\onnxruntime\modeling_ort.py", line 60, in <module>
from ..exporters.onnx import main_export
File "<frozen importlib._bootstrap>", line 1075, in _handle_fromlist
File "D:\AI\A1111_dml\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\utils\import_utils.py", line 1120, in __getattr__
module = self._get_module(self._class_to_module[name])
File "D:\AI\A1111_dml\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\utils\import_utils.py", line 1132, in _get_module
raise RuntimeError(
RuntimeError: Failed to import optimum.exporters.onnx.__main__ because of the following error (look up to see its traceback):
We found an older version of diffusers 0.16.1 but we require diffusers to be >= 0.18.0. Please update diffusers by running `pip install --upgrade diffusers`
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\Radek\miniconda3\envs\Automatic1111\lib\threading.py", line 973, in _bootstrap
self._bootstrap_inner()
File "C:\Users\Radek\miniconda3\envs\Automatic1111\lib\threading.py", line 1016, in _bootstrap_inner
self.run()
File "C:\Users\Radek\miniconda3\envs\Automatic1111\lib\threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "D:\AI\A1111_dml\stable-diffusion-webui-directml\modules\initialize.py", line 147, in load_model
shared.sd_model # noqa: B018
File "D:\AI\A1111_dml\stable-diffusion-webui-directml\modules\shared_items.py", line 110, in sd_model
return modules.sd_models.model_data.get_sd_model()
File "D:\AI\A1111_dml\stable-diffusion-webui-directml\modules\sd_models.py", line 523, in get_sd_model
load_model()
File "D:\AI\A1111_dml\stable-diffusion-webui-directml\modules\sd_models.py", line 625, in load_model
return load_onnx_model(checkpoint_info, already_loaded_state_dict=already_loaded_state_dict)
File "D:\AI\A1111_dml\stable-diffusion-webui-directml\modules\sd_models.py", line 553, in load_onnx_model
from modules.sd_onnx_models import ONNXStableDiffusionModel
File "D:\AI\A1111_dml\stable-diffusion-webui-directml\modules\sd_onnx_models.py", line 5, in <module>
from modules.sd_onnx import BaseONNXModel, device_map
File "D:\AI\A1111_dml\stable-diffusion-webui-directml\modules\sd_onnx.py", line 20, in <module>
from modules.sd_onnx_hijack import do_hijack
File "D:\AI\A1111_dml\stable-diffusion-webui-directml\modules\sd_onnx_hijack.py", line 7, in <module>
import optimum.pipelines.diffusers.pipeline_stable_diffusion_xl
File "D:\AI\A1111_dml\stable-diffusion-webui-directml\venv\lib\site-packages\optimum\pipelines\__init__.py", line 16, in <module>
from .pipelines_base import (
File "D:\AI\A1111_dml\stable-diffusion-webui-directml\venv\lib\site-packages\optimum\pipelines\pipelines_base.py", line 53, in <module>
from ..onnxruntime import (
File "<frozen importlib._bootstrap>", line 1075, in _handle_fromlist
File "D:\AI\A1111_dml\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\utils\import_utils.py", line 1120, in __getattr__
module = self._get_module(self._class_to_module[name])
File "D:\AI\A1111_dml\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\utils\import_utils.py", line 1132, in _get_module
raise RuntimeError(
RuntimeError: Failed to import optimum.onnxruntime.modeling_ort because of the following error (look up to see its traceback):
Failed to import optimum.exporters.onnx.__main__ because of the following error (look up to see its traceback):
We found an older version of diffusers 0.16.1 but we require diffusers to be >= 0.18.0. Please update diffusers by running `pip install --upgrade diffusers`
Stable diffusion model failed to load
Applying attention optimization: InvokeAI... done.
Drücken Sie eine beliebige Taste . . . ("Press any key . . .")
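Both import failures in this log can be probed from a plain Python session before launching the full UI. A hedged sketch, assuming it is run from the webui root inside the venv; the (module, attribute) pairs come from the tracebacks above, and `probe` is an illustrative helper, not part of the webui codebase:

```python
import importlib

# (module, attribute) pairs the startup path tries to import,
# taken from the tracebacks above
CHECKS = [
    ("modules.sd_onnx", "device_map"),
    ("optimum.onnxruntime", "modeling_ort"),
]

def probe(module: str, name: str) -> str:
    """Report whether `name` can be reached on `module` without crashing."""
    try:
        mod = importlib.import_module(module)
        present = hasattr(mod, name)  # may itself raise on lazy modules
    except Exception as exc:  # optimum re-raises ImportError as RuntimeError
        return f"{module}.{name}: failed ({exc})"
    return f"{module}.{name}: {'present' if present else 'MISSING'}"

for module, name in CHECKS:
    print(probe(module, name))
```

In the reporter's environment, the first check should report that `device_map` is missing from `modules.sd_onnx`, and the second should fail with the diffusers version error, matching the two tracebacks above.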
Sysinfo attachment: sysinfo-2023-09-26-08-14.txt
What browsers do you use to access the UI?
No response
Additional information
No response