You have nvidia-smi. Please remove the NVIDIA driver if you don't have an NVIDIA card.
According to your log, there's no --use-directml in your command-line arguments.
Try this:
.\venv\Scripts\activate
python launch.py --use-directml [put other commandline arguments here]
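If you want the flag to persist across launches, it can also go in webui-user.bat; a minimal sketch (keep whatever other arguments you already use):

@echo off
set COMMANDLINE_ARGS=--use-directml
call webui.bat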
A similar error appeared after today's update. Everything worked fine yesterday, but I can't install "onnxruntime-directml":
Installing collected packages: onnxruntime-directml
stderr: ERROR: Could not install packages due to an OSError: [WinError 5] : 'C:\StableDiffuionWebUI\stable-diffusion-webui-directml\venv\Lib\site-packages\onnxruntime\capi\onnxruntime_providers_shared.dll' Check the permissions.
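[WinError 5] on onnxruntime_providers_shared.dll usually means the file is locked or write-protected, typically because another webui instance is still running. A hedged fix, not confirmed in this thread: close all running instances, then reinstall the package inside the venv:

.\venv\Scripts\activate
pip install --force-reinstall onnxruntime-directml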
RX 580 8 GB. Update: I solved my problem; the torch-directml component was not working. I reinstalled it, and everything worked again.
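For anyone hitting the same thing, "reinstalled it" presumably amounts to something like this inside the venv (a sketch, not the poster's exact commands):

.\venv\Scripts\activate
pip uninstall -y torch-directml
pip install torch-directml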
I do have the NVIDIA driver installed, but ONNX starts anyway. Is that OK?
Creating venv in directory F:\IL\4\stable-diffusion-webui-directml\venv using python "C:\Users\Polzovatel\AppData\Local\Programs\Python\Python310\python.exe"
venv "F:\IL\4\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: 1.7.0
Commit hash: adaea46e1c19d9a7091f89b0a7c6e66dfa732528
Installing torch and torchvision
Collecting torch==2.0.0
  Using cached torch-2.0.0-cp310-cp310-win_amd64.whl (172.3 MB)
Collecting torchvision==0.15.1
  Using cached torchvision-0.15.1-cp310-cp310-win_amd64.whl (1.2 MB)
Collecting torch-directml
  Using cached torch_directml-0.2.0.dev230426-cp310-cp310-win_amd64.whl (8.2 MB)
Collecting typing-extensions
  Using cached typing_extensions-4.9.0-py3-none-any.whl (32 kB)
Collecting jinja2
  Using cached Jinja2-3.1.3-py3-none-any.whl (133 kB)
Collecting sympy
  Using cached sympy-1.12-py3-none-any.whl (5.7 MB)
Collecting filelock
  Using cached filelock-3.13.1-py3-none-any.whl (11 kB)
Collecting networkx
  Using cached networkx-3.2.1-py3-none-any.whl (1.6 MB)
Collecting numpy
  Using cached numpy-1.26.4-cp310-cp310-win_amd64.whl (15.8 MB)
Collecting requests
  Using cached requests-2.31.0-py3-none-any.whl (62 kB)
Collecting pillow!=8.3.*,>=5.3.0
  Using cached pillow-10.2.0-cp310-cp310-win_amd64.whl (2.6 MB)
Collecting MarkupSafe>=2.0
  Using cached MarkupSafe-2.1.5-cp310-cp310-win_amd64.whl (17 kB)
Collecting urllib3<3,>=1.21.1
  Using cached urllib3-2.2.0-py3-none-any.whl (120 kB)
Collecting charset-normalizer<4,>=2
  Using cached charset_normalizer-3.3.2-cp310-cp310-win_amd64.whl (100 kB)
Collecting idna<4,>=2.5
  Using cached idna-3.6-py3-none-any.whl (61 kB)
Collecting certifi>=2017.4.17
  Using cached certifi-2024.2.2-py3-none-any.whl (163 kB)
Collecting mpmath>=0.19
  Using cached mpmath-1.3.0-py3-none-any.whl (536 kB)
Installing collected packages: mpmath, urllib3, typing-extensions, sympy, pillow, numpy, networkx, MarkupSafe, idna, filelock, charset-normalizer, certifi, requests, jinja2, torch, torchvision, torch-directml
Successfully installed MarkupSafe-2.1.5 certifi-2024.2.2 charset-normalizer-3.3.2 filelock-3.13.1 idna-3.6 jinja2-3.1.3 mpmath-1.3.0 networkx-3.2.1 numpy-1.26.4 pillow-10.2.0 requests-2.31.0 sympy-1.12 torch-2.0.0 torch-directml-0.2.0.dev230426 torchvision-0.15.1 typing-extensions-4.9.0 urllib3-2.2.0
[notice] A new release of pip available: 22.2.1 -> 24.0
[notice] To update, run: F:\IL\4\stable-diffusion-webui-directml\venv\Scripts\python.exe -m pip install --upgrade pip
Installing clip
Installing open_clip
Cloning Stable Diffusion into F:\IL\4\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai...
Cloning into 'F:\IL\4\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai'...
remote: Enumerating objects: 580, done.
remote: Counting objects: 100% (357/357), done.
remote: Compressing objects: 100% (128/128), done.
remote: Total 580 (delta 260), reused 229 (delta 229), pack-reused 223
Receiving objects: 100% (580/580), 73.44 MiB | 16.14 MiB/s, done.
Resolving deltas: 100% (279/279), done.
Cloning Stable Diffusion XL into F:\IL\4\stable-diffusion-webui-directml\repositories\generative-models...
Cloning into 'F:\IL\4\stable-diffusion-webui-directml\repositories\generative-models'...
remote: Enumerating objects: 860, done.
remote: Counting objects: 100% (489/489), done.
remote: Compressing objects: 100% (224/224), done.
remote: Total 860 (delta 368), reused 265 (delta 265), pack-reused 371
Receiving objects: 100% (860/860), 42.67 MiB | 19.69 MiB/s, done.
Resolving deltas: 100% (445/445), done.
Cloning K-diffusion into F:\IL\4\stable-diffusion-webui-directml\repositories\k-diffusion...
Cloning into 'F:\IL\4\stable-diffusion-webui-directml\repositories\k-diffusion'...
remote: Enumerating objects: 1340, done.
remote: Counting objects: 100% (622/622), done.
remote: Compressing objects: 100% (86/86), done.
remote: Total 1340 (delta 576), reused 547 (delta 536), pack-reused 718
Receiving objects: 100% (1340/1340), 242.04 KiB | 1.78 MiB/s, done.
Resolving deltas: 100% (939/939), done.
Cloning CodeFormer into F:\IL\4\stable-diffusion-webui-directml\repositories\CodeFormer...
Cloning into 'F:\IL\4\stable-diffusion-webui-directml\repositories\CodeFormer'...
remote: Enumerating objects: 594, done.
remote: Counting objects: 100% (594/594), done.
remote: Compressing objects: 100% (316/316), done.
remote: Total 594 (delta 287), reused 493 (delta 269), pack-reused 0
Receiving objects: 100% (594/594), 17.31 MiB | 14.41 MiB/s, done.
Resolving deltas: 100% (287/287), done.
Cloning BLIP into F:\IL\4\stable-diffusion-webui-directml\repositories\BLIP...
Cloning into 'F:\IL\4\stable-diffusion-webui-directml\repositories\BLIP'...
remote: Enumerating objects: 277, done.
remote: Counting objects: 100% (165/165), done.
remote: Compressing objects: 100% (30/30), done.
remote: Total 277 (delta 137), reused 136 (delta 135), pack-reused 112
Receiving objects: 100% (277/277), 7.03 MiB | 3.76 MiB/s, done.
Resolving deltas: 100% (152/152), done.
Installing requirements for CodeFormer
Installing requirements
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: pytorch_lightning.utilities.distributed.rank_zero_only has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from pytorch_lightning.utilities instead.
  rank_zero_deprecation(
Installing onnxruntime-directml
Launching Web UI with arguments: --opt-sub-quad-attention --disable-nan-check --precision autocast --autolaunch --use-directml --listen
Style database not found: F:\IL\4\stable-diffusion-webui-directml\styles.csv
ONNX: selected=DmlExecutionProvider, available=['DmlExecutionProvider', 'CPUExecutionProvider']
Calculating sha256 for F:\IL\4\stable-diffusion-webui-directml\models\Stable-diffusion\airfucksWildMix_v10.safetensors:
Running on local URL: http://0.0.0.0:7860
To create a public link, set share=True in launch().
Startup time: 120.6s (prepare environment: 119.4s, initialize shared: 1.8s, list SD models: 0.2s, load scripts: 1.9s, create ui: 0.4s, gradio launch: 4.3s).
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\encoders.py", line 152, in jsonable_encoder
data = dict(obj)
TypeError: '_abc._abc_data' object is not iterable
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\encoders.py", line 157, in jsonable_encoder
    data = vars(obj)
TypeError: vars() argument must have __dict__ attribute
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 404, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 84, in __call__
    return await self.app(scope, receive, send)
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\applications.py", line 273, in __call__
    await super().__call__(scope, receive, send)
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\applications.py", line 122, in __call__
    await self.middleware_stack(scope, receive, send)
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\middleware\errors.py", line 184, in __call__
    raise exc
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\middleware\errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\middleware\cors.py", line 92, in __call__
    await self.simple_response(scope, receive, send, request_headers=headers)
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\middleware\cors.py", line 147, in simple_response
    await self.app(scope, receive, send)
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\middleware\gzip.py", line 24, in __call__
    await responder(scope, receive, send)
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\middleware\gzip.py", line 44, in __call__
    await self.app(scope, receive, self.send_with_gzip)
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\middleware\exceptions.py", line 79, in __call__
    raise exc
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\middleware\exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 21, in __call__
    raise e
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 18, in __call__
    await self.app(scope, receive, send)
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\routing.py", line 718, in __call__
    await route.handle(scope, receive, send)
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\routing.py", line 66, in app
    response = await func(request)
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\routing.py", line 255, in app
    content = await serialize_response(
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\routing.py", line 152, in serialize_response
    return jsonable_encoder(response_content)
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\encoders.py", line 117, in jsonable_encoder
    encoded_value = jsonable_encoder(
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\encoders.py", line 131, in jsonable_encoder
    jsonable_encoder(
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\encoders.py", line 117, in jsonable_encoder
    encoded_value = jsonable_encoder(
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\encoders.py", line 117, in jsonable_encoder
    encoded_value = jsonable_encoder(
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\encoders.py", line 161, in jsonable_encoder
    return jsonable_encoder(
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\encoders.py", line 161, in jsonable_encoder
    return jsonable_encoder(
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\encoders.py", line 117, in jsonable_encoder
    encoded_value = jsonable_encoder(
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\encoders.py", line 160, in jsonable_encoder
    raise ValueError(errors) from e
ValueError: [TypeError("'_abc._abc_data' object is not iterable"), TypeError('vars() argument must have __dict__ attribute')]
70525c199b353cbe1ee3738079c5322c6bcf33d0d70cbb5ae903960bfefefd42
Loading weights [70525c199b] from F:\IL\4\stable-diffusion-webui-directml\models\Stable-diffusion\airfucksWildMix_v10.safetensors
Creating model from config: F:\IL\4\stable-diffusion-webui-directml\configs\v1-inference.yaml
Applying attention optimization: sub-quadratic... done.
Model loaded in 17.7s (calculate hash: 11.9s, load weights from disk: 0.2s, create model: 0.3s, apply weights to model: 4.9s, calculate empty prompt: 0.2s).
ONNX won't work if you didn't enable it in Settings.
I didn't change anything. I downloaded the commit, and the first thing I see is a lot of errors related to ONNX; I didn't even have time to go into the settings.
I unchecked all the boxes. How can I disable it if it continues to work anyway?
venv "F:\IL\4\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: 1.7.0
Commit hash: adaea46e1c19d9a7091f89b0a7c6e66dfa732528
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: pytorch_lightning.utilities.distributed.rank_zero_only has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from pytorch_lightning.utilities instead.
  rank_zero_deprecation(
Launching Web UI with arguments: --opt-sub-quad-attention --disable-nan-check --precision autocast --autolaunch --use-directml --listen
Style database not found: F:\IL\4\stable-diffusion-webui-directml\styles.csv
ONNX: selected=DmlExecutionProvider, available=['DmlExecutionProvider', 'CPUExecutionProvider']
Loading weights [0dd2276c04] from F:\IL\4\stable-diffusion-webui-directml\models\Stable-diffusion\indigoFurryMix_v105Hybrid.safetensors
Running on local URL: http://0.0.0.0:7860
Creating model from config: F:\IL\4\stable-diffusion-webui-directml\configs\v1-inference.yaml
To create a public link, set share=True in launch().
Startup time: 7.3s (prepare environment: 4.0s, initialize shared: 1.3s, load scripts: 0.9s, create ui: 0.5s, gradio launch: 4.3s).
Applying attention optimization: sub-quadratic... done.
Model loaded in 5.8s (load weights from disk: 0.6s, create model: 0.2s, apply weights to model: 4.3s, apply half(): 0.3s, calculate empty prompt: 0.2s).
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\encoders.py", line 152, in jsonable_encoder
data = dict(obj)
TypeError: '_abc._abc_data' object is not iterable
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\encoders.py", line 157, in jsonable_encoder
    data = vars(obj)
TypeError: vars() argument must have __dict__ attribute
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 404, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 84, in __call__
    return await self.app(scope, receive, send)
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\applications.py", line 273, in __call__
    await super().__call__(scope, receive, send)
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\applications.py", line 122, in __call__
    await self.middleware_stack(scope, receive, send)
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\middleware\errors.py", line 184, in __call__
    raise exc
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\middleware\errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\middleware\cors.py", line 92, in __call__
    await self.simple_response(scope, receive, send, request_headers=headers)
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\middleware\cors.py", line 147, in simple_response
    await self.app(scope, receive, send)
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\middleware\gzip.py", line 24, in __call__
    await responder(scope, receive, send)
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\middleware\gzip.py", line 44, in __call__
    await self.app(scope, receive, self.send_with_gzip)
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\middleware\exceptions.py", line 79, in __call__
    raise exc
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\middleware\exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 21, in __call__
    raise e
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 18, in __call__
    await self.app(scope, receive, send)
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\routing.py", line 718, in __call__
    await route.handle(scope, receive, send)
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\starlette\routing.py", line 66, in app
    response = await func(request)
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\routing.py", line 255, in app
    content = await serialize_response(
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\routing.py", line 152, in serialize_response
    return jsonable_encoder(response_content)
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\encoders.py", line 117, in jsonable_encoder
    encoded_value = jsonable_encoder(
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\encoders.py", line 131, in jsonable_encoder
    jsonable_encoder(
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\encoders.py", line 117, in jsonable_encoder
    encoded_value = jsonable_encoder(
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\encoders.py", line 117, in jsonable_encoder
    encoded_value = jsonable_encoder(
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\encoders.py", line 161, in jsonable_encoder
    return jsonable_encoder(
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\encoders.py", line 161, in jsonable_encoder
    return jsonable_encoder(
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\encoders.py", line 117, in jsonable_encoder
    encoded_value = jsonable_encoder(
  File "F:\IL\4\stable-diffusion-webui-directml\venv\lib\site-packages\fastapi\encoders.py", line 160, in jsonable_encoder
    raise ValueError(errors) from e
ValueError: [TypeError("'_abc._abc_data' object is not iterable"), TypeError('vars() argument must have __dict__ attribute')]
That's all. ONNX gets installed and initialized even though it is disabled.
Is this a bug? I just don't know what to do about it...
Did you mean this one? ONNX: selected=DmlExecutionProvider, available=['DmlExecutionProvider', 'CPUExecutionProvider']
Or the errors from fastapi?
fastapi seems to have a compatibility issue.
Take a look at the screenshots. In the settings panel, selecting a "Stable Diffusion checkpoint" produces an error, and it gets in the way.
I unchecked all the boxes under "ONNX Runtime", clicked the "Apply settings" and "Reload UI" buttons, and completely restarted the WebUI, but the errors remain in the console. It is inconvenient to work with the red "Error" label.
ONNX: selected=DmlExecutionProvider, available=['DmlExecutionProvider', 'CPUExecutionProvider']
Every error in the console is from fastapi, which is outside my control and not critical for general usage. The "Error" label in the UI may be caused by fastapi's errors. You can use the webui and generate images as normal.
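If the errors bother you, one hedged workaround is to check whether the venv picked up a fastapi release newer than the one pinned in the webui's requirements_versions.txt and pin it back; the exact version below is an assumption, not something confirmed in this thread:

.\venv\Scripts\activate
pip install fastapi==0.94.0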
@lshqqytiger where can we find the ONNX and Olive page now to convert models?
You don't need to optimize the models manually. A new model that hasn't been optimized yet will be optimized automatically when you try to generate an image with it. You can find converted/optimized models under ./models/ONNX/cache.
Good to know, although I meant converting models to ONNX. Is that still a thing? (like in the Olive tab before)
ONNX: Failed to load ONNX pipeline: is_sdxl=False
ONNX: You cannot load this model using the pipeline you selected. Please check Diffusers pipeline in ONNX Runtime Settings.
This is what I get when I choose a non-ONNX model with ONNX and Olive enabled; there are no other pipelines to choose.
Edit: fuck, I'm a moron xd. So is it SDXL only? How can I use it with 1.5 models?
Which model did you try? And please attach a full log.
For SD models: use ONNX Stable Diffusion.
For SDXL models: use ONNX Stable Diffusion XL.
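For manual conversion outside the webui, Hugging Face's optimum CLI can also export a diffusers model to ONNX; a minimal sketch, assuming optimum is installed in the venv and using the stock 1.5 repo as an example:

.\venv\Scripts\activate
pip install optimum[onnxruntime]
optimum-cli export onnx --model runwayml/stable-diffusion-v1-5 sd15_onnx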
F:\Stable Diffusion\stable-diffusion-webui-directml new>git pull
Already up to date.
venv "F:\Stable Diffusion\stable-diffusion-webui-directml new\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: 1.7.0
Commit hash: 9e8f9bcf14f68099bb3562488361bd1a8393b2a5
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
F:\Stable Diffusion\stable-diffusion-webui-directml new\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: pytorch_lightning.utilities.distributed.rank_zero_only has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from pytorch_lightning.utilities instead.
  rank_zero_deprecation(
Launching Web UI with arguments: --use-cpu-torch
Style database not found: F:\Stable Diffusion\stable-diffusion-webui-directml new\styles.csv
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
ONNX: selected=DmlExecutionProvider, available=['DmlExecutionProvider', 'CPUExecutionProvider']
Running on local URL: http://127.0.0.1:7860
To create a public link, set share=True in launch().
Startup time: 3.9s (prepare environment: 34.5s, initialize shared: 1.2s, load scripts: 1.6s, create ui: 0.4s, gradio launch: 0.2s).
In this conversion only the non-EMA weights are extracted. If you want to instead extract the EMA weights (usually better for inference), please make sure to add the --extract_ema flag.
You have disabled the safety checker for <class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline'> by passing safety_checker=None. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
ONNX: Failed to load ONNX pipeline: is_sdxl=False
ONNX: You cannot load this model using the pipeline you selected. Please check Diffusers pipeline in ONNX Runtime Settings.
Applying attention optimization: InvokeAI... done.
Exception in thread Thread-16 (load_model):
Traceback (most recent call last):
File "C:\Users\piosa\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
self.run()
File "C:\Users\piosa\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "F:\Stable Diffusion\stable-diffusion-webui-directml new\modules\initialize.py", line 153, in load_model
devices.first_time_calculation()
File "F:\Stable Diffusion\stable-diffusion-webui-directml new\modules\devices.py", line 178, in first_time_calculation
linear(x)
File "F:\Stable Diffusion\stable-diffusion-webui-directml new\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "F:\Stable Diffusion\stable-diffusion-webui-directml new\extensions-builtin\Lora\networks.py", line 486, in network_Linear_forward
return originals.Linear_forward(self, input)
File "F:\Stable Diffusion\stable-diffusion-webui-directml new\venv\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
return F.linear(input, self.weight, self.bias)
RuntimeError: "addmm_implcpu" not implemented for 'Half'
ONNX: Failed to convert model: model='v1-5-pruned.ckpt', error=[WinError 3] System nie może odnaleźć określonej ścieżki: 'F:\Stable Diffusion\backup\stable-diffusion-webui-directml old\models\ONNX\temp'
In this conversion only the non-EMA weights are extracted. If you want to instead extract the EMA weights (usually better for inference), please make sure to add the --extract_ema
flag.
You have disabled the safety checker for <class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline'> by passing safety_checker=None
. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
ONNX: Failed to load ONNX pipeline: is_sdxl=False
ONNX: You cannot load this model using the pipeline you selected. Please check Diffusers pipeline in ONNX Runtime Settings.
ONNX: processing=StableDiffusionProcessingTxt2Img, pipeline=OnnxRawPipeline
Error completing request
Arguments: ('task(6o16ajws8wml7oh)', '', '', [], 20, 'PNDM', 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x0000016FF6786EC0>, 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False) {}
Traceback (most recent call last):
File "F:\Stable Diffusion\stable-diffusion-webui-directml new\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "F:\Stable Diffusion\stable-diffusion-webui-directml new\modules\call_queue.py", line 36, in f
res = func(*args, **kwargs)
File "F:\Stable Diffusion\stable-diffusion-webui-directml new\modules\txt2img.py", line 55, in txt2img
processed = processing.process_images(p)
File "F:\Stable Diffusion\stable-diffusion-webui-directml new\modules\processing.py", line 736, in process_images
res = process_images_inner(p)
File "F:\Stable Diffusion\stable-diffusion-webui-directml new\modules\processing.py", line 841, in process_images_inner
result = shared.sd_model(**kwargs)
TypeError: 'OnnxRawPipeline' object is not callable
Full log from startup to the txt2img generate click. The error appears twice because it tries to load the model at startup. I do not have any diffusers pipeline other than "ONNX Stable Diffusion" to choose in settings. The model is "v1-5-pruned.ckpt", just a basic 1.5 SD model from https://huggingface.co/runwayml/stable-diffusion-v1-5. I also did those steps just before this log, but it produces the same error.
Edit: Actually, when I click on the diffusers pipeline the UI breaks and I can no longer click anything or switch tabs, so maybe that's why I don't see any options there.
Edit 2: BTW, maybe I misread you somewhere and I actually need an ONNX model to run it here?
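A side note on the RuntimeError: "addmm_impl_cpu_" not implemented for 'Half' in the log above: float16 matrix multiplication is not implemented for CPU tensors, so when launching with --use-cpu-torch it may help to also pass --no-half. This is a guess based on the error message, not a fix confirmed in this thread:

set COMMANDLINE_ARGS=--use-cpu-torch --no-half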
Diffusers pipeline option is fixed. Pull latest commit and try again.
Thanks, I can choose now, but I'm getting the same error. I tried the img2img option as well, and it's the same. BTW, it's not a new bug; I'm trying out ONNX for the first time. I found out that medvram instead of lowvram works on my 6700, so I'm getting acceptable results without ONNX; if it won't work, I'm fine.
F:\Stable Diffusion\stable-diffusion-webui-directml new>git pull
Already up to date.
venv "F:\Stable Diffusion\stable-diffusion-webui-directml new\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: 1.7.0
Commit hash: f2f190b73e7deb97a59386fc52d155e858355e42
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
F:\Stable Diffusion\stable-diffusion-webui-directml new\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: pytorch_lightning.utilities.distributed.rank_zero_only has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from pytorch_lightning.utilities instead.
  rank_zero_deprecation(
Launching Web UI with arguments: --use-cpu-torch
Style database not found: F:\Stable Diffusion\stable-diffusion-webui-directml new\styles.csv
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
ONNX: selected=DmlExecutionProvider, available=['DmlExecutionProvider', 'CPUExecutionProvider']
Running on local URL: http://127.0.0.1:7860
To create a public link, set share=True in launch().
Startup time: 3.9s (prepare environment: 5.8s, initialize shared: 1.1s, load scripts: 1.6s, create ui: 0.4s, gradio launch: 0.2s).
You have disabled the safety checker for <class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline'> by passing safety_checker=None. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
ONNX: Failed to load ONNX pipeline: is_sdxl=False
ONNX: You cannot load this model using the pipeline you selected. Please check Diffusers pipeline in ONNX Runtime Settings.
Applying attention optimization: InvokeAI... done.
Exception in thread Thread-16 (load_model):
Traceback (most recent call last):
File "C:\Users\piosa\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
self.run()
File "C:\Users\piosa\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "F:\Stable Diffusion\stable-diffusion-webui-directml new\modules\initialize.py", line 153, in load_model
devices.first_time_calculation()
File "F:\Stable Diffusion\stable-diffusion-webui-directml new\modules\devices.py", line 178, in first_time_calculation
linear(x)
File "F:\Stable Diffusion\stable-diffusion-webui-directml new\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "F:\Stable Diffusion\stable-diffusion-webui-directml new\extensions-builtin\Lora\networks.py", line 486, in network_Linear_forward
return originals.Linear_forward(self, input)
File "F:\Stable Diffusion\stable-diffusion-webui-directml new\venv\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
return F.linear(input, self.weight, self.bias)
RuntimeError: "addmm_implcpu" not implemented for 'Half'
ONNX: Failed to convert model: model='v1-5-pruned-emaonly.ckpt', error=[WinError 3] System nie może odnaleźć określonej ścieżki: 'F:\Stable Diffusion\backup\stable-diffusion-webui-directml old\models\ONNX\temp'
You have disabled the safety checker for <class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline'> by passing safety_checker=None. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
ONNX: Failed to load ONNX pipeline: is_sdxl=False
ONNX: You cannot load this model using the pipeline you selected. Please check Diffusers pipeline in ONNX Runtime Settings.
ONNX: processing=StableDiffusionProcessingTxt2Img, pipeline=OnnxRawPipeline
Error completing request
Arguments: ('task(ex0z03vb8yp9wm6)', '', '', [], 20, 'PNDM', 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x000001C1D1AA15D0>, 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False) {}
Traceback (most recent call last):
File "F:\Stable Diffusion\stable-diffusion-webui-directml new\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "F:\Stable Diffusion\stable-diffusion-webui-directml new\modules\call_queue.py", line 36, in f
res = func(*args, **kwargs)
File "F:\Stable Diffusion\stable-diffusion-webui-directml new\modules\txt2img.py", line 55, in txt2img
processed = processing.process_images(p)
File "F:\Stable Diffusion\stable-diffusion-webui-directml new\modules\processing.py", line 736, in process_images
res = process_images_inner(p)
File "F:\Stable Diffusion\stable-diffusion-webui-directml new\modules\processing.py", line 841, in process_images_inner
result = shared.sd_model(**kwargs)
TypeError: 'OnnxRawPipeline' object is not callable
Do you have a folder named ONNX under ./models? It should be created automatically.
yes
Where can I get the same ckpt file (v1-5-pruned-emaonly.ckpt)? Did you download this one from huggingface?
yes, but also tried some other models, same exact error
Please run this command and upload output.txt here.
.\venv\Scripts\activate
pip freeze > output.txt
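It may also be worth confirming that the activated venv is the one the webui actually uses; a quick diagnostic sketch:

.\venv\Scripts\activate
python -c "import sys; print(sys.executable)"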
output.txt
Edit: contents, so you don't have to download it:
coloredlogs==15.0.1
flatbuffers==23.5.26
humanfriendly==10.0
mpmath==1.3.0
numpy==1.26.4
onnxruntime-directml==1.17.0
packaging==23.2
protobuf==4.25.2
pyreadline3==3.4.1
sympy==1.12
Is that all? Was there an error when running .\venv\Scripts\activate?
No, no errors there. I see it's the same problem as the new bug report, so I might post something new there if I still have it.
Edit: BTW, do I have to open it with just --use-cpu-torch, or with --use-directml too?
Edit: Tried it, nothing new, and it's not the same error as the new bug report. On model load I get this: ONNX: Failed to load ONNX pipeline: is_sdxl=False / ONNX: You cannot load this model using the pipeline you selected. Please check Diffusers pipeline in ONNX Runtime Settings. Which is strange, because it shows on both the non-SDXL pipeline and the SDXL one (just checked to be sure).
Edit: Also, I feel like the ONNX temp folder may be corrupted, but I have done so many reinstalls that I have no more energy to try it now ><
Fix
Please attach a full log.
F:\Stable Diffusion\stable-diffusion-webui-directml new>git pull
Already up to date.
venv "F:\Stable Diffusion\stable-diffusion-webui-directml new\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: 1.7.0
Commit hash: 03a88617fcd9442313ce2ed7facfecf6cdf72c36
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
F:\Stable Diffusion\stable-diffusion-webui-directml new\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: pytorch_lightning.utilities.distributed.rank_zero_only has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from pytorch_lightning.utilities instead.
  rank_zero_deprecation(
Launching Web UI with arguments: --use-cpu-torch
Style database not found: F:\Stable Diffusion\stable-diffusion-webui-directml new\styles.csv
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
ONNX: selected=DmlExecutionProvider, available=['DmlExecutionProvider', 'CPUExecutionProvider']
Running on local URL: http://127.0.0.1:7860
To create a public link, set share=True in launch().
Startup time: 4.7s (prepare environment: 9.0s, initialize shared: 1.9s, setup codeformer: 0.1s, load scripts: 1.6s, create ui: 0.4s, gradio launch: 0.3s).
In this conversion only the non-EMA weights are extracted. If you want to instead extract the EMA weights (usually better for inference), please make sure to add the --extract_ema flag.
You have disabled the safety checker for <class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline'> by passing safety_checker=None. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
ONNX: Failed to load ONNX pipeline: is_sdxl=False
ONNX: You cannot load this model using the pipeline you selected. Please check Diffusers pipeline in ONNX Runtime Settings.
Applying attention optimization: InvokeAI... done.
Exception in thread Thread-16 (load_model):
Traceback (most recent call last):
File "C:\Users\piosa\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
self.run()
File "C:\Users\piosa\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "F:\Stable Diffusion\stable-diffusion-webui-directml new\modules\initialize.py", line 153, in load_model
devices.first_time_calculation()
File "F:\Stable Diffusion\stable-diffusion-webui-directml new\modules\devices.py", line 178, in first_time_calculation
linear(x)
File "F:\Stable Diffusion\stable-diffusion-webui-directml new\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "F:\Stable Diffusion\stable-diffusion-webui-directml new\extensions-builtin\Lora\networks.py", line 486, in network_Linear_forward
return originals.Linear_forward(self, input)
File "F:\Stable Diffusion\stable-diffusion-webui-directml new\venv\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
return F.linear(input, self.weight, self.bias)
RuntimeError: "addmm_implcpu" not implemented for 'Half'
This is on launch, and when I click generate it shows this:
ONNX: Failed to convert model: model='v1-5-pruned.ckpt', error=[WinError 3] System nie może odnaleźć określonej ścieżki: 'F:\Stable Diffusion\backup\stable-diffusion-webui-directml old\models\ONNX\temp'
In this conversion only the non-EMA weights are extracted. If you want to instead extract the EMA weights (usually better for inference), please make sure to add the --extract_ema flag.
You have disabled the safety checker for <class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline'> by passing safety_checker=None. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
ONNX: Failed to load ONNX pipeline: is_sdxl=False
ONNX: You cannot load this model using the pipeline you selected. Please check Diffusers pipeline in ONNX Runtime Settings.
ONNX: processing=StableDiffusionProcessingTxt2Img, pipeline=OnnxRawPipeline
Error completing request
Arguments: ('task(qnsyrptphvyvfup)', '', '', [], 20, 'PNDM', 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x000002610E3FD0C0>, 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False) {}
Traceback (most recent call last):
File "F:\Stable Diffusion\stable-diffusion-webui-directml new\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "F:\Stable Diffusion\stable-diffusion-webui-directml new\modules\call_queue.py", line 36, in f
res = func(*args, **kwargs)
File "F:\Stable Diffusion\stable-diffusion-webui-directml new\modules\txt2img.py", line 55, in txt2img
processed = processing.process_images(p)
File "F:\Stable Diffusion\stable-diffusion-webui-directml new\modules\processing.py", line 736, in process_images
res = process_images_inner(p)
File "F:\Stable Diffusion\stable-diffusion-webui-directml new\modules\processing.py", line 841, in process_images_inner
result = shared.sd_model(**kwargs)
TypeError: 'OnnxRawPipeline' object is not callable
"System nie może odnaleźć określonej ścieżki:" referrs to system couldn't find path
Checklist
What happened?
I downloaded the latest commit and used the launch option --use-directml.
Steps to reproduce the problem
@echo off
set PYTHON=py -3.10
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--opt-sub-quad-attention --disable-nan-check --precision autocast --autolaunch --use-directml --listen
call webui.bat
What should have happened?
The WebUI should start in DirectML mode.
What browsers do you use to access the UI?
Microsoft Edge
Sysinfo
none
Console logs
Additional information
RX 6700xt, r5600x, 32 gb ram