kyvaith closed this issue 1 month ago
Fixed in a92b859e4090a1d33576036a688a44bb6b5d55b7 ~ 9514d9194d6a8a45d3ceb42567e45d020d5226c0.
I am still having the same issue the OP referenced. My steps are exactly the same as his, with the exact same results, even though I am on master after your fix commits https://github.com/lshqqytiger/stable-diffusion-webui-directml/commit/a92b859e4090a1d33576036a688a44bb6b5d55b7 and https://github.com/lshqqytiger/stable-diffusion-webui-directml/commit/9514d9194d6a8a45d3ceb42567e45d020d5226c0 were merged in. I am on the 7900 XT.
venv "C:\Users\camer\Documents\Automatic1111\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
Python 3.10.14 | packaged by Anaconda, Inc. | (main, Mar 21 2024, 16:20:14) [MSC v.1916 64 bit (AMD64)]
Version: v1.9.3-amd-10-ge2cbdab3
Commit hash: e2cbdab3375deea621a48de269fd4ac47ff6cece
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
C:\Users\camer\Documents\Automatic1111\stable-diffusion-webui-directml\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: `pytorch_lightning.utilities.distributed.rank_zero_only` has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from `pytorch_lightning.utilities` instead.
rank_zero_deprecation(
Launching Web UI with arguments: --use-cpu-torch
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
ONNX: version=1.17.3 provider=DmlExecutionProvider, available=['DmlExecutionProvider', 'CPUExecutionProvider']
Checkpoint v1-5-pruned-emaonly.safetensors [6ce0161689] not found; loading fallback v1-5-pruned-emaonly.safetensors
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 8.3s (prepare environment: 12.6s, initialize shared: 0.8s, load scripts: 0.4s, create ui: 0.4s, gradio launch: 0.1s).
C:\Users\camer\Documents\Automatic1111\stable-diffusion-webui-directml\venv\lib\site-packages\huggingface_hub\file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
warnings.warn(
You have disabled the safety checker for <class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline'> by passing `safety_checker=None`. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
ONNX: Failed to load ONNX pipeline: is_sdxl=False
ONNX: You cannot load this model using the pipeline you selected. Please check Diffusers pipeline in ONNX Runtime Settings.
Applying attention optimization: InvokeAI... done.
WARNING: ONNX implementation works best with SD.Next. Please consider migrating to SD.Next.
ONNX: Failed to convert model: model='v1-5-pruned-emaonly.safetensors', error=[WinError 3] The system cannot find the path specified: 'C:\\Users\\camer\\Documents\\Automatic1111\\stable-diffusion-webui-directml\\models\\ONNX\\temp'
Checkpoint v1-5-pruned-emaonly.safetensors [6ce0161689] not found; loading fallback v1-5-pruned-emaonly.safetensors
You have disabled the safety checker for <class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline'> by passing `safety_checker=None`. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
ONNX: Failed to load ONNX pipeline: is_sdxl=False
ONNX: You cannot load this model using the pipeline you selected. Please check Diffusers pipeline in ONNX Runtime Settings.
ONNX: processing=StableDiffusionProcessingTxt2Img, pipeline=OnnxRawPipeline
*** Error completing request
*** Arguments: ('task(qhjs0cvbhjtfgd2)', <gradio.routes.Request object at 0x000001FA3388AAD0>, 'astronaut', '', [], 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'PNDM', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
Traceback (most recent call last):
File "C:\Users\camer\Documents\Automatic1111\stable-diffusion-webui-directml\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "C:\Users\camer\Documents\Automatic1111\stable-diffusion-webui-directml\modules\call_queue.py", line 36, in f
res = func(*args, **kwargs)
File "C:\Users\camer\Documents\Automatic1111\stable-diffusion-webui-directml\modules\txt2img.py", line 109, in txt2img
processed = processing.process_images(p)
File "C:\Users\camer\Documents\Automatic1111\stable-diffusion-webui-directml\modules\processing.py", line 847, in process_images
res = process_images_inner(p)
File "C:\Users\camer\Documents\Automatic1111\stable-diffusion-webui-directml\modules\processing.py", line 952, in process_images_inner
result = shared.sd_model(**kwargs)
TypeError: 'OnnxRawPipeline' object is not callable
---
Remove the incomplete conversion/optimization cache in the models/ONNX/cache and models/ONNX/temp folders, and try again.
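The cleanup step above can be sketched in Python. This is a minimal sketch, assuming the script runs from (or is pointed at) the webui checkout root; the function name is mine, not from the repo:

```python
import shutil
from pathlib import Path

def clear_onnx_cache(webui_root: str = ".") -> None:
    """Delete models/ONNX/cache and models/ONNX/temp so the next
    ONNX conversion/optimization starts from a clean state."""
    for name in ("cache", "temp"):
        target = Path(webui_root) / "models" / "ONNX" / name
        shutil.rmtree(target, ignore_errors=True)  # no-op if the folder is missing

if __name__ == "__main__":
    clear_onnx_cache()
```

`ignore_errors=True` keeps the call safe even when a previous run already removed one of the folders.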
> Remove the incomplete conversion/optimization cache in the models/ONNX/cache and models/ONNX/temp folders, and try again.
I do not have a models/ONNX folder, something I have found strange from the beginning. This is what my models folder looks like:
This is despite having that directory specified in the settings:
My ONNX Runtime settings:
EDIT: Just for confirmation that I am on the master branch when this is happening:
Create a models\ONNX folder, and try again.
This fixed it for me also
> Create a models\ONNX folder, and try again.
Something is not creating this when it should be.
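The manual workaround described above can be sketched in a couple of lines. This assumes the current working directory is the webui checkout root:

```python
import os

# Pre-create the folder the ONNX converter expects but apparently
# never creates on its own; exist_ok makes this safe to re-run.
os.makedirs(os.path.join("models", "ONNX"), exist_ok=True)
```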
Checklist
What happened?
Unable to generate images using an AMD GPU with ONNX/Olive
Steps to reproduce the problem
git clone https://github.com/lshqqytiger/stable-diffusion-webui-directml && cd stable-diffusion-webui-directml && git submodule init && git submodule update
webui.bat --use-cpu-torch
.\venv\Scripts\activate
pip uninstall torch torchvision torch-directml -y
pip install onnxruntime-directml
webui.bat --use-cpu-torch
What should have happened?
Image generation should work.
What browsers do you use to access the UI ?
Google Chrome
Sysinfo
https://pastebin.com/HU42VsmF
Console logs
Additional information
Fresh install, AMD RX 6750 XT, 16GB VRAM, 32GB RAM