Checklist
[X] The issue exists after disabling all extensions
[X] The issue exists on a clean installation of webui
[ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
[X] The issue exists in the current version of the webui
[X] The issue has not been reported before recently
[ ] The issue has been reported before but has not been fixed yet
What happened?
When clicking Generate with an SDXL checkpoint while ONNX/Olive is active, the webui returns the following error: AttributeError: 'Options' object has no attribute 'diffusers_vae_upcast'
Steps to reproduce the problem
Install A1111 from the setup page for AMD users.
Follow the steps from #149.
Before switching to ONNX SDXL, I can generate an SD1.5 image.
Switching to an SDXL model with ONNX SDXL produces the above error.
Console logs
venv "F:\automatic1111\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.9.3-amd-24-g2c29feb5
Commit hash: 2c29feb50e5cd3592b3ea831fe20b17588a2edb4
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
F:\automatic1111\stable-diffusion-webui-directml\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: `pytorch_lightning.utilities.distributed.rank_zero_only` has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from `pytorch_lightning.utilities` instead.
rank_zero_deprecation(
Launching Web UI with arguments: --use-cpu-torch
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
F:\automatic1111\stable-diffusion-webui-directml\venv\lib\site-packages\diffusers\models\transformers\transformer_2d.py:34: FutureWarning: `Transformer2DModelOutput` is deprecated and will be removed in version 1.0.0. Importing `Transformer2DModelOutput` from `diffusers.models.transformer_2d` is deprecated and this will be removed in a future version. Please use `from diffusers.models.modeling_outputs import Transformer2DModelOutput`, instead.
deprecate("Transformer2DModelOutput", "1.0.0", deprecation_message)
ONNX: version=1.18.0 provider=DmlExecutionProvider, available=['DmlExecutionProvider', 'CPUExecutionProvider']
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 8.2s (prepare environment: 12.2s, initialize shared: 0.9s, load scripts: 0.4s, create ui: 0.3s, gradio launch: 0.3s).
Fetching 17 files: 100%|█████████████████████████████████████████████████████████████| 17/17 [00:00<00:00, 8503.66it/s]
Loading pipeline components...: 100%|████████████████████████████████████████████████████| 7/7 [00:02<00:00, 2.62it/s]
Applying attention optimization: InvokeAI... done.
WARNING: ONNX implementation works best with SD.Next. Please consider migrating to SD.Next.
*** Error completing request
*** Arguments: ('task(fhgqq07lsp1cwfr)', <gradio.routes.Request object at 0x0000025B40C798A0>, 'a girl sitting', '', [], 1, 1, 7, 1024, 1024, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'PNDM', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
Traceback (most recent call last):
File "F:\automatic1111\stable-diffusion-webui-directml\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "F:\automatic1111\stable-diffusion-webui-directml\modules\call_queue.py", line 36, in f
res = func(*args, **kwargs)
File "F:\automatic1111\stable-diffusion-webui-directml\modules\txt2img.py", line 109, in txt2img
processed = processing.process_images(p)
File "F:\automatic1111\stable-diffusion-webui-directml\modules\processing.py", line 847, in process_images
res = process_images_inner(p)
File "F:\automatic1111\stable-diffusion-webui-directml\modules\processing.py", line 910, in process_images_inner
shared.sd_model = preprocess_pipeline(p)
File "F:\automatic1111\stable-diffusion-webui-directml\modules\onnx_impl\__init__.py", line 181, in preprocess_pipeline
shared.sd_model = shared.sd_model.preprocess(p)
File "F:\automatic1111\stable-diffusion-webui-directml\modules\onnx_impl\pipelines\__init__.py", line 345, in preprocess
config.vae_sdxl_fp16_fix = self._is_sdxl and shared.opts.diffusers_vae_upcast == "false"
File "F:\automatic1111\stable-diffusion-webui-directml\modules\options.py", line 142, in __getattr__
return super(Options, self).__getattribute__(item)
AttributeError: 'Options' object has no attribute 'diffusers_vae_upcast'
---
Additional information
I have tried different SDXL models; this seems to have no effect on the outcome.
Similar issue found in #377
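The failing line in the traceback is `config.vae_sdxl_fp16_fix = self._is_sdxl and shared.opts.diffusers_vae_upcast == "false"`, so the error occurs whenever the Options object on this build never registered a `diffusers_vae_upcast` setting. As a minimal sketch of a possible workaround (hypothetical, not an official fix), the lookup could be guarded with `getattr` so a missing option is treated as unset:

```python
class Options:
    """Minimal stand-in for modules.options.Options: unknown attributes
    raise AttributeError, just like in the traceback above."""

def compute_vae_sdxl_fp16_fix(opts, is_sdxl):
    # getattr with a default avoids the AttributeError when the
    # `diffusers_vae_upcast` option was never registered on this build.
    return is_sdxl and getattr(opts, "diffusers_vae_upcast", None) == "false"

opts = Options()
print(compute_vae_sdxl_fp16_fix(opts, True))  # prints False instead of raising
```

With this guard, builds that lack the option simply skip the SDXL fp16 VAE fix instead of aborting generation.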
What should have happened?
WebUI should generate an image.
What browsers do you use to access the UI?
Mozilla Firefox, Google Chrome, Other
Sysinfo
sysinfo-2024-06-05-19-36.json