Freda-Chan opened 6 months ago
I'm also having this issue. Clean install here as well on the newest version.
*** Error completing request
*** Arguments: ('Lykon/dreamshaper-7', '', 'vae', 'dreamshaper-7', 'dreamshaper-7', 'runwayml/stable-diffusion-v1-5', '', 'vae', 'stable-diffusion-v1-5', 'stable-diffusion-v1-5', True, True, True, True, True, False, True, True, True, True, 'euler', True, 512, False, '', '', '') {}
Traceback (most recent call last):
File "J:\python\Automatic\stable-diffusion-webui-directml\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "J:\python\Automatic\stable-diffusion-webui-directml\modules\call_queue.py", line 36, in f
res = func(*args, **kwargs)
File "J:\python\Automatic\stable-diffusion-webui-directml\modules\ui.py", line 1759, in optimize
return optimize_sd_from_ckpt(
File "J:\python\Automatic\stable-diffusion-webui-directml\modules\sd_olive_ui.py", line 67, in optimize_sd_from_ckpt
unoptimized_dir, optimized_dir = ready(unoptimized_dir, optimized_dir)
File "J:\python\Automatic\stable-diffusion-webui-directml\modules\sd_olive_ui.py", line 35, in ready
unload_model_weights(shared.sd_model)
File "J:\python\Automatic\stable-diffusion-webui-directml\modules\sd_models.py", line 881, in unload_model_weights
send_model_to_cpu(sd_model or shared.sd_model)
File "J:\python\Automatic\stable-diffusion-webui-directml\modules\sd_models.py", line 632, in send_model_to_cpu
if m.lowvram:
File "J:\python\Automatic\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1269, in __getattr__
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'ONNXStableDiffusionModel' object has no attribute 'lowvram'
Here's a temporary fix to optimize models: https://www.youtube.com/watch?v=mKxt0kxD5C0
I have the same issue as @Freda-Chan. The error "AttributeError: 'NoneType' object has no attribute 'lowvram'" was thrown because no checkpoint is loaded. It seems that webui can't find my checkpoints when it is started with --olive; starting it without --olive works. Perhaps it's set to only see optimized checkpoints?
The straightforward fix would be to simply check for None in send_model_to_cpu. It seems to be called just to unload the loaded checkpoints, so it should be fine to just not execute it if there's no loaded checkpoint.
```python
if m is not None:
    if m.lowvram:
        lowvram.send_everything_to_cpu()
    else:
        m.to(devices.cpu)
```
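A standalone sketch of that guarded unload, with stand-ins for webui's `lowvram` and `devices` modules (those stand-ins are assumptions for illustration, not the real API). Reading `lowvram` via `getattr` with a default also covers the first traceback's variant, where the loaded model is an ONNX pipeline that simply lacks the attribute:

```python
# Hypothetical sketch of a guarded send_model_to_cpu. The real function
# lives in modules/sd_models.py; _Devices and send_everything_to_cpu below
# are stand-ins for webui's devices and lowvram modules.

class _Devices:
    cpu = "cpu"

devices = _Devices()

def send_everything_to_cpu():
    # stand-in for lowvram.send_everything_to_cpu()
    pass

def send_model_to_cpu(m):
    # No checkpoint loaded (e.g. webui started with --olive and no
    # checkpoint found): nothing to unload, so just return.
    if m is None:
        return
    # ONNX pipelines have no .lowvram attribute, so read it defensively
    # instead of letting nn.Module.__getattr__ raise AttributeError.
    if getattr(m, "lowvram", False):
        send_everything_to_cpu()
    else:
        m.to(devices.cpu)
```

This handles both reported failure modes: `m is None` (the `'NoneType' object has no attribute 'lowvram'` error) and a loaded `ONNXStableDiffusionModel` without the attribute.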
Last time I installed Olive version it came with no checkpoint installed by default.
Did you follow the instructions in the pictures listed on the AMD website?
Do not follow the instructions about Anaconda or the command-line steps there, as I believe those are outdated.
However, the httpx part might still apply, as that has not been fixed in requirements_onnx.txt.
Start from "Go to the Olive optimization tab and start the optimization pass" in Step 3.
I received the error below and fixed it by adding export COMMANDLINE_ARGS="--use-cpu all --no-half --skip-torch-cuda-test --enable-insecure-extension-access"
to webui-user.sh. This is probably because it is trying to find a GPU attribute on a CPU.
*** Arguments: ('task(zxmbqpdd6uuxvyg)', <gradio.routes.Request object at 0x15ae635b0>, 'a women', '', [], 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
Traceback (most recent call last):
File "/Users/lee/stable-diffusion-webui/modules/call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "/Users/lee/stable-diffusion-webui/modules/call_queue.py", line 36, in f
res = func(*args, **kwargs)
File "/Users/lee/stable-diffusion-webui/modules/txt2img.py", line 109, in txt2img
processed = processing.process_images(p)
File "/Users/lee/stable-diffusion-webui/modules/processing.py", line 832, in process_images
sd_models.reload_model_weights()
File "/Users/lee/stable-diffusion-webui/modules/sd_models.py", line 860, in reload_model_weights
sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)
File "/Users/lee/stable-diffusion-webui/modules/sd_models.py", line 793, in reuse_model_from_already_loaded
send_model_to_cpu(sd_model)
File "/Users/lee/stable-diffusion-webui/modules/sd_models.py", line 662, in send_model_to_cpu
if m.lowvram:
AttributeError: 'NoneType' object has no attribute 'lowvram'
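The workaround above amounts to adding this line to webui-user.sh (flag list copied from the comment; a sketch of the config fragment, not an official recommendation):

```shell
# webui-user.sh: force CPU execution so webui stops probing for GPU attributes
export COMMANDLINE_ARGS="--use-cpu all --no-half --skip-torch-cuda-test --enable-insecure-extension-access"
```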
---
What happened?
When I click on "optimize model using Olive" it throws an error:
AttributeError: 'NoneType' object has no attribute 'lowvram'
Steps to reproduce the problem
pip install httpx==0.24.1 will solve it.
What should have happened?
Should be able to optimize model using Olive.
What browsers do you use to access the UI ?
Google Chrome
Sysinfo
sysinfo-2023-12-25-02-38.json
Additional information
On first installation of onnx, there is an error about websockets. You can fix it with
pip install httpx==0.24.1
Using the --lowvram arg doesn't change anything. I tried with and without it.