Enferlain opened this issue 1 year ago
Hmm, weird... the models should be loaded and unloaded to avoid this behaviour.
What webui version are you using?
My version is 1.6.0 https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/5ef669de080814067961f28357256e8fe27544f4
Maybe this error output has some useful information. I got it once, but I don't remember whether the subsequent crashes caused by the RAM filling up had this exact error message:
```
Saving merge to /home/imi/stable-diffusion-webui/models/Stable-diffusion/bbwm-cetusMix_Whalefall2-AOM3A3_orangemixs-best-16bit.safetensors
Unloading model 8 over the limit of 1: 218v5caa.safetensors [c5a1dcb091]
*** API error: POST: http://127.0.0.1:7860/bbwm/merge-models {'error': 'AttributeError', 'detail': '', 'body': '', 'errors': "'NoneType' object has no attribute 'lowvram'"}
Traceback (most recent call last):
  File "/home/imi/stable-diffusion-webui/venv/lib/python3.11/site-packages/anyio/streams/memory.py", line 98, in receive
    return self.receive_nowait()
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/imi/stable-diffusion-webui/venv/lib/python3.11/site-packages/anyio/streams/memory.py", line 93, in receive_nowait
    raise WouldBlock
anyio.WouldBlock

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/imi/stable-diffusion-webui/venv/lib/python3.11/site-packages/starlette/middleware/base.py", line 78, in call_next
    message = await recv_stream.receive()
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/imi/stable-diffusion-webui/venv/lib/python3.11/site-packages/anyio/streams/memory.py", line 118, in receive
    raise EndOfStream
anyio.EndOfStream

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/imi/stable-diffusion-webui/modules/api/api.py", line 187, in exception_handling
    return await call_next(request)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/imi/stable-diffusion-webui/venv/lib/python3.11/site-packages/starlette/middleware/base.py", line 84, in call_next
    raise app_exc
  File "/home/imi/stable-diffusion-webui/venv/lib/python3.11/site-packages/starlette/middleware/base.py", line 70, in coro
    await self.app(scope, receive_or_disconnect, send_no_error)
  File "/home/imi/stable-diffusion-webui/venv/lib/python3.11/site-packages/starlette/middleware/base.py", line 108, in __call__
    response = await self.dispatch_func(request, call_next)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/imi/stable-diffusion-webui/modules/api/api.py", line 151, in log_and_time
    res: Response = await call_next(req)
                    ^^^^^^^^^^^^^^^^^^^^
  File "/home/imi/stable-diffusion-webui/venv/lib/python3.11/site-packages/starlette/middleware/base.py", line 84, in call_next
    raise app_exc
  File "/home/imi/stable-diffusion-webui/venv/lib/python3.11/site-packages/starlette/middleware/base.py", line 70, in coro
    await self.app(scope, receive_or_disconnect, send_no_error)
  File "/home/imi/stable-diffusion-webui/venv/lib/python3.11/site-packages/starlette/middleware/cors.py", line 84, in __call__
    await self.app(scope, receive, send)
  File "/home/imi/stable-diffusion-webui/venv/lib/python3.11/site-packages/starlette/middleware/gzip.py", line 24, in __call__
    await responder(scope, receive, send)
  File "/home/imi/stable-diffusion-webui/venv/lib/python3.11/site-packages/starlette/middleware/gzip.py", line 44, in __call__
    await self.app(scope, receive, self.send_with_gzip)
  File "/home/imi/stable-diffusion-webui/venv/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
    raise exc
  File "/home/imi/stable-diffusion-webui/venv/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "/home/imi/stable-diffusion-webui/venv/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
    raise e
  File "/home/imi/stable-diffusion-webui/venv/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
    await self.app(scope, receive, send)
  File "/home/imi/stable-diffusion-webui/venv/lib/python3.11/site-packages/starlette/routing.py", line 718, in __call__
    await route.handle(scope, receive, send)
  File "/home/imi/stable-diffusion-webui/venv/lib/python3.11/site-packages/starlette/routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "/home/imi/stable-diffusion-webui/venv/lib/python3.11/site-packages/starlette/routing.py", line 66, in app
    response = await func(request)
               ^^^^^^^^^^^^^^^^^^^
  File "/home/imi/stable-diffusion-webui/venv/lib/python3.11/site-packages/fastapi/routing.py", line 237, in app
    raw_response = await run_endpoint_function(
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/imi/stable-diffusion-webui/venv/lib/python3.11/site-packages/fastapi/routing.py", line 163, in run_endpoint_function
    return await dependant.call(**values)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/imi/stable-diffusion-webui/extensions/sd-webui-bayesian-merger/scripts/api.py", line 103, in merge_models_api
    sd_models.reload_model_weights()
  File "/home/imi/stable-diffusion-webui/modules/sd_models.py", line 732, in reload_model_weights
    sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/imi/stable-diffusion-webui/modules/sd_models.py", line 681, in reuse_model_from_already_loaded
    send_model_to_cpu(sd_model)
  File "/home/imi/stable-diffusion-webui/modules/sd_models.py", line 541, in send_model_to_cpu
    if m.lowvram:
       ^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'lowvram'
```
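Reading the traceback, the crash itself happens because `reload_model_weights()` is called after every checkpoint has been unloaded ("Unloading model 8 over the limit of 1"), so `send_model_to_cpu` receives `None` and dies on `m.lowvram`. A minimal sketch of a defensive guard, assuming the rough shape of `modules/sd_models.py` (the function body below is hypothetical; only the `m.lowvram` check comes from the traceback):

```python
# Hypothetical None-guard sketch for send_model_to_cpu; everything except the
# m.lowvram check is assumed, not taken from the actual webui source.

def send_model_to_cpu(m):
    # The traceback shows m is None here after all checkpoints were unloaded,
    # so guard before any attribute access.
    if m is None:
        return  # nothing is loaded, nothing to move
    if getattr(m, "lowvram", False):
        # lowvram models are managed differently; left as a no-op in this sketch
        return
    m.to("cpu")  # hypothetical: move the full model's weights to system RAM
```

This only avoids the `AttributeError`; it would not by itself fix the RAM growth described below.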
My config is:

```yaml
device: cpu
work_device: cpu
threads: 16
prune: True
weights_clip: False
rebasin: False
rebasin_iterations: 1
optimiser: bayes # tpe
bounds_transformer: False # bayes only
latin_hypercube_sampling: True # bayes only
guided_optimisation: False
batch_size: 4
init_points: 5
n_iters: 6
save_imgs: True
scorer_device: cpu # cuda
scorer_method: manual # chad, laion, manual
save_best: True
best_format: safetensors # ckpt
best_precision: 16 # 32
```
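Since the config is a flat `key: value` file with inline `#` comments, it can be sanity-checked before a long run with a few lines of stdlib Python (a hypothetical helper, not part of the extension; for anything nested you'd want a real YAML parser instead):

```python
# Minimal reader for a flat "key: value  # comment" config like the one above.
# Hypothetical diagnostic helper; values are returned as strings, untyped.

def read_flat_config(text: str) -> dict:
    cfg = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop inline comments
        if not line or ":" not in line:
            continue  # skip blanks and anything that isn't key: value
        key, _, value = line.partition(":")
        cfg[key.strip()] = value.strip()
    return cfg
```

For example, `read_flat_config(open("config.yaml").read())["device"]` should come back as `"cpu"` for the config above.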
(screenshots: settings in webui; the API section of the opened webui; merger.py)
Same issue here. Did you ever fix this, @Enferlain?
I set iterations to 10 each, for about 10 images. After the scoring process, when it starts a new iteration, it adds (I assume) the model's size to RAM. I have 64 GB, and it fills up after 10 warmup iterations; I can't even get to the optimization part.
What should I do to fix this?
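One way to confirm the per-iteration growth before filing more details is to log the process's resident memory around each iteration. A diagnostic sketch (not a fix), assuming a Unix host since it uses the stdlib `resource` module:

```python
# Diagnostic sketch: log peak resident memory per iteration to confirm that
# each warmup iteration keeps roughly one model's worth of RAM alive.
import gc
import resource

def rss_mb() -> float:
    # ru_maxrss is the peak resident set size: KiB on Linux, bytes on macOS.
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024

def log_iteration_memory(iteration: int) -> float:
    # Collect first, so any growth reflects objects that are genuinely still
    # referenced (e.g. merged checkpoints held between iterations), not
    # garbage awaiting collection.
    gc.collect()
    mb = rss_mb()
    print(f"iteration {iteration}: peak RSS ~{mb:.0f} MiB")
    return mb
```

If the logged peak grows by roughly one checkpoint's size every iteration, the merged models are being kept referenced somewhere instead of freed, which matches the behaviour described above.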