AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI

[Bug]: AttributeError: 'NoneType' object has no attribute 'lowvram' #16223

Open TypicaIDay opened 1 month ago

TypicaIDay commented 1 month ago

Checklist

What happened?

When I try to switch to a different model it won't work.

Steps to reproduce the problem

Switch SD model.

What should have happened?

SD model should have switched.

What browsers do you use to access the UI ?

No response

Sysinfo

sysinfo-2024-07-17-17-46.json

Console logs

You are up to date with the most recent release.
Launching Web UI with arguments: --medvram --xformers --update-check --skip-torch-cuda-test
CivitAI Browser+: Aria2 RPC started
Loading weights [6d7d23958a] from D:\stable-diffusion-webui\models\Stable-diffusion\hassakuXLHentai_v12.safetensors
[LyCORIS]-WARNING: LyCORIS legacy extension is now loaded, if you don't expext to see this message, please disable this extension.
Creating model from config: D:\stable-diffusion-webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 392.1s (prepare environment: 254.7s, launcher: 0.1s, import torch: 77.6s, import gradio: 5.2s, setup paths: 20.2s, initialize shared: 0.9s, other imports: 5.7s, list SD models: 9.4s, load scripts: 2.8s, initialize extra networks: 0.8s, scripts before_ui_callback: 0.3s, create ui: 13.8s, gradio launch: 0.4s).
Loading VAE weights specified in settings: D:\stable-diffusion-webui\models\VAE\sdxlVAE_sdxlVAE.safetensors
Applying attention optimization: xformers... done.
Model loaded in 231.0s (load weights from disk: 11.0s, create model: 0.6s, apply weights to model: 204.2s, apply half(): 0.4s, load VAE: 7.8s, load textual inversion embeddings: 0.8s, calculate empty prompt: 5.9s).
Traceback (most recent call last):
  File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "D:\stable-diffusion-webui\extensions\sd-webui-gelbooru-prompt\scripts\gelbooru_prompt.py", line 21, in fetch
    name = image.orig_name
AttributeError: 'NoneType' object has no attribute 'orig_name'
name: 1bbff9aeb698ab1e4bf0f6ce6dc26942.jpeg
hash: 1bbff9aeb698ab1e4bf0f6ce6dc26942
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:16<00:00,  1.18it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 10/10 [00:04<00:00,  2.22it/s]
==========================================================================================0/30 [00:23<00:00,  2.20it/s]
A tensor with all NaNs was produced in VAE.
Web UI will now convert VAE into 32-bit float and retry.
To disable this behavior, disable the 'Automatically revert VAE to 32-bit floats' setting.
To always start with 32-bit VAE, use --no-half-vae commandline flag.
==========================================================================================
Total progress: 100%|██████████████████████████████████████████████████████████████████| 30/30 [00:32<00:00,  1.07s/it]
CivitAI Browser+: Model saved to: D:\stable-diffusion-webui\models\VAE\irisXLVAE_luna.safetensors0:32<00:00,  2.20it/s]
CivitAI Browser+: Model info saved to "D:\stable-diffusion-webui\models\VAE\irisXLVAE_luna.json"
CivitAI Browser+: HTML saved at "D:\stable-diffusion-webui\models\VAE\irisXLVAE_luna.html"
CivitAI Browser+: Preview saved at "D:\stable-diffusion-webui\models\VAE\irisXLVAE_luna.preview.png"
name: sample_17fe226374fd39b24fc6ba36bb4a1927.jpg
hash: 17fe226374fd39b24fc6ba36bb4a1927
100%|██████████████████████████████████████████████████████████████████████████████████| 35/35 [00:29<00:00,  1.18it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:12<00:00,  1.57it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 55/55 [00:54<00:00,  1.01it/s]
Loading VAE weights specified in settings: D:\stable-diffusion-webui\models\VAE\irisXLVAE_luna.safetensors0,  1.88it/s]
changing setting sd_vae to irisXLVAE_luna.safetensors: SafetensorError
Traceback (most recent call last):
  File "D:\stable-diffusion-webui\modules\options.py", line 165, in set
    option.onchange()
  File "D:\stable-diffusion-webui\modules\call_queue.py", line 13, in f
    res = func(*args, **kwargs)
  File "D:\stable-diffusion-webui\modules\initialize_util.py", line 182, in <lambda>
    shared.opts.onchange("sd_vae", wrap_queued_call(lambda: sd_vae.reload_vae_weights()), call=False)
  File "D:\stable-diffusion-webui\modules\sd_vae.py", line 273, in reload_vae_weights
    load_vae(sd_model, vae_file, vae_source)
  File "D:\stable-diffusion-webui\modules\sd_vae.py", line 211, in load_vae
    vae_dict_1 = load_vae_dict(vae_file, map_location=shared.weight_load_location)
  File "D:\stable-diffusion-webui\modules\sd_vae.py", line 189, in load_vae_dict
    vae_ckpt = sd_models.read_state_dict(filename, map_location=map_location)
  File "D:\stable-diffusion-webui\modules\sd_models.py", line 304, in read_state_dict
    pl_sd = safetensors.torch.load_file(checkpoint_file, device=device)
  File "D:\stable-diffusion-webui\venv\lib\site-packages\safetensors\torch.py", line 308, in load_file
    with safe_open(filename, framework="pt", device=device) as f:
safetensors_rust.SafetensorError: Error while deserializing header: HeaderTooLarge

100%|██████████████████████████████████████████████████████████████████████████████████| 35/35 [00:25<00:00,  1.36it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:16<00:00,  1.20it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 55/55 [00:54<00:00,  1.01it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 65/65 [00:44<00:00,  1.45it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:15<00:00,  1.33it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 85/85 [01:19<00:00,  1.07it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 45/45 [00:30<00:00,  1.48it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:17<00:00,  1.45it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 70/70 [00:55<00:00,  1.27it/s]
Loading model hassakuXLSfwNsfwBeta_betaV04.safetensors [03505967f5] (2 out of 3)███████| 70/70 [00:55<00:00,  1.60it/s]
Loading weights [03505967f5] from D:\stable-diffusion-webui\models\Stable-diffusion\hassakuXLSfwNsfwBeta_betaV04.safetensors
changing setting sd_model_checkpoint to hassakuXLSfwNsfwBeta_betaV04.safetensors [03505967f5]: SafetensorError
Traceback (most recent call last):
  File "D:\stable-diffusion-webui\modules\options.py", line 165, in set
    option.onchange()
  File "D:\stable-diffusion-webui\modules\call_queue.py", line 13, in f
    res = func(*args, **kwargs)
  File "D:\stable-diffusion-webui\modules\initialize_util.py", line 181, in <lambda>
    shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: sd_models.reload_model_weights()), call=False)
  File "D:\stable-diffusion-webui\modules\sd_models.py", line 860, in reload_model_weights
    sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)
  File "D:\stable-diffusion-webui\modules\sd_models.py", line 826, in reuse_model_from_already_loaded
    load_model(checkpoint_info)
  File "D:\stable-diffusion-webui\modules\sd_models.py", line 705, in load_model
    state_dict = get_checkpoint_state_dict(checkpoint_info, timer)
  File "D:\stable-diffusion-webui\modules\sd_models.py", line 330, in get_checkpoint_state_dict
    res = read_state_dict(checkpoint_info.filename)
  File "D:\stable-diffusion-webui\modules\sd_models.py", line 304, in read_state_dict
    pl_sd = safetensors.torch.load_file(checkpoint_file, device=device)
  File "D:\stable-diffusion-webui\venv\lib\site-packages\safetensors\torch.py", line 308, in load_file
    with safe_open(filename, framework="pt", device=device) as f:
safetensors_rust.SafetensorError: Error while deserializing header: MetadataIncompleteBuffer

changing setting sd_model_checkpoint to hassakuXLSfwNsfwBeta_betaV04.safetensors [03505967f5]: AttributeError
Traceback (most recent call last):
  File "D:\stable-diffusion-webui\modules\options.py", line 165, in set
    option.onchange()
  File "D:\stable-diffusion-webui\modules\call_queue.py", line 13, in f
    res = func(*args, **kwargs)
  File "D:\stable-diffusion-webui\modules\initialize_util.py", line 181, in <lambda>
    shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: sd_models.reload_model_weights()), call=False)
  File "D:\stable-diffusion-webui\modules\sd_models.py", line 860, in reload_model_weights
    sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)
  File "D:\stable-diffusion-webui\modules\sd_models.py", line 793, in reuse_model_from_already_loaded
    send_model_to_cpu(sd_model)
  File "D:\stable-diffusion-webui\modules\sd_models.py", line 662, in send_model_to_cpu
    if m.lowvram:
AttributeError: 'NoneType' object has no attribute 'lowvram'

changing setting sd_model_checkpoint to highrisemix_v25.safetensors: AttributeError
Traceback (most recent call last):
  File "D:\stable-diffusion-webui\modules\options.py", line 165, in set
    option.onchange()
  File "D:\stable-diffusion-webui\modules\call_queue.py", line 13, in f
    res = func(*args, **kwargs)
  File "D:\stable-diffusion-webui\modules\initialize_util.py", line 181, in <lambda>
    shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: sd_models.reload_model_weights()), call=False)
  File "D:\stable-diffusion-webui\modules\sd_models.py", line 860, in reload_model_weights
    sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)
  File "D:\stable-diffusion-webui\modules\sd_models.py", line 793, in reuse_model_from_already_loaded
    send_model_to_cpu(sd_model)
  File "D:\stable-diffusion-webui\modules\sd_models.py", line 662, in send_model_to_cpu
    if m.lowvram:
AttributeError: 'NoneType' object has no attribute 'lowvram'

changing setting sd_model_checkpoint to 0002Pony_v2ALTERNATIVESTYLE3.safetensors: AttributeError
Traceback (most recent call last):
  File "D:\stable-diffusion-webui\modules\options.py", line 165, in set
    option.onchange()
  File "D:\stable-diffusion-webui\modules\call_queue.py", line 13, in f
    res = func(*args, **kwargs)
  File "D:\stable-diffusion-webui\modules\initialize_util.py", line 181, in <lambda>
    shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: sd_models.reload_model_weights()), call=False)
  File "D:\stable-diffusion-webui\modules\sd_models.py", line 860, in reload_model_weights
    sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)
  File "D:\stable-diffusion-webui\modules\sd_models.py", line 793, in reuse_model_from_already_loaded
    send_model_to_cpu(sd_model)
  File "D:\stable-diffusion-webui\modules\sd_models.py", line 662, in send_model_to_cpu
    if m.lowvram:
AttributeError: 'NoneType' object has no attribute 'lowvram'

changing setting sd_model_checkpoint to 0002Pony_v2ALTERNATIVESTYLE3.safetensors: AttributeError
Traceback (most recent call last):
  File "D:\stable-diffusion-webui\modules\options.py", line 165, in set
    option.onchange()
  File "D:\stable-diffusion-webui\modules\call_queue.py", line 13, in f
    res = func(*args, **kwargs)
  File "D:\stable-diffusion-webui\modules\initialize_util.py", line 181, in <lambda>
    shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: sd_models.reload_model_weights()), call=False)
  File "D:\stable-diffusion-webui\modules\sd_models.py", line 860, in reload_model_weights
    sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)
  File "D:\stable-diffusion-webui\modules\sd_models.py", line 793, in reuse_model_from_already_loaded
    send_model_to_cpu(sd_model)
  File "D:\stable-diffusion-webui\modules\sd_models.py", line 662, in send_model_to_cpu
    if m.lowvram:
AttributeError: 'NoneType' object has no attribute 'lowvram'

Additional information

No response

wudimenghuan commented 1 month ago

same problem

w-e-w commented 1 month ago

This should be fixed in 1.10.0.

The issue is caused by a bug in webui: if the initial model that webui tries to load is corrupted, then webui won't be able to switch models.

After the fix you should be able to switch models even if the initial model is corrupted.

But regardless of whether it is fixed or not, you should delete the corrupted model.

I can see the file is corrupted because:

  1. safetensors_rust.SafetensorError: Error while deserializing header: MetadataIncompleteBuffer shows that webui encountered an error when loading the model.
  2. webui wrote Loading model hassakuXLSfwNsfwBeta_betaV04.safetensors [03505967f5]. 03505967f5 is the first 10 characters of the file's sha256. Assuming this is the file https://huggingface.co/Makengo/HasskakuForRunpod/blob/main/hassakuXLSfwNsfwBeta_betaV04.safetensors, it should have the hash 8031331664cd2d3804c69019fb1d914511d4d444b56ac9a2b3bb63c5176acb98, which does not match 03505967f5, so we know the file is different.
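
If you want to reproduce that header check outside the UI, here is a minimal sketch (not part of webui): it assumes it is run inside the webui venv, where safetensors is already installed, and the path is simply the file from the traceback above.

```python
# Minimal sketch: ask safetensors to parse the file header without loading the
# tensors. A truncated or corrupted download raises SafetensorError with
# messages like HeaderTooLarge or MetadataIncompleteBuffer, as in the log above.
from safetensors import safe_open

# Example path taken from the traceback; adjust to the file you want to check.
ckpt = r"D:\stable-diffusion-webui\models\Stable-diffusion\hassakuXLSfwNsfwBeta_betaV04.safetensors"

try:
    with safe_open(ckpt, framework="pt", device="cpu") as f:
        print(f"header OK, {len(f.keys())} tensors")
except Exception as err:
    print(f"file looks corrupted: {err}")
```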
TypicaIDay commented 1 month ago

This should be fixed in 1.10.0.

The issue is caused by a bug in webui: if the initial model that webui tries to load is corrupted, then webui won't be able to switch models.

After the fix you should be able to switch models even if the initial model is corrupted.

But regardless of whether it is fixed or not, you should delete the corrupted model.

I can see the file is corrupted because:

  1. safetensors_rust.SafetensorError: Error while deserializing header: MetadataIncompleteBuffer shows that webui encountered an error when loading the model.
  2. webui wrote Loading model hassakuXLSfwNsfwBeta_betaV04.safetensors [03505967f5]. 03505967f5 is the first 10 characters of the file's sha256. Assuming this is the file https://huggingface.co/Makengo/HasskakuForRunpod/blob/main/hassakuXLSfwNsfwBeta_betaV04.safetensors, it should have the hash 8031331664cd2d3804c69019fb1d914511d4d444b56ac9a2b3bb63c5176acb98, which does not match 03505967f5, so we know the file is different.

I don't know what's wrong, because it worked for a few generations, but then it kept giving me that error message.

TypicaIDay commented 1 month ago

Also, is there a way to find out if any SD models are corrupt?

viking1304 commented 1 month ago

Also, is there a way to find out if any SD models are corrupt?

You should be able to get the sha256 of any file using this command:

certutil -hashfile C:\file\path\my_file.exe SHA256

Then, you can compare it with the sha256 of the desired model.

Just be sure that you are comparing sha256 and not some other checksum.

[screenshot]

If you see anything other than sha256, click on the arrow on the right until you see sha256.

[screenshot]
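
If checking files one by one with certutil gets tedious, here is a minimal Python sketch that hashes every .safetensors file in the models folder, which is the same thing certutil does per file. The folder path is assumed from the logs above; adjust it to your install.

```python
# Minimal sketch (assumed folder path): compute the sha256 of every
# .safetensors file, equivalent to running
# certutil -hashfile <file> SHA256 on each one by hand.
import hashlib
from pathlib import Path

MODELS_DIR = Path(r"D:\stable-diffusion-webui\models\Stable-diffusion")  # adjust to your install

for ckpt in sorted(MODELS_DIR.glob("*.safetensors")):
    h = hashlib.sha256()
    with open(ckpt, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
            h.update(chunk)
    digest = h.hexdigest()
    # The first 10 characters are what webui prints in brackets, e.g. [03505967f5]
    print(f"{digest[:10]}  {digest}  {ckpt.name}")
```

Compare each full hash against the sha256 shown on the model's download page; a mismatch means the download is damaged.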
TypicaIDay commented 3 weeks ago

Also, is there a way to find out if any SD models are corrupt?

You should be able to get the sha256 of any file using this command:

certutil -hashfile C:\file\path\my_file.exe SHA256

Then, you can compare it with the sha256 of the desired model.

Just be sure that you are comparing sha256 and not some other checksum.

[screenshot]

If you see anything other than sha256, click on the arrow on the right until you see sha256.

[screenshot]

really late, but thanks so much