AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI

[Bug]: AttributeError: 'NoneType' object has no attribute 'lowvram' #15972

Closed. Trowa8 closed this issue 4 months ago.

Trowa8 commented 4 months ago

Checklist

What happened?

I can't select a model, and the web UI won't generate images; it just returns the error "AttributeError: 'NoneType' object has no attribute 'lowvram'".

Steps to reproduce the problem

  1. Install stable-diffusion-webui

  2. Install a model into stable-diffusion-webui\models

  3. Run webui-user.bat to start stable-diffusion-webui

  4. Try to switch models or try to generate an image

  5. Bug

What should have happened?

I should have been able to select a model or generate an image.

What browsers do you use to access the UI?

Other

Sysinfo

sysinfo-2024-06-08-17-50.json

Console logs

venv "D:\AI_Stuff\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.9.4
Commit hash: feee37d75f1b168768014e4634dcb156ee649c05
Launching Web UI with arguments: --skip-torch-cuda-test
D:\AI_Stuff\stable-diffusion-webui\venv\lib\site-packages\torch\cuda\__init__.py:740: UserWarning: CUDA initialization: The NVIDIA driver on your system is too old (found version 11040). Please update your GPU driver by downloading and installing a new version from the URL: http://www.nvidia.com/Download/index.aspx Alternatively, go to: https://pytorch.org to install a PyTorch version that has been compiled with your version of the CUDA driver. (Triggered internally at ..\c10\cuda\CUDAFunctions.cpp:108.)
  return torch._C._cuda_getDeviceCount() if nvml_count < 0 else nvml_count
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Warning: caught exception 'The NVIDIA driver on your system is too old (found version 11040). Please update your GPU driver by downloading and installing a new version from the URL: http://www.nvidia.com/Download/index.aspx Alternatively, go to: https://pytorch.org to install a PyTorch version that has been compiled with your version of the CUDA driver.', memory monitor disabled
Calculating sha256 for D:\AI_Stuff\stable-diffusion-webui\models\Stable-diffusion\sdxl_lightning_8step.safetensors: Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 85.7s (initial startup: 0.2s, prepare environment: 2.3s, import torch: 44.0s, import gradio: 10.3s, setup paths: 12.0s, initialize shared: 2.0s, other imports: 7.0s, setup gfpgan: 0.5s, list SD models: 0.4s, load scripts: 5.3s, create ui: 1.2s, gradio launch: 1.5s).
changing setting sd_model_checkpoint to sdxl_lightning_8step.safetensors: AttributeError
Traceback (most recent call last):
  File "D:\AI_Stuff\stable-diffusion-webui\modules\options.py", line 165, in set
    option.onchange()
  File "D:\AI_Stuff\stable-diffusion-webui\modules\call_queue.py", line 13, in f
    res = func(*args, **kwargs)
  File "D:\AI_Stuff\stable-diffusion-webui\modules\initialize_util.py", line 181, in <lambda>
    shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: sd_models.reload_model_weights()), call=False)
  File "D:\AI_Stuff\stable-diffusion-webui\modules\sd_models.py", line 860, in reload_model_weights
    sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)
  File "D:\AI_Stuff\stable-diffusion-webui\modules\sd_models.py", line 793, in reuse_model_from_already_loaded
    send_model_to_cpu(sd_model)
  File "D:\AI_Stuff\stable-diffusion-webui\modules\sd_models.py", line 662, in send_model_to_cpu
    if m.lowvram:
AttributeError: 'NoneType' object has no attribute 'lowvram'

43f0501ac4ffcef84f3fc32a47779f24c181647fa97ef7f5ec4428107d732ae9
Loading weights [43f0501ac4] from D:\AI_Stuff\stable-diffusion-webui\models\Stable-diffusion\sdxl_lightning_8step.safetensors
Creating model from config: D:\AI_Stuff\stable-diffusion-webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
D:\AI_Stuff\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
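
The key line in this log is the warning that the NVIDIA driver is too old (reported version 11040, i.e. a driver that only supports up to CUDA 11.4), so PyTorch cannot initialize the GPU and the UI only starts because of the --skip-torch-cuda-test flag. A quick way to confirm what PyTorch actually sees on this install is a generic torch diagnostic like the one below (an illustrative sketch, not a webui command), run with the venv's Python interpreter:

import torch

# Generic PyTorch diagnostic (not webui code); run it inside the webui venv,
# e.g. with D:\AI_Stuff\stable-diffusion-webui\venv\Scripts\python.exe
print("torch build:", torch.__version__)          # PyTorch version the webui installed
print("built for CUDA:", torch.version.cuda)      # CUDA toolkit the wheel targets
print("CUDA usable:", torch.cuda.is_available())  # expected to print False here, per the warning above
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))  # GPU name once the driver is usable

If is_available() prints False, the fix is the one the warning itself suggests: update the GPU driver, or install a PyTorch build compiled for the CUDA version the installed driver supports.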

Additional information

I updated my GPU driver recently. Yes, I am aware that my GPU is old. I use Opera GX as my default browser. I have reinstalled the model(s), and I have had this issue ever since I first installed the webui. I have also reinstalled the repo and done a clean reinstallation of the webui. Some people fixed this bug by reinstalling the repo, but that did not work for me. Any idea how to fix this?
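
The traceback shows send_model_to_cpu dereferencing m.lowvram while m, the currently loaded model, is still None. That happens when no checkpoint has ever been loaded successfully; on this setup the CUDA driver failure above means the initial load never completed. Below is a minimal, self-contained sketch of that failure mode and the kind of None guard that avoids the crash. It uses made-up stand-in names and is an illustration only, not the project's actual code or patch.

# Minimal illustration of the failure mode (not webui code).
class LoadedModel:
    # Hypothetical stand-in for a successfully loaded checkpoint.
    lowvram = False

def send_model_to_cpu(m):
    # Without the None check, m.lowvram raises:
    # AttributeError: 'NoneType' object has no attribute 'lowvram'
    if m is None:
        return  # nothing was ever loaded, so there is nothing to move
    if m.lowvram:
        print("offloading low-VRAM modules to CPU")
    else:
        print("moving the whole model to CPU")

send_model_to_cpu(None)           # safe with the guard; crashes without it
send_model_to_cpu(LoadedModel())  # normal path once a checkpoint is loaded

In other words, the AttributeError is a symptom: the underlying problem is that no checkpoint ever loaded, which on this machine appears to come from the outdated GPU driver reported in the startup log.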

Trowa8 commented 4 months ago

Not fixing this because of too many other errors. Going to find another WebUI.

Giyu07 commented 4 months ago

Did anyone find a solution for this error?