AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI

[Bug]: Show stopping error after latest git pull (01/30/23) #7408

Open · rethink-studios opened this issue 1 year ago

rethink-studios commented 1 year ago

Is there an existing issue for this?

What happened?

Last night everything worked. About 15 minutes ago I updated to the latest build with a git pull, and now a show-stopping error occurs.

Steps to reproduce the problem

Simply starting the webUI triggers the error.

What should have happened?

The webUI should have launched normally.

Commit where the problem happens

2c1bb46c7ad5b4536f6587d327a03f0ff7811c5d

What platforms do you use to access the UI ?

Windows

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

set COMMANDLINE_ARGS= --xformers --allow-code --autolaunch --opt-channelslast --skip-torch-cuda-test --theme dark --api --cors-allow-origins=http://127.0.0.1:3456

List of extensions

(screenshot of installed extensions)

Console logs

Already up to date.
venv "G:\StableDiffusion\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Commit hash: 2c1bb46c7ad5b4536f6587d327a03f0ff7811c5d
Installing requirements for Web UI
Installing requirements for Anime Background Remover
Installing requirements for Anime Background Remover
Installing requirements for Anime Background Remover

Installing requirements for Batch Face Swap

Installing sd-dynamic-prompts requirements.txt

Installing imageio-ffmpeg requirement for depthmap script
Installing pyqt5 requirement for depthmap script

Launching Web UI with arguments: --xformers --allow-code --autolaunch --opt-channelslast --theme dark --api --cors-allow-origins=http://127.0.0.1:3456
Loading weights [5decabbb40] from G:\StableDiffusion\stable-diffusion-webui\models\Stable-diffusion\768-v-ema.safetensors
Creating model from config: G:\StableDiffusion\stable-diffusion-webui\models\Stable-diffusion\768-v-ema.yaml
LatentDiffusion: Running in v-prediction mode
DiffusionWrapper has 865.91 M params.
Failed to create model quickly; will retry using slow method.
LatentDiffusion: Running in v-prediction mode
DiffusionWrapper has 865.91 M params.
loading stable diffusion model: RuntimeError
Traceback (most recent call last):
  File "G:\StableDiffusion\stable-diffusion-webui\webui.py", line 111, in initialize
    modules.sd_models.load_model()
  File "G:\StableDiffusion\stable-diffusion-webui\modules\sd_models.py", line 392, in load_model
    sd_model = instantiate_from_config(sd_config.model)
  File "G:\StableDiffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\util.py", line 79, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "G:\StableDiffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 563, in __init__
    self.instantiate_cond_stage(cond_stage_config)
  File "G:\StableDiffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 630, in instantiate_cond_stage
    model = instantiate_from_config(config)
  File "G:\StableDiffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\util.py", line 79, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "G:\StableDiffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\encoders\modules.py", line 147, in __init__
    model, _, _ = open_clip.create_model_and_transforms(arch, device=torch.device('cpu'), pretrained=version)
  File "G:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\open_clip\factory.py", line 201, in create_model_and_transforms
    model = create_model(
  File "G:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\open_clip\factory.py", line 152, in create_model
    model = CLIP(**model_cfg, cast_dtype=cast_dtype)
  File "G:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\open_clip\model.py", line 163, in __init__
    text = _build_text_tower(embed_dim, text_cfg, quick_gelu, cast_dtype)
  File "G:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\open_clip\model.py", line 137, in _build_text_tower
    text = TextTransformer(
  File "G:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\open_clip\transformer.py", line 347, in __init__
    self.transformer = Transformer(
  File "G:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\open_clip\transformer.py", line 216, in __init__
    self.resblocks = nn.ModuleList([
  File "G:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\open_clip\transformer.py", line 217, in <listcomp>
    ResidualAttentionBlock(
  File "G:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\open_clip\transformer.py", line 137, in __init__
    self.attn = nn.MultiheadAttention(d_model, n_head)
  File "G:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\activation.py", line 968, in __init__
    self.in_proj_weight = Parameter(torch.empty((3 * embed_dim, embed_dim), **factory_kwargs))
RuntimeError: [enforce fail at ..\c10\core\impl\alloc_cpu.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 12582912 bytes.

Stable diffusion model failed to load, exiting
Press any key to continue . . .

Additional information

No response

ataa commented 1 year ago

DefaultCPUAllocator: not enough memory: you tried to allocate 12582912 bytes.

Ran out of memory; increase your virtual memory.
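
For context, the allocation that failed is tiny: the traceback shows torch.empty((3 * embed_dim, embed_dim)) for the OpenCLIP text tower, and with embed_dim = 1024 (the text width used by the SD 2.x OpenCLIP encoder) that is 3 × 1024 × 1024 × 4 bytes = 12,582,912 bytes, about 12 MB. So the commit limit was already exhausted by the time that small request arrived, and enlarging the page file is the usual fix. A rough sketch from an elevated Command Prompt follows; the 32 GB size and the C: drive are assumptions, and the Windows Settings UI (Advanced system settings > Performance > Virtual memory) does the same thing.

rem Sketch only: turn off automatic page file management, then set a fixed 32 GB page file on C:.
rem Requires an elevated prompt; reboot afterwards for the change to take effect.
wmic computersystem where name="%computername%" set AutomaticManagedPagefile=False
wmic pagefileset where name="C:\\pagefile.sys" set InitialSize=32768,MaximumSize=32768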

gsgoldma commented 1 year ago

DefaultCPUAllocator: not enough memory: you tried to allocate 12582912 bytes.

Ran out of memory; increase your virtual memory.

They said it was working, so something changed in the update. They shouldn't have to change the VRAM.

ClashSAN commented 1 year ago

Virtual memory means setting a larger page file.

VRAM is video RAM, in case you are confusing the two.

RAM usage is high at startup because the model is first loaded into system RAM; that is the tradeoff webui makes to keep GPU VRAM requirements lower for everybody. The original CompVis repo loads everything straight to the GPU, which takes 6-7 GB of VRAM but is not heavy on system RAM.

You can make webui behave like CompVis does by setting the '--lowram' flag if you need to.
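
A minimal sketch of what that looks like in webui-user.bat, reusing the arguments from the original report with everything else unchanged:

rem webui-user.bat (sketch): append --lowram so checkpoint weights are loaded to VRAM
rem instead of being staged in system RAM first.
set COMMANDLINE_ARGS=--xformers --allow-code --autolaunch --opt-channelslast --skip-torch-cuda-test --theme dark --api --cors-allow-origins=http://127.0.0.1:3456 --lowram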

gsgoldma commented 1 year ago

my bad!