Yesterday, everything worked well.
Just now, as I was looking to create a mood board/frames, I loaded SD and received a show-stopping error (see below).
Already up to date.
venv "G:\StableDiffusion\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Commit hash: 2c1bb46c7ad5b4536f6587d327a03f0ff7811c5d
Installing requirements for Web UI
Installing requirements for Anime Background Remover
Installing requirements for Anime Background Remover
Installing requirements for Anime Background Remover
Installing requirements for Batch Face Swap
Installing sd-dynamic-prompts requirements.txt
Installing imageio-ffmpeg requirement for depthmap script
Installing pyqt5 requirement for depthmap script
Launching Web UI with arguments: --xformers --allow-code --autolaunch --opt-channelslast --theme dark --api --cors-allow-origins=http://127.0.0.1:3456
Loading weights [5decabbb40] from G:\StableDiffusion\stable-diffusion-webui\models\Stable-diffusion\768-v-ema.safetensors
Creating model from config: G:\StableDiffusion\stable-diffusion-webui\models\Stable-diffusion\768-v-ema.yaml
LatentDiffusion: Running in v-prediction mode
DiffusionWrapper has 865.91 M params.
Failed to create model quickly; will retry using slow method.
LatentDiffusion: Running in v-prediction mode
DiffusionWrapper has 865.91 M params.
loading stable diffusion model: RuntimeError
Traceback (most recent call last):
  File "G:\StableDiffusion\stable-diffusion-webui\webui.py", line 111, in initialize
    modules.sd_models.load_model()
  File "G:\StableDiffusion\stable-diffusion-webui\modules\sd_models.py", line 392, in load_model
    sd_model = instantiate_from_config(sd_config.model)
  File "G:\StableDiffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\util.py", line 79, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "G:\StableDiffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 563, in __init__
    self.instantiate_cond_stage(cond_stage_config)
  File "G:\StableDiffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 630, in instantiate_cond_stage
    model = instantiate_from_config(config)
  File "G:\StableDiffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\util.py", line 79, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "G:\StableDiffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\encoders\modules.py", line 147, in __init__
    model, _, _ = open_clip.create_model_and_transforms(arch, device=torch.device('cpu'), pretrained=version)
  File "G:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\open_clip\factory.py", line 201, in create_model_and_transforms
    model = create_model(
  File "G:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\open_clip\factory.py", line 152, in create_model
    model = CLIP(**model_cfg, cast_dtype=cast_dtype)
  File "G:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\open_clip\model.py", line 163, in __init__
    text = _build_text_tower(embed_dim, text_cfg, quick_gelu, cast_dtype)
  File "G:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\open_clip\model.py", line 137, in _build_text_tower
    text = TextTransformer(
  File "G:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\open_clip\transformer.py", line 347, in __init__
    self.transformer = Transformer(
  File "G:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\open_clip\transformer.py", line 216, in __init__
    self.resblocks = nn.ModuleList([
  File "G:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\open_clip\transformer.py", line 217, in <listcomp>
    ResidualAttentionBlock(
  File "G:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\open_clip\transformer.py", line 143, in __init__
    ("c_fc", nn.Linear(d_model, mlp_width)),
  File "G:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\linear.py", line 96, in __init__
    self.weight = Parameter(torch.empty((out_features, in_features), **factory_kwargs))
RuntimeError: [enforce fail at ..\c10\core\impl\alloc_cpu.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 16777216 bytes.
Stable diffusion model failed to load, exiting
Press any key to continue . . .
Here's my webui-user.bat:
@echo off
set PYTHON=C:\Python310\python.exe
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS= --xformers --allow-code --autolaunch --opt-channelslast --skip-torch-cuda-test --theme dark --api --cors-allow-origins=http://127.0.0.1:3456
set CUDA_VISIBLE_DEVICES=0
set SAFETENSORS_FAST_GPU=1
git pull
call webui.bat
Any ideas?
Is there a way I can go back to last night's build, so I can keep going while this gets fixed?
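In case it matters, this is roughly what I was planning to run from the stable-diffusion-webui folder to pin the repo back to an older commit, unless someone knows a better way. The hash is just a placeholder for whatever git log shows for last night's commit:

git log --oneline -10
git checkout <last-nights-commit-hash>

or, to undo only the most recent pull:

git reset --hard HEAD@{1}

I'd also temporarily remove the git pull line from webui-user.bat so the next launch doesn't update again.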
Thanks in advance!