Is there an existing issue for this?
[X] I have searched the existing issues and checked the recent builds/commits
What happened?
Before December 9th, 2023, I was able to use the Intel Arc build of Stable Diffusion WebUI normally, and it could load the SDXL 1.0 model. After running git pull, I can still load and use the SD 1.5 model, but I can no longer load the SDXL 1.0 model.
This is the device information:
Windows 11
Intel Arc A750
RAM: 24 GB
Start parameters: webui.bat --skip-torch-cuda-test --no-half --no-half-vae --medvram (see the webui-user.bat sketch below)
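For clarity, these are the same flags as they would normally be set in webui-user.bat (a sketch of the usual file layout, not the actual file from this machine; note the launch log below also shows --allow-code and --opt-sdp-attention):

```bat
rem webui-user.bat -- assumed layout; only the flags reported above are shown
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--skip-torch-cuda-test --no-half --no-half-vae --medvram

call webui.bat
```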
This is the error message:
The program starts normally and I can load and use "v1-5-pruned-emaonly.safetensors", but I cannot load "sd_xl_base_1.0.safetensors". When switching to the SDXL model, the following message appears:
"changing setting sd_model_checkpoint to sdxl\main\sd_xl_base_1.0.safetensors [31e35c80fc]: RuntimeError
Traceback (most recent call last): ........"
The full traceback is included in the console logs below.
Steps to reproduce the problem
1. Launch webui.bat with the start parameters listed above.
2. Load "v1-5-pruned-emaonly.safetensors" and generate normally (this works).
3. Switch the checkpoint to "sdxl\main\sd_xl_base_1.0.safetensors".
4. The RuntimeError above is raised and the SDXL model fails to load.
What should have happened?
The SDXL 1.0 base model should load successfully, as it did before the update.
Sysinfo
Windows 11
Intel Arc A750
RAM: 24 GB
What browsers do you use to access the UI ?
No response
Console logs
venv "H:\AI\stable-diffusion-webui\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: 1.6.0
Commit hash: 44006297e03a07f28505d54d6ba5fd55e0c1292d
Launching Web UI with arguments: --allow-code --skip-torch-cuda-test --no-half --no-half-vae --medvram --opt-sdp-attention
WARNING:xformers:WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
PyTorch 2.0.1+cu118 with CUDA 1108 (you have 2.1.0+cpu)
Python 3.10.11 (you have 3.10.6)
Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
Memory-efficient attention, SwiGLU, sparse and more won't be available.
Set XFORMERS_MORE_DETAILS=1 for more details
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
Loading weights [6ce0161689] from H:\AI\stable-diffusion-webui\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Creating model from config: H:\AI\stable-diffusion-webui\configs\v1-inference.yaml
Startup time: 5.7s (prepare environment: 0.2s, import torch: 2.1s, import gradio: 0.7s, setup paths: 0.6s, other imports: 0.5s, load scripts: 0.7s, create ui: 0.5s, gradio launch: 0.4s).
Applying attention optimization: sdp... done.
Model loaded in 3.5s (load weights from disk: 0.8s, create model: 0.2s, apply weights to model: 2.4s).
Reusing loaded model v1-5-pruned-emaonly.safetensors [6ce0161689] to load sdxl\main\sd_xl_base_1.0.safetensors [31e35c80fc]
Loading weights [31e35c80fc] from H:\AI\stable-diffusion-webui\models\Stable-diffusion\sdxl\main\sd_xl_base_1.0.safetensors
Creating model from config: H:\AI\stable-diffusion-webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
Loading VAE weights found near the checkpoint: H:\AI\stable-diffusion-webui\models\VAE\sd_xl_base_1.0_0.9vae.safetensors
changing setting sd_model_checkpoint to sdxl\main\sd_xl_base_1.0.safetensors [31e35c80fc]: RuntimeError
Traceback (most recent call last):
File "H:\AI\stable-diffusion-webui\modules\options.py", line 140, in set
option.onchange()
File "H:\AI\stable-diffusion-webui\modules\call_queue.py", line 13, in f
res = func(*args, **kwargs)
File "H:\AI\stable-diffusion-webui\modules\initialize_util.py", line 170, in <lambda>
shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: sd_models.reload_model_weights()), call=False)
File "H:\AI\stable-diffusion-webui\modules\sd_models.py", line 751, in reload_model_weights
load_model(checkpoint_info, already_loaded_state_dict=state_dict)
File "H:\AI\stable-diffusion-webui\modules\sd_models.py", line 626, in load_model
load_model_weights(sd_model, checkpoint_info, state_dict, timer)
File "H:\AI\stable-diffusion-webui\modules\sd_models.py", line 409, in load_model_weights
sd_vae.load_vae(model, vae_file, vae_source)
File "H:\AI\stable-diffusion-webui\modules\sd_vae.py", line 212, in load_vae
_load_vae_dict(model, vae_dict_1)
File "H:\AI\stable-diffusion-webui\modules\sd_vae.py", line 239, in _load_vae_dict
model.first_stage_model.load_state_dict(vae_dict_1)
File "H:\AI\stable-diffusion-webui\modules\sd_disable_initialization.py", line 223, in <lambda>
module_load_state_dict = self.replace(torch.nn.Module, 'load_state_dict', lambda *args, **kwargs: load_state_dict(module_load_state_dict, *args, **kwargs))
File "H:\AI\stable-diffusion-webui\modules\sd_disable_initialization.py", line 221, in load_state_dict
original(module, state_dict, strict=strict)
File "H:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 2152, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for AutoencoderKLInferenceWrapper:
Missing key(s) in state_dict: "encoder.conv_in.weight", "encoder.conv_in.bias", "encoder.down.0.block.0.norm1.weight", "encoder.down.0.block.0.norm1.bias", "encoder.down.0.block.0.conv1.weight", "encoder.down.0.block.0.conv1.bias", "encoder.down.0.block.0.norm2.weight", "encoder.down.0.block.0.norm2.bias", "encoder.down.0.block.0.conv2.weight", "encoder.down.0.block.0.conv2.bias", "encoder.down.0.block.1.norm1.weight", "encoder.down.0.block.1.norm1.bias",
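The traceback shows load_state_dict failing on AutoencoderKLInferenceWrapper with keys such as "encoder.conv_in.weight" reported missing, after webui picked up the external VAE file sd_xl_base_1.0_0.9vae.safetensors found next to the checkpoint. As a diagnostic sketch (not part of the original report, and assuming safetensors is installed in the webui venv), the keys inside that VAE file can be listed and compared against the keys the error says are missing:

```python
# Diagnostic sketch: inspect the VAE file that webui loaded "near the checkpoint".
# Assumption: run with the webui venv's Python; path taken from the log above.
from safetensors.torch import load_file

vae_path = r"H:\AI\stable-diffusion-webui\models\VAE\sd_xl_base_1.0_0.9vae.safetensors"
state_dict = load_file(vae_path)

# Print a sample of the keys and their tensor shapes. If the keys carry an
# unexpected prefix or the file is truncated/corrupt, they will not match the
# "encoder.*" / "decoder.*" keys that load_state_dict expects, which would
# produce exactly the "Missing key(s) in state_dict" error shown above.
for key in sorted(state_dict)[:20]:
    print(key, tuple(state_dict[key].shape))

print("total keys:", len(state_dict))
print("has encoder.conv_in.weight:", "encoder.conv_in.weight" in state_dict)
```

If the expected keys are present in the file, the failure is more likely on the model-construction side (the SDXL first_stage_model not being built as expected on this setup) than in the VAE file itself.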
Additional information
No response