Vigilence opened this issue 7 months ago
Downloading an earlier commit that was done this past Friday resolved the issue for me. Hopefully this eventually gets fixed.
Checked the newest update and the issue persists.
If anyone else has an issue with Loras and refiner models not working, you can use
git checkout d2af6d1b4427928013b5594efb9e0f08ad3bfd16
to go back to when they were working together fine.
@lllyasviel
how would I use "git checkout d2af6d1b4427928013b5594efb9e0f08ad3bfd16"? I tried opening git from a cmd prompt; maybe I should cd to where I have Forge (a separate drive) and try it there. I'll let you know if I figure it out.
@BronxBob It would be from inside the base folder. You can add it to your webuser.bat file to make it easier.
@echo off
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=
@REM Uncomment following code to reference an existing A1111 checkout.
@REM set A1111_HOME=Your A1111 checkout dir
@REM
@REM set VENV_DIR=%A1111_HOME%/venv
@REM set COMMANDLINE_ARGS=%COMMANDLINE_ARGS% ^
@REM --ckpt-dir %A1111_HOME%/models/Stable-diffusion ^
@REM --hypernetwork-dir %A1111_HOME%/models/hypernetworks ^
@REM --embeddings-dir %A1111_HOME%/embeddings ^
@REM --lora-dir %A1111_HOME%/models/Lora
git checkout d2af6d1b4427928013b5594efb9e0f08ad3bfd16
call webui.bat
@lllyasviel The issue might be because of comfyui
https://github.com/comfyanonymous/ComfyUI/issues/1144 https://github.com/comfyanonymous/ComfyUI/issues/887
@lllyasviel @Vigilence In the latest version (and others) I am experiencing a bug similar to the one I previously reported, which had been fixed: issue #161.
In the current latest version, a Lora has no effect on the refiner.
The procedure for reproducing the bug is shown in the image below. The cause may be different from that in issue #161, but the reproduction steps and the symptoms are the same as in issue #161.
Sigh, tried this and the Loras are still a no-show. Forge works really well aside from the no-Lora thing, though.
@BronxBob What issue are you having?
Never mind, I guess it took a couple of restarts to download the older build, because now it seems to work. It took 4 restarts; the first three were this morning. I was going to screenshot the whole thing, but the first test worked and then the later ones started working as well. I'm not getting the "lora key not loaded" messages now.
Glad you got it working. If you don't mind sharing the error log from when the "lora key not loaded" messages appear, it would help @lllyasviel fix the issue for us.
Just restarted it and the issue is back (???), so here is what was happening in CMD. It's kind of long...
HEAD is now at d2af6d1b Contribution Guideline
M       webui-user.bat
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: f0.0.10-latest-95-gd2af6d1b
Commit hash: d2af6d1b4427928013b5594efb9e0f08ad3bfd16
CUDA 12.1
Launching Web UI with arguments: --theme dark --ckpt-dir X:\SDXL_A111\stable-diffusion-webui\models\Stable-diffusion --lora-dir X:\SDXL_A111\stable-diffusion-webui\models\Lora
Total VRAM 12282 MB, total RAM 65452 MB
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4070 : native
VAE dtype: torch.bfloat16
Using pytorch cross attention
ControlNet preprocessor location: X:\webui_forge\webui_forge_cu121_torch21\webui\models\ControlNetPreprocessor
21:28:43 - ReActor - STATUS - Running v0.7.0-a1 on Device: CUDA
Loading weights [b6c0f1430a] from X:\SDXL_A111\stable-diffusion-webui\models\Stable-diffusion\sdxlSevenof9NSFW_sdxlSevenof94thNSFW.safetensors
2024-02-19 21:28:43,502 - ControlNet - INFO - ControlNet UI callback registered.
Running on local URL: http://127.0.0.1:7860
To create a public link, set share=True in launch().
Startup time: 14.8s (prepare environment: 4.9s, import torch: 3.6s, import gradio: 1.1s, setup paths: 0.9s, initialize shared: 0.1s, other imports: 0.6s, setup gfpgan: 0.1s, load scripts: 2.2s, create ui: 0.6s, gradio launch: 0.5s).
model_type EPS
UNet ADM Dimension 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
left over keys: dict_keys(['conditioner.embedders.0.logit_scale', 'conditioner.embedders.0.text_projection', 'conditioner.embedders.1.model.transformer.text_model.embeddings.position_ids'])
To load target model SDXLClipModel
Begin to load 1 model
Moving model(s) has taken 0.82 seconds
Model loaded in 8.9s (load weights from disk: 0.8s, forge instantiate config: 1.3s, forge load real models: 4.5s, load VAE: 0.8s, calculate empty prompt: 1.4s).
lora key not loaded lora_unet_down_blocks_0_attentions_0_proj_in.alpha
lora key not loaded lora_unet_down_blocks_0_attentions_0_proj_in.lora_down.weight
lora key not loaded lora_unet_down_blocks_0_attentions_0_proj_in.lora_up.weight
lora key not loaded lora_unet_down_blocks_0_attentions_0_proj_out.alpha
lora key not loaded lora_unet_down_blocks_0_attentions_0_proj_out.lora_down.weight
lora key not loaded lora_unet_down_blocks_0_attentions_0_proj_out.lora_up.weight
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn1_to_k.alpha
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn1_to_k.lora_down.weight
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn1_to_k.lora_up.weight
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn1_to_out_0.alpha
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn1_to_out_0.lora_down.weight
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn1_to_out_0.lora_up.weight
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn1_to_q.alpha
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn1_to_q.lora_down.weight
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn1_to_q.lora_up.weight
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn1_to_v.alpha
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn1_to_v.lora_down.weight
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn1_to_v.lora_up.weight
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn2_to_k.alpha
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn2_to_k.lora_down.weight
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn2_to_k.lora_up.weight
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn2_to_out_0.alpha
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn2_to_out_0.lora_down.weight
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn2_to_out_0.lora_up.weight
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn2_to_q.alpha
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn2_to_q.lora_down.weight
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn2_to_q.lora_up.weight
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn2_to_v.alpha
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn2_to_v.lora_down.weight
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn2_to_v.lora_up.weight
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_ff_net_0_proj.alpha
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_ff_net_0_proj.lora_down.weight
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_ff_net_0_proj.lora_up.weight
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_ff_net_2.alpha
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_ff_net_2.lora_down.weight
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_ff_net_2.lora_up.weight
lora key not loaded lora_unet_down_blocks_0_attentions_1_proj_in.alpha
lora key not loaded lora_unet_down_blocks_0_attentions_1_proj_in.lora_down.weight
lora key not loaded lora_unet_down_blocks_0_attentions_1_proj_in.lora_up.weight
lora key not loaded lora_unet_down_blocks_0_attentions_1_proj_out.alpha
lora key not loaded lora_unet_down_blocks_0_attentions_1_proj_out.lora_down.weight
lora key not loaded lora_unet_down_blocks_0_attentions_1_proj_out.lora_up.weight
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_attn1_to_k.alpha
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_attn1_to_k.lora_down.weight
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_attn1_to_k.lora_up.weight
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_attn1_to_out_0.alpha
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_attn1_to_out_0.lora_down.weight
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_attn1_to_out_0.lora_up.weight
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_attn1_to_q.alpha
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_attn1_to_q.lora_down.weight
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_attn1_to_q.lora_up.weight
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_attn1_to_v.alpha
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_attn1_to_v.lora_down.weight
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_attn1_to_v.lora_up.weight
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_attn2_to_k.alpha
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_attn2_to_k.lora_down.weight
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_attn2_to_k.lora_up.weight
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_attn2_to_out_0.alpha
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_attn2_to_out_0.lora_down.weight
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_attn2_to_out_0.lora_up.weight
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_attn2_to_q.alpha
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_attn2_to_q.lora_down.weight
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_attn2_to_q.lora_up.weight
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_attn2_to_v.alpha
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_attn2_to_v.lora_down.weight
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_attn2_to_v.lora_up.weight
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_ff_net_0_proj.alpha
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_ff_net_0_proj.lora_down.weight
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_ff_net_0_proj.lora_up.weight
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_ff_net_2.alpha
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_ff_net_2.lora_down.weight
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_ff_net_2.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_0_proj_in.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_0_proj_in.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_0_proj_in.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_0_proj_out.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_0_proj_out.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_0_proj_out.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_attn1_to_k.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_attn1_to_k.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_attn1_to_k.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_attn1_to_out_0.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_attn1_to_out_0.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_attn1_to_out_0.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_attn1_to_q.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_attn1_to_q.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_attn1_to_q.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_attn1_to_v.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_attn1_to_v.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_attn1_to_v.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_attn2_to_k.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_attn2_to_k.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_attn2_to_k.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_attn2_to_out_0.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_attn2_to_out_0.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_attn2_to_out_0.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_attn2_to_q.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_attn2_to_q.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_attn2_to_q.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_attn2_to_v.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_attn2_to_v.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_attn2_to_v.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_ff_net_0_proj.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_ff_net_0_proj.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_ff_net_0_proj.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_ff_net_2.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_ff_net_2.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_ff_net_2.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_1_proj_in.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_1_proj_in.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_1_proj_in.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_1_proj_out.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_1_proj_out.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_1_proj_out.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_attn1_to_k.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_attn1_to_k.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_attn1_to_k.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_attn1_to_out_0.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_attn1_to_out_0.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_attn1_to_out_0.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_attn1_to_q.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_attn1_to_q.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_attn1_to_q.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_attn1_to_v.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_attn1_to_v.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_attn1_to_v.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_attn2_to_k.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_attn2_to_k.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_attn2_to_k.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_attn2_to_out_0.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_attn2_to_out_0.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_attn2_to_out_0.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_attn2_to_q.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_attn2_to_q.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_attn2_to_q.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_attn2_to_v.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_attn2_to_v.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_attn2_to_v.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_ff_net_0_proj.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_ff_net_0_proj.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_ff_net_0_proj.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_ff_net_2.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_ff_net_2.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_ff_net_2.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_2_proj_in.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_2_proj_in.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_2_proj_in.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_2_proj_out.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_2_proj_out.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_2_proj_out.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_attn1_to_k.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_attn1_to_k.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_attn1_to_k.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_attn1_to_out_0.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_attn1_to_out_0.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_attn1_to_out_0.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_attn1_to_q.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_attn1_to_q.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_attn1_to_q.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_attn1_to_v.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_attn1_to_v.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_attn1_to_v.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_attn2_to_k.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_attn2_to_k.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_attn2_to_k.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_attn2_to_out_0.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_attn2_to_out_0.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_attn2_to_out_0.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_attn2_to_q.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_attn2_to_q.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_attn2_to_q.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_attn2_to_v.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_attn2_to_v.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_attn2_to_v.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_ff_net_0_proj.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_ff_net_0_proj.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_ff_net_0_proj.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_ff_net_2.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_ff_net_2.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_ff_net_2.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_0_proj_in.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_0_proj_in.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_0_proj_in.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_0_proj_out.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_0_proj_out.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_0_proj_out.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_attn1_to_k.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_attn1_to_k.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_attn1_to_k.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_attn1_to_out_0.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_attn1_to_out_0.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_attn1_to_out_0.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_attn1_to_q.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_attn1_to_q.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_attn1_to_q.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_attn1_to_v.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_attn1_to_v.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_attn1_to_v.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_attn2_to_k.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_attn2_to_k.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_attn2_to_k.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_attn2_to_out_0.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_attn2_to_out_0.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_attn2_to_out_0.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_attn2_to_q.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_attn2_to_q.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_attn2_to_q.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_attn2_to_v.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_attn2_to_v.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_attn2_to_v.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_ff_net_0_proj.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_ff_net_0_proj.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_ff_net_0_proj.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_ff_net_2.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_ff_net_2.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_ff_net_2.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_1_proj_in.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_1_proj_in.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_1_proj_in.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_1_proj_out.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_1_proj_out.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_1_proj_out.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_attn1_to_k.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_attn1_to_k.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_attn1_to_k.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_attn1_to_out_0.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_attn1_to_out_0.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_attn1_to_out_0.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_attn1_to_q.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_attn1_to_q.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_attn1_to_q.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_attn1_to_v.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_attn1_to_v.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_attn1_to_v.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_attn2_to_k.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_attn2_to_k.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_attn2_to_k.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_attn2_to_out_0.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_attn2_to_out_0.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_attn2_to_out_0.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_attn2_to_q.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_attn2_to_q.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_attn2_to_q.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_attn2_to_v.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_attn2_to_v.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_attn2_to_v.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_ff_net_0_proj.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_ff_net_0_proj.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_ff_net_0_proj.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_ff_net_2.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_ff_net_2.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_ff_net_2.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_2_proj_in.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_2_proj_in.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_2_proj_in.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_2_proj_out.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_2_proj_out.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_2_proj_out.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_attn1_to_k.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_attn1_to_k.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_attn1_to_k.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_attn1_to_out_0.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_attn1_to_out_0.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_attn1_to_out_0.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_attn1_to_q.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_attn1_to_q.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_attn1_to_q.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_attn1_to_v.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_attn1_to_v.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_attn1_to_v.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_attn2_to_k.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_attn2_to_k.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_attn2_to_k.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_attn2_to_out_0.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_attn2_to_out_0.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_attn2_to_out_0.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_attn2_to_q.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_attn2_to_q.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_attn2_to_q.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_attn2_to_v.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_attn2_to_v.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_attn2_to_v.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_ff_net_0_proj.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_ff_net_0_proj.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_ff_net_0_proj.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_ff_net_2.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_ff_net_2.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_ff_net_2.lora_up.weight
lora key not loaded lora_unet_down_blocks_0_attentions_0_proj_in.alpha
lora key not loaded lora_unet_down_blocks_0_attentions_0_proj_in.lora_down.weight
lora key not loaded lora_unet_down_blocks_0_attentions_0_proj_in.lora_up.weight
lora key not loaded lora_unet_down_blocks_0_attentions_0_proj_out.alpha
lora key not loaded lora_unet_down_blocks_0_attentions_0_proj_out.lora_down.weight
lora key not loaded lora_unet_down_blocks_0_attentions_0_proj_out.lora_up.weight
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn1_to_k.alpha
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn1_to_k.lora_down.weight
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn1_to_k.lora_up.weight
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn1_to_out_0.alpha
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn1_to_out_0.lora_down.weight
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn1_to_out_0.lora_up.weight
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn1_to_q.alpha
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn1_to_q.lora_down.weight
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn1_to_q.lora_up.weight
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn1_to_v.alpha
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn1_to_v.lora_down.weight
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn1_to_v.lora_up.weight
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn2_to_k.alpha
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn2_to_k.lora_down.weight
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn2_to_k.lora_up.weight
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn2_to_out_0.alpha
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn2_to_out_0.lora_down.weight
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn2_to_out_0.lora_up.weight
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn2_to_q.alpha
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn2_to_q.lora_down.weight
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn2_to_q.lora_up.weight
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn2_to_v.alpha
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn2_to_v.lora_down.weight
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn2_to_v.lora_up.weight
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_ff_net_0_proj.alpha
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_ff_net_0_proj.lora_down.weight
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_ff_net_0_proj.lora_up.weight
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_ff_net_2.alpha
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_ff_net_2.lora_down.weight
lora key not loaded lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_ff_net_2.lora_up.weight
lora key not loaded lora_unet_down_blocks_0_attentions_1_proj_in.alpha
lora key not loaded lora_unet_down_blocks_0_attentions_1_proj_in.lora_down.weight
lora key not loaded lora_unet_down_blocks_0_attentions_1_proj_in.lora_up.weight
lora key not loaded lora_unet_down_blocks_0_attentions_1_proj_out.alpha
lora key not loaded lora_unet_down_blocks_0_attentions_1_proj_out.lora_down.weight
lora key not loaded lora_unet_down_blocks_0_attentions_1_proj_out.lora_up.weight
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_attn1_to_k.alpha
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_attn1_to_k.lora_down.weight
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_attn1_to_k.lora_up.weight
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_attn1_to_out_0.alpha
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_attn1_to_out_0.lora_down.weight
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_attn1_to_out_0.lora_up.weight
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_attn1_to_q.alpha
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_attn1_to_q.lora_down.weight
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_attn1_to_q.lora_up.weight
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_attn1_to_v.alpha
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_attn1_to_v.lora_down.weight
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_attn1_to_v.lora_up.weight
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_attn2_to_k.alpha
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_attn2_to_k.lora_down.weight
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_attn2_to_k.lora_up.weight
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_attn2_to_out_0.alpha
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_attn2_to_out_0.lora_down.weight
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_attn2_to_out_0.lora_up.weight
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_attn2_to_q.alpha
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_attn2_to_q.lora_down.weight
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_attn2_to_q.lora_up.weight
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_attn2_to_v.alpha
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_attn2_to_v.lora_down.weight
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_attn2_to_v.lora_up.weight
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_ff_net_0_proj.alpha
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_ff_net_0_proj.lora_down.weight
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_ff_net_0_proj.lora_up.weight
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_ff_net_2.alpha
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_ff_net_2.lora_down.weight
lora key not loaded lora_unet_down_blocks_0_attentions_1_transformer_blocks_0_ff_net_2.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_0_proj_in.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_0_proj_in.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_0_proj_in.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_0_proj_out.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_0_proj_out.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_0_proj_out.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_attn1_to_k.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_attn1_to_k.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_attn1_to_k.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_attn1_to_out_0.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_attn1_to_out_0.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_attn1_to_out_0.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_attn1_to_q.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_attn1_to_q.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_attn1_to_q.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_attn1_to_v.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_attn1_to_v.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_attn1_to_v.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_attn2_to_k.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_attn2_to_k.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_attn2_to_k.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_attn2_to_out_0.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_attn2_to_out_0.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_attn2_to_out_0.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_attn2_to_q.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_attn2_to_q.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_attn2_to_q.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_attn2_to_v.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_attn2_to_v.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_attn2_to_v.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_ff_net_0_proj.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_ff_net_0_proj.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_ff_net_0_proj.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_ff_net_2.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_ff_net_2.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_0_transformer_blocks_0_ff_net_2.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_1_proj_in.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_1_proj_in.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_1_proj_in.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_1_proj_out.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_1_proj_out.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_1_proj_out.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_attn1_to_k.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_attn1_to_k.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_attn1_to_k.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_attn1_to_out_0.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_attn1_to_out_0.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_attn1_to_out_0.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_attn1_to_q.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_attn1_to_q.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_attn1_to_q.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_attn1_to_v.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_attn1_to_v.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_attn1_to_v.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_attn2_to_k.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_attn2_to_k.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_attn2_to_k.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_attn2_to_out_0.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_attn2_to_out_0.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_attn2_to_out_0.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_attn2_to_q.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_attn2_to_q.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_attn2_to_q.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_attn2_to_v.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_attn2_to_v.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_attn2_to_v.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_ff_net_0_proj.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_ff_net_0_proj.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_ff_net_0_proj.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_ff_net_2.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_ff_net_2.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_1_transformer_blocks_0_ff_net_2.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_2_proj_in.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_2_proj_in.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_2_proj_in.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_2_proj_out.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_2_proj_out.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_2_proj_out.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_attn1_to_k.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_attn1_to_k.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_attn1_to_k.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_attn1_to_out_0.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_attn1_to_out_0.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_attn1_to_out_0.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_attn1_to_q.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_attn1_to_q.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_attn1_to_q.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_attn1_to_v.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_attn1_to_v.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_attn1_to_v.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_attn2_to_k.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_attn2_to_k.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_attn2_to_k.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_attn2_to_out_0.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_attn2_to_out_0.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_attn2_to_out_0.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_attn2_to_q.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_attn2_to_q.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_attn2_to_q.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_attn2_to_v.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_attn2_to_v.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_attn2_to_v.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_ff_net_0_proj.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_ff_net_0_proj.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_ff_net_0_proj.lora_up.weight
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_ff_net_2.alpha
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_ff_net_2.lora_down.weight
lora key not loaded lora_unet_up_blocks_2_attentions_2_transformer_blocks_0_ff_net_2.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_0_proj_in.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_0_proj_in.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_0_proj_in.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_0_proj_out.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_0_proj_out.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_0_proj_out.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_attn1_to_k.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_attn1_to_k.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_attn1_to_k.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_attn1_to_out_0.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_attn1_to_out_0.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_attn1_to_out_0.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_attn1_to_q.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_attn1_to_q.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_attn1_to_q.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_attn1_to_v.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_attn1_to_v.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_attn1_to_v.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_attn2_to_k.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_attn2_to_k.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_attn2_to_k.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_attn2_to_out_0.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_attn2_to_out_0.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_attn2_to_out_0.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_attn2_to_q.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_attn2_to_q.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_attn2_to_q.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_attn2_to_v.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_attn2_to_v.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_attn2_to_v.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_ff_net_0_proj.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_ff_net_0_proj.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_ff_net_0_proj.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_ff_net_2.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_ff_net_2.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_0_transformer_blocks_0_ff_net_2.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_1_proj_in.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_1_proj_in.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_1_proj_in.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_1_proj_out.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_1_proj_out.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_1_proj_out.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_attn1_to_k.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_attn1_to_k.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_attn1_to_k.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_attn1_to_out_0.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_attn1_to_out_0.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_attn1_to_out_0.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_attn1_to_q.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_attn1_to_q.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_attn1_to_q.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_attn1_to_v.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_attn1_to_v.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_attn1_to_v.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_attn2_to_k.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_attn2_to_k.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_attn2_to_k.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_attn2_to_out_0.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_attn2_to_out_0.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_attn2_to_out_0.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_attn2_to_q.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_attn2_to_q.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_attn2_to_q.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_attn2_to_v.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_attn2_to_v.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_attn2_to_v.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_ff_net_0_proj.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_ff_net_0_proj.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_ff_net_0_proj.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_ff_net_2.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_ff_net_2.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_1_transformer_blocks_0_ff_net_2.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_2_proj_in.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_2_proj_in.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_2_proj_in.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_2_proj_out.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_2_proj_out.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_2_proj_out.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_attn1_to_k.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_attn1_to_k.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_attn1_to_k.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_attn1_to_out_0.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_attn1_to_out_0.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_attn1_to_out_0.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_attn1_to_q.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_attn1_to_q.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_attn1_to_q.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_attn1_to_v.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_attn1_to_v.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_attn1_to_v.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_attn2_to_k.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_attn2_to_k.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_attn2_to_k.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_attn2_to_out_0.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_attn2_to_out_0.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_attn2_to_out_0.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_attn2_to_q.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_attn2_to_q.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_attn2_to_q.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_attn2_to_v.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_attn2_to_v.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_attn2_to_v.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_ff_net_0_proj.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_ff_net_0_proj.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_ff_net_0_proj.lora_up.weight
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_ff_net_2.alpha
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_ff_net_2.lora_down.weight
lora key not loaded lora_unet_up_blocks_3_attentions_2_transformer_blocks_0_ff_net_2.lora_up.weight
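The wall of "lora key not loaded" messages above means the loader could not match any of these kohya-style UNet keys to modules in the currently loaded model. When the spam is this long, a quick way to see which UNet blocks are affected is to group the messages by block. This is an illustrative stand-alone script (not part of Forge), assuming you have saved the console output somewhere:

```python
from collections import Counter

# A couple of lines copied from the console output above, for illustration.
log_lines = [
    "lora key not loaded lora_unet_up_blocks_3_attentions_2_proj_in.alpha",
    "lora key not loaded lora_unet_down_blocks_0_attentions_0_proj_in.alpha",
]

def summarize(lines):
    """Count unloaded LoRA keys per top-level UNet block (e.g. 'up_blocks_3')."""
    counts = Counter()
    for line in lines:
        if line.startswith("lora key not loaded "):
            key = line.removeprefix("lora key not loaded ")
            # Bucket by everything before the '_attentions_' segment.
            block = key.replace("lora_unet_", "").split("_attentions_")[0]
            counts[block] += 1
    return dict(counts)

print(summarize(log_lines))  # {'up_blocks_3': 1, 'down_blocks_0': 1}
```

Seeing `down_blocks_*` / `up_blocks_*` names (diffusers/SD1.x-style) rejected wholesale is itself a hint that the keys don't belong to the model they are being applied to.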
To load target model SDXLClipModel
Begin to load 1 model
unload clone 0
Moving model(s) has taken 1.40 seconds
To load target model SDXL
Begin to load 1 model
ERROR diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_k.weight shape '[640, 2048]' is invalid for input of size 491520
ERROR diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_k.weight shape '[640, 2048]' is invalid for input of size 491520
ERROR diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_v.weight shape '[640, 2048]' is invalid for input of size 491520
ERROR diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_v.weight shape '[640, 2048]' is invalid for input of size 491520
ERROR diffusion_model.input_blocks.5.1.transformer_blocks.0.attn2.to_k.weight shape '[640, 2048]' is invalid for input of size 491520
ERROR diffusion_model.input_blocks.5.1.transformer_blocks.0.attn2.to_k.weight shape '[640, 2048]' is invalid for input of size 491520
ERROR diffusion_model.input_blocks.5.1.transformer_blocks.0.attn2.to_v.weight shape '[640, 2048]' is invalid for input of size 491520
ERROR diffusion_model.input_blocks.5.1.transformer_blocks.0.attn2.to_v.weight shape '[640, 2048]' is invalid for input of size 491520
ERROR diffusion_model.input_blocks.7.1.transformer_blocks.0.attn2.to_k.weight shape '[1280, 2048]' is invalid for input of size 983040
ERROR diffusion_model.input_blocks.7.1.transformer_blocks.0.attn2.to_k.weight shape '[1280, 2048]' is invalid for input of size 983040
ERROR diffusion_model.input_blocks.7.1.transformer_blocks.0.attn2.to_v.weight shape '[1280, 2048]' is invalid for input of size 983040
ERROR diffusion_model.input_blocks.7.1.transformer_blocks.0.attn2.to_v.weight shape '[1280, 2048]' is invalid for input of size 983040
ERROR diffusion_model.input_blocks.8.1.transformer_blocks.0.attn2.to_k.weight shape '[1280, 2048]' is invalid for input of size 983040
ERROR diffusion_model.input_blocks.8.1.transformer_blocks.0.attn2.to_k.weight shape '[1280, 2048]' is invalid for input of size 983040
ERROR diffusion_model.input_blocks.8.1.transformer_blocks.0.attn2.to_v.weight shape '[1280, 2048]' is invalid for input of size 983040
ERROR diffusion_model.input_blocks.8.1.transformer_blocks.0.attn2.to_v.weight shape '[1280, 2048]' is invalid for input of size 983040
ERROR diffusion_model.middle_block.1.transformer_blocks.0.attn2.to_k.weight shape '[1280, 2048]' is invalid for input of size 983040
ERROR diffusion_model.middle_block.1.transformer_blocks.0.attn2.to_k.weight shape '[1280, 2048]' is invalid for input of size 983040
ERROR diffusion_model.middle_block.1.transformer_blocks.0.attn2.to_v.weight shape '[1280, 2048]' is invalid for input of size 983040
ERROR diffusion_model.middle_block.1.transformer_blocks.0.attn2.to_v.weight shape '[1280, 2048]' is invalid for input of size 983040
ERROR diffusion_model.output_blocks.3.1.proj_out.weight shape '[640, 640]' is invalid for input of size 1638400
ERROR diffusion_model.output_blocks.3.1.proj_out.weight shape '[640, 640]' is invalid for input of size 1638400
ERROR diffusion_model.output_blocks.3.1.proj_in.weight shape '[640, 640]' is invalid for input of size 1638400
ERROR diffusion_model.output_blocks.3.1.proj_in.weight shape '[640, 640]' is invalid for input of size 1638400
ERROR diffusion_model.output_blocks.3.1.transformer_blocks.0.attn1.to_q.weight shape '[640, 640]' is invalid for input of size 1638400
ERROR diffusion_model.output_blocks.3.1.transformer_blocks.0.attn1.to_q.weight shape '[640, 640]' is invalid for input of size 1638400
ERROR diffusion_model.output_blocks.3.1.transformer_blocks.0.attn1.to_out.0.weight shape '[640, 640]' is invalid for input of size 1638400
ERROR diffusion_model.output_blocks.3.1.transformer_blocks.0.attn1.to_out.0.weight shape '[640, 640]' is invalid for input of size 1638400
ERROR diffusion_model.output_blocks.3.1.transformer_blocks.0.attn1.to_k.weight shape '[640, 640]' is invalid for input of size 1638400
ERROR diffusion_model.output_blocks.3.1.transformer_blocks.0.attn1.to_k.weight shape '[640, 640]' is invalid for input of size 1638400
ERROR diffusion_model.output_blocks.3.1.transformer_blocks.0.attn1.to_v.weight shape '[640, 640]' is invalid for input of size 1638400
ERROR diffusion_model.output_blocks.3.1.transformer_blocks.0.attn1.to_v.weight shape '[640, 640]' is invalid for input of size 1638400
ERROR diffusion_model.output_blocks.3.1.transformer_blocks.0.attn2.to_q.weight shape '[640, 640]' is invalid for input of size 1638400
ERROR diffusion_model.output_blocks.3.1.transformer_blocks.0.attn2.to_q.weight shape '[640, 640]' is invalid for input of size 1638400
ERROR diffusion_model.output_blocks.3.1.transformer_blocks.0.ff.net.2.weight shape '[640, 2560]' is invalid for input of size 6553600
ERROR diffusion_model.output_blocks.3.1.transformer_blocks.0.ff.net.2.weight shape '[640, 2560]' is invalid for input of size 6553600
ERROR diffusion_model.output_blocks.3.1.transformer_blocks.0.ff.net.0.proj.weight shape '[5120, 640]' is invalid for input of size 13107200
ERROR diffusion_model.output_blocks.3.1.transformer_blocks.0.ff.net.0.proj.weight shape '[5120, 640]' is invalid for input of size 13107200
ERROR diffusion_model.output_blocks.3.1.transformer_blocks.0.attn2.to_out.0.weight shape '[640, 640]' is invalid for input of size 1638400
ERROR diffusion_model.output_blocks.3.1.transformer_blocks.0.attn2.to_out.0.weight shape '[640, 640]' is invalid for input of size 1638400
ERROR diffusion_model.output_blocks.3.1.transformer_blocks.0.attn2.to_k.weight shape '[640, 2048]' is invalid for input of size 983040
ERROR diffusion_model.output_blocks.3.1.transformer_blocks.0.attn2.to_k.weight shape '[640, 2048]' is invalid for input of size 983040
ERROR diffusion_model.output_blocks.3.1.transformer_blocks.0.attn2.to_v.weight shape '[640, 2048]' is invalid for input of size 983040
ERROR diffusion_model.output_blocks.3.1.transformer_blocks.0.attn2.to_v.weight shape '[640, 2048]' is invalid for input of size 983040
ERROR diffusion_model.output_blocks.4.1.proj_out.weight shape '[640, 640]' is invalid for input of size 1638400
ERROR diffusion_model.output_blocks.4.1.proj_out.weight shape '[640, 640]' is invalid for input of size 1638400
ERROR diffusion_model.output_blocks.4.1.proj_in.weight shape '[640, 640]' is invalid for input of size 1638400
ERROR diffusion_model.output_blocks.4.1.proj_in.weight shape '[640, 640]' is invalid for input of size 1638400
ERROR diffusion_model.output_blocks.4.1.transformer_blocks.0.attn1.to_q.weight shape '[640, 640]' is invalid for input of size 1638400
ERROR diffusion_model.output_blocks.4.1.transformer_blocks.0.attn1.to_q.weight shape '[640, 640]' is invalid for input of size 1638400
ERROR diffusion_model.output_blocks.4.1.transformer_blocks.0.attn1.to_out.0.weight shape '[640, 640]' is invalid for input of size 1638400
ERROR diffusion_model.output_blocks.4.1.transformer_blocks.0.attn1.to_out.0.weight shape '[640, 640]' is invalid for input of size 1638400
ERROR diffusion_model.output_blocks.4.1.transformer_blocks.0.attn1.to_k.weight shape '[640, 640]' is invalid for input of size 1638400
ERROR diffusion_model.output_blocks.4.1.transformer_blocks.0.attn1.to_k.weight shape '[640, 640]' is invalid for input of size 1638400
ERROR diffusion_model.output_blocks.4.1.transformer_blocks.0.attn1.to_v.weight shape '[640, 640]' is invalid for input of size 1638400
ERROR diffusion_model.output_blocks.4.1.transformer_blocks.0.attn1.to_v.weight shape '[640, 640]' is invalid for input of size 1638400
ERROR diffusion_model.output_blocks.4.1.transformer_blocks.0.attn2.to_q.weight shape '[640, 640]' is invalid for input of size 1638400
ERROR diffusion_model.output_blocks.4.1.transformer_blocks.0.attn2.to_q.weight shape '[640, 640]' is invalid for input of size 1638400
ERROR diffusion_model.output_blocks.4.1.transformer_blocks.0.ff.net.2.weight shape '[640, 2560]' is invalid for input of size 6553600
ERROR diffusion_model.output_blocks.4.1.transformer_blocks.0.ff.net.2.weight shape '[640, 2560]' is invalid for input of size 6553600
ERROR diffusion_model.output_blocks.4.1.transformer_blocks.0.ff.net.0.proj.weight shape '[5120, 640]' is invalid for input of size 13107200
ERROR diffusion_model.output_blocks.4.1.transformer_blocks.0.ff.net.0.proj.weight shape '[5120, 640]' is invalid for input of size 13107200
ERROR diffusion_model.output_blocks.4.1.transformer_blocks.0.attn2.to_out.0.weight shape '[640, 640]' is invalid for input of size 1638400
ERROR diffusion_model.output_blocks.4.1.transformer_blocks.0.attn2.to_out.0.weight shape '[640, 640]' is invalid for input of size 1638400
ERROR diffusion_model.output_blocks.4.1.transformer_blocks.0.attn2.to_k.weight shape '[640, 2048]' is invalid for input of size 983040
ERROR diffusion_model.output_blocks.4.1.transformer_blocks.0.attn2.to_k.weight shape '[640, 2048]' is invalid for input of size 983040
ERROR diffusion_model.output_blocks.4.1.transformer_blocks.0.attn2.to_v.weight shape '[640, 2048]' is invalid for input of size 983040
ERROR diffusion_model.output_blocks.4.1.transformer_blocks.0.attn2.to_v.weight shape '[640, 2048]' is invalid for input of size 983040
ERROR diffusion_model.output_blocks.5.1.proj_out.weight shape '[640, 640]' is invalid for input of size 1638400
ERROR diffusion_model.output_blocks.5.1.proj_out.weight shape '[640, 640]' is invalid for input of size 1638400
ERROR diffusion_model.output_blocks.5.1.proj_in.weight shape '[640, 640]' is invalid for input of size 1638400
ERROR diffusion_model.output_blocks.5.1.proj_in.weight shape '[640, 640]' is invalid for input of size 1638400
ERROR diffusion_model.output_blocks.5.1.transformer_blocks.0.attn1.to_q.weight shape '[640, 640]' is invalid for input of size 1638400
ERROR diffusion_model.output_blocks.5.1.transformer_blocks.0.attn1.to_q.weight shape '[640, 640]' is invalid for input of size 1638400
ERROR diffusion_model.output_blocks.5.1.transformer_blocks.0.attn1.to_out.0.weight shape '[640, 640]' is invalid for input of size 1638400
ERROR diffusion_model.output_blocks.5.1.transformer_blocks.0.attn1.to_out.0.weight shape '[640, 640]' is invalid for input of size 1638400
ERROR diffusion_model.output_blocks.5.1.transformer_blocks.0.attn1.to_k.weight shape '[640, 640]' is invalid for input of size 1638400
ERROR diffusion_model.output_blocks.5.1.transformer_blocks.0.attn1.to_k.weight shape '[640, 640]' is invalid for input of size 1638400
ERROR diffusion_model.output_blocks.5.1.transformer_blocks.0.attn1.to_v.weight shape '[640, 640]' is invalid for input of size 1638400
ERROR diffusion_model.output_blocks.5.1.transformer_blocks.0.attn1.to_v.weight shape '[640, 640]' is invalid for input of size 1638400
ERROR diffusion_model.output_blocks.5.1.transformer_blocks.0.attn2.to_q.weight shape '[640, 640]' is invalid for input of size 1638400
ERROR diffusion_model.output_blocks.5.1.transformer_blocks.0.attn2.to_q.weight shape '[640, 640]' is invalid for input of size 1638400
ERROR diffusion_model.output_blocks.5.1.transformer_blocks.0.ff.net.2.weight shape '[640, 2560]' is invalid for input of size 6553600
ERROR diffusion_model.output_blocks.5.1.transformer_blocks.0.ff.net.2.weight shape '[640, 2560]' is invalid for input of size 6553600
ERROR diffusion_model.output_blocks.5.1.transformer_blocks.0.ff.net.0.proj.weight shape '[5120, 640]' is invalid for input of size 13107200
ERROR diffusion_model.output_blocks.5.1.transformer_blocks.0.ff.net.0.proj.weight shape '[5120, 640]' is invalid for input of size 13107200
ERROR diffusion_model.output_blocks.5.1.transformer_blocks.0.attn2.to_out.0.weight shape '[640, 640]' is invalid for input of size 1638400
ERROR diffusion_model.output_blocks.5.1.transformer_blocks.0.attn2.to_out.0.weight shape '[640, 640]' is invalid for input of size 1638400
ERROR diffusion_model.output_blocks.5.1.transformer_blocks.0.attn2.to_k.weight shape '[640, 2048]' is invalid for input of size 983040
ERROR diffusion_model.output_blocks.5.1.transformer_blocks.0.attn2.to_k.weight shape '[640, 2048]' is invalid for input of size 983040
ERROR diffusion_model.output_blocks.5.1.transformer_blocks.0.attn2.to_v.weight shape '[640, 2048]' is invalid for input of size 983040
ERROR diffusion_model.output_blocks.5.1.transformer_blocks.0.attn2.to_v.weight shape '[640, 2048]' is invalid for input of size 983040
Moving model(s) has taken 1.83 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 35/35 [00:11<00:00, 3.08it/s]
To load target model AutoencoderKL█████████████████████████████████████████████████████| 35/35 [00:10<00:00, 3.27it/s]
Begin to load 1 model
Moving model(s) has taken 0.65 seconds
Total progress: 100%|██████████████████████████████████████████████████████████████████| 35/35 [00:12<00:00, 2.89it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 35/35 [00:12<00:00, 3.27it/s]
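A quick way to read the shape errors above: each "invalid for input of size N" means a tensor with N elements was copied into a differently sized slot, and dividing N by one of the dimensions recovers the shape the offending tensor actually had. A small sketch (the interpretation at the end is my own reading of the log, not confirmed by the devs):

```python
def actual_width(n_elements: int, rows: int) -> int:
    """Given the element count from the error message and a row count,
    return the column width the offending tensor actually had."""
    assert n_elements % rows == 0, "row count does not divide the tensor size"
    return n_elements // rows

# Figures taken from the log above:
print(actual_width(491520, 640))    # 768 where [640, 2048] was expected
print(actual_width(983040, 1280))   # 768 where [1280, 2048] was expected
print(actual_width(1638400, 1280))  # 1280, i.e. a 1280x1280 tensor where [640, 640] was expected
```

Since 2048 is the SDXL cross-attention context width, 768 is the SD1.5 one, and 1280x1280 matches the refiner's wider blocks, one plausible reading is that weights prepared for a different model are being patched into the base SDXL UNet.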
OK, here is something weird: I can get LoRAs working if, when I start Forge, my first generation is 512x512 with LoRAs. After that I can move up to 1024x1024 generations. Here is the CMD dump:
HEAD is now at d2af6d1b Contribution Guideline
M webui-user.bat
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: f0.0.10-latest-95-gd2af6d1b
Commit hash: d2af6d1b4427928013b5594efb9e0f08ad3bfd16
CUDA 12.1
Launching Web UI with arguments: --theme dark --ckpt-dir X:\SDXL_A111\stable-diffusion-webui\models\Stable-diffusion --lora-dir X:\SDXL_A111\stable-diffusion-webui\models\Lora
Total VRAM 12282 MB, total RAM 65452 MB
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4070 : native
VAE dtype: torch.bfloat16
Using pytorch cross attention
ControlNet preprocessor location: X:\webui_forge\webui_forge_cu121_torch21\webui\models\ControlNetPreprocessor
21:38:32 - ReActor - STATUS - Running v0.7.0-a1 on Device: CUDA
Loading weights [b6c0f1430a] from X:\SDXL_A111\stable-diffusion-webui\models\Stable-diffusion\sdxlSevenof9NSFW_sdxlSevenof94thNSFW.safetensors
2024-02-19 21:38:32,310 - ControlNet - INFO - ControlNet UI callback registered.
Running on local URL: http://127.0.0.1:7860
To create a public link, set share=True in launch().
Startup time: 13.1s (prepare environment: 4.5s, import torch: 3.2s, import gradio: 0.9s, setup paths: 0.7s, other imports: 0.5s, setup gfpgan: 0.1s, load scripts: 2.0s, create ui: 0.6s, gradio launch: 0.4s).
model_type EPS
UNet ADM Dimension 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale'}
left over keys: dict_keys(['conditioner.embedders.0.logit_scale', 'conditioner.embedders.0.text_projection', 'conditioner.embedders.1.model.transformer.text_model.embeddings.position_ids'])
To load target model SDXLClipModel
Begin to load 1 model
Moving model(s) has taken 0.62 seconds
Model loaded in 8.4s (load weights from disk: 0.8s, forge instantiate config: 1.3s, forge load real models: 4.4s, load VAE: 0.8s, calculate empty prompt: 1.0s).
To load target model SDXL
Begin to load 1 model
Moving model(s) has taken 0.90 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:02<00:00, 9.68it/s]
To load target model AutoencoderKL█████████████████████████████████████████████████▋ | 19/20 [00:01<00:00, 11.98it/s]
Begin to load 1 model
Moving model(s) has taken 0.30 seconds
Total progress: 100%|██████████████████████████████████████████████████████████████████| 20/20 [00:02<00:00, 9.25it/s]
To load target model SDXLClipModel█████████████████████████████████████████████████████| 20/20 [00:02<00:00, 11.98it/s]
Begin to load 1 model
unload clone 2
Moving model(s) has taken 0.89 seconds
To load target model SDXL
Begin to load 1 model
unload clone 2
Moving model(s) has taken 2.11 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:01<00:00, 12.07it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 20/20 [00:02<00:00, 9.79it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 20/20 [00:02<00:00, 12.29it/s]
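For anyone who wants to script this 512x512 warm-up before the real 1024x1024 run, here is a hedged sketch against the /sdapi/v1/txt2img endpoint that A1111-style UIs expose when launched with --api. The prompt, LoRA name, weight, and port below are placeholders, not values from this thread:

```python
import json
import urllib.request

BASE_URL = "http://127.0.0.1:7860"  # default webui address; adjust if yours differs

def txt2img_payload(width: int, height: int, prompt: str) -> dict:
    """Build a minimal txt2img request body. The LoRA is activated the usual
    way, via an <lora:...> tag in the prompt (name and weight are placeholders)."""
    return {
        "prompt": f"{prompt} <lora:my_lora:0.8>",
        "width": width,
        "height": height,
        "steps": 20,
    }

def warmup_then_generate(prompt: str) -> None:
    # First a 512x512 run with the LoRA applied, then the real 1024x1024 run.
    for w, h in [(512, 512), (1024, 1024)]:
        body = json.dumps(txt2img_payload(w, h, prompt)).encode()
        req = urllib.request.Request(
            f"{BASE_URL}/sdapi/v1/txt2img",
            data=body,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            json.load(resp)  # the response carries base64 images under "images"

# warmup_then_generate("a photo of a cat")  # uncomment with the webui API enabled
```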
Going to try latest branch and see if this does the same.
And yes! Do a 512x512 first and it works correctly after that. Weird. Here is the CMD output:
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: f0.0.10-latest-95-gd2af6d1b
Commit hash: d2af6d1b4427928013b5594efb9e0f08ad3bfd16
CUDA 12.1
Launching Web UI with arguments: --theme dark --ckpt-dir X:\SDXL_A111\stable-diffusion-webui\models\Stable-diffusion --lora-dir X:\SDXL_A111\stable-diffusion-webui\models\Lora
Total VRAM 12282 MB, total RAM 65452 MB
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4070 : native
VAE dtype: torch.bfloat16
Using pytorch cross attention
ControlNet preprocessor location: X:\webui_forge\webui_forge_cu121_torch21\webui\models\ControlNetPreprocessor
21:48:54 - ReActor - STATUS - Running v0.7.0-a1 on Device: CUDA
Loading weights [b6c0f1430a] from X:\SDXL_A111\stable-diffusion-webui\models\Stable-diffusion\sdxlSevenof9NSFW_sdxlSevenof94thNSFW.safetensors
2024-02-19 21:48:55,054 - ControlNet - INFO - ControlNet UI callback registered.
Running on local URL: http://127.0.0.1:7860
To create a public link, set share=True in launch().
Startup time: 13.1s (prepare environment: 4.5s, import torch: 3.2s, import gradio: 0.9s, setup paths: 0.7s, other imports: 0.5s, setup gfpgan: 0.1s, load scripts: 2.1s, create ui: 0.7s, gradio launch: 0.3s).
model_type EPS
UNet ADM Dimension 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
left over keys: dict_keys(['conditioner.embedders.0.logit_scale', 'conditioner.embedders.0.text_projection', 'conditioner.embedders.1.model.transformer.text_model.embeddings.position_ids'])
To load target model SDXLClipModel
Begin to load 1 model
Moving model(s) has taken 0.88 seconds
Model loaded in 8.7s (load weights from disk: 0.8s, forge instantiate config: 1.3s, forge load real models: 4.4s, load VAE: 0.8s, calculate empty prompt: 1.3s).
Loading weights [aeb7e9e689] from X:\SDXL_A111\stable-diffusion-webui\models\Stable-diffusion\juggernautXL_v8Rundiffusion.safetensors
model_type EPS
UNet ADM Dimension 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
To load target model SDXLClipModel
Begin to load 1 model
Moving model(s) has taken 0.60 seconds
Model loaded in 6.8s (unload existing model: 0.7s, forge instantiate config: 1.1s, forge load real models: 4.2s, calculate empty prompt: 0.7s).
To load target model SDXLClipModel
Begin to load 1 model
unload clone 0
Moving model(s) has taken 0.88 seconds
To load target model SDXL
Begin to load 1 model
Moving model(s) has taken 0.96 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:02<00:00, 9.67it/s]
To load target model AutoencoderKL█████████████████████████████████████████████████▋ | 19/20 [00:01<00:00, 11.76it/s]
Begin to load 1 model
Moving model(s) has taken 0.25 seconds
Total progress: 100%|██████████████████████████████████████████████████████████████████| 20/20 [00:02<00:00, 9.30it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:06<00:00, 3.32it/s]
Moving model(s) skipped. Freeing memory has taken 0.56 seconds█████████████████████████| 20/20 [00:05<00:00, 3.34it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 20/20 [00:07<00:00, 2.81it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 20/20 [00:07<00:00, 3.34it/s]
Hey, I get the same problem as you. I have already reinstalled repeatedly and still get the "lora key not loaded" and diffusion errors. Maybe I will try your method of generating a 512x512 image first when launching Forge.
You can fix the error by checking out the commit I shared above. However, if you don't add it to the webui-user.bat file, you run the risk of updating Forge if you launch it with a different bat file.
@echo off
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=
@REM Uncomment following code to reference an existing A1111 checkout.
@REM set A1111_HOME=Your A1111 checkout dir
@REM
@REM set VENV_DIR=%A1111_HOME%/venv
@REM set COMMANDLINE_ARGS=%COMMANDLINE_ARGS% ^
@REM --ckpt-dir %A1111_HOME%/models/Stable-diffusion ^
@REM --hypernetwork-dir %A1111_HOME%/models/hypernetworks ^
@REM --embeddings-dir %A1111_HOME%/embeddings ^
@REM --lora-dir %A1111_HOME%/models/Lora
git checkout d2af6d1b4427928013b5594efb9e0f08ad3bfd16
call webui.bat
I still get the error even when I run a 512x512 generation first; "lora key not loaded" still appears.
Did you run the first 512x512 with a LoRA? I think I left that part out.
https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail Virus-free.www.avast.com https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail <#DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2>
On Tue, Feb 20, 2024 at 7:23 AM Mikayori @.***> wrote:
Going to try latest branch and see if this does the same.
And yes! do a 512x512 first and it works correctly after that.. wierd. Here is the CMD stuff... Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)] Version: f0.0.10-latest-95-gd2af6d1b Commit hash: d2af6d1 https://github.com/lllyasviel/stable-diffusion-webui-forge/commit/d2af6d1b4427928013b5594efb9e0f08ad3bfd16 CUDA 12.1 Launching Web UI with arguments: --theme dark --ckpt-dir X:\SDXL_A111\stable-diffusion-webui\models\Stable-diffusion --lora-dir X:\SDXL_A111\stable-diffusion-webui\models\Lora Total VRAM 12282 MB, total RAM 65452 MB Set vram state to: NORMAL_VRAM Device: cuda:0 NVIDIA GeForce RTX 4070 : native VAE dtype: torch.bfloat16 Using pytorch cross attention ControlNet preprocessor location: X:\webui_forge\webui_forge_cu121_torch21\webui\models\ControlNetPreprocessor 21:48:54 - ReActor - STATUS - Running v0.7.0-a1 on Device: CUDA Loading weights [b6c0f1430a] from X:\SDXL_A111\stable-diffusion-webui\models\Stable-diffusion\sdxlSevenof9NSFW_sdxlSevenof94thNSFW.safetensors 2024-02-19 21:48:55,054 - ControlNet - INFO - ControlNet UI callback registered. Running on local URL: http://127.0.0.1:7860 To create a public link, set share=True in launch(). Startup time: 13.1s (prepare environment: 4.5s, import torch: 3.2s, import gradio: 0.9s, setup paths: 0.7s, other imports: 0.5s, setup gfpgan: 0.1s, load scripts: 2.1s, create ui: 0.7s, gradio launch: 0.3s). model_type EPS UNet ADM Dimension 2816 Using pytorch attention in VAE Working with z of shape (1, 4, 32, 32) = 4096 dimensions. 
Using pytorch attention in VAE extra {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'} left over keys: dict_keys(['conditioner.embedders.0.logit_scale', 'conditioner.embedders.0.text_projection', 'conditioner.embedders.1.model.transformer.text_model.embeddings.position_ids']) To load target model SDXLClipModel Begin to load 1 model Moving model(s) has taken 0.88 seconds Model loaded in 8.7s (load weights from disk: 0.8s, forge instantiate config: 1.3s, forge load real models: 4.4s, load VAE: 0.8s, calculate empty prompt: 1.3s). Loading weights [aeb7e9e689] from X:\SDXL_A111\stable-diffusion-webui\models\Stable-diffusion\juggernautXL_v8Rundiffusion.safetensors model_type EPS UNet ADM Dimension 2816 Using pytorch attention in VAE Working with z of shape (1, 4, 32, 32) = 4096 dimensions. Using pytorch attention in VAE extra {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'} To load target model SDXLClipModel Begin to load 1 model Moving model(s) has taken 0.60 seconds Model loaded in 6.8s (unload existing model: 0.7s, forge instantiate config: 1.1s, forge load real models: 4.2s, calculate empty prompt: 0.7s). 
To load target model SDXLClipModel Begin to load 1 model unload clone 0 Moving model(s) has taken 0.88 seconds To load target model SDXL Begin to load 1 model Moving model(s) has taken 0.96 seconds 100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:02<00:00, 9.67it/s] To load target model AutoencoderKL█████████████████████████████████████████████████▋ | 19/20 [00:01<00:00, 11.76it/s] Begin to load 1 model Moving model(s) has taken 0.25 seconds Total progress: 100%|██████████████████████████████████████████████████████████████████| 20/20 [00:02<00:00, 9.30it/s] 100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:06<00:00, 3.32it/s] Moving model(s) skipped. Freeing memory has taken 0.56 seconds█████████████████████████| 20/20 [00:05<00:00, 3.34it/s] Total progress: 100%|██████████████████████████████████████████████████████████████████| 20/20 [00:07<00:00, 2.81it/s] Total progress: 100%|██████████████████████████████████████████████████████████████████| 20/20 [00:07<00:00, 3.34it/s]
Hey, I get the same problem as you. I have already reinstalled repeatedly and still get the "lora key not loaded" error and broken diffusion output. Maybe I will try your method of generating a 512x512 image on the first launch of Forge.
I still get the error even when I run a 512x512 first generation; "lora key not loaded" still appears.
Did you run the first 512x512 with a lora? I think I left that part out.
Yeah, I already used a lora on the first 512x512 generation, but the error still pops up in cmd.
What problem are you having?
Never mind. Downloading the older version seems to have taken a few restarts, because it works now. I restarted 4 times, the first three this morning. I was going to screenshot the whole process, but the first test worked, and then the later tests started working too. I am no longer getting the "lora key not loaded" message.
Glad you got it working. If you don't mind sharing the error log, it would help us troubleshoot the issue.
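For anyone else following the rollback advice in this thread, the general pattern is a detached-HEAD checkout of the known-good commit, followed later by a return to the branch tip once a fix lands. A minimal sketch, assuming the Forge install lives at `/path/to/stable-diffusion-webui-forge` (substitute your own path; on Windows, run this from the base folder instead of using `cd`):

```shell
# Sketch only: the path below is an assumed install location.
cd /path/to/stable-diffusion-webui-forge

# Detached-HEAD checkout of the commit where loras + refiner still worked:
git checkout d2af6d1b4427928013b5594efb9e0f08ad3bfd16

# ...launch webui.bat and work as usual...

# Once a fix lands upstream, return to the branch tip and update:
git checkout main
git pull
```

Note that while the checkout is pinned, `git pull` will refuse to fast-forward; you have to return to the branch first, which is why the last two lines matter.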
lora key not loaded
How do I add it to the ComfyUI launcher (run_nvidia_gpu.bat)? It only has two lines:
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build
pause
Do I add it before or after those lines, or not at all?
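For what it's worth, the usual pattern is to put the checkout line before the launch line. Two caveats, though: this only works if the ComfyUI folder is a git clone (the portable zip build is not one), and the commit hash discussed in this thread belongs to the Forge repository, not ComfyUI, so you would need a known-good ComfyUI commit of your own — shown below as a placeholder. A hedged, untested sketch of run_nvidia_gpu.bat:

```bat
@echo off
rem Sketch only. <known-good-commit> is a placeholder for a ComfyUI commit
rem hash you have verified works; this only applies to a git-cloned ComfyUI.
git -C ComfyUI checkout <known-good-commit>
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build
pause
```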
Checklist
What happened?
Lora models no longer work properly with a refiner model on the newest update. Yesterday they worked fine with the refiner option; I even used the same prompt and settings, as I am currently experimenting with workflows.
Now I get a "Lora key not loaded" message, and the final image does not have the lora applied.
The error does not appear until the refiner model is loaded and applied to the process. The issue exists in both img2img and txt2img.
The models being used are:
Steps to reproduce the problem
What should have happened?
The lora should also load when a refiner model is used.
What browsers do you use to access the UI?
Brave
Sysinfo
sysinfo-2024-02-11-10-48.json
Console logs
https://pastebin.com/ju8H0T1n
Additional information