elias-gaeros / resize_lora


Could not resize my SDXL LoCON #3

Open · DarkAlchy opened this issue 4 months ago

DarkAlchy commented 4 months ago

```
D:\resize_lora>python resize_lora.py F:/stable-diffusion-webui/models/Stable-diffusion/sd_xl_base_1.0.safetensors F:/stable-diffusion-webui/models/Lora/123_XL_V1.safetensors -o .\ -v -r fro_ckpt=1,thr=-2.0
INFO:root:Processing LoRA model: F:/stable-diffusion-webui/models/Lora/Claymation_XL_V1.safetensors
INFO:root:No LoRA layer for 'model.diffusion_model.input_blocks.0.0.weight' (320, 4, 3, 3), expected LoRA key: 'lora_unet_input_blocks_0_0'
INFO:root:No LoRA layer for 'model.diffusion_model.input_blocks.1.0.emb_layers.1.weight' (320, 1280), expected LoRA key: 'lora_unet_input_blocks_1_0_emb_layers_1'
INFO:root:No LoRA layer for 'model.diffusion_model.input_blocks.1.0.in_layers.2.weight' (320, 320, 3, 3), expected LoRA key: 'lora_unet_input_blocks_1_0_in_layers_2'
INFO:root:No LoRA layer for 'model.diffusion_model.input_blocks.1.0.out_layers.3.weight' (320, 320, 3, 3), expected LoRA key: 'lora_unet_input_blocks_1_0_out_layers_3'
INFO:root:No LoRA layer for 'model.diffusion_model.input_blocks.2.0.emb_layers.1.weight' (320, 1280), expected LoRA key: 'lora_unet_input_blocks_2_0_emb_layers_1'
INFO:root:No LoRA layer for 'model.diffusion_model.input_blocks.2.0.in_layers.2.weight' (320, 320, 3, 3), expected LoRA key: 'lora_unet_input_blocks_2_0_in_layers_2'
INFO:root:No LoRA layer for 'model.diffusion_model.input_blocks.2.0.out_layers.3.weight' (320, 320, 3, 3), expected LoRA key: 'lora_unet_input_blocks_2_0_out_layers_3'
INFO:root:No LoRA layer for 'model.diffusion_model.input_blocks.3.0.op.weight' (320, 320, 3, 3), expected LoRA key: 'lora_unet_input_blocks_3_0_op'
INFO:root:No LoRA layer for 'model.diffusion_model.label_emb.0.0.weight' (1280, 2816), expected LoRA key: 'lora_unet_label_emb_0_0'
INFO:root:No LoRA layer for 'model.diffusion_model.label_emb.0.2.weight' (1280, 1280), expected LoRA key: 'lora_unet_label_emb_0_2'
INFO:root:No LoRA layer for 'model.diffusion_model.out.2.weight' (4, 320, 3, 3), expected LoRA key: 'lora_unet_out_2'
INFO:root:No LoRA layer for 'model.diffusion_model.output_blocks.6.0.emb_layers.1.weight' (320, 1280), expected LoRA key: 'lora_unet_output_blocks_6_0_emb_layers_1'
INFO:root:No LoRA layer for 'model.diffusion_model.output_blocks.6.0.in_layers.2.weight' (320, 960, 3, 3), expected LoRA key: 'lora_unet_output_blocks_6_0_in_layers_2'
INFO:root:No LoRA layer for 'model.diffusion_model.output_blocks.6.0.out_layers.3.weight' (320, 320, 3, 3), expected LoRA key: 'lora_unet_output_blocks_6_0_out_layers_3'
INFO:root:No LoRA layer for 'model.diffusion_model.output_blocks.6.0.skip_connection.weight' (320, 960, 1, 1), expected LoRA key: 'lora_unet_output_blocks_6_0_skip_connection'
INFO:root:No LoRA layer for 'model.diffusion_model.output_blocks.7.0.emb_layers.1.weight' (320, 1280), expected LoRA key: 'lora_unet_output_blocks_7_0_emb_layers_1'
INFO:root:No LoRA layer for 'model.diffusion_model.output_blocks.7.0.in_layers.2.weight' (320, 640, 3, 3), expected LoRA key: 'lora_unet_output_blocks_7_0_in_layers_2'
INFO:root:No LoRA layer for 'model.diffusion_model.output_blocks.7.0.out_layers.3.weight' (320, 320, 3, 3), expected LoRA key: 'lora_unet_output_blocks_7_0_out_layers_3'
INFO:root:No LoRA layer for 'model.diffusion_model.output_blocks.7.0.skip_connection.weight' (320, 640, 1, 1), expected LoRA key: 'lora_unet_output_blocks_7_0_skip_connection'
INFO:root:No LoRA layer for 'model.diffusion_model.output_blocks.8.0.emb_layers.1.weight' (320, 1280), expected LoRA key: 'lora_unet_output_blocks_8_0_emb_layers_1'
INFO:root:No LoRA layer for 'model.diffusion_model.output_blocks.8.0.in_layers.2.weight' (320, 640, 3, 3), expected LoRA key: 'lora_unet_output_blocks_8_0_in_layers_2'
INFO:root:No LoRA layer for 'model.diffusion_model.output_blocks.8.0.out_layers.3.weight' (320, 320, 3, 3), expected LoRA key: 'lora_unet_output_blocks_8_0_out_layers_3'
INFO:root:No LoRA layer for 'model.diffusion_model.output_blocks.8.0.skip_connection.weight' (320, 640, 1, 1), expected LoRA key: 'lora_unet_output_blocks_8_0_skip_connection'
INFO:root:No LoRA layer for 'model.diffusion_model.time_embed.0.weight' (1280, 320), expected LoRA key: 'lora_unet_time_embed_0'
INFO:root:No LoRA layer for 'model.diffusion_model.time_embed.2.weight' (1280, 1280), expected LoRA key: 'lora_unet_time_embed_2'
Traceback (most recent call last):
  File "D:\resize_lora\resize_lora.py", line 314, in <module>
    main()
  File "D:\resize_lora\resize_lora.py", line 301, in main
    paired = PairedLoraModel(lora_model_path, checkpoint)
  File "D:\resize_lora\loralib\__init__.py", line 120, in __init__
    raise ValueError(f"Target layer not found for LoRA {lora_layer_keys}")
ValueError: Target layer not found for LoRA lora_unet_input_blocks_1_0_emb_layers_1.diff
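```

The failing key ends in `.diff` rather than the usual `.lora_down.weight` / `.lora_up.weight` pair, which suggests the file contains LyCORIS full weight-difference modules that `PairedLoraModel` could not map to a checkpoint layer. A quick way to see which key suffixes the file actually contains is a minimal sketch with the `safetensors` library (the path is the one from the command above; substitute your own):

```python
from safetensors import safe_open

# Path from the report above; adjust to your own LoRA file.
path = "F:/stable-diffusion-webui/models/Lora/123_XL_V1.safetensors"

suffixes = {}
with safe_open(path, framework="pt") as f:
    for key in f.keys():
        # Keys look like "<lora_layer_name>.<suffix>", e.g.
        # "lora_unet_input_blocks_1_0_emb_layers_1.diff"
        _, _, suffix = key.partition(".")
        suffixes[suffix] = suffixes.get(suffix, 0) + 1

for suffix, count in sorted(suffixes.items()):
    print(f"{suffix}: {count}")
```

If suffixes other than `lora_down.weight`, `lora_up.weight`, and `alpha` show up, that narrows down which module type trips the key matching.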

elias-gaeros commented 4 months ago

Thanks for reporting this!

Could you tell me more about how this LoCon was trained? Is the safetensors file available online?

DarkAlchy commented 3 months ago

WOW, I am not sure what is wrong with GitHub: some of my tickets, including this one, are not emailing me to tell me I had a response. It is not online; it is one I had just trained using Kohya.
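For context, a LoCon trained with Kohya's sd-scripts goes through the LyCORIS network module, so the module types saved into the file depend on the `network_args` passed. An illustrative invocation (an assumed example, not the exact command used here) looks something like:

```
accelerate launch sdxl_train_network.py ^
  --pretrained_model_name_or_path sd_xl_base_1.0.safetensors ^
  --network_module lycoris.kohya ^
  --network_dim 16 --network_alpha 8 ^
  --network_args "algo=locon" "conv_dim=8" "conv_alpha=4"
```

Depending on the LyCORIS version and the `algo`/`preset` arguments, some layers can be saved as full weight-difference modules, whose `.diff` keys would match the key in the traceback above.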