sunhengzhe opened 7 months ago
I didn't use the `train_dreambooth_lora_sdxl` script provided by this project for training. Instead, I chose two models that I had previously trained using Kohya sd-scripts. When I ran `train_dreambooth_ziplora_sdxl`, it showed `KeyError: 'unet.unet.down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_q.lora.up.weight'`
Replace the prefix `unet.unet` with `unet.` in the `merge_lora_weights` function in `ziplora_pytorch/utils.py`, line 42.
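The suggested prefix fix could look something like this: a minimal sketch, where `fix_prefix` is a hypothetical helper and `lora_state_dict` stands for the loaded LoRA tensors:

```python
def fix_prefix(lora_state_dict):
    """Collapse the doubled "unet.unet." prefix down to "unet." (sketch)."""
    fixed = {}
    for key, value in lora_state_dict.items():
        if key.startswith("unet.unet."):
            # Drop one of the two "unet." prefixes so lookups succeed.
            key = "unet." + key[len("unet.unet."):]
        fixed[key] = value
    return fixed
```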
@wuliebucha I don't think it's that easy, the kohya key format is completely different, using underscores instead of dots.
@wuliebucha @Xynonners tried both, and it seems neither worked 😂. I printed `tensors.keys()` in this function and they seem to be in a format like `lora_unet_input_blocks_7_1_transformer_blocks_0_attn1_to_k.lora_down.weight`. It might require a more complex mapping.
yeah, there are no SDXL LoRA interops between diffusers and kohya, I've searched pretty much everywhere.
The format is completely different, down to the numbers in the keys.
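For the specific key pattern posted above, a partial mapping can be sketched at the string level. This is only a sketch under stated assumptions: it handles just the `input_blocks`/`transformer_blocks` attention projections (`to_q`/`to_k`/`to_v`), the index arithmetic `(i - 1) // 3` / `(i - 1) % 3` is assumed from the usual SD-to-diffusers block layout, and mid/output blocks, `to_out`, and the text encoders are not covered:

```python
import re

# Pattern for one family of Kohya SDXL unet LoRA keys, e.g.
# lora_unet_input_blocks_7_1_transformer_blocks_0_attn1_to_k.lora_down.weight
KOHYA_RE = re.compile(
    r"lora_unet_input_blocks_(\d+)_1_transformer_blocks_(\d+)"
    r"_attn(\d+)_to_([qkv])\.lora_(down|up)\.weight"
)

def kohya_to_diffusers_key(key):
    """Sketch: map one Kohya-style key to the diffusers-style key this
    repo expects. Returns None for keys this sketch doesn't handle."""
    m = KOHYA_RE.fullmatch(key)
    if m is None:
        return None
    in_idx, tb_idx, attn_num, proj, direction = m.groups()
    in_idx = int(in_idx)
    block = (in_idx - 1) // 3   # down_blocks index (assumed layout)
    attn = (in_idx - 1) % 3     # attentions index within the block
    return (
        f"unet.down_blocks.{block}.attentions.{attn}"
        f".transformer_blocks.{tb_idx}.attn{attn_num}"
        f".to_{proj}.lora.{direction}.weight"
    )
```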
```python
merged_lora_weights_dict = merge_lora_weights(lora_weights, attn_name, prefix='unet.')
merged_lora_weights_dict_2 = merge_lora_weights(lora_weights_2, attn_name, prefix='unet.')
```
@xiaohaipeng Have you tried it on your side? I tried this but it still shows a KeyError.
I tried reformatting the keys with reference to this sd-scripts code, and although I don't know the implementation details, it does work 😂. The training stopped at 100%: the merge requires too much memory for my machine (A10, 24G) to finish.
@sunhengzhe have you tried merging 2 models trained with `train_dreambooth_lora_sdxl`? Is it also running out of memory? Do you know what rank the LoRAs you're trying to merge were trained with? Maybe that's affecting the amount of memory needed for the merge.
@pedropaf Sorry for the late reply. I changed `unet = unet.to(torch.float32)` to `unet = unet.to(torch.float16)` in `train_dreambooth_ziplora_sdxl.py` and it can be merged, but I didn't reproduce the paper's results. I will keep testing.
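The reason this one-line change helps can be sketched as follows: casting a model's parameters to float16 halves their memory footprint relative to float32. A small `nn.Linear` stands in here for the real UNet, which is an assumption for illustration only:

```python
import torch

# Stand-in for the UNet loaded in the training script.
model = torch.nn.Linear(4, 4)

# Bytes occupied by parameters in float32.
fp32_bytes = sum(p.numel() * p.element_size() for p in model.parameters())

# The one-line change discussed above: cast to float16.
model = model.to(torch.float16)
fp16_bytes = sum(p.numel() * p.element_size() for p in model.parameters())
# fp16_bytes is half of fp32_bytes.
```

The trade-off is reduced numerical precision during the merge, which may be one reason the paper's results were harder to reproduce afterwards.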