xingyueye opened this issue 9 months ago
I have re-trained LCM-LoRA with the script [train_lcm_distill_lora_sd_wds.py](https://github.com/luosiallen/latent-consistency-model/blob/main/LCM_Training_Script/consistency_distillation/train_lcm_distill_lora_sd_wds.py). However, the saved checkpoint cannot be used directly because its keys do not match. Is there a follow-up conversion step to bring the checkpoint into the same format as your released version?
@xingyueye I have faced the same problem. Have you found a solution?
Same issue. It's weird. Any solution?
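In case it helps while waiting for an official answer: this kind of mismatch is often just a key-naming difference between what the training script saves (peft-style names) and what the released checkpoint uses (diffusers-style names). Below is a minimal remapping sketch; the specific substitutions are assumptions for illustration, so diff the key sets of both checkpoints first and adjust the replacements to whatever mismatch you actually see.

```python
# Hypothetical key-remapping sketch -- inspect both checkpoints and
# adapt the substitutions below; they are assumptions, not the
# confirmed mapping for this repo's training script.
from safetensors.torch import load_file, save_file

trained = load_file("pytorch_lora_weights.safetensors")   # your re-trained LoRA
# released = load_file("lcm-lora-sdv1-5.safetensors")     # reference for key names
# print(sorted(trained.keys())[:5]); print(sorted(released.keys())[:5])

remapped = {}
for key, tensor in trained.items():
    new_key = key
    # Assumption: strip a peft wrapper prefix if present.
    new_key = new_key.replace("base_model.model.", "")
    # Assumption: peft names the low-rank factors lora_A/lora_B,
    # while diffusers-style checkpoints use lora.down/lora.up.
    new_key = new_key.replace("lora_A.weight", "lora.down.weight")
    new_key = new_key.replace("lora_B.weight", "lora.up.weight")
    remapped[new_key] = tensor

save_file(remapped, "pytorch_lora_weights_converted.safetensors")
```

Printing and comparing a few keys from each file (as in the commented lines) is the quickest way to confirm which renames are actually needed.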