chlinfeng1997 opened 6 months ago
Hi, you can combine the UNet you trained in the first stage with the motion-module ckpt from AnimateDiff, like this:
import torch
def merge_ckpts(ckpt1_path, ckpt2_path, output_path): ckpt1 = torch.load(ckpt1_path, map_location="cpu") ckpt2 = torch.load(ckpt2_path, map_location="cpu") merged_state_dict = {ckpt1, ckpt2} torch.save(merged_state_dict, output_path)
trained_unet_path = './pretrained_models/unet_stage_1.ckpt' motion_module_v1_path = './pretrained_models/AnimateDiff/mm_sd_v15.ckpt' merged_ckpt_path = "./pretrained_models/merge_trained_unet_motion.ckpt"
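One caveat with a plain dict merge like `{**ckpt1, **ckpt2}` is that any key present in both checkpoints is silently overwritten by the second one. A minimal sketch of a sanity check (the helper name `find_overlapping_keys` is hypothetical, not from this thread) that you could run on the two loaded state dicts before merging:

```python
def find_overlapping_keys(sd1, sd2):
    """Return the keys present in both state dicts, sorted for readability."""
    return sorted(set(sd1) & set(sd2))

# Toy state dicts for illustration; real ones come from
# torch.load(path, map_location="cpu").
unet_sd = {"conv_in.weight": 0, "conv_in.bias": 1}
motion_sd = {"motion_modules.0.weight": 2, "conv_in.bias": 3}
print(find_overlapping_keys(unet_sd, motion_sd))  # -> ['conv_in.bias']
```

If the overlap is non-empty, decide explicitly which checkpoint should win for those keys rather than relying on merge order.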
Why don't you use it? Is the performance not good?! @guoqincode
> Why don't you use it? Is the performance not good?! @guoqincode
I used it.
> > Why don't you use it? Is the performance not good?! @guoqincode
>
> I used it.
Have you done comparison experiments? Is training the temporal module from scratch much different from fine-tuning the pretrained weights?
Thanks for open-sourcing this! The original paper mentions using the temporal layer weights of AnimateDiff for initialization, so why does this not appear in the code? Looking forward to your reply!