G-U-N / AnimateLCM

AnimateLCM: Let's Accelerate the Video Generation within 4 Steps!
https://animatelcm.github.io

Distillation of Video Diffusion Based Model #8

Closed: m-muaz closed this issue 4 months ago

m-muaz commented 7 months ago

Hi, first of all, great work.

I have a question regarding the distillation of the video diffusion model. Did you use the DDIM sampler while distilling from the video-based diffusion model, and did training use skipped timesteps for the online consistency distillation model?

Also, how many optimization steps did training take before the distilled model produced good results?

Thanks for the help.

G-U-N commented 4 months ago

Yes, we basically use DDIM for simplicity. You may already observe good quality with only 5k steps, but continuing training will make it better.
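
For readers landing on this thread: below is a minimal sketch of what skipping-step consistency distillation with a deterministic DDIM solver typically looks like, assuming a discrete DDPM-style noise schedule in PyTorch. The names here (`teacher`, `student`, `target`, `ddim_step`, `cond`, the schedule values) are illustrative placeholders, not the actual AnimateLCM implementation.

```python
# Sketch of skipping-step consistency distillation with a DDIM solver.
# Assumptions: discrete DDPM schedule with T=1000 steps, skip interval k,
# and teacher/student/target passed in as callables that take (x_t, t, cond).
import torch
import torch.nn.functional as F

T = 1000                      # number of training diffusion timesteps (assumed)
k = 20                        # skip interval: one teacher DDIM step jumps t -> t - k
betas = torch.linspace(1e-4, 2e-2, T)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

def ddim_step(x_t, eps, t, t_prev):
    """One deterministic DDIM step (eta=0) from t to t_prev using predicted noise eps."""
    a_t = alphas_cumprod[t].view(-1, 1, 1, 1)
    a_prev = alphas_cumprod[t_prev].view(-1, 1, 1, 1)
    x0_pred = (x_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()
    return a_prev.sqrt() * x0_pred + (1 - a_prev).sqrt() * eps

def consistency_distillation_loss(student, target, teacher, x0, cond):
    b = x0.shape[0]
    # Sample a timestep pair (t, t - k) on the skipped grid.
    t = torch.randint(k, T, (b,))
    t_prev = t - k
    noise = torch.randn_like(x0)
    a_t = alphas_cumprod[t].view(-1, 1, 1, 1)
    x_t = a_t.sqrt() * x0 + (1 - a_t).sqrt() * noise

    # Teacher performs one skipping-step DDIM solve: x_t -> x_{t-k}.
    with torch.no_grad():
        eps_teacher = teacher(x_t, t, cond)
        x_prev = ddim_step(x_t, eps_teacher, t, t_prev)
        target_out = target(x_prev, t_prev, cond)   # EMA target model's prediction

    # Student at t must match the target's output at t - k (self-consistency).
    student_out = student(x_t, t, cond)
    return F.mse_loss(student_out, target_out)
```

For brevity this omits the consistency-model boundary parameterization (the c_skip/c_out coefficients that enforce f(x, epsilon) = x) and the classifier-free-guidance augmentation of the teacher step used in LCM-style training; a full implementation would include both.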