magic-research / magic-animate

[CVPR 2024] MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model
https://showlab.github.io/magicanimate/
BSD 3-Clause "New" or "Revised" License

LCM sampler #37

Open Priestru opened 11 months ago

Priestru commented 11 months ago

https://github.com/magic-research/magic-animate/assets/108554892/590e4986-1859-4008-be58-9e82e97bf70f

This is 8 steps, guidance 4, with an LCM-trained checkpoint. It looks almost like a usual 25-step generation. It should work a bit better if we use the LCM sampler, I guess. So how do we change the sampler?
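For reference, a minimal sketch of what the sampler swap looks like in plain diffusers. `LCMScheduler` and `from_config` are real diffusers APIs; wiring the same swap into MagicAnimate's own animation pipeline is not shown anywhere in this thread, so a stock `StableDiffusionPipeline` stands in:

```python
import torch
from diffusers import StableDiffusionPipeline, LCMScheduler

# Load an SD 1.5 checkpoint (substitute your LCM-trained one).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Swap the default scheduler for the LCM sampler, reusing its config.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

# LCM-style settings: few steps, low guidance (8 steps / guidance 4 above).
image = pipe("a portrait photo", num_inference_steps=8, guidance_scale=4).images[0]
```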

NolenBrolen commented 11 months ago

Awesome, which LCM trained checkpoint did you use if you don't mind sharing?

Priestru commented 11 months ago

> Awesome, which LCM trained checkpoint did you use if you don't mind sharing?

On civit.ai you can filter SD 1.5 LCM models and pick one. It's currently down, so I can't give you a link.

CyberTimon commented 11 months ago

How did you change the SD 1.5 model? Do I have to rebuild the entire pipeline around it? Please tell me how you did it. Thanks
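One plausible route, sketched here as an assumption rather than an answer confirmed by anyone in the thread: convert the single-file checkpoint into the diffusers folder layout (unet/, text_encoder/, tokenizer/, scheduler/, ...) and point the model path at the new folder. `from_single_file` and `save_pretrained` are real diffusers APIs; both paths are placeholders:

```python
from diffusers import StableDiffusionPipeline

# Convert a single-file checkpoint (e.g. downloaded from civit.ai) into
# the diffusers directory layout that folder-based loaders expect.
pipe = StableDiffusionPipeline.from_single_file("my_lcm_model.safetensors")
pipe.save_pretrained("pretrained_models/my_lcm_sd15")  # point the config here
```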

NolenBrolen commented 11 months ago

> On civit.ai you can filter SD 1.5 LCM models and pick one. It's currently down, so I can't give you a link.

I found a dreamshaper_v7+LCM model on Hugging Face, and also tried some other LCM 1.5 checkpoints, but when I run it with just 4-6 steps it doesn't look like the LCM is doing much, and the output looks terrible. Did you also update any of the files in the unet/text_encoder/tokenizer/scheduler folders?
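If only the scheduler folder needs updating, one hedged sketch (assuming a diffusers-format model directory; the path is a placeholder) is to re-save that subfolder so the loader instantiates `LCMScheduler` instead of the checkpoint's default scheduler:

```python
from diffusers import LCMScheduler

# Load whatever config lives in scheduler/ as an LCMScheduler (unknown
# keys are ignored), then overwrite scheduler_config.json in place.
model_dir = "pretrained_models/my_lcm_sd15"
scheduler = LCMScheduler.from_pretrained(model_dir, subfolder="scheduler")
scheduler.save_pretrained(f"{model_dir}/scheduler")
```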

qijunf commented 11 months ago

How do you use LCM? Please share.

Don-Chad commented 11 months ago

@Priestru it would be amazing if you could share how you used the LCM model! Just replacing the standard checkpoint does not work.
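A hedged alternative to swapping the whole checkpoint, untested against MagicAnimate itself: keep the base SD 1.5 weights and attach the published LCM-LoRA (`latent-consistency/lcm-lora-sdv1-5` on Hugging Face), then switch the scheduler. Both calls are real diffusers APIs:

```python
import torch
from diffusers import StableDiffusionPipeline, LCMScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Attach the LCM-LoRA instead of replacing the checkpoint, then switch
# to the LCM sampler.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")

# LCM-LoRA is typically run with very few steps and guidance near 1.
image = pipe("a portrait photo", num_inference_steps=4, guidance_scale=1.0).images[0]
```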