Open apolinario opened 8 months ago
Thank you for your interest! We have also noticed this work, and we are trying to apply LCM in our framework to speed up inference. We think it would be possible to merge LCM and VDM.
It would be great if we could speed up inference; it's too slow with the current approach. Generating a 25-second video takes 14 minutes on an A800 GPU.
Thanks for the model and pipelines, superb work!
Since the technique is based on SD1.5 + AnimateDiff, I was wondering whether it would be possible to create a much faster version based on the LCM-LoRA for SD1.5 plus AnimateDiff-Lightning (https://huggingface.co/ByteDance/AnimateDiff-Lightning).
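For reference, here is a minimal sketch of what the suggestion might look like with the `diffusers` AnimateDiff pipeline plus the SD1.5 LCM-LoRA (this is not this repo's own pipeline; the base checkpoint and adapter IDs are assumptions chosen for illustration, and AnimateDiff-Lightning's distilled motion-module weights could be swapped in for the motion adapter):

```python
# Hedged sketch: AnimateDiff + LCM-LoRA for SD1.5 in diffusers, not this repo's framework.
import torch
from diffusers import AnimateDiffPipeline, LCMScheduler, MotionAdapter
from diffusers.utils import export_to_gif

# Motion adapter for SD1.5 (AnimateDiff-Lightning ships its own distilled motion modules
# that could replace this adapter for an even faster variant).
adapter = MotionAdapter.from_pretrained(
    "guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16
)

# Any SD1.5-based checkpoint should work as the base model; this one is illustrative.
pipe = AnimateDiffPipeline.from_pretrained(
    "emilianJR/epiCRealism", motion_adapter=adapter, torch_dtype=torch.float16
).to("cuda")

# Swap in the LCM scheduler and load the LCM-LoRA weights trained for SD1.5.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")

# LCM makes very few denoising steps with low guidance viable.
output = pipe(
    prompt="a rocket launching into space, cinematic lighting",
    num_frames=16,
    num_inference_steps=4,
    guidance_scale=1.5,
)
export_to_gif(output.frames[0], "animation.gif")
```

The main speedup comes from dropping from ~25+ denoising steps to around 4 with the LCM scheduler; whether the quality holds up inside this project's longer-video pipeline is exactly the open question here.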