liemthanh-playgroundvina closed this issue 4 months ago
I'm using the script at ./scripts/inference.py
In fact, we have fine-tuned the SD1.5 weights, so directly using the SD1.5 LCM-LoRA weights might not be suitable. If you still want to use them, you may consider fine-tuning them first.
Thanks for your assistance! Also, do you have any other methods or suggestions for increasing the inference speed? I'm using 15 steps, which takes 5 minutes for a 10-second 512x512 video, and that's too long to be usable.
Is the result good at 15 steps? What machine did you use for generation?
I'm using a 3090 Ti with CUDA 11.8 and Python 3.9. The result is quite good, with a little noise in the background.
We have received a lot of issues about inference time, and we plan to optimize it, but not in the near future. Thanks!
I want to add the LoRA https://huggingface.co/latent-consistency/lcm-lora-sdv1-5 to the pipeline. How can I do it?