RQ-Wu / LAMP

Official implementation of LAMP: Learn a Motion Pattern by Few-Shot Tuning a Text-to-Image Diffusion Model (few-shot text-to-video diffusion)
https://rq-wu.github.io/projects/LAMP/index.html

How to run inference for video editing? #7

Kyfafyd opened this issue 8 months ago

Kyfafyd commented 8 months ago

Hi, thanks for sharing the great work! I would like to learn how to try video editing after training.

RQ-Wu commented 8 months ago

Thanks for your interest! We will release the code for video editing in 3-5 days. Please stay tuned to our repo!

akk-123 commented 7 months ago

Can LAMP support LoRA? One LoRA for each motion?

RQ-Wu commented 7 months ago

> Hi, thanks for sharing the great work! I would like to learn how to try video editing after training.

Hi~ The video editing code is released! Sorry for the late update!

RQ-Wu commented 7 months ago

> Can LAMP support LoRA? One LoRA for each motion?

Maybe applying LoRA to a T2V model is more reasonable, since our LAMP is based on a T2I model. Anyway, you can give it a try, and feel free to share any novel findings!
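
For anyone exploring the "one LoRA adapter per motion" suggestion above, here is a minimal, hypothetical sketch using diffusers and peft. It is not LAMP's implementation; the checkpoint ID, adapter names, rank, and target modules are placeholders, and the same pattern would apply to a T2V UNet if you prefer to LoRA-tune a video model instead of a T2I one.

```python
# Hypothetical sketch: one LoRA adapter per motion pattern (not LAMP's code).
from diffusers import UNet2DConditionModel
from peft import LoraConfig

# Base denoising UNet from a T2I checkpoint (placeholder model ID).
unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)

# LoRA on the attention projections; rank/alpha are illustrative values.
lora_cfg = LoraConfig(
    r=16,
    lora_alpha=16,
    init_lora_weights="gaussian",
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],
)

# Attach one adapter per motion (placeholder motion names).
for motion in ["horse-run", "birds-fly"]:
    unet.add_adapter(lora_cfg, adapter_name=motion)

# Activate one adapter: train only its parameters on the few videos of that
# motion, then switch adapters at inference to select the learned motion.
unet.set_adapter("horse-run")
trainable = sum(p.numel() for p in unet.parameters() if p.requires_grad)
print(f"trainable LoRA parameters: {trainable}")
```

The point of the sketch is that the base weights stay frozen and each motion lives in its own small adapter, so switching motions only means switching the active adapter rather than retraining or reloading the whole model.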