G-U-N / AnimateLCM

[SIGGRAPH ASIA 2024 TCS] AnimateLCM: Computation-Efficient Personalized Style Video Generation without Personalized Video Data
https://animatelcm.github.io
MIT License

How to replicate the i2v results? #6

Closed · aijinkela closed this issue 9 months ago

aijinkela commented 9 months ago

I made modifications to the code at https://huggingface.co/spaces/wangfuyun/AnimateLCM based on the AnimateDiff i2v pipeline at https://github.com/talesofai/AnimateDiff/blob/04b2715b39d4a02334b08cb6ee3dfe79f0a6cd7c/animatediff/pipelines/pipeline_animation.py#L288, but I have not been able to reproduce the results shown on the project homepage. Is there a better way to implement this?
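
For reference, this is roughly the kind of change I mean: seed the first-frame latent from the input image and let the motion module animate from there. The sketch below is only illustrative (the function name and arguments are my own, not from either repo), assuming a diffusers-style VAE and scheduler:

```python
import torch
from PIL import Image
from torchvision import transforms


@torch.no_grad()
def prepare_i2v_latents(vae, scheduler, image: Image.Image, num_frames: int,
                        height: int, width: int, device="cuda", dtype=torch.float16):
    """Build (1, C, F, H/8, W/8) latents whose first frame is a noised encoding of `image`.

    Assumes `scheduler.set_timesteps(...)` has already been called.
    """
    # Preprocess the image to the VAE's expected [-1, 1] range.
    tfm = transforms.Compose([
        transforms.Resize((height, width)),
        transforms.ToTensor(),
        transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),
    ])
    pixels = tfm(image.convert("RGB")).unsqueeze(0).to(device=device, dtype=dtype)

    # Encode to latent space and apply the VAE scaling factor.
    image_latent = vae.encode(pixels).latent_dist.sample() * vae.config.scaling_factor

    # Pure Gaussian noise for every frame, scaled the way the scheduler expects.
    shape = (1, image_latent.shape[1], num_frames, height // 8, width // 8)
    noise = torch.randn(shape, device=device, dtype=dtype)
    latents = noise * scheduler.init_noise_sigma

    # Seed the first frame with the image latent noised to the initial timestep,
    # so denoising can recover (an animated version of) the input image.
    t0 = scheduler.timesteps[:1].to(device)
    latents[:, :, 0] = scheduler.add_noise(image_latent, noise[:, :, 0], t0)

    return latents
```

The returned tensor is then passed as the `latents` argument of the pipeline call instead of its random initialization.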

G-U-N commented 9 months ago

Nice try. It seems you are trying to achieve i2v in a zero-shot manner, while the i2v results shown on our page are obtained by training an additional image encoder without requiring a teacher model (see Sec. 4.3 in our paper). We haven't released the weights yet.
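
To give a rough idea of what "training an additional image encoder" means in general, here is a generic sketch of image-embedding conditioning with a frozen UNet and a plain diffusion loss. It is only an illustration, not the actual Sec. 4.3 architecture or training objective, and all module and function names are made up:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ImageConditionEncoder(nn.Module):
    """Maps a conditioning image to a sequence of context tokens for cross-attention."""

    def __init__(self, in_channels=3, hidden=256, context_dim=768):
        super().__init__()
        self.backbone = nn.Sequential(                 # tiny CNN stand-in for a real encoder
            nn.Conv2d(in_channels, hidden, 4, stride=4), nn.SiLU(),
            nn.Conv2d(hidden, hidden, 4, stride=4), nn.SiLU(),
            nn.AdaptiveAvgPool2d(4),                   # -> (B, hidden, 4, 4) = 16 spatial tokens
        )
        # context_dim must match the UNet's cross-attention dimension (e.g. 768 for SD 1.5).
        self.proj = nn.Linear(hidden, context_dim)

    def forward(self, image):                          # image: (B, 3, H, W) in [-1, 1]
        feat = self.backbone(image)                    # (B, hidden, 4, 4)
        tokens = feat.flatten(2).transpose(1, 2)       # (B, 16, hidden)
        return self.proj(tokens)                       # (B, 16, context_dim)


def training_step(unet, scheduler, encoder, latents, text_context, cond_image, optimizer):
    """One noise-prediction step where only `encoder` receives gradients.

    Assumes the UNet is frozen (requires_grad_(False)), `optimizer` only covers
    `encoder.parameters()`, and `unet` follows a diffusers-style interface.
    """
    noise = torch.randn_like(latents)
    t = torch.randint(0, scheduler.config.num_train_timesteps, (latents.shape[0],),
                      device=latents.device)
    noisy = scheduler.add_noise(latents, noise, t)

    # The frozen UNet sees text tokens plus the learned image tokens as one context sequence.
    image_tokens = encoder(cond_image)
    context = torch.cat([text_context, image_tokens], dim=1)

    pred = unet(noisy, t, encoder_hidden_states=context).sample
    loss = F.mse_loss(pred, noise)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```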

JACKHAHA363 commented 8 months ago

@aijinkela Hi, does your modification work with the original AnimateDiff?