G-U-N / AnimateLCM

AnimateLCM: Let's Accelerate the Video Generation within 4 Steps!
https://animatelcm.github.io

How to replicate the i2v results? #6

Closed: aijinkela closed this issue 7 months ago

aijinkela commented 7 months ago

I modified the code at https://huggingface.co/spaces/wangfuyun/AnimateLCM based on the AnimateDiff i2v process at https://github.com/talesofai/AnimateDiff/blob/04b2715b39d4a02334b08cb6ee3dfe79f0a6cd7c/animatediff/pipelines/pipeline_animation.py#L288, but I am unable to achieve the same results shown on the project homepage. Is there a better way to implement it?
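
For context, the approach being attempted here (following the linked AnimateDiff fork) is a zero-shot, img2img-style initialization: encode the conditioning image with the VAE, broadcast its latent across all frames, and add noise so sampling starts near the input image. A minimal sketch of that idea is below; the function, argument names, and the strength-to-timestep mapping are assumptions for illustration, not the exact code from the AnimateLCM Space or the linked pipeline.

```python
import torch

def prepare_i2v_latents(vae, scheduler, image, video_length, strength, device, dtype):
    """Sketch of img2img-style latent init for video.

    image: preprocessed tensor of shape (1, 3, H, W) in [-1, 1].
    """
    # Encode the conditioning image into the latent space.
    image_latent = vae.encode(image.to(device, dtype)).latent_dist.sample()
    image_latent = image_latent * vae.config.scaling_factor  # (1, 4, H/8, W/8)

    # Broadcast the image latent across all frames: (1, 4, F, H/8, W/8).
    latents = image_latent.unsqueeze(2).repeat(1, 1, video_length, 1, 1)

    # Pick a starting timestep from the denoising strength and add the
    # corresponding noise, as in a standard img2img pipeline. Real pipelines
    # select this from scheduler.timesteps after set_timesteps(); this is a
    # simplification.
    init_step = int(scheduler.config.num_train_timesteps * strength)
    timestep = torch.tensor([init_step], device=device, dtype=torch.long)
    noise = torch.randn_like(latents)
    return scheduler.add_noise(latents, noise, timestep)
```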

G-U-N commented 7 months ago

Nice try. It seems that you are trying to achieve i2v in a zero-shot manner. However, the i2v results shown on our page are obtained by training an additional image encoder, without requiring a teacher model (see Sec. 4.3 in our paper). We haven't released the weights yet.
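
The paper's actual design is described in Sec. 4.3; purely as an illustration of the general pattern of "a trained image encoder conditioning the video UNet" (and not the authors' architecture), one common scheme is to project CLIP image embeddings into a few extra context tokens that the UNet cross-attends to alongside the text tokens. All names below are hypothetical.

```python
import torch
import torch.nn as nn

class ImageConditionProjector(nn.Module):
    """Hypothetical projector: maps a pooled image embedding to extra
    cross-attention context tokens appended after the text tokens."""

    def __init__(self, clip_dim=1024, cross_attn_dim=768, num_tokens=4):
        super().__init__()
        self.num_tokens = num_tokens
        self.proj = nn.Linear(clip_dim, cross_attn_dim * num_tokens)
        self.norm = nn.LayerNorm(cross_attn_dim)

    def forward(self, image_embeds, text_context):
        # image_embeds: (B, clip_dim); text_context: (B, T, cross_attn_dim)
        tokens = self.proj(image_embeds).reshape(image_embeds.shape[0], self.num_tokens, -1)
        tokens = self.norm(tokens)
        # The UNet then cross-attends to text tokens plus image tokens.
        return torch.cat([text_context, tokens], dim=1)
```

In such a setup, only the small projector (and possibly some attention layers) would be trained, which is consistent with "training an additional image encoder" rather than distilling from a teacher model.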

JACKHAHA363 commented 5 months ago

@aijinkela Hi, does your modification work with the original AnimateDiff?