Nice try. It seems that you are trying to achieve i2v in a zero-shot manner. However, the i2v results shown on our page are obtained by training an additional image encoder, without requiring a teacher model (see Sec. 4.3 in our paper). We haven't released the weights yet.
@aijinkela Hi, does your modification work with the original AnimateDiff?
I modified the code at https://huggingface.co/spaces/wangfuyun/AnimateLCM based on the AnimateDiff i2v process at https://github.com/talesofai/AnimateDiff/blob/04b2715b39d4a02334b08cb6ee3dfe79f0a6cd7c/animatediff/pipelines/pipeline_animation.py#L288, but I find it difficult to achieve the same results as shown on the project homepage. Is there a better way to implement it? A sketch of what I tried is below.
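Roughly, based on my reading of that pipeline, the zero-shot route amounts to VAE-encoding the init image, broadcasting its latent across the frame axis, and partially noising it img2img-style before running the denoising loop. A minimal sketch of that latent initialization, assuming a diffusers SD1.5 VAE and `LCMScheduler`; `init.png`, `strength`, and the step count are placeholders, and this is not the trained image-encoder i2v described in Sec. 4.3 of the paper:

```python
import torch
from diffusers import AutoencoderKL, LCMScheduler
from diffusers.utils import load_image
from torchvision import transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# VAE and scheduler from the SD1.5 base that AnimateDiff/AnimateLCM builds on.
vae = AutoencoderKL.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="vae"
).to(device)
scheduler = LCMScheduler.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="scheduler"
)

# Encode the init image to a latent in [-1, 1] pixel range.
image = load_image("init.png").convert("RGB").resize((512, 512))
pixels = transforms.ToTensor()(image).unsqueeze(0).to(device) * 2.0 - 1.0
with torch.no_grad():
    image_latent = vae.encode(pixels).latent_dist.sample() * vae.config.scaling_factor

num_frames = 16
num_inference_steps = 8
strength = 0.8  # fraction of the denoising trajectory to re-run

scheduler.set_timesteps(num_inference_steps, device=device)
t_start = max(num_inference_steps - int(num_inference_steps * strength), 0)
timesteps = scheduler.timesteps[t_start:]

# Broadcast the image latent over the frame axis: (B, C, F, H, W),
# then noise it to the starting timestep, img2img-style.
latents = image_latent.unsqueeze(2).repeat(1, 1, num_frames, 1, 1)
noise = torch.randn_like(latents)
latents = scheduler.add_noise(latents, noise, timesteps[:1])

# `latents` would then replace the pure-noise latents passed to the
# AnimateDiff/AnimateLCM denoising loop (e.g. pipe(..., latents=latents)).
```

With only a handful of LCM steps, the usable `strength` range is quite narrow, which may be one reason this kind of zero-shot initialization falls short of the trained image-encoder results shown on the project page.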