Kosinkadink / ComfyUI-AnimateDiff-Evolved

Improved AnimateDiff for ComfyUI and Advanced Sampling Support
Apache License 2.0

Does anybody have a workflow for AnimateLCM-I2V? #383

Open GeLi1989 opened 1 month ago

GeLi1989 commented 1 month ago

I found this, but when I tried to use it, I couldn't make it work.

AnimateLCM-I2V support, big thanks to Fu-Yun Wang for providing me the original diffusers code he created during his work on the paper. NOTE: Requires the same settings as described for AnimateLCM above. Requires the Apply AnimateLCM-I2V Model Gen2 node so that ref_latent can be provided; use the Scale Ref Image and VAE Encode node to preprocess input images. While this was intended as an img2video model, I found it works best for vid2vid purposes with ref_drift=0.0, using it for at least the first step before switching over to other models via chaining with other Apply AnimateDiff Model (Adv.) nodes. apply_ref_when_disabled can be set to True to allow the img_encoder to do its thing even after end_percent is reached. AnimateLCM-I2V is also extremely useful for maintaining coherence at higher resolutions (with ControlNet and SD LoRAs active, I could easily upscale from a 512x512 source to 1024x1024 in a single pass). TODO: add examples
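A minimal sketch of that node chain as plain Python, to make the wiring easier to follow. The node titles come from the README quote above; the dictionary shape and input names such as `prev_m_models` are illustrative assumptions, not the exact ComfyUI API-format JSON, so check your installed node list for real class names:

```python
# Illustrative graph of the AnimateLCM-I2V chain described above.
# Each entry: node id -> title and inputs; list values [node_id, output_index]
# denote links from upstream nodes (ComfyUI-style), scalars are widget values.
workflow = {
    "1": {"title": "Load Image", "inputs": {}},
    "2": {"title": "Scale Ref Image and VAE Encode",
          "inputs": {"image": ["1", 0]}},            # preprocess the reference image
    "3": {"title": "Apply AnimateLCM-I2V Model Gen2",
          "inputs": {"ref_latent": ["2", 0],         # ref_latent from the encoded image
                     "ref_drift": 0.0,               # 0.0 recommended for vid2vid above
                     "apply_ref_when_disabled": True}},
    "4": {"title": "Apply AnimateDiff Model (Adv.)",
          "inputs": {"prev_m_models": ["3", 0]}},    # chain to another motion model
}

def upstream(graph, node_id):
    """Return the ids of nodes that feed this node's inputs."""
    return sorted({v[0] for v in graph[node_id]["inputs"].values()
                   if isinstance(v, list)})

print(upstream(workflow, "3"))  # → ['2']
```

The point of the chain is that the I2V model handles the early steps with the reference latent, then hands off to a regular motion model via the `Apply AnimateDiff Model (Adv.)` node.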

thanks