AnimateDiff integration for ComfyUI, adapted from sd-webui-animatediff. Please read the original repo's README for more information.
- Clone this repo into the `custom_nodes` folder.
- Download motion modules and put them under the `comfyui-animatediff/models/` folder.
- Download motion LoRAs and put them under the `comfyui-animatediff/loras/` folder.
Note: LoRAs only work with the AnimateDiff v2 module, `mm_sd_v15_v2.ckpt`.
AnimateDiffLoraLoader
Workflow: lora.json
Samples:
The sliding window feature lets you generate GIFs without a frame-length limit. It divides the frames into smaller batches with a slight overlap between them. The feature is activated automatically when generating more than 16 frames. To change the trigger threshold and other settings, use the `SlidingWindowOptions` node; see the sample workflow below.
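The batching described above can be sketched roughly as follows. This is an illustrative sketch only; the function name, default values, and the handling of the final window are assumptions, not the extension's actual implementation:

```python
# Hypothetical sketch of splitting a frame sequence into overlapping windows.
# Defaults (16-frame windows, 4-frame overlap) are illustrative assumptions.
def sliding_windows(frame_count: int, context_length: int = 16, overlap: int = 4):
    """Yield (start, end) index pairs covering frame_count frames,
    where consecutive windows share `overlap` frames."""
    if frame_count <= context_length:
        yield (0, frame_count)
        return
    step = context_length - overlap
    start = 0
    while start + context_length < frame_count:
        yield (start, start + context_length)
        start += step
    # Final window is flushed to the end so every frame is covered.
    yield (frame_count - context_length, frame_count)

print(list(sliding_windows(32)))  # -> [(0, 16), (12, 28), (16, 32)]
```

Each window is sampled on its own, and the overlapping frames are what keep neighboring windows visually consistent.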
`AnimateDiffSampler`

Mostly the same as the built-in `KSampler`, with the following additional inputs:

- `motion_module`: use `AnimateDiffLoader` to load the motion module
- `inject_method`: should be left at the default
- `frame_number`: animation length
- `latent_image`: you can pass an `EmptyLatentImage`
- `sliding_window_opts`: custom sliding window options

`AnimateDiffCombine`

Combines the generated frames into a GIF or video:

- `frame_rate`: number of frames per second
- `loop_count`: use 0 for an infinite loop
- `save_image`: whether the GIF should be saved to disk
- `format`: supports `image/gif`, `image/webp` (better compression), `video/webm`, `video/h264-mp4`, and `video/h265-mp4`. To use the video formats, you'll need ffmpeg installed and available in `PATH`.
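Since the video formats fail without ffmpeg, you can check that it is actually reachable before running a workflow. A minimal sketch using only the Python standard library:

```python
import shutil

# shutil.which searches PATH the same way the shell would and returns
# the full path to the executable, or None if it is not found.
ffmpeg_path = shutil.which("ffmpeg")

if ffmpeg_path is None:
    print("ffmpeg not found on PATH; video/webm and mp4 formats will fail")
else:
    print(f"ffmpeg found at {ffmpeg_path}")
```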
Custom sliding window options (the `SlidingWindowOptions` node):

- `context_length`: number of frames per window. Use 16 to get the best results; reduce it if you have low VRAM.
- `context_stride`:
- `context_overlap`: number of overlapping frames between window slices
- `closed_loop`: make the GIF a closed loop (adds more sampling steps)

Load a GIF or video as images. Useful for loading a GIF as ControlNet input.

- `frame_start`: skip some beginning frames and start at `frame_start`
- `frame_limit`: only take `frame_limit` frames

Workflow: simple.json
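The `frame_start` / `frame_limit` selection amounts to a simple slice over the decoded frames. A hypothetical sketch (the parameter names mirror the node inputs, but this is not the node's real code):

```python
def select_frames(frames, frame_start=0, frame_limit=None):
    """Skip the first `frame_start` frames, then keep at most `frame_limit` of the rest."""
    end = None if frame_limit is None else frame_start + frame_limit
    return frames[frame_start:end]

# e.g. a 10-frame clip, skipping 2 frames and keeping 3:
print(select_frames(list(range(10)), frame_start=2, frame_limit=3))  # -> [2, 3, 4]
```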
Samples:
Workflow: sliding-window.json
Samples:
Upscale the latent output using `LatentUpscale`, then do a second pass with `AnimateDiffSampler`.
Workflow: latent-upscale.json
Samples:
You will need the following additional nodes: `LatentKeyframe` and `TimestepKeyframe` from ComfyUI-Advanced-ControlNet, to apply different weights for each latent index.

Workflow: cn-2images.json
Samples:
Using a GIF (or video, or a list of images) as ControlNet input.
Workflow: cn-vid2vid.json
Samples:
This is an `xformers` bug accidentally triggered by the way the original AnimateDiff CrossAttention is passed in. The current workaround is to disable xformers by launching ComfyUI with the `--disable-xformers` flag.
See: https://github.com/continue-revolution/sd-webui-animatediff/issues/31
Training data used by the authors of the AnimateDiff paper contained Shutterstock watermarks. Since `mm_sd_v15` was finetuned on finer, less drastic movement, its motion module tries to replicate the watermark's transparency, and the watermark does not get blurred away as it does with `mm_sd_v14`. Try other community-finetuned motion modules.