Closed · jnc-nj closed this issue 3 months ago
Inside Gen2 Nodes
How do I use it? Is there any workflow? Thanks.
Looks like it's broken; no other node seems to accept M_MODELS, and there are zero examples of how to use these nodes.
It's not broken; m_models is the input for Gen2 (the Use Evolved Sampling node). Gen1 is limited to only the base AnimateDiff features, while Gen2 is modular so that I can expose different features like AnimateLCM-I2V, PIA, and CameraCtrl.
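In other words, the Gen2 chain looks roughly like this (a sketch only; I'm assuming the usual Load AnimateDiff Model loader and a standard KSampler on either end, and exact socket labels may differ by version):

```
Load AnimateDiff Model ──▶ Apply AnimateDiff Model ──▶ M_MODELS
                                                           │
checkpoint MODEL ─────────▶ Use Evolved Sampling ◀─────────┘  (m_models input)
                                     │
                                   MODEL ──▶ KSampler
```

The key point is that M_MODELS never goes to a sampler directly; Use Evolved Sampling consumes it and hands a regular MODEL to the sampler.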
@Kosinkadink thanks for that clarification; that was exactly the missing bit of information I (or we) needed in order to look further. lol
Comfy should implement a way to search for extensions that can accept certain inputs.
From the readme:
AnimateLCM-I2V support, big thanks to Fu-Yun Wang for providing me the original diffusers code he created during his work on the paper. NOTE: Requires the same settings as described for AnimateLCM above. Requires the Apply AnimateLCM-I2V Model Gen2 node so that ref_latent can be provided; use the Scale Ref Image and VAE Encode node to preprocess input images. While this was intended as an img2video model, I found it works best for vid2vid purposes with ref_drift=0.0, and used for at least the first step before switching over to other models via chaining with other Apply AnimateDiff Model (Adv.) nodes. The apply_ref_when_disabled option can be set to True to allow the img_encoder to do its thing even after end_percent is reached. AnimateLCM-I2V is also extremely useful for maintaining coherence at higher resolutions (with ControlNet and SD LoRAs active, I could easily upscale from a 512x512 source to 1024x1024 in a single pass). TODO: add examples
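Reading that paragraph literally, I'd expect the I2V wiring to look something like this (my own sketch from the README text, not a confirmed workflow; node and parameter names are taken verbatim from the quote, and socket labels may differ):

```
Load Image ──▶ Scale Ref Image and VAE Encode ──▶ ref_latent ─┐
                                                              │
Load AnimateDiff Model ──▶ Apply AnimateLCM-I2V Model ◀───────┘
                           (ref_drift=0.0,
                            apply_ref_when_disabled=True)
                                      │
                                  M_MODELS   (optionally chained with other
                                      │       Apply AnimateDiff Model (Adv.) nodes)
                                      ▼
                    Use Evolved Sampling ──▶ MODEL ──▶ KSampler
```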
However, I could not find the Apply AnimateLCM-I2V Model or the Scale Ref Image and VAE Encode nodes in the latest pull of the codebase and was wondering where they were (regular AnimateLCM-T2V works fine). If the author could kindly point me to them, that would be great.
Best