Open skyler14 opened 3 weeks ago
This explanation should still be technically valid, though outdated: https://github.com/InstantID/InstantID/pull/89
NO inswapper face swap is used. ONLY InstantID.
Something like this is happening:

1. Mask → find the face → cut out the masked region and resize it to make it bigger (we will work on this image).
2. Prepare the control_image from the image in step 1.
3. Load the face embedding.
4. Perform inpainting with: your model + InstantID + ControlNet + the control_image from step 2 + the resized image from step 1 + the face embedding from step 3.
5. Resize the resulting image back to its original width and height and paste it into the original image, using blur and some magic around the edges.
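The paste-back in the last step can be sketched roughly as below. This is a minimal numpy illustration, not the repo's actual code: the function names (`feather_mask`, `paste_back`) are hypothetical, and a repeated box blur stands in for whatever blur the project really uses (e.g. `cv2.GaussianBlur`) to soften the mask edges before alpha-blending the inpainted crop over the original:

```python
import numpy as np

def feather_mask(mask, radius=8):
    # Soften a binary mask by repeated 5-point box blurring (a crude
    # stand-in for a Gaussian blur). Returns float alpha in [0, 1].
    m = mask.astype(np.float32)
    for _ in range(radius):
        p = np.pad(m, 1, mode="edge")
        m = (p[:-2, 1:-1] + p[2:, 1:-1] +
             p[1:-1, :-2] + p[1:-1, 2:] + p[1:-1, 1:-1]) / 5.0
    return m

def paste_back(original, generated, mask, radius=8):
    # Alpha-blend the (already resized) inpainted result over the
    # original image; the feathered mask hides the seam at the edges.
    alpha = feather_mask(mask, radius)[..., None]
    out = alpha * generated.astype(np.float32) \
        + (1.0 - alpha) * original.astype(np.float32)
    return out.astype(original.dtype)
```

Inside the mask the output is the generated image, outside it is the untouched original, and the blurred band in between does the "magic around the edges".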
Thanks for making this. I really like this project, but unfortunately I'm working with the newer native implementation of InstantID in ComfyUI and can't afford to load two full-sized models in my workflows. Can you describe in a bit more detail how this workflow works and how it can be recreated as nodes?
Is this basically just masking and inpainting? Are we doing an inswapper face swap first and then an img2img InstantID-SDXL noising/denoising pass? A mix of both? The fidelity of this swap can be great, so it'd be nice to be able to recreate the results.