Closed · not-ski closed this 3 months ago
This PR addresses #322, #323, and #324.
Examples (performed on a 2048x2048 SDXL + SUPIR gen, using GPEN 1024):
The face model I used is just a blend of a few random stock photos I found online.
[Images: two comparison sets, each showing Source image, Immediate restore, and Old method]
The difference should be pretty obvious! On the second set of images, you may need to zoom in a bit to really see the difference in clarity/detail. This difference is further exaggerated if the subject's head is tilted significantly. Feel free to play around and generate your own examples if you'd like!
Also, an added bonus of the new approach is that the swapped face appears slightly better aligned than before. You can see this clearly by flipping between the second set of images. I imagine this comes from eliminating the multiple warpAffine transformations the old approach employs; all those back-and-forth transformations likely accumulate floating-point precision loss, especially at these higher resolutions.
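To make the precision point concrete, here's a minimal NumPy sketch (toy matrices I made up for illustration, not ReActor's actual code) of why composing affine matrices into a single warp beats chaining several `warpAffine` calls: the matrix composition is exact, while each separate warp adds an interpolation/resampling pass over the pixel data.

```python
import numpy as np

def to_3x3(m2x3):
    """Lift an OpenCV-style 2x3 affine matrix to homogeneous 3x3 form."""
    return np.vstack([m2x3, [0.0, 0.0, 1.0]])

def compose(*mats_2x3):
    """Compose affine transforms (applied left-to-right) into one 2x3 matrix."""
    acc = np.eye(3)
    for m in mats_2x3:
        acc = to_3x3(m) @ acc
    return acc[:2, :]

# Toy crop-to-face, scale-up-for-restoration, and paste-back transforms.
crop  = np.array([[1.0, 0.0, -40.0], [0.0, 1.0, -60.0]])
scale = np.array([[8.0, 0.0,   0.0], [0.0, 8.0,   0.0]])   # e.g. 128 -> 1024
paste = np.array([[0.125, 0.0, 40.0], [0.0, 0.125, 60.0]])

single = compose(crop, scale, paste)
# The round trip composes to the exact identity matrix, so one warp with
# `single` would resample the image exactly once (or not at all here),
# whereas applying the three warps separately resamples three times.
print(single)
```

The point isn't that the matrices are imprecise (the composition above is exact to float precision); it's that every intermediate `warpAffine` pass interpolates and quantizes pixels, and those losses stack.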
Additionally, note that the masking helper was not used in these examples, so as to give the most direct comparison possible.
Thanks for this PR, I will take a closer look this week!
Hi @not-ski, I'm trying to understand the Face Booster node better.
Thanks
@ckao10301 `restore_with_main_after` toggles whether you want to run another pass of a restoration model after the swapped face has been pasted into the image. If toggled, this will use the restoration model selected in the main Reactor node. Hope this helps :)
Thanks @not-ski. How do I restore before pasting it back into the image?
As you described above:
"Adds a toggle to the main node to enable restoration on the 128px swapped face before it gets pasted back into the target image"
I only see the option to restore after pasting, not before
@ckao10301 the Face Booster node restores before pasting
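To summarize the ordering for anyone else landing here, this is a tiny sketch of where each option restores (all function and step names are hypothetical, not the real ReActor API):

```python
# Hypothetical sketch of the two restoration points. The Face Booster node
# restores the raw 128px swapped face *before* it is pasted back into the
# target image; restore_with_main_after runs a restoration pass on the full
# image *after* pasting.

def swap_pipeline(use_booster, restore_with_main_after):
    steps = ["swap_128px_face"]            # swapper outputs a 128px face
    if use_booster:
        steps.append("restore_before_paste")   # Face Booster node
    steps.append("paste_into_image")
    if restore_with_main_after:
        steps.append("restore_after_paste")    # main-node restoration pass
    return steps

order = swap_pipeline(use_booster=True, restore_with_main_after=True)
print(order)
```

With both enabled you get two restoration passes, one on the small crop and one on the composited image, which is why the two toggles can be mixed independently.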
@not-ski Face boost is messing up the eyes. Do you have advice on how to prevent this?
Without boost:
With boost (GFPGAN):
Another problem is, if I boost with GPEN instead of GFPGAN, the result looks overcooked in addition to eyes being messed up. How do I prevent the overcooking? When should I use GPEN instead of GFPGAN?
Workflow: boost test workflow.json
Inputs:
@ckao10301 you can try reducing the visibility of Face Booster to 0.5-0.6; just play with this value.
Lower visibility is slightly better, but in this case I think the result without faceboost and using restore after swap actually looks better.
gfpgan faceboost
visibility=0.1
visibility=0.5
visibility=1
faceboost off (with restore post-swap)
I've been messing around with my local install of Reactor for a while now, and I've finally landed on a subset of changes that I think would be highly beneficial to merge into the public release:

- Retries detection with `half_det_size()` if no faces are found when attempting to swap
- Adds a toggle to the main node to enable restoration on the 128px swapped face before it gets pasted back into the target image

That last change is the biggest one; the way Reactor currently handles swapping -> restoration involves multiple steps of upscaling/downscaling the cropped face (and affine transformations). This produces a lot of artifacting and detail loss that is especially noticeable when using the larger 1024/2048px restoration models.
By performing restoration on the original 128px swapped face, we can ensure maximal detail/likeness preservation, and the final result is often noticeably better. I will follow up with some examples showcasing this in the thread below.
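As a toy illustration of the resampling point (a 1-D nearest-neighbor stand-in I wrote for this comment, not the actual pipeline code): each down/up round trip irreversibly discards high-frequency detail, which is why handing the restorer the untouched 128px swap and resampling only once preserves more of it.

```python
import numpy as np

def resize_nn(x, new_len):
    """1-D nearest-neighbor resize, standing in for image resampling."""
    idx = np.arange(new_len) * len(x) // new_len
    return x[idx]

signal = np.tile([0.0, 1.0], 64)                     # 128 samples of fine detail
round_trip = resize_nn(resize_nn(signal, 64), 128)   # down to 64, back to 128

loss = np.mean(np.abs(round_trip - signal))
print(loss)  # 0.5: the down/up round trip flattened all of the alternation
```

Real bilinear/bicubic resampling is gentler than nearest-neighbor, but the principle is the same: information thrown away by an intermediate downscale can't be recovered by the upscale that follows, no matter how good the restoration model is.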