Closed — xxxbrokenboi closed this issue 2 months ago
It is a common problem. First, the VAE blurs images. You can try loading an image, VAE-encoding it, then VAE-decoding it and looking at the result. Second, SD repaints the whole image even when inpainting with a mask. So the solution is to blend both images so that you preserve the original image in the non-masked areas. Look at the new example I added.
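The blending step described above can be sketched as a simple per-pixel composite: keep original pixels where the mask is 0 and take the inpainted result where the mask is 1. This is a minimal illustration with numpy, not the actual node implementation; the function name and array shapes are assumptions.

```python
import numpy as np

def blend_inpaint(original, inpainted, mask):
    """Hypothetical helper: composite inpainted output back onto the original.

    original, inpainted: float arrays of shape (H, W, C) in [0, 1].
    mask: float array of shape (H, W) in [0, 1]; 1 = inpainted region.
    A soft (feathered) mask gives a smoother seam than a hard 0/1 mask.
    """
    m = mask[..., None]  # add channel axis so the mask broadcasts over C
    return original * (1.0 - m) + inpainted * m

# Tiny demo with synthetic data: black original, white inpainted result,
# mask covering the central 2x2 block.
orig = np.zeros((4, 4, 3))
inp = np.ones((4, 4, 3))
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0
out = blend_inpaint(orig, inp, mask)
# Corners stay original (0.0); masked center takes the inpainted value (1.0).
```

In ComfyUI terms this corresponds to compositing the sampler's decoded output over the source image using the inpaint mask, so VAE round-trip blur only affects the masked region.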
I was not aware of this. Does this mean that all other inpainting implementations do this basic blending of the original and inpainted images with the mask applied (it is just a hidden step)?
As far as I know, no, but then you usually get blur across the whole image. By the way, did you know that when you zoom an image in the ComfyUI workflow, your browser "helps" you by softening and smoothing it? To compare real images you should use something like FastStone (https://www.faststone.org/).
I had the same thought. When I was playing around with a lot of open inpainting demos deployed on Hugging Face, such as Tencent's BrushNet demo, and running the same product background transformation task, I was getting outputs that were not blurred, and I wondered what kind of trick they were using. A normal blend will always result in slightly unnatural product lighting and shadowing, but their demo didn't seem to have these issues.
When I use your nodes, I noticed that after inverting the mask, areas that shouldn't have been inpainted were affected. For example, in images 2 and 3 below, text that was previously sharp got blurred by the inpaint. Do you know why, and whether some parameter could be the cause? I adjusted STEPS and SCALE, but that didn't fix the problem. Thanks for your work!