divamgupta / diffusionbee-stable-diffusion-ui

Diffusion Bee is the easiest way to run Stable Diffusion locally on your M1 Mac. Comes with a one-click installer. No dependencies or technical knowledge needed.
https://diffusionbee.com
GNU Affero General Public License v3.0

Out/Inpainting affects image outside mask #269

Open Leland opened 2 years ago

Leland commented 2 years ago

This issue may be inherent to Stable Diffusion – I have not tried other inpainting UIs to know whether they also exhibit this behavior.

Inpainting changes parts of the image that are not under the mask. This makes inpainting in DiffusionBee quickly degrade input images: detail is lost even in the first pass, and multiple passes dramatically erode quality.

This might be naive, but couldn't we composite the masked inpainted result against the original to fix this?
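
For illustration, here's roughly what I mean, as a PIL sketch. I'm assuming the mask is white where inpainting should apply, and the file names are placeholders:

```python
from PIL import Image

# Load the original input, the raw inpainted output, and the mask
# (assumed: white = region that was inpainted, black = keep original).
# All three are assumed to have the same dimensions.
original = Image.open("original.png").convert("RGB")
inpainted = Image.open("inpainted.png").convert("RGB")
mask = Image.open("mask.png").convert("L")

# Image.composite takes pixels from the first image where the mask is
# white and from the second image where it is black, so everything
# outside the mask is restored to the untouched original.
result = Image.composite(inpainted, original, mask)
result.save("composited.png")
```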

Tests

[Images: side-by-side comparison of the Original, after 1 Run, and after 5 Runs, for two eyes-closed stock photos]

Masks:

[Screenshots: the two masks used for the tests]

Source: stock images from Google, I do not own these images.

barryanders commented 2 years ago

I was in the middle of writing this issue, but since you've created it I'll just add notes here. There's a similar problem in outpainting: as you move the square around, it degrades everything within the square (as mentioned above), and you can see the difference between where your square is and the rest of the image. It's seemingly the same problem. I don't know whether this happens without masks, as I was using masks for most of my outpainting runs. I had a mask that wasn't doing what I wanted, so I ran it several times, and that's when I noticed the image getting worse in quality in areas that were not supposed to change. I was in outpainting because I was simultaneously extending the image (which was working).

I have not seen the same problem when using AUTOMATIC1111/stable-diffusion-webui, although my testing is pretty limited. I'm currently trying out a few different UIs to see if I can get better results from inpainting and outpainting.

divamgupta commented 2 years ago

The SD 1.5 inpainting model by RunwayML does not do the compositing itself. We would need to run an experiment comparing composite vs. no composite on multiple images / test cases. I'm not sure whether a naive composite would produce good blending.

It would be great if someone could try it and post it here.
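
Something like the sketch below could work as a harness: run N inpainting passes with and without the composite step, and measure how much the region outside the mask drifts from the original. The `inpaint` argument is a stand-in for whatever backend you use, and blending quality at the mask edge would still need visual inspection:

```python
import numpy as np
from PIL import Image

def drift_outside_mask(original, current, mask):
    """Mean absolute pixel difference in the region the mask says to keep
    (assumes black = keep original, white = inpaint)."""
    orig = np.asarray(original, dtype=np.float32)
    curr = np.asarray(current, dtype=np.float32)
    keep = np.asarray(mask.convert("L")) < 128
    return float(np.abs(orig - curr)[keep].mean())

def run_passes(image, mask, n_passes, composite, inpaint):
    """Run `inpaint` (a stand-in for the real backend) repeatedly,
    optionally restoring the unmasked pixels from the original each pass."""
    original = image
    for _ in range(n_passes):
        image = inpaint(image, mask)
        if composite:
            image = Image.composite(image, original, mask.convert("L"))
    return image

# for composite in (False, True):
#     out = run_passes(img, mask, n_passes=5, composite=composite, inpaint=inpaint_fn)
#     print(composite, drift_outside_mask(img, out, mask))
```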

barryanders commented 2 years ago

@divamgupta I'm not sure what you mean when you say composite vs no composite.

Here's an example of what I was talking about, without doing any masking in outpainting. Prompt is  because a blank prompt was not allowed. Apart from that burned kind of effect, it did a good job extending the image.

  1. First outpainting [screenshot]
  2. Second outpainting [screenshot]
  3. The quality degradation is noticeable here in the outline of the first two outpaintings. [screenshot]
  4. Several more outpaintings later, it's clear that the image gets a bit more cooked each time. [screenshot]

Edit: Also, since I mentioned that I was testing different UIs, I found that I got remarkable results for outpainting using this.

Polyculturerawks commented 2 years ago

I have also noticed the same issues with in/out painting tests on 1.4.3 (FP16). Over time the outpainting degrades the original image and turns magenta, creating visible differences where overlap occurs.

Urethramancer commented 2 years ago

I have noticed the same problem in all versions of DiffusionBee. I just installed the FP32 version, and it still changes the colours and mutates faces outside areas I want to change.

I did notice a bit of colour change while outpainting, but no real mutations of other features yet. I spent an hour working on a thing just to test it, and I sort of fixed the colour issues by just erasing the lines where the transition becomes obvious. Individual spots were different tones, but the overall result was acceptable.

Leland commented 2 years ago

Ok, apparently one part of this is because Stable Diffusion causes color distortion, which skews towards magenta. Google "img2img Color Correction" to find more, or check out this Reddit post: https://old.reddit.com/r/StableDiffusion/comments/ydwnc3/good_news_vae_prevents_the_loopback_magenta_skew/. Various people have seemingly solved this in various ways.
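
One approach people describe (reportedly similar to what AUTOMATIC1111's "apply color correction" option does, though I haven't verified that) is to match the output's color histogram back to the input's. A rough sketch with scikit-image; the function name and file handling are mine, not anything DiffusionBee exposes:

```python
import numpy as np
from PIL import Image
from skimage import exposure  # scikit-image

def correct_colors(generated, reference):
    """Match the generated image's color distribution to the original
    input's, to counteract the cumulative magenta drift."""
    gen = np.asarray(generated)
    ref = np.asarray(reference)
    matched = exposure.match_histograms(gen, ref, channel_axis=-1)
    return Image.fromarray(np.clip(matched, 0, 255).astype(np.uint8))

# e.g. corrected = correct_colors(inpainted_result, original_input)
```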

You can very clearly see this happening in DiffusionBee in my first example:

[Image: https://user-images.githubusercontent.com/714017/200147144-a087470a-fd23-4a7d-aaf0-be9850073b2f.png]

But, also, I can confirm that this sort of image quality drop does not happen in other SD interfaces – or if it does, it is drastically more severe in DiffusionBee.

> I'm not sure whether a naive composite would produce good blending.

After eliminating the magenta skew, it may? But I agree that good blending > quality loss.
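
If the hard edge is the worry, one common trick (my assumption, not something DiffusionBee does) is to feather the mask before compositing, so the transition fades rather than switching along a hard line. Building on the earlier PIL sketch:

```python
from PIL import Image, ImageFilter

original = Image.open("original.png").convert("RGB")
inpainted = Image.open("inpainted.png").convert("RGB")
mask = Image.open("mask.png").convert("L")

# Blur the mask so the composite blends smoothly between the inpainted
# region and the untouched original; intermediate gray values in an
# L-mode mask make Image.composite interpolate between the two images.
feathered = mask.filter(ImageFilter.GaussianBlur(radius=8))
result = Image.composite(inpainted, original, feathered)
result.save("composited_feathered.png")
```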

divamgupta commented 2 years ago

Is this for FP16 or FP32? I think there will be more color distortion in FP16.


Urethramancer commented 2 years ago

I'm seeing it in FP32 also. It's subtle at first, but after a few iterations I'm wondering how I didn't notice.

MrSamSeen commented 1 year ago

Anyone found a solution for this?

novirusallowed commented 1 year ago

Same issue here. I have a custom model that works perfectly in AUTOMATIC1111 but not when I use it with diffusers.

Even if I use a mask, it still modifies all the faces and other small details all over the image.
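
For reference, a minimal diffusers sketch of the workaround discussed above (compositing the unmasked region back after generation). The prompt and file names are placeholders, and the device/model choice is an assumption:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

device = "mps" if torch.backends.mps.is_available() else "cpu"

# RunwayML inpainting checkpoint mentioned earlier in the thread.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting"
).to(device)

init_image = Image.open("input.png").convert("RGB").resize((512, 512))
mask_image = Image.open("mask.png").convert("L").resize((512, 512))  # white = repaint

result = pipe(prompt="your prompt here", image=init_image,
              mask_image=mask_image).images[0]

# The model can alter pixels outside the mask, so restore them from the
# original: keep `result` where the mask is white, `init_image` elsewhere.
final = Image.composite(result, init_image, mask_image)
final.save("output.png")
```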