ThereforeGames opened this issue 1 year ago
My inpainting doesn't even work; when I try to correct eyes, the result is just black painted over the eyes.
Is it still washed out if you change the VAE used? Mine always had those kinds of results when I used a VAE meant for anime generations (like they put a greyish layer on top of it).
> Is it still washed out if you change the VAE used? Mine always had those kinds of results when I used a VAE meant for anime generations (like they put a greyish layer on top of it).
Yeah, changing the VAE didn't seem to make a difference. It is definitely like a "greyish layer on top" though.
I'll run some more tests soon, see if I can identify the culprit and post some example pics.
Oh! I figured it out!
I made a customization in processing.py that I neglected to include in my PR #5644:
I don't understand the inpainting model well enough to explain why my one-line version works better, but look at the difference it makes:
Current version (washed out)
My version
At least for faceswaps, I don't notice any side effects with my version, but I'm hesitant to submit another PR without understanding the reason for the washed-out look.
Thoughts?
Well, subjectively speaking, given the example above, I like your version better. My observation so far is that SD treats faces differently from any other part of a picture, with way too much emphasis on human faces, or faces in general. Perhaps you could give another example that does not include a face if you're trying to raise a new PR without proper reasoning behind it, IMHO.
> Perhaps you could give another example that does not include a face if you're trying to raise a new PR without proper reasoning behind it, IMHO.
Sure, here's one more.
Original image:
Swap to "tomato" at 0.75 conditioning mask strength
Current version:
My version:
I don't know if the current implementation is bugged, but I have never been able to get it to produce a pleasing result when the conditioning mask strength value is > 0 but below roughly 0.95.
Not only is it washed out, it also doesn't seem to morph the shape of the image at high values nearly as much as one might expect.
I don't really know how the results are meant or supposed to look, TBH. I'm more of a "just adapt to what I have" kind of guy.
Using your source image, I can confirm that it is washed out and doesn't even try to morph to what's in the prompt if we use the 'original' masked content. AFAIK, not changing the subject of the masked content is what 'original' is supposed to do, but TBH I don't know exactly how it's supposed to behave. Regarding the washed-out look: yes, I can confirm that.
My results:
Current version:
Your version:
Side by side comparison:
EDIT: It is even more DESATURATED in my examples. Different configs somewhere, perhaps?
@EllangoK Why close an issue that isn't solved? Just try to run img2img with conditioning mask < 1, and then run it again in inpainting; the second result is entirely different, way too close to the source image, and with washed out colors.
For this guy, if we inpaint his shirt (instead of his face) to show his skin, lol, the result will be an alpha blend of skin color and shirt color. It looks extremely bad when he is wearing a black shirt. Hmm, it seems the color correction is NOT designed for this anyway (updated; originally said it was missing).
Excuse me, sorry, I don't understand where the command "--disable-safe-unpickle --xformers, though I tried other combinations of arguments" should be written.
Dunno if this is the same issue, but some models (possibly VAE related?) will give scuffed colors when you inpaint,
even though it seems fine in the preview just before generation completes.
Does anyone know what I could do to fix this? It makes editing really hard.
It was actually somehow the default orangemix VAE not being loaded in img2img. I forced it on in settings, set --no-half-vae (it wouldn't work otherwise), and get way better results now.
Related/dup issues: https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/2754, https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/5557
I do think this is VAE related, but it would be appreciated if somebody more knowledgeable could give feedback and/or explain exactly what is happening here before closing this. The VAE of choice, --no-half-vae, and the color correction option can resolve this in some cases, but results are still inconsistent. The answers and advice most people give are based on their own experience and workarounds, not actual solutions or well-informed reasons. The VAE of choice is likely the root cause though, as far as I can tell.
It should be mentioned as well that there are only a few known VAEs for SD1/2 models that are not simply merges or output-layer fixes, the latter of which are actually a bad idea to even use, as they harm hires fix results. https://github.com/sALTaccount/VAE-BlessUp/issues/1#issuecomment-1459008225 (tl;dr: loss of detail and color range)
This is what should be worked off of for resolving this. Any other VAE is likely tampered with or is a merge that is not really representative of its original training.
- vae-ft-ema-560000-ema-pruned
- vae-ft-mse-840000-ema-pruned
- kl-f8-anime & kl-f8-anime2
- animevae (might be finetuned off the SD1.0 VAE, which could explain the somewhat desaturated colors)

For what it's worth, I only ever encountered issues with one model/VAE combination (anything that the orangemix VAE is a rename or merge of), so it's probably not a widespread issue, which I guess is why there are barely any people talking about it in the past 6 months.
Definitely running into this very commonly with multiple models. The weird thing is that it seems to get notably worse over time. Inpainting right after starting Stable Diffusion, it's barely noticeable with color correction enabled unless you really look; after hours of inpainting on multiple images/models it's really noticeable. Restarting the process cleans it up again for a while.
After a couple quick tests I did notice running img2img @ 0 denoise strength (basically testing to see if encoding and then decoding the latent produces the same result, which it should) only produces desaturated colors with the NovelAI VAE. No other VAEs (of the ones I listed above, sans SD1.0) cause this issue. Lines up with what @Enferlain experienced too.
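For reference, the same round-trip check can be run outside the webui. Here's a rough sketch using diffusers; the VAE path and test image are placeholders, and the image dimensions should be multiples of 8:

```python
import torch
import numpy as np
from PIL import Image
from diffusers import AutoencoderKL

# Load the VAE under test at full precision. "./vae" is a placeholder path to
# a VAE in diffusers format; swap in whichever VAE you want to check.
vae = AutoencoderKL.from_pretrained("./vae").to("cuda", dtype=torch.float32)
vae.eval()

# Normalize the test image to [-1, 1], NCHW, as the VAE expects.
img = Image.open("test.png").convert("RGB")
x = torch.from_numpy(np.array(img)).float() / 127.5 - 1.0
x = x.permute(2, 0, 1).unsqueeze(0).to("cuda")

with torch.no_grad():
    # Encode then decode; a faithful VAE should give back nearly the same image.
    latents = vae.encode(x).latent_dist.mean  # use the mean for a deterministic test
    recon = vae.decode(latents).sample

print("mean absolute reconstruction error:", (recon - x).abs().mean().item())
out = ((recon[0].permute(1, 2, 0).clamp(-1, 1) + 1) * 127.5).byte().cpu().numpy()
Image.fromarray(out).save("roundtrip.png")
```

If a VAE noticeably shifts colors here, with no denoising involved at all, then the desaturation in inpainting/img2img is coming from the VAE round trip rather than the sampler.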
@catboxanon Does it still desaturate if you use a NovelAI based model but force a different VAE globally via the settings menu? Might be a half-decent workaround until the root cause can be determined.
Looking at the code, I had a hunch that maybe it was line 194 that was causing some kind of rounding error specific to the affected VAE, but even running your edited version without that line doesn't seem to make any difference for me; it still desaturates as badly as the original code.
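For anyone following along, this is roughly what that blend does; a paraphrased, self-contained sketch with illustrative names, shapes, and values, not the exact webui source:

```python
import torch

# Paraphrased sketch of the conditioning blend used for inpainting models in
# processing.py (names, shapes, and values are illustrative, not the real code).
source_image = torch.rand(1, 3, 64, 64) * 2 - 1   # stand-in for the source image tensor
conditioning_mask = torch.rand(1, 1, 64, 64)      # soft mask in [0, 1]
inpainting_mask_weight = 0.75                      # the "conditioning mask strength" setting

conditioning_mask = torch.round(conditioning_mask)  # the rounding step in question: snaps the mask to hard 0/1
conditioning_image = torch.lerp(
    source_image,                                   # weight 0: pass the source through untouched
    source_image * (1.0 - conditioning_mask),       # weight 1: zero out the masked region
    inpainting_mask_weight,
)
```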
> Does it still desaturate if you use a NovelAI based model but force a different VAE globally via the settings menu? Might be a half-decent workaround until the root cause can be determined.
It doesn't. This is basically what I already pointed out though.
> No other VAEs (of the ones I listed above, sans SD1.0 [and the NovelAI VAE]) cause this issue.
If you're referring to the code mentioned in https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/6074#issuecomment-1367263652, that should only apply to inpainting models. That might be a separate issue altogether.
Left: NAI VAE (orangemix), right: pastel-wafiu VAE. The left one can throw NaNs.
Pretty much same experience as earlier this year but I use a different vae now
> pastel-wafiu vae

This is just kl-f8-anime btw. The hashes are identical.
I believe the NovelAI VAE also only avoids producing NaNs if you use their leaked checkpoint and VAE both at full precision (fp32). The leaked code indicates this is how they run their models; they don't run at half precision anywhere in their pipeline.
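If you want to see the precision effect outside the webui, a rough sketch like this compares fp16 vs fp32 decoding (the VAE path is a placeholder, not the leaked checkpoint, and real latents from an actual generation will show the problem better than random noise):

```python
import torch
from diffusers import AutoencoderKL

# Decode the same latents with the VAE at fp16 and at fp32, then check for NaNs.
latents = torch.randn(1, 4, 64, 64)  # stand-in for latents from a real generation

for dtype in (torch.float16, torch.float32):
    vae = AutoencoderKL.from_pretrained("./vae", torch_dtype=dtype).to("cuda")
    with torch.no_grad():
        out = vae.decode(latents.to("cuda", dtype)).sample
    print(dtype, "NaNs in decoded image:", torch.isnan(out).any().item())
```

This is also roughly what --no-half-vae works around: it keeps the VAE at full precision even when the rest of the model runs at half precision.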
yeah they look pretty much the same
I was facing the same issue of "washed-out" colors after any kind of inpainting or img2img. After some tests I realized I was not using the VAE recommended by the model, and using the proper VAE solved all the issues. So, I'd recommend checking that before going into half precision or using color correction.
> I was facing the same issue of "washed-out" colors after any kind of inpainting or img2img. After some tests I realized I was not using the VAE recommended by the model, and using the proper VAE solved all the issues. So, I'd recommend checking that before going into half precision or using color correction.
This worked. For those wondering how to change it: go to the 'Settings' tab, and in the sidebar look for 'VAE'. Then, on that page, change it under 'SD VAE'.
@ThereforeGames does updating adetailer #308 on the latest version solve your issue?
Is there an existing issue for this?
What happened?
Hi,
It seems that img2img inpainting produces very washed out results, regardless of color correction mode. I'll list what I think are the relevant settings below.
I may update this post with example images later, please let me know if that would be helpful.
Steps to reproduce the problem
All other settings, including the sampler, seem to be irrelevant here.
What should have happened?
The contrast of the original image should be generally preserved. I reverted to a commit that's ~4 days old (don't know the hash, sorry) and it was working fine then.
Commit where the problem happens
4af3ca5
What platforms do you use to access the UI?
Windows
What browsers do you use to access the UI?
Brave
Command Line Arguments
Additional information, context and logs
N/A