Acly / krita-ai-diffusion

Streamlined interface for generating images with AI in Krita. Inpaint and outpaint with optional text prompt, no tweaking required.
https://www.interstice.cloud
GNU General Public License v3.0

Color correction #26

Open Miraihi opened 11 months ago

Miraihi commented 11 months ago

Hello, Acly. I'm enjoying your plugin a lot. Since I use Krita SD plugins mainly for inpainting, it surprised me how this function works in your plugin. The weirdest part is how the "Strength" parameter behaves: compared with "Denoising Strength", it barely changes anything at 95% but changes everything at 100%. What I miss most, though, is the automatic color correction normally present in the Automatic1111 SD UI. Could automatic color correction be added?

By the way, live preview would also be welcomed.

Acly commented 11 months ago

Hi, I'm surprised too :) The strength value does in fact translate 1:1 to denoising strength, and 95% allows for completely different content in my experience. There is a difference between 100% and not-100%: to compensate for the lack of context, 100% uses additional information from the original image outside the masked area. But anything below 100% should behave very similarly to img2img in Auto1111 (with inpaint area = only masked and masked content = original).
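To illustrate the "translates 1:1" claim: in diffusion samplers, a denoising strength below 1.0 typically means the original image is noised only part-way and the sampler runs just the remaining tail of the step schedule. A minimal sketch (illustrative names, not the plugin's actual API):

```python
def plan_denoise(total_steps: int, strength: float) -> range:
    """Return the sampler steps actually executed.

    strength=1.0 runs the full schedule starting from pure noise;
    strength<1.0 noises the original image part-way and only runs
    the remaining steps, so the original content constrains the result.
    """
    if not 0.0 <= strength <= 1.0:
        raise ValueError("strength must be in [0, 1]")
    first_step = round(total_steps * (1.0 - strength))
    return range(first_step, total_steps)

# With a 20-step schedule, strength 0.95 skips only the very first
# step, which is why results at 0.95 can still differ drastically
# from the original.
```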

Automatic color correction could be useful in some cases. I don't think it exists yet for Comfy, so it's a bit more work.
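For reference, Auto1111's color correction works by matching the generated result's color distribution to the original image (it does full histogram matching in LAB space via scikit-image). A rough NumPy-only approximation that just matches per-channel mean and standard deviation, which already removes most color drift:

```python
import numpy as np

def match_color(result: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Shift each RGB channel of `result` so its mean/std match `reference`.

    A simplified stand-in for Auto1111's color correction (which uses
    proper histogram matching in LAB). Both arrays: float values in
    [0, 1], shape (H, W, 3).
    """
    out = np.empty_like(result)
    for c in range(3):
        r, ref = result[..., c], reference[..., c]
        std = r.std()
        scale = ref.std() / std if std > 1e-6 else 1.0
        out[..., c] = (r - r.mean()) * scale + ref.mean()
    return np.clip(out, 0.0, 1.0)
```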

Miraihi commented 10 months ago

@Acly Thanks for the response! In general, is it possible to make the inpainting more similar to how it works in the Automatic UI? Right now the selection area and the "mask" are essentially the same thing (if we don't count "Grow selection"). I'd love to be able to select an extended area of the image to provide enough context, but inpaint only a focused area. I'm not entirely sure why, but I can't get results similar to Automatic's, even though I like the plugin in all other aspects.

Acly commented 10 months ago

There is also additional padding around the mask. I was reluctant to expose it as a setting because it takes various things into account (selection size, image borders, it prefers to grow towards a square shape, etc.). But I can expose the base padding as a % of selection size, similar to grow/feather, and it should end up very close to what you describe (plus some technical considerations, like it has to be a multiple of 8). Manually selecting another area on top of the mask sounds cumbersome and shouldn't be necessary IMO.
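The kind of padding logic described above can be sketched roughly like this (illustrative only, not the plugin's actual code): grow by a percentage of the selection size, prefer a square shape, round the size up to a multiple of 8 as SD latents require, and clamp to the image borders.

```python
def padded_context(x0, y0, x1, y1, image_w, image_h, pad_pct=0.1):
    """Expand a selection's bounding box to give the model context.

    Hypothetical sketch: grow by `pad_pct` of the selection size,
    grow toward a square, round the side up to a multiple of 8,
    and keep the box inside the image.
    """
    w, h = x1 - x0, y1 - y0
    pad = round(max(w, h) * pad_pct)
    side = max(w, h) + 2 * pad          # square-ish context area
    side = -(-side // 8) * 8            # round up to a multiple of 8
    side = min(side, image_w, image_h)  # never exceed the image
    cx, cy = x0 + w // 2, y0 + h // 2
    nx0 = min(max(cx - side // 2, 0), image_w - side)
    ny0 = min(max(cy - side // 2, 0), image_h - side)
    return nx0, ny0, nx0 + side, ny0 + side
```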

If you can give an example image/mask + inpaint result from Auto1111, I might get a better idea of what you mean by "similar results". Some differences are to be expected, but at least for <100% strength they should work in a similar fashion.

Are you using inpaint ControlNet in Auto1111? (it's used automatically in the plugin if you have a selection)

Miraihi commented 10 months ago

I generally don't use inpaint controlnet. Is there an option to disable it in your plugin?

Acly commented 10 months ago

> It's indeed the Krita update that broke it.

Broke what?

You can't disable the inpaint ControlNet via the UI. If you want to try/compare, it's not hard to disable it in code though; comment out or delete this line:

```python
cond.control.append(Control(ControlMode.inpaint, in_image, mask=in_mask))
```
Miraihi commented 10 months ago

I genuinely have no idea how this comment appeared here :D Nothing's broken; I was commenting on another topic for another extension, but somehow edited this comment instead. Still, ControlNet inpainting has that washed-out look I really don't like. I'll try your solution.

EDIT: There's no such line in your Version 1.3.0.

Miraihi commented 10 months ago

I think I'll show you my main problem with the "Strength" slider. Base image, original Dreamshaper model.

A segment altered at 100% strength ("inpainting" version of the Dreamshaper model): a drastic difference.

A segment altered at 95% (same "inpainting" version of the Dreamshaper model): barely any difference; the overall contours of the castle are completely untouched. This is certainly not the same as the difference between denoising strength 1 and denoising strength 0.95. The only good thing is that it preserves the original colours; the little details are different too, but that's not always useful. Usually I'd choose a denoising strength of 0.4-0.5 if I want to leave the overall shape intact and only change the details.

Same manipulations in Automatic1111 UI (Same model, "Masked content: Original" for both, no controlnet)

Denoising Strength = 1 Denoising Strength = 0.95

You can clearly see how the overall shape of the castle in DS = 0.95 has been changed by almost the same amount as DS = 1.

Could my problem here be using the "inpainting" model instead of the base one? Actually, YES. If I use the base Dreamshaper model, the colors look alright and the change is overall very desirable. And it changes accordingly at 95%.

Looks like I've solved the problem of my own making! You should state somewhere that "Inpainting" models are not needed. I'm sure I'm not the only user inclined to work the image the old way.

Acly commented 10 months ago

Interesting, thanks for investigating. I haven't tried inpaint models because the inpaint controlnet does (almost) the same thing, without requiring you to have a duplicate of every checkpoint. My first thought was that combining them is maybe not a good idea, but that isn't the problem.

I did some more tests, and denoise<1.0 inpainting with an inpaint model generally doesn't work as expected when keeping the original content. This seems to be a quirk in ComfyUI. I've found some people reporting the same issue on reddit, but so far not a good explanation.

It's not a huge issue, I think, since the inpaint ControlNet is a good replacement for inpaint models. But I'll look into adding a warning when one is selected, to avoid confusion.

Miraihi commented 10 months ago

@Acly Yes, actually I'm surprised how well it works when it actually works (especially with your newer IP-Adapter implementation). The colors still shift unpredictably sometimes, but it's not as bad as it was. I also noticed that the OpenPose and Depth ControlNets don't work when an inpainting model is selected (some error related to a "shape" parameter).

LuluViBritannia commented 9 months ago

The biggest strength of AI built into an editing application is the ability to use that application's features on generated images. In Krita, click Filter > Adjust: there's a lot of color correction functionality there, and more elsewhere. And since every generation is its own layer, you can fix an inpainted part pretty easily.