why does controlnet inpaint destroy the original color contrast?
Closed · xalteropsx closed 3 months ago
Hi, you're not giving any information on what you're doing, the code you're using, or even the resulting image. We can't help you if you don't provide a minimal reproducible example.
If I have to guess, I'd say you're using the controlnet with too much strength; also, the inpainting model does make the image a little less saturated depending on the denoise strength.
sorry, I forgot to provide a reproduction, give me a few minutes
@asomoza
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline, UniPCMultistepScheduler

controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11p_sd15_inpaint", torch_dtype=torch.float16, use_safetensors=True)
pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained("frankjoshua/dreamshaper_8Inpainting", controlnet=controlnet, torch_dtype=torch.float16, use_safetensors=True)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()

# batman is the source image, mask is the inpaint mask, control_image is the
# inpaint condition (all prepared elsewhere)
image = pipe(
    "corgi face with large ears, detailed, pixar, animated, disney",
    eta=1.0,
    image=batman,
    control_image=control_image,
    num_inference_steps=20,
    mask_image=mask,
).images[0]
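For reference, the control_image for this particular controlnet is usually built from the source image plus the mask, the way the diffusers ControlNet-inpaint example does it. A rough sketch of that step (not shown in the original post; variable names reuse the ones above):

import numpy as np
import torch

def make_inpaint_condition(image, image_mask):
    # Normalize image and mask to [0, 1] and mark the masked pixels with -1.0,
    # which is how the lllyasviel inpaint controlnet expects its condition.
    image = np.array(image.convert("RGB")).astype(np.float32) / 255.0
    image_mask = np.array(image_mask.convert("L")).astype(np.float32) / 255.0
    image[image_mask > 0.5] = -1.0
    image = np.expand_dims(image, 0).transpose(0, 3, 1, 2)
    return torch.from_numpy(image)

control_image = make_inpaint_condition(batman, mask)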
test it yourself and see the result, I think it will be the same with all models
what does the non-masked area have to do with the inpainting area? can't we control it?
The difference you see is mostly from the VAE encoding and decoding; it's a lossy process, so no matter what you do you'll always lose some detail.
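You can see that loss in isolation by round-tripping an image through just the VAE. A small sketch, assuming the dreamshaper checkpoint above is in diffusers format with a vae subfolder, and reusing the batman PIL image from the repro:

import torch
from diffusers import AutoencoderKL
from diffusers.image_processor import VaeImageProcessor

# Load only the VAE from the inpainting checkpoint and encode/decode one image.
vae = AutoencoderKL.from_pretrained(
    "frankjoshua/dreamshaper_8Inpainting", subfolder="vae", torch_dtype=torch.float16
).to("cuda")
processor = VaeImageProcessor(vae_scale_factor=8)

pixels = processor.preprocess(batman).to("cuda", torch.float16)
with torch.no_grad():
    latents = vae.encode(pixels).latent_dist.sample()
    decoded = vae.decode(latents).sample
roundtrip = processor.postprocess(decoded)[0]
# Comparing roundtrip with the original image shows the detail and saturation
# drift that happens even before any denoising.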
Also, you're using an inpainting model with an inpaint controlnet. You don't really need both, since they do the same thing. If you use the controlnet you have to pass the whole image as context and get a new one back, so it will always be different.
If you want to preserve the original image as much as possible, use an inpainting model without the controlnet and use padding_mask_crop, which only changes the area of the mask.
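As a sketch of that suggestion, using the same checkpoint and prompt as the repro above (the padding value of 32 is just an example):

import torch
from diffusers import StableDiffusionInpaintPipeline, UniPCMultistepScheduler

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "frankjoshua/dreamshaper_8Inpainting", torch_dtype=torch.float16, use_safetensors=True
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()

# Only the masked region (plus the padding) is re-encoded and re-generated;
# everything outside that crop stays untouched.
image = pipe(
    "corgi face with large ears, detailed, pixar, animated, disney",
    image=batman,
    mask_image=mask,
    num_inference_steps=20,
    padding_mask_crop=32,
).images[0]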
@asomoza actually I have an inpainting model; if I use a normal model with it, it shows a model size mismatch. Will check padding_mask_crop >.< / brb, doing some daily quests, once done I will tell you the result
@asomoza sorry bro for tagging you again, it works like a charm, but I have something to ask about padding_mask_crop: is it measured relative to the mask or to the whole image, like what does 32 mean if we pass it as the padding? Also, it doesn't support multiple images?
File "Z:\software\python11\Lib\site-packages\diffusers\pipelines\controlnet\pipeline_controlnet_inpaint.py", line 772, in check_inputs
raise ValueError(
ValueError: The image should be a PIL image when inpainting mask crop, but is of type <class 'list'>.
padding_mask_crop, is it measured relative to the mask or to the whole image, like what does 32 mean if we pass it as the padding?
I don't fully understand what you're trying to say, but when you enable padding_mask_crop, the image gets cropped with the mask, upscaled, inpainted and then scaled down again to finally be pasted over the same part of the original image.
The padding just tells it how much space you want between the mask and the crop of the original image.
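In other words, the crop is driven by the mask's bounding box plus the padding. A simplified illustration of the idea (not the actual diffusers code; the helper name is made up):

import numpy as np
from PIL import Image

def mask_crop_box(mask: Image.Image, padding: int = 32):
    # Bounding box of the white (to-inpaint) pixels, grown by `padding` pixels
    # on each side and clamped to the image borders; assumes a non-empty mask.
    m = np.array(mask.convert("L")) > 127
    ys, xs = np.where(m)
    left = max(int(xs.min()) - padding, 0)
    top = max(int(ys.min()) - padding, 0)
    right = min(int(xs.max()) + padding + 1, mask.width)
    bottom = min(int(ys.max()) + padding + 1, mask.height)
    return left, top, right, bottom

# Only this box gets cropped out, upscaled, inpainted, downscaled and pasted
# back, so the pixels outside it never change.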
also it doesn't support multiple images?
yeah, I never use multiple images with inpainting, and I wasn't here when it was implemented, but the logic is probably that it's a task specific to each image, so there's not much need to make it multi-image.
I'm curious about what you're doing that requires the same inpainting for multiple images.
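If you do need the same mask on several images, the straightforward way under that constraint is just a loop, reusing the inpainting pipe and mask from the sketch above (other_image and the file names are placeholders):

# Inpaint each image separately with the same mask and prompt.
for i, img in enumerate([batman, other_image]):
    out = pipe(
        "corgi face with large ears, detailed, pixar, animated, disney",
        image=img,
        mask_image=mask,
        num_inference_steps=20,
        padding_mask_crop=32,
    ).images[0]
    out.save(f"inpainted_{i}.png")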
seems like I got it, ah you are correct, inpainting multiple images with the same mask is not much needed, but sometimes it's good to have
thanks a lot bro >.</