Closed kelisiya closed 1 year ago
No. @kelisiya
You can directly use our uploaded pipeline (StableDiffusionControlNetInpaintImg2ImgPipeline).
pipe_control = StableDiffusionControlNetInpaintImg2ImgPipeline.from_pretrained(control_model_path)
is all you need. Why do you want to replace the UNet?
When I load my diffusers pretrained model with pipe_control = StableDiffusionControlNetInpaintImg2ImgPipeline.from_pretrained(".mypath/ControlNet/diffusers/control_canny", torch_dtype=torch.float16).to('cuda'),
there is an error: Cannot load <class 'diffusers.models.unet_2d_condition.UNet2DConditionModel'> from .mtpath/ControlNet/diffusers/control_canny/controlnet because the following keys are missing: up_blocks.xxxx
Oh, my mistake!
You need to modify the pipeline a bit. This pipeline is for inpainting!
Of course, I'm trying to use this model for inpainting, but the same error occurs. Should I convert the diffusers model from .pth again?
Below is what I have done. The inpaint_model_path is from here; only SD 1.5 is supported for now. The control_model_path is converted from the ControlNet .pth using our tutorial.
pipe_control = StableDiffusionControlNetInpaintImg2ImgPipeline.from_pretrained(control_model_path, torch_dtype=torch.float16).to('cuda')
pipe_inpaint = StableDiffusionInpaintPipeline.from_pretrained(inpaint_model_path, torch_dtype=torch.float16).to('cuda')
pipe_control.unet = pipe_inpaint.unet
pipe_control.unet.in_channels = 4
If it still doesn't work, please provide more info here so that I can help you.
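The four steps above can be collected into one helper. This is only a sketch of the swap described here, not a tested recipe: the pipeline class is passed in as an argument (so the sketch doesn't guess its import path), the model paths are placeholders, and it needs diffusers plus a CUDA device to actually run.

```python
def build_inpaint_control_pipeline(control_pipeline_cls, control_model_path, inpaint_model_path):
    """Sketch of the swap above. control_pipeline_cls is the
    StableDiffusionControlNetInpaintImg2ImgPipeline class from this repo,
    passed in rather than imported so no import path is assumed."""
    import torch
    from diffusers import StableDiffusionInpaintPipeline

    # Load the converted ControlNet pipeline and the SD 1.5 inpainting pipeline.
    pipe_control = control_pipeline_cls.from_pretrained(
        control_model_path, torch_dtype=torch.float16
    ).to("cuda")
    pipe_inpaint = StableDiffusionInpaintPipeline.from_pretrained(
        inpaint_model_path, torch_dtype=torch.float16
    ).to("cuda")

    # Swap in the inpainting UNet and override in_channels, exactly as in
    # the comment above.
    pipe_control.unet = pipe_inpaint.unet
    pipe_control.unet.in_channels = 4
    return pipe_control
```

Note that on recent diffusers releases `unet.in_channels` may be a read-only property, in which case the last assignment would need adjusting.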
Can you give me your email address? I'd also like to ask for your WeChat ~~~
It turned out to be a diffusers version error. Your model uses diffusers 0.14.0.dev0, but I'm on 0.13.0.dev0. Thank you ~
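Since the missing-keys error came down to a diffusers version mismatch, a small stdlib-only check can catch this early. A sketch, with the 0.14.0 threshold taken from the versions mentioned above:

```python
def diffusers_too_old(installed: str, required: str = "0.14.0") -> bool:
    """Return True when the installed diffusers version (e.g. '0.13.0.dev0')
    is older than the version the model was exported with."""
    def numeric(v: str):
        # Keep only the leading numeric release segments: '0.13.0.dev0' -> (0, 13, 0)
        parts = []
        for p in v.split("."):
            if p.isdigit():
                parts.append(int(p))
            else:
                break
        return tuple(parts)
    return numeric(installed) < numeric(required)

print(diffusers_too_old("0.13.0.dev0"))  # True: 0.13 is older than 0.14
print(diffusers_too_old("0.14.0.dev0"))  # False
```

You could compare `diffusers.__version__` against the required version this way and upgrade with pip if the check fires.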
I'm going to try img2img ControlNet.
If I want ControlNet to support SD img2img, in diffusers do I only need
pipe_sd.unet = pipe_control.unet
? Is that right?
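A sketch of that idea, for the record: build a plain img2img pipeline and drop in the UNet from an already-loaded ControlNet pipeline. Whether the swap is actually needed isn't settled in this thread (both UNets here are standard 4-channel SD 1.5 UNets), the model path is a placeholder, and this assumes `StableDiffusionImg2ImgPipeline` from diffusers:

```python
def make_img2img_with_control_unet(pipe_control, sd_model_path):
    """Sketch of the swap asked about above: load a plain SD img2img pipeline
    and replace its UNet with the one from an already-loaded ControlNet
    pipeline. sd_model_path is a placeholder; requires diffusers and CUDA."""
    import torch
    from diffusers import StableDiffusionImg2ImgPipeline

    pipe_sd = StableDiffusionImg2ImgPipeline.from_pretrained(
        sd_model_path, torch_dtype=torch.float16
    ).to("cuda")
    # The swap from the question above.
    pipe_sd.unet = pipe_control.unet
    return pipe_sd
```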