I've trained a ControlNet for the SD2 inpainting model by concatenating the control image, the masked image, and the mask itself.
What do I need to change in the inference code to use this model? Loading the checkpoint fails with:
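For context, the 9-channel input layout described above can be sketched as follows. This is illustrative only (tensor names and sizes are assumptions, not the actual training code): 4 noisy latent channels, 4 masked-image latent channels, and 1 downsampled mask channel are concatenated along the channel dimension.

```python
import torch

# Illustrative shapes for a 512x512 image encoded to a 64x64 latent.
noisy_latent = torch.randn(1, 4, 64, 64)         # standard SD latent
masked_image_latent = torch.randn(1, 4, 64, 64)  # VAE-encoded masked image
mask = torch.ones(1, 1, 64, 64)                  # mask resized to latent resolution

# The inpainting UNet's first conv expects all three concatenated: 4 + 4 + 1 = 9 channels.
unet_input = torch.cat([noisy_latent, masked_image_latent, mask], dim=1)
print(unet_input.shape)  # torch.Size([1, 9, 64, 64])
```

This 9-channel layout is what produces the [320, 9, 3, 3] shape of the first conv weight in the checkpoint.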
size mismatch for input_blocks.0.0.weight: copying a param with shape torch.Size([320, 9, 3, 3]) from checkpoint, the shape in current model is torch.Size([320, 4, 3, 3])
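The mismatch indicates the inference code is instantiating a standard 4-channel UNet while the checkpoint was trained with the 9-channel inpainting UNet. A likely fix is to point inference at an inpainting-style config so the first conv is built with 9 input channels. A minimal sketch, assuming a ControlNet/ldm-style yaml config (the key path is illustrative and may differ in your repo):

```yaml
# Hypothetical config fragment: widen the UNet's first conv to the
# inpainting layout (4 latent + 4 masked-image latent + 1 mask).
model:
  params:
    unet_config:
      params:
        in_channels: 9
```

With the channel counts matching, the checkpoint's input_blocks.0.0.weight ([320, 9, 3, 3]) should load without a size mismatch.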