I have the following function for ControlNet and was wondering whether register_controlnet_pipeline will support this use case. It looks like the current implementation doesn't support it?
Any help with this would be much appreciated.
import cv2
import torch
from PIL import Image

def generate_sd_images(self, image, mask, prompt):
    # Free cached GPU memory before running the pipeline
    torch.cuda.empty_cache()
    resized_image = cv2.resize(image, (256, 256))
    canny_image_pil = self.get_canny_controlnet(image)
    with torch.autocast("cuda"):
        x_samples = self.pipe(
            prompt,
            negative_prompt="deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime:1.4), text, close up, cropped, out of frame, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, nude, naked, lowres, bad anatomy, bad hands, cropped, worst quality",
            num_images_per_prompt=self.num_samples,
            num_inference_steps=self.ddim_steps,
            image=Image.fromarray(resized_image),
            generator=self.generator,
            control_image=canny_image_pil,
            mask=Image.fromarray(mask),
            height=256,
            width=256,
        ).images
    return x_samples
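For reference, this is roughly how I would expect to wire it up. It's only a sketch: the register_controlnet_pipeline(pipe) call is exactly what I'm asking about, and the diffusers pipeline/model classes and checkpoints below are placeholders from my setup, not something taken from this repo.

    import torch
    from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline

    # Placeholder ControlNet + inpainting pipeline from my setup
    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting",
        controlnet=controlnet,
        torch_dtype=torch.float16,
    ).to("cuda")

    # The call I'm asking about -- assuming it takes the pipeline object like
    # the other register_* helpers; imported from this repo
    register_controlnet_pipeline(pipe)

    self.pipe = pipe  # generate_sd_images() then calls self.pipe(...) as above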