GoGiants1 opened 11 months ago
It works.
Just use the below:
```python
from diffusers import (
    StableDiffusionXLControlNetPipeline,
    ControlNetModel,
    DDIMScheduler,
)
import torch
import mediapy
import sa_handler
import math

scheduler = DDIMScheduler(
    beta_start=0.00085,
    beta_end=0.012,
    beta_schedule="scaled_linear",
    clip_sample=False,
    set_alpha_to_one=False,
)

canny_controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0",
    variant="fp16",
    use_safetensors=True,
    torch_dtype=torch.float16,
).to("cuda")

pipeline = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=canny_controlnet,
    scheduler=scheduler,
    variant="fp16",
    use_safetensors=True,
    torch_dtype=torch.float16,
).to("cuda")
```
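Note that `sa_handler` is imported above but never attached to the pipeline. In the style-aligned repo the shared self-attention handler is normally registered before inference; a minimal sketch, assuming the `Handler` / `StyleAlignedArgs` API from that repo (the argument values here are illustrative, not prescriptive):

```python
# Register style-aligned shared attention on the pipeline (assumes the repo's sa_handler API).
sa_args = sa_handler.StyleAlignedArgs(
    share_group_norm=False,
    share_layer_norm=False,
    share_attention=True,   # share attention across the batch so outputs stay style-aligned
    adain_queries=True,
    adain_keys=True,
    adain_values=False,
)
handler = sa_handler.Handler(pipeline)
handler.register(sa_args)
```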
And for inference:
```python
images_a = pipeline(
    prompts,
    latents=latents,
    image=canny_image,
    controlnet_conditioning_scale=0.1,
    callback_on_step_end=inversion_callback,
    num_inference_steps=num_inference_steps,
    guidance_scale=8,
).images
```
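Here `latents` and `inversion_callback` are assumed to come from a DDIM inversion of the reference image, along the lines of the repo's inversion helpers. A rough sketch, assuming `inversion.ddim_inversion` and `inversion.make_inversion_callback` behave as in the style-aligned example notebook (the file name, prompts, and resolution below are placeholders):

```python
# Sketch: obtain `latents` and `inversion_callback` by DDIM-inverting a reference image.
# Assumes the `inversion` module from the style-aligned repo; exact signatures may differ.
import numpy as np
from diffusers.utils import load_image
import inversion

ref_prompt = "a photo of ..."             # placeholder: describes the reference image
ref_image = load_image("reference.png")   # placeholder path to the reference image
x0 = np.array(ref_image.resize((1024, 1024)))
num_inference_steps = 50

# Invert the reference image into a trajectory of noisy latents (last arg: guidance scale).
zts = inversion.ddim_inversion(pipeline, x0, ref_prompt, num_inference_steps, 2)
zT, inversion_callback = inversion.make_inversion_callback(zts, offset=5)

# The first prompt reconstructs the reference; the others are the style-aligned targets.
prompts = [ref_prompt, "a robot"]         # placeholder target prompt
latents = torch.randn(len(prompts), 4, 128, 128, dtype=pipeline.unet.dtype).to("cuda")
latents[0] = zT
```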
I would also like to know how to invert reference images when using ControlNet; could you provide more detailed code? Thanks!
Refer to this: Control-Style-Aligned-Generation
Thank you for sharing this interesting project.
I've explored the code and tried the demo. I noticed that inversion.py, which inverts reference images, only covers the StableDiffusionXLPipeline. Could you also provide the corresponding code for the StableDiffusion pipeline and the ControlNet models?