google / style-aligned

Official code for "Style Aligned Image Generation via Shared Attention"
Apache License 2.0
1.16k stars 83 forks

[Request] inversion for `Stablediffusion` and `Controlnet` #12

Open GoGiants1 opened 7 months ago

GoGiants1 commented 7 months ago

Thank you for sharing this interesting project.

I've explored the code and tried the demo. I noticed that `inversion.py`, which generates images based on reference images, only covers `StableDiffusionXLPipeline`. Could you also provide the corresponding code for the StableDiffusion pipeline and the ControlNet models?

imiraoui commented 7 months ago

It works. Just use the code below:

```python
import math

import torch
import mediapy
from diffusers import (
    ControlNetModel,
    DDIMScheduler,
    StableDiffusionXLControlNetPipeline,
)

import sa_handler

scheduler = DDIMScheduler(
    beta_start=0.00085,
    beta_end=0.012,
    beta_schedule="scaled_linear",
    clip_sample=False,
    set_alpha_to_one=False,
)

canny_controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0",
    variant="fp16",
    use_safetensors=True,
    torch_dtype=torch.float16,
).to("cuda")

pipeline = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=canny_controlnet,
    scheduler=scheduler,
    variant="fp16",
    use_safetensors=True,
    torch_dtype=torch.float16,
).to("cuda")
```

And for inference:

```python
images_a = pipeline(
    prompts,
    latents=latents,
    image=canny_image,
    controlnet_conditioning_scale=0.1,
    callback_on_step_end=inversion_callback,
    num_inference_steps=num_inference_steps,
    guidance_scale=8,
).images
```

LDYang694 commented 3 months ago

I would also like to know how to invert reference images using ControlNet. Could you provide more detailed code? Thanks!