tencent-ailab / IP-Adapter

The image prompt adapter is designed to enable a pretrained text-to-image diffusion model to generate images with an image prompt.
Apache License 2.0

How to add LoRA to the pipeline? #297

Open FurkanGozukara opened 9 months ago

FurkanGozukara commented 9 months ago

This is an example pipeline from your Hugging Face page. How can we add a LoRA to it?

import torch
from diffusers import StableDiffusionPipeline, DDIMScheduler, AutoencoderKL

v2 = False
base_model_path = "SG161222/Realistic_Vision_V4.0_noVAE"
vae_model_path = "stabilityai/sd-vae-ft-mse"
image_encoder_path = "laion/CLIP-ViT-H-14-laion2B-s32B-b79K"
ip_ckpt = "ip-adapter-faceid-plus_sd15.bin" if not v2 else "ip-adapter-faceid-plusv2_sd15.bin"
device = "cuda"

noise_scheduler = DDIMScheduler(
    num_train_timesteps=1000,
    beta_start=0.00085,
    beta_end=0.012,
    beta_schedule="scaled_linear",
    clip_sample=False,
    set_alpha_to_one=False,
    steps_offset=1,
)
vae = AutoencoderKL.from_pretrained(vae_model_path).to(dtype=torch.float16)
pipe = StableDiffusionPipeline.from_pretrained(
    base_model_path,
    torch_dtype=torch.float16,
    scheduler=noise_scheduler,
    vae=vae,
    feature_extractor=None,
    safety_checker=None
)
xiaohu2015 commented 9 months ago

pipe.load_lora_weights(lora_ckpt)
pipe.fuse_lora()
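For context, here is a minimal sketch of where those two calls fit relative to the pipeline above. It assumes lora_ckpt points to a local LoRA file compatible with diffusers (the path is hypothetical) and that the FaceID Plus wrapper from this repo is used for the checkpoint in the example:

# Minimal sketch (not from the thread): load and fuse the LoRA into the base
# pipeline *before* wrapping it with the IP-Adapter.
from ip_adapter.ip_adapter_faceid import IPAdapterFaceIDPlus

lora_ckpt = "path/to/lora.safetensors"  # hypothetical path to your LoRA weights

pipe.load_lora_weights(lora_ckpt)
pipe.fuse_lora()

# Wrap the (now LoRA-fused) pipeline with the IP-Adapter FaceID Plus model.
ip_model = IPAdapterFaceIDPlus(pipe, image_encoder_path, ip_ckpt, device)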

FurkanGozukara commented 9 months ago

pipe.load_lora_weights(lora_ckpt)
pipe.fuse_lora()

Awesome, thank you!

jszhujun2010 commented 9 months ago

Is it possible to train a LoRA while keeping the IP-Adapter weights fixed? Or is that even reasonable? If it is, is there any sample code?

xiaohu2015 commented 9 months ago

Yes, I think you can.
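One way to read this: freeze the IP-Adapter weights and mark only the LoRA parameters as trainable. A rough sketch, assuming the UNet already has the IP-Adapter attention processors and LoRA layers injected; the names image_proj_model and adapter_modules, and the "lora" parameter-name filter, are illustrative and depend on your training setup:

# Rough sketch (not from this repo): collect only LoRA parameters for the optimizer,
# keeping the IP-Adapter weights frozen.
import itertools
import torch

def lora_only_params(unet, image_proj_model, adapter_modules):
    # Freeze everything first, including the IP-Adapter image projection
    # and its attention modules.
    for p in itertools.chain(unet.parameters(),
                             image_proj_model.parameters(),
                             adapter_modules.parameters()):
        p.requires_grad_(False)

    # Re-enable gradients only for the LoRA parameters inside the UNet.
    params = []
    for name, p in unet.named_parameters():
        if "lora" in name:  # illustrative filter; match however your LoRA layers are named
            p.requires_grad_(True)
            params.append(p)
    return params

# optimizer = torch.optim.AdamW(lora_only_params(unet, image_proj_model, adapter_modules), lr=1e-4)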