mulanai / MuLan

MuLan: Adapting Multilingual Diffusion Models for 110+ Languages (adds multilingual support to any diffusion model without additional training)

how to load lora model after transformed pipe #5

Open zengjie617789 opened 4 months ago

zengjie617789 commented 4 months ago

Here is the code snippet:

```python
import torch
import mulankit
from diffusers import DPMSolverMultistepScheduler
# RegionalDiffusionXLPipeline is assumed to be importable from its own project.

pipe = RegionalDiffusionXLPipeline.from_pretrained(
    model_id, torch_dtype=torch.float16, use_safetensors=True, variant="fp16"
).to("cuda")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

mulan_dir = "/****/sdxl_aesthetic.pth"
text_encoder_path = "**/models/InternVL-14B-224px"
pipe = mulankit.transform(pipe, mulan_dir, text_encoder_path=text_encoder_path)
```

I found that the pipe does not work after I transform it into a MuLan pipe; it raises this error:

ValueError: do not know how to get attention modules for: InternVLTextModel

But when I load the LoRA model first and then transform the pipe, it works.

Zeqiang-Lai commented 4 months ago

It might be caused by the fact that diffusers' load_lora includes logic for text-encoder LoRA, which is not compatible with InternVL.

Nonetheless, if your LoRA has no text-encoder part, you can safely transform the pipe into a MuLan pipe as the last step.
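A minimal sketch of the working order described above (LoRA first, MuLan transform last). The base model ID and LoRA path here are placeholders, and this assumes a standard SDXL pipeline; it is illustrative, not a verified recipe:

```python
import torch
import mulankit
from diffusers import StableDiffusionXLPipeline

# Hypothetical base model; substitute your own checkpoint.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16, variant="fp16",
).to("cuda")

# 1) Load the LoRA while the pipe still has its original CLIP text encoders,
#    so diffusers' text-encoder LoRA logic sees classes it understands.
pipe.load_lora_weights("path/to/your_lora.safetensors")  # placeholder path

# 2) Transform into a MuLan pipe as the LAST step; the InternVL text encoder
#    is swapped in only after LoRA loading is done.
pipe = mulankit.transform(pipe, "path/to/sdxl_aesthetic.pth")  # placeholder path
```

This ordering avoids the ValueError because load_lora_weights never has to inspect the InternVLTextModel for attention modules.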