Open: m-nameer opened this issue 2 months ago
@m-nameer You can just load the LoRA on the first pipe. Using the example from https://github.com/theblackhatmagician/adetailer_sdxl/blob/main/inference_example.py:
import torch
from diffusers import StableDiffusionXLPipeline

model_path = r"F:\Pranav\checkpoints\stable-diffusion-xl-base-1.0\sd_xl_base_1.0.safetensors"
pipe = StableDiffusionXLPipeline.from_single_file(model_path, safety_checker=None, variant="fp16", torch_dtype=torch.float16).to("cuda")

# Load the LoRA onto the base pipe and activate it
lora_path = "/some/path/to/lora.safetensors"
pipe.load_lora_weights(lora_path, adapter_name="nameofyourlora")
pipe.set_adapters(["nameofyourlora"], adapter_weights=[1.0])
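A quick usage sketch once the adapter is active; the prompt, trigger word, and sampler settings below are placeholders rather than part of the original example:

image = pipe(
    prompt="photo of a person, <your-lora-trigger-word>",  # include the LoRA's trigger word if it has one
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
image.save("lora_test.png")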
In the example, the inpainting pipeline reuses the weights of the original pipe (on line 38):
ad_pipe = AdPipelineBase(**ad_components)
so any change made to the original pipe (such as loading a LoRA) carries over to the AdPipelineBase as well.
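A minimal sketch of why this works, assuming ad_components is built from the base pipe's modules (e.g. via pipe.components) as in the linked inference_example.py; the exact keys used there may differ:

ad_components = pipe.components   # vae, unet, text encoders, tokenizers, scheduler, ...
ad_pipe = AdPipelineBase(**ad_components)
# The two pipelines share the same module objects (not copies), so the
# LoRA patched into pipe's UNet / text encoders is also active in ad_pipe.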
How can I use different LoRA models to detail faces, as we did in AUTOMATIC1111? In the webui we used [SEP] to differentiate between different concepts; how can we do this here?