william-murray1204 / stable-diffusion-cpp-python

stable-diffusion.cpp bindings for python
MIT License

Can you provide an example of using LoRA in stable-diffusion-xl? #2

Closed · svjack closed this 4 months ago

svjack commented 4 months ago

Thank you for providing such a convenient tool. Can you provide an example of using LoRA in SDXL? 😊 I tried the following LoRA call, but the Python kernel was killed.

wget https://huggingface.co/svjack/sd-ggml/resolve/main/sd_xl_base_1.0.safetensors -O sd_xl_base_1.0.safetensors
wget https://huggingface.co/madebyollin/sdxl-vae-fp16-fix/resolve/main/sdxl_vae.safetensors -O sdxl_vae.safetensors
wget https://huggingface.co/svjack/Genshin-Impact-LandScape-lora-sd-xl-rk32/resolve/main/pytorch_lora_weights.safetensors -O pytorch_lora_weights.safetensors
mkdir lora_dir
cp pytorch_lora_weights.safetensors lora_dir

from stable_diffusion_cpp import StableDiffusion

stable_diffusion = StableDiffusion(
    model_path="sd_xl_base_1.0.safetensors",
    vae_path="sdxl_vae.safetensors",
    wtype="q4_0",  # Weight type (options: default, f32, f16, q4_0, q4_1, q5_0, q5_1, q8_0)
    # seed=1337,  # Uncomment to set a specific seed
    lora_model_dir="lora_dir/",
)

# The LoRA is applied via the <lora:pytorch_lora_weights:1> tag appended to the prompt
prompt = "European, green coniferous tree, yellow coniferous tree, rock, creek, sunny day, pastel tones, 3D<lora:pytorch_lora_weights:1>"
output = stable_diffusion.txt_to_img(
    prompt,
    width=1024,
    height=1024,
    sample_steps=1,
    seed=-1,
)
output[0]
william-murray1204 commented 4 months ago

I suspect the issue is coming from setting wtype="q4_0" when your model is a safetensors model. Try removing that line or setting wtype="default". The code should automatically assign the right model type. Something like this should work:

from stable_diffusion_cpp import StableDiffusion

stable_diffusion = StableDiffusion(
    model_path="sd_xl_base_1.0.safetensors",
    vae_path="sdxl_vae.safetensors",
    wtype="default",  # Weight type (options: default, f32, f16, q4_0, q4_1, q5_0, q5_1, q8_0) - or remove this line
    lora_model_dir="lora_dir/",
)

# The LoRA is applied via the <lora:pytorch_lora_weights:1> tag appended to the prompt
prompt = "European, green coniferous tree, yellow coniferous tree, rock, creek, sunny day, pastel tones, 3D<lora:pytorch_lora_weights:1>"
output = stable_diffusion.txt_to_img(
    prompt,
    width=1024,
    height=1024,
    sample_steps=1,
    seed=-1,
)
output[0]

If you intend to quantize the sd_xl_base_1.0 model, you can use the low-level API like this:

import stable_diffusion_cpp.stable_diffusion_cpp as sd_cpp

sd_cpp.convert(
    "sd_xl_base_1.0.safetensors".encode("utf-8"),  # SafeTensors model path
    "sdxl_vae.safetensors".encode("utf-8"),
    "sd_xl_base_1.0.q4_0.gguf".encode("utf-8"),  # Output quantized GGUF model path
    sd_cpp.GGMLType.SD_TYPE_Q4_0,  # Quantization type
)

Then use the new quantized GGUF model in place of your safetensors model.
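For example, a minimal sketch that reuses the arguments from the snippet above with the quantized GGUF model swapped in (note the LoRA caveat below):

from stable_diffusion_cpp import StableDiffusion

stable_diffusion = StableDiffusion(
    model_path="sd_xl_base_1.0.q4_0.gguf",  # Quantized GGUF model produced by the convert step
    vae_path="sdxl_vae.safetensors",
    lora_model_dir="lora_dir/",
)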

It's worth noting, however, that as far as I can tell it isn't possible to use a quantized model with a LoRA without causing a "GGML_ASSERT" error. I believe this is a stable-diffusion.cpp issue, as I get the same errors when using the original stable-diffusion.cpp CLI tool, and it has been raised in the stable-diffusion.cpp repo before: SDXL: LoRa problem

Even if the quantized model + LoRA combination did work, stable-diffusion.cpp doesn't recommend it and warns that "In quantized models when applying LoRA, the images have poor quality".