DimitriosKakouris opened 2 months ago
Hi @DimitriosKakouris, I have the same issue with my fine-tuned LoRA models, while the pipeline works with other weights trained with DreamBooth.
Has anyone successfully run inference with the AI-toolkit LoRA weights, either with or without diffusers?
I was having the same issue with a black image. I think there might have been an issue with authentication. Here's my code now that works:
# Authenticate first so the gated FLUX.1-dev weights can be downloaded
from huggingface_hub import login
login(token="...")

import torch
from diffusers import AutoPipelineForText2Image

# Load the base model, then attach the trained LoRA weights
pipeline = AutoPipelineForText2Image.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.float16, use_safetensors=True).to('cuda')
pipeline.load_lora_weights('output/lora', weight_name='lora.safetensors')

prompt = "in the style of CNSTLL, white car at a gas station, night time, cinestill 800T"
image = pipeline(prompt).images[0]
image.save("./fluxlora/flux-lora.png")
Hi @sitefeng, I do not believe there is an issue with the authentication token; unless an auth error shows up explicitly in your terminal output when you run the script, the weights are being fetched, at least for me. I did have some success converting the kohya_ss format to diffusers with the convert_flux_lora.py script, but the output image is still unaffected by the LoRA weights for some reason.
My process got killed (presumably out of memory)...
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained("/ai/FLUX.1-dev", torch_dtype=torch.bfloat16)
# Offload submodules to CPU between forward passes to cut GPU memory use
pipeline.enable_model_cpu_offload()
pipeline.load_lora_weights('/ai/ai-toolkit/output/my_first_flux_lora_v1', weight_name='my_first_flux_lora_v1_000001000.safetensors')
image = pipeline('a Yarn art style tarot card').images[0]
This seems to work. I'm not sure if set_adapters is necessary.
EDIT: the output image does not seem to be affected by the LoRA.
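A quick way to tell whether the LoRA is applied at all is to compare an exaggerated LoRA scale against a scale of zero. A sketch, assuming the Flux pipeline reads a "scale" entry from joint_attention_kwargs for LoRA scaling, as recent diffusers releases do:

# Identical outputs at scale 0.0 and 1.5 would mean the LoRA is not being applied
image_off = pipeline('a Yarn art style tarot card', joint_attention_kwargs={'scale': 0.0}).images[0]
image_on = pipeline('a Yarn art style tarot card', joint_attention_kwargs={'scale': 1.5}).images[0]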
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16, device_map='balanced')
pipe.load_lora_weights(lora_folder_path, weight_name='finetuning_flux_lora_v1-dev.safetensors', adapter_name='lora')
pipe.set_adapters('lora')
# Merge the LoRA weights into the base model
pipe.fuse_lora(adapter_names=['lora'])
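After fuse_lora the LoRA weights are merged into the base model, so generation needs no further adapter bookkeeping. A minimal follow-up (the prompt is just illustrative):

# Inference runs as usual once the LoRA has been fused
image = pipe('a Yarn art style tarot card').images[0]
image.save('flux-lora-fused.png')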
I want to use a FLUX.1 Dev LoRA from the Hugging Face repo https://huggingface.co/adirik/flux-cinestill. I downloaded the safetensors file and run it with the diffusers library using the Python script below:
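(The exact script was not preserved in this thread; the following is a sketch of what it presumably looked like, based on the replies. The weight_name and dtype are assumptions.)

# Reconstruction sketch; weight_name is an assumption, check the repo's files
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights("adirik/flux-cinestill", weight_name="lora.safetensors")
image = pipeline("in the style of CNSTLL, white car at a gas station, night time, cinestill 800T").images[0]
image.save("flux-lora.png")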
I get a black image of about 1.3 kB, while I get normal generations of about 1.4 MB when not using the LoRA. What is wrong?