Open Xuzzo opened 3 months ago
Some of your LoRA adapters are not loaded on Hugging Face. The easiest way to reproduce this is by generating images, e.g. here:
Prompt: "A cute corgi lives in a house made out of sushi, anime"
LoRA: flux-anime
CFG Scale: 3.5
Steps: 28
Height: 512
Width: 256
Seed: 0
The image with LoRA scale 0.9:
The image with LoRA scale 0:
In other words, the LoRA weights have no effect.
I have also tried it directly in code:
```python
import torch
from diffusers import DiffusionPipeline

base_model = "black-forest-labs/FLUX.1-dev"
pipe = DiffusionPipeline.from_pretrained(base_model, torch_dtype=torch.bfloat16)
pipe.load_lora_weights(
    "XLabs-AI/flux-lora-collection", weight_name="anime_lora.safetensors"
)
image = pipe(
    prompt="A cute corgi lives in a house made out of sushi, anime",
    num_inference_steps=28,
    guidance_scale=3.5,
    width=512,
    height=256,
    generator=torch.Generator(device="cpu").manual_seed(0),
    joint_attention_kwargs={"scale": 0.9},
).images[0]
```
Including or omitting the `pipe.load_lora_weights` call makes no difference to the output.
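One way to check whether `load_lora_weights` actually touched the model, independent of image quality, is to fingerprint the transformer's parameters before and after loading: if the checksum is identical, the adapter was never applied. This is a minimal, library-agnostic sketch; the `fingerprint` helper below is my own, not a diffusers API, and the toy lists stand in for real parameter tensors.

```python
def fingerprint(params):
    """Cheap checksum over an iterable of flat float sequences.

    With diffusers you would feed it something like
    (p.detach().float().flatten().tolist() for p in pipe.transformer.parameters())
    once before and once after load_lora_weights.
    """
    total, count = 0.0, 0
    for seq in params:
        for v in seq:
            total += v
            count += 1
    return total, count

# Toy stand-in for model weights:
before = [[0.1, 0.2], [0.3]]
after = [[0.1, 0.25], [0.3]]  # one value changed, as a loaded LoRA should cause

print(fingerprint(before) == fingerprint(after))  # prints False -> weights changed
```

If the two fingerprints come out equal on the real pipeline, that would confirm the adapter is silently ignored rather than loaded with zero effect.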