black-forest-labs / flux

Official inference repo for FLUX.1 models
Apache License 2.0

Lora doesn't work with fp8 version FLUX! #157

Open 1499544 opened 1 month ago

1499544 commented 1 month ago

Hi, I tried the fp8 version of FLUX on diffusers, and it is amazing. However, it seems that LoRA doesn't work: with or without a LoRA, the fp8 version outputs the same pictures. Here's my code; can someone help me?

```python
import torch
from diffusers import DiffusionPipeline, FluxTransformer2DModel
from optimum.quanto import freeze, qfloat8, quantize

device = "cuda" if torch.cuda.is_available() else "cpu"

transformer = FluxTransformer2DModel.from_single_file(
    "https://huggingface.co/Kijai/flux-fp8/blob/main/flux1-dev-fp8.safetensors",
    torch_dtype=torch.bfloat16,
)
quantize(transformer, weights=qfloat8)
freeze(transformer)

pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.transformer = transformer
pipe = pipe.to(device)

prompt = "A blue jay standing on a large basket of rainbow macarons, disney style"
prompt = "The portrait of a brazilian person"

generator = torch.Generator(device="cpu").manual_seed(42)
image = pipe(prompt, generator=generator, guidance_scale=3.5).images[0]
image.save("no_lora.png")

pipe.load_lora_weights(
    "XLabs-AI/flux-lora-collection", weight_name="disney_lora.safetensors"
)
pipe.load_lora_weights("XLabs-AI/flux-RealismLora")

generator = torch.Generator(device="cpu").manual_seed(42)
image = pipe(
    prompt,
    generator=generator,
    joint_attention_kwargs={"scale": 1},
    guidance_scale=3.5,
).images[0]
image.save("lora_1_0.png")

generator = torch.Generator(device="cpu").manual_seed(42)
image = pipe(
    prompt,
    generator=generator,
    joint_attention_kwargs={"scale": 1.75},
    guidance_scale=3.5,
).images[0]
image.save("lora_1_75.png")
```

sitatec commented 1 day ago

If you are using the diffusers library, you need to fuse the LoRA into the base weights first, and only then quantize. Once the transformer has been quantized and frozen, loading a LoRA on top of it no longer changes the effective weights.
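As a minimal sketch of why the order matters (pure NumPy, hypothetical shapes, and a crude uniform quantizer standing in for quanto's qfloat8): fusing folds the low-rank update `W + scale * (B @ A)` into the base weight before quantization, whereas quantizing and freezing first leaves a weight that never absorbs the LoRA delta.

```python
import numpy as np

rng = np.random.default_rng(0)

# Base weight and a rank-4 LoRA update (hypothetical toy shapes).
d_out, d_in, rank = 8, 8, 4
W = rng.standard_normal((d_out, d_in)).astype(np.float32)
B = rng.standard_normal((d_out, rank)).astype(np.float32)
A = rng.standard_normal((rank, d_in)).astype(np.float32)
scale = 1.0

def quantize_like(w, n_bits=4):
    # Crude uniform quantizer standing in for qfloat8:
    # snap every value to a fixed grid derived from the max magnitude.
    step = np.abs(w).max() / (2 ** (n_bits - 1))
    return np.round(w / step) * step

# Correct order: fuse the LoRA into the base weight, THEN quantize.
W_fused_then_quant = quantize_like(W + scale * (B @ A))

# Broken order: quantize (and freeze) first; the frozen weight
# can no longer pick up the LoRA update afterwards.
W_quant_frozen = quantize_like(W)

# The two paths give different weights: only the first one
# actually contains the LoRA contribution.
print(np.allclose(W_fused_then_quant, W_quant_frozen))
```

In diffusers terms, a hedged (untested) sketch of the fix for the code above would be: call `pipe.load_lora_weights(...)` and `pipe.fuse_lora()` on a bf16 pipeline first, then `pipe.unload_lora_weights()`, and only after that run `quantize(pipe.transformer, weights=qfloat8)` and `freeze(pipe.transformer)`.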