Open BenjaminBossan opened 1 month ago
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
not stale
This would be extremely helpful. Currently, Flux with LoRA mostly runs on A100-class hardware, but Flux itself can run comfortably on a 4090, a consumer card. If this issue is resolved, it would make it possible to run Flux with LoRA on consumer cards like the 4090. Thank you for your work @BenjaminBossan
I am able to run this on a 4090:
from diffusers import DiffusionPipeline
import torch

pipeline = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipeline.load_lora_weights(
    "TheLastBen/Jon_Snow_Flux_LoRA", weight_name="jon_snow.safetensors"
)
# Fuse the LoRA weights into the base model, then drop the now-redundant
# LoRA parameters so they don't take up extra memory.
pipeline.fuse_lora()
pipeline.unload_lora_weights()
# Keep components on the CPU and move each one to the GPU only when needed.
pipeline.enable_model_cpu_offload()

prompt = "jon snow eating pizza with ketchup"
out = pipeline(prompt, num_inference_steps=20, guidance_scale=4.0)
out.images[0].save("output.png")
What am I missing?
Thanks for the write-up. I will try it again later.
Thank you @sayakpaul. I was running the FP8 version following the diffusers docs, and it did not load the LoRA successfully. Running the bfloat16 version causes my 4090 to run out of CUDA memory:
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 90.00 MiB. GPU 0 has a total capacity of 23.65 GiB of which 38.06 MiB is free. Including non-PyTorch memory, this process has 23.60 GiB memory in use. Of the allocated memory 21.21 GiB is allocated by PyTorch, and 1.94 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
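As an aside, the allocator hint from that error message can be tried by setting the environment variable before torch initializes CUDA, e.g. at the very top of the script (a minimal sketch; it mitigates fragmentation, not total memory use):

import os
# Must be set before torch initializes CUDA; suggested by the OOM message
# above to reduce allocator fragmentation.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"
import torch

The FP8 script I was running: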
import torch
from diffusers import FluxTransformer2DModel, FluxPipeline
from transformers import T5EncoderModel, CLIPTextModel
from optimum.quanto import freeze, qfloat8, quantize

bfl_repo = "black-forest-labs/FLUX.1-dev"
dtype = torch.bfloat16

# Load the FP8 transformer checkpoint, then quantize and freeze its weights.
transformer = FluxTransformer2DModel.from_single_file(
    "https://huggingface.co/Kijai/flux-fp8/blob/main/flux1-dev-fp8.safetensors",
    torch_dtype=dtype,
)
quantize(transformer, weights=qfloat8)
freeze(transformer)

# Quantize the T5 text encoder as well; it is the second-largest component.
text_encoder_2 = T5EncoderModel.from_pretrained(
    bfl_repo, subfolder="text_encoder_2", torch_dtype=dtype
)
quantize(text_encoder_2, weights=qfloat8)
freeze(text_encoder_2)

# Build the pipeline without these two components, then plug in the
# quantized versions.
pipe = FluxPipeline.from_pretrained(
    bfl_repo, transformer=None, text_encoder_2=None, torch_dtype=dtype
)
pipe.transformer = transformer
pipe.text_encoder_2 = text_encoder_2
pipe.enable_model_cpu_offload()

prompt = "A cat holding a sign that says hello world"
image = pipe(
    prompt,
    guidance_scale=3.5,
    output_type="pil",
    num_inference_steps=20,
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]
image.save("flux-fp8-dev.png")
Why are you quantizing when you can run it perfectly well without quantization? And we haven't yet landed support for loading LoRAs into a quantized base model, so I am not going to comment on that.
Understood. I really appreciate your work. Let me reframe my problem: as of now, quantized Flux can run comfortably on a 4090, but a 4090 cannot run the BF16 version of Flux seamlessly, at least in my case. I guess the BF16 version of Flux is really pushing the limit of 24 GB of GPU memory.
I have provided a working code snippet. If you run the example with the latest versions of peft, accelerate, transformers, and diffusers, it is expected to work. If not, please create a new issue on diffusers. If it does work, please notify this thread as well.
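One way to bring all four up to date at once: pip install -U peft accelerate transformers diffusers.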
Sure, will run the test again
Sorry, you are right. With the latest Hugging Face libraries, it does work on a 4090. Sorry for the trouble @sayakpaul
No issues, glad things worked out. You might also be interested in https://github.com/huggingface/diffusers/pull/9213/ :)
Feature request
Let's add a new quantization method to LoRA, namely optimum-quanto.
There is some more context in this diffusers issue.
Motivation
First of all, the more quantization methods we support, the better. But notably, quanto also works with MPS, which distinguishes it from other quantization methods.
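To illustrate that point, here is a minimal sketch (assuming optimum-quanto's quantize/freeze API; the qint8 choice and layer size are illustrative):

import torch
from torch import nn
from optimum.quanto import freeze, qint8, quantize

model = nn.Linear(16, 16)
quantize(model, weights=qint8)  # replaces nn.Linear with quanto's QLinear
freeze(model)                   # materializes the quantized weights
if torch.backends.mps.is_available():
    model.to("mps")             # the quantized module runs on Apple Silicon
    out = model(torch.randn(2, 16, device="mps"))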
Your contribution
I did some preliminary testing, and quanto already partly works with PEFT, as the QLinear layer is a subclass of nn.Linear and, as such, lora.Linear is applied. Some features, like inference, appear to work already. However, some features don't work correctly, like merging. Here is a very quick test; note that all the outputs involving merging are not as expected.
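A minimal sketch of such a test (not the original snippet; the toy model, the target module names, and the qint8 choice are illustrative):

import torch
from torch import nn
from optimum.quanto import freeze, qint8, quantize
from peft import LoraConfig, get_peft_model

torch.manual_seed(0)
# Toy base model; quantize() replaces its nn.Linear layers with QLinear.
base = nn.Sequential(nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 2))
quantize(base, weights=qint8)
freeze(base)

# Since QLinear subclasses nn.Linear, PEFT wraps it with lora.Linear.
config = LoraConfig(target_modules=["0", "2"], init_lora_weights=False)
model = get_peft_model(base, config)

x = torch.randn(4, 32)
out_lora = model(x)  # inference with the LoRA adapter applied: works

model.merge_adapter()  # merge the LoRA weights into the quantized base
out_merged = model(x)
# Merging should not change the outputs, but with quanto they diverge.
print(torch.allclose(out_lora, out_merged, atol=1e-4))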
I can certainly take this on when I have time, but contributions are highly welcome. For inspiration, check out past PRs that added new quantization methods.