Closed: kalle07 closed this issue 1 day ago
I use Automatic (fp16 LoRA) and LoRAs do work, at least most I've tried. I did come across several LoRAs that failed to do anything; I'm not sure what the common issue with these was, but there was always some similar one that did work. For example, the Amateur Photography LoRA V2 at 0.1 weight (first image) and at 1.0, with flux1-dev-Q8_0.gguf.
Test LoRA works with "Automatic (fp16 LoRA)" set. For example, flux1-dev-Q8_0.gguf needs to work with t5-v1_1-xxl-encoder-Q8_0.gguf; the same rule applies to the others too, e.g. Q4, Q5, Q6... Also, you may need to increase your virtual memory. I had 24 GB of VRAM (16 GB dedicated + 8 GB shared memory) and it was still too tight, leading to crashes now and then. Q4 or Q6 will take less VRAM at some risk to quality, but with fewer crashes.
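A rough back-of-envelope sketch of why the quant level matters for a 16 GB card. The parameter count (~12B for Flux.1-dev) and the nominal GGUF bits-per-weight figures are my assumptions, not from this thread, and the estimate covers the quantized weights only (no activations, VAE, or text encoders), so real usage is higher:

```python
# Back-of-envelope VRAM estimate for Flux GGUF quants.
# Assumptions: Flux.1-dev has roughly 12B transformer parameters;
# bits-per-weight are the nominal GGUF values (block-scale overhead
# included in the nominal figure, everything else ignored).

FLUX_DEV_PARAMS = 12e9  # approximate parameter count (assumption)

BITS_PER_WEIGHT = {
    "Q8_0": 8.5,     # 8-bit weights + per-block fp16 scale
    "Q6_K": 6.5625,
    "Q5_0": 5.5,
    "Q4_0": 4.5,
}

def weight_gib(quant: str) -> float:
    """Approximate size of the quantized weights alone, in GiB."""
    total_bits = FLUX_DEV_PARAMS * BITS_PER_WEIGHT[quant]
    return total_bits / 8 / 1024**3

for q in BITS_PER_WEIGHT:
    print(f"{q}: ~{weight_gib(q):.1f} GiB for weights alone")
```

On these assumptions Q8_0 already lands near 12 GiB before the T5 encoder, VAE, and activations are counted, which is consistent with 16 GB dedicated VRAM being "too tight" and Q4/Q6 crashing less.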
So I don't need a special VAE or CLIP?
When you say it "doesn't work," what do you mean? Are the outputs bad? Does it not generate at all? I've been training a LoRA with fluxgym/kohya, and it doesn't generate good results at all with Flux Q8 GGUF (very blurry/noisy), but if I use it with Flux NF4 or FP8 it looks great. So it seems there is something going on with the Q8 GGUF. I'm going to try changing my T5 encoder to Q8 as recommended by @likelovewant above, and see if that helps.
Edit: Actually, I had "Diffusion in low bits" set to "Automatic." I changed it to "Automatic (fp16 LoRA)" and that fixed it.
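A plausible reading of why that setting fixes it (my interpretation, not confirmed against the WebUI source): with plain "Automatic" the LoRA delta effectively has to pass through the low-bit representation, where small deltas fall below the quantization step, while "Automatic (fp16 LoRA)" applies the LoRA in fp16 on top of the dequantized weight. A toy sketch of that effect, with a made-up quantization step:

```python
# Toy illustration (NOT the WebUI's actual code) of why a small
# LoRA delta can vanish if it must pass through a coarse quantizer.

STEP = 1 / 16  # pretend quantization step of the base weights (assumption)

def quantize(w: float) -> float:
    """Round a weight to the nearest representable quantized value."""
    return round(w / STEP) * STEP

base = 0.50   # a base weight that is exactly representable
delta = 0.01  # a small LoRA contribution, below STEP / 2

# Path A: fold the LoRA into the weight, then re-quantize
merged_then_quantized = quantize(base + delta)

# Path B: keep the weight quantized, add the LoRA delta in high precision
dequant_then_lora = quantize(base) + delta

print(merged_then_quantized)  # the delta is rounded away
print(dequant_then_lora)      # the delta survives
```

In path A the 0.01 delta rounds back to 0.50, i.e. the LoRA does nothing; in path B it survives, which would match LoRAs "failing to do anything" or producing blurry results until the fp16 LoRA option is enabled.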
With the right t5xxl encoder it seems to work now. I struggle a bit with RAM and speed.
all alpha phase ;)
5 types of main models (incl. hyper, dev, speed, GGUF, XYZ), different VAEs / text encoders, and on top of that different "Diffusion in Low Bits" settings :D
Who is going to write an article on what works together with what, with which RAM, and at which speed? :D
It's still not working... everything else with "normal FLUX" runs fine.
Can someone tell me what the right setup is???
And I use the standard 3.
Once it worked, but with a bad LoRA result, and both VRAM (16 GB) and RAM (64 GB) overloaded.