lllyasviel / stable-diffusion-webui-forge

GNU Affero General Public License v3.0

GGUF still does not work with LoRAs #1811

Closed kalle07 closed 1 day ago

kalle07 commented 5 days ago

It's still not working ... everything else with "normal" FLUX runs fine.

Can someone tell me what the right setting is?


and I use the standard 3


Once it worked, but with a bad LoRA result, and it overloaded my VRAM (16 GB) and RAM (64 GB).

HMRMike commented 5 days ago

I use Automatic (fp16 LoRA) and LoRAs do work, at least most I've tried. I did come across several LoRAs that failed to do anything; I'm not sure what the common issue with those was, but there was always some similar one that did work. For example, the Amateur Photography LoRA V2 at 0.1 weight (first image) and at 1.0, with flux1-dev-Q8_0.gguf.

likelovewant commented 4 days ago

Tested; LoRAs work when you set Automatic (fp16 LoRA). Note that the quantization levels need to match: for example, flux1-dev-Q8_0.gguf needs to work with t5-v1_1-xxl-encoder-Q8_0.gguf, and the same rule applies to the other levels (Q4, Q5, Q6, ...). You may also need to increase your virtual memory. I have 24 GB of VRAM (16 GB dedicated + 8 GB shared memory) and it's still too tight, leading to crashes now and then. Q4 or Q6 takes less VRAM at some risk to quality, but crashes less.

kalle07 commented 4 days ago

So I don't need a special VAE or CLIP?

Jonseed commented 4 days ago

When you say it "doesn't work," what do you mean? Are the outputs bad? Does it not generate at all? I've been training a LoRA with fluxgym/kohya, and it doesn't generate good results at all with Flux Q8 GGUF (very blurry/noisy), but if I use it with Flux NF4 or FP8 it looks great. So it seems there is something going on with the Q8 GGUF. I'm going to try changing my T5 encoder to Q8 as recommended by @likelovewant above, and see if that helps.

Edit: Actually, I had "Diffusion in low bits" set to "Automatic." I changed it to "Automatic (fp16 LoRA)" and that fixed it.

kalle07 commented 4 days ago

With the right t5xxl encoder it seems to work now; I'm struggling a bit with RAM and speed.

It's all alpha phase ;) Five types of main models (incl. hyper, dev, speed, GGUF, XYZ), different VAEs / text encoders, and on top of that the different "Diffusion in low bits" options.
:D Who's going to write an article on what works with what, with how much RAM, at what speed? :D

DocShotgun commented 4 days ago

https://github.com/lllyasviel/stable-diffusion-webui-forge/issues/1807#issuecomment-2346805239

lllyasviel commented 1 day ago

moved to https://github.com/lllyasviel/stable-diffusion-webui-forge/issues/1807