-
### Your question
I can merge with ComfyUI Flux1 models: FP16 and FP8 (Kijai) using LoadCheckpoint and LoadDiffusionModel respectively; however, with the GGUF and bnb-NF4 models I cannot, even if I use t…
-
### Feature Idea
reference https://github.com/lllyasviel/stable-diffusion-webui-forge/discussions/981
### Existing Solutions
_No response_
### Other
_No response_
-
Does this support the nf4 model?
https://huggingface.co/lllyasviel/flux1-dev-bnb-nf4
-
### Feature request
In the quantization procedure for QLoRA, there is the 'nf4' storage datatype and the compute datatype (bfloat16 in the paper, which is the original) (please refer to the image). The…
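To illustrate the storage-vs-compute split the question is asking about, here is a minimal pure-Python sketch of NF4-style blockwise quantization. The helper names are hypothetical (the real kernels live in bitsandbytes); weights are *stored* as 4-bit indices into 16 fixed NF4 levels plus a per-block absmax scale, and *dequantized* to a higher-precision compute dtype (bfloat16 in the QLoRA paper) before each matmul.

```python
# The 16 NF4 quantile levels from the QLoRA paper, normalized to [-1, 1].
NF4_LEVELS = [
    -1.0, -0.6961928009986877, -0.5250730514526367, -0.39491748809814453,
    -0.28444138169288635, -0.18477343022823334, -0.09105003625154495, 0.0,
    0.07958029955625534, 0.16093020141124725, 0.24611230194568634,
    0.33791524171829224, 0.44070982933044434, 0.5626170039176941,
    0.7229568362236023, 1.0,
]

def quantize_block(weights):
    """Storage step: scale the block by its absmax, then snap each value
    to the nearest NF4 level. Returns 4-bit codes plus the scale."""
    absmax = max(abs(w) for w in weights) or 1.0
    codes = [min(range(16), key=lambda i: abs(w / absmax - NF4_LEVELS[i]))
             for w in weights]
    return codes, absmax

def dequantize_block(codes, absmax):
    """Compute step: reconstruct approximate weights in the compute dtype
    (plain Python floats stand in for bfloat16 here)."""
    return [NF4_LEVELS[c] * absmax for c in codes]

codes, scale = quantize_block([0.5, -0.25, 0.0, 1.2])
approx = dequantize_block(codes, scale)
```

This is only a conceptual sketch: real NF4 packs two 4-bit codes per byte and double-quantizes the per-block scales, but the key point stands, namely that only the dequantized values ever participate in computation.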
-
I tried to use my LoRA with NF4, but it doesn't seem to work. Will there be an update so that this works?
-
Do you plan to do this for diffusers? Here is the setup that was made for bnb-NF4:
https://github.com/huggingface/diffusers/issues/9165
I think you can remove the bnb-nf4 stuff and add GGUF.
its featur…
-
lllyasviel uploaded a v2 version of the Flux NF4 checkpoint, the differences being explained [here](https://huggingface.co/lllyasviel/flux1-dev-bnb-nf4).
On Forge the performance and outputs are …
-
It is talked about here:
https://github.com/lllyasviel/stable-diffusion-webui-forge/discussions/981
According to the author:
> (i) NF4 is significantly faster than FP8. For GPUs with 6GB/8GB …
-
For some reason dev fp8 works better than NF4 when I use a LoRA; NF4 just uses a lot of VRAM and RAM, making generation absurdly slow.
With dev fp8 I'm getting speeds like 1.01 s/it, and with …
-
https://github.com/lllyasviel/stable-diffusion-webui-forge/discussions/981
> Flux Checkpoints
> The currently supported Flux checkpoints are
>
> [flux1-dev-bnb-nf4.safetensors](https://huggingf…