-
Could you please add the following weights: **lllyasviel/flux1-dev-bnb-nf4/flux1-dev-bnb-nf4-v2.safetensors** and **ZhenyaYang/flux_1_dev_hyper_8steps_nf4/flux_1_dev_hyper_8steps_nf4.safetensors**. Th…
-
The reason for this issue is really big models, which are more than 60 GB, so diffusers tries to put all of them into GPU VRAM.
There are a couple of ways to fix it.
The first one is to add this line of code t…
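A minimal sketch of the offloading approach being described, assuming the `diffusers` library; the pipeline class and model ID below are assumptions, not taken from the truncated text:

```python
def load_flux_with_offload():
    """Sketch: load a very large Flux pipeline without exhausting GPU VRAM.

    Assumes `diffusers` and `torch` are installed and the model ID below
    is the one under discussion. enable_model_cpu_offload() keeps weights
    in system RAM and moves each sub-model to the GPU only while it runs.
    """
    import torch
    from diffusers import FluxPipeline

    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-dev",
        torch_dtype=torch.bfloat16,
    )
    # The key line: instead of pipe.to("cuda"), which would try to fit
    # all >60 GB of weights into VRAM at once, stream components on demand.
    pipe.enable_model_cpu_offload()
    return pipe

# Call load_flux_with_offload() on a machine with the model available.
```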
-
I tried to use the
**MODEL**: lllyasviel/flux1-dev-bnb-nf4 from https://huggingface.co/lllyasviel/flux1-dev-bnb-nf4
and all the components:
**VAE**: https://huggingface.co/black-forest-labs/FLUX.1-d…
-
# ComfyUI Error Report
## Error Details
- **Node Type:** PixtralGenerateText
- **Exception Type:** TypeError
- **Exception Message:** Input type float32 is not supported
## Stack Trace
```
Fi…
-
I tried to use my LoRA with NF4, but it doesn't seem to work. Will there be an update so that this works?
-
### Your question
I can merge the ComfyUI Flux1 models (FP16 and Kijai's FP8) using LoadCheckpoint and LoadDiffusionModel respectively; however, with the GGUF and bnb-NF4 models I cannot, even if I use t…
-
### Feature request
In the quantization procedure for QLoRA, there is the nf4 storage datatype and the compute datatype (bfloat16 in the paper, which is the original) (please refer to the image). The…
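The storage-vs-compute split can be illustrated with a toy 4-bit quantizer. The uniform levels below are purely for illustration: real NF4 places its 16 levels at quantiles of a normal distribution, and the compute dtype would be bfloat16 rather than Python floats.

```python
def quantize_4bit(x, levels):
    """Map each float to the index of the nearest level (the storage dtype:
    a 4-bit integer, since there are 16 levels)."""
    return [min(range(len(levels)), key=lambda i: abs(levels[i] - v)) for v in x]

def dequantize(idx, levels):
    """Look the stored indices back up as floats (the compute dtype)
    before any matmul actually runs on them."""
    return [levels[i] for i in idx]

# 16 uniform levels in [-1, 1] -- a stand-in for the NF4 codebook,
# whose real levels are normal-distribution quantiles.
LEVELS = [-1.0 + 2.0 * i / 15 for i in range(16)]

weights = [0.9, -0.33, 0.0, 0.51]
stored = quantize_4bit(weights, LEVELS)   # what lives in memory, 4 bits each
restored = dequantize(stored, LEVELS)     # what the forward pass computes with
```

The point of the two dtypes: weights are *stored* as tiny indices into the codebook, but every actual computation happens after dequantizing to the compute dtype.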
-
Does this support the nf4 model?
https://huggingface.co/lllyasviel/flux1-dev-bnb-nf4
-
Hello lllyasviel, I tried to do some DreamBooth training on an RTX 4090 with your flux1-dev-bnb-nf4-v2 model in Kohya but couldn't get it to train.
It works fine with the regular flux1-dev model. …
-
I tried loading it using `model = AutoModelForSequenceClassification.from_pretrained("nvidia/Llama-3.1-Nemotron-70B-Reward-HF", token=token, quantization_config=nf4_config).to('cuda:1')`, but this doe…
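One likely cause (a guess, since the error message is truncated): bitsandbytes-quantized models don't support moving devices via `.to()`; transformers expects the placement to be given as a `device_map` at load time instead. A sketch of the alternative call, reusing the same `nf4_config` and `token` from the snippet:

```python
def load_reward_model_nf4(token, nf4_config):
    """Sketch: load a 4-bit-quantized model directly onto GPU 1 via device_map
    instead of calling .to(), which bitsandbytes-quantized models reject.
    Assumes `transformers` and `bitsandbytes` are installed."""
    from transformers import AutoModelForSequenceClassification

    return AutoModelForSequenceClassification.from_pretrained(
        "nvidia/Llama-3.1-Nemotron-70B-Reward-HF",
        token=token,
        quantization_config=nf4_config,
        device_map={"": 1},  # place every module on cuda:1 at load time
    )

# Call load_reward_model_nf4(token, nf4_config) on a machine with the weights.
```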