-
NF4 model at 1024 × 1024 resolution on a 10-series or 20-series 8 GB graphics card: generating one image takes about four minutes
-
Flux.D needs to be patched when loading a LoRA, which requires about 25 GB of VRAM. Being a 12B model, I understand that the usage will be larger, but is it possible to make it a little smaller?
Forge…
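For context, the ~25 GB figure is roughly what you would expect if the loader dequantizes all 12B weights to 16-bit to apply the LoRA patch. A back-of-the-envelope estimate (the overhead factor here is an assumption, not a measured value):

```python
# Rough VRAM estimate for patching a LoRA into a 12B-parameter model.
# The 10% overhead factor is an assumption covering activations, buffers,
# and allocator fragmentation.
params = 12e9          # Flux is a ~12B-parameter model
bytes_per_weight = 2   # bf16/fp16 weights during the patch step
overhead = 1.1

vram_gb = params * bytes_per_weight * overhead / 1024**3
print(f"~{vram_gb:.0f} GB")  # ~25 GB, in line with the reported usage
```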
-
Is the loader only for NF4? LoRAs and ControlNet don't work?
Error occurred when executing easy fluxLoader:
ERROR: Could not detect model type of: F:\ComfyUI\ComfyUI\models\checkpoints\flux\flux1-…
-
Is there a plan to include support for the NF4 data type from the QLoRA paper?
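For reference, NF4 round-tripping is already exposed by bitsandbytes; a minimal sketch, assuming bitsandbytes is installed and a CUDA device is available:

```python
# Quantize a weight tensor to NF4 (4-bit NormalFloat from the QLoRA paper)
# and dequantize it back, using bitsandbytes' functional API.
import torch
import bitsandbytes.functional as bnbf

w = torch.randn(4096, 4096, dtype=torch.float16, device="cuda")

# quant_state carries the per-block absmax scales needed to decode the 4-bit data.
w_nf4, quant_state = bnbf.quantize_nf4(w, blocksize=64)
w_back = bnbf.dequantize_nf4(w_nf4, quant_state)

print((w - w_back).abs().mean())  # small but nonzero quantization error
```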
-
OS: W10 LTSC
RAM: 16 GB
GPU: Nvidia 3060, 12 GB VRAM
I keep running into OOM errors when I try to use a single LoRA:
![oom](https://github.com/user-attachments/assets/10622df5-a035-46b0-989a-f387…
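A quick way to see how the 12 GB card is actually being used when the OOM hits (standard PyTorch calls, shown here only as a diagnostic aid):

```python
import torch

print(torch.cuda.memory_allocated() / 1024**3, "GB allocated by tensors")
print(torch.cuda.memory_reserved() / 1024**3, "GB reserved by the caching allocator")
print(torch.cuda.get_device_properties(0).total_memory / 1024**3, "GB total on device 0")
```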
-
Since ba01ad37, LoRAs loaded in 8-bit onto the Q8_0 GGUF generate poor-quality images. Loading the LoRA in 16-bit appears to fix this issue, but there are subtle differences in the generations from rounding…
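To illustrate why the bit depth matters here, a toy comparison of rounding error under naive symmetric int8 quantization versus an fp16 cast (this is not the loader's actual quantization code, just an assumed scheme for illustration):

```python
import torch

lora_delta = torch.randn(64, 64) * 0.01   # LoRA weight deltas are typically small

# Naive per-tensor symmetric int8 quantization (assumed scheme).
scale = lora_delta.abs().max() / 127
delta_8bit = (lora_delta / scale).round().clamp(-127, 127) * scale

# fp16 round-trip for comparison.
delta_16bit = lora_delta.half().float()

print("int8 mean error:", (lora_delta - delta_8bit).abs().mean().item())
print("fp16 mean error:", (lora_delta - delta_16bit).abs().mean().item())
# The int8 error is noticeably larger, and it can compound across many layers.
```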
-
Does your FLUX easy loader support the new bitsandbytes loader? It loads Flux super fast, and I'm able to use it with 6 GB of VRAM.
Here is the link to models and implementation:
https://github.com/comfyano…
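The 6 GB figure is consistent with 4-bit storage: 12e9 parameters × 0.5 bytes ≈ 5.6 GB. A minimal sketch of the underlying mechanism, assuming bitsandbytes is installed (the layer size is illustrative, not the actual Flux architecture):

```python
import torch
import bitsandbytes as bnb

# A 4-bit NF4 linear layer; the weights are quantized when moved to the GPU.
layer = bnb.nn.Linear4bit(
    3072, 3072, bias=True,
    compute_dtype=torch.float16,  # matmuls still run in fp16
    quant_type="nf4",             # 4-bit NormalFloat storage
).cuda()

x = torch.randn(1, 3072, dtype=torch.float16, device="cuda")
print(layer(x).shape)  # torch.Size([1, 3072])
```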
-
OOM (16 GB VRAM)
version: [f2.0.1v1.10.1-previous-304-g394da019](https://github.com/lllyasviel/stable-diffusion-webui-forge/commit/394da01959ae09acca361dc2be0e559ca26829d4) • python: 3.10.6 • to…
-
I did a standard git clone into custom_nodes and have the BnB requirement installed, but the new node does not appear anywhere. Even a search for 'NF4' turns up nothing. ComfyUI is updated alread…