city96 / ComfyUI-GGUF

GGUF Quantization support for native ComfyUI models
Apache License 2.0

It is not clear how to use this #71

Open Elendil211 opened 2 weeks ago

Elendil211 commented 2 weeks ago

I tried using this, but I couldn't figure out what to download or how to configure everything.

As described in the README, I used Unet Loader (GGUF) to load https://huggingface.co/city96/FLUX.1-schnell-gguf/tree/main. However, I also need CLIP and VAE.

I tried guessing:

But none of this works, and the CLIP Text Encode (Prompt) node just never finishes, with no indication of what is wrong.

city96 commented 2 weeks ago

Yeah, the readme really needs to be rewritten. You're supposed to use the DualCLIPLoader, which needs both the T5 and the clip-l model with the mode set to "flux". The clip-l and default T5 models are here. Even if you use the GGUF T5, you still need the regular clip_l.safetensors.
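
For anyone else landing here, a minimal sketch of that loader setup expressed in ComfyUI's API (prompt-dict) format — the node class names follow the standard mappings, and all filenames are just placeholders for whatever you actually downloaded:

```python
# Sketch of the loader side of a Flux GGUF workflow in ComfyUI's API format.
# Filenames are placeholders; swap in the files you downloaded.
flux_gguf_prompt = {
    "1": {
        # "Unet Loader (GGUF)" from ComfyUI-GGUF loads the quantized diffusion model
        "class_type": "UnetLoaderGGUF",
        "inputs": {"unet_name": "flux1-schnell-Q4_K_S.gguf"},
    },
    "2": {
        # Standard DualCLIPLoader: T5 + clip-l, with the type set to "flux"
        "class_type": "DualCLIPLoader",
        "inputs": {
            "clip_name1": "t5xxl_fp8_e4m3fn.safetensors",
            "clip_name2": "clip_l.safetensors",
            "type": "flux",
        },
    },
    "3": {
        # The loaded text encoders feed CLIP Text Encode (Prompt) as usual
        "class_type": "CLIPTextEncode",
        "inputs": {"text": "a photo of a cat", "clip": ["2", 0]},
    },
}
```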

Elendil211 commented 2 weeks ago

And what do I use for VAE?

city96 commented 2 weeks ago

The one you linked is correct; the quantization stuff doesn't change anything about the VAE. (There's also an example page here that links to all the default models.)
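
Continuing the sketch above (still with placeholder filenames), the VAE side is just the stock VAELoader pointed at the normal Flux VAE:

```python
# The VAE is not quantized, so the regular VAELoader with the usual
# Flux VAE file (placeholder name here) is all that's needed.
flux_gguf_prompt["4"] = {
    "class_type": "VAELoader",
    "inputs": {"vae_name": "ae.safetensors"},
}
```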

al-swaiti commented 2 weeks ago

This workflow could help you: https://civitai.com/models/652981/gguf-workflow-simple