city96 / ComfyUI-GGUF

GGUF Quantization support for native ComfyUI models
Apache License 2.0
1.08k stars 72 forks

It is not clear how to use this #71

Open Elendil211 opened 3 months ago

Elendil211 commented 3 months ago

I tried using this, but I couldn't figure out what to download or how to configure it.

As described in the README, I used Unet Loader (GGUF) to load https://huggingface.co/city96/FLUX.1-schnell-gguf/tree/main. However, I also need CLIP and VAE.

I tried guessing:

But none of this worked, and the CLIP Text Encode (Prompt) node just never finishes, with no indication of what is wrong.

city96 commented 3 months ago

Yeah, the readme really needs to be rewritten. You're supposed to use the DualCLIPLoader, which needs both the T5 and the clip-l model, with the type set to "flux". The clip-l and default T5 models are here. Even with the GGUF T5 you still need the regular clip_l.safetensors.
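
To make the loader setup concrete, here is a minimal sketch of how the text-encoder side might look in ComfyUI's API ("prompt") JSON format, written as a Python dict. The class name `DualCLIPLoaderGGUF`, the input names (`clip_name1`, `clip_name2`, `type`) and the GGUF T5 filename are assumptions based on the stock DualCLIPLoader and this node pack; check the actual widget names in your install.

```python
# Minimal sketch (not an official example) of the text-encoder nodes in
# ComfyUI's API/"prompt" JSON format. Class and input names are assumptions;
# verify them against the node widgets in your own install.
text_encoder_nodes = {
    "2": {
        "class_type": "DualCLIPLoaderGGUF",
        "inputs": {
            "clip_name1": "t5-v1_1-xxl-encoder-Q5_K_M.gguf",  # hypothetical GGUF T5 filename
            "clip_name2": "clip_l.safetensors",               # the regular clip-l is still needed
            "type": "flux",
        },
    },
    "3": {
        "class_type": "CLIPTextEncode",
        "inputs": {
            "text": "a photo of a cat",
            "clip": ["2", 0],  # [node id, output index] -> CLIP output of the loader above
        },
    },
}
```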

Elendil211 commented 3 months ago

And what do I use for VAE?

city96 commented 3 months ago

The one you linked is correct; the quantization stuff doesn't change anything about the VAE. (There's also an example page here that links to all the default models.)

al-swaiti commented 3 months ago

This workflow could help you: https://civitai.com/models/652981/gguf-workflow-simple

B0rner commented 2 months ago

@city96

Yeah, the readme really needs to be rewritten.

Would it be possible to place a workflow file in this repository with a setup that is as simple as possible?

The workflow linked by @al-swaiti, for example, needs a Gemini plugin. I'm new to ComfyUI and just want to get this running without any external resources like Gemini, and I don't know how to remove that resource from the linked workflow. I found another workflow on civitai which is much more complicated and needs many custom nodes. I have no idea what they all do, or whether they are even needed just for using GGUF.

city96 commented 2 months ago

@B0rner Don't have much free time lately but if all you want is a super basic workflow then this should do.

Btw, all you need to do to use this node pack is replace the unet loader node (and optionally the dual clip loader node) with the GGUF variants. There are zero other dependencies or custom nodes required to make it work, and you could even use the default comfy example workflow as a base for this.

base_flux_gguf_test.json


(If you want to use cfg > 1 or any of the fancy cfg stuff, just add a second CLIP Text Encode node instead of connecting both positive and negative to the same one. At cfg = 1 the negative prompt does nothing, so it doesn't matter in this case.)
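
As a concrete illustration of the "just swap the loader nodes" point above, here is a rough Python sketch that takes an API-format workflow export and replaces the stock loaders with their GGUF counterparts. The class names, filenames and the remaining input adjustments are assumptions; verify them against the nodes in your own install.

```python
# Sketch of the "only swap the loader nodes" idea: take an existing API-format
# workflow (File > Export (API) in ComfyUI) and replace the stock loaders with
# the GGUF variants. Class names below are assumptions; verify them locally.
import json

SWAPS = {
    "UNETLoader": "UnetLoaderGGUF",
    "DualCLIPLoader": "DualCLIPLoaderGGUF",  # optional, only if you also use a GGUF T5
}

with open("workflow_api.json") as f:  # hypothetical filename of your export
    workflow = json.load(f)

for node in workflow.values():
    if node.get("class_type") in SWAPS:
        node["class_type"] = SWAPS[node["class_type"]]
        # Note: the GGUF loaders expect different filenames/widgets (e.g. a .gguf
        # unet instead of a .safetensors one), so adjust node["inputs"] as needed.

with open("workflow_api_gguf.json", "w") as f:
    json.dump(workflow, f, indent=2)
```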

B0rner commented 2 months ago

@city96

Thank you, that was very helpful. I had some issues with where to download which file and where to place it (because I'm new to Comfy), but finally everything is working fine. Using Flux with GGUF is really great. Generating images on my small laptop is much faster than using Flux at 16 bit.

trinhtuanvubk commented 1 month ago

Hi @city96. What should I do if I want to leave the negative prompt empty but set cfg > 1? I have run your above workflow in Python inference successfully. Could you give me a new picture of the workflow, or some details like node names, class names, node connections, etc., to help me convert it to code? Thank you.

city96 commented 1 month ago

@trinhtuanvubk Either of these should work in that case. You can modify the above workflow accordingly.

(two screenshots of the modified workflow attached)
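
For the "convert it to code" part of the question, one common approach is to export the modified workflow in API format and POST it to ComfyUI's /prompt endpoint. Below is a hedged sketch of that under those assumptions; the node ids, filenames and cfg value are placeholders for whatever your own export actually contains.

```python
# Sketch only: queue a workflow with an empty negative prompt and cfg > 1 via
# ComfyUI's HTTP API. Node ids and filenames are placeholders; start from an
# API-format export of the workflow above and adapt the keys to match it.
import json
import urllib.request

with open("workflow_api_gguf.json") as f:  # hypothetical export of the workflow above
    prompt = json.load(f)

# Two separate CLIP Text Encode nodes: one carries the positive text, the other
# is left empty for the negative, both wired to the sampler. "6"/"7"/"4" stand
# in for whatever ids those nodes have in your export.
prompt["6"]["inputs"]["text"] = "a photo of a cat"   # positive prompt
prompt["7"]["inputs"]["text"] = ""                   # empty negative prompt
prompt["4"]["inputs"]["cfg"] = 3.5                   # cfg > 1 so the negative path is used

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": prompt}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())  # queue response with the prompt id
```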