Elendil211 opened 3 months ago
Yeah, the readme really needs to be rewritten. You're supposed to use the DualCLIPLoader, which needs both the T5 and the CLIP-L model, with the mode set to "flux". The CLIP-L and default T5 models are here. Even with the GGUF one you still need the regular clip_l.safetensors file.
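In ComfyUI's API ("prompt") format, that loader setup looks roughly like this — a sketch assuming the GGUF node mirrors the stock DualCLIPLoader inputs; the quantized filename is a placeholder for whichever file you actually downloaded into models/clip/:

```python
# Sketch of the text-encoder node in ComfyUI's API ("prompt") format.
# Assumes DualCLIPLoaderGGUF takes the same inputs as the stock DualCLIPLoader.
dual_clip_loader = {
    "class_type": "DualCLIPLoaderGGUF",
    "inputs": {
        "clip_name1": "t5-v1_1-xxl-encoder-Q5_K_M.gguf",  # quantized T5 (placeholder name)
        "clip_name2": "clip_l.safetensors",               # regular CLIP-L, still required
        "type": "flux",
    },
}
```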
And what do I use for VAE?
The one you linked is correct; the quantization stuff doesn't change anything about the VAE. (There's also an example page here that links to all the default models.)
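As a sketch, the VAE side is just the stock loader node; the filename below is an assumption (the usual name for the FLUX VAE file) — point it at whatever you have in models/vae/:

```python
# The VAE is unaffected by quantization, so the stock loader is used.
# "ae.safetensors" is the usual FLUX VAE filename; adjust to your file.
vae_loader = {
    "class_type": "VAELoader",
    "inputs": {"vae_name": "ae.safetensors"},
}
```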
This workflow could help you: https://civitai.com/models/652981/gguf-workflow-simple
@city96
> Yeah, the readme really needs to be rewritten.
Would it be possible to place a workflow file in that repository with a setup that is as simple as possible?
The workflow linked by @al-swaiti, for example, needs a Gemini plugin. I'm new to ComfyUI and would just like to get this running without using any external resources like Gemini, and I don't know how to remove that resource from the linked workflow. I found another workflow on civitai which is much more complicated and needs many custom nodes. I have no idea what they all do, or whether they are really needed just for using GGUF.
@B0rner Don't have much free time lately, but if all you want is a super basic workflow then this should do.
Btw, all you need to do to use this node pack is replace the unet loader node (and optionally the dual clip loader node) with the GGUF variants. There are zero other dependencies or custom nodes required to make it work, and you could even use the default comfy example workflow as a base for this.
(If you want to use cfg > 1 or any of the fancy cfg stuff, just add a second CLIP text encode node instead of connecting both positive/negative to the same one. At cfg=1 the negative prompt does nothing, so this doesn't matter in this case.)
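For anyone driving this from a script instead of the UI, here's a rough sketch of that same graph in API format, queued on a locally running ComfyUI instance. All node IDs, filenames, and sampler settings below are placeholder assumptions, not fixed values:

```python
import json
from urllib import request

# Rough sketch of the basic GGUF workflow in ComfyUI's API ("prompt") format.
# All filenames are placeholders for whatever you downloaded. At cfg=1 the
# negative prompt is ignored, so both conditioning inputs reuse node "3".
workflow = {
    "1": {"class_type": "UnetLoaderGGUF",
          "inputs": {"unet_name": "flux1-schnell-Q4_K_S.gguf"}},
    "2": {"class_type": "DualCLIPLoaderGGUF",
          "inputs": {"clip_name1": "t5-v1_1-xxl-encoder-Q5_K_M.gguf",
                     "clip_name2": "clip_l.safetensors",
                     "type": "flux"}},
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "a photo of a cat", "clip": ["2", 0]}},
    "4": {"class_type": "VAELoader",
          "inputs": {"vae_name": "ae.safetensors"}},
    "5": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "6": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0],
                     "positive": ["3", 0],
                     "negative": ["3", 0],  # same encode node twice, see note above
                     "latent_image": ["5", 0],
                     "seed": 0, "steps": 4, "cfg": 1.0,
                     "sampler_name": "euler", "scheduler": "simple",
                     "denoise": 1.0}},
    "7": {"class_type": "VAEDecode",
          "inputs": {"samples": ["6", 0], "vae": ["4", 0]}},
    "8": {"class_type": "SaveImage",
          "inputs": {"images": ["7", 0], "filename_prefix": "flux_gguf"}},
}

# Queue the graph on a locally running ComfyUI instance.
req = request.Request("http://127.0.0.1:8188/prompt",
                      data=json.dumps({"prompt": workflow}).encode("utf-8"),
                      headers={"Content-Type": "application/json"})
print(request.urlopen(req).read().decode("utf-8"))
```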
@city96
Thank you. That was very helpful. I had some issues with where to download which file and where to place it (because I'm new to Comfy), but finally everything is working fine. Using Flux with GGUF is really great. Generating images on my small laptop is much faster than using Flux 16-bit.
Hi @city96. What should I do if I want to leave the negative prompt empty but set cfg > 1? I have run your above workflow through Python inference successfully. Could you give me a new picture of the workflow, or some details like node names, class names, and node connections, to help me convert it to code easily? Thank you.
@trinhtuanvubk Either of these should work in that case. You can modify the above workflow accordingly.
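In terms of the API-format sketch above, the change amounts to a second CLIP text encode node with an empty string, wired into the sampler's negative input (node IDs continue the earlier sketch; the cfg value here is an arbitrary example):

```python
# Add a second text-encode node with an empty prompt for the negative side.
workflow["9"] = {"class_type": "CLIPTextEncode",
                 "inputs": {"text": "", "clip": ["2", 0]}}
workflow["6"]["inputs"]["negative"] = ["9", 0]
workflow["6"]["inputs"]["cfg"] = 3.5  # example value > 1
```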
I tried using this, and I couldn't figure out what to download or how to configure the program.
As described in the README, I used `Unet Loader (GGUF)` to load https://huggingface.co/city96/FLUX.1-schnell-gguf/tree/main. However, I also need CLIP and VAE. I tried guessing: `Load VAE`, with either

- `CLIPLoader (GGUF)` with https://huggingface.co/city96/t5-v1_1-xxl-encoder-gguf/tree/main, or
- `DualCLIPLoader (GGUF)` with a combination of https://huggingface.co/black-forest-labs/FLUX.1-schnell/blob/main/text_encoder/model.safetensors and https://huggingface.co/city96/t5-v1_1-xxl-encoder-gguf/tree/main.

But none of this works, and the `CLIP Text Encode (Prompt)` node just never finishes, without any indication of what is wrong.