enternalsaga opened this issue 2 months ago
Hi, I am also having issues with Flux and the XY Grid. But I want to note that in your image, you have left the CLIP loader set to SDXL instead of choosing Flux. I do this all the time :) but it can definitely break workflows, so I thought I should let you know.
Hi, since native ComfyUI already supports the T5 GGUF text encoder to speed up loading the Flux model, can you also add support for it? The link is here: https://huggingface.co/city96/t5-v1_1-xxl-encoder-gguf/tree/main When I tried the dual CLIP GGUF loader with fluxLoader, I got this error:
```
Error occurred when executing easy fluxLoader:

module 'gguf.quants' has no attribute 'dequantize'

File "I:\ComfyUI\execution.py", line 316, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "I:\ComfyUI\execution.py", line 191, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "I:\ComfyUI\execution.py", line 168, in _map_node_over_list
    process_inputs(input_dict, i)
File "I:\ComfyUI\execution.py", line 157, in process_inputs
    results.append(getattr(obj, func)(**inputs))
File "I:\ComfyUI\custom_nodes\15_ComfyUI-Easy-Use\py\easyNodes.py", line 1977, in fluxloader
    return super().adv_pipeloader(ckpt_name, 'Default', vae_name, 0,
File "I:\ComfyUI\custom_nodes\15_ComfyUI-Easy-Use\py\easyNodes.py", line 937, in adv_pipeloader
    positive_embeddings_final, positive_wildcard_prompt, model, clip = prompt_to_cond('positive', model, clip, clip_skip, lora_stack, positive, positive_token_normalization, positive_weight_interpretation, a1111_prompt_style, my_unique_id, prompt, easyCache, model_type=model_type)
File "I:\ComfyUI\custom_nodes\15_ComfyUI-Easy-Use\py\libs\conditioning.py", line 17, in prompt_to_cond
    embeddings_final, = CLIPTextEncode().encode(clip, text)
File "I:\ComfyUI\nodes.py", line 65, in encode
    output = clip.encode_from_tokens(tokens, return_pooled=True, return_dict=True)
File "I:\ComfyUI\comfy\sd.py", line 126, in encode_from_tokens
    o = self.cond_stage_model.encode_token_weights(tokens)
File "I:\ComfyUI\comfy\sdxl_clip.py", line 58, in encode_token_weights
    g_out, g_pooled = self.clip_g.encode_token_weights(token_weight_pairs_g)
File "I:\ComfyUI\comfy\sd1_clip.py", line 41, in encode_token_weights
    o = self.encode(to_encode)
File "I:\ComfyUI\comfy\sd1_clip.py", line 229, in encode
    return self(tokens)
File "I:\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1716, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "I:\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1727, in _call_impl
    return forward_call(*args, **kwargs)
File "I:\ComfyUI\comfy\sd1_clip.py", line 201, in forward
    outputs = self.transformer(tokens, attention_mask_model, intermediate_output=self.layer_idx, final_layer_norm_intermediate=self.layer_norm_hidden_state, dtype=torch.float32)
File "I:\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1716, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "I:\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1727, in _call_impl
    return forward_call(*args, **kwargs)
File "I:\ComfyUI\comfy\clip_model.py", line 136, in forward
    x = self.text_model(*args, **kwargs)
File "I:\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1716, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "I:\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1727, in _call_impl
    return forward_call(*args, **kwargs)
File "I:\ComfyUI\comfy\clip_model.py", line 112, in forward
    x, i = self.encoder(x, mask=mask, intermediate_output=intermediate_output)
File "I:\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1716, in _wrapped_call_impl
    return self.
```
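For what it's worth, the `AttributeError` on `gguf.quants.dequantize` usually points at the installed `gguf` Python package rather than the model file: my assumption (not confirmed in this thread) is that an older release of the pip package is installed that does not yet expose a `dequantize` function in `gguf.quants`. A quick sketch to check locally, using only the standard library:

```python
import importlib


def gguf_dequantize_available() -> bool:
    """Return True if the installed gguf package exposes gguf.quants.dequantize.

    Assumption: the AttributeError in the traceback above means this
    attribute is missing, i.e. an outdated `gguf` pip package is installed.
    """
    try:
        quants = importlib.import_module("gguf.quants")
    except ImportError:
        # The gguf package is not installed at all.
        return False
    return hasattr(quants, "dequantize")


if __name__ == "__main__":
    if gguf_dequantize_available():
        print("gguf.quants.dequantize is available")
    else:
        print("gguf.quants.dequantize is missing; try: pip install --upgrade gguf")
```

Separately, note that the traceback encodes the prompt through `comfy\sdxl_clip.py`, which suggests the text encoder is still being loaded as SDXL rather than Flux — consistent with the earlier comment about the CLIP loader setting.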