t00350320 opened this issue 2 months ago
Do you have `--force-fp32` on the command line? I had the same issue, and removing it fixed it for me.
No, just the plain command `python main.py --listen 0.0.0.0`.
Same issue.
Same problem.
```python
if torch.backends.mps.is_available():
    device = torch.device("mps")

if torch.cuda.is_bf16_supported():
    dtype_model = torch.float16  # force float16 even when bf16 is supported
else:
    dtype_model = torch.float16
```
```
model weight dtype torch.float8_e4m3fn, manual cast: torch.float16
model_type FLUX
Is model already patched? False
Using old vit clip
We are patching diffusion model, be patient please
Patched succesfully!
Requested to load FluxClipModel
Loading 1 new model
loaded completely 0.0 4778.66552734375 True
Requested to load Flux
Loading 1 new model
loaded completely 0.0 12262.271545410156 True
Sampling:   0%|          | 0/25 [00:00<?, ?it/s]
!!! Exception during processing !!! Expected query, key, and value to have the same dtype, but got query.dtype: c10::Half key.dtype: c10::BFloat16 and value.dtype: c10::BFloat16 instead.
Traceback (most recent call last):
  File "/home/runner/ComfyUI/execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/home/runner/ComfyUI/execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/home/runner/ComfyUI/execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "/home/runner/ComfyUI/execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "/home/runner/ComfyUI/custom_nodes/x-flux-comfyui/nodes.py", line 412, in sampling
    x = denoise(
  File "/home/runner/ComfyUI/custom_nodes/x-flux-comfyui/sampling.py", line 193, in denoise
    pred = model_forward(
  File "/home/runner/ComfyUI/custom_nodes/x-flux-comfyui/sampling.py", line 51, in model_forward
    img, txt = block(img=img, txt=txt, vec=vec, pe=pe)
  File "/usr/lib64/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/lib64/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/runner/ComfyUI/custom_nodes/x-flux-comfyui/xflux/src/flux/modules/layers.py", line 297, in forward
    return self.processor(self, img, txt, vec, pe)
  File "/usr/lib64/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/lib64/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/runner/ComfyUI/custom_nodes/x-flux-comfyui/layers.py", line 375, in forward
    self.shift_ip(img_q, attn, img)
  File "/home/runner/ComfyUI/custom_nodes/x-flux-comfyui/layers.py", line 304, in shift_ip
    x += block(img_qkv, attn)
  File "/usr/lib64/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/lib64/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/runner/ComfyUI/custom_nodes/x-flux-comfyui/layers.py", line 257, in forward
    ip_attention = F.scaled_dot_product_attention(
  File "/usr/lib64/python3.11/site-packages/torch/_tensor.py", line 1443, in __torch_function__
    ret = func(*args, **kwargs)
RuntimeError: Expected query, key, and value to have the same dtype, but got query.dtype: c10::Half key.dtype: c10::BFloat16 and value.dtype: c10::BFloat16 instead.
```
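For what it's worth, the failure is easy to reproduce outside ComfyUI. A minimal sketch (arbitrary shapes, not taken from the workflow) that trips the same dtype check in `F.scaled_dot_product_attention`:

```python
import torch
import torch.nn.functional as F

# Query in float16 (c10::Half), key/value in bfloat16, as in the traceback above.
q = torch.randn(1, 8, 16, 64, dtype=torch.float16)
k = torch.randn(1, 8, 16, 64, dtype=torch.bfloat16)
v = torch.randn(1, 8, 16, 64, dtype=torch.bfloat16)

try:
    F.scaled_dot_product_attention(q, k, v)
except RuntimeError as e:
    print(e)  # Expected query, key, and value to have the same dtype...
```

So any fix has to make all three tensors agree on a dtype before the attention call.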
I changed `x-flux-comfyui/layers.py` and it runs now, but the generated images are bad:
```python
# Ensure consistent dtype for query, key, and value
dtype = torch.float16  # choose an appropriate dtype
ip_query = ip_query.to(dtype)
ip_key = ip_key.to(dtype)
ip_value = ip_value.to(dtype)

# Compute attention between IP projections and the latent query
ip_attention = F.scaled_dot_product_attention(
    ip_query,
    ip_key,
    ip_value,
    dropout_p=0.0,
    is_causal=False,
)
```
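If the hard-coded float16 cast is what degrades the images, an untested variant (same variable names as the snippet above) would be to cast key and value to whatever dtype the query already has, so no extra down-cast is forced on the query side:

```python
# Untested sketch: match key/value to the query's existing dtype
# instead of forcing everything to float16.
dtype = ip_query.dtype
ip_key = ip_key.to(dtype)
ip_value = ip_value.to(dtype)

ip_attention = F.scaled_dot_product_attention(
    ip_query,
    ip_key,
    ip_value,
    dropout_p=0.0,
    is_causal=False,
)
```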
V100 here, same problem. Looking forward to this issue being resolved.
Apple M1 Max, same problem. I'm using a GGUF Flux model (https://github.com/city96/ComfyUI-GGUF) with this workflow: https://github.com/XLabs-AI/x-flux-comfyui/blob/main/workflows/ip_adapter_workflow.json (with the Unet Loader and DualCLIPLoader replaced by their GGUF versions). The same workflow works fine on my 2080 Ti.
With unet_name: flux1-dev.safetensors and weight_dtype: default, same error log.