comfyanonymous / ComfyUI

The most powerful and modular diffusion model GUI, api and backend with a graph/nodes interface.
https://www.comfy.org/

Error occurred when executing KSampler: Query/Key/Value should all have the same dtype #2344

Open zboing opened 11 months ago

zboing commented 11 months ago

Hi, I am receiving this error while loading this custom script https://openart.ai/workflows/neuralunk/fun-mini-figures-with-your-own-face/bYsXbkheJmRUetBlCvH6

Error occurred when executing KSampler:

Query/Key/Value should all have the same dtype
  query.dtype: torch.float16
  key.dtype  : torch.float32
  value.dtype: torch.float32

ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
  File "C:\Users\Desktop\sd\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "C:\Users\Desktop\sd\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "C:\Users\Desktop\sd\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "C:\Users\Desktop\sd\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1299, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
  File "C:\Users\Desktop\sd\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1269, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
  File "C:\Users\Desktop\sd\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample
    return original_sample(*args, **kwargs)  # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
  File "C:\Users\Desktop\sd\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 242, in motion_sample
    return orig_comfy_sample(model, noise, *args, **kwargs)
  File "C:\Users\Desktop\sd\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 101, in sample
    samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "C:\Users\Desktop\sd\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 716, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "C:\Users\Desktop\sd\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 622, in sample
    samples = sampler.sample(model_wrap, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
  File "C:\Users\Desktop\sd\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 561, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
  File "C:\Users\Desktop\sd\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\Desktop\sd\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py", line 137, in sample_euler
    denoised = model(x, sigma_hat * s_in, **extra_args)
  File "C:\Users\Desktop\sd\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\Desktop\sd\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 285, in forward
    out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, model_options=model_options, seed=seed)
  File "C:\Users\Desktop\sd\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\Desktop\sd\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 275, in forward
    return self.apply_model(*args, **kwargs)
  File "C:\Users\Desktop\sd\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 272, in apply_model
    out = sampling_function(self.inner_model, x, timestep, uncond, cond, cond_scale, model_options=model_options, seed=seed)
  File "C:\Users\Desktop\sd\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 252, in sampling_function
    cond_pred, uncond_pred = calc_cond_uncond_batch(model, cond, uncond, x, timestep, model_options)
  File "C:\Users\Desktop\sd\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 226, in calc_cond_uncond_batch
    output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
  File "C:\Users\Desktop\sd\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py", line 85, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
  File "C:\Users\Desktop\sd\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\Desktop\sd\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 854, in forward
    h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
  File "C:\Users\Desktop\sd\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 46, in forward_timestep_embed
    x = layer(x, context, transformer_options)
  File "C:\Users\Desktop\sd\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\Desktop\sd\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "C:\Users\Desktop\sd\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py", line 604, in forward
    x = block(x, context=context[i], transformer_options=transformer_options)
  File "C:\Users\Desktop\sd\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\Desktop\sd\ComfyUI_windows_portable\python_embeded\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "C:\Users\Desktop\sd\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py", line 431, in forward
    return checkpoint(self._forward, (x, context, transformer_options), self.parameters(), self.checkpoint)
  File "C:\Users\Desktop\sd\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\util.py", line 189, in checkpoint
    return func(*inputs)
  File "C:\Users\Desktop\sd\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py", line 528, in _forward
    n = attn2_replace_patch[block_attn2](n, context_attn2, value_attn2, extra_options)
  File "C:\Users\Desktop\sd\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 310, in __call__
    out_ip = optimized_attention(q, ip_k, ip_v, extra_options["n_heads"])
  File "C:\Users\Desktop\sd\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py", line 298, in attention_xformers
    out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None)
  File "C:\Users\Desktop\sd\ComfyUI_windows_portable\python_embeded\lib\site-packages\xformers\ops\fmha\__init__.py", line 192, in memory_efficient_attention
    return _memory_efficient_attention(
  File "C:\Users\Desktop\sd\ComfyUI_windows_portable\python_embeded\lib\site-packages\xformers\ops\fmha\__init__.py", line 290, in _memory_efficient_attention
    return _memory_efficient_attention_forward(
  File "C:\Users\Desktop\sd\ComfyUI_windows_portable\python_embeded\lib\site-packages\xformers\ops\fmha\__init__.py", line 303, in _memory_efficient_attention_forward
    inp.validate_inputs()
  File "C:\Users\Desktop\sd\ComfyUI_windows_portable\python_embeded\lib\site-packages\xformers\ops\fmha\common.py", line 73, in validate_inputs
    raise ValueError(
ValueError: Query/Key/Value should all have the same dtype
  query.dtype: torch.float16
  key.dtype  : torch.float32
  value.dtype: torch.float32

Prompt executed in 78.07 seconds

After some digging, I found that Automatic1111 has an option called "Upcast cross attention layer to float32" that needs to be checked. Is there an equivalent option in ComfyUI?

I have the latest version of ComfyUI.

Do you have a solution? Thank you!
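For readers unfamiliar with the Automatic1111 setting mentioned above: it runs the cross-attention math in float32 even when the rest of the model runs in fp16, which sidesteps exactly this kind of dtype clash. A rough sketch of the idea in plain PyTorch is below; the function name and shape convention are illustrative, not ComfyUI's API.

import torch
import torch.nn.functional as F

def upcast_cross_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    """Compute attention in float32 regardless of the incoming dtypes, then cast back."""
    out_dtype = q.dtype
    q, k, v = q.float(), k.float(), v.float()
    # scaled_dot_product_attention expects (batch, heads, seq, head_dim)
    out = F.scaled_dot_product_attention(q, k, v)
    return out.to(out_dtype)

The trade-off is a little extra VRAM and time for the attention step in exchange for never hitting fp16/fp32 mismatches in that layer.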

crafter312 commented 11 months ago

I haven't used that custom script, but I've encountered this exact same error in my own workflow. I think it's connected somehow to using IPAdapter with the face model. I've so far not been able to figure out how to fix it. Has anyone else looked at this yet?

crafter312 commented 11 months ago

Upon further investigation, see this issue for a solution that worked for me, specifically the "Dtype mismatch" issue.
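For readers who land here later: fixes of this kind generally amount to making the IPAdapter's projected key/value tensors agree with the query's dtype before the attention kernel runs. A hedged sketch of that sort of guard, reusing the names visible in the traceback above (the actual patch in the linked issue may differ, e.g. it may instead load the IPAdapter weights in fp16):

# Sketch only: mirrors the call seen in IPAdapterPlus.py in the traceback above;
# the real fix in the linked issue may be implemented differently.
def attention_with_matching_dtypes(q, ip_k, ip_v, extra_options, optimized_attention):
    # Ensure key/value agree with the query's dtype before the xformers/SDPA kernel runs.
    ip_k = ip_k.to(dtype=q.dtype)
    ip_v = ip_v.to(dtype=q.dtype)
    return optimized_attention(q, ip_k, ip_v, extra_options["n_heads"])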

Lucy-Wh commented 4 months ago

Has this been resolved in the end? If so, how was it fixed?