nullquant / ComfyUI-BrushNet

ComfyUI BrushNet nodes
Apache License 2.0

help!!! KSampler 52: Return type mismatch between linked nodes: latent_image, CONDITIONING != LATENT #46

got prompt
Failed to validate prompt for output 12:
* KSampler 52:
  - Return type mismatch between linked nodes: latent_image, CONDITIONING != LATENT
Output will be ignored
invalid prompt: {'type': 'prompt_outputs_failed_validation', 'message': 'Prompt outputs failed validation', 'details': '', 'extra_info': {}}

Closed jokero3answer closed 5 months ago

jokero3answer commented 5 months ago

[screenshot]

jokero3answer commented 5 months ago

The error message "Prompt outputs failed validation" usually means that, somewhere in a deep-learning model or system, the expected output does not match the output actually produced, so validation fails. In the error message you posted, this involves the KSampler component, and it points to two specific problems:

  1. Return type mismatch between linked nodes: this indicates that the data types of two connected nodes (or components) in the KSampler's context do not match. In a deep-learning model, each node or layer usually requires a specific input type for the computation to proceed correctly.

  2. latent_image, CONDITIONING != LATENT: two types are named here, CONDITIONING and LATENT, and they are not equal. In deep learning, CONDITIONING usually refers to conditioning information used to guide how a model generates or processes data, while LATENT refers to a representation in latent space, a concept common in generative models (such as variational autoencoders, VAEs, or generative adversarial networks, GANs) that encodes a compressed form of the input data.

Specifically, latent_image refers to an image representation in latent space, and the error message indicates that the KSampler's latent_image input expects a LATENT but actually received a CONDITIONING. This may be because:

To resolve this, you may need to:

If you are using a specific deep-learning framework such as TensorFlow or PyTorch, you may need to consult its documentation to learn how to handle different data types correctly and how to configure the model to avoid this kind of type mismatch.
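To make the mismatch concrete, here is a minimal sketch of the kind of link-type check ComfyUI performs before running a workflow; the validate_link helper is a hypothetical simplification, not ComfyUI's actual source:

```python
# Hypothetical, simplified illustration of ComfyUI-style link validation.
# validate_link is an assumed helper, not ComfyUI's real API.
def validate_link(input_name: str, upstream_type: str, expected_type: str) -> None:
    """Reject a link whose upstream output type differs from the input's expected type."""
    if upstream_type != expected_type:
        raise ValueError(
            f"Return type mismatch between linked nodes: "
            f"{input_name}, {upstream_type} != {expected_type}"
        )

# KSampler's latent_image input expects LATENT; wiring a CONDITIONING
# output into it reproduces the error quoted above.
try:
    validate_link("latent_image", "CONDITIONING", "LATENT")
except ValueError as e:
    print(e)  # Return type mismatch between linked nodes: latent_image, CONDITIONING != LATENT
```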

jokero3answer commented 5 months ago

@nullquant

jokero3answer commented 5 months ago

My ComfyUI itself and my nodes are up to date!

ComfyUI: 2158daa92a Manager: V2.24.1

nullquant commented 5 months ago

Just delete the BrushNet node from the workflow, then add it back and reconnect all the links. Sorry for the inconvenience.

nullquant commented 5 months ago

Updated the broken basic examples.

jokero3answer commented 5 months ago

It's running successfully, thank you very much!

jokero3answer commented 5 months ago

Random mask and segmentation mask: is there a difference between the two models? @nullquant

jokero3answer commented 5 months ago

[screenshot] When I use the segmentation mask model I get an error.

jokero3answer commented 5 months ago

Error occurred when executing KSampler:

Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)

File "D:\ComfyUI\execution.py", line 151, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all) File "D:\ComfyUI\execution.py", line 81, in get_output_data return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True) File "D:\ComfyUI\execution.py", line 74, in map_node_over_list results.append(getattr(obj, func)(slice_dict(input_data_all, i))) File "D:\ComfyUI\nodes.py", line 1344, in sample return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise) File "D:\ComfyUI\custom_nodes\ComfyUI-BrushNet\brushnet_nodes.py", line 384, in modified_common_ksampler samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, File "D:\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 22, in informative_sample raise e File "D:\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample return original_sample(args, kwargs) # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations. File "D:\ComfyUI\comfy\sample.py", line 37, in sample samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed) File "D:\ComfyUI\comfy\samplers.py", line 761, in sample return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed) File "D:\ComfyUI\comfy\samplers.py", line 663, in sample return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed) File "D:\ComfyUI\comfy\samplers.py", line 650, in sample output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed) File "D:\ComfyUI\comfy\samplers.py", line 629, in inner_sample samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar) File "D:\ComfyUI\comfy\samplers.py", line 534, in sample samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, self.extra_options) File "D:\ComfyUI\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context return func(args, kwargs) File "D:\ComfyUI\comfy\k_diffusion\sampling.py", line 137, in sample_euler denoised = model(x, sigma_hat * s_in, extra_args) File "D:\ComfyUI\comfy\samplers.py", line 272, in call out = self.inner_model(x, sigma, model_options=model_options, seed=seed) File "D:\ComfyUI\comfy\samplers.py", line 616, in call return self.predict_noise(*args, kwargs) File "D:\ComfyUI\comfy\samplers.py", line 619, in predict_noise return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed) File "D:\ComfyUI\comfy\samplers.py", line 258, in sampling_function out = calc_cond_batch(model, conds, x, timestep, model_options) File "D:\ComfyUI\comfy\samplers.py", line 218, in calc_cond_batch output = model.apply_model(inputx, timestep, c).chunk(batch_chunks) File "D:\ComfyUI\comfy\model_base.py", line 97, in 
apply_model model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, *extra_conds).float() File "D:\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl return self._call_impl(args, kwargs) File "D:\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl return forward_call(*args, kwargs) File "D:\ComfyUI\custom_nodes\ComfyUI-BrushNet\brushnet_nodes.py", line 482, in forward_patched_by_brushnet input_samples, mid_sample, output_samples = brushnet_inference(x, timesteps, transformer_options) File "D:\ComfyUI\custom_nodes\ComfyUI-BrushNet\brushnet_nodes.py", line 423, in brushnet_inference return brushnet(x, File "D:\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl return self._call_impl(*args, *kwargs) File "D:\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl return forward_call(args, kwargs) File "D:\ComfyUI\custom_nodes\ComfyUI-BrushNet\brushnet\brushnet.py", line 779, in forward emb = self.time_embedding(t_emb, timestep_cond) File "D:\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl return self._call_impl(*args, kwargs) File "D:\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl return forward_call(*args, *kwargs) File "D:\ComfyUI\venv\lib\site-packages\diffusers\models\embeddings.py", line 227, in forward sample = self.linear_1(sample) File "D:\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl return self._call_impl(args, kwargs) File "D:\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl return forward_call(*args, **kwargs) File "D:\ComfyUI\venv\lib\site-packages\torch\nn\modules\linear.py", line 116, in forward return F.linear(input, self.weight, self.bias)
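For reference, this traceback says the BrushNet time-embedding weights are still on the CPU while the sampler's input tensor is on cuda:0. A minimal standalone sketch of that failure mode and the generic fix, assuming a CUDA device is available (a hypothetical example, not BrushNet's actual code):

```python
import torch
import torch.nn as nn

# A module built without .to("cuda") keeps its weights on the CPU.
linear = nn.Linear(320, 1280)
t_emb = torch.randn(2, 320, device="cuda")  # input lives on cuda:0

try:
    linear(t_emb)  # CPU weights vs. CUDA input
except RuntimeError as e:
    print(e)  # Expected all tensors to be on the same device ... cpu and cuda:0

# Generic fix: move the module to the same device as its input.
linear.to(t_emb.device)
out = linear(t_emb)  # now both operands of addmm are on cuda:0
```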


nullquant commented 5 months ago

Random mask and segmentation mask: is there a difference between the two models? @nullquant

The segmentation_mask_brushnet_ckpt and segmentation_mask_brushnet_ckpt_sdxl_v0 checkpoints are trained on BrushData, which has a segmentation prior (the masks have the same shape as the objects). The random_mask_brushnet_ckpt and random_mask_brushnet_ckpt_sdxl checkpoints are more general and were trained on random mask shapes.

nullquant commented 5 months ago

Could you please update the nodes, run the updated version, and post the ComfyUI console log?

jokero3answer commented 5 months ago

[screenshot]

ComfyUI: 2158daa92a Manager: V2.24.1

[screenshot]

1.5brushnet+sam抠图+cn.json

Error occurred when executing KSampler:

"upsample_nearest2d_channels_last" not implemented for 'Half'

Diagnostics-1714789198.log

@nullquant

jokero3answer commented 5 months ago

1.5-BrushNet_with_IPA.json

[screenshot]

Error occurred when executing KSampler:

"upsample_nearest2d_channels_last" not implemented for 'Half'

Diagnostics-1714789512.log

jokero3answer commented 5 months ago

When I changed the precision to single precision, it worked. Thanks to the author for his help! @nullquant
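For context, `upsample_nearest2d_channels_last` simply has no half-precision (fp16) kernel on some devices and PyTorch builds, so forcing single precision (fp32) sidesteps the missing kernel. A minimal standalone repro and workaround (a hypothetical example, not the nodes' actual code):

```python
import torch
import torch.nn.functional as F

# A channels-last, half-precision tensor, like the feature maps upsampled
# inside the model.
x = torch.randn(1, 4, 8, 8).to(memory_format=torch.channels_last).half()

try:
    F.interpolate(x, scale_factor=2, mode="nearest")
except RuntimeError as e:
    print(e)  # "upsample_nearest2d_channels_last" not implemented for 'Half'

# Workaround: cast to single precision before upsampling.
y = F.interpolate(x.float(), scale_factor=2, mode="nearest")
print(y.dtype)  # torch.float32
```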