nullquant / ComfyUI-BrushNet

ComfyUI BrushNet nodes
Apache License 2.0

The size of tensor a (640) must match the size of tensor b (320) at non-singleton dimension 1 #154


jackqn commented 1 week ago

```
BrushNet CL: image_latents shape = torch.Size([1, 4, 300, 135]) interpolated_mask shape = torch.Size([1, 1, 300, 135])
Requested to load BaseModel
Loading 1 new model
loaded partially 64.0 63.99903106689453 0
  0%|          | 0/50 [00:00<?, ?it/s]
BrushNet inference, step = 0: image batch = 1, got 1 latents, starting from 0
BrushNet inference: sample torch.Size([1, 4, 300, 135]) , CL torch.Size([1, 5, 300, 135]) dtype torch.float32
G:\python\Lib\site-packages\diffusers\models\resnet.py:323: FutureWarning: `scale` is deprecated and will be removed in version 1.0.0. The `scale` argument is deprecated and will be ignored. Please remove it, as passing it will raise an error in the future. `scale` should directly be passed while calling the underlying pipeline component i.e., via `cross_attention_kwargs`.
  deprecate("scale", "1.0.0", deprecation_message)
BrushNet can't find <class 'comfy.ops.disable_weight_init.Conv2d'> layer in 0 input block: None
  0%|          | 0/50 [00:21<?, ?it/s]
!!! Exception during processing !!! The size of tensor a (640) must match the size of tensor b (320) at non-singleton dimension 1
```
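For context, this error is PyTorch refusing to broadcast two tensors whose channel counts differ at dimension 1. A minimal standalone repro of the same RuntimeError (the shapes below are illustrative, not taken from the actual UNet):

```python
import torch

# Illustrative shapes only: a UNet hidden state with 640 channels and a
# residual computed for a 320-channel block.
h = torch.zeros(1, 640, 64, 64)
to_add = torch.zeros(1, 320, 64, 64)

# Raises: RuntimeError: The size of tensor a (640) must match the size of
# tensor b (320) at non-singleton dimension 1
h += to_add.to(h.dtype).to(h.device)
```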

bdp2024 commented 1 week ago

There are two ways to solve this problem: one is to delete `--fast` from the ComfyUI startup arguments, and the other is to modify two lines of code in brushnet_nodes.py, as shown in the attached screenshot.
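The two changed lines only exist in the screenshot and are not reproduced in this thread. The log line "BrushNet can't find <class 'comfy.ops.disable_weight_init.Conv2d'> layer in 0 input block: None" suggests that with `--fast` ComfyUI builds the UNet from different op classes, so BrushNet's lookup of the first conv in each input block fails and its residuals get added at the wrong channel width. Purely as an illustration of that kind of adjustment (the helper name and class list below are hypothetical, not the actual patch), a check against a single ops class can be broadened like this:

```python
import torch
import comfy.ops

# Hypothetical sketch: conv classes that may appear as the first layer of a
# UNet input block, depending on launch flags (--fast, fp8, manual cast, ...).
_CONV_CLASSES = tuple(
    cls for cls in (
        getattr(comfy.ops.disable_weight_init, "Conv2d", None),
        getattr(getattr(comfy.ops, "manual_cast", None), "Conv2d", None),
        torch.nn.Conv2d,
    )
    if cls is not None
)

def looks_like_block_conv(layer):
    # Accept any conv-like layer rather than one specific ComfyUI ops class,
    # so the block lookup does not silently return None under --fast.
    return isinstance(layer, _CONV_CLASSES)
```

Removing `--fast` from the launch command, as suggested above, avoids the issue without touching any code.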

jackqn commented 1 week ago

Thank you, this has indeed solved my problem.

xueqing0622 commented 5 days ago

Both ways still give me the error. (screenshots attached)

```
Prompt executed in 12.53 seconds

PS: User cm-1pxcfk1g3 disconnected from ComfyUI (cm)

got prompt
Prompt executed in 0.70 seconds
got prompt
WARNING: '11' test en_text ['A dog'] prompt_result ['A dog']
SELECTED: input1
BrushNet model type: Loading SDXL BrushNet model file: F:\ComfyUI\ComfyUI\models\inpaint\brushnet\brushnet_random_mask_XL.safetensors
BrushNet SDXL model is loaded
Using xformers attention in VAE
Using xformers attention in VAE
ggml_sd_loader: 1 912 0 2 8 766
model weight dtype torch.float16, manual cast: None
model_type EPS
SELECTED: input1
Requested to load SDXLClipModel
Loading 1 new model
loaded completely 0.0 1560.802734375 True
Base model type: SDXL
BrushNet image.shape = torch.Size([1, 958, 715, 3]) mask.shape = torch.Size([1, 958, 715])
Requested to load AutoencoderKL
Loading 1 new model
loaded completely 0.0 159.55708122253418 True
BrushNet CL: image_latents shape = torch.Size([1, 4, 119, 89]) interpolated_mask shape = torch.Size([1, 1, 119, 89])
Requested to load SDXL
Loading 1 new model
loaded completely 0.0 2630.7519607543945 True
  0%|          | 0/20 [00:00<?, ?it/s]
BrushNet inference: do_classifier_free_guidance is True
BrushNet inference, step = 0: image batch = 1, got 2 latents, starting from 0
BrushNet inference: sample torch.Size([2, 4, 119, 89]) , CL torch.Size([2, 5, 119, 89]) dtype torch.float16
BrushNet can't find <class 'comfy.ops.disable_weight_init.Conv2d'> layer in 0 input block: None
  0%|          | 0/20 [00:00<?, ?it/s]
!!! Exception during processing !!! The size of tensor a (640) must match the size of tensor b (320) at non-singleton dimension 1
Traceback (most recent call last):
  File "F:\ComfyUI\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "F:\ComfyUI\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "F:\ComfyUI\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "F:\ComfyUI\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "F:\ComfyUI\ComfyUI\nodes.py", line 1457, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
  File "F:\ComfyUI\ComfyUI\nodes.py", line 1424, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
  File "F:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 22, in informative_sample
    raise e
  File "F:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample
    return original_sample(*args, **kwargs)  # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
  File "F:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 420, in motion_sample
    return orig_comfy_sample(model, noise, *args, **kwargs)
  File "F:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\sampling.py", line 116, in acn_sample
    return orig_comfy_sample(model, *args, **kwargs)
  File "F:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\utils.py", line 117, in uncond_multiplier_check_cn_sample
    return orig_comfy_sample(model, *args, **kwargs)
  File "F:\ComfyUI\ComfyUI\comfy\sample.py", line 43, in sample
    samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "F:\ComfyUI\ComfyUI\comfy\samplers.py", line 855, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "F:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Easy-Use\py\brushnet\model_patch.py", line 106, in modified_sample
    return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "F:\ComfyUI\ComfyUI\comfy\samplers.py", line 740, in sample
    output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "F:\ComfyUI\ComfyUI\comfy\samplers.py", line 719, in inner_sample
    samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
  File "F:\ComfyUI\ComfyUI\comfy\samplers.py", line 624, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
  File "F:\ComfyUI\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "F:\ComfyUI\ComfyUI\comfy\k_diffusion\sampling.py", line 155, in sample_euler
    denoised = model(x, sigma_hat * s_in, **extra_args)
  File "F:\ComfyUI\ComfyUI\comfy\samplers.py", line 299, in __call__
    out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
  File "F:\ComfyUI\ComfyUI\comfy\samplers.py", line 706, in __call__
    return self.predict_noise(*args, **kwargs)
  File "F:\ComfyUI\ComfyUI\comfy\samplers.py", line 709, in predict_noise
    return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
  File "F:\ComfyUI\ComfyUI\comfy\samplers.py", line 279, in sampling_function
    out = calc_cond_batch(model, conds, x, timestep, model_options)
  File "F:\ComfyUI\ComfyUI\comfy\samplers.py", line 226, in calc_cond_batch
    output = model_options['model_function_wrapper'](model.apply_model, {"input": inputx, "timestep": timestep, "c": c, "cond_or_uncond": cond_or_uncond}).chunk(batch_chunks)
  File "F:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-BrushNet\model_patch.py", line 58, in brushnet_model_function_wrapper
    return apply_model_method(x, timestep, **options_dict['c'])
  File "F:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\utils.py", line 69, in apply_model_uncond_cleanup_wrapper
    return orig_apply_model(self, *args, **kwargs)
  File "F:\ComfyUI\ComfyUI\comfy\model_base.py", line 145, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
  File "F:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "F:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "F:\ComfyUI\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 857, in forward
    h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
  File "F:\ComfyUI\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 44, in forward_timestep_embed
    x = layer(x, context, transformer_options)
  File "F:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "F:\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "F:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-BrushNet\brushnet_nodes.py", line 1057, in forward_patched_by_brushnet
    h += to_add.to(h.dtype).to(h.device)
  File "F:\ComfyUI\python_embeded\Lib\site-packages\torch\_tensor.py", line 1512, in __torch_function__
    ret = func(*args, **kwargs)
RuntimeError: The size of tensor a (640) must match the size of tensor b (320) at non-singleton dimension 1
```

luck54321 commented 3 days ago

> There are two ways to solve this problem: one is to delete --fast from the ComfyUI startup arguments, and the other is to modify two lines of code in brushnet_nodes.py as shown below.

If only there were a file I could just drop in as a replacement! With both of these methods I can't find where the file is located. Am I just too dumb?

luck54321 commented 3 days ago

> Thank you, this has indeed solved my problem.

May I ask: where exactly are these files located?