huchenlei / ComfyUI-layerdiffuse

Layer Diffuse custom nodes
Apache License 2.0

[Bug]: Error occurred when executing LayeredDiffusionDecodeRGBA #33

Open Xiahussheng opened 3 months ago

Xiahussheng commented 3 months ago

What happened?

The layer_diffusion_diff_fg workflow you provided is missing the final step that generates a transparent image. I tried to add the LayeredDiffusionDecodeRGBA node myself, but it raises a runtime error and I don't know why.

[Bug]: Error occurred when executing LayeredDiffusionDecodeRGBA: Sizes of tensors must match except in dimension 1. Expected size 40 but got size 39 for tensor number 1 in the list.

[Screenshot: PixPin_2024-03-06_14-26-28]

Steps to reproduce the problem

/

What should have happened?

/

Commit where the problem happens

ComfyUI: ComfyUI-layerdiffuse:

Sysinfo

Error occurred when executing LayeredDiffusionDecodeRGBA:

Sizes of tensors must match except in dimension 1. Expected size 40 but got size 39 for tensor number 1 in the list.

File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list results.append(getattr(obj, func)(slice_dict(input_data_all, i))) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-layerdiffusion\layered_diffusion.py", line 160, in decode image, mask = super().decode(samples, images, sub_batch_size) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-layerdiffusion\layered_diffusion.py", line 136, in decode self.vae_transparent_decoder.decode_pixel( File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context return func(*args, *kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-layerdiffusion\lib_layerdiffusion\models.py", line 302, in decode_pixel y = self.estimate_augmented(pixel, latent) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context return func(args, kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-layerdiffusion\lib_layerdiffusion\models.py", line 278, in estimate_augmented eps = self.estimate_single_pass(feed_pixel, feed_latent).clip(0, 1) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context return func(*args, kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-layerdiffusion\lib_layerdiffusion\models.py", line 249, in estimate_single_pass y = self.model(pixel, latent) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, *kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl return forward_call(args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-layerdiffusion\lib_layerdiffusion\models.py", line 212, in forward sample = upsample_block(sample, res_samples, emb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, *kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl return forward_call(args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\models\unet_2d_blocks.py", line 2181, in forward hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Console logs

/

Workflow json file

fg.json

Additional information

No response

ana55e commented 3 months ago

Hi, the fix is as follows: the Empty Latent Image batch_size should be equal to sub_batch_size in Layer Diffuse Decode (RGBA). In your case, either change the Empty Latent Image batch_size to 16 or change sub_batch_size in Layer Diffuse Decode (RGBA) to 1.
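Roughly speaking, the decode node walks the latent batch in chunks of sub_batch_size, along the lines of this sketch (this is only an illustration of what the setting means, not the node's actual code, and the decoder call is a hypothetical stand-in):

```python
import torch

# Hypothetical stand-in for the transparent VAE decode of one chunk.
def decode_chunk(latents):
    return latents * 0.5

latents = torch.randn(16, 4, 64, 64)   # batch_size = 16 from Empty Latent Image
sub_batch_size = 16                    # chunk size used by the decode node

decoded = []
for i in range(0, latents.shape[0], sub_batch_size):
    decoded.append(decode_chunk(latents[i:i + sub_batch_size]))

print(torch.cat(decoded, dim=0).shape)  # torch.Size([16, 4, 64, 64])
```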

huchenlei commented 3 months ago

I don't think sub_batch_size is the issue. This tensor mismatch mostly comes from the input image size not matching the generation target size.
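Roughly what goes wrong, as a minimal sketch (a stand-in shape calculation, not this extension's actual decoder code): when the pixel size doesn't divide evenly through the decoder's downsampling path, the upsampled feature map and its skip connection end up one pixel apart, which is exactly the 40-vs-39 error above.

```python
import torch
import torch.nn.functional as F

def halve(x):
    # stride-2 downsample stand-in: output size = floor((n - 1) / 2) + 1
    return F.avg_pool2d(x, kernel_size=3, stride=2, padding=1)

x = torch.randn(1, 8, 156, 156)  # 156 does not survive repeated halving cleanly
skips = []
for _ in range(3):               # 156 -> 78 -> 39 -> 20
    skips.append(x)
    x = halve(x)

x = F.interpolate(x, scale_factor=2)      # 20 -> 40 on the way back up
try:
    torch.cat([x, skips.pop()], dim=1)    # skip connection is only 39 wide
except RuntimeError as e:
    print(e)  # Sizes of tensors must match except in dimension 1. Expected size 40 but got size 39 ...
```

So the first thing I would check is that the uploaded image and the Empty Latent Image are exactly the same size, and that both dimensions are values the model can halve cleanly (a multiple of 64 is presumably safe).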

Xiahussheng commented 3 months ago

Hi, the fix is as follows: the Empty Latent Image batch_size should be equal to sub_batch_size in Layer Diffuse Decode (RGBA). In your case, either change the Empty Latent Image batch_size to 16 or change sub_batch_size in Layer Diffuse Decode (RGBA) to 1.


I changed sub_batch_size in Layer Diffuse Decode (RGBA) to 1, but it didn't work; I still get the same error.

Xiahussheng commented 3 months ago

I don't think sub_batch_size is the issue. This tensor mismatch mostly comes from the input image size not matching the generation target size.


The size I set in Empty Latent Image is the same as the size of the image I uploaded. Judging from the generation results, the image with the gray background is generated normally, but generating the transparent background fails with this error.

huchenlei commented 3 months ago

Can you try to use https://github.com/layerdiffusion/sd-forge-layerdiffuse for the same task? I would like to know whether this issue is ComfyUI-only.

YaseGar commented 3 months ago

I also got the following error when trying to run the example workflow:

ERROR:root:Traceback (most recent call last):
  File "ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "ComfyUI\custom_nodes\ComfyUI-layerdiffusion\layered_diffusion.py", line 170, in decode
    image, mask = super().decode(samples, images, sub_batch_size)
  File "ComfyUI\custom_nodes\ComfyUI-layerdiffusion\layered_diffusion.py", line 127, in decode
    self.vae_transparent_decoder = TransparentVAEDecoder(
  File "ComfyUI\custom_nodes\ComfyUI-layerdiffusion\lib_layerdiffusion\models.py", line 241, in __init__
    model = UNet1024(in_channels=3, out_channels=4)
  File "python_embeded\lib\site-packages\diffusers\configuration_utils.py", line 636, in inner_init
    init(self, *args, **init_kwargs)
  File "ComfyUI\custom_nodes\ComfyUI-layerdiffusion\lib_layerdiffusion\models.py", line 130, in __init__
    self.mid_block = UNetMidBlock2D(
TypeError: UNetMidBlock2D.__init__() got an unexpected keyword argument 'attn_groups'
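The unexpected 'attn_groups' keyword suggests the diffusers copy in the embedded Python is older than what this node pack expects (the exact required version should be in the repo's requirements). A quick way to see which diffusers build is actually being imported:

```python
# Print the diffusers version and where it is loaded from; an older copy,
# possibly pinned by another custom node, will not accept `attn_groups`.
import diffusers
print(diffusers.__version__)
print(diffusers.__file__)
```

Running `python_embeded\python.exe -m pip install -U diffusers` and restarting ComfyUI would presumably fix it, assuming nothing else pins an older version.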

Xiahussheng commented 3 months ago

Can you try to use https://github.com/layerdiffusion/sd-forge-layerdiffuse for the same task? I would like to know whether this issue is ComfyUI-only.

Yes, I've used it in WebUI Forge and it works fine there, but not in ComfyUI.