smthemex / ComfyUI_StoryDiffusion

You can use StoryDiffusion in ComfyUI
Apache License 2.0

Changing size from 768 to anything else results in an error #63

Closed: lumos675 closed this issue 6 days ago

lumos675 commented 1 week ago

Whenever I make images in any other size, I receive an error:

```
!!! Exception during processing !!! The size of tensor a (4128) must match the size of tensor b (4096) at non-singleton dimension 1
Traceback (most recent call last):
  File "D:\WorkSpace\Scripts\Python\ComfyUI\execution.py", line 317, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "D:\WorkSpace\Scripts\Python\ComfyUI\execution.py", line 192, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "D:\WorkSpace\Scripts\Python\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "D:\WorkSpace\Scripts\Python\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "D:\WorkSpace\Scripts\Python\ComfyUI\custom_nodes\ComfyUI_StoryDiffusion\Storydiffusion_node.py", line 1874, in story_sampler
    image_dual = msdiffusion_main(pipe, image_a, image_b, prompts_dual, width, height, steps, seed,
  File "D:\WorkSpace\Scripts\Python\ComfyUI\custom_nodes\ComfyUI_StoryDiffusion\Storydiffusion_node.py", line 1360, in msdiffusion_main
    image_main = main_normal(prompt, pipe, phrases, ms_model, input_images, batch_size, steps, seed,
  File "D:\WorkSpace\Scripts\Python\ComfyUI\custom_nodes\ComfyUI_StoryDiffusion\Storydiffusion_node.py", line 1130, in main_normal
    images = ms_model.generate(pipe=pipe, pil_images=[input_images],processed_images=in_img, num_samples=num_samples,
  File "D:\WorkSpace\Scripts\Python\ComfyUI\custom_nodes\ComfyUI_StoryDiffusion\msdiffusion\models\model.py", line 258, in generate
    images = pipe(
  File "D:\WorkSpace\Scripts\Python\ComfyUI\env\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "D:\WorkSpace\Scripts\Python\ComfyUI\env\lib\site-packages\diffusers\pipelines\stable_diffusion_xl\pipeline_stable_diffusion_xl.py", line 1199, in __call__
    noise_pred = self.unet(
  File "D:\WorkSpace\Scripts\Python\ComfyUI\env\lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\WorkSpace\Scripts\Python\ComfyUI\env\lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\WorkSpace\Scripts\Python\ComfyUI\env\lib\site-packages\accelerate\hooks.py", line 169, in new_forward
    output = module._old_forward(*args, **kwargs)
  File "D:\WorkSpace\Scripts\Python\ComfyUI\env\lib\site-packages\diffusers\models\unets\unet_2d_condition.py", line 1216, in forward
    sample, res_samples = downsample_block(
  File "D:\WorkSpace\Scripts\Python\ComfyUI\env\lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\WorkSpace\Scripts\Python\ComfyUI\env\lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\WorkSpace\Scripts\Python\ComfyUI\env\lib\site-packages\diffusers\models\unets\unet_2d_blocks.py", line 1288, in forward
    hidden_states = attn(
  File "D:\WorkSpace\Scripts\Python\ComfyUI\env\lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\WorkSpace\Scripts\Python\ComfyUI\env\lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\WorkSpace\Scripts\Python\ComfyUI\env\lib\site-packages\diffusers\models\transformers\transformer_2d.py", line 442, in forward
    hidden_states = block(
  File "D:\WorkSpace\Scripts\Python\ComfyUI\env\lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\WorkSpace\Scripts\Python\ComfyUI\env\lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\WorkSpace\Scripts\Python\ComfyUI\env\lib\site-packages\diffusers\models\attention.py", line 504, in forward
    attn_output = self.attn2(
  File "D:\WorkSpace\Scripts\Python\ComfyUI\env\lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\WorkSpace\Scripts\Python\ComfyUI\env\lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\WorkSpace\Scripts\Python\ComfyUI\env\lib\site-packages\diffusers\models\attention_processor.py", line 490, in forward
    return self.processor(
  File "D:\WorkSpace\Scripts\Python\ComfyUI\custom_nodes\ComfyUI_StoryDiffusion\msdiffusion\models\attention_processor.py", line 276, in __call__
    custom_attention_masks = self.prepare_attention_mask_qk(boxes, phrase_idxes, hidden_states.shape[1],
  File "D:\WorkSpace\Scripts\Python\ComfyUI\custom_nodes\ComfyUI_StoryDiffusion\msdiffusion\models\attention_processor.py", line 156, in prepare_attention_mask_qk
    dummy_attention_mask = torch.clamp(dummy_attention_mask - box_mask, min=0)
RuntimeError: The size of tensor a (4128) must match the size of tensor b (4096) at non-singleton dimension 1
```

smthemex commented 1 week ago

I fixed it on 2024/09/10.