smthemex / ComfyUI_StoryDiffusion

You can use StoryDiffusion in ComfyUI
Apache License 2.0

Running the demo workflow fails: generating one image works, but generating two images raises the error below. I am on the latest version. #61

Closed libaiabcd closed 3 weeks ago

libaiabcd commented 3 weeks ago

Error occurred when executing Storydiffusion_Sampler:

mat1 and mat2 shapes cannot be multiplied (514x1280 and 1664x1280)

```
File "/root/ComfyUI/execution.py", line 316, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "/root/ComfyUI/execution.py", line 191, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "/root/ComfyUI/execution.py", line 168, in _map_node_over_list
    process_inputs(input_dict, i)
File "/root/ComfyUI/execution.py", line 157, in process_inputs
    results.append(getattr(obj, func)(**inputs))
File "/root/ComfyUI/custom_nodes/ComfyUI_StoryDiffusion/Storydiffusion_node.py", line 1800, in story_sampler
    image_dual = msdiffusion_main(pipe, image_a, image_b, prompts_dual, width, height, steps, seed,
File "/root/ComfyUI/custom_nodes/ComfyUI_StoryDiffusion/Storydiffusion_node.py", line 1350, in msdiffusion_main
    image_main = main_normal(prompt, pipe, phrases, ms_model, input_images, batch_size, steps, seed,
File "/root/ComfyUI/custom_nodes/ComfyUI_StoryDiffusion/Storydiffusion_node.py", line 1120, in main_normal
    images = ms_model.generate(pipe=pipe, pil_images=[input_images], processed_images=in_img, num_samples=num_samples,
File "/root/ComfyUI/custom_nodes/ComfyUI_StoryDiffusion/msdiffusion/models/model.py", line 227, in generate
    image_prompt_embeds = self.image_proj_model(image_embeds, grounding_kwargs=grounding_kwargs)
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
File "/root/ComfyUI/custom_nodes/ComfyUI_StoryDiffusion/msdiffusion/models/projection.py", line 228, in forward
    x = self.proj_in(x)
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
```
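The two shapes in the error point to a CLIP vision width mismatch: the `proj_in` layer in MS-Diffusion's projection model is a `Linear` whose weight expects 1664-wide input tokens (the hidden size of CLIP-ViT-bigG), while the loaded clip_vision encoder is producing 1280-wide tokens. A minimal sketch of the mismatch, assuming those dimensions (the 514 rows likely being two reference images of 257 patch tokens each):

```python
import torch
import torch.nn as nn

# Stand-in for MS-Diffusion's proj_in: built for bigG hidden states (width 1664).
proj_in = nn.Linear(1664, 1280)

# A ViT-H-style encoder emits 1280-wide tokens instead; 514 rows matches the
# shape reported in the issue.
wrong_embeds = torch.randn(514, 1280)
try:
    proj_in(wrong_embeds)
except RuntimeError as e:
    # Same shape error as in the issue: (514x1280 and 1664x1280)
    print(e)

# bigG-width hidden states pass through fine.
right_embeds = torch.randn(514, 1664)
out = proj_in(right_embeds)
print(out.shape)  # torch.Size([514, 1280])
```

This is why swapping in a non-bigG clip_vision model breaks only the dual-character path: that path is the one routed through MS-Diffusion's image projection.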

smthemex commented 3 weeks ago

Do you mean generating two images that each contain dual characters?

libaiabcd commented 3 weeks ago

> Do you mean generating two images that each contain dual characters?

Yes. In the demo's text-to-image workflow, the error appears as soon as it reaches an image that contains two characters; if I remove the dual-character part, the remaining single-character images generate fine.

libaiabcd commented 3 weeks ago

Is it a clip vision problem? The CLIP-ViT-bigG-14-laion2B-39B-b160k provided in the example was too large, so I just grabbed an arbitrary one.

smthemex commented 3 weeks ago

You need to use the latest example; the clip it uses is only about 4 GB.

libaiabcd commented 3 weeks ago

> You need to use the latest example; the clip it uses is only about 4 GB.

Can you give me a download link?

libaiabcd commented 3 weeks ago

@smthemex Would openai/clip-vit-base-patch32 work?

smthemex commented 3 weeks ago

> @smthemex Would openai/clip-vit-base-patch32 work?

You need a clip_vision model based on CLIP-ViT-bigG-14-laion2B-39B-b160k; the model name usually contains a "g" or "bigG". You could try the one published by the ComfyUI author at https://huggingface.co/comfyanonymous/clip_vision_g/tree/main. Note that I have not tested it.
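Before wiring a downloaded clip_vision checkpoint into the workflow, its transformer width can be read straight from the state dict; a bigG-based model should report 1664. A small sketch, assuming HF-format key names (other checkpoint layouts may use different prefixes):

```python
import torch

def clip_vision_width(state_dict):
    """Infer the vision transformer width from a CLIP vision state dict.

    The key names below are assumptions based on HF-format CLIP exports.
    """
    for key in (
        "vision_model.embeddings.class_embedding",
        "embeddings.class_embedding",
    ):
        if key in state_dict:
            return int(state_dict[key].shape[-1])
    raise KeyError("class embedding not found; unrecognized checkpoint layout")

# Stand-in state dict for illustration; a real checkpoint would be loaded
# with safetensors or torch.load first. bigG reports 1664, ViT-H 1280, and
# clip-vit-base-patch32 only 768 -- the latter two will not fit proj_in.
fake_bigg = {"vision_model.embeddings.class_embedding": torch.zeros(1664)}
print(clip_vision_width(fake_bigg))  # 1664
```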

libaiabcd commented 3 weeks ago

> You need a clip_vision model based on CLIP-ViT-bigG-14-laion2B-39B-b160k; the model name usually contains a "g" or "bigG". You could try the one published by the ComfyUI author at https://huggingface.co/comfyanonymous/clip_vision_g/tree/main. Note that I have not tested it.

https://huggingface.co/comfyanonymous/clip_vision_g/tree/main works.