smthemex / ComfyUI_StoryDiffusion

You can use StoryDiffusion in ComfyUI.
Apache License 2.0

Flux fails to launch when dual-character mode is enabled #68

Closed Lecho303 closed 2 months ago

Lecho303 commented 2 months ago

With dual-character mode enabled, and the workflow built following the images in the README, launching produces an error. Model: FluxF8. Error message:

ComfyUI Error Report

Error Details

Lecho303 commented 2 months ago
Screenshot 2024-09-11 190508
smthemex commented 2 months ago

This issue has been fixed; please update to the latest version.

mesflit commented 2 months ago

I got the same error.

Lecho303 commented 2 months ago

This issue has been fixed; please update to the latest version.

Already updated, but the Flux model still fails with the same error... Also, another question: when I switch to SDXL, it always errors out at a certain stage (the same error whether I use dual or single characters):

Token indices sequence length is longer than the specified maximum sequence length for this model (110 > 77). Running this sequence through the model will result in indexing errors
The following part of your input was truncated because CLIP can only handle sequences up to 77 tokens: ['arms. blonde hair. a white male. standing behind newman, not looking at viewer ;. graphic illustration, comic art, graphic novel art, vibrant, highly detailed <|endoftext|>', 'arms. blonde hair. a white male. looking at newman with his hands behind his back.. graphic illustration, comic art, graphic novel art, vibrant, highly detailed']
100%|██████████████████████████████████████████████████████████████████████████████████| 15/15 [08:03<00:00, 32.26s/it]
!!! Exception during processing !!!
Allocation on device
Traceback (most recent call last):
  File "D:\comfyUI\ComfyUI_windows_portable_nvidia.7z\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "D:\comfyUI\ComfyUI_windows_portable_nvidia.7z\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "D:\comfyUI\ComfyUI_windows_portable_nvidia.7z\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "D:\comfyUI\ComfyUI_windows_portable_nvidia.7z\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "D:\comfyUI\ComfyUI_windows_portable_nvidia.7z\ComfyUI\custom_nodes\ComfyUI_StoryDiffusion\Storydiffusion_node.py", line 2044, in story_sampler
    for value in gen:
  File "D:\comfyUI\ComfyUI_windows_portable_nvidia.7z\ComfyUI\custom_nodes\ComfyUI_StoryDiffusion\Storydiffusion_node.py", line 844, in process_generation
    id_images = pipe(
  File "D:\comfyUI\ComfyUI_windows_portable_nvidia.7z\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\comfyUI\ComfyUI_windows_portable_nvidia.7z\python_embeded\Lib\site-packages\diffusers\pipelines\stable_diffusion_xl\pipeline_stable_diffusion_xl.py", line 1279, in __call__
    image = self.vae.decode(latents, return_dict=False)[0]
  File "D:\comfyUI\ComfyUI_windows_portable_nvidia.7z\python_embeded\Lib\site-packages\diffusers\utils\accelerate_utils.py", line 46, in wrapper
    return method(self, *args, **kwargs)
  File "D:\comfyUI\ComfyUI_windows_portable_nvidia.7z\python_embeded\Lib\site-packages\diffusers\models\autoencoders\autoencoder_kl.py", line 318, in decode
    decoded_slices = [self._decode(z_slice).sample for z_slice in z.split(1)]
  File "D:\comfyUI\ComfyUI_windows_portable_nvidia.7z\python_embeded\Lib\site-packages\diffusers\models\autoencoders\autoencoder_kl.py", line 318, in <listcomp>
    decoded_slices = [self._decode(z_slice).sample for z_slice in z.split(1)]
  File "D:\comfyUI\ComfyUI_windows_portable_nvidia.7z\python_embeded\Lib\site-packages\diffusers\models\autoencoders\autoencoder_kl.py", line 292, in _decode
    dec = self.decoder(z)
  File "D:\comfyUI\ComfyUI_windows_portable_nvidia.7z\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\comfyUI\ComfyUI_windows_portable_nvidia.7z\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\comfyUI\ComfyUI_windows_portable_nvidia.7z\python_embeded\Lib\site-packages\diffusers\models\autoencoders\vae.py", line 337, in forward
    sample = up_block(sample, latent_embeds)
  File "D:\comfyUI\ComfyUI_windows_portable_nvidia.7z\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\comfyUI\ComfyUI_windows_portable_nvidia.7z\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\comfyUI\ComfyUI_windows_portable_nvidia.7z\python_embeded\Lib\site-packages\diffusers\models\unets\unet_2d_blocks.py", line 2750, in forward
    hidden_states = upsampler(hidden_states)
  File "D:\comfyUI\ComfyUI_windows_portable_nvidia.7z\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\comfyUI\ComfyUI_windows_portable_nvidia.7z\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\comfyUI\ComfyUI_windows_portable_nvidia.7z\python_embeded\Lib\site-packages\diffusers\models\upsampling.py", line 180, in forward
    hidden_states = self.conv(hidden_states)
  File "D:\comfyUI\ComfyUI_windows_portable_nvidia.7z\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\comfyUI\ComfyUI_windows_portable_nvidia.7z\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\comfyUI\ComfyUI_windows_portable_nvidia.7z\python_embeded\Lib\site-packages\torch\nn\modules\conv.py", line 460, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "D:\comfyUI\ComfyUI_windows_portable_nvidia.7z\python_embeded\Lib\site-packages\torch\nn\modules\conv.py", line 456, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
torch.cuda.OutOfMemoryError: Allocation on device

Got an OOM, unloading all loaded models. Prompt executed in 1143.13 seconds

Is this running out of memory? I'm not a programmer and only half understand this; from what I looked up, it seems to be an out-of-memory problem. How should I fix it? Many thanks!
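For context on the "110 > 77" warning in the log above: CLIP text encoders have a fixed context window of 77 tokens, so any prompt longer than that is truncated before encoding. A minimal sketch of that truncation (illustrative only; this is not the actual CLIP tokenizer, just dummy token ids):

```python
# CLIP's text encoder context window: 77 tokens, including special tokens.
MAX_TOKENS = 77

def truncate_prompt_tokens(token_ids):
    """Keep only the first 77 tokens; return (kept, dropped)."""
    return token_ids[:MAX_TOKENS], token_ids[MAX_TOKENS:]

# A 110-token prompt, as in the warning above (dummy ids).
tokens = list(range(110))
kept, dropped = truncate_prompt_tokens(tokens)
print(len(kept), len(dropped))  # 77 33
```

The dropped 33 tokens are exactly the "truncated" prompt tail the warning prints; generation still runs, it just ignores that tail.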

smthemex commented 2 months ago

I think your earlier problem is probably a path issue. To use Flux, the repo field must contain the absolute path to the Flux diffusers directory (see the README for details), and the path should end with the official standard name as shown in the README, e.g. XXX/FLUX.1-dev. As for running out of VRAM: if you have less than 24 GB, it's best to add cpu to the easy function field. My implementation adds a quantized T5 on top of the official 16 GB setup, so it saves some VRAM, and it's best to run the insightface model on the CPU. Overall, Flux PuLID is not a comfortable method right now; quantizing down to bf4 might help a little, but at fp8 it always needs that much VRAM, 11-plus GB. As for the SDXL error, that shouldn't happen; if you see the "CLIP can only handle 77 tokens" message, you can safely ignore it.
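Since the traceback shows the OOM happening inside vae.decode, the most directly relevant mitigations are the standard diffusers memory savers (enable_vae_slicing, enable_vae_tiling, enable_model_cpu_offload). Whether this node exposes the pipeline object is an assumption; the helper below is a best-effort sketch that applies whichever of those methods the pipeline actually has:

```python
def apply_low_vram_settings(pipe, cpu_offload=True):
    """Best-effort application of diffusers memory savers to a pipeline.

    VAE slicing/tiling decodes one image (or one tile) at a time, which
    targets the vae.decode step where the OOM above occurs; CPU offload
    keeps idle sub-models out of VRAM between steps, at a speed cost.
    Returns the list of settings that were actually applied.
    """
    applied = []
    for name in ("enable_vae_slicing", "enable_vae_tiling"):
        fn = getattr(pipe, name, None)  # not every pipeline has every method
        if callable(fn):
            fn()
            applied.append(name)
    if cpu_offload:
        fn = getattr(pipe, "enable_model_cpu_offload", None)
        if callable(fn):
            fn()
            applied.append("enable_model_cpu_offload")
    return applied
```

Using getattr rather than calling the methods directly keeps the helper safe across pipeline classes that only implement a subset of these options.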