taabata / LCM_Inpaint_Outpaint_Comfy

ComfyUI custom nodes for inpainting/outpainting using the new latent consistency model (LCM)

Error occurred when executing LCMGenerate: cannot fit 'int' into an index-sized integer #26

Open · crazyzfxu opened this issue 5 months ago

crazyzfxu commented 5 months ago

Error occurred when executing LCMGenerate:

cannot fit 'int' into an index-sized integer

File "D:\AI_comfyui\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\AI_comfyui\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\AI_comfyui\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list results.append(getattr(obj, func)(*slice_dict(input_data_all, i))) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\AI_comfyui\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\LCM_Inpaint-Outpaint_Comfy\LCM_Nodes.py", line 846, in mainfunc images = pipe(prompt=prompt, num_images_per_prompt=1, num_inference_steps=steps, guidance_scale=cfg, lcm_origin_steps=50,width=width,height=height,strength = 1.0, image=image, mask_image=mask).images ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\AI_comfyui\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context return func(args, kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "D:\AI_comfyui\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\LCM_Inpaint-Outpaint_Comfy\LCM\lcm_pipeline_inpaint.py", line 373, in call prompt_embeds = self._encode_prompt( ^^^^^^^^^^^^^^^^^^^^ File "D:\AI_comfyui\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\LCM_Inpaint-Outpaint_Comfy\LCM\lcm_pipeline_inpaint.py", line 109, in _encode_prompt text_inputs = self.tokenizer( ^^^^^^^^^^^^^^^ File "D:\AI_comfyui\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\tokenization_utils_base.py", line 2803, in call encodings = self._call_one(text=text, text_pair=text_pair, all_kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\AI_comfyui\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\tokenization_utils_base.py", line 2909, in _call_one return self.encode_plus( ^^^^^^^^^^^^^^^^^ File "D:\AI_comfyui\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\tokenization_utils_base.py", line 2982, in encode_plus return self._encode_plus( ^^^^^^^^^^^^^^^^^^ File "D:\AI_comfyui\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\tokenization_utils.py", line 722, in _encode_plus return self.prepare_for_model( ^^^^^^^^^^^^^^^^^^^^^^^ File "D:\AI_comfyui\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\tokenization_utils_base.py", line 3461, in prepare_for_model encoded_inputs = self.pad( ^^^^^^^^^ File "D:\AI_comfyui\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\tokenization_utils_base.py", line 3266, in pad encoded_inputs = 
self._pad( ^^^^^^^^^^ File "D:\AI_comfyui\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\tokenization_utils_base.py", line 3656, in _pad encoded_inputs["attention_mask"] = encoded_inputs["attention_mask"] + [0] * difference
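For context, the exception is raised inside transformers' padding step, where the attention mask is extended with `[0] * difference`. A plausible trigger (an assumption about this setup, not confirmed from the repo) is that the tokenizer's `model_max_length` falls back to the library's very large placeholder value, so `difference` exceeds the maximum length a Python list can have. The sketch below only reproduces the exception message in isolation; the variable names and the placeholder-sized value are illustrative.

```python
# Minimal sketch reproducing the OverflowError message in isolation.
# Assumption: `difference` ends up placeholder-sized (roughly 1e30, far beyond
# sys.maxsize, the largest allowed length for a Python list on 64-bit CPython).
import sys

attention_mask = [1, 1, 1]                 # a short encoded prompt
difference = 10**30 - len(attention_mask)  # hypothetical placeholder-sized pad length

print(difference > sys.maxsize)            # True on 64-bit CPython

try:
    # Same expression as in tokenization_utils_base._pad in the traceback above.
    attention_mask = attention_mask + [0] * difference
except OverflowError as exc:
    print(exc)                             # cannot fit 'int' into an index-sized integer
```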