ltdrdata / ComfyUI-Workflow-Component

This is a side project to experiment with using workflows as components.
GNU General Public License v3.0

RuntimeError: a leaf Variable that requires grad is being used in an in-place operation. #42

Open Glzchat-liang opened 6 days ago

Glzchat-liang commented 6 days ago

```
model_type EPS
Using xformers attention in VAE
Using xformers attention in VAE
Error handling request
Traceback (most recent call last):
  File "D:\ComfyUI\python_embeded\Lib\site-packages\aiohttp\web_protocol.py", line 452, in _handle_request
    resp = await request_handler(request)
  File "D:\ComfyUI\python_embeded\Lib\site-packages\aiohttp\web_app.py", line 543, in _handle
    resp = await handler(request)
  File "D:\ComfyUI\python_embeded\Lib\site-packages\aiohttp\web_middlewares.py", line 114, in impl
    return await handler(request)
  File "D:\ComfyUI\ComfyUI\server.py", line 42, in cache_control
    response: web.Response = await handler(request)
  File "D:\ComfyUI\ComfyUI\server.py", line 54, in cors_middleware
    response = await handler(request)
  File "D:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Workflow-Component\image_refiner\custom_server.py", line 69, in imagerefiner_generate
    result = ir.generate(base_pil.convert('RGB'), mask_pil, prompt_data)
  File "D:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Workflow-Component\image_refiner\imagerefiner.py", line 174, in generate
    input_data_all = prepare_input(class_def, merged_pil, mask_pil, prompt_data)
  File "D:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Workflow-Component\image_refiner\imagerefiner.py", line 94, in prepare_input
    model, clip, vae = load_checkpoint(v['checkpoint'])
  File "D:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Workflow-Component\image_refiner\imagerefiner.py", line 38, in load_checkpoint
    model, clip, vae = comfy_nodes.CheckpointLoaderSimple().load_checkpoint(ckpt_name)
  File "D:\ComfyUI\ComfyUI\nodes.py", line 516, in load_checkpoint
    out = comfy.sd.load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, embedding_directory=folder_paths.get_folder_paths("embeddings"))
  File "D:\ComfyUI\ComfyUI\comfy\sd.py", line 539, in load_checkpoint_guess_config
    clip = CLIP(clip_target, embedding_directory=embedding_directory)
  File "D:\ComfyUI\ComfyUI\comfy\sd.py", line 105, in __init__
    self.cond_stage_model = clip(**(params))
  File "D:\ComfyUI\ComfyUI\comfy\sd1_clip.py", line 513, in __init__
    setattr(self, self.clip, clip_model(device=device, dtype=dtype, **kwargs))
  File "D:\ComfyUI\ComfyUI\comfy\sd1_clip.py", line 83, in __init__
    self.transformer = model_class(config, dtype, device, comfy.ops.manual_cast)
  File "D:\ComfyUI\ComfyUI\comfy\clip_model.py", line 124, in __init__
    self.text_projection.weight.copy_(torch.eye(embed_dim))
RuntimeError: a leaf Variable that requires grad is being used in an in-place operation.
```
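For context, the failing line `self.text_projection.weight.copy_(torch.eye(embed_dim))` hits a general PyTorch rule: an in-place operation such as `copy_()` is not allowed on a leaf tensor that has `requires_grad=True`, because autograd cannot record the mutation. The sketch below is a minimal standalone reproduction of the error and the usual workaround (wrapping the initialization in `torch.no_grad()`); it is not the actual ComfyUI code or its official fix, just an illustration of the failure mode.

```python
import torch

embed_dim = 4

# A leaf tensor with requires_grad=True, like an nn.Parameter before training.
weight = torch.empty(embed_dim, embed_dim, requires_grad=True)

# This reproduces the error from the traceback: in-place copy_ on a
# grad-requiring leaf raises RuntimeError.
try:
    weight.copy_(torch.eye(embed_dim))
except RuntimeError as e:
    print(e)  # "a leaf Variable that requires grad is being used in an in-place operation."

# The common workaround: perform the initialization with autograd disabled.
with torch.no_grad():
    weight.copy_(torch.eye(embed_dim))
```

Since the crash is inside `comfy/clip_model.py` rather than this extension, updating ComfyUI itself (where this initialization lives) is likely the real remedy; the snippet only explains why the exception fires.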

XylitolJ commented 5 days ago

Yes, I have the same issue when clicking Regenerate.

```
model_type EPS
Using xformers attention in VAE
Using xformers attention in VAE
Error handling request
Traceback (most recent call last):
  File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\aiohttp\web_protocol.py", line 452, in _handle_request
    resp = await request_handler(request)
  File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\aiohttp\web_app.py", line 543, in _handle
    resp = await handler(request)
  File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\aiohttp\web_middlewares.py", line 114, in impl
    return await handler(request)
  File "D:\ComfyUI_windows_portable\ComfyUI\server.py", line 42, in cache_control
    response: web.Response = await handler(request)
  File "D:\ComfyUI_windows_portable\ComfyUI\server.py", line 54, in cors_middleware
    response = await handler(request)
  File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Workflow-Component\image_refiner\custom_server.py", line 69, in imagerefiner_generate
    result = ir.generate(base_pil.convert('RGB'), mask_pil, prompt_data)
  File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Workflow-Component\image_refiner\imagerefiner.py", line 174, in generate
    input_data_all = prepare_input(class_def, merged_pil, mask_pil, prompt_data)
  File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Workflow-Component\image_refiner\imagerefiner.py", line 94, in prepare_input
    model, clip, vae = load_checkpoint(v['checkpoint'])
  File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Workflow-Component\image_refiner\imagerefiner.py", line 38, in load_checkpoint
    model, clip, vae = comfy_nodes.CheckpointLoaderSimple().load_checkpoint(ckpt_name)
  File "D:\ComfyUI_windows_portable\ComfyUI\nodes.py", line 516, in load_checkpoint
    out = comfy.sd.load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, embedding_directory=folder_paths.get_folder_paths("embeddings"))
  File "D:\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 539, in load_checkpoint_guess_config
    clip = CLIP(clip_target, embedding_directory=embedding_directory)
  File "D:\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 105, in __init__
    self.cond_stage_model = clip(**(params))
  File "D:\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 513, in __init__
    setattr(self, self.clip, clip_model(device=device, dtype=dtype, **kwargs))
  File "D:\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 83, in __init__
    self.transformer = model_class(config, dtype, device, comfy.ops.manual_cast)
  File "D:\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 124, in __init__
    self.text_projection.weight.copy_(torch.eye(embed_dim))
RuntimeError: a leaf Variable that requires grad is being used in an in-place operation.
```


wilmoot commented 2 days ago

Same for me.