Open jkrauss82 opened 3 months ago
API: prompt_worker in main.py has a bug as well
bg_change_workflow_api.json
@guill @comfyanonymous @mcmonkey4eva The API exception resulting in a server lockup happened again with the following JSON workflow:

```json
{
  "4": { "inputs": { "ckpt_name": "juggernautXL_juggernautX.safetensors" }, "class_type": "CheckpointLoaderSimple", "_meta": { "title": "Load Checkpoint" } },
  "6": { "inputs": { "text": "fine details, official art, very detailed 8k wallpaper,", "clip": [ "277", 1 ] }, "class_type": "CLIPTextEncode", "_meta": { "title": "CLIP Text Encode (Prompt)" } },
  "7": { "inputs": { "text": "asymmetry,lowres, low quality, worst quality, (text:1.2), watermark, painting, drawing, illustration, glitch, deformed, mutated, cross-eyed, ugly,(nsfw:1.2),bare breast,text, watermark", "clip": [ "4", 1 ] }, "class_type": "CLIPTextEncode", "_meta": { "title": "CLIP Text Encode (Prompt)" } },
  "48": { "inputs": { "image": "man.jpg", "upload": "image" }, "class_type": "LoadImage", "_meta": { "title": "Load Image" } },
  "107": { "inputs": { "model": [ "277", 0 ], "patch": [ "108", 0 ], "latent": [ "253", 0 ] }, "class_type": "INPAINT_ApplyFooocusInpaint", "_meta": { "title": "Apply Fooocus Inpaint" } },
  "108": { "inputs": { "head": "fooocus_inpaint_head.pth", "patch": "inpaint_v26.fooocus.patch" }, "class_type": "INPAINT_LoadFooocusInpaint", "_meta": { "title": "Load Fooocus Inpaint" } },
  "130": { "inputs": { "seed": 681194540782487, "steps": 30, "cfg": 8, "sampler_name": "euler_ancestral", "scheduler": "karras", "denoise": 1, "model": [ "107", 0 ], "positive": [ "264", 0 ], "negative": [ "264", 1 ], "latent_image": [ "264", 2 ] }, "class_type": "KSampler", "_meta": { "title": "KSampler" } },
  "131": { "inputs": { "samples": [ "130", 0 ], "vae": [ "4", 2 ] }, "class_type": "VAEDecode", "_meta": { "title": "VAE Decode" } },
  "253": { "inputs": { "grow_mask_by": 1, "pixels": [ "48", 0 ], "vae": [ "4", 2 ], "mask": [ "275", 0 ] }, "class_type": "VAEEncodeForInpaint", "_meta": { "title": "VAE Encode (for Inpainting)" } },
  "264": { "inputs": { "positive": [ "6", 0 ], "negative": [ "7", 0 ], "vae": [ "4", 2 ], "pixels": [ "48", 0 ], "mask": [ "275", 0 ] }, "class_type": "InpaintModelConditioning", "_meta": { "title": "InpaintModelConditioning" } },
  "272": { "inputs": { "filename_prefix": "ComfyUI", "images": [ "131", 0 ] }, "class_type": "SaveImage", "_meta": { "title": "Save Image" } },
  "273": { "inputs": { "rmbgmodel": [ "274", 0 ], "image": [ "48", 0 ] }, "class_type": "BRIA_RMBG_Zho", "_meta": { "title": "BRIA RMBG" } },
  "274": { "inputs": {}, "class_type": "BRIA_RMBG_ModelLoader_Zho", "_meta": { "title": "BRIA_RMBG Model Loader" } },
  "275": { "inputs": { "mask": [ "273", 1 ] }, "class_type": "InvertMask", "_meta": { "title": "InvertMask" } },
  "277": { "inputs": { "lora_name": "XL_add_details.safetensors", "strength_model": 1, "strength_clip": 1, "model": [ "4", 0 ], "clip": [ "4", 1 ] }, "class_type": "LoraLoader", "_meta": { "title": "Load LoRA" } }
}
```

It ran fine via the default UI, but when switching to the API it failed just like in my previous bug report.
When I print `input_keys = sorted(inputs.keys())` inside `def get_ordered_ancestry_internal(self, dynprompt, node_id, ancestors, order_mapping):`, each run shows a different sequence of input-node values:

```
{'ckpt_name': 'juggernautXL_juggernautX.safetensors'} <class 'dict'>
{'text': 'fine details, official art, very detailed 8k wallpaper,Blue background wall,', 'clip': ['277', 1]} <class 'dict'>
{'lora_name': 'XL_add_details.safetensors', 'strength_model': 1.0, 'strength_clip': 1.0, 'model': ['4', 0], 'clip': ['4', 1]} <class 'dict'>
{'ckpt_name': 'juggernautXL_juggernautX.safetensors'} <class 'dict'>
{'text': 'asymmetry,lowres, low quality, worst quality, (text:1.2), watermark, painting, drawing, illustration, glitch, deformed, mutated, cross-eyed, ugly,(nsfw:1.2),bare breast,text, watermark', 'clip': ['4', 1]} <class 'dict'>
{'ckpt_name': 'juggernautXL_juggernautX.safetensors'} <class 'dict'>
{'image': '1727524924_35031.png', 'upload': 'image'} <class 'dict'>
{'model': ['277', 0], 'patch': ['108', 0], 'latent': ['253', 0]} <class 'dict'>
{'grow_mask_by': 1, 'pixels': ['48', 0], 'vae': ['4', 2], 'mask': ['275', 0]} <class 'dict'>
{'mask': ['273', 1]} <class 'dict'>
{'rmbgmodel': ['274', 0], 'image': ['48', 0]} <class 'dict'>
{'image': '1727524924_35031.png', 'upload': 'image'} <class 'dict'>
[] <class 'list'>
{'ckpt_name': 'juggernautXL_juggernautX.safetensors'} <class 'dict'>
{'lora_name': 'XL_add_details.safetensors', 'strength_model': 1.0, 'strength_clip': 1.0, 'model': ['4', 0], 'clip': ['4', 1]} <class 'dict'>
{'head': 'fooocus_inpaint_head.pth', 'patch': 'inpaint_v26.fooocus.patch'} <class 'dict'>
```
As you can see, one of the values is an empty list; not all of them are dicts.
As a workaround for now, I just added an `if isinstance(inputs, dict)` check in both `def get_immediate_node_signature(self, dynprompt, node_id, ancestor_order_mapping):` and `def get_ordered_ancestry(self, dynprompt, node_id):`.
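A minimal sketch of that guard; the helper name below is illustrative (not the actual ComfyUI source), but it shows how the `isinstance` check skips the empty-list case from the debug output instead of raising `AttributeError` on `.keys()`:

```python
# Illustrative sketch of the workaround: guard the key iteration so a
# non-dict value (such as the empty list seen in the debug output above)
# is skipped rather than crashing on .keys(). The real patch adds this
# check inside the two methods named in the comment above.
def ordered_input_keys(inputs):
    """Return sorted input keys for dict inputs, or an empty list otherwise."""
    if not isinstance(inputs, dict):
        return []
    return sorted(inputs.keys())

print(ordered_input_keys([]))                                # no crash: []
print(ordered_input_keys({"clip": ["4", 1], "text": "hi"}))  # ['clip', 'text']
```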
@frankchieng I'm not able to reproduce this issue on master. The workflow you posted works fine for me, both within the default UI and when submitted as raw API JSON via curl. If you have a workflow that works in the default UI but not when saved in API format, the most likely possibility is that a node relies on `extra_pnginfo`, though I don't see anything like that here.
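For reference, a minimal sketch of submitting an API-format workflow as described above, assuming ComfyUI's default local address `127.0.0.1:8188` and its `/prompt` route (which returns a 400 when validation fails):

```python
# Minimal sketch of queueing an API-format workflow against a local ComfyUI
# server. Host/port are the ComfyUI defaults; the /prompt route expects a
# JSON body of the form {"prompt": <workflow dict>}.
import json
import urllib.request

def build_prompt_payload(workflow: dict) -> bytes:
    """Serialize a workflow into the request body for the /prompt route."""
    return json.dumps({"prompt": workflow}).encode("utf-8")

def queue_prompt(workflow: dict, host: str = "127.0.0.1:8188") -> dict:
    req = urllib.request.Request(
        f"http://{host}/prompt",
        data=build_prompt_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:  # raises HTTPError on a 400
        return json.loads(resp.read())

# Usage, with a workflow saved from the UI via "Save (API Format)":
# with open("bg_change_workflow_api.json") as f:
#     queue_prompt(json.load(f))
```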
Expected Behavior
Prompt validation should fail and return a 400 to the client as usual.
Actual Behavior
After this stack trace appears, the UI stops responding. The prompt worker is still running and receiving requests, but nothing else works. See the debug logs for the stack trace.
Steps to Reproduce
See the Python code below and the attached workflow for a minimal example that reproduces the error. Deleting the one checkpoint loader node from the JSON and then submitting it leads to the observed state.
workflow_api_min_example.json
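The reproduction steps above can be sketched as follows, assuming the attached file name and a default local server. Node id "4" is the checkpoint loader in the workflow posted earlier in this thread and may differ in `workflow_api_min_example.json`:

```python
# Sketch of the reproduction: load the attached API-format workflow, remove
# the checkpoint loader node, and submit the now-invalid graph. Validation
# should reject the dangling links with a 400; per the report, the server
# locks up instead. Host/port are the ComfyUI defaults.
import json
import urllib.error
import urllib.request

def remove_node(workflow: dict, node_id: str) -> dict:
    """Return a copy of the workflow with one node deleted, leaving any
    links that pointed at it dangling (the invalid state being tested)."""
    broken = dict(workflow)
    broken.pop(node_id, None)
    return broken

def submit(workflow: dict, host: str = "127.0.0.1:8188") -> None:
    body = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(
        f"http://{host}/prompt",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    try:
        urllib.request.urlopen(req)
    except urllib.error.HTTPError as e:
        print("validation rejected the prompt:", e.code)  # the expected outcome

# with open("workflow_api_min_example.json") as f:
#     submit(remove_node(json.load(f), "4"))
```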
Debug Logs
Other
No response