Yeah, it isn't outputting the pooled outputs. Needs to be updated. Felt pooled outputs could have been an optional part of the main dict.
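(For context: ComfyUI's CONDITIONING is a list of [tensor, dict] pairs, and SDXL sampling looks for the pooled CLIP output inside that dict. A minimal sketch of the expected layout, with made-up shapes, not code from either project:)

```python
import torch

# Illustrative only -- shapes are placeholders.
cond = torch.zeros(1, 77, 2048)    # per-token text embeddings
pooled = torch.zeros(1, 1280)      # pooled CLIP output needed by SDXL

# The second element of each entry is a dict of extras, which is where
# "pooled_output" is expected to live.
conditioning = [[cond, {"pooled_output": pooled}]]
```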
I am planning on showing a workflow to a colleague in 2 weeks where I need that functionality. I can use it with SD1.5 for demo purposes, but it would be amazing to update that to SDXL. (I have tested the workflow by emulating the node manually, and it works much better with SDXL.)
Is it feasible to fix that bug within the next two weeks?
I really appreciate the work you are doing, thanks.
It should be fixed as of a couple of days ago.
Thanks for your quick response. I updated Comfy and WAS-Suite and I now get a different error.
I get error 1 when I connect it to a KSampler, and error 2 when I connect it to the input of a ControlNet. I don't know if those are different issues or symptoms of the same problem, so I am posting both.
Both errors occur not only with SDXL but with SD1.5 as well; I don't know if that helps to zero in on the problem or with replicating the issue.
Error occurred when executing KSampler:
Tensor.__contains__ only supports Tensor or scalar, but you passed in a <class 'str'>.
File "E:\ComfyUI\execution.py", line 151, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all) File "E:\ComfyUI\execution.py", line 81, in get_output_data return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True) File "E:\ComfyUI\execution.py", line 74, in map_node_over_list results.append(getattr(obj, func)(**slice_dict(input_data_all, i))) File "E:\ComfyUI\nodes.py", line 1211, in sample return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise) File "E:\ComfyUI\nodes.py", line 1181, in common_ksampler samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, File "E:\ComfyUI\comfy\sample.py", line 82, in sample models, inference_memory = get_additional_models(positive, negative, model.model_dtype()) File "E:\ComfyUI\comfy\sample.py", line 56, in get_additional_models control_nets = set(get_models_from_cond(positive, "control") + get_models_from_cond(negative, "control")) File "E:\ComfyUI\comfy\sample.py", line 50, in get_models_from_cond if model_type in c[1]: File "C:\Python310\lib\site-packages\torch_tensor.py", line 999, in contains raise RuntimeError(
Error occurred when executing ControlNetApply:
'Tensor' object has no attribute 'copy'
File "E:\ComfyUI\execution.py", line 151, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all) File "E:\ComfyUI\execution.py", line 81, in get_output_data return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True) File "E:\ComfyUI\execution.py", line 74, in map_node_over_list results.append(getattr(obj, func)(**slice_dict(input_data_all, i))) File "E:\ComfyUI\nodes.py", line 617, in apply_controlnet n = [t[0], t[1].copy()]
Doesn't sound like my node, but something you are doing to the conditioning. I do not use those functions; I just run ComfyUI's encode function on text.
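(For reference, the stock CLIPTextEncode path in ComfyUI is roughly the following, and a text-to-conditioning node that mirrors it carries the pooled output through. Paraphrased sketch, not a verbatim copy of either project's source:)

```python
class TextToConditioningSketch:
    """Paraphrase of ComfyUI's CLIPTextEncode-style encode path."""

    def encode(self, clip, text):
        tokens = clip.tokenize(text)
        cond, pooled = clip.encode_from_tokens(tokens, return_pooled=True)
        # Returning the pooled output inside the extras dict is what
        # SDXL sampling later reads back out.
        return ([[cond, {"pooled_output": pooled}]],)
```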
I have a minimal example that I will attach as json.
If you connect the standard prompt encode to the KSampler as positive, it works just fine. Connecting the Text to Conditioning node throws the error. There is something wrong with the interaction. Is this bug reproducible for you?
The workflow is pretty vanilla, I don't see what I could do wrong here to mess it up.
{ "last_node_id": 9, "last_link_id": 14, "nodes": [ { "id": 3, "type": "CLIPTextEncode", "pos": [ 936, 677 ], "size": { "0": 400, "1": 200 }, "flags": {}, "order": 4, "mode": 0, "inputs": [ { "name": "clip", "type": "CLIP", "link": 2 } ], "outputs": [ { "name": "CONDITIONING", "type": "CONDITIONING", "links": [ 5 ], "shape": 3, "slot_index": 0 } ], "properties": { "Node name for S&R": "CLIPTextEncode" }, "widgets_values": [ "" ] }, { "id": 5, "type": "EmptyLatentImage", "pos": [ 362, 469 ], "size": { "0": 315, "1": 106 }, "flags": {}, "order": 0, "mode": 0, "outputs": [ { "name": "LATENT", "type": "LATENT", "links": [ 8 ], "shape": 3, "slot_index": 0 } ], "properties": { "Node name for S&R": "EmptyLatentImage" }, "widgets_values": [ 1024, 1024, 1 ] }, { "id": 4, "type": "KSampler", "pos": [ 1556, 354 ], "size": { "0": 315, "1": 262 }, "flags": {}, "order": 6, "mode": 0, "inputs": [ { "name": "model", "type": "MODEL", "link": 7 }, { "name": "positive", "type": "CONDITIONING", "link": 14 }, { "name": "negative", "type": "CONDITIONING", "link": 5 }, { "name": "latent_image", "type": "LATENT", "link": 8 } ], "outputs": [ { "name": "LATENT", "type": "LATENT", "links": [ 9 ], "shape": 3, "slot_index": 0 } ], "properties": { "Node name for S&R": "KSampler" }, "widgets_values": [ 524458705144789, "randomize", 20, 8, "euler", "normal", 1 ] }, { "id": 7, "type": "PreviewImage", "pos": [ 2282.0010047265614, 382.33196044860824 ], "size": [ 210, 246 ], "flags": {}, "order": 8, "mode": 0, "inputs": [ { "name": "images", "type": "IMAGE", "link": 11 } ], "properties": { "Node name for S&R": "PreviewImage" } }, { "id": 6, "type": "VAEDecode", "pos": [ 1985, 367 ], "size": { "0": 210, "1": 46 }, "flags": {}, "order": 7, "mode": 0, "inputs": [ { "name": "samples", "type": "LATENT", "link": 9 }, { "name": "vae", "type": "VAE", "link": 10, "slot_index": 1 } ], "outputs": [ { "name": "IMAGE", "type": "IMAGE", "links": [ 11 ], "shape": 3, "slot_index": 0 } ], "properties": { "Node name for S&R": "VAEDecode" } }, { "id": 1, "type": "CheckpointLoaderSimple", "pos": [ 365.4941047265622, 288.92235587417593 ], "size": { "0": 315, "1": 98 }, "flags": {}, "order": 1, "mode": 0, "outputs": [ { "name": "MODEL", "type": "MODEL", "links": [ 7 ], "shape": 3, "slot_index": 0 }, { "name": "CLIP", "type": "CLIP", "links": [ 1, 2, 13 ], "shape": 3, "slot_index": 1 }, { "name": "VAE", "type": "VAE", "links": [ 10 ], "shape": 3, "slot_index": 2 } ], "properties": { "Node name for S&R": "CheckpointLoaderSimple" }, "widgets_values": [ "sd_xl_base_1.0.safetensors" ] }, { "id": 9, "type": "Text Multiline", "pos": [ 652, 30 ], "size": { "0": 400, "1": 200 }, "flags": {}, "order": 2, "mode": 0, "outputs": [ { "name": "STRING", "type": "STRING", "links": [ 12 ], "shape": 3, "slot_index": 0 } ], "properties": { "Node name for S&R": "Text Multiline" }, "widgets_values": [ "A bear" ] }, { "id": 8, "type": "Text to Conditioning", "pos": [ 1118, 251 ], "size": { "0": 216.59999084472656, "1": 46 }, "flags": {}, "order": 5, "mode": 0, "inputs": [ { "name": "clip", "type": "CLIP", "link": 13 }, { "name": "text", "type": "STRING", "link": 12 } ], "outputs": [ { "name": "CONDITIONING", "type": "CONDITIONING", "links": [ 14 ], "shape": 3, "slot_index": 0 } ], "properties": { "Node name for S&R": "Text to Conditioning" } }, { "id": 2, "type": "CLIPTextEncode", "pos": [ 917, 377 ], "size": { "0": 400, "1": 200 }, "flags": {}, "order": 3, "mode": 0, "inputs": [ { "name": "clip", "type": "CLIP", "link": 1 } ], "outputs": [ { "name": "CONDITIONING", 
"type": "CONDITIONING", "links": [], "shape": 3, "slot_index": 0 } ], "properties": { "Node name for S&R": "CLIPTextEncode" }, "widgets_values": [ "A bear" ] } ], "links": [ [ 1, 1, 1, 2, 0, "CLIP" ], [ 2, 1, 1, 3, 0, "CLIP" ], [ 5, 3, 0, 4, 2, "CONDITIONING" ], [ 7, 1, 0, 4, 0, "MODEL" ], [ 8, 5, 0, 4, 3, "LATENT" ], [ 9, 4, 0, 6, 0, "LATENT" ], [ 10, 1, 2, 6, 1, "VAE" ], [ 11, 6, 0, 7, 0, "IMAGE" ], [ 12, 9, 0, 8, 1, "STRING" ], [ 13, 1, 1, 8, 0, "CLIP" ], [ 14, 8, 0, 4, 1, "CONDITIONING" ] ], "groups": [], "config": {}, "extra": {}, "version": 0.4 }
Lemmie try it in a bit here. Thanks for sharing the workflow.
Alright should be patched this time.
Thanks a lot, it works perfectly now.
When replacing a standard CLIP Text Encode with a Text to Conditioning node, I get the error below. The exact same workflow has no problems if I just replace the SDXL model with an SD1.5 model. It seems there is something going on under the hood that causes this error when using SDXL.
Error occurred when executing KSampler:
'pooled_output'
File "E:\ComfyUI\execution.py", line 151, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all) File "E:\ComfyUI\execution.py", line 81, in get_output_data return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True) File "E:\ComfyUI\execution.py", line 74, in map_node_over_list results.append(getattr(obj, func)(slice_dict(input_data_all, i))) File "E:\ComfyUI\nodes.py", line 1206, in sample return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise) File "E:\ComfyUI\nodes.py", line 1176, in common_ksampler samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, File "E:\ComfyUI\comfy\sample.py", line 95, in sample samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed) File "E:\ComfyUI\comfy\samplers.py", line 643, in sample positive = encode_adm(self.model, positive, noise.shape[0], noise.shape[3], noise.shape[2], self.device, "positive") File "E:\ComfyUI\comfy\samplers.py", line 529, in encode_adm adm_out = model.encode_adm(device=device, params) File "E:\ComfyUI\comfy\model_base.py", line 191, in encode_adm clip_pooled = sdxl_pooled(kwargs, self.noise_augmentor) File "E:\ComfyUI\comfy\model_base.py", line 155, in sdxl_pooled return args["pooled_output"]