killporter opened 1 year ago
Yes it's this change: https://github.com/comfyanonymous/ComfyUI/commit/1c012d69afa8bd92a007a3e468e2a1f874365d39
The ComfyUI_Fooocus_KSampler node needs to be updated.
I am also getting this error since updating. I don't use ComfyUI_Fooocus_KSampler; I'm getting this error from the stock default workflow:
That's because the node patches stuff in ComfyUI even if you don't use it in your workflow. You can disable it by appending `.disabled` to the folder name in the custom_nodes folder.
Hi, I'm getting the same error today. I tried renaming ComfyUI_Fooocus_KSampler to ComfyUI_Fooocus_KSampler.disabled but it did nothing. I'm totally out of my depth here and have no idea what I'm doing, so any help would be greatly appreciated. Would just deleting the folder work?
Fwiw, this is clearly something that needs to be fixed in other repos. What I did:
Whenever I want to update ComfyUI
It works so far, but unless the other repos get fixed I guess I'll stop using their nodes.
Thank you everyone for responding, but I literally don't understand how to implement your solutions. I'm trying my best to learn but I'm struggling. Do you type this
into the black cmd box that pops up when you first run ComfyUI?
@BertErny The issue with ComfyUI_Fooocus_KSampler came from this https://github.com/comfyanonymous/ComfyUI/commit/1c012d69afa8bd92a007a3e468e2a1f874365d39 commit from ComfyUI. The right solution is to fix the issue in ComfyUI_Fooocus_KSampler, but as a temporary workaround @harrr1 is proposing to roll back this particular commit using git: extracting what changed and applying it to ComfyUI.
I did some tests, and you actually don't need to roll back the whole commit; only change this line:
context = c_crossattn
to
context = torch.cat(c_crossattn, 1)
in comfy/model_base.py (it should be line 56).
Btw, I tried to fix it in ComfyUI_Fooocus_KSampler, but I don't know Python well enough to understand how to do it (there's some magic happening there). I don't understand why c_crossattn is not set to None (which should also not work), as I don't see this variable initialized from the ComfyUI_Fooocus_KSampler calc_cond_uncond_batch function.
@BertErny "Do you type this into the black cmd box that pops up when you first run ComfyUI?" You have to run these commands in a terminal/PowerShell before launching ComfyUI.
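To make the cause of the `'list' object has no attribute 'to'` error concrete, here is a minimal, self-contained sketch. `FakeTensor` and `cat` are hypothetical stand-ins for `torch.Tensor` and `torch.cat`; none of this is actual ComfyUI code:

```python
# Hypothetical stand-ins: FakeTensor mimics torch.Tensor, cat() mimics torch.cat.
# Before the commit, apply_model received c_crossattn as a *list* of tensors and
# concatenated it itself; after the commit it expects a single tensor, so patched
# samplers (like the Fooocus one) that still pass a list crash.

class FakeTensor:
    def __init__(self, data):
        self.data = data

    def to(self, dtype):  # analogue of torch.Tensor.to(dtype)
        return self

def cat(tensors, dim):  # analogue of torch.cat: list of tensors -> one tensor
    return FakeTensor([t.data for t in tensors])

def apply_model_new(c_crossattn):
    context = c_crossattn           # new ComfyUI code: expects a tensor
    return context.to("float16")    # a list here raises AttributeError

def apply_model_compat(c_crossattn):
    # harrr1's backward-compatible guard: accept both conventions
    if type(c_crossattn) is list:
        context = cat(c_crossattn, 1)
    else:
        context = c_crossattn
    return context.to("float16")

old_style = [FakeTensor(1), FakeTensor(2)]  # what the old-style patch still passes
new_style = FakeTensor(3)

assert isinstance(apply_model_compat(old_style), FakeTensor)
assert isinstance(apply_model_compat(new_style), FakeTensor)

try:
    apply_model_new(old_style)
except AttributeError as err:
    print(err)  # 'list' object has no attribute 'to'
```

This is why the error surfaces even in workflows that never use the Fooocus node: the custom node replaces the sampling function globally at import time, and the replacement still uses the old list convention.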
@VisualFox thanks for providing more info, I don't really understand the sampler code either but created a potential fix from the ComfyUI side. I'd totally understand if you wouldn't want to do it like this though.
@harrr1 I actually do like your solution and it's non-intrusive. Thank you!
Closing as the root issue here is in the fooocus_ksampler addon and should be reported/fixed there.
@mcmonkey4eva Um, sure, it's up to you, but there is a trivial fix on this side that makes the ComfyUI change backwards compatible.
Here's the current patch:
diff --git a/comfy/model_base.py b/comfy/model_base.py
index ca154db..88dee17 100644
--- a/comfy/model_base.py
+++ b/comfy/model_base.py
@@ -53,7 +53,10 @@ def apply_model(self, x, t, c_concat=None, c_crossattn=None, c_adm=None, control
xc = torch.cat([x] + [c_concat], dim=1)
else:
xc = x
- context = c_crossattn
+ if type(c_crossattn) is list:
+ context = torch.cat(c_crossattn, 1)
+ else:
+ context = c_crossattn
dtype = self.get_dtype()
xc = xc.to(dtype)
t = t.to(dtype)
diff --git a/comfy/samplers.py b/comfy/samplers.py
index 3250b2e..f3c7784 100644
--- a/comfy/samplers.py
+++ b/comfy/samplers.py
@@ -390,6 +390,9 @@ def get_mask_aabb(masks):
return bounding_boxes, is_empty
+def resolve_cond_masks(conditions, h, w, device):
+ return resolve_areas_and_cond_masks(conditions, h, w, device)
+
def resolve_areas_and_cond_masks(conditions, h, w, device):
# We need to decide on an area outside the sampling loop in order to properly generate opposite areas of equal sizes.
# While we're doing this, we can also resolve the mask device and scaling for performance reasons
Dunno, changing args and outputs or renaming public methods (by convention, private ones are prefixed with _ in Python) does seem to make it hard to maintain custom nodes. I couldn't find anything in the docs about API stability, and their last release is like half a year old.
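For what it's worth, the `resolve_cond_masks` alias in the patch above is an instance of a common deprecation-shim pattern. A generic sketch (the function bodies are placeholders for illustration, not the real ComfyUI implementations), assuming you also want old callers to be warned:

```python
import warnings

def resolve_areas_and_cond_masks(conditions, h, w, device):
    # new public name; placeholder body for illustration only
    return ("resolved", len(conditions))

def resolve_cond_masks(conditions, h, w, device):
    # old public name kept as a thin shim so existing custom nodes keep working,
    # while nudging their authors toward the new name
    warnings.warn(
        "resolve_cond_masks is deprecated; use resolve_areas_and_cond_masks",
        DeprecationWarning,
        stacklevel=2,
    )
    return resolve_areas_and_cond_masks(conditions, h, w, device)

# old-style callers still work unchanged
assert resolve_cond_masks([1, 2], 64, 64, "cpu") == ("resolved", 2)
```

The shim costs one extra function call and can be removed after a deprecation period, which is a lot cheaper for downstream node authors than a hard rename.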
Thanks for all the help. In the end I ran an update again once it would let me (previously I was up to date), and now it runs fine. I have no idea why.
"only change this line:
context = c_crossattn
to
context = torch.cat(c_crossattn, 1)
In comfy/model_base.py (it should be line 56)"
Hmm, this didn't fix it for me. My main ComfyUI install has been broken for 3 days now; I really don't want to have to re-install all 30+ custom nodes I have.
Think I will have to try harrr1's patch fix instead.
That's a fair argument; at least temporary cross-compat for the older format seems reasonable to add. I'll reopen this and leave the final decision up to @comfyanonymous.
Fwiw, I've opened a pull request against the Fooocus sampler that fixes all current issues. The maintainer doesn't seem super responsive, so feel free to use my fork for now: https://github.com/harrr1/ComfyUI_Fooocus_KSampler. No guarantee I'll keep maintaining it, though.
I tried multiple workflows, but every time I try to generate an image I get this: 'list' object has no attribute 'to'.
With the Fooocus or Searge sampler, it's the same:
"Error occurred when executing KSampler With Refiner (Fooocus):
'list' object has no attribute 'to'
File "/content/drive/MyDrive/ComfyUI/execution.py", line 151, in recursive_execute
  output_data, output_ui = get_output_data(obj, input_data_all)
File "/content/drive/MyDrive/ComfyUI/execution.py", line 81, in get_output_data
  return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/content/drive/MyDrive/ComfyUI/execution.py", line 74, in map_node_over_list
  results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI_Fooocus_KSampler/sampler/nodes.py", line 49, in sample
  return (core.ksampler_with_refiner(model, positive, negative, refiner_model, refiner_positive, refiner_negative, latent_image, noise_seed, steps, refiner_switch_step, cfg, sampler_name, scheduler, denoise=denoise, disable_noise=disable_noise, start_step=start_at_step, last_step=end_at_step, force_full_denoise=force_full_denoise), )
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
  return func(*args, **kwargs)
File "/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI_Fooocus_KSampler/sampler/Fooocus/core.py", line 243, in ksampler_with_refiner
  samples = sampler.sample(noise, positive_copy, negative_copy, refiner_positive=refiner_positive_copy,
File "/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI_Fooocus_KSampler/sampler/Fooocus/samplers_advanced.py", line 236, in sample
  samples = getattr(k_diffusion_sampling, "sample_{}".format(self.sampler))(self.model_k, noise, sigmas,
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
  return func(*args, **kwargs)
File "/content/drive/MyDrive/ComfyUI/comfy/k_diffusion/sampling.py", line 701, in sample_dpmpp_2m_sde_gpu
  return sample_dpmpp_2m_sde(model, x, sigmas, extra_args=extra_args, callback=callback, disable=disable, eta=eta, s_noise=s_noise, noise_sampler=noise_sampler, solver_type=solver_type)
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
  return func(*args, **kwargs)
File "/content/drive/MyDrive/ComfyUI/comfy/k_diffusion/sampling.py", line 613, in sample_dpmpp_2m_sde
  denoised = model(x, sigmas[i] * s_in, **extra_args)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
  return forward_call(*args, **kwargs)
File "/content/drive/MyDrive/ComfyUI/comfy/samplers.py", line 323, in forward
  out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, cond_concat=cond_concat, model_options=model_options, seed=seed)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
  return forward_call(*args, **kwargs)
File "/content/drive/MyDrive/ComfyUI/comfy/k_diffusion/external.py", line 125, in forward
  eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "/content/drive/MyDrive/ComfyUI/comfy/k_diffusion/external.py", line 151, in get_eps
  return self.inner_model.apply_model(*args, **kwargs)
File "/content/drive/MyDrive/ComfyUI/comfy/samplers.py", line 311, in apply_model
  out = sampling_function(self.inner_model.apply_model, x, timestep, uncond, cond, cond_scale, cond_concat, model_options=model_options, seed=seed)
File "/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI_Fooocus_KSampler/sampler/Fooocus/patch.py", line 296, in sampling_function_patched
  cond, uncond = calc_cond_uncond_batch(model_function, cond, uncond, x, timestep, max_total_area, cond_concat,
File "/content/drive/MyDrive/ComfyUI/custom_nodes/ComfyUI_Fooocus_KSampler/sampler/Fooocus/patch.py", line 266, in calc_cond_uncond_batch
  output = model_function(input_x, timestep, **c).chunk(batch_chunks)
File "/content/drive/MyDrive/ComfyUI/comfy/model_base.py", line 60, in apply_model"
I'm running the latest update, from 8 minutes ago.